Aim: Identify technical weaknesses in systems on your network and prioritize them based on the importance of affected systems and a cost-benefit analysis of the solutions to determine which vulnerabilities should be addressed first.
Prerequisite: You will need a policy authorizing you to scan hosts on the network, a machine to run scanning software (inside and outside your network), and a strategy of what you are looking for and how to handle the results.
Return on Investment: Scanning for vulnerabilities can be performed at low cost using free tools and off-the-shelf hardware. The results of these scans are invaluable, often enabling you to catch security weaknesses before they become a costly problem.
Motivation: Computers on the Internet are already being probed for vulnerabilities regularly by individuals and self-propagating programs. The goal of self-assessment is to find and fix vulnerabilities before an intruder or worm exploits them.
Depending on the size and structure of the institution, the approach to vulnerability scanning might differ. Small institutions that have a good understanding of IT resources throughout the enterprise might centralize vulnerability scanning. Larger institutions are more likely to have some degree of decentralization, so vulnerability scanning might be the responsibility of individual units. Some institutions might have a blend of both centralized and decentralized vulnerability assessment. Regardless, before starting a vulnerability scanning program, it is important to have authority to conduct the scans and to understand the targets that will be scanned.
Because probing a network for vulnerabilities can disrupt systems and expose private data, higher education institutions need a policy in place and buy-in from the top before performing vulnerability assessments. Many colleges and universities address this issue in their acceptable use policies, making consent to vulnerability scanning a condition of connecting to the network. Additionally, it is important to clarify that the main purpose of seeking vulnerabilities is to defend against outside attackers. A public health metaphor may help people understand the need for scanning: we are looking for symptoms of illness.
There is also a need to clarify how a university network is integrated with the Internet at large. Some schools assign publicly routable IP addresses throughout campus and have no edge firewall or network address translation (NAT). Such open environments effectively have no inside-versus-outside distinction.
There is also a need for policies and ethical guidelines for those who have access to data from vulnerability scans. These individuals need to understand the appropriate action when illegal materials are found on their systems during a vulnerability scan. The appropriate action will vary between institutions (for example, public regulations in Georgia versus public regulations in California). Some organizations may want to write specifics into policy, whereas others leave policy more open to interpretation and address specific issues through procedures such as consulting legal counsel.
Those with responsibility for scanning should maintain awareness about current threats and vulnerabilities. Many alert and advisory resources are available. Vendors often notify customers about vulnerabilities through email distribution lists or via the web. Many vendor resources are available for threats and vulnerabilities.
It is important to know and understand the resources that will be targeted for vulnerability assessments. It is helpful to have an inventory or some other documentation of all IT resources. Nmap is a helpful open source tool that can be used to update and maintain documentation about IT resources. Nmap will identify hosts and the services they provide.
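To make the idea of host and service discovery concrete, the following is a minimal sketch of the kind of TCP connect probing an inventory scan performs. It is illustrative only; for a real inventory, use Nmap itself, which is far faster and detects services rather than just open ports.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection.

    A minimal sketch of the host/service discovery an inventory scan
    performs; a production inventory should use Nmap instead.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Feeding the discovered hosts and ports into the inventory documentation, along with the responsible contacts, keeps the target list for later scans current.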
Documentation about potential scan targets should include appropriate contacts including those who should be notified about scans and those who will be responsible for resolving vulnerabilities.
In practice, several different approaches to vulnerability assessment are used in higher education institutions, depending on the situation.
After installing a new system or making changes to an existing system, the person responsible usually wants to know if there are any obvious exposures. To address this need, Microsoft developed the Baseline Security Analyzer to help individuals find missing patches and common insecurities in the operating system and applications. However, this tool covers only Microsoft operating systems and applications, giving a limited view of the system. Therefore, some universities assist their users by scanning computers on request using an external vulnerability scanning tool. As the number of such requests increases, some higher education institutions develop self-service vulnerability scanning services. Stanford has developed its own Security Self-Test for Macintosh and Windows-based computers that performs basic checks on the system, including applications and settings that are unique to their network.
Nessus enables self-service scanning by issuing each user an account or key and limiting scans to that user's specific hosts, but this approach is not user-friendly. To facilitate self-service vulnerability scanning using Nessus, CERIAS developed the Web-based Vulnerability Scanning Cluster that is in operation at Purdue.
The Internet Scanner from ISS is also popular in higher education institutions. Indiana University created a tool called Scanager that enables individuals to request self-service ISS Internet Scanner reports of their computers.
Case Study: Self-Service/Automated Security Vulnerability Assessment Program (Scanager) - Indiana University
Virginia Tech has developed SafetyNet (SN) to do remote security vulnerability scanning of computing resources. SN is unlike other vulnerability scanning systems (such as Purdue's VSC or Indiana's ITSO tools). SN is not a NetReg or quarantine service. It is an extensible framework for building a suite of scanning tools into a standard web-based interface that maintains authentication, authorization, IP and DNS information, scan history, and remediation documentation in a secure, stable, and scalable environment.
In an effort to find major vulnerabilities before an intruder exploits them, many higher education institutions routinely scan their entire network for a few specific vulnerabilities. For instance, routinely scanning for the SANS Top 20 list of vulnerabilities, which includes blank Windows Administrator passwords, is an effective way to discover dangerously insecure systems. Also, when a new vulnerability is publicized, it is common for colleges and universities to scan all computers connected to their network for just this exposure.
Whenever feasible, a limited list of targets should be generated prior to performing tactical scans. Creating a short list of target systems enables you to gather results quickly, reduces the amount of resulting data, and limits the potential negative impact of the scanning process. For instance, when a new vulnerability in mod_ssl is announced, a tool like Nmap can be used to generate a list of Web servers running mod_ssl. This list may contain sufficient information to determine which systems are vulnerable, or the hosts on the target list can be checked with a vulnerability scanning tool. Notably, this strategy is also used by many computer intruders to find vulnerable hosts on a network.
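As a sketch of how the short list might be built from service-detection output (for example, an `nmap -sV` run), the snippet below filters parsed results for banners mentioning the affected software. The hosts, ports, and banners shown are hypothetical.

```python
# Hypothetical parsed results from a service-detection scan:
# (host, port, service banner). All values below are illustrative only.
scan_results = [
    ("10.0.1.5",  443, "Apache httpd 1.3.27 ((Unix) mod_ssl/2.8.12 OpenSSL/0.9.6)"),
    ("10.0.1.9",  443, "nginx 1.18.0"),
    ("10.0.2.14",  80, "Apache httpd 2.0.44 ((Unix) mod_ssl/2.8.14)"),
]

def find_targets(results, marker):
    """Short-list unique hosts whose service banner mentions `marker`."""
    return sorted({host for host, _port, banner in results if marker in banner})

targets = find_targets(scan_results, "mod_ssl")
print(targets)  # → ['10.0.1.5', '10.0.2.14']
```

The resulting short list can then be handed to a vulnerability scanner, keeping the intrusive checks confined to hosts that actually run the affected software.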
Another tactical approach to vulnerability scanning is to use a tool like Nessus to compare current scan results with a past baseline and only view deviations from the baseline.
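The baseline-comparison idea can be sketched with a few lines of Python. The example assumes scan results have been parsed into a mapping of host to open-port set; the hosts and ports are hypothetical.

```python
def deviations(baseline, current):
    """Report ports that differ from a stored baseline.

    `baseline` and `current` map host -> set of open ports (e.g. parsed
    from two scan runs). Returns, per changed host, the ports that were
    newly opened and newly closed since the baseline.
    """
    report = {}
    for host in set(baseline) | set(current):
        before = baseline.get(host, set())
        after = current.get(host, set())
        if before != after:
            report[host] = {"opened": after - before, "closed": before - after}
    return report

baseline = {"10.0.1.5": {22, 80}}
current  = {"10.0.1.5": {22, 80, 3306}}   # a database port has appeared
print(deviations(baseline, current))
```

Reviewing only the deviations keeps the workload proportional to what changed rather than to the size of the network.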
When responding to a security incident such as a network worm or an intruder gaining unauthorized access to multiple computers, it is necessary to scour the network for compromised systems. One approach is to scan the network for systems with the vulnerability that is being exploited. However, some computer intruders and worms fix the vulnerability they exploit, undermining this type of vulnerability scanning. In such situations, it may still be possible to scan for external signs of intrusion such as a known intruder backdoor, rogue FTP servers, or denial-of-service attack drones.
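Scanning for external signs of intrusion often amounts to sweeping the network and flagging hosts listening on ports associated with known backdoors. The sketch below assumes scan output parsed into host-to-port-set form; the port numbers are illustrative of historical backdoors and should be maintained from current threat advisories.

```python
# Illustrative only: ports associated with historical backdoors (e.g. the
# Back Orifice and SubSeven era). Keep this set current from advisories.
SUSPECT_PORTS = {12345, 27374, 31337}

def flag_suspect_hosts(scan, suspect_ports=SUSPECT_PORTS):
    """Given host -> open-port sets from a full network sweep, flag hosts
    listening on ports commonly used by known backdoors."""
    return {host: ports & suspect_ports
            for host, ports in scan.items() if ports & suspect_ports}

scan = {"10.0.3.7": {80, 31337}, "10.0.3.8": {22}}
print(flag_suspect_hosts(scan))  # → {'10.0.3.7': {31337}}
```

Flagged hosts are candidates for closer inspection, not confirmed compromises; a legitimate service can occupy any port.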
Incident response scanning can be combined with network monitoring to identify the majority of compromised systems in a large-scale incident. Even then, it can take years to completely eradicate an intruder or worm that gained access to a large number of systems.
One of the most effective approaches to performing vulnerability assessments is to look for hosts that violate security policies. Rather than constantly scanning your network for vulnerabilities and trying to persuade individuals to fix each individual problem, checking for policy violations raises security to a standard level (resulting in fewer vulnerabilities) and can be scheduled and managed more easily (requiring fewer resources). Additionally, progress can be measured more easily, such as by counting the number of hosts in violation of policy rather than the number of hosts with arbitrary issues/vulnerabilities. This approach to scanning for policy violations does not appear to be very common in higher education institutions, probably because few colleges and universities have security policies to enforce.
Allowing free third-party tools to remotely measure and monitor your infrastructure gives you an external view of your environment and computing resources. There are many free (or low-cost) tools that help identify issues with your infrastructure, including network, DNS, e-mail, and websites, and some can be set up to provide these types of measurements on an ongoing basis.
Several useful tools are described here with their strengths, weaknesses, unique/useful features, and case examples.
- Commercial Nessus (ProfessionalFeed is a paid subscription)
- OpenVAS (continuation of the last free Nessus version)
- Nikto Web server scanner
- w3af Web server scanner
- Linux boot disks (Bootable without HD installation)
It is advisable to use multiple scanning tools in parallel to ensure complete coverage and check the results of one tool with another. For instance, by scanning the same systems using two scanning tools, it may become apparent that one tool does not reliably detect a particular vulnerability while the other does. Comparisons between various scanners are available at:
- Top 125 Security Tools (filter the list to just vulnerability scanners)
- Network Computing (2001)
- Network World Fusion (2002)
- The NSS Group
College and university information security officers rarely have the time and resources required to perform an in-depth, focused vulnerability assessment on even a small number of systems. As a result, they often simply deliver a vulnerability scan to the system administrators, provide a list of generic security recommendations, perform a cursory inspection of the host configuration, and give a few specific suggestions to address major vulnerabilities. This ad hoc approach may overlook significant vulnerabilities and is not conducive to tracking improvements.
A more effective approach is to first provide system administrators with a set of requirements. Then, the vulnerability assessment can look for deviations from the requirements and help system administrators with implementation as needed. This approach has the added benefit of making security implementation easier. This approach depends on buy-in from upper management and support from allies. Information security officers rarely have the authority to require system administrators to make changes (even on critical systems) unless there is a policy that is supported by upper management.
Outsourcing is common practice in industry and some higher education institutions hire outside consultants to perform a vulnerability assessment of their networks in the hope that upper management will take commercial evaluation more seriously. Even if internal vulnerability assessments have generated the same results in the past but were ignored by upper management, the perceived objectivity, external validation, or professional presentation provided by an outside consultant may motivate upper management to pay more attention to the problems. However, most colleges and universities cannot afford this expense, and it is likely that the external evaluators will find too many vulnerabilities to deal with. Furthermore, outsourcing vulnerability assessment does not build capacity within the institution to perform future assessments. Therefore, employees will be deprived of an opportunity to develop new skills, and future vulnerability assessments will be a recurring cost. Having said this, some higher education institutions have made effective use of commercial vulnerability assessment services.
Case Study: Lessons Learned from RIT's First Security Posture Assessment - Rochester Institute of Technology (RIT)
Because vulnerability assessments can be very resource intensive, some universities are banding together to improve security on their campuses. By pooling resources, spreading the cost among several institutions, and employing outside consultants to provide initial training and direction, colleges and universities with limited budgets have developed highly skilled, well-equipped vulnerability assessment teams.
Case Study: Collaborative Information Security Project - Vulnerability Assessments - California State University, San Bernardino
This type of collaboration is highly effective not just for vulnerability assessment but also for other aspects of information security, including risk assessment and development of standards, policies, and procedures.
The purpose of metrics in this area is to measure the number of vulnerabilities that have been identified, the number that have been fixed, time to resolution, and other data useful for measuring progress, identifying trends, and identifying problem areas such as particular departments with a high number and/or frequency of vulnerabilities. For instance:
- Number of critical Windows IIS/SQL vulnerabilities per month
- Number of critical UNIX RPC vulnerabilities per month
- Average time to fix vulnerabilities
- Total number of hours spent resolving vulnerabilities
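Metrics such as these can be computed from a simple log of when each vulnerability was found and fixed. The sketch below uses hypothetical records; a real program would pull these from the scanning tool's database or a ticketing system.

```python
from datetime import date

# Hypothetical vulnerability records: (department, date found, date fixed).
# `fixed` is None while the vulnerability is still open.
records = [
    ("engineering", date(2004, 3, 1), date(2004, 3, 8)),
    ("library",     date(2004, 3, 2), date(2004, 3, 23)),
    ("engineering", date(2004, 3, 5), None),
]

# Average time to fix, computed over closed vulnerabilities only.
closed = [(fixed - found).days for _dept, found, fixed in records
          if fixed is not None]
avg_days_to_fix = sum(closed) / len(closed)

# Count of vulnerabilities still awaiting resolution.
open_count = sum(1 for _dept, _found, fixed in records if fixed is None)

print(f"average days to fix: {avg_days_to_fix:.1f}")  # (7 + 21) / 2 = 14.0
print(f"still open: {open_count}")
```

Grouping the same records by department or by platform yields the trend and problem-area metrics described above.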
If the average time to fix vulnerabilities is three weeks, this may be deemed unsafe because exploits and worms are being developed within weeks of vulnerabilities being publicized. If the majority of vulnerabilities are being found in Windows systems running IIS or MSSQL, these systems may be given more attention. These metrics are very effective at making upper management aware of the value of investing in security and the risks of not doing so. They can help demonstrate the need for additional funding and can be used to justify large-scale solutions such as changes to security architecture (for example, blocking NetBIOS at the Internet, VPN, and PPP borders).
Except where otherwise noted, this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.