Availability

Availability assures that a system's authorized users have timely and uninterrupted access to the information in the system and to the network.

Other important terms

Also important to network security are the following four C-I-A–related terms:

- Identification—The act of a user professing an identity to the system, such as a logon ID
- Authentication—Verification that the user's claimed identity is valid, such as through the use of a password
- Accountability—Determination of the actions and behavior of a single individual within a system, and holding the individual responsible for his or her actions
- Authorization—The privileges allocated to an individual (or process) that enable access to a computer resource

Defense-in-Depth

Defense-in-Depth is a layered protection scheme for critical information system components. The Defense-in-Depth strategy comprises the following areas:

- Defending the network and infrastructure
- Defending the enclave boundary
- Defending the computing environment
- Supporting infrastructures
The term enclave as used in the Defense-in-Depth protection strategy refers to a “collection of computing environments connected by one or more internal networks under the control of a single authority and security policy, including personnel and physical security. Enclaves always assume the highest mission assurance category and security classification of the automated information system (AIS) applications or outsourced IT-based processes they support, and derive their security needs from those systems.
They provide standard information assurance (IA) capabilities such as boundary defense, incident detection and response, and key management, and also deliver common applications such as office automation and electronic mail. Enclaves are analogous to general support systems as defined in OMB A-130. Enclaves may be specific to an organization or a mission, and the computing environments may be organized by physical proximity or by function independent of location. Examples of enclaves include local area networks (LANs) and the applications they host, backbone networks, and data processing centers.” (DoD Directive 8500.1, “Information Assurance (IA),” October 24, 2002).
The enclaves in the U.S. federal and defense computing environments can be categorized as public, private, or classified. The Defense-in-Depth strategy is built on three critical elements: people, technology, and operations.

People

To implement effective information assurance in an organization, management must have a high-level commitment to the process. This commitment is manifested through the following items and activities:

- Development of information assurance policies and procedures
- Assignment of roles and responsibilities
- Training of critical personnel
- Enforcement of personal accountability
- Commitment of resources
- Establishment of physical security controls
- Establishment of personnel security controls
- Penalties associated with unauthorized behavior

Technology

An organization has to ensure that the proper technologies are acquired and deployed to implement the required information protection services. These objectives are accomplished through the following processes and policies for the acquisition of technology:

- A security policy
- System-level information assurance architectures
- System-level information assurance standards
- Information assurance principles
- Specification criteria for the required information assurance products
- Acquisition of reliable, third-party, validated products
- Configuration recommendations
- Risk assessment processes for the integrated systems

Operations

Operations emphasize the activities and items necessary to maintain an organization's effective security posture on a day-to-day basis. These activities and items include the following:

- A visible and up-to-date security policy
- Enforcement of the information security policy
- Certification and accreditation
- Information security posture management
- Key management services
- Readiness assessments
- Protection of the infrastructure
- Performing systems security assessments
- Monitoring and reacting to threats
- Attack sensing, warning, and response (ASW&R)
- Recovery and reconstitution

The Defense-in-Depth strategy is defined to defend against the following types of attacks, as described in IATF document 3.1:

- Passive—Passive attacks include traffic analysis, monitoring of unprotected communications, decrypting weakly encrypted traffic, and capture of authentication information (such as passwords).
Passive intercept of network operations can give adversaries indications and warnings of impending actions.
Passive attacks can result in disclosure of information or data files to an attacker without the consent or knowledge of the user. Examples include the disclosure of personal information such as credit card numbers and medical files.

- Active—Active attacks include attempts to circumvent or break protection features, introduce malicious code, or steal or modify information. These attacks may be mounted against a network backbone, exploit information in transit, electronically penetrate an enclave, or attack an authorized remote user during an attempt to connect to an enclave. Active attacks can result in the disclosure or dissemination of data files, denial of service, or modification of data.

- Close-in—Close-in attacks consist of individuals attaining physical proximity to networks, systems, or facilities for the purpose of modifying, gathering, or denying access to information. Close physical proximity is achieved through surreptitious entry, open access, or both.

- Insider—Insider attacks can be malicious or nonmalicious. Malicious insiders intentionally eavesdrop, steal, or damage information; use information in a fraudulent manner; or deny access to other authorized users. Nonmalicious attacks typically result from carelessness, lack of knowledge, or intentional circumvention of security for such reasons as “getting the job done.”

- Distribution—Distribution attacks focus on the malicious modification of hardware or software at the factory or during distribution. These attacks can introduce malicious code into a product, such as a back door to gain unauthorized access to information or a system function at a later date.

To resist these types of attacks, Defense-in-Depth applies the following techniques:
- Defense in multiple places—Deployment of information protection mechanisms at multiple locations to protect against internal and external threats.

- Layered defenses—Deployment of multiple information protection and detection mechanisms so that an adversary or threat will have to negotiate multiple barriers to gain access to critical information.

- Security robustness—Based on the value of the information system component to be protected and the anticipated threats, estimation of the robustness of each information assurance component. Robustness is measured in terms of assurance and strength of the information assurance component.

- Deploy KMI/PKI—Deployment of robust key management infrastructures (KMI) and public key infrastructures (PKI).
- Deploy intrusion detection systems—Deployment of intrusion detection mechanisms to detect intrusions, evaluate information, examine results, and, if necessary, take action.

Implementing the Defense-in-Depth approach can be resource intensive. To assist in the cost-effective implementation of Defense-in-Depth, IATF document 3.1 provides the following guidelines:

- Make information assurance decisions based on risk analysis and keyed to the organization's operational objectives.
- Draw from all three facets of Defense-in-Depth—people, operations, and technology. Technical mitigations are of no value without trained people to use them and operational procedures to guide their application.
- Establish a comprehensive program of education, training, practical experience, and awareness. Professionalization and certification licensing provide a validated and recognized expert cadre of system administrators.
- Exploit available commercial off-the-shelf (COTS) products and rely on in-house development for those items not otherwise available.
- Periodically assess the IA posture of the information infrastructure. Technology tools, such as automated scanners for networks, can assist in vulnerability assessments.
- Take into account not only the actions of those with hostile intent but also inadvertent or careless actions.
- Employ multiple means of threat mitigation, overlapping protection approaches to counter anticipated events so that loss or failure of a single barrier does not compromise the overall information infrastructure.
- Ensure that only trustworthy personnel have physical access to the system.
Methods of providing such assurance include appropriate background investigations, security clearances, credentials, and badges.

- Use established procedures to report incident information provided by intrusion detection mechanisms to authorities and specialized analysis and response centers.

Systems engineering processes

A number of paradigms are applicable to implementing systems engineering; some useful approaches are listed here:

- IEEE STD 1220-1998 processes:
  • Requirements Analysis
  • Requirements Verification
  • Functional Analysis
  • Functional Verification
  • Synthesis
  • Design Verification
- DoD 5000.2-R processes:
  • Requirements Analysis
  • Functional Analysis/Allocation
  • Synthesis

A commonly used set of processes in the U.S. government is described in IATF document 3.1, and this set is the basis for deriving information system security engineering (ISSE) processes. These “generic” SE processes are as follows:

- Discover needs
- Define system requirements
- Design system architecture
- Develop detailed design
- Implement system
- Assess effectiveness

These processes emphasize the application of SE over the entire development life cycle.

The Information Systems Security Engineering process
The ISSE processes are based on the generic SE processes, as shown in the following pairings:

- Discover information protection needs—Discover needs
- Define system security requirements—Define system requirements
- Design system security architecture—Design system architecture
- Develop detailed security design—Develop detailed design
- Implement system security—Implement system
- Assess information protection effectiveness—Assess effectiveness

The six ISSE processes comprise the activities (as described in IATF document 3.1) discussed in the following sections.

Discover information protection needs
The objectives of this process are to understand and document the customer’s needs and to develop solutions that will meet these needs. The information systems security engineer should use any reliable sources of information to learn about the customer’s mission and business operations, including areas such as human resources, finance, command and control, engineering, logistics, and research and development. This knowledge can be used to generate a concept of operations (CONOPS) document or a mission needs statement (MNS).
The Committee on National Security Systems (CNSS) Instruction No. 4009, “National Information Assurance (IA) Glossary,” defines a CONOPS as “a document detailing the method, act, process, or effect of using an information system (IS).”
Then, with this information in hand, an information management model (IMM) should be developed that ultimately defines a number of information domains. Information management includes the following:

- Creating information
- Acquiring information
- Processing information
- Storing and retrieving information
- Transferring information
- Deleting information

The information management model should take into account information domains that comprise the following items:

- The information being processed
- Processes being used
- Information generators
- Information consumers
- User roles
- Information management policy requirements
- Regulations
- Agreements or contracts

The principle of least privilege should be used in developing the model by permitting users to access only the information required for them to accomplish their assigned tasks. Table 1-1 provides an example of an IMM.

Table 1-1 [pic] Information Management Model

A similar example of the output domains of the IMM is given in Table 1-2.

Table 1-2 [pic] IMM Information Domain Example
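The least-privilege rule described above can be sketched as a simple filter over an IMM's information domains. The domain and role names below are hypothetical, invented for illustration:

```python
# Hypothetical IMM fragment: each information domain lists the user
# roles that need access to it to accomplish their assigned tasks.
INFORMATION_DOMAINS = {
    "payroll_records": {"payroll_clerk", "hr_manager"},
    "engineering_designs": {"design_engineer"},
    "public_web_content": {"payroll_clerk", "hr_manager",
                           "design_engineer", "guest"},
}

def accessible_domains(role: str) -> set:
    """Least privilege: a role sees only the domains it requires."""
    return {domain for domain, roles in INFORMATION_DOMAINS.items()
            if role in roles}
```

Under this model a `design_engineer` is granted `engineering_designs` and `public_web_content` but never `payroll_records`; access not explicitly required for the role simply does not appear.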
The information systems security engineer must document all elements of the Discover Information Protection Needs activity of the ISSE process, including the following:

- Roles
- Responsibilities
- Threats
- Strengths
- Security services
- Priorities
- Design constraints

These elements comprise the fundamental concepts of an Information Protection Policy (IPP), which in turn becomes a component of the customer's information management policy (IMP).
The information systems security engineer must also support the certification and accreditation (C&A) of the system.
Certification is the comprehensive evaluation of the technical and nontechnical security features of an information system and the other safeguards, created in support of the accreditation process, to establish the extent to which a particular design and implementation meets a set of specified security requirements. Accreditation is the formal declaration by a Designated Approving Authority (DAA) that an information system is approved to operate in a particular security mode by using a prescribed set of safeguards at an acceptable level of risk.
Recertification and reaccreditation are required when changes occur in the system or its environment, or after a defined period of time after accreditation.

Design system security architecture

In this stage, the information systems security engineer performs a functional decomposition of the requirements that can be used to select the components required to implement the designated functions. Tools and techniques such as timeline analysis, flow block diagrams, and a requirements allocation sheet are used to accomplish the decomposition.
The result of the functional decomposition is the functional architecture of the information security system. In the decomposition process, the performance requirements at the higher level are mapped onto the lower-level functions to ensure that the resulting system performs as required. Also, as part of this activity, the information systems security engineer determines, at a functional level, the security services that should be assigned to the system to be protected as well as to external systems.
Such services include encryption, key management, and digital signatures. Because implementations are not specified in this activity, a complete risk analysis is not possible. General risk analysis, however, can be done by estimating the vulnerabilities in the classes of components that are likely to be used.

Develop detailed security design

The detailed security design is accomplished through continuous assessments of risks and the comparison of these risks with the information system security requirements.
This design activity involves both the SE and ISSE professionals and specifies the system and components, but does not specify products or vendors.

Note: In conducting this activity, the information systems security engineer performs the following functions:

- Develops specifications such as Common Criteria protection profiles
- Maps security mechanisms to system security design elements
- Catalogs candidate commercial off-the-shelf (COTS) products
- Catalogs candidate government off-the-shelf (GOTS) products
- Catalogs custom security products
- Qualifies external and internal element and system interfaces

The results of this effort should include a revised security CONOPS, identification of failures to meet the security requirements, meeting of the customer's design constraints, and placing of the design documents under configuration control.

Implement system security

This activity bridges the design phase and the operational phase. It includes a system effectiveness assessment that provides evidence that the system meets the requirements and needs of the mission.
Security accreditation usually follows this assessment. The information systems security engineer approaches this task by doing the following:

- Applying information protection assurance mechanisms related to system implementation and testing
- Verifying that the implemented system does address and protect against the threats itemized in the original threat assessment
- Providing input to the C&A process
- Providing input to and reviewing the evolving system life-cycle support plans
- Providing input to and reviewing the operational procedures
- Providing input to and reviewing the maintenance training materials
- Taking part in multidisciplinary examinations of all system issues and concerns

This activity identifies the specific components of the information system security solution. In selecting these components, the information system security engineer must consider the following items:

- Cost
- Form factor
- Reliability
- Availability now and in the future
- Risk to the system caused by substandard performance
- Conformance to design specifications
- Compatibility with existing components
- Meeting or exceeding evaluation criteria (Typical evaluation criteria include the Commercial COMSEC Evaluation Program [CCEP], National Information Assurance Partnership [NIAP], Federal Information Processing Standards [FIPS], NSA criteria, and NIST criteria.)

In some cases, components might have to be built and customized to meet the requirements if no suitable components are available for purchase or lease. In addition, the systems and design engineers, in cooperation with the information systems security engineer, are involved with the following:

- Developing test procedures to ensure that the designed system performs as required; these procedures should incorporate the following:
  • Test planning, to include facilities, schedule, personnel, tools, and required resources
  • Integration testing
  • Functional testing to ensure that systems and subsystems operate properly
  • Generation of test reports
- Tests of all interfaces, as feasible
- Conducting unit testing of components
- Developing documentation and placing documentation under version control; the documentation should include the following:
  • Installation procedures
  • Operational procedures
  • Support procedures
  • Maintenance procedures
  • Defects discovered in the procedures

Assess information protection effectiveness

This activity, even though listed last, must be conducted as part of all the activities of the complete ISSE and SE processes.
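The unit- and functional-testing steps listed earlier can be sketched with Python's standard unittest module. The component under test here is hypothetical (a toy XOR transform standing in for a real encryption module, chosen only so the example is self-contained):

```python
import unittest

def xor_cipher(data: bytes, key: int) -> bytes:
    """Hypothetical component under test: a toy XOR transform standing
    in for a real encryption module (NOT secure; illustration only)."""
    return bytes(b ^ key for b in data)

class ComponentTests(unittest.TestCase):
    def test_round_trip(self):
        # Functional test: applying the transform twice restores the input.
        msg = b"design verification"
        self.assertEqual(xor_cipher(xor_cipher(msg, 0x5A), 0x5A), msg)

    def test_output_differs_from_input(self):
        # Unit test: the transformed output must differ from the plaintext.
        self.assertNotEqual(xor_cipher(b"abc", 0x5A), b"abc")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ComponentTests)
```

Running the suite (for example with `unittest.TextTestRunner().run(suite)`) produces the kind of test report called for in the list above.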
Table 1-3 summarizes the tasks of the Assess Information Protection activity that correspond to the other activities of the ISSE process. (Information taken from the IATF document, Release 3.1.) [pic] [pic] [pic] [pic]

5 WINDOWS SECURITY

Windows security is an important component of the overall security of a network or enterprise. The Windows workstation holds a critical position in a defense-in-depth strategy. Figure 4-1 illustrates the defense-in-depth strategy. Defense-in-depth is a general methodology to slow down and obstruct an attacker. Defense-in-depth can also reduce the damage that occurs from an attack or other security incident. Should any one security control (defense) fail, defense-in-depth slows an attacker down by ensuring that there are still more obstacles in the way. This approach might give administrators time to discover and react to the threat.
The “onion” shown in Figure 4-1 has the following layers of protection:

- Managing users—The vigilance and security awareness of users can be crucial to all the other security controls being effective.
- Harden hosts—Default features are prime targets for attackers and always make the Top 10 on vulnerability lists.
- Virtual local area network (VLAN) separation—Trust but separate; no one aside from payroll personnel and administrators has a need to be able to reach payroll workstations.
- Server separation—Provide a place of enhanced security for high-value targets.
- Wide area network (WAN) separation—Establish need-to-know or need-to-access criteria between hosts and servers.
- Customer separation—Assume that any users and hosts outside of an organization's control are insecure.
- Internet perimeter—The Internet contains many threats, but the FBI finds that most attacks come from the inside.

[pic] Figure 5.1: Defense-in-Depth Methodology

Defense-in-depth can also serve to discourage an attacker. Attackers will take the path of least resistance. Many attacks are opportunistic. The attacker sees a vulnerability and explores it.
In such cases, the attacker will pursue the attack until resistance is met. If the attacker senses that there will be little resistance (no defense-in-depth), they may be motivated to continue. If, on the other hand, the attacker meets resistance and difficulty, they may abandon the attack and seek easier prey. Defense-in-depth also reduces the number of successful attackers. With defense-in-depth, the attacker must be knowledgeable and able to execute several attacks. This requirement eliminates the threat from the largest group of attackers, script kiddies.
Script kiddies are generally recognized as immature, anarchist hackers who acquire tools developed by knowledgeable hackers. The script kiddies could not develop these tools or even execute the attack manually. However, they are capable of running the tools and causing damage. Most of these tools target a single vulnerability or flaw. Script kiddies are not proficient at stringing tools together, so they are often thwarted by defense-in-depth.

Windows Security at the Heart of the Defense

All attacks require the perpetrator to affect a networked device. In most cases, this device is a host or server.
Securing the Windows operating system should be considered as important as any other security control, such as a firewall. Because most work is typically done on the Windows workstation, it is often configured for ease of use. There is a natural trade-off between ease of use and security: the easier a system or application is to use, the less secure it will be. This trade-off is illustrated in Figure 4-2. If an attacker has made it as far as a user's Windows machine, there is a good chance the attack will be successful.

[pic] Figure 4-2: The trade-off between convenience and security

Who would target me?
If you have chosen to make Windows your dominant platform, you can count yourself among the vast majority of computer users. That in itself does not make you a target for attack. However, the smart business approach that Microsoft has taken may not lead to the best security for you. When you purchase your Windows product, it will most likely do what you want it to do. Overall, the purchase will be a pleasant experience, and this is a great business approach for Microsoft. Another good business move by Microsoft is to make the product very easy to install. Windows requires very little information for the initial installation.
Older versions such as Windows 95 and Windows NT required some technical knowledge for selecting options, but newer versions of Windows require less knowledge about how computers work during installation. But there is a price to be paid for the easy installation of an off-the-shelf product: the product has many features that you probably do not need. Windows features that the user does not use but that do not impact system performance do not, at first, appear to be a problem. In fact, in the vast majority of cases, no thought is given to configuring the operating system after the initial installation.
Most users are obliviously happy, having just experienced an easy installation and seeing that the system does everything they expected. The security problem is that these circumstances (a very large installation base and a feature-rich default installation) play right into the hands of the hackers. The hackers purchase and install the very same Windows as their future victims. They know that if they find a vulnerability, they will have potentially millions of workstations to try to exploit. The hackers also know that most users will settle for the default installation of Windows, leaving most features enabled.
This situation all leads up to Windows being the most targeted operating system for hackers. There is debate in the security community as to which operating systems have the most published vulnerabilities. However, it is agreed that an out-of-the-box installation of Windows will be attacked and compromised in short order, depending, of course, on the network to which it is attached.

Be afraid . . .

It is a dangerous and cruel world out there. Most of us seek out safe havens in our communities, at home, and at work. However, we still lock our cars and our front doors.
Consider your Windows workstation an entry point for the outside world to get into your safe haven. Consider some of the ramifications of having such easy access to the outside world:

- Credit card data could be stolen and used.
- Private data could be stolen and used for identity theft.
- Private e-mail could be read by strangers or by colleagues at work.
- Pornographic and undesirable material, which would never be allowed in the home or workplace, could be put on the workstation.
- Complete strangers could learn information about your family and children. Such information could lure your children into a “trust” situation.
- Viruses and worms may annoy your friends and colleagues, damaging your personal and professional reputation.
- The Windows system could be used to attack and damage others, perhaps leaving you liable.

You would never let a criminal or con artist into your home to look through private papers. The same level of concern and protection should be extended to a Windows workstation. It is common to hear potential victims say, “I have nothing important to lose on that system.” That is a false sense of security. Be paranoid and avoid the risk.
Microsoft recommendations

Microsoft, on its official Web site at www.microsoft.com/security/protect, recommends the following three steps to improving a computer's security:

1. Use an Internet firewall.
2. Get computer updates.
3. Use up-to-date antivirus software.

Microsoft recommends either a hardware or software firewall to “prevent hackers, and many types of viruses and worms, from accessing your computer.” This chapter discusses additional ways to protect against these threats. Microsoft also recommends a daily update using the automatic update feature available in Windows 2000 Service Pack 3 or later.
This chapter will discuss the need to keep all applications updated, as well as some associated steps such as testing and backups. Microsoft suggests getting antivirus software from Computer Associates, McAfee Security, or Symantec, keeping it up-to-date, and configuring it properly. This chapter further expands on these recommendations. These are important steps to maintaining the security of a Windows workstation. This chapter will discuss many more steps that can be taken to secure a Windows workstation. The following recommendations are in support of hardening systems on the network:

- Establish a plan to harden any host that will interact with the outside world, including placing the Windows workstation on a local area network (LAN) or sharing files with other computers. It is most important to develop a procedure that is kept up-to-date. Many points will be learned along the way, and it is important that these be noted and incorporated into the procedures. Each time you harden a system, it will go more quickly and smoothly. This is assumed to take 50 hours.
- Never put an out-of-the-box operating system on a LAN other than a very secure test LAN.
Original equipment manufacturers (OEMs) prepare Windows workstations for a wide audience. An organization needs to have systems stripped down to the minimum needed for the business to be done.

- Never put a Windows workstation that has previously been on the Internet on a trusted LAN. Any host that has been unprotected on the Internet for more than an hour should be considered suspect and corrupted. If placed on a trusted LAN, it will pose a risk to all the other workstations. Any Windows system that has been on the Internet, unprotected, should be completely rebuilt before being put into a trusted environment.
- Turn off unneeded ports on the Windows workstation. Numerous tools, such as nmap, are available to check the open ports by scanning from the network. On the Windows workstation, running netstat -a from a command prompt will list the open ports.
- Turn off unneeded services on the Windows workstation, even if these services do not open ports onto the network.
- Use the Microsoft update site to determine the recommended patches and upgrades to the Windows operating system.
- Install and maintain a good antivirus application.
- Put personal firewalls on the Windows workstations.
This is a good defense-in-depth deterrent for attackers. Free versions are available, such as ZoneAlarm by Zone Labs or Tiny Firewall by Tiny Software.

- Do not run high-visibility services (Web, mail, file sharing, information sharing—LDAP, FTP) on the Windows workstation without a business case. A review of all such services running should be done to determine their need. Any that are not needed should be shut down or disabled.
- Do not use insecure services for which reliable secure versions exist. For example, use SSH instead of Telnet, IMAP instead of POP3, and secure FTP instead of FTP.
- Identify mission-critical applications and maintain the security patches and upgrades for these applications.
- Establish a program to scan the Windows workstation periodically to determine what ports are open and why. Optimally, the workstation should be checked quarterly.
- Use strong passwords. Change the password frequently—every 60 days or sooner is recommended.
- Operate safely. Don't open or launch any application that you are not 100 percent sure about. Don't open e-mails from strangers. Don't open any e-mail attachment you do not expect ahead of time. Remove unneeded data and history files from the workstation. Use encryption.
- Watch for performance issues. If metrics are not in place to notice performance, put them in place.
- Run a host-based intrusion detection system (HIDS) on critical Windows workstations. The HIDS will detect unauthorized activity on the host as well as raise the alarm if certain files are changed. The costly part of running an HIDS is the learning curve. Because each site's administrators will manage their own HIDS systems, this learning curve is repeated several times. The learning curve continues because the HIDS must be monitored and adjusted on the installed servers.
It is expected that the administrators will spend a couple of hours a week working with the HIDS. Out-of-the-Box Operating System Hardening This section examines steps to improve the security of a Windows system, prior to putting the workstation on the network. Prior to system hardening Physically disconnect the workstation from any network. Out-of-the-box installations of Windows are so prominently targeted that a new system can be compromised in a matter of minutes, or even seconds—far too fast for an administrator to harden the system. If reinstalling Windows on an existing workstation, be certain to back up your data.
Back up to external media or to a different hard drive or partition. Most people back up their data files, such as letters and photos. It is also important to capture some other information while the system is still functioning. Consider the following, for example: ? Write down the type of video card and how much memory it has. ? Record the network interface card (NIC) type and any TCP/IP settings. If using wireless, record the service set identifier (SSID) and any encryption keys. ? Check dialup connections for phone numbers. ? Go into the Web browser and save the bookmarked pages. ? Record printer configuration settings. ? Record any types and configurations for other hardware such as sound cards, Web cameras, or scanners. The general process of system hardening No matter which Windows version you are dealing with, the general process for hardening the operating system and workstation is the same. The process is as follows: ? Assess and understand the role of the Windows workstation to be hardened. This entails understanding the users and their responsibilities. It also involves knowing where on the network the workstation will be placed. ? Acquire hardening procedures for other Windows workstations that have a similar role.
If prior procedures are not available, get a listing of applications and settings on a similar workstation (winmsd.exe provides a good starting point).
There are many hardening guides available to assist an administrator in hardening an operating system. These guides offer an administrator a step-by-step procedure for securing the workstation. They also assist network administrators by offering a repeatable process so that steps are not missed. ? Install a clean version of the operating system; then document the changes and burn a ghost image. ? Remove services that are not required; then document the changes and burn another ghost image. ? If at any point the system becomes unusable (which should be expected to happen), drop back to the most recent ghost image and try again. However, this time, do not remove the service that caused the problem. ? Remove any extra applications that may have been loaded with the Windows operating system; then document the changes and burn another ghost image. ? Check and close any ports that are not explicitly required for performing the mission of this workstation. Open ports can be detected by opening a command prompt window and running netstat -an.
Any protocol listed in the results that shows a status of “LISTENING” has an open port. ? Locate and close any shares that are not explicitly required by the role that this workstation will have. Open shares can be listed by opening a command prompt window and running net share. This will list the share name and the resource (folder) being shared. You can disable sharing through the Windows Explorer by clicking on the properties of the folder. ? Install only the needed applications, document the changes, and burn a final ghost image. ? Install a personal firewall on the Windows workstation. Thoroughly test the system. It is important to document every step of the system hardening for both successes and failures. If failures occur, usually in the form of the system crashing or hanging, it is important to know exactly what was done so the procedure can be altered. The most common failure will be a result of a needed service having been disabled or removed. It is also important to document the case when the system hardening goes successfully. A detailed procedure for how to harden a system for a particular organization or user will be very useful when adding future systems to the inventory.
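The open-port check described above lends itself to scripting. The sketch below parses a canned sample of netstat output (the addresses and ports are invented for illustration); on a real workstation, the sample_netstat function would simply be the netstat -an command itself.

```shell
# Hypothetical sample of `netstat -an` output; stands in for running the
# real command on the workstation being checked.
sample_netstat() {
cat <<'EOF'
  Proto  Local Address          Foreign Address        State
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  TCP    0.0.0.0:445            0.0.0.0:0              LISTENING
  TCP    192.168.1.10:1042      203.0.113.5:80         ESTABLISHED
EOF
}

# Keep only listening sockets and strip each local address down to its port.
open_ports=$(sample_netstat | awk '$4 == "LISTENING" { n = split($2, a, ":"); print a[n] }')
echo "$open_ports"    # 135 and 445; the ESTABLISHED connection is ignored
```

Each port printed should be matched against the documented role of the workstation; anything unexplained is a candidate for closing.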
A Web search will produce a number of sets of procedures for hardening various Windows operating systems. Many colleges and universities provide this information, primarily trying to reach their student population, but they make the information available to the public in the process. By now, you should see the need to have frequent ghost images. The relatively short delay taken to burn a ghost image will be more than recouped the first time the system crashes and needs to be rebuilt. The ghosting application makes a compressed snapshot of the partition on the hard drive.
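At the block level, the snapshot a ghosting application takes can be sketched with dd and gzip. A plain file stands in for the partition here so the commands are safe to run anywhere; real tools such as Ghost or partclone add refinements like imaging only the used blocks.

```shell
# A plain file stands in for the partition (would be e.g. /dev/sda1).
PARTITION=./fake_partition.img
IMAGE=./baseline_image.gz

# Create the stand-in "partition", then burn a compressed image of it.
dd if=/dev/zero of="$PARTITION" bs=1024 count=64 2>/dev/null
dd if="$PARTITION" bs=1024 2>/dev/null | gzip -c > "$IMAGE"

# Dropping back to the image is just the reverse operation.
gzip -dc "$IMAGE" > restored.img
cmp -s "$PARTITION" restored.img && echo "restore verified"
```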
This snapshot or image can either be burned to a CD-RW disk or put on another hard drive partition. 6 UNIX and Linux Security UNIX, Linux, and other similar operating systems are gaining in popularity and market share. UNIX is still a dominant player in the server arena. Most of the growth in UNIX popularity has been in the workstation arena. Most of the security issues raised in earlier chapters apply to operating a UNIX workstation safely. However, some of UNIX’s unique aspects are covered in this chapter. The Focus of UNIX/Linux Security
UNIX, Linux, FreeBSD, AIX, and so on (all referred to as UNIX in this chapter) have great potential for both being very secure and being exploited. Some of the same features that make UNIX a good target for security attacks make it powerful enough to be operated safely. UNIX as a target There is an ongoing debate among system administrators as to whether Windows or UNIX is the more vulnerable operating system. This debate often degrades to a mere count of vulnerabilities applicable to one side or the other. In any case, it is useful to start with an examination of why UNIX and Linux might be a target of security attacks. The following lists the four main reasons that UNIX is a target: ? Linux (and much of the other UNIX implementations) is open source. ? UNIX installations are easy to obtain, both in terms of being inexpensive (often free) and readily distributed. ? Most hacking tools are available for UNIX. ? UNIX is a good environment in which to exchange hacks and code. In This Chapter The focus is on the following areas: UNIX/Linux security; Physical security; Controlling the Configuration; Operating UNIX Safely; Hardening UNIX. Open source Open source means products made available along with the source code needed to rebuild or recompile the products. Open source does not mean free of cost or licenses, although it is in many cases. Many people view open source as a major security threat. In fact, this has not turned out to be the case. While it is true that a hacker can get a head start on finding security issues by examining the code, this concern is certainly overrated, because of the extremely long hours that would be required to walk through the thousands of lines of code.
However, once a flaw is identified, the source code can be very useful to the hacker in developing an exploit. Ironically, over time, the ultimate effect of having code open to all may be that the code is better and more secure. Problems tend to be quickly fixed and thoroughly vetted. This is discussed in more detail in the section “Open Source Issues” later in this chapter. Easy-to-obtain operating system That Linux is low cost and freely distributed on the Internet makes it a popular operating system for experimentation. Many public forums exist in which novices can get help and support for their Linux implementation.
Even solutions for complicated and obscure problems can be found with a minimal amount of searching on the Internet. If it is a popular operating system to use, it can be expected to be popular for hacking as well. Network and development tools Another attractive feature of UNIX and Linux for the hacker is the abundance of network tools available. Most networking tools are developed under Linux or FreeBSD first and later ported to other operating systems. Certainly the open source software and the plethora of code examples contribute to the early development of tools on UNIX.
Some examples of the free network tools that support hackers in their quest for vulnerabilities and exploits include the following: ? tcpdump—A low-level traffic capture application that sniffs traffic at the Open Systems Interconnection (OSI) model’s layers 2, 3, and 4. tcpdump comes standard on most UNIX installations and supports a wide variety of layer 2 media. Because tcpdump is so universally available, its output is often used as input into traffic analysis tools. ? Ethereal—A network traffic sniffing application. Ethereal also provides a nice interface to work with traffic captured with other low-level tools such as tcpdump. ? tcpreplay—Allows traffic captured with tcpdump to be put back on the wire. This permits hackers to better analyze traffic and ultimately to debug their own applications. ? nmap—A popular port-scanning tool. It will check the status of ports on a system from the network by attempting to connect to the ports. The method of connection can be varied; likewise, nmap can run more or less aggressively through the hosts and ports. ? Nessus—A vulnerability scanner that calls nmap to discover open ports, then tests the ports for possible vulnerabilities. Nessus has over 500 tests and can detect most older vulnerabilities. ? Perl, sh, and ksh—Scripting languages that, in the hands of the hacker, become powerful tools for automating procedures. In addition to network tools, UNIX systems come with a fully functional development environment. All the compilers and libraries needed to completely rebuild the kernel and operating system are available as open source resources. With these development tools, the hacker can produce everything from kernel-module root kits to sophisticated attack tools of their own. Information exchange UNIX is an attractive platform for the exchange of tools and techniques under development by hackers.
Hackers are able to exchange source code and then readily recompile the applications. The hacker community has a lot of expertise in UNIX and this expertise is shared in the form of code and advice. UNIX/Linux as a poor target UNIX has some characteristics that make it a less attractive target for security attacks. Some of these characteristics are as follows: ? There are many versions and builds. ? Users are generally more expert. ? Scripts are not as easily run (compared with Outlook).
? File ownership limits malware spread. Many versions and builds
While code and hacks are easily exchanged, specific exploits may not work on a majority of UNIX platforms. For example, a kernel root kit initially developed for Linux 2.4.20-8 on Red Hat may have to be tested and adapted to be useful on other systems, such as Debian. This requires a level of discipline and configuration management that is not normally a trait of the typical troublemaking hacker. As a result, there may be many exploits developed for UNIX, but few of them are universally dangerous. Expert users UNIX has not made great inroads as a popular desktop workstation for the masses.
It is still primarily used on servers, embedded systems, and software development platforms. All of these uses tend to make the average UNIX user more knowledgeable about the operating system and security. Therefore, if the technical expertise of the average user is greater, attacks against their platforms will, on the whole, be less frequent and harder to accomplish. Attackers, like most forces, will seek the path of least resistance. Attacking a workstation that is managed by a non-technical person will certainly be easier than attacking one managed by an expert. Scripts not as easily run
There are many scripting techniques in UNIX. They range from Perl to Bourne shell. However, unlike Windows, the scripting is not tightly integrated into common applications (such as Outlook and Word).
In UNIX, scripts can be integrated into applications such as mail and word processing, but it is not the default configuration. This makes UNIX much less vulnerable than a Windows system that is running Outlook and allows users to commonly run powerful Visual Basic scripts. File ownership It is not uncommon for malware to take advantage of commonly run executables to propagate an attack.
In these cases, the malware writes itself to a file that is later executed by the unaware user. This kind of attack is made possible because, to perform normal computing functions, restricted users are permitted to run executables that have root or administrator-level access to system resources. This is true for UNIX as well. Where UNIX has an advantage is that file ownership is separate from file execution permission. Although a user may be able to run a critical application, they usually do not own the application and therefore would not normally be able to write to or alter the executable.
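This separation between being able to run a program and being able to modify it shows up directly in the permission bits. The sketch below creates a stand-in executable with the common 755 mode (only the owner may write; everyone else may only read and execute); the file name is invented for illustration.

```shell
# Create a stand-in shared executable.
cat > shared_tool.sh <<'EOF'
#!/bin/sh
echo "running shared tool"
EOF
chmod 755 shared_tool.sh   # owner: rwx; group and others: r-x (no write bit)

# Any user can execute it...
./shared_tool.sh

# ...but the mode shows non-owners cannot write to it, so malware running
# as an ordinary user cannot infect the executable.
ls -l shared_tool.sh | cut -c1-10   # -rwxr-xr-x
```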
The inability of a common user to alter an executable is a severe restriction on viruses and worms that depend on users to propagate their malware. Open source issues The first thing that comes to mind when considering the security issues pertaining to open source is that anyone can see the code. This means that hackers looking to cause trouble can spend countless hours analyzing your code to find logical errors, such as the following: ? Hackers will look for embedded passwords or backdoors. The software developer may hard code a password authentication into the application as a convenience feature to the end user.
This is clearly a poor practice because the passwords are easily detected and acquired. Attackers can then use the passwords to gain access to the resources outside of the constraints of the application. ? Hackers may identify places in the code where input is not properly checked. Most programmers have tunnel vision when writing code; they assume they will receive the proper input from the end user (or from another function).
For code to be truly secure, programmers must assume that they may receive completely arbitrary, and even hostile, input from the user.
The programmer must really think out of the box about validating input. For example, special characters and Unicode should be considered. Simple checks on the input data may not detect data put in following a \0 (NULL terminator).
If hackers find locations where the input is not properly checked, they will attempt to exploit this by entering strange data. The result may be that the application reacts in a totally unpredicted manner, giving hackers a means to exploit the application. ? Hackers will examine the open source code for variables that are set but not properly checked.
Programmers should check that their variables do not go out of range, meaning that only valid values are assigned to the variables. If a variable goes out of range, it may clobber memory or have other unintended consequences. If hackers can manipulate the application in such a way as to cause variables to go out of range, the application’s behavior may be unpredictable. Under these circumstances, the application may exhibit a vulnerability that can be exploited. ? Hackers will look for instances in the open source code in which the user’s input is used as code or instructions.
A common example of this might be when an end user is allowed to build an SQL query. The query might then be passed to a function that executes the query. This is a dangerous practice. Merely checking the input for proper format will not suffice in this case. The end user’s input should not be processed directly; rather an interpreter should be written to read the input and rebuild the SQL necessary to run the queries. This interpreter must be very restrictive in the calls it will use and make. Having found potentially vulnerable points in the code, hackers can attempt to exploit the vulnerabilities.
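Two of the defenses just described (checking input against a whitelist rather than enumerating dangerous characters, and keeping user text out of the query itself) can be sketched in shell; the function names, table names, and report names below are invented for illustration.

```shell
# Whitelist validation: accept only a known-safe character set. Quotes,
# NULs, Unicode, and special characters all fail the same single test.
validate_name() {
  case "$1" in
    "")              return 1 ;;   # empty input is rejected
    *[!a-zA-Z0-9_]*) return 1 ;;   # anything outside [a-zA-Z0-9_] is rejected
    *)               return 0 ;;
  esac
}

# Restrictive interpreter: the user's input selects from fixed queries
# built entirely by the program; the input never becomes part of the SQL.
build_query() {
  case "$1" in
    daily_sales) echo "SELECT * FROM sales WHERE day = CURRENT_DATE;" ;;
    open_orders) echo "SELECT id, customer FROM orders WHERE status = 'open';" ;;
    *)           return 1 ;;
  esac
}

validate_name "alice_01" && echo "alice_01 accepted"
build_query "daily_sales"
build_query "x; DROP TABLE sales" || echo "hostile request rejected"
```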
This is still not a simple process, but hackers are well ahead of the game just by knowing where to concentrate their efforts. In addition to exposing the open source code to the hacker community, the code is also scrutinized by the user community. The user community does not spend time reading source code for logical (security) flaws. The user community will only identify logic errors that are encountered during an examination of the code for some other reason, the most common reasons being that the code is faulty (does not work for that user), or the user wants to extend or improve the code to cover his or her particular circumstances.
While not as good as employing an army of “white hat” hackers (persons who test applications to find flaws to better the product, not exploit it) to scrutinize the code, software developers have hundreds of extra eyes going over their code when they make it open source. Because most people are honorable and reputable, this appears to be a net gain for software developers, from a security perspective. When logic problems are found and brought to the attention of the user community, in open source software, they tend to be corrected quickly due to the following: ?
Software developers can’t pretend that the flaws don’t exist because they are there for the whole world to see. ? The user community will often contribute to making the fix. Open source has an added benefit of allowing security to influence the evolution of software products. For the reasons stated earlier, the development of software in an open source manner may contribute to the improvement of software available for a given need. Most user requirements have a number of applications that could satisfy the need.
Except in the case of a monopoly, the user community will eventually decide which of the applications survive and become popular. When open source is subjected to this selection process, it can be assumed that the security (lack of logic errors) of the code will be a factor when the user community chooses their favorite applications. All things being equal, users will choose a secure application over a risky one. Physical Security The first (or last, depending on your perspective) line of defense against a security threat is physical security. Physical security, in a nutshell, is concerned with physical facilities – keys, locks, chains, enclosures, iron gates, safes, doors, lights, radiation, sound, riverbeds, combination keys, underground wells, and so on – that help to secure and safeguard IT equipment, the communications network, and the associated infrastructure. Some measures can be taken to improve the security of a UNIX workstation in the event that an attacker gains physical access to the device. The following UNIX-specific methods to improve the physical security of a workstation are discussed here: ? Limit access to the UNIX workstation during boot operations. ? Detect hardware changes to understand any physical changes to the system. ? Disk partitioning can lessen the impact of damage from a security problem. ? Prepare for the inevitable security attack. Limiting access It is a general principle that any network device (such as a UNIX workstation) can be compromised if an attacker has physical access to the device. The type of compromise varies depending on the network device. For a UNIX workstation, the following are some possible means to achieve this compromise: ? Reboot—If the workstation can be rebooted with a floppy or CD, an attacker can boot an operating system of their choice.
In this way, they could have full access to all the workstation’s resources. ? Data collection—If an attacker installs a covert monitoring device, such as a keystroke capturer, sensitive information may then be stored on the monitoring device. The device may either phone home the information to the attacker, or the attacker may get physical access to the box a second time and retrieve the device. ? Theft—An attacker who can remove a hard drive from the premises will have sufficient time and resources to extract all the information on the drive. ? BIOS control—If an attacker is able to reboot the workstation and get into BIOS, the person may set a BIOS password to lock everyone else out of the workstation. This would constitute an effective denial-of-service (DoS) attack. The following steps will improve the physical security of the UNIX workstation. These measures should be considered part of a defense-in-depth methodology, because all these steps together will still not completely secure a workstation that has been physically compromised: ? Enable the BIOS password. BIOS settings will be protected from change if this password is set.
Also, if the BIOS password is left in the disabled state, an attacker can enable it and set a password. This can result in a denial-of-service attack, because legitimate users will not be able to boot and use the workstation. ? Change BIOS settings. BIOS settings should be changed to prevent booting from a floppy or CD. These are typically infrequent events; therefore, the impact will, in most cases, be minimal. ? Set the boot loader password. Typically, this involves the Linux Loader (LILO) or GRand Unified Bootloader (GRUB).
If an attacker can modify the boot loader configuration, he or she will be able to access and change resources that were otherwise off limits. Some versions of Linux can be booted directly into a root account using one of the following commands at the boot prompt:

linux single
linux init=/bin/sh

In the first case, Linux boots using single-user mode. This mode, in UNIX, gives someone root access to all the resources on the host machine without needing to log in with a password. Requiring a password during the boot process will provide additional security.
Single-user mode access will require a password if the following line is inserted in the /etc/inittab file after the initdefault line:

~~:S:wait:/sbin/sulogin

In the second case, linux init=/bin/sh, Linux is booted and runs a Bourne shell instead of the init process. This provides the user with root access. To add a password to the LILO prompt, put the following lines in the /etc/lilo.conf file:

restricted
password=””

The boot loader password takes effect after rebooting. When prompted, enter the root password. Now, when the workstation is rebooted, any additional boot arguments will require the root password.
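For legacy GRUB, the equivalent protection is a password line in its configuration file. The sketch below shows the usual procedure; the hash shown is a made-up placeholder, and the file location varies by distribution (grub.conf on Red Hat systems, menu.lst elsewhere).

```shell
# Generate an MD5 hash of the chosen boot password (legacy GRUB):
#
#   grub-md5-crypt
#   Password: ********
#   $1$<made-up-hash>        <- the hash is printed; copy it
#
# Then add the hash near the top of /boot/grub/grub.conf (or menu.lst):
#
#   password --md5 $1$<made-up-hash>
#
# With the password in place, interactive edits to the boot entries
# (the 'e' command at the GRUB menu) first require pressing 'p' and
# supplying the password.
```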
Detecting hardware changes The application kudzu detects and configures new and/or changed hardware on a Linux system. When started, kudzu detects the current hardware and checks it against a database stored in /etc/sysconfig/hwconf, if one exists. It then determines if any hardware has been added or removed from the system. If new hardware is found, the user is prompted to configure the hardware. If hardware is expected but not found, the user can remove the configuration. Kudzu then updates the database in /etc/sysconfig/hwconf.
If no previous database exists, kudzu attempts to determine what devices have already been configured by looking at /etc/modules.conf, /etc/sysconfig/network-scripts/, and /etc/X11/XF86Config. The following are just a few of the pieces of hardware identified and stored in the hwconf database. The full listing can be obtained with the command kudzu -p. Shown in the following listing are a network interface card (NIC), a floppy drive, a CD-ROM drive, and a hard drive. By storing this information and comparing it with current values, any changes in the physical hardware can be found.

class: NETWORK
bus: PCI
detached: 0
device: eth
driver: 3c59x
desc: “3Com Corporation|3c905C-TX/TX-M [Tornado]”
vendorId: 10b7
deviceId: 9200
subVendorId: 1028
subDeviceId: 00d5
pciType: 1

class: FLOPPY
bus: MISC
detached: 0
device: fd0
driver: unknown
desc: “3.5” 1.44MB floppy drive”

class: CDROM
bus: SCSI
detached: 0
device: scd0
driver: ignore
desc: “Matshita CDRW/DVD UJDA740”
host: 0
id: 0
channel: 0
lun: 0
generic: sg0

class: HD
bus: IDE
detached: 0
device: hda
driver: ignore
desc: “FUJITSU MHT2060AT”
physical: 116280/16/63
logical: 7296/255/63

Disk partitioning
Partitioning of disks on a UNIX platform can be a physical security issue. Older UNIX versions had a serious problem with the loss of a partition due to a physical error. For example, a sudden power loss may cause a mismatch between the file pointers (inodes) stored in memory and those already written to disk. Such a mismatch could cause the loss of some data on the partition. This risk is greatly mitigated with the newer versions of the UNIX file systems. These file systems, such as ext3 in Linux, use journaling to make the recovery of damaged file systems more reliable.
Journaling provides for a fast file system restart in the event of a system crash. By using database techniques, journaling can restore a file system in a matter of minutes, or even seconds, versus hours or days with non-journaled file systems. In addition to ext3, jfs, xfs, and reiserfs are also journaling file systems. Even with journaling, data in a file system (partition) can be lost due to disk damage. One measure that can be taken to reduce this risk is to spread files (based on their use) across different partitions. One partition should contain non-changing operating system files.
This is usually the /usr directory. If this partition is lost due to some physical problem, the partition can readily be restored either from backup or by re-installing the operating system. Because this partition will rarely change, incremental backups can be done quickly. The directory /usr/local is one place under /usr where applications may install themselves. Even though this appears to be on the /usr partition, it can be mounted as a separate partition during the boot process. The most common way to do this is in /etc/fstab, with a line such as the following:

/dev/hda6 /usr/local ext3 defaults 1 2

It is advisable to put the /home directory on a separate partition. This partition holds the home directories of the users who can log in to the workstation. In many cases, these directories will hold configuration information for the individual users. There should also be one or more partitions that hold the data that will be used by the organization or the particular workstation (referred to here as the /data directory).
The advantage to having the data in a separate partition is that it can be backed up and restored separately.
Also, when the UNIX operating system is upgraded, the /data directory can be brought forward without the need to copy it off and then back onto the workstation. Consider directories that could grow very large and, as a result, cause a denial of service for the whole workstation. Typically, these are the /tmp and /var directories. These should each be put on a separate partition. If the /tmp or /var partition fills up, performance and operations may be impacted or impaired, but recovery will be simple.
If, instead, the / directory is filled up (because /tmp was on the same partition) the operating system might hang and not be able to reboot without special procedures. Prepare for the eventual attack You can take certain steps to prepare a UNIX workstation for the inevitable attack. From a security perspective, these steps are usually put under the category of incident response or disaster recovery. See Chapter 17 for more detailed information on incident response and disaster recovery. Preparing for an attack is a three-part process—backup, inventory, and detection. The frequency and extent of the backups (copying data and files and moving them off the workstation) should be determined by the risk of losing the files or data. The more frequently the data changes and the more critical would be the loss, the more frequent the backups should be. It is not uncommon in a rapid development environment to see several backups daily. However, other environments, such as a home user environment, might do weekly or monthly backups. Backups should be done in a manner consistent with the sensitivity and attention given to the workstation.
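Assuming GNU tar, a simple incremental backup scheme can be sketched with its --listed-incremental snapshot files. The directory and file names below are illustrative.

```shell
# Create some data worth protecting.
mkdir -p data && echo "report v1" > data/report.txt

# Full backup; the snapshot file records what has been saved.
tar --listed-incremental=full.snar -czf full.tar.gz data
cp full.snar daily.snar

# A file changes later in the week...
sleep 1
echo "report v2" > data/report.txt

# ...so the next incremental backup picks up only the change.
tar --listed-incremental=daily.snar -czf incr.tar.gz data
tar -tzf incr.tar.gz    # lists data/ and the changed data/report.txt
```

Rotating the snapshot files weekly gives the daily-incremental, weekly-full cycle; the media holding full.tar.gz and the incrementals should be moved off the workstation.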
In most cases, daily backups are recommended. A normal backup cycle is for incremental backups to be done every day and full backups to be done on Friday. How long the backups will be kept or, in the case of reusable media, re-used depends on the sensitivity and attention placed on the workstation. The more sensitive the data, the longer the backups should be kept. In some cases, financial data might be kept for years. If the workstation does not get a lot of monitoring and it is suspected that an attack might not be readily detected, the backups should be kept for a longer period than normal. ? Inventory involves the system administrator knowing the key files on the workstation that must be checked in the event of an attack. From an operating system perspective these include password files (/etc/passwd) and startup scripts (/etc/rc.d/init/*).
However, individual organizations will have other equally critical files that control the mission, such as database files. ? Detection is key to any preparation against an attack. Detection or monitoring allows for the initiation of a timely response. This can be a significant factor in limiting the damage done by the attack.
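A minimal form of the inventory-plus-detection combination is a checksum baseline over the key files; dedicated tools such as Tripwire or AIDE do this job far more thoroughly. A stand-in file is used below so the sketch can run anywhere.

```shell
# Build a stand-in "key file" and record its checksum while the system
# is known to be good.
mkdir -p demo-etc
printf 'root:x:0:0:root:/root:/bin/bash\n' > demo-etc/passwd
sha256sum demo-etc/passwd > baseline.sha256

# Later, during routine monitoring, compare against the baseline.
if sha256sum -c baseline.sha256 >/dev/null 2>&1; then
  echo "key files unchanged"
else
  echo "ALERT: key file modified"
fi
```

The baseline itself must be stored where an attacker cannot rewrite it (read-only media or another host), or the comparison proves nothing.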
If any of these three protective measures—backup, inventory, or detection—is missing or weak, the other two may be hindered to the point of not being effective. Consider the following scenarios: ? Backups without detection—Without adequate detection, an attacker may be on the workstation for a period of time that spans a number of backups. If the compromise is then detected and the system administrator attempts to restore from backup, they may be restoring compromised files. ? Inventory and weak detection—It is important to keep an inventory or status of key files on the workstation to be better prepared to respond to an attack or incident.
However, without quick detection of an attack, some of these files may be changed by users and administrators over the course of normal business. If the valid users make changes on top of an attacker’s changes, it will be very difficult to determine what was done by the attacker and how to mitigate the risk. ? Detection without inventory and backups—If inventories of key files and backups are adequately conducted, prompt detection can lead to a response that will limit the attacker’s abilities to continue the attack. However, if inadequate backups were done, the recovery from the attack can be hampered.
In such cases, the entire workstation may have to be taken offline and the operating system rebuilt from scratch. The bottom line in responding to an attack or a compromised system is if you can’t be 100 percent assured that you have found and corrected everything that an attacker has done, you should take the workstation offline, rebuild the operating system, and reharden the workstation, hopefully, taking the opportunity to establish good backups, inventories, and detection capabilities. Controlling the Configuration Controlling the configuration of a UNIX workstation is important for network security.
Even stripped down and hardened, a UNIX workstation can be a powerful tool from which to launch attacks on the network or on other hosts. The configuration concerns will be addressed in two areas: ? Installed packages or applications—Eliminating unneeded applications and keeping required ones properly patched is key to a defense-in-depth strategy. ? Kernel-related issues—Because the kernel has root-level control over resources and processes, it is a critical part of the UNIX system to keep under configuration control. Installed packages It is important for an administrator to know what packages are installed.
The “Operating Safely” section later in this chapter discusses how to control which applications are running. Even if an application is not running or planned to run, its installation should still be limited or controlled. Attackers may seek to take over a workstation to use its resources. By stripping the available software packages down to a minimum, the workstation becomes a less valuable target to the attacker. Additionally, if the workstation is taken over, its usefulness to the attacker is reduced. Following are some typical packages that should not be installed unless they have a legitimate use: ? Mail server—Sendmail (or an equivalent application) is commonly installed on UNIX systems. While the mail server may not be used by the average UNIX user, it is a useful tool to an attacker who has taken control of the workstation. ? Automatic update servers—If automatic update services are not being used, these services should not be installed. For example, on Red Hat systems, rhnsd is a daemon process that runs in the background and periodically polls the Red Hat Network to see if there are any queued actions available. If any actions are queued, they are run and the system is automatically updated. ? File-sharing services—On UNIX systems, smbd is a server daemon that provides file sharing and printing services to Windows clients. The server provides filespace and printer services to clients using the Server Message Block (SMB) or Common Internet File System (CIFS) protocol. This is compatible with the LanManager protocol, and can service LanManager clients. ? File transfer services—The File Transfer Protocol (FTP) service is a program that allows a user to transfer files to and from a remote network site. Attackers have been known to activate FTP capabilities to use systems for their personal file transfers.
On Linux, the command rpm -qai will list all installed rpm packages, producing information on each package. Following is the information available for a typical sendmail package:

Name        : sendmail            Relocations: (not relocateable)
Version     : 8.12.8              Vendor: Red Hat, Inc.
Release     : 4                   Build Date: Mon 24 Feb 2003 07:16:00 PM EST
Install Date: Wed 15 Oct 2003 09:36:17 PM EDT
Build Host  : stripples.devel.redhat.com
Group       : System Environment/Daemons
Source RPM  : sendmail-8.12.8-4.src.rpm
Size        : 4389045             License: BSD
Signature   : DSA/SHA1, Mon 24 Feb 2003 11:30:42 PM EST, Key ID 219180cddb42a60e
Packager    : Red Hat, Inc.
Summary     : A widely used Mail Transport Agent (MTA).
Description : The Sendmail program is a very widely used Mail Transport Agent (MTA).
MTAs send mail from one machine to another. Sendmail is not a client program that you use to read your e-mail. Sendmail is a behind-the-scenes program that actually moves your e-mail over networks or the Internet to where you want it to go. If you ever need to reconfigure Sendmail, you will also need to have the sendmail.cf package installed. If you need documentation on Sendmail, you can install the sendmail-doc package.
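A related query helps decide whether a file can safely go; a hedged sketch for rpm-based systems (the path /usr/sbin/sendmail is only an illustrative example):

```shell
# Find which package owns a given file before removing anything.
if command -v rpm >/dev/null 2>&1; then
    rpm -qf /usr/sbin/sendmail 2>/dev/null || echo "file not owned by any package"
else
    echo "rpm not available on this system"
fi
```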
Kernel configurations

The kernel is a relatively small program that controls the most critical resources on the system, such as the hard drives, memory, and video card. The kernel allows many applications to run simultaneously by controlling their access to critical resources. Applications access these resources through system calls. Most of the kernel code consists of device drivers—over 90 percent of which are probably not needed by any one particular workstation. Usually, the installation of UNIX or Linux does not include a compilation of the kernel.
As a result, the kernel must be prepared to support a wide variety of architectures and hardware configurations. This leads to a lot of code that is not used. As a general security principle, there is no advantage to keeping unused kernel code around. Note that most of this unused code is not compiled directly into the kernel but is available to be loaded as a module when needed. Kernel modules are discussed later in this chapter in the “Kernel Modules” section. UNIX has two modes: supervisor mode and user mode. In user mode, library functions are used.
These functions then make system calls, which execute on behalf of the libraries. Because the system calls are part of the kernel itself, they have privileged access to critical system resources. Once the task (system call) is completed, control is returned to user mode.

Kernel options

A typical kernel has many options, perhaps as many as 1300 or more in the Linux 2.4 kernel. Some of the more significant security-related options are as follows:

• iptables—iptables is a powerful firewall that can be used on UNIX workstations. Because iptables operates at the kernel level, it must be compiled into the kernel. iptables is discussed in greater detail in the “Networking” section of this chapter.
• IP forwarding—With forwarding turned on, the workstation can function as a gateway or router. Traffic sent to the workstation but destined for a different IP will be routed according to the workstation’s route table. This can be a security risk. Certain network safeguards may be circumvented because the traffic will appear to come from the workstation instead of the originator. Additionally, if the workstation is multihomed (two or more NICs on different subnets), the workstation may allow traffic onto a different network.
This may circumvent security controls for that network, such as a firewall or proxy. If not disabled in the kernel, IP forwarding can also be disabled after a system has booted. In Linux, the file /proc/sys/net/ipv4/ip_forward should contain 0 to disable forwarding.
• Support for multiprocessors—If multiple processors are detected on your workstation, the installation process may configure your boot loader to load a multiprocessor version of the kernel. In most cases, this will not make a difference in the security of the workstation.
However, if the workstation is doing development and testing of kernel modules and system calls, the multiprocessor kernel might introduce unwanted effects. ? Source-routed frames—The kernel can be configured to drop source-routed frames. A source-routed frame is a packet that contains all the information needed for the packet to traverse the network and reach its destination. This source routing is not normally needed and is most often used as a small part of a larger attack. By configuring the kernel to drop source-routed frames, an added measure of security is gained.
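The IP forwarding setting discussed above can be inspected and changed at runtime on Linux; a brief sketch (the write operations require root, so they are shown commented out):

```shell
# Read the current setting: 0 means forwarding is disabled, 1 enabled.
cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo "ip_forward not exposed here"

# Either interface below disables forwarding until the next boot (root only):
# echo 0 > /proc/sys/net/ipv4/ip_forward
# sysctl -w net.ipv4.ip_forward=0
```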
The typical UNIX kernel comes with many features enabled that are not required. By rebuilding the kernel and eliminating these options, you will increase the overall security of the workstation. Any unneeded code is a potential source of vulnerability. Additionally, if the workstation is compromised, these unneeded features may be useful to the attacker. Following is a short list of some options that have been turned on. You can see from this small sample that a wide variety of configuration items are possible.

CONFIG_SCSI_CONSTANTS=y
CONFIG_AIC7XXX_TCQ_ON_BY_DEFAULT=y
CONFIG_AIC7XXX_OLD_TCQ_ON_BY_DEFAULT=y
CONFIG_AIC79XX_ENABLE_RD_STRM=y
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_G_NCR5380_PORT=y
CONFIG_SCSI_NCR53C7xx_FAST=y
CONFIG_SCSI_NCR53C7xx_DISCONNECT=y
CONFIG_SCSI_PCMCIA=y
CONFIG_IEEE1394_PCILYNX_PORTS=y
CONFIG_IEEE1394_SBP2_PHYS_DMA=y
CONFIG_NETDEVICES=y
CONFIG_APPLETALK=y
CONFIG_DEV_APPLETALK=y
CONFIG_COPS_DAYNA=y
CONFIG_COPS_TANGENT=y
CONFIG_IPDDP_ENCAP=y
CONFIG_IPDDP_DECAP=y
CONFIG_NET_ETHERNET=y
CONFIG_NET_VENDOR_3COM=y

Kernel modules

Kernel modules are dynamic extensions to the kernel that can be added without requiring a kernel rebuild or even a reboot.
Kernel modules allow for the following:

• The dynamic extension of kernel capabilities after the detection of new hardware—When a Personal Computer Memory Card International Association (PCMCIA) card is inserted into a UNIX laptop, the operating system can load the appropriate kernel modules. Adding a Universal Serial Bus (USB) device invokes a similar response.
• The rapid testing and modification of kernel capabilities under development—The system call developer does not have to go through time-consuming rebuilds and reboots just to test a new version.
• The size of the kernel loaded at boot time can be kept smaller—Many capabilities are designated as loadable modules, so the boot-time size of the kernel is kept small and manageable.

A UNIX administrator must know how to check for rootkits that have been loaded as a kernel module. The lsmod command will list kernel modules that have been loaded. The following is a subset of typical modules loaded in a Linux 2.4 kernel:

Module                  Size  Used by    Tainted: PF
i810_audio             27720   1 (autoclean)
ac97_codec             13640   0 (autoclean) [i810_audio]
soundcore               6404   2 (autoclean) [i810_audio]
agpgart                47776   3 (autoclean)
nvidia               2126120   6 (autoclean)
parport_pc             19076   1 (autoclean)
lp                      8996   0 (autoclean)
parport                37056   1 (autoclean) [parport_pc lp]
ipt_state               1048   3 (autoclean)
iptable_nat            21720   0 (autoclean) (unused)
ip_conntrack           26976   2 (autoclean) [ipt_state iptable_nat]
iptable_filter          2412   1 (autoclean)
ip_tables              15096   5 [ipt_state iptable_nat iptable_filter]
sg                     36524   0 (autoclean)
sr_mod                 18136   0 (autoclean)
ide-scsi               12208   0
scsi_mod              107160   3 [sg sr_mod ide-scsi]
ide-cd                 35708   0
cdrom                  33728   0 [sr_mod ide-cd]
keybdev                 2944   0 (unused)
mousedev                5492   1
hid                    22148   0 (unused)
input                   5856   0 [keybdev mousedev hid]
usb-uhci               26348   0 (unused)
usbcore                78784   1 [hid usb-uhci]
ext3                   70784   7
jbd                    51892   7 [ext3]

System calls

A system call is a request to the operating system kernel for access to critical resources.
System calls are accomplished using special instructions that allow a switch to the supervisor mode. These calls are the services provided by the kernel to application programs. In other words, a system call is a routine that performs a system-level function on behalf of a process. All system operations are allocated, initiated, monitored, manipulated, and terminated through system calls.
System calls can assist an administrator in evaluating an application’s security. By examining the calls that an application makes to the kernel, an administrator can determine whether a security risk is involved. Viewing the system calls made by a process can reveal, for example, that the hard drive is being accessed when it should not be, or that a process that has no business on the network is making network accesses. On a Linux system, the strace command is a system call tracer tool that prints out a trace of all the system calls made by a process or application.
The ltrace command will similarly print out all library calls made. On FreeBSD you can use ktrace, and on Solaris truss. The following example is a session that shows the use of strace on a simple Hello World program. First the program source is listed:

# cat helloworld.c
/*
 * helloworld – simple hello world program
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    printf("Hello World\n");
}

Now the program is executed normally:

# ./a.out
Hello World

Finally, the program is executed with strace:

# strace ./a.out
execve("./a.out", ["./a.out"], [/* 35 vars */]) = 0
uname({sys="Linux", node="localhost.localdomain", ...}) = 0
brk(0) = 0x8049510
old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40016000
open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=81158, ...}) = 0
old_mmap(NULL, 81158, PROT_READ, MAP_PRIVATE, 3, 0) = 0x40017000
close(3) = 0
open("/lib/tls/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0`V\1B4\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1531064, ...}) = 0
old_mmap(0x42000000, 1257224, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x42000000
old_mmap(0x4212e000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x12e000) = 0x4212e000
old_mmap(0x42131000, 7944, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x42131000
close(3) = 0
set_thread_area({entry_number:-1 -> 6, base_addr:0x400169e0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
munmap(0x40017000, 81158) = 0
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 3), ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40017000
write(1, "Hello World\n", 12Hello World) = 12
munmap(0x40017000, 4096) = 0
exit_group(12) = ?
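Full traces like the one above get long quickly. strace can restrict the trace to one class of calls, which is convenient when checking a program for network activity. A hedged sketch, using ls as a harmless stand-in for the program under examination:

```shell
# Trace only network-related system calls; a program that never touches
# the network (such as ls) should produce essentially no trace output.
if command -v strace >/dev/null 2>&1; then
    strace -e trace=network ls / >/dev/null 2>trace.log || true
    cat trace.log
else
    echo "strace is not installed on this system"
fi
```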
When strace is run on a program that accesses the network, you see certain calls that betray that access:

# strace ping -c 1 192.168.131.131
execve("/bin/ping", ["ping", "-c", "1", "192.168.131.131"], [/* 35 vars */]) = 0
socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = 3
getuid32() = 0
setuid32(0) = 0
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(1025), sin_addr=inet_addr("192.168.131.131")}, 16) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(32796), sin_addr=inet_addr("192.168.123.10")}, [16]) = 0
close(4) = 0
setsockopt(3, SOL_RAW, ICMP_FILTER, ~(ICMP_ECHOREPLY|ICMP_DEST_UNREACH|ICMP_SOURCE_QUENCH|ICMP_REDIRECT|ICMP_TIME_EXCEEDED|ICMP_PARAMETERPROB), 4) = 0
setsockopt(3, SOL_IP, IP_RECVERR, [1], 4) = 0
setsockopt(3, SOL_SOCKET, SO_SNDBUF, [324], 4) = 0
setsockopt(3, SOL_SOCKET, SO_RCVBUF, [65536], 4) = 0
getsockopt(3, SOL_SOCKET, SO_RCVBUF, [131072], [4]) = 0
brk(0) = 0x8062c80
brk(0x8063c80) = 0x8063c80
brk(0) = 0x8063c80
brk(0x8064000) = 0x8064000
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 6), ...}) = 0
exit_group(0) = ?

/proc file system

The /proc directory is a pseudo–file system used as an interface to kernel data structures rather than reading and interpreting kernel memory. Most of /proc is read-only, but some files allow kernel variables to be changed. The kernel variable that determines whether the system can act as a router and forward IP packets is one such example.
If IP forwarding is to be turned on, a 1 should be written into the file or variable at /proc/sys/net/ipv4/ip_forward. Without IP forwarding enabled, this file contains a value of 0. The /proc directory contains many parameters and kernel values needed by system calls to maintain a stable environment. The Linux manual pages describe the available pseudo-files. A few that might be of interest to a network security administrator are as follows:

• Process ID—There is a numerical subdirectory for each running process. The subdirectory is named by the process ID. Each subdirectory contains pseudo-files and directories.
Two pseudo-files in these subdirectories are as follows:

    • cmdline—This holds the complete command line for the process, unless the whole process has been swapped out or the process is a zombie. In either of these two cases, there is nothing in this file (a read on this file will return 0 characters).
The command line arguments appear in this file as a set of null-separated strings, with a further null byte after the last string.
    • cwd—This is a link to the process’s current working directory. To determine the current working directory of process 2250, enter the following command:

ls -l /proc/2250/cwd

This will produce the following output showing the current working directory of /root:

lrwxrwxrwx 1 root root 0 Sep 29 22:28 /proc/2250/cwd -> /root/

• cmdline—This pseudo-file contains the arguments passed to the Linux kernel at boot time.
• kcore—This file represents the system’s physical memory and is stored in the Executable and Linking Format (ELF) core file format. With this pseudo-file and an unstripped kernel (/usr/src/linux/vmlinux) binary, the gdb command can be used to examine the current state of any kernel data structures.
To see all the data, the kernel needs to be compiled with the -g option. The total length of the file is the size of physical memory (RAM) plus 4KB.
• net—This subdirectory contains various net pseudo-files, all of which give the status of some part of the networking layer. These files contain ASCII structures and are, therefore, readable with the cat command. However, the standard netstat suite provides much cleaner access to these files.
• net/arp—This holds an ASCII-readable dump of the kernel Address Resolution Protocol (ARP) table.
It will show both dynamically learned and pre-programmed ARP entries.
• sys—This directory contains a number of files and subdirectories corresponding to kernel variables. These variables can be read and sometimes modified using the proc file system and the sysctl system call.
• kernel/ctrl-alt-del—This file controls the handling of Ctrl-Alt-Del from the keyboard. When the value in this file is 0, Ctrl-Alt-Del is trapped and sent to the init program to handle a graceful restart. When the value is > 0, Linux’s reaction will be an immediate reboot, without even syncing its dirty buffers.
• domainname, hostname—The files domainname and hostname can be used to set the NIS/YP domain name and the host name of your box in exactly the same way as the commands domainname and hostname.

Operating UNIX Safely

UNIX is a powerful operating system with many tools and capabilities. Even a system that has been properly configured and hardened is still a security risk if users and processes are not properly controlled and monitored. Any network security attack on a workstation ultimately will come down to running code. The code can fall into one of two categories:

• Malcode—This consists of viruses, worms, and Trojan horses. This code is either run by the user or on the user’s behalf by some scripting application, such as a Web browser.
• Host services—In this case, the attacker comes in from the network and remotely gets a foothold or access to the workstation by exploiting an open port and its associated service. This is discussed further in the next section, “Controlling processes.”

The protection against malcode is twofold: Use antivirus protection and don’t engage in risky behavior. Although this topic is discussed in detail in Chapter 4, it is worth mentioning here.
Avoiding risky behavior includes the following:

• Don’t open, launch, download, or execute anything that comes from a questionable source. In other words, “Don’t talk to strangers.” This needs to be periodically reinforced at every level of an organization. The weakest-link principle definitely applies here.
• Whenever possible, disable scripting capabilities in e-mail clients, word processors, and other office productivity products.

Note that the security of any workstation can be enhanced by using encryption. This is discussed briefly in the “Encryption and certificates” section of this chapter.
Controlling processes

In the early days of UNIX, installations tended to be bare-bones, meaning that only the bare essentials were brought into a system during installation. As UNIX and Linux grew more popular, installations were made easier, and the trend now is to bring in many features. All these unneeded features or applications are potential security risks. In terms of security, you can group processes or services into three categories, as follows:

• Avoid if at all possible—Certain services are either out of date or so inherently insecure that they should be avoided and alternatives found.
• Use as needed—A small group of services are probably worth the risk and are generally more helpful.
• Probably not needed—Most processes probably fall into this category. Under certain circumstances they have a use but should not be run on most UNIX workstations.

Services to avoid

For the security of the UNIX workstation and the network, it is important that system administrators be kept abreast of the processes running. Because many applications in UNIX operate in a daemon or server mode, they can be ready targets for attackers to exploit. It is a security principle that unneeded applications or services should not be running.
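Keeping abreast of what is configured to start requires an inventory; the following sketch hedges across init systems (chkconfig is Red Hat-style, systemctl applies to modern systemd hosts):

```shell
# List configured services with whatever inventory tool is available.
if command -v chkconfig >/dev/null 2>&1; then
    chkconfig --list
elif command -v systemctl >/dev/null 2>&1; then
    systemctl list-unit-files --type=service 2>/dev/null | head -20
else
    ls /etc/init.d 2>/dev/null || echo "no service inventory tool found"
fi
```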
Here are a few services commonly found on a UNIX workstation that are not normally needed:

• FTP (vsftpd or wuftpd)—FTP is a widely available method of transferring files. It has some vulnerabilities if anonymous access is permitted, and it sends passwords in the clear (unencrypted).
For these reasons, more secure methods of file transfer, such as scp or sftp, should be used instead.
• Network File System (NFS)—Designed for sharing files over a network, but not over the Internet. NFS is a remote procedure call (RPC) service using portmap. NFS makes the spreading of malcode such as Trojan horses easier for the attacker.
• nfslock—The NFS file locking service. If NFS is not being used, this service should be disabled.
• RPC—This protocol has some inherent security problems and should be avoided if not needed. Few applications these days use RPC. Most users could operate their workstations for years and never need to use RPC. Therefore, it is advisable to turn off RPC services unless otherwise needed. Most implementations of RPC deal with homegrown remote control of the computer or distributed processing. Both of these circumstances are rare.
• portmap—This service uses RPC to support nfslock.
• r commands (rsh, rcp, rlogin)—These protocols have weak authentication and pass information in the clear (unencrypted).
There are a number of better replacements, such as ssh and scp.
• telnet—This very simple service allows remote access to a UNIX workstation. Information is passed in the clear, so a third party could easily capture passwords and other sensitive information. Telnet sessions can easily be hijacked and taken over or redirected.

Useful services

The following services should be used if needed. In some cases, they can be made more secure by blocking their ports from the network.

• iptables—This is a kernel-resident packet filter that works off rules controlling packets on input, on output, and when they are forwarded through the workstation’s network interfaces. iptables adds another layer of security and is an important defense-in-depth addition to the UNIX workstation.
• keytable—This script loads a keyboard map and system font during the boot.
• kudzu—This is a hardware-detection program that runs during the boot. It is useful if your workstation frequently has hardware changes, such as a laptop that changes docking stations frequently. If the workstation is stable and does not change, this service can be disabled.
• network—This script starts the network interfaces and is required if the workstation is connecting to the network.
• pcmcia—This is the script that inserts pcmcia kernel modules for PCMCIA cards on laptops. Even though laptops probably constitute only a small percentage of installed UNIX workstations, this service is often on by default. If not applicable to the workstation’s hardware, it should be disabled.
• Print daemons (cupsd, lpd)—These processes allow the UNIX workstation to print to network printers. While useful for that purpose, these services should not be accessible from the network. iptables should be used to block these ports.
• random—This script provides for the random seed for the system.
• rawdevices—This service enables raw Input-Output (IO).
• sshd—This is the server that supports remote access to the workstation using a Secure Shell (ssh) client. If remote access into the workstation is not needed, this may be disabled.
• syslog—This process supports the logging of system messages, which can be sent to a central server for analysis and auditing.
• xfs—The X Font Server shares fonts with other machines to speed up font rendering and to support TrueType-style fonts.
This process may be required for X Windows to function efficiently. In these cases, the port can be blocked with iptables. Also, X Windows can be started without listening on the network. To do this, start X with startx -- -nolisten tcp.
• xinetd (inetd)—This service starts other services on demand. xinetd is responsible for starting many of the common, small networking daemons. It runs a daemon only when a connection request is made for the particular service. For example, when the machine receives a pop3 request, xinetd starts up the ipop3d daemon to respond to the request.
Any service can be made available via xinetd. A simple configuration file identifying the port and the service to run is put in the /etc/xinetd.d/ directory. The following are typical services run via xinetd. None of these should be needed for a typical UNIX workstation that is not functioning as a server.

    • chargen—A service that continuously generates characters until the connection is dropped. The characters look something like this: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefg.
    • cups-lpd—An on-demand version of the print daemons discussed earlier.
    • daytime—A service that gets the current system time and prints it out in a format such as “Wed Nov 13 22:30:27 EST 2002.”
    • echo—A service that echoes characters back.
    • finger—A service that displays information about users on a system. With the advent of brute force and social engineering attacks, it is no longer advisable to provide user information to non-authenticated users over the network.
    • imap—A service that allows remote users to access their mail using an Internet Message Access Protocol (IMAP) client such as Mutt, Pine, fetchmail, or Netscape Communicator.
    • imaps—A service that allows remote users to access their mail using an IMAP client with Secure Sockets Layer (SSL) support, such as Netscape Communicator or fetchmail.
    • ipop2—A service that allows remote users to access their mail using a POP2 client such as fetchmail. In most cases, clients support POP3 instead of POP2, so enabling this service is rarely necessary.
    • ipop3—A service that allows remote users to access their mail using a POP3 client such as Netscape Communicator, Mutt, or fetchmail.
    • ktalk—A K Desktop Environment (KDE) version of the talk server (accepting talk requests for chatting with users on other systems).
    • ntalk—A server that accepts ntalk connections, for chatting with users on different systems.
    • pop3s—A service that allows remote users to access their mail using a POP3 client with SSL support, such as fetchmail.
    • rexec—A server for the rexec routine. The server provides remote execution facilities with authentication based on user names and passwords.
    • rlogin—A server for the rlogin program. The server provides a remote login facility with authentication based on privileged port numbers from trusted hosts.
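Putting the above into practice means turning unneeded services off and then confirming that nothing unexpected is still listening. A sketch (the chkconfig and service commands are Red Hat-style tooling and require root, so they are shown commented out):

```shell
# Disabling services (root, Red Hat-style tooling -- shown as comments):
# chkconfig telnet off            # xinetd-managed service
# chkconfig --level 345 nfs off   # init-script service
# service nfs stop

# Verify what is actually listening for TCP connections right now:
if command -v ss >/dev/null 2>&1; then
    ss -tln
elif command -v netstat >/dev/null 2>&1; then
    netstat -tln
else
    echo "no socket listing tool available"
fi
```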