Evil twin hot spots

From Dan Ilett’s Evil twin could pose Wi-Fi threat (CNET News.com: 21 January 2005):

Researchers at Cranfield University are warning that “evil twin” hot spots, networks set up by hackers to resemble legitimate Wi-Fi hot spots, present the latest security threat to Web users.

Attackers interfere with a connection to the legitimate network by sending a stronger signal from a base station close to the wireless client, turning the fake access point into a so-called evil twin.

Most PCs are rife with malware, & owners don’t know it

From Robert Lemos’s Plague carriers: Most users unaware of PC infections (CNET News.com: 25 October 2004):

A study of home PCs released Monday found that about 80 percent had been infected with spyware almost entirely unbeknownst to their users.

The study, funded by America Online and the National Cyber Security Alliance, found home users mostly unprotected from online threats and largely ignorant of the dangers. AOL and the NCSA sent technicians to 329 homes to inspect computers. …

Nearly three in five users do not know the difference between a firewall and antivirus software. Desktop firewall software regulates which applications on a PC can communicate across the network, while antivirus software detects malicious code that attempts to run on a computer, typically by pattern matching. Two-thirds of users don’t have a firewall installed on their computer, and while 85 percent of PC owners had installed antivirus software, two-thirds of them had not updated the software in the last week. The study found one in five users had an active virus on their machines.
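The pattern matching the study attributes to antivirus software can be sketched in a few lines. This is a toy illustration only; the byte signature and detection name below are made up, and real scanners add heuristics, unpacking, and far larger signature databases.

```python
# Toy signature scanner: flag data containing known malicious byte patterns.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.TestVirus",   # hypothetical signature
}

def scan(data: bytes):
    """Return the names of all signatures found in the data."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

print(scan(b"perfectly clean data"))          # []
print(scan(b"...\xde\xad\xbe\xef..."))        # ['Example.TestVirus']
```

A firewall, by contrast, would sit at the network boundary and decide which applications may communicate at all, never inspecting code for patterns like this.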

Identity theft method: file false unemployment claims

From Michael Alter’s States fiddle while defrauders steal (CNET News.com: 21 June 2005):

More than 9 million American consumers fall victim to identity theft each year. But the most underpublicized identity theft crime is one in which thieves defraud state governments of payroll taxes by filing fraudulent unemployment claims.

It can be a fairly lucrative scheme, too. File a false unemployment claim and you can receive $400 per week for 26 weeks. Do it for 100 Social Security numbers and you’ve made a quick $1.04 million. It’s tough to make crime pay much better than that.

The victims in this crime–the state work force agencies that tirelessly oversee our unemployment insurance programs and the U.S. Department of Labor–are reluctant to discuss this topic for obvious reasons. …

The slow response of state and federal agencies is quickly threatening the integrity of the unemployment insurance system. It turns out that crime is a very efficient market and word spreads quickly. Got a stolen Social Security number? You can more easily turn it into money by defrauding the government than by defrauding the credit card companies.

The net result of this fraud is that unemployment taxes are going up, and that makes it that much harder for small businesses and big businesses to do business. Even more, higher payroll taxes slow down economic growth because they make it more expensive to hire new employees.

Arrested for directory truncation

From Sol Terra’s [IP] Use the Dots, Go to Jail – that’s the law (Interesting People: 24 October 2005):

Today, Daniel Cuthbert was found guilty.

Daniel Cuthbert saw the devastating images of the tsunami disaster and decided to donate £30 via the website that was hastily set up to process payments. He is a computer security consultant, regarded in his field as an expert and respected by colleagues and employers alike. He entered his full personal details (name, home address, phone number, and full card details). He did not receive confirmation of payment or a reference, and he became concerned because he had had issues with fraud on his card on a previous occasion. He then ran a couple of very basic penetration tests. If they had shown the site to be insecure, as he suspected, he would have contacted the authorities; he had nothing to gain from testing the site for fun while keeping to himself his suspicion that it was a phishing site and that all the money pledged was going to someone in South America.

The first test he used was the triple dot-dot-slash ( ../../../ ) sequence. The ../ sequence performs a directory traversal, which moves you up one level in a file-system hierarchy; repeating it three times amounts to a Directory Traversal Attack (DTA) that moves up three levels. It is not a complete attack, as that would require a further command; it was merely a light “knock on the door”. The other test consisted of a single apostrophe ( ' ). He received no error messages in response to either query, was satisfied that the site was safe, and went about his work duties. There were no warnings or dialogue boxes showing that he had accessed an unauthorised area.
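Why a bare ../../../ probe is revealing can be sketched with a toy path resolver. The web root and file names below are hypothetical; the point is only that a server which naively joins user input onto its document root lets three ../ sequences climb three levels out of it.

```python
import posixpath

WEB_ROOT = "/var/www/site"  # hypothetical document root

def naive_resolve(requested: str) -> str:
    """Naively join a user-supplied path onto the web root --
    the mistake a directory-traversal probe checks for."""
    return posixpath.normpath(posixpath.join(WEB_ROOT, requested))

def is_contained(resolved: str) -> bool:
    """A patched server would verify the result stays under the root."""
    return resolved == WEB_ROOT or resolved.startswith(WEB_ROOT + "/")

# A normal request stays inside the document root...
print(naive_resolve("images/logo.png"))      # /var/www/site/images/logo.png
# ...but three ../ sequences climb three directory levels out of it.
print(naive_resolve("../../../etc/passwd"))  # /etc/passwd
print(is_contained(naive_resolve("../../../etc/passwd")))  # False
```

The lone apostrophe plays the analogous role for SQL: a site that echoes a database error in response has failed an equally gentle knock.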

20 days later he was arrested at his place of work and had his house searched. In the first part of his interview, he did not readily acknowledge his actions, but in the second half of the interview, he did. He was a little distraught and confused upon arrest, as anyone would be in that situation and did not ask for a solicitor, as he maintained he did nothing wrong. His tests were done in a 2 minute timeframe, then forgotten about.

Rainbow cracking is now a public service

From Robert Lemos’s Rainbow warriors crack password hashes (The Register: 10 November 2005):

Over the past two years, three security enthusiasts from the United States and Europe set a host of computers to the task of creating eleven enormous tables of data that can be used to look up common passwords. The tables – totaling 500GB – form the core data of a technique known as rainbow cracking, which uses vast dictionaries of data to let anyone reverse the process of creating hashes – the statistically unique codes that, among other duties, are used to obfuscate a user’s password. Last week, the trio went public with their service. Called RainbowCrack Online, the site allows anyone to pay a subscription fee and submit password hashes for cracking.

“Usually people think that a complex, but short, password is very secure, something like $FT%_3^,” said Travis, one of the founders of RainbowCrack Online, who asked that his last name not be used. “However, you will find that our tables handle that password quite easily.”
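The core idea is precomputation: instead of hashing guesses on demand, build the hash-to-password mapping once and answer each query with a lookup. The sketch below uses a plain dictionary over a handful of hypothetical candidates; real rainbow tables compress billions of entries into those 500GB via hash-reduce chains, but the lookup economics are the same.

```python
import hashlib

# Tiny stand-in for a precomputed table; a real rainbow table covers
# entire character sets, including short "complex" passwords.
CANDIDATES = ["password", "letmein", "$FT%_3^", "hunter2"]

TABLE = {hashlib.md5(p.encode()).hexdigest(): p for p in CANDIDATES}

def crack(hash_hex: str):
    """Reverse a hash by table lookup instead of brute force."""
    return TABLE.get(hash_hex)

stolen = hashlib.md5("$FT%_3^".encode()).hexdigest()
print(crack(stolen))  # $FT%_3^ -- short-but-complex falls instantly
```

This is why length beats complexity against precomputation: every extra character multiplies the table size needed, while swapping letters for symbols in a seven-character password does not move it out of the covered keyspace.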

Banks have more to fear from internal attacks than external

From electricnews.net’s Internal security attacks affecting banks (The Register: 23 June 2005):

Internal security breaches at the world’s banks are growing faster than external attacks, as institutions invest in technology, instead of employee training.

According to the 2005 Global Security Survey, published by Deloitte Touche Tohmatsu, 35 per cent of respondents said that they had encountered attacks from inside their organisation within the last 12 months, up from 14 per cent in 2004. In contrast, only 26 per cent confirmed external attacks, compared to 23 per cent in 2004.

The report, which surveyed senior security officers from the world’s top 100 financial institutions, found that incidences of phishing and pharming, two online scams which exploit human behaviour, are growing rapidly.

How virtual machines work

From Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, & Jacob R. Lorch’s “SubVirt: Implementing malware with virtual machines” [PDF]:

A virtual-machine monitor (VMM) manages the resources of the underlying hardware and provides an abstraction of one or more virtual machines [20]. Each virtual machine can run a complete operating system and its applications. Figure 1 shows the architecture used by two modern VMMs (VMware and VirtualPC). Software running within a virtual machine is called guest software (i.e., guest operating systems and guest applications). All guest software (including the guest OS) runs in user mode; only the VMM runs in the most privileged level (kernel mode). The host OS in Figure 1 is used to provide portable access to a wide variety of I/O devices [44].

VMMs export hardware-level abstractions to guest software using emulated hardware. The guest OS interacts with the virtual hardware in the same manner as it would with real hardware (e.g., in/out instructions, DMA), and these interactions are trapped by the VMM and emulated in software. This emulation allows the guest OS to run without modification while maintaining control over the system at the VMM layer.
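The trap-and-emulate loop described above can be illustrated with a toy monitor. This is a sketch in Python rather than a real VMM (there is no actual privilege separation here); the port number and classes are invented, but the shape is the paper's: the guest issues port I/O exactly as it would on hardware, and every access lands in the monitor, which services it against emulated device state.

```python
class ToyVMM:
    """Toy monitor: traps guest port I/O and emulates it in software."""
    def __init__(self):
        self.virtual_ports = {0x60: 0}   # e.g. an emulated keyboard port

    def trap_out(self, port, value):
        # The guest's OUT instruction lands here instead of on hardware.
        self.virtual_ports[port] = value

    def trap_in(self, port):
        # The guest's IN instruction reads emulated, not physical, state.
        return self.virtual_ports.get(port, 0)

class Guest:
    """Guest code is unmodified in spirit: it just performs port I/O."""
    def __init__(self, vmm):
        self.vmm = vmm
    def outb(self, port, value):
        self.vmm.trap_out(port, value)
    def inb(self, port):
        return self.vmm.trap_in(port)

vmm = ToyVMM()
guest = Guest(vmm)
guest.outb(0x60, 0xAB)           # guest writes to "hardware"
print(hex(guest.inb(0x60)))      # 0xab -- round-trips through the VMM
```

Because every device interaction funnels through the monitor like this, the VMM layer retains full control while the guest OS runs unmodified.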

A VMM can support multiple OSes on one computer by multiplexing that computer’s hardware and providing the illusion of multiple, distinct virtual computers, each of which can run a separate operating system and its applications. The VMM isolates all resources of each virtual computer through redirection. For example, the VMM can map two virtual disks to different sectors of a shared physical disk, and the VMM can map the physical memory space of each virtual machine to different pages in the real machine’s memory. In addition to multiplexing a computer’s hardware, VMMs also provide a powerful platform for adding services to an existing system. For example, VMMs have been used to debug operating systems and system configurations [30, 49], migrate live machines [40], detect or prevent intrusions [18, 27, 8], and attest for code integrity [17]. These VM services are typically implemented outside the guest they are serving in order to avoid perturbing the guest.

One problem faced by VM services is the difficulty in understanding the states and events inside the guest they are serving; VM services operate at a different level of abstraction from guest software. Software running outside of a virtual machine views low-level virtual-machine state such as disk blocks, network packets, and memory. Software inside the virtual machine interprets this state as high-level abstractions such as files, TCP connections, and variables. This gap between the VMM’s view of data/events and guest software’s view of data/events is called the semantic gap [13].

Virtual-machine introspection (VMI) [18, 27] describes a family of techniques that enables a VM service to understand and modify states and events within the guest. VMI translates variables and guest memory addresses by reading the guest OS and applications’ symbol tables and page tables. VMI uses hardware or software breakpoints to enable a VM service to gain control at specific instruction addresses. Finally, VMI allows a VM service to invoke guest OS or application code. Invoking guest OS code allows the VM service to leverage existing, complex guest code to carry out general-purpose functionality such as reading a guest file from the file cache/disk system. VM services can protect themselves from guest code by disallowing external I/O. They can protect the guest data from perturbation by checkpointing it before changing its state and rolling the guest back later.

Virtual-machine based rootkits

From Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, & Jacob R. Lorch’s “SubVirt: Implementing malware with virtual machines” [PDF]:

We evaluate a new type of malicious software that gains qualitatively more control over a system. This new type of malware, which we call a virtual-machine based rootkit (VMBR), installs a virtual-machine monitor underneath an existing operating system and hoists the original operating system into a virtual machine. Virtual-machine based rootkits are hard to detect and remove because their state cannot be accessed by software running in the target system. Further, VMBRs support general-purpose malicious services by allowing such services to run in a separate operating system that is protected from the target system. We evaluate this new threat by implementing two proof-of-concept VMBRs. We use our proof-of-concept VMBRs to subvert Windows XP and Linux target systems, and we implement four example malicious services using the VMBR platform. Last, we use what we learn from our proof-of-concept VMBRs to explore ways to defend against this new threat. We discuss possible ways to detect and prevent VMBRs, and we implement a defense strategy suitable for protecting systems against this threat. …

A major goal of malware writers is control, by which we mean the ability of an attacker to monitor, intercept, and modify the state and actions of other software on the system. Controlling the system allows malware to remain invisible by lying to or disabling intrusion detection software.

Control of a system is determined by which side occupies the lower layer in the system. Lower layers can control upper layers because lower layers implement the abstractions upon which upper layers depend. For example, an operating system has complete control over an application’s view of memory because the operating system mediates access to physical memory through the abstraction of per-process address spaces. Thus, the side that controls the lower layer in the system has a fundamental advantage in the arms race between attackers and defenders. If the defender’s security service occupies a lower layer than the malware, then that security service should be able to detect, contain, and remove the malware. Conversely, if the malware occupies a lower layer than the security service, then the malware should be able to evade the security service and manipulate its execution.

Because of the greater control afforded by lower layers in the system, both security services and rootkits have evolved by migrating to these layers. Early rootkits simply replaced user-level programs, such as ps, with trojan horse programs that lied about which processes were running. These user-level rootkits were detected easily by user-level intrusion detection systems such as TripWire [29], and so rootkits moved into the operating system kernel. Kernel-level rootkits such as FU [16] hide malicious processes by modifying kernel data structures [12]. In response, intrusion detectors also moved to the kernel to check the integrity of the kernel’s data structures [11, 38]. Recently, researchers have sought to hide the memory footprint of malware from kernel-level detectors by modifying page protections and intercepting page faults [43]. To combat such techniques, future detectors may reset page protections and examine the code of the page-fault handler. …

Our project, which is called SubVirt, shows how attackers can use virtual-machine technology to address the limitations of current malware and rootkits. We show how attackers can install a virtual-machine monitor (VMM) underneath an existing operating system and use that VMM to host arbitrary malicious software. The resulting malware, which we call a virtual- machine based rootkit (VMBR), exercises qualitatively more control than current malware, supports general-purpose functionality, yet can completely hide all its state and activity from intrusion detection systems running in the target operating system and applications. …

A virtual-machine monitor is a powerful platform for malware. A VMBR moves the targeted system into a virtual machine then runs malware in the VMM or in a second virtual machine. The targeted system sees little to no difference in its memory space, disk space, or execution (depending on how completely the machine is virtualized). The VMM also isolates the malware’s state and events completely from those of the target system, so software in the target system cannot see or modify the malicious software. At the same time, the VMM can see all state and events in the target system, such as keystrokes, network packets, disk state, and memory state. A VMBR can observe and modify these states and events—without its own actions being observed—because it completely controls the virtual hardware presented to the operating system and applications. Finally, a VMBR provides a convenient platform for developing malicious services. A malicious service can benefit from all the conveniences of running in a separate, general-purpose operating system while remaining invisible to all intrusion detection software running in the targeted system. In addition, a malicious service can use virtual-machine introspection to understand the events and states taking place in the targeted system. …

In the overall structure of a VMBR, a VMBR runs beneath the existing (target) operating system and its applications (Figure 2). To accomplish this, a VMBR must insert itself beneath the target operating system and run the target OS as a guest. To insert itself beneath an existing system, a VMBR must manipulate the system boot sequence to ensure that the VMBR loads before the target operating system and applications. After the VMBR loads, it boots the target OS using the VMM. As a result, the target OS runs normally, but the VMBR sits silently beneath it.

To install a VMBR on a computer, an attacker must first gain access to the system with sufficient privileges to modify the system boot sequence. There are numerous ways an attacker can attain this privilege level. For example, an attacker could exploit a remote vulnerability, fool a user into installing malicious software, bribe an OEM or vendor, or corrupt a bootable CD-ROM or DVD image present on a peer-to-peer network. On many systems, an attacker who attains root or Administrator privileges can manipulate the system boot sequence. On other systems, an attacker must execute code in kernel mode to manipulate the boot sequence. We assume the attacker can run arbitrary code on the target system with root or Administrator privileges and can install kernel modules if needed. …

VMBRs use a separate attack OS to deploy malware that is invisible from the perspective of the target OS but is still easy to implement. None of the states or events of the attack OS are visible from within the target OS, so any code running within an attack OS is effectively invisible. The ability to run invisible malicious services in an attack OS gives intruders the freedom to use user-mode code with less fear of detection.

We classify malicious services into three categories: those that need not interact with the target system at all, those that observe information about the target system, and those that intentionally perturb the execution of the target system. In the remainder of this section, we discuss how VMBRs support each class of service.

The first class of malicious service does not communicate with the target system. Examples of such services are spam relays, distributed denial-of-service zombies, and phishing web servers. A VMBR supports these services by allowing them to run in the attack OS. This provides the convenience of user-mode execution without exposing the malicious service to the target OS.

The second class of malicious service observes data or events from the target system. VMBRs enable stealthy logging of hardware-level data (e.g., keystrokes, network packets) by modifying the VMM’s device emulation software. This modification does not affect the virtual devices presented to the target OS. For example, a VMBR can log all network packets by modifying the VMM’s emulated network card. These modifications are invisible to the target OS because the interface to the network card does not change, but the VMBR can still record all network packets. …
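The network-card example can be sketched as a pass-through logger. This is a toy model, not the paper's implementation: the class names are invented, and frames are Python byte strings rather than emulated hardware. The key property is the one the paper describes, though: the interface the target drives is unchanged, while a copy of every frame lands in state outside the virtual machine.

```python
class EmulatedNIC:
    """Baseline emulated network card that the target OS drives."""
    def __init__(self):
        self.wire = []            # frames "sent" to the physical network
    def transmit(self, frame: bytes):
        self.wire.append(frame)

class LoggingNIC(EmulatedNIC):
    """VMBR-modified emulation: identical interface, but every frame
    is copied to a log that lives outside the target's virtual machine."""
    def __init__(self):
        super().__init__()
        self.hidden_log = []      # invisible to the target OS
    def transmit(self, frame: bytes):
        self.hidden_log.append(frame)
        super().transmit(frame)   # delivery behaviour is unchanged

nic = LoggingNIC()
nic.transmit(b"GET / HTTP/1.1")
print(nic.wire == [b"GET / HTTP/1.1"])   # True -- traffic unaffected
print(nic.hidden_log)                    # the target cannot observe this copy
```

Since the target's driver only ever sees the `transmit` interface, no in-guest check can distinguish the two emulations.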

The third class of malicious service deliberately modifies the execution of the target system. For example, a malicious service could modify network communication, delete e-mail messages, or change the execution of a target application. A VMBR can customize the VMM’s device emulation layer to modify hardware-level data. A VMBR can also modify data or execution within the target through virtual-machine introspection.

Using our proof-of-concept VMBRs, we developed four malicious services that represent a range of services a writer of malicious software may want to deploy. We implemented a phishing web server, a keystroke logger, a service that scans the target file system looking for sensitive files, and a defense countermeasure that defeats a current virtual-machine detector. …

To avoid being removed, a VMBR must protect its state by maintaining control of the system. As long as the VMBR controls the system, it can thwart any attempt by the target to modify the VMBR’s state. The VMBR’s state is protected because the target system has access only to the virtual disk, not the physical disk.

The only time the VMBR loses control of the system is in the period of time after the system powers up until the VMBR starts. Any code that runs in this period can access the VMBR’s state directly. The first code that runs in this period is the system BIOS. The system BIOS initializes devices and chooses which medium to boot from. In a typical scenario, the BIOS will boot the VMBR, after which the VMBR regains control of the system. However, if the BIOS boots a program on an alternative medium, that program can access the VMBR’s state.

Because VMBRs lose control when the system is powered off, they may try to minimize the number of times full system power-off occurs. The events that typically cause power cycles are reboots and shutdowns. VMBRs handle reboots by restarting the virtual hardware rather than resetting the underlying physical hardware. By restarting the virtual hardware, VMBRs provide the illusion of resetting the underlying physical hardware without relinquishing control. Any alternative bootable medium used after a target reboot will run under the control of the VMBR.

In addition to handling target reboots, VMBRs can also emulate system shutdowns such that the system appears to shut down, but the VMBR remains running on the system. We use ACPI sleep states [3] to emulate system shutdowns and to avoid system power-downs. ACPI sleep states are used to switch hardware into a low-power mode. This low-power mode includes spinning down hard disks, turning off fans, and placing the monitor into a power-saving mode. All of these actions make the computer appear to be powered off. Power is still applied to RAM, so the system can come out of ACPI sleep quickly with all memory state intact. When the user presses the power button to “power up” the system, the computer comes out of the low-power sleep state and resumes the software that initiated the sleep. Our VMBRs leverage this low-power mode to make the system appear to be shut down; when the user “powers up” the system by pressing the power button, the VMBR resumes. If the user attempts to boot from an alternative medium at this point, it will run under the control of the VMBR. We implemented shutdown emulation for our VMware-based VMBR. …

We first measure the disk space required to install the VMBR. Our Virtual PC-based VMBR image is 106 MB compressed and occupies 251 MB of disk space when uncompressed. Our VMware-based VMBR image is 95 MB compressed and occupies 228 MB of disk space uncompressed. The compressed VMBR images take about 4 minutes to download on a 3 Mb/s cable modem connection and occupy only a small fraction of the total disk space present on modern systems. …

The installation measurements include the time it takes to uncompress the attack image, allocate disk blocks, store the attack files, and modify the system boot sequence. Installation time for the VMware-based VMBR is 24 seconds. Installation for the Virtual PC-based VMBR takes longer (262 seconds) because the hardware used for this test is much slower and has less memory. In addition, when installing a VMBR underneath Windows XP, we swap the contents of the disk blocks used to store the VMBR with those in the beginning of the Windows XP disk partition, and these extra disk reads/writes further lengthen the installation time.

We next measure boot time, which we define as the amount of time it takes for an OS to boot and reach an initial login prompt. Booting a target Linux system without a VMBR takes 53 seconds. After installing the VMware-based VMBR, booting the target system takes 74 seconds after a virtual reboot and 96 seconds after a virtual shutdown. It takes longer after a virtual shutdown than after a virtual reboot because the VMM must re-initialize the physical hardware after coming out of ACPI sleep. In the uncommon case that power is removed from the physical system, the host OS and VMM must boot before loading the target Linux OS. The VMware-based VMBR takes 52 seconds to boot the host OS and load the VMM and another 93 seconds to boot the target Linux OS. We speculate that it takes longer to boot the target OS after full system power-down than after a virtual reboot because some performance optimizations within the VMware VMM take time to warm up.

Booting a target Windows XP system without a VMBR takes 23 seconds. After installing the Virtual PC-based VMBR, booting the target system takes 54 seconds after a virtual reboot. If power is removed from the physical system, the Virtual PC-based VMBR takes 45 seconds to boot the host OS and load the VMM and another 56 seconds to boot the target Windows XP OS. …

Despite using specialized guest drivers, our current proof-of-concept VMBRs use virtualized video cards which may not export the same functionality as the underlying physical video card. Thus, some high-end video applications, like 3D games or video editing applications, may experience degraded performance.

The physical memory allocated to the VMM and attack OS is a small percentage of the total memory on the system (roughly 3%) and thus has little performance impact on a target OS running above the VMBR. …

In this section, we explore techniques that can be used to detect the presence of a VMBR. VMBRs are fundamentally more difficult to detect than traditional malware because they virtualize the state seen by the target system and because an ideal VMBR modifies no state inside the target system. Nonetheless, a VMBR does leave signs of its presence that a determined intrusion detection system can observe. We classify the techniques that can be used to detect a VMBR by whether the detection system is running below the VMBR or above it (i.e., within the target system). …

There are various ways to gain control below the VMBR. One way to gain control below the VMBR is to use secure hardware. Intel’s LaGrande [25], AMD’s platform for trustworthy computing [2], and Copilot [36] all propose hardware that can be used to develop and deploy low-layer security software that would run beneath a VMBR.

Another way to gain control below the VMBR is to boot from a safe medium such as a CD-ROM, USB drive or network boot server. This boot code can run on the system before the VMBR loads and can view the VMBR’s quiescent disk state. …

A third way to gain control below the VMBR is to use a secure VMM [17]. Like alternative bootable media, secure VMMs gain control of the system before the operating system boots. Running a secure VMM does not by itself stop a VMBR, as a VMBR can still insert itself between the VMM and the operating system. However, a secure VMM does retain control over the system as it runs and could easily add a check to stop a VMBR from modifying the boot sequence above the secure VMM. …

There are several reasons why traditional techniques (e.g., having the target system scan memory or disk) may not be able to detect a VMBR. First, the target system is confined to its virtual machine. Since the VMBR stores its state outside that virtual machine, the target system will see no suspicious changes in its own state. Second, even if the target system did see something amiss, the VMBR could tamper with the execution of the detector and force it to report incorrect results. …

A fundamental perturbation of the VMBR is that it uses machine resources such as CPU time, memory and disk space, and possibly network bandwidth. A VMBR adds CPU overhead to trap and emulate privileged instructions, as well as to run any malicious services. …

In addition to CPU overhead, a VMBR uses memory and disk space. A detector can detect this overhead by running a program that requires the entire machine’s memory or disk space. A VMBR can hide its space overhead in several ways. It can hide its memory overhead by mapping target pages with the same contents to the same physical page [47] or by paging some data to disk. While this extra paging activity may increase the running time of the program, the memory overhead of VMware was too small to measure in our tests. A VMBR can hide its disk overhead by compressing a portion of the disk, sharing disk blocks with the same contents, or pretending that the disk space it uses has bad sectors. …
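The footprint check and its countermeasure can be modelled abstractly. This is a toy model under invented names (the block counts and classes are hypothetical): a detector fills the disk and compares what it could actually use against what the virtualized hardware advertises, while a careful VMBR masks its overhead by shrinking the advertised capacity (e.g., by faking bad sectors).

```python
class PhysicalDisk:
    def __init__(self, blocks: int):
        self.blocks = blocks

class VirtualDisk:
    """The VMBR maps the target onto the physical disk minus its own state."""
    def __init__(self, physical: PhysicalDisk, vmbr_blocks: int, concealed: bool):
        self.usable = physical.blocks - vmbr_blocks
        # A concealing VMBR advertises only the blocks the target can really
        # use; a naive one passes the full physical capacity through.
        self.advertised = self.usable if concealed else physical.blocks

def footprint_detector(disk: VirtualDisk) -> bool:
    """Fill the disk; if advertised capacity is unreachable, something hides."""
    return disk.usable < disk.advertised

phys = PhysicalDisk(blocks=1000)
naive = VirtualDisk(phys, vmbr_blocks=50, concealed=False)
careful = VirtualDisk(phys, vmbr_blocks=50, concealed=True)
print(footprint_detector(naive))    # True -- 50 blocks are unaccounted for
print(footprint_detector(careful))  # False -- the overhead is masked
```

The same resource-accounting argument applies to memory, which is why the paper pairs each detection probe with the corresponding hiding technique.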

Another type of perturbation is caused by the variety of I/O devices on the computer. Recall that a VMM typically virtualizes all I/O devices. However, virtualizing the I/O device without changing the target’s view of the device requires one to understand the interface and semantics of each device. This is a formidable task, given the wide variety of I/O devices present on today’s computing platforms. Instead, current VMMs emulate a small number of virtual devices (often with customized interfaces to improve performance [1, 34]). The target OS must then use the drivers for the emulated virtual devices. …

A final source of perturbations is the imperfect virtualization of today’s x86 processors. Sensitive, non-privileged instructions like sidt leak information about the VMM yet do not trap to the VMM [31, 37]. …

We expect future enhancements to the x86 platform to reduce these perturbations. Upcoming virtualization support from Intel [45] and AMD [7] will enable more efficient virtualization. These enhancements eliminate sensitive, non-privileged instructions so they cannot be used from the CPU’s user-mode to detect the presence of a VMM. These enhancements may also accelerate transitions to and from the VMM, and this may reduce the need to run specialized guest drivers. …

However, VMBRs have a number of disadvantages compared to traditional forms of malware. When compared to traditional forms of malware, VMBRs tend to have more state, be more difficult to install, require a reboot before they can run, and have more of an impact on the overall system. Although VMBRs do offer greater control over the compromised system, the cost of this higher level of control may not be justified for all malicious applications.

The inventor of UNIX on its security … or lack thereof

From Dennis M. Ritchie’s “On the Security of UNIX”:

The first fact to face is that UNIX was not developed with security, in any realistic sense, in mind; this fact alone guarantees a vast number of holes. (Actually the same statement can be made with respect to most systems.) The area of security in which UNIX is theoretically weakest is in protecting against crashing or at least crippling the operation of the system. The problem here is not mainly in uncritical acceptance of bad parameters to system calls – there may be bugs in this area, but none are known – but rather in lack of checks for excessive consumption of resources. Most notably, there is no limit on the amount of disk storage used, either in total space allocated or in the number of files or directories. Here is a particularly ghastly shell sequence guaranteed to stop the system:

while : ; do    # loop forever
    mkdir x     # create a subdirectory, consuming an i-node...
    cd x        # ...and descend into it, repeating without bound
done

Either a panic will occur because all the i-nodes on the device are used up, or all the disk blocks will be consumed, thus preventing anyone from writing files on the device. …

The picture is considerably brighter in the area of protection of information from unauthorized perusal and destruction. Here the degree of security seems (almost) adequate theoretically, and the problems lie more in the necessity for care in the actual use of the system. …

It must be recognized that the mere notion of a super-user is a theoretical, and usually practical, blemish on any protection scheme. …

On the issue of password security, UNIX is probably better than most systems. Passwords are stored in an encrypted form which, in the absence of serious attention from specialists in the field, appears reasonably secure, provided its limitations are understood. … We have observed that users choose passwords that are easy to guess: they are short, or from a limited alphabet, or in a dictionary. Passwords should be at least six characters long and randomly chosen from an alphabet which includes digits and special characters.
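
Ritchie’s criteria translate directly into a mechanical check. A minimal sketch (the tiny word list below is a stand-in for a real dictionary such as /usr/share/dict/words):

```python
import string

def weakness_reasons(password, dictionary):
    """Return the ways a password fails Ritchie's criteria."""
    reasons = []
    if len(password) < 6:
        reasons.append("shorter than six characters")
    if not any(c in string.digits for c in password):
        reasons.append("no digits")
    if not any(c in string.punctuation for c in password):
        reasons.append("no special characters")
    if password.lower() in dictionary:
        reasons.append("found in dictionary")
    return reasons

# Tiny stand-in word list for illustration.
words = {"password", "secret", "dragon"}
print(weakness_reasons("secret", words))    # fails on digits, specials, dictionary
print(weakness_reasons("k7#qTz9!", words))  # empty list: passes all four checks
```

Note the check only rejects the obviously guessable; it says nothing about how well the stored, encrypted form resists offline attack.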

Of course there also exist feasible non-cryptanalytic ways of finding out passwords. For example: write a program which types out login: on the typewriter and copies whatever is typed to a file of your own. Then invoke the command and go away until the victim arrives. …

The fact that users can mount their own disks and tapes as file systems can be another way of gaining super-user status. Once a disk pack is mounted, the system believes what is on it. Thus one can take a blank disk pack, put on it anything desired, and mount it. There are obvious and unfortunate consequences. For example: a mounted disk with garbage on it will crash the system; one of the files on the mounted disk can easily be a password-free version of su; other files can be unprotected entries for special files. The only easy fix for this problem is to forbid the use of mount to unprivileged users. A partial solution, not so restrictive, would be to have the mount command examine the special file for bad data, set-UID programs owned by others, and accessible special files, and balk at unprivileged invokers.

The inventor of UNIX on its security … or lack thereof Read More »

The math behind Flash Worms

From Stuart Staniford, David Moore, Vern Paxson, & Nicholas Weaver’s “The Top Speed of Flash Worms” [PDF] (29 October 2004):

Flash worms follow a precomputed spread tree using prior knowledge of all systems vulnerable to the worm’s exploit. In previous work we suggested that a flash worm could saturate one million vulnerable hosts on the Internet in under 30 seconds [18]. We grossly over-estimated.

In this paper, we revisit the problem in the context of single-packet UDP worms (inspired by Slammer and Witty). Simulating a flash version of Slammer, calibrated by current Internet latency measurements and observed worm packet delivery rates, we show that a worm could saturate 95% of one million vulnerable hosts on the Internet in 510 milliseconds. A similar worm using a TCP-based service could saturate 95% in 1.3 seconds. …

Since Code Red in July 2001 [11], worms have been of great interest in the security research community. This is because worms can spread so fast that existing signature-based anti-virus and intrusion-prevention defenses risk being irrelevant; signatures cannot be manually generated fast enough …

The premise of a flash worm is that a worm releaser has somehow acquired a list of vulnerable addresses, perhaps by stealthy scanning of the target address space or perhaps by obtaining a database of parties to the vulnerable protocol. The worm releaser, in advance, computes an efficient spread tree and encodes it in the worm. This allows the worm to be far more efficient than a scanning worm; it does not make large numbers of wild guesses for every successful infection. Instead, it successfully infects on most attempts. This makes it less vulnerable to containment defenses based on looking for missed connections [7, 16, 24], or too many connections [20, 25]. …
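
The precomputed spread tree is simple to sketch: the releaser partitions the address list among a first generation of children, and each child’s packet carries only its own slice of the list. The function below is an illustrative reconstruction, not code from the paper:

```python
def build_spread_tree(addresses, fanout):
    """Split a vulnerable-address list into (child, work-list) pairs.

    The releaser infects `fanout` first-generation nodes directly;
    each child then infects only the addresses in its own slice, so
    no node ever wastes a probe on a wild guess.
    """
    slices = [addresses[i::fanout] for i in range(fanout)]
    return [(s[0], s[1:]) for s in slices if s]

# Toy run: ten addresses, fan-out of three.
for child, work in build_spread_tree([f"10.0.0.{i}" for i in range(10)], 3):
    print(child, "->", work)
```

Every address appears exactly once, either as a first-generation child or on exactly one child’s work list, which is what makes the spread bandwidth-limited rather than guess-limited.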

A difficulty for the flash worm releaser is a lack of robustness if the list of vulnerable addresses is imperfect. Since it is assembled in advance, and networks constantly change, the list is likely to be more-or-less out of date by the time of use. This has two effects. Firstly, a certain proportion of actually vulnerable and reachable machines may not be on the list, thus preventing the worm from saturating as fully as otherwise possible. More seriously, some addresses on the list may not be vulnerable. If such nodes are near the base of the spread tree, they may prevent large numbers of vulnerable machines from being infected by the worm. Very deep spread trees are particularly prone to this. Thus in thinking about flash worms, we need to explore the issue of robustness as well as speed. …

The Slammer worm [10, 22] of January 2003 was the fastest scanning worm to date by far and is likely close to the lower bound on the size of a worm. Data on observed Slammer infections (and on those of the similar Witty worm) provide us with estimates for packet rate and minimum code size in future flash worms. Slammer infected Microsoft’s SQL server. A single UDP packet served as exploit and worm and required no acknowledgment. The size of the data was 376 bytes, giving a 404-byte IP packet. This consisted of the following sections:

• IP header
• UDP header
• Data to overflow buffer and gain control
• Code to find the addresses of needed functions
• Code to initialize a UDP socket
• Code to seed the pseudo-random number generator
• Code to generate a random address
• Code to copy the worm to the address via the socket …

In this paper, we assume that the target vulnerable population is N = 1,000,000 (one million hosts, somewhat larger than the 360,000 infected by Code Red [11]). Thus in much less than a second, the initial host can directly infect a first generation of roughly 5,000-50,000 intermediate nodes, leaving each of those with only 20-200 hosts to infect to saturate the population. There would be no need for a third layer in the tree.

This implies that the address list for the intermediate hosts can fit in the same packet as the worm; 200 addresses only consume 800 bytes. A flash version of Slammer need only be slightly different from the original: the address list of nodes to be infected would be carried immediately after the end of the code, and the final loop could traverse that list, sending packets to infect each entry (instead of generating pseudo-random addresses). …
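
The packet budget above is easy to check with a few lines of arithmetic (all figures taken from the paper):

```python
N = 1_000_000           # vulnerable population
first_gen = 5_000       # low end of the first generation's size
per_node = N // first_gen
print(per_node)         # 200: hosts each intermediate must infect

# Worst-case address list per intermediate: 200 IPv4 addresses, 4 bytes each.
list_bytes = per_node * 4
print(list_bytes)       # 800 bytes, small enough to append to the worm code

# Slammer's observed packet: 376 bytes of data plus IP and UDP headers.
print(20 + 8 + 376)     # 404-byte IP packet
```

With the high end of 50,000 first-generation nodes, each intermediate handles only 20 hosts, an 80-byte list.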

The graph indicates clearly that such flash worms can indeed be extraordinarily fast, infecting 95% of hosts in 510ms, and 99% in 1.2s. There is a long tail at the end due to the long tail in Internet latency data; some parts of the Internet are poorly connected and take a few seconds to reach. …

Can these results be extended to TCP services? If so, then our results are more grave; TCP offers worm writers a wealth of additional services to exploit. In this section we explore these issues. We conclude that top-speed propagation is viable for TCP worms, too, at the cost of an extra round-trip in latency to establish the connection and double the bandwidth if we want to quickly recover from loss. …

We believe a TCP worm could be written to be not much larger than Slammer. In addition to those 404 bytes, it needs a few more ioctl calls to set up a low-level socket to send crafted SYN packets, and to set up a separate thread to listen for SYN-ACKs and send out copies of the worm. We estimate 600 bytes total. Such a worm could send out SYNs at line rate, confident that the SYN-ACKs would come back slower due to latency spread. The initial node can maintain a big enough buffer for the SYN-ACKs, and the secondary nodes only send out a small number of SYNs. Both will likely be limited by the latency of the SYN-ACKs returning rather than the small amount of time required to deliver all the worms at their respective line rates.

To estimate the performance of such a small TCP flash worm, we repeated the Monte Carlo simulation we performed for the UDP worm with the latency increased by a factor of three for the handshake and the outbound delivery rates adjusted for 40-byte SYN packets. The results are shown in Figure 6. This simulation predicts 95% compromise after 1.3s, and 99% compromise after 3.3s. Thus TCP flash worms are a little slower than UDP ones because of the handshake latency, but can still be very fast. …

It appears that the optimum solution for the attacker – considering the plausible near-term worm defenses – is for a flash worm author to simply ignore the defenses and concentrate on making the worm as fast and reliable as possible, rather than slowing the worm to avoid detection. Any system behind a fully working defense can simply be considered as resistant, which the worm author counters by using the resiliency mechanisms outlined in the previous sections, combined with optimizing for minimum infection time.

Thus, for the defender, the current best hope is to keep the list of vulnerable addresses out of the hands of the attacker. …

The fastest worm seen in the wild so far was Slammer [10]. That was a random scanning worm, but saturated over 90% of vulnerable machines in under 10 minutes, and appears to have mainly been limited by bandwidth. The early exponential spread had an 8.5s time constant.

In this paper, we performed detailed analysis of how long a flash worm might take to spread on the contemporary Internet. These analyses use simulations based on actual data about Internet latencies and observed packet delivery rates by worms. Flash worms can complete their spread extremely quickly, with most infections occurring in much less than a second for single-packet UDP worms and only a few seconds for small TCP worms. Anyone designing worm defenses needs to bear these time factors in mind.

The math behind Flash Worms Read More »

An overview of Flash Worms

From Stuart Staniford, Gary Grim, & Roelof Jonkman’s “Flash Worms: Thirty Seconds to Infect the Internet” (Silicon Defense: 16 August 2001):

In a recent very ingenious analysis, Nick Weaver at UC Berkeley proposed the possibility of a Warhol Worm that could spread across the Internet and infect all vulnerable servers in less than 15 minutes (much faster than the hours or days seen in worm infections to date, such as Code Red).

In this note, we observe that there is a variant of the Warhol strategy that could plausibly be used and that could result in all vulnerable servers on the Internet being infected in less than thirty seconds (possibly significantly less). We refer to this as a Flash Worm, or flash infection. …

For the well-funded three-letter agency with an OC12 connection to the Internet, we believe a scan of the entire Internet address space can be conducted in a little less than two hours (we estimate about 750,000 SYN packets per second can be fit down the 622 Mbps of an OC12, allowing for ATM/AAL framing of the 40-byte TCP segments). The return traffic will be smaller in size than the outbound. Faster links could scan even faster. …

Given that an attacker has the determination and foresight to assemble a list of all or most Internet connected addresses with the relevant service open, a worm can spread most efficiently by simply attacking addresses on that list. There are about 12 million web servers on the Internet (according to Netcraft), so the size of that particular address list would be 48MB, uncompressed. …
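
Both estimates can be reproduced with back-of-the-envelope arithmetic. The two-cells-per-SYN assumption below is ours, not the paper’s (a 40-byte SYN plus LLC/SNAP encapsulation and the AAL5 trailer no longer fits one 48-byte ATM cell payload); it is chosen because it lands near the quoted 750,000 packets-per-second figure:

```python
link_bps = 622_000_000          # OC12 line rate
bits_per_syn = 2 * 53 * 8       # two 53-byte ATM cells per SYN (our assumption)
syns_per_sec = link_bps / bits_per_syn
print(round(syns_per_sec))      # roughly 733,000, near the quoted 750,000

scan_hours = 2**32 / syns_per_sec / 3600
print(round(scan_hours, 2))     # about 1.6 hours: "a little less than two"

# Address list for 12 million web servers, 4 bytes per IPv4 address:
print(12_000_000 * 4)           # 48,000,000 bytes, i.e. 48MB uncompressed
```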

In conclusion, we argue that a small worm that begins with a list including all likely vulnerable addresses, and that has initial knowledge of some vulnerable sites with high-bandwidth links, can infect almost all vulnerable servers on the Internet in less than thirty seconds.

An overview of Flash Worms Read More »

Prices for zombies in the Underground

From Byron Acohido and Jon Swartz’s “Going price for network of zombie PCs: $2,000-$3,000” (USA TODAY: 8 September 2004):

In the calculus of Internet crime, two of the most sought-after commodities are zombie PCs and valid e-mail addresses.

One indication of the going rate for zombie PCs comes from a June 11 posting on SpecialHam.com, an electronic forum for spammers. The asking price for use of a network of 20,000 zombie PCs: $2,000 to $3,000. …

To put a zombie network to work, an attacker needs a list of targets in the form of e-mail addresses. Lists can be purchased from specialists who “harvest” anything that looks like an e-mail address from Web sites, news groups, chat rooms and subscriber lists. Compiled on CDs, such lists cost as little as $5 per million e-mail addresses. But you get what you pay for: Many CD entries tend to be either obsolete or “spam traps” — addresses seeded across the Internet by spam-filtering companies to identify, and block, spammers.

Valid e-mail addresses command a steep price. In June, authorities arrested a 24-year-old America Online engineer, Jason Smathers, and charged him with stealing 92 million AOL customer screen names and selling them to a spammer for $100,000.

Prices for zombies in the Underground Read More »

Ballmer says Windows is more secure than Linux

From Steven J. Vaughan-Nichols’s “Longhorn ‘Wave’ Rolling In” (eWeek: 20 October 2004):

The questions led into a discussion of Linux, with Bittmann observing that there’s a market perception that Linux is more secure.

“It’s just not true,” Ballmer responded. “We’re more secure than the other guys. There are more vulnerabilities in Linux; it takes longer for Linux developers to fix security problems. It’s a good decision to go with Windows.”

Ballmer says Windows is more secure than Linux Read More »

Steve Ballmer couldn’t fix an infected Windows PC

From David Frith’s “Microsoft takes on net nasties” (Australian IT: 6 June 2006):

MICROSOFT executives love telling stories against each other. Here’s one that platforms vice-president Jim Allchin told at a recent Windows Vista reviewers conference about chief executive Steve Ballmer.

It seems Steve was at a friend’s wedding reception when the bride’s father complained that his PC had slowed to a crawl and would Steve mind taking a look.

Allchin says Ballmer, the world’s 13th wealthiest man with a fortune of about $18 billion, spent almost two days trying to rid the PC of worms, viruses, spyware, malware and severe fragmentation without success.

He lumped the thing back to Microsoft’s headquarters and turned it over to a team of top engineers, who spent several days on the machine, finding it infected with more than 100 pieces of malware, some of which were nearly impossible to eradicate.

Among the problems was a program that automatically disabled any antivirus software.

“This really opened our eyes to what goes on in the real world,” Allchin told the audience.

If the man at the top and a team of Microsoft’s best engineers faced defeat, what chance do ordinary punters have of keeping their Windows PCs virus-free?

Steve Ballmer couldn’t fix an infected Windows PC Read More »

Credit cards sold in the Underground

From David Kirkpatrick’s “The Net’s not-so-secret economy of crime” (Fortune: 15 May 2006):

Raze Software offers a product called CC2Bank 1.3, available in freeware form – if you like it, please pay for it. …

But CC2Bank’s purpose is the management of stolen credit cards. Release 1.3 enables you to type in any credit card number and learn the type of card, name of the issuing bank, the bank’s phone number and the country where the card was issued, among other info. …

Says Marc Gaffan, a marketer at RSA: “There’s an organized industry out there with defined roles and specialties. There are means of communications, rules of engagement, and even ethics. It’s a whole value chain of facilitating fraud, and only the last steps of the chain are actually dedicated to translating activity into money.”

This ecosystem of support for crime includes services and tools to make theft simpler, harder to detect, and more lucrative. …

… a site called TalkCash.net. It’s a members-only forum, for both verified and non-verified members. To verify a new member, the administrators of the site must do due diligence, for example by requiring the applicant to turn over a few credit card numbers to demonstrate that they work.

It’s an honorable exchange for dishonorable information. “I’m proud to be a vendor here,” writes one seller.

“Have a good carding day and good luck,” writes another seller …

These sleazeballs don’t just deal in card numbers, but also in so-called “CVV” numbers. That’s the Card Verification Value, an extra three- or four-digit number on the front or back of a card that’s supposed to prove the user has physical possession of the card.

On TalkCash.net you can buy CVVs for card numbers you already have, or you can buy card numbers with CVVs included. (That costs more, of course.)

“All CVV are guaranteed: fresh and valid,” writes one dealer, who charges $3 per CVV, or $20 for a card number with CVV and the user’s date of birth. “Meet me at ICQ: 264535650,” he writes, referring to the instant message service (owned by AOL) where he conducts business. …

Gaffan says these credit card numbers and data are almost never obtained by criminals as a result of legitimate online card use. More often the fraudsters get them through offline credit card number thefts in places like restaurants, when computer tapes are stolen or lost, or using “pharming” sites, which mimic a genuine bank site and dupe cardholders into entering precious private information. Another source of credit card data is the very common “phishing” scam, in which an e-mail that looks like it’s from a bank prompts someone to hand over personal data.

Also available on TalkCash is access to hijacked home broadband computers – many of them in the United States – which can be used to host various kinds of criminal exploits, including phishing e-mails and pharming sites.

Credit cards sold in the Underground Read More »

Search for “high score” told them who stole the PC

From Robert Alberti’s “more on Supposedly Destroyed Hard Drive Purchased In Chicago” (Interesting People mailing list: 3 June 2006):

It would be interesting to analyze that drive to see if anyone else was using it during the period between when it went to Best Buy, and when it turned up at the garage sale. We once discovered who stole, and then returned, a Macintosh from a department at the University of Minnesota with its drive erased. We did a hex search of the drive surface for the words “high score”. There was the name of the thief, one of the janitors, who confessed when presented with evidence.
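
The same trick works on any raw image: search the device’s bytes directly, ignoring the (erased) file system. A minimal sketch in Python, reading in chunks with a small overlap so a match spanning a chunk boundary is not missed (the file name in the usage note is illustrative):

```python
def find_bytes(path, needle, chunk_size=1 << 20):
    """Return the absolute byte offsets of every occurrence of `needle`.

    Works on a raw disk image; keeps len(needle) - 1 bytes of overlap
    between reads so boundary-straddling matches are still found.
    """
    overlap = len(needle) - 1
    offsets, base, buf = [], 0, b""   # base: absolute offset of buf[0]
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            buf += chunk
            start = 0
            while (i := buf.find(needle, start)) != -1:
                offsets.append(base + i)
                start = i + 1
            keep = buf[-overlap:] if overlap else b""
            base += len(buf) - len(keep)
            buf = keep
    return offsets

# Usage against a captured image (never against a live, mounted device):
#   find_bytes("disk.img", b"high score")
```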

Search for “high score” told them who stole the PC Read More »

It’s easy to track someone using a MetroCard

From Brendan I. Koerner’s “Your Cellphone is a Homing Device” (Legal Affairs: July/August 2003):

Law enforcement likewise views privacy laws as an impediment, especially now that it has grown accustomed to accessing location data virtually at will. Take the MetroCard, the only way for New York City commuters to pay their transit fares since the elimination of tokens. Unbeknownst to the vast majority of straphangers, the humble MetroCard is essentially a floppy disk, uniquely identified by a serial number on the flip side. Each time a subway rider swipes the card, the turnstile reads the bevy of information stored on the card’s magnetic stripe, such as serial number, value, and expiration date. That data is then relayed back to the Metropolitan Transportation Authority’s central computers, which also record the passenger’s station and entry time; the stated reason is that this allows for free transfers between buses and subways. (Bus fare machines communicate with MTA computers wirelessly.) Police have been taking full advantage of this location info to confirm or destroy alibis; in 2000, The Daily News estimated that detectives were requesting that roughly 1,000 MetroCard records be checked each year.

A mere request seems sufficient for the MTA to fork over the data. The authority learned its lesson back in 1997, when it initially balked at a New York Police Department request to view the E-ZPass toll records of a murder suspect; the cops wanted to see whether or not he’d crossed the Verrazano Narrows Bridge around the time of the crime. The MTA demanded that the NYPD obtain a subpoena, but then-Justice Colleen McMahon of the State Supreme Court disagreed. She ruled that “a reasonable person holds no expectation of confidentiality” when using E-ZPass on a public highway, and an administrative subpoena – a simple OK from a police higher-up – was enough to compel the MTA to hand over the goods.

It’s easy to track someone using a MetroCard Read More »

Tracking via cell phone is easy

From Brendan I. Koerner’s “Your Cellphone is a Homing Device” (Legal Affairs: July/August 2003):

What your salesman probably failed to tell you – and may not even realize – is that an E911-capable phone can give your wireless carrier continual updates on your location. The phone is embedded with a Global Positioning System chip, which can calculate your coordinates to within a few yards by receiving signals from satellites. GPS technology gave U.S. military commanders a vital edge during Gulf War II, and sailors and pilots depend on it as well. In the E911-capable phone, the GPS chip does not wait until it senses danger, springing to life when catastrophe strikes; it’s switched on whenever your handset is powered up and is always ready to transmit your location data back to a wireless carrier’s computers. Verizon or T-Mobile can figure out which manicurist you visit just as easily as they can pinpoint a stranded motorist on Highway 59.

So what’s preventing them from doing so, at the behest of either direct marketers or, perhaps more chillingly, the police? Not the law, which is essentially mum on the subject of location-data privacy. As often happens with emergent technology, the law has struggled to keep pace with the gizmo. No federal statute is keeping your wireless provider from informing Dunkin’ Donuts that your visits to Starbucks have been dropping off and you may be ripe for a special coupon offer. Nor are cops explicitly required to obtain a judicial warrant before compiling a record of where you sneaked off to last Thursday night. Despite such obvious potential for abuse, the Federal Communications Commission and the Federal Trade Commission, the American consumer’s ostensible protectors, show little enthusiasm for stepping into the breach. As things stand now, the only real barrier to the dissemination of your daily movements is the benevolence of the telecommunications industry. A show of hands from those who find this a comforting thought? Anyone? …

THE WIRELESS INDUSTRY HAS A NAME FOR SUCH CUSTOM-TAILORED HAWKING: “location-based services,” or LBS. The idea is that GPS chips can be used to locate friends, find the nearest pizzeria, or ensure that Junior is really at the library rather than a keg party. One estimate expects LBS to be a $15 billion market by 2007, a much-needed boost for the flagging telecom sector.

That may be fine for some consumers, but what about those who’d rather opt out of the tracking? The industry’s promise is that LBS customers will have to give explicit permission for their data to be shared with third parties. This is certainly in the spirit of the Wireless Communications and Public Safety Act of 1999, which anticipated that all cellphone carriers will feature E911 technology by 2006. The law stipulated that E911 data – that is, an individual’s second-by-second GPS coordinates – could only be used for nonemergency purposes if “express prior authorization” was provided by the consumer. …

Tracking via cell phone is easy Read More »