It’s hard to judge the young, but the market can

From Paul Graham’s “Hiring is Obsolete” (May 2005):

It’s hard to judge the young because (a) they change rapidly, (b) there is great variation between them, and (c) they’re individually inconsistent. That last one is a big problem. When you’re young, you occasionally say and do stupid things even when you’re smart. So if the algorithm is to filter out people who say stupid things, as many investors and employers unconsciously do, you’re going to get a lot of false positives. …

The market is a lot more discerning than any employer. And it is completely non-discriminatory. On the Internet, nobody knows you’re a dog. And more to the point, nobody knows you’re 22. All users care about is whether your site or software gives them what they want. They don’t care if the person behind it is a high school kid.

Why did it take so long for blogging to take off?

From Paul Graham’s “Hiring is Obsolete” (May 2005):

Have you ever noticed that when animals are let out of cages, they don’t always realize at first that the door’s open? Often they have to be poked with a stick to get them out. Something similar happened with blogs. People could have been publishing online in 1995, and yet blogging has only really taken off in the last couple years. In 1995 we thought only professional writers were entitled to publish their ideas, and that anyone else who did was a crank. Now publishing online is becoming so popular that everyone wants to do it, even print journalists. But blogging has not taken off recently because of any technical innovation; it just took eight years for everyone to realize the cage was open.

Quick ’n dirty explanation of onion routing

From Ann Harrison’s “Onion Routing Averts Prying Eyes” (Wired News: 5 August 2004):

Computer programmers are modifying a communications system, originally developed by the U.S. Naval Research Lab, to help Internet users surf the Web anonymously and shield their online activities from corporate or government eyes.

The system is based on a concept called onion routing. It works like this: Messages, or packets of information, are sent through a distributed network of randomly selected servers, or nodes, each of which knows only its predecessor and successor. Messages flowing through this network are unwrapped by a symmetric encryption key at each server that peels off one layer and reveals instructions for the next downstream node. …
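
As a rough sketch of that peeling process in Python: Fernet (from the cryptography package) stands in for the per-hop symmetric keys, and the node names and header format are inventions of this illustration. Real deployments negotiate keys per circuit and pad traffic into fixed-size cells, none of which is modeled here.

```python
# Minimal sketch of layered ("onion") encryption. Illustrative only: real
# onion routing negotiates per-hop keys via circuit handshakes and uses
# fixed-size cells; Fernet is just a convenient symmetric cipher here.
from cryptography.fernet import Fernet

path = ["node_a", "node_b", "node_c"]                  # randomly chosen route
keys = {node: Fernet.generate_key() for node in path}  # one key per node

def wrap(message: bytes, path: list[str]) -> bytes:
    """Sender encrypts once per hop, innermost layer for the last node."""
    onion, next_hop = message, "exit"
    for node in reversed(path):
        onion = Fernet(keys[node]).encrypt(next_hop.encode() + b"|" + onion)
        next_hop = node
    return onion

def unwrap(onion: bytes, node: str) -> tuple[str, bytes]:
    """A node peels exactly one layer: it learns only its successor."""
    header, _, inner = Fernet(keys[node]).decrypt(onion).partition(b"|")
    return header.decode(), inner

onion = wrap(b"GET / HTTP/1.0", path)
for node in path:
    next_hop, onion = unwrap(onion, node)
    print(f"{node} forwards to {next_hop}")
print("exit recovers:", onion)                         # b'GET / HTTP/1.0'
```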

The Navy is financing the development of a second-generation onion-routing system called Tor, which addresses many of the flaws in the original design and makes it easier to use. The Tor client behaves like a SOCKS proxy (a common protocol for developing secure communication services), allowing applications like Mozilla, SSH and FTP clients to talk directly to Tor and route data streams through a network of onion routers, without long delays.

AT&T’s security TV station

From Stephen Lawson & Robert McMillan’s “AT&T plans CNN-style security channel” (InfoWorld: 23 June 2005):

Security experts at AT&T are about to take a page from CNN’s playbook. Within the next year they will begin delivering a video streaming service that will carry Internet security news 24 hours a day, seven days a week, according to the executive in charge of AT&T Labs.

The service, which currently goes by the code name Internet Security News Network (ISN), is under development at AT&T Labs, but it will be offered as an additional service to the company’s customers within the next nine to 12 months, according to Hossein Eslambolchi, president of AT&T’s Global Networking Technology Services and AT&T Labs.

ISN will look very much like Time Warner’s Cable News Network, except that it will be broadcast exclusively over the Internet, Eslambolchi said. “It’s like CNN,” he said. “When a new attack is spotted, we’ll be able to offer constant updates, monitoring, and advice.”

Remote fingerprinting of devices connected to the Net

Anonymous Internet access is now a thing of the past. A doctoral student at the University of California has conclusively fingerprinted computer hardware remotely, allowing it to be tracked wherever it is on the Internet.

In a paper on his research, primary author and Ph.D. student Tadayoshi Kohno said: “There are now a number of powerful techniques for remote operating system fingerprinting, that is, remotely determining the operating systems of devices on the Internet. We push this idea further and introduce the notion of remote physical device fingerprinting … without the fingerprinted device’s known cooperation.”

The potential applications for Kohno’s technique are impressive. For example, “tracking, with some probability, a physical device as it connects to the Internet from different access points, counting the number of devices behind a NAT even when the devices use constant or random IP identifications, remotely probing a block of addresses to determine if the addresses correspond to virtual hosts (for example, as part of a virtual honeynet), and unanonymising anonymised network traces.” …

Another application for Kohno’s technique is to “obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device.”

The technique works by “exploiting small, microscopic deviations in device hardware: clock skews.” In practice, Kohno’s paper says, his techniques “exploit the fact that most modern TCP stacks implement the TCP timestamps option from RFC 1323 whereby, for performance purposes, each party in a TCP flow includes information about its perception of time in each outgoing packet. A fingerprinter can use the information contained within the TCP headers to estimate a device’s clock skew and thereby fingerprint a physical device.”
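
As a rough illustration of the estimation step, the sketch below fits an ordinary least-squares line to synthetic timestamp data. Kohno’s paper actually derives skew bounds with linear programming, so this is a simplified stand-in; the tick rate, skew value, and noise model are all assumptions.

```python
# Sketch of the skew-estimation idea from Kohno et al., on synthetic data.
# The paper bounds skew with linear programming; plain least squares is a
# simpler stand-in. The remote clock's tick rate (HZ) is assumed known.
import numpy as np

HZ = 100                 # remote TCP-timestamp clock frequency
TRUE_SKEW_PPM = 37.0     # the synthetic device's skew, to be recovered

rng = np.random.default_rng(0)
t_local = np.sort(rng.uniform(0, 3600, 500))               # arrival times (s)
ticks = t_local * HZ * (1 + TRUE_SKEW_PPM * 1e-6)          # remote timestamps
ticks = np.floor(ticks + rng.uniform(0, 1, t_local.size))  # tick quantization

# Offset of the remote clock (converted to seconds) against the measurer's
# clock: any linear trend in it is the skew; the intercept is just phase.
offset = ticks / HZ - t_local
slope, _ = np.polyfit(t_local, offset, 1)
print(f"estimated skew: {slope * 1e6:.1f} ppm")            # close to 37 ppm
```

The fingerprint is the slope itself: it stays put while IP addresses, access networks, and measurement vantage points change.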

Kohno goes on to say: “Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall.”

The paper adds: “For all our methods, we stress that the fingerprinter does not require any modification to or cooperation from the fingerprintee.” Kohno and his team tested their techniques on many operating systems, including Windows XP and 2000, Mac OS X Panther, Red Hat and Debian Linux, FreeBSD, OpenBSD and even Windows for Pocket PCs 2002.

Evil twin hot spots

From Dan Ilett’s “Evil twin could pose Wi-Fi threat” (CNET News.com: 21 January 2005):

Researchers at Cranfield University are warning that “evil twin” hot spots, networks set up by hackers to resemble legitimate Wi-Fi hot spots, present the latest security threat to Web users.

Attackers interfere with a connection to the legitimate network by sending a stronger signal from a base station close to the wireless client, turning the fake access point into a so-called evil twin.

Virtual-machine based rootkits

From Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, & Jacob R. Lorch’s “SubVirt: Implementing malware with virtual machines” [PDF]:

We evaluate a new type of malicious software that gains qualitatively more control over a system. This new type of malware, which we call a virtual-machine based rootkit (VMBR), installs a virtual-machine monitor underneath an existing operating system and hoists the original operating system into a virtual machine. Virtual-machine based rootkits are hard to detect and remove because their state cannot be accessed by software running in the target system. Further, VMBRs support general-purpose malicious services by allowing such services to run in a separate operating system that is protected from the target system. We evaluate this new threat by implementing two proof-of-concept VMBRs. We use our proof-of-concept VMBRs to subvert Windows XP and Linux target systems, and we implement four example malicious services using the VMBR platform. Last, we use what we learn from our proof-of-concept VMBRs to explore ways to defend against this new threat. We discuss possible ways to detect and prevent VMBRs, and we implement a defense strategy suitable for protecting systems against this threat. …

A major goal of malware writers is control, by which we mean the ability of an attacker to monitor, intercept, and modify the state and actions of other software on the system. Controlling the system allows malware to remain invisible by lying to or disabling intrusion detection software.

Control of a system is determined by which side occupies the lower layer in the system. Lower layers can control upper layers because lower layers implement the abstractions upon which upper layers depend. For example, an operating system has complete control over an application’s view of memory because the operating system mediates access to physical memory through the abstraction of per-process address spaces. Thus, the side that controls the lower layer in the system has a fundamental advantage in the arms race between attackers and defenders. If the defender’s security service occupies a lower layer than the malware, then that security service should be able to detect, contain, and remove the malware. Conversely, if the malware occupies a lower layer than the security service, then the malware should be able to evade the security service and manipulate its execution.

Because of the greater control afforded by lower layers in the system, both security services and rootkits have evolved by migrating to these layers. Early rootkits simply replaced user-level programs, such as ps, with trojan horse programs that lied about which processes were running. These user-level rootkits were detected easily by user-level intrusion detection systems such as TripWire [29], and so rootkits moved into the operating system kernel. Kernel-level rootkits such as FU [16] hide malicious processes by modifying kernel data structures [12]. In response, intrusion detectors also moved to the kernel to check the integrity of the kernel’s data structures [11, 38]. Recently, researchers have sought to hide the memory footprint of malware from kernel-level detectors by modifying page protections and intercepting page faults [43]. To combat such techniques, future detectors may reset page protections and examine the code of the page-fault handler. …

Our project, which is called SubVirt, shows how attackers can use virtual-machine technology to address the limitations of current malware and rootkits. We show how attackers can install a virtual-machine monitor (VMM) underneath an existing operating system and use that VMM to host arbitrary malicious software. The resulting malware, which we call a virtual-machine based rootkit (VMBR), exercises qualitatively more control than current malware, supports general-purpose functionality, yet can completely hide all its state and activity from intrusion detection systems running in the target operating system and applications. …

A virtual-machine monitor is a powerful platform for malware. A VMBR moves the targeted system into a virtual machine then runs malware in the VMM or in a second virtual machine. The targeted system sees little to no difference in its memory space, disk space, or execution (depending on how completely the machine is virtualized). The VMM also isolates the malware’s state and events completely from those of the target system, so software in the target system cannot see or modify the malicious software. At the same time, the VMM can see all state and events in the target system, such as keystrokes, network packets, disk state, and memory state. A VMBR can observe and modify these states and events—without its own actions being observed—because it completely controls the virtual hardware presented to the operating system and applications. Finally, a VMBR provides a convenient platform for developing malicious services. A malicious service can benefit from all the conveniences of running in a separate, general-purpose operating system while remaining invisible to all intrusion detection software running in the targeted system. In addition, a malicious service can use virtual-machine introspection to understand the events and states taking place in the targeted system. …

In the overall structure, a VMBR runs beneath the existing (target) operating system and its applications (Figure 2). To accomplish this, a VMBR must insert itself beneath the target operating system and run the target OS as a guest. To insert itself beneath an existing system, a VMBR must manipulate the system boot sequence to ensure that the VMBR loads before the target operating system and applications. After the VMBR loads, it boots the target OS using the VMM. As a result, the target OS runs normally, but the VMBR sits silently beneath it.

To install a VMBR on a computer, an attacker must first gain access to the system with sufficient privileges to modify the system boot sequence. There are numerous ways an attacker can attain this privilege level. For example, an attacker could exploit a remote vulnerability, fool a user into installing malicious software, bribe an OEM or vendor, or corrupt a bootable CD-ROM or DVD image present on a peer-to-peer network. On many systems, an attacker who attains root or Administrator privileges can manipulate the system boot sequence. On other systems, an attacker must execute code in kernel mode to manipulate the boot sequence. We assume the attacker can run arbitrary code on the target system with root or Administrator privileges and can install kernel modules if needed. …

VMBRs use a separate attack OS to deploy malware that is invisible from the perspective of the target OS but is still easy to implement. None of the states or events of the attack OS are visible from within the target OS, so any code running within an attack OS is effectively invisible. The ability to run invisible malicious services in an attack OS gives intruders the freedom to use user-mode code with less fear of detection.

We classify malicious services into three categories: those that need not interact with the target system at all, those that observe information about the target system, and those that intentionally perturb the execution of the target system. In the remainder of this section, we discuss how VMBRs support each class of service.

The first class of malicious service does not communicate with the target system. Examples of such services are spam relays, distributed denial-of-service zombies, and phishing web servers. A VMBR supports these services by allowing them to run in the attack OS. This provides the convenience of user-mode execution without exposing the malicious service to the target OS.

The second class of malicious service observes data or events from the target system. VMBRs enable stealthy logging of hardware-level data (e.g., keystrokes, network packets) by modifying the VMM’s device emulation software. This modification does not affect the virtual devices presented to the target OS. For example, a VMBR can log all network packets by modifying the VMM’s emulated network card. These modifications are invisible to the target OS because the interface to the network card does not change, but the VMBR can still record all network packets. …

The third class of malicious service deliberately modifies the execution of the target system. For example, a malicious service could modify network communication, delete e-mail messages, or change the execution of a target application. A VMBR can customize the VMM’s device emulation layer to modify hardware-level data. A VMBR can also modify data or execution within the target through virtual-machine introspection.

Using our proof-of-concept VMBRs, we developed four malicious services that represent a range of services a writer of malicious software may want to deploy. We implemented a phishing web server, a keystroke logger, a service that scans the target file system looking for sensitive files, and a defense countermeasure that defeats a current virtual-machine detector. …

To avoid being removed, a VMBR must protect its state by maintaining control of the system. As long as the VMBR controls the system, it can thwart any attempt by the target to modify the VMBR’s state. The VMBR’s state is protected because the target system has access only to the virtual disk, not the physical disk.

The only time the VMBR loses control of the system is in the period of time after the system powers up until the VMBR starts. Any code that runs in this period can access the VMBR’s state directly. The first code that runs in this period is the system BIOS. The system BIOS initializes devices and chooses which medium to boot from. In a typical scenario, the BIOS will boot the VMBR, after which the VMBR regains control of the system. However, if the BIOS boots a program on an alternative medium, that program can access the VMBR’s state.

Because VMBRs lose control when the system is powered off, they may try to minimize the number of times full system power-off occurs. The events that typically cause power cycles are reboots and shutdowns. VMBRs handle reboots by restarting the virtual hardware rather than resetting the underlying physical hardware. By restarting the virtual hardware, VMBRs provide the illusion of resetting the underlying physical hardware without relinquishing control. Any alternative bootable medium used after a target reboot will run under the control of the VMBR.

In addition to handling target reboots, VMBRs can also emulate system shutdowns such that the system appears to shut down, but the VMBR remains running on the system. We use ACPI sleep states [3] to emulate system shutdowns and to avoid system power-downs. ACPI sleep states are used to switch hardware into a low-power mode. This low-power mode includes spinning down hard disks, turning off fans, and placing the monitor into a power-saving mode. All of these actions make the computer appear to be powered off. Power is still applied to RAM, so the system can come out of ACPI sleep quickly with all memory state intact. When the user presses the power button to “power-up” the system, the computer comes out of the low-power sleep state and resumes the software that initiated the sleep. Our VMBR leverages this low-power mode to make the system appear to be shut down; when the user “powers up” the system by pressing the power button, the VMBR resumes. If the user attempts to boot from an alternative medium at this point, it will run under the control of the VMBR. We implemented shutdown emulation for our VMware-based VMBR. …

We first measure the disk space required to install the VMBR. Our Virtual PC-based VMBR image is 106 MB compressed and occupies 251 MB of disk space when uncompressed. Our VMware-based VMBR image is 95 MB compressed and occupies 228 MB of disk space uncompressed. The compressed VMBR images take about 4 minutes to download on a 3 Mb/s cable modem connection and occupy only a small fraction of the total disk space present on modern systems. …

The installation measurements include the time it takes to uncompress the attack image, allocate disk blocks, store the attack files, and modify the system boot sequence. Installation time for the VMware-based VMBR is 24 seconds. Installation for the Virtual PC-based VMBR takes longer (262 seconds) because the hardware used for this test is much slower and has less memory. In addition, when installing a VMBR underneath Windows XP, we swap the contents of the disk blocks used to store the VMBR with those in the beginning of the Windows XP disk partition, and these extra disk reads/writes further lengthen the installation time.

We next measure boot time, which we define as the amount of time it takes for an OS to boot and reach an initial login prompt. Booting a target Linux system without a VMBR takes 53 seconds. After installing the VMware-based VMBR, booting the target system takes 74 seconds after a virtual reboot and 96 seconds after a virtual shutdown. It takes longer after a virtual shutdown than after a virtual reboot because the VMM must re-initialize the physical hardware after coming out of ACPI sleep. In the uncommon case that power is removed from the physical system, the host OS and VMM must boot before loading the target Linux OS. The VMware-based VMBR takes 52 seconds to boot the host OS and load the VMM and another 93 seconds to boot the target Linux OS. We speculate that it takes longer to boot the target OS after full system power-down than after a virtual reboot because some performance optimizations within the VMware VMM take time to warm up.

Booting a target Windows XP system without a VMBR takes 23 seconds. After installing the Virtual PC-based VMBR, booting the target system takes 54 seconds after a virtual reboot. If power is removed from the physical system, the Virtual PC-based VMBR takes 45 seconds to boot the host OS and load the VMM and another 56 seconds to boot the target Windows XP OS. …

Despite using specialized guest drivers, our current proof-of-concept VMBRs use virtualized video cards which may not export the same functionality as the underlying physical video card. Thus, some high-end video applications, like 3D games or video editing applications, may experience degraded performance.

The physical memory allocated to the VMM and attack OS is a small percentage of the total memory on the system (roughly 3%) and thus has little performance impact on a target OS running above the VMBR. …

In this section, we explore techniques that can be used to detect the presence of a VMBR. VMBRs are fundamentally more difficult to detect than traditional malware because they virtualize the state seen by the target system and because an ideal VMBR modifies no state inside the target system. Nonetheless, a VMBR does leave signs of its presence that a determined intrusion detection system can observe. We classify the techniques that can be used to detect a VMBR by whether the detection system runs below the VMBR or above it (i.e., within the target system). …

There are various ways to gain control below the VMBR. One way to gain control below the VMBR is to use secure hardware. Intel’s LaGrande [25], AMD’s platform for trustworthy computing [2], and Copilot [36] all propose hardware that can be used to develop and deploy low-layer security software that would run beneath a VMBR.

Another way to gain control below the VMBR is to boot from a safe medium such as a CD-ROM, USB drive or network boot server. This boot code can run on the system before the VMBR loads and can view the VMBR’s quiescent disk state. …

A third way to gain control below the VMBR is to use a secure VMM [17]. Like alternative bootable media, secure VMMs gain control of the system before the operating system boots. Running a secure VMM does not by itself stop a VMBR, as a VMBR can still insert itself between the VMM and the operating system. However, a secure VMM does retain control over the system as it runs and could easily add a check to stop a VMBR from modifying the boot sequence above the secure VMM. …

There are several reasons why traditional techniques (e.g., having the target system scan memory or disk) may not be able to detect a VMBR. First, the target system is confined to its virtual machine. Since the VMBR stores its state outside that virtual machine, the target system will see no suspicious changes in its own state. Second, even if the target system did see something amiss, the VMBR could tamper with the execution of the detector and force it to report incorrect results. …

A fundamental perturbation caused by a VMBR is its use of machine resources such as CPU time, memory and disk space, and possibly network bandwidth. A VMBR adds CPU overhead to trap and emulate privileged instructions, as well as to run any malicious services. …

In addition to CPU overhead, a VMBR uses memory and disk space. A detector can detect this overhead by running a program that requires the entire machine’s memory or disk space. A VMBR can hide its space overhead in several ways. It can hide its memory overhead by mapping target pages with the same contents to the same physical page [47] or by paging some data to disk. While this extra paging activity may increase the running time of the program, the memory overhead of VMware was too small to measure in our tests. A VMBR can hide its disk overhead by compressing a portion of the disk, sharing disk blocks with the same contents, or pretending that the disk space it uses has bad sectors. …

Another type of perturbation is caused by the variety of I/O devices on the computer. Recall that a VMM typically virtualizes all I/O devices. However, virtualizing the I/O device without changing the target’s view of the device requires one to understand the interface and semantics of each device. This is a formidable task, given the wide variety of I/O devices present on today’s computing platforms. Instead, current VMMs emulate a small number of virtual devices (often with customized interfaces to improve performance [1, 34]). The target OS must then use the drivers for the emulated virtual devices. …

A final source of perturbations is the imperfect virtualization of today’s x86 processors. Sensitive, non-privileged instructions like sidt leak information about the VMM yet do not trap to the VMM [31, 37]. …

We expect future enhancements to the x86 platform to reduce these perturbations. Upcoming virtualization support from Intel [45] and AMD [7] will enable more efficient virtualization. These enhancements eliminate sensitive, non-privileged instructions so they cannot be used from the CPU’s user-mode to detect the presence of a VMM. These enhancements may also accelerate transitions to and from the VMM, and this may reduce the need to run specialized guest drivers. …

However, VMBRs have a number of disadvantages compared to traditional forms of malware: they tend to have more state, be more difficult to install, require a reboot before they can run, and have more of an impact on the overall system. Although VMBRs do offer greater control over the compromised system, the cost of this higher level of control may not be justified for all malicious applications.

A brief history of American bodysnatching

From Emily Bazelon’s “Grave Offense” (Legal Affairs: July/August 2002):

In December 1882, hundreds of black Philadelphians gathered at the city morgue. They feared that family members whom they had recently buried were, as a reporter put it, “amongst the staring corpses” that lay inside. Six bodies that had been taken from their graves at Lebanon Cemetery, the burial ground for Philadelphia’s African-Americans, had been brought to the morgue after being discovered on the back of a wagon bound for Jefferson Medical College. The cemetery’s black superintendent had admitted that for many years he let three grave robbers, his brother and two white men, steal as many corpses as they could sell to the college for dissection in anatomy classes.

At the morgue, a man asked others to bare their heads and swear on the bodies before them that they would kill the grave robbers. Another man found the body of his 29-year-old brother and screamed. A weeping elderly woman identified one of the corpses as her dead husband. According to the Philadelphia Press, which broke the story, to pay for her husband’s burial she had begged $22 at the wharves where he had once worked.

Medical science lay behind the body snatchings at Lebanon Cemetery and similar crimes throughout the Northeast and Midwest during the 19th century. By the 1820s, anatomy instruction had become central to medical education, but laws of the time, if they allowed for dissection, let medical schools use corpses only of condemned murderers. In their scramble to find other cadavers for students, doctors who taught anatomy competed for the booty of grave robbers—or sent medical students to rob the graves themselves. …

In the early 19th century, doctors were eager to distinguish themselves from midwives and homeopaths, and embraced anatomy as a critical source of their exclusive knowledge. In the words of a speaker at a New York medical society meeting in 1834, a physician who had not dissected a human body was “a disgrace to himself, a pest in society, and would maintain but a level with steam and red pepper quacks.” …

According to Michael Sappol’s recent book, A Traffic of Dead Bodies, Harvard Medical School moved its campus from Cambridge to Boston (where it remains) expecting to get bodies from an almshouse there. …

“Men seem prompted by their very nature to an earnest desire that their deceased friends be decently interred,” explained the grand jury charged with investigating a 1788 dissection-sparked riot in which 5,000 people stormed New York Hospital.

To protect the graves of their loved ones, 19th-century families who could afford it bought sturdy coffins and plots in a churchyard or cemetery guarded by night watchmen. Bodies buried in black cemeteries and paupers’ burial grounds, which often lacked those safeguards, were more vulnerable. In 1827, a black newspaper called Freedom’s Journal instructed readers that they could cheaply guard against body snatching by packing straw into the graves. In 1820s Philadelphia, several medical schools secretly bribed the superintendent of the public graveyard for 12 to 20 cadavers a week during “dissecting season.” He made sure to keep strict watch “to prevent adventurers from robbing him—not to prevent them from emptying the pits,” Philadelphia doctor John D. Godman wrote in 1829.

When a body snatching was detected, it made for fury and headlines. The 1788 New York riot, in which three people were killed, began when an anatomy instructor shooed some children away from his window with the dismembered arm of a corpse, which (legend has it) belonged to the recently buried mother of one of the boys; her body had been stolen from its coffin. In 1824, the body of a farmer’s daughter was found beneath the floor of the cellar of Yale’s medical school. An assistant suspected of the crime was almost tarred and feathered. In 1852, after a woman’s body was found in a cesspool near Cleveland’s medical school, a mob led by her father set fire to the building, wrecking a laboratory and a museum inside. …

In the morning, news spread that the robbers had been taken into custody. An “immense crowd of people surrounded the magistrate’s office and threatened to kill the resurrectionists,” the Press reported. …

The doctors got what they asked for. A new Pennsylvania law, passed in 1883, required officials at every almshouse, prison, morgue, hospital, and public institution in the state to give medical schools corpses that would otherwise be buried at public expense.

The history of the Poison Pill

From Len Costa’s “The Perfect Pill” (Legal Affairs: March/April 2005):

THE MODERN HISTORY OF MERGERS AND ACQUISITIONS divides neatly into two eras marked by a landmark ruling of the Delaware Supreme Court in 1985. Before then, financiers like T. Boone Pickens and Carl Icahn regularly struck terror in the hearts of corporate boards. If these dealmakers wanted to take over a company in a hostile maneuver, break it into pieces, and then spin those pieces off for a profit, it was difficult to stop them. But after a decision by the Delaware court, directors regained control of their companies’ destinies.

The directors’ trump card is a controversial innovation technically called a preferred share purchase rights plan but nicknamed the “poison pill.” Its legality was affirmed unequivocally for the first time in the Delaware ruling of Moran v. Household International. By the unanimous vote of a three-judge panel, the court held that a company could threaten to flood the market with newly issued shares if a hostile suitor started buying up lots of its stock, thus diluting the suitor’s existing holdings and rendering the acquisition prohibitively expensive. …

Still, both sides agree that the poison pill is an ingenious creation. “As a matter of lawyering, it’s absolutely brilliant,” said Stanford University law professor Ronald Gilson, a longstanding critic who nonetheless considers the poison pill to be the most significant piece of corporate legal artistry in the 20th century. …

If a hostile bidder acquires more than a preset share of the target company’s stock, typically 10 to 15 percent, all shareholders (except, crucially, the hostile bidder) can exercise a right to purchase additional stock at a 50 percent discount, thus massively diluting the suitor’s equity stake in the takeover target.
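
To see the dilution arithmetic, here is a worked example with assumed numbers; the share count, trigger, and exercise terms below are illustrative, not figures from the article.

```python
# Illustrative arithmetic for a "flip-in" poison pill: a bidder crosses the
# 15% trigger, and every other shareholder may buy one new share per share
# held at a 50% discount. All numbers here are assumptions.
shares_out = 100_000_000
bidder_shares = shares_out * 0.15          # bidder just crossed the trigger

# Every non-bidder holder exercises: one new (discounted) share per share.
new_shares = shares_out - bidder_shares
total_after = shares_out + new_shares

print(f"bidder before: {bidder_shares / shares_out:.1%}")   # 15.0%
print(f"bidder after:  {bidder_shares / total_after:.1%}")  # ~8.1%
# Beyond the dilution, the new shares are sold at half price, shifting value
# away from the bidder and making the takeover prohibitively expensive.
```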

The Witty Worm was special

From CAIDA’s “The Spread of the Witty Worm”:

On Friday March 19, 2004 at approximately 8:45pm PST, an Internet worm began to spread, targeting a buffer overflow vulnerability in several Internet Security Systems (ISS) products, including ISS RealSecure Network, RealSecure Server Sensor, RealSecure Desktop, and BlackICE. The worm takes advantage of a security flaw in these firewall applications that was discovered earlier this month by eEye Digital Security. Once the Witty worm infects a computer, it deletes a randomly chosen section of the hard drive, over time rendering the machine unusable. The worm’s payload contained the phrase “(^.^) insert witty message here (^.^)” so it came to be known as the Witty worm.

While the Witty worm is only the latest in a string of self-propagating remote exploits, it distinguishes itself through several interesting features:

  • Witty was the first widely propagated Internet worm to carry a destructive payload.
  • Witty was started in an organized manner with an order of magnitude more ground-zero hosts than any previous worm.
  • Witty represents the shortest known interval between vulnerability disclosure and worm release — it began to spread the day after the ISS vulnerability was publicized.
  • Witty spread through a host population in which every compromised host was doing something proactive to secure their computers and networks.
  • Witty spread through a population almost an order of magnitude smaller than that of previous worms, demonstrating the viability of worms as an automated mechanism to rapidly compromise machines on the Internet, even in niches without a software monopoly. …

Once Witty infects a host, the host sends 20,000 packets by generating packets with a random destination IP address, a random size between 796 and 1307 bytes, and a destination port. The worm payload of 637 bytes is padded with data from system memory to fill this random size and a packet is sent out from source port 4000. After sending 20,000 packets, Witty seeks to a random point on the hard disk and writes 65 KB of data from the beginning of iss-pam1.dll to the disk. After closing the disk, the worm repeats this process until the machine is rebooted or until the worm permanently crashes the machine.

Witty Worm Spread

With previous Internet worms, including Code-Red, Nimda, and SQL Slammer, a few hosts were seeded with the worm and proceeded to spread it to the rest of the vulnerable population. The spread is slow early on and then accelerates dramatically as the number of infected machines spewing worm packets to the rest of the Internet rises. Eventually, as the victim population becomes saturated, the spread of the worm slows because there are few vulnerable machines left to compromise. Plotted on a graph, this worm growth appears as an S-shaped exponential growth curve called a sigmoid.
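
That S-curve has a simple closed form: it is the logistic solution of the standard random-scanning epidemic model. The sketch below uses illustrative parameters; the population and ground-zero count echo the article, but the contact rate is assumed rather than fitted to CAIDA’s telescope data.

```python
# The sigmoid is the logistic solution of the epidemic model
#   dI/dt = beta * I * (N - I) / N.
# Parameters are illustrative, not fitted to Witty's telescope data.
import numpy as np

N = 12_000      # approximate vulnerable population
I0 = 110        # ground-zero hosts seen in the first ten seconds
beta = 0.004    # per-second contact rate, chosen so saturation takes ~40 min

t = np.linspace(0, 3600, 13)                      # one hour, 5-minute steps
I = N / (1 + (N / I0 - 1) * np.exp(-beta * t))    # closed-form solution
for ti, Ii in zip(t, I):
    print(f"t={ti / 60:4.0f} min  infected ~ {Ii:6.0f}")
```

Witty’s anomaly, discussed next, is that it entered this curve already 110 hosts up the slope.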

At 8:45:18pm[4] PST on March 19, 2004, the network telescope received its first Witty worm packet. In contrast to previous worms, we observed 110 hosts infected in the first ten seconds, and 160 at the end of 30 seconds. The chances of a single instance of the worm infecting 110 machines so quickly are vanishingly small — worse than 10^-607. This rapid onset indicates that the worm used either a hitlist or previously compromised vulnerable hosts to start the worm. …

After the sharp rise in initial coordinated activity, the Witty worm followed a normal exponential growth curve for a pathogen spreading in a fixed population. Witty reached its peak after approximately 45 minutes, at which point the majority of vulnerable hosts had been infected. After that time, the churn caused by dynamic addressing causes the IP address count to inflate without any additional Witty infections. At the peak of the infection, Witty hosts flooded the Internet with more than 90 Gbits/second of traffic (more than 11 million packets per second). …

The vulnerable host population pool for the Witty worm was quite different from that of previous virulent worms. Previous worms have lagged several weeks behind publication of details about the remote-exploit bug, and large portions of the victim populations appeared to not know what software was running on their machines, let alone take steps to make sure that software was up to date with security patches. In contrast, the Witty worm infected a population of hosts that were proactive about security — they were running firewall software. The Witty worm also started to spread the day after information about the exploit and the software upgrades to fix the bug were available. …

By infecting firewall devices, Witty proved particularly adept at thwarting security measures and successfully infecting hosts on internal networks. …

The Witty worm incorporates a number of dangerous characteristics. It is the first widely spreading Internet worm to actively damage infected machines. It was started from a large set of machines simultaneously, indicating the use of a hit list or a large number of compromised machines. Witty demonstrated that any minimally deployed piece of software with a remotely exploitable bug can be a vector for wide-scale compromise of host machines without any action on the part of a victim. The practical implications of this are staggering; with minimal skill, a malevolent individual could break into thousands of machines and use them for almost any purpose with little evidence of the perpetrator left on most of the compromised hosts.

What can we use instead of gasoline in cars?

From Popular Mechanics’ “How far can you drive on a bushel of corn?”:

It is East Kansas Agri-Energy’s ethanol facility, one of 100 or so such heartland garrisons in America’s slowly gathering battle to reduce its dependence on fossil fuels. The plant processes about 13 million bushels of corn to produce approximately 36 million gal. of ethanol a year. “That’s enough high-quality motor fuel to replace 55,000 barrels of imported petroleum,” the plant’s manager, Derek Peine, says. …

It takes five barrels of crude oil to produce enough gasoline (nearly 97 gal.) to power a Honda Civic from New York to California. …

Ethanol/E85

E85 is a blend of 85 percent ethanol and 15 percent gasoline. … A gallon of E85 has an energy content of about 80,000 BTU, compared to gasoline’s 124,800 BTU. So about 1.56 gal. of E85 takes you as far as 1 gal. of gas.
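
The 1.56 figure is just the ratio of the two energy contents:

```python
# Quick check of the article's fuel-equivalence figure.
btu_gasoline = 124_800   # BTU per gallon of gasoline
btu_e85 = 80_000         # BTU per gallon of E85

print(f"{btu_gasoline / btu_e85:.2f} gal E85 per gal gasoline")   # 1.56
```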

Case For: Ethanol is an excellent, clean-burning fuel, potentially providing more horsepower than gasoline. In fact, ethanol has a higher octane rating (over 100) and burns cooler than gasoline. However, pure alcohol isn’t volatile enough to get an engine started on cold days, hence E85. …

Cynics claim that it takes more energy to grow corn and distill it into alcohol than you can get out of the alcohol. However, according to the DOE, the growing, fermenting and distillation chain actually results in a surplus of energy that ranges from 34 to 66 percent. Moreover, the carbon dioxide (CO2) that an engine produces started out as atmospheric CO2 that the cornstalk captured during growth, making ethanol greenhouse gas neutral. Recent DOE studies note that using ethanol in blends lowers carbon monoxide (CO) and CO2 emissions substantially. In 2005, burning such blends had the same effect on greenhouse gas emissions as removing 1 million cars from American roads. …

One acre of corn can produce 300 gal. of ethanol per growing season. So, in order to replace that 200 billion gal. of petroleum products, American farmers would need to dedicate 675 million acres, or 71 percent of the nation’s 938 million acres of farmland, to growing feedstock. Clearly, ethanol alone won’t kick our fossil fuel dependence–unless we want to replace our oil imports with food imports. …

Biodiesel

Fuels for diesel engines made from sources other than petroleum are known as biodiesel. Among the common sources are vegetable oils, rendered chicken fat and used fry oil. …

Case For: Modern diesel engines can run on 100 percent biodiesel with little degradation in performance compared to petrodiesel because the BTU content of both fuels is similar–120,000 to 130,000 BTU per gallon. In addition, biodiesel burns cleaner than petrodiesel, with reduced emissions. Unlike petrodiesel, biodiesel molecules are oxygen-bearing, and partially support their own combustion.

According to the DOE, pure biodiesel reduces CO emissions by more than 75 percent over petroleum diesel. A blend of 20 percent biodiesel and 80 percent petrodiesel, sold as B20, reduces CO2 emissions by around 15 percent.

Case Against: Pure biodiesel, B100, costs about $3.50–roughly a dollar more per gallon than petrodiesel. And, in low temperatures, higher-concentration blends–B30, B100–turn into waxy solids and do not flow. Special additives or fuel warmers are needed to prevent fuel waxing. …

Electricity

Case For: Vehicles that operate only on electricity require no warmup, run almost silently and have excellent performance up to the limit of their range. Also, electric cars are cheap to “refuel.” At the average price of 10 cents per kwh, it costs around 2 cents per mile. …
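
The quoted running cost implies a consumption figure worth making explicit:

```python
# The implied efficiency behind "2 cents per mile at 10 cents per kWh".
price_per_kwh = 0.10   # dollars
cost_per_mile = 0.02   # dollars

kwh_per_mile = cost_per_mile / price_per_kwh
print(f"{kwh_per_mile:.1f} kWh/mile = {1 / kwh_per_mile:.0f} miles per kWh")
# 0.2 kWh/mile, i.e. 5 miles per kWh
```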

A strong appeal of the electric car–and of a hybrid when it’s running on electricity–is that it produces no tailpipe emissions. Even when emissions created by power plants are factored in, electric vehicles emit less than 10 percent of the pollution of an internal-combustion car.

Case Against: Pure electric cars still have limited range, typically no more than 100 to 120 miles. In addition, electrics suffer from slow charging, which, in effect, reduces their usability….

And then there’s the environmental cost. Only 2.3 percent of the nation’s electricity comes from renewable resources; about half is generated in coal-burning plants.

Hydrogen

Hydrogen is the most abundant element on Earth, forming part of many chemical compounds. Pure hydrogen can be made by electrolysis–passing electricity through water. This liberates the oxygen, which can be used for many industrial purposes. Most hydrogen currently is made from petroleum.

Case For: Though hydrogen can fuel a modified internal-combustion engine, most see hydrogen as a way to power fuel cells to move cars electrically. The only byproduct of a hydrogen fuel cell is water.

Case Against: … And, despite the chemical simplicity of electrolysis, producing hydrogen is expensive and energy consuming. It takes about 17 kwh of electricity, which costs about $1.70, to make just 100 cu. ft. of hydrogen. That amount would power a fuel cell vehicle for about 20 miles.
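
Combining those figures gives a rough per-mile fuel cost, assuming the $1.70 reflects the same 10-cents-per-kWh electricity used in the electricity section (17 kWh at $0.10 is indeed $1.70):

```python
# Rough per-mile cost from the article's electrolysis figures.
cost_per_100cuft = 1.70    # dollars of electricity per 100 cu. ft. of H2
miles_per_100cuft = 20     # fuel-cell vehicle range per 100 cu. ft.

print(f"${cost_per_100cuft / miles_per_100cuft:.3f} per mile")   # ~$0.085
# Roughly four times the ~2 cents/mile of charging a battery directly,
# reflecting the losses of the electrolysis-to-fuel-cell round trip.
```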

OmniPerception = facial recognition + smart card

From Technology Review’s “Face Forward”:

To get around these problems, OmniPerception, a spinoff from the University of Surrey in England, has combined its facial-recognition technology with a smart-card system. This could make face recognition more robust and better suited to applications such as passport authentication and building access control, which, if they use biometrics at all, rely mainly on fingerprint verification, says David McIntosh, the company’s CEO. With OmniPerception’s technology, an image of a person’s face is verified against a “facial PIN” carried on the card, eliminating the need to search a central database and making the system less intimidating to privacy-conscious users. …

OmniPerception’s technology creates a PIN about 2,500 digits long from its analysis of the most distinctive features of a person’s face. The number is embedded in a smart card-such as those, say, that grant access to a building-and used to verify that the card belongs to the person presenting it. A user would place his or her card in or near a reader and face a camera, which would take a photo and feed it to the card. The card would then compare the PIN it carried to information it derived from the new photo and either accept or reject the person as the rightful owner of the card. The technology could also be used to ensure passport or driver’s license authenticity and to secure ATM or Internet banking transactions, says McIntosh.
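
As a sketch of that match-on-card flow: OmniPerception’s real feature extraction and PIN encoding are proprietary, so the quantization scheme, tolerance, and synthetic feature vectors below are all stand-ins. Only the flow is the point — enroll once, then verify on the card with no central database lookup.

```python
# Sketch of match-on-card verification. Quantization, tolerance, and the
# synthetic features are stand-ins for OmniPerception's proprietary encoding.
import numpy as np

def to_pin(features: np.ndarray) -> np.ndarray:
    """Quantize 2,500 facial measurements into a 2,500-digit 'facial PIN'."""
    return np.clip((features * 10).astype(int), 0, 9)

def verify_on_card(stored_pin: np.ndarray, live_features: np.ndarray,
                   tolerance: float = 0.10) -> bool:
    """Runs on the smart card: compare digits derived from the live photo
    against the PIN embedded at enrollment; accept if few digits differ."""
    mismatch = np.mean(stored_pin != to_pin(live_features))
    return bool(mismatch <= tolerance)

# Enrollment: features extracted from the issuing photo go onto the card.
rng = np.random.default_rng(1)
enrolled = rng.random(2500)
card_pin = to_pin(enrolled)

# Verification: a fresh photo yields slightly different measurements.
live = np.clip(enrolled + rng.normal(0, 0.005, 2500), 0, 1)
print(verify_on_card(card_pin, live))                # True: rightful owner
print(verify_on_card(card_pin, rng.random(2500)))    # False: impostor
```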

Monopolies & Internet innovation

From Andrew Odlyzko’s “Pricing and Architecture of the Internet: Historical Perspectives from Telecommunications and Transportation”:

The power to price discriminate, especially for a monopolist, is like the power of taxation, something that can be used to destroy. There are many governments that are interested in controlling Internet traffic for political or other reasons, and are interfering (with various degrees of success) with the end-to-end principle. However, in most democratic societies, the pressure to change the architecture of the Internet is coming primarily from economic concerns, trying to extract more revenues from users. This does not necessarily threaten political liberty, but it does impede innovation. If some new protocol or service is invented, gains from its use could be appropriated by the carriers if they could impose special charges for it.

The power of price discrimination was well understood in ancient times, even if the economic concept was not defined. As the many historical vignettes presented before show, differential pricing was frequently allowed, but only to a controlled degree. The main concern in the early days was about general fairness and about service providers leveraging their control of a key facility into control over other businesses. Personal discrimination was particularly hated, and preference was given to general rules applying to broad classes (such as student or senior citizen discounts today). Very often bounds on charges were imposed to limit price discrimination. …

Openness, non-discrimination, and the end-to-end principle have contributed greatly to the success of the Internet, by allowing innovation to flourish. Service providers have traditionally been very poor in introducing services that mattered and even in forecasting where their profits would come from. Sometimes this was because of ignorance, as in the failure of WAP and success of SMS, both of which came as great surprises to the wireless industry, even though this should have been the easiest thing to predict [55]. Sometimes it was because the industry tried to control usage excessively. For example, services such as Minitel have turned out to be disappointments for their proponents largely because of the built-in limitations. We can also recall the attempts by the local telephone monopolies in the mid-to late-1990s to impose special fees on Internet access calls. Various studies were trotted out about the harm that long Internet calls were causing to the network. In retrospect, though, Internet access was a key source of the increased revenues and profits at the local telcos in the late 1990s. Since the main value of the phone was its accessibility at any time, long Internet calls led to installation of second lines that were highly profitable for service providers. (The average length of time that a phone line was in use remained remarkably constant during that period [49].)

Much of the progress in telecommunications over the last couple of decades was due to innovations by users. The “killer apps” on the Internet, email, Web, browser, search engines, and Napster, were all invented by end users, not by carriers. (Even email was specifically not designed into the ARPANET, the progenitor of the Internet, and its dominance came as a surprise [55].)

Big companies & their blind spots

From Paul Graham’s “Are Software Patents Evil?”:

Fortunately for startups, big companies are extremely good at denial. If you take the trouble to attack them from an oblique angle, they’ll meet you half-way and maneuver to keep you in their blind spot. To sue a startup would mean admitting it was dangerous, and that often means seeing something the big company doesn’t want to see. IBM used to sue its mainframe competitors regularly, but they didn’t bother much about the microcomputer industry because they didn’t want to see the threat it posed. Companies building web based apps are similarly protected from Microsoft, which even now doesn’t want to imagine a world in which Windows is irrelevant. …

The history of tabs (card, folder, & UI)

From Technology Review’s “Keeping Tabs”:

Starting in the late 14th century, scribes began to leave pieces of leather at the edges of manuscripts for ready reference. But with the introduction of page numbering in the Renaissance, they went out of fashion.

The modern tab was an improvement on a momentous 19th-century innovation, the index card. Libraries had previously listed their books in bound ledgers. During the French Revolution, authorities divided the nationalized collections of monasteries and aristocrats among public institutions, using the backs of playing cards to record data about each volume. …

It took decades to add tabs to cards. In 1876, Melvil Dewey, inventor of decimal classification, helped organize a company called the Library Bureau, which sold both cards and wooden cases. An academic entrepreneur, Dewey was a perfectionist supplier. His cards were made to last, made from linen recycled from the shirt factories of Troy, NY. His card cabinets were so sturdy that I have found at least one set still in use, in excellent order. Dewey also standardized the dimension of the catalogue card, at three inches by five inches, or rather 75 millimeters by 125 millimeters. (He was a tireless advocate of the metric system.) …

The tab was the idea of a young man named James Newton Gunn (1867–1927), who started using file cards to achieve savings in cost accounting while working for a manufacturer of portable forges. After further experience as a railroad cashier, Gunn developed a new way to access the contents of a set of index cards, separating them with other cards distinguished by projections marked with letters of the alphabet, dates, or other information.

Gunn’s background in bookkeeping filled what Ronald S. Burt, the University of Chicago sociologist, has called a structural hole, a need best met by insights from unconnected disciplines. In 1896 he applied for a U.S. patent, which was granted as number 583,227 on May 25, 1897. By then, Gunn was working for the Library Bureau, to which he had sold the patent. …

The Library Bureau also produced some of the first modern filing cabinets, proudly exhibiting them at the World’s Columbian Exposition in Chicago in 1893. Files had once been stored horizontally on shelves. Now they could be organized with file folders for better visibility and quicker access. …

But the tab is [Gunn’s] lasting legacy. And it is ubiquitous: in the dialogue boxes of Microsoft Windows and Mac OS X, at the bottom of Microsoft Excel spreadsheets, at the side of Adobe Acrobat documents, across the top of the Opera and Firefox Web browsers, and—even now—on manila file folders. We’ve kept tabs.

Security will retard innovation

From Technology Review’s “Terror’s Server”:

Zittrain [Jonathan Zittrain, codirector of the Berkman Center for Internet and Society at Harvard Law School] concurs with Neumann [Peter Neumann, a computer scientist at SRI International, a nonprofit research institute in Menlo Park, CA] but also predicts an impending overreaction. Terrorism or no terrorism, he sees a convergence of security, legal, and business trends that will force the Internet to change, and not necessarily for the better. “Collectively speaking, there are going to be technological changes to how the Internet functions — driven either by the law or by collective action. If you look at what they are doing about spam, it has this shape to it,” Zittrain says. And while technological change might improve online security, he says, “it will make the Internet less flexible. If it’s no longer possible for two guys in a garage to write and distribute killer-app code without clearing it first with entrenched interests, we stand to lose the very processes that gave us the Web browser, instant messaging, Linux, and e-mail.”

A very brief history of programming

From Brian Hayes’ “The Post-OOP Paradigm”:

The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when “the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” Half a century later, we’re still debugging.

The very first programs were written in pure binary notation: Both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine’s memory. Before you could call a subroutine, you had to calculate its address.

The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming.
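A toy two-pass assembler captures the division of labor just described; the opcodes and word layout below are invented for illustration, not those of any real machine.

```python
# Toy assembler: mnemonics and symbolic names in, binary words out.
# Opcodes and the 12-bit word format are invented for illustration.
OPCODES = {"load": 0b0001, "add": 0b0010, "sub": 0b0011, "store": 0b0100}

def assemble(source: list[str]) -> list[int]:
    # Pass 1: assign each named variable a memory address automatically,
    # the bookkeeping early programmers had to do by hand.
    symbols = {}
    for line in source:
        _, operand = line.split()
        symbols.setdefault(operand, 0x80 + len(symbols))
    # Pass 2: emit one word per instruction: 4-bit opcode, 8-bit address.
    return [(OPCODES[op] << 8) | symbols[arg]
            for op, arg in (line.split() for line in source)]

program = ["load x", "add y", "store sum"]
print([f"{word:012b}" for word in assemble(program)])
```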

Assembly language was a crucial early advance, but still the programmer had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x²+y² might require dozens of assembly-language instructions. Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x²+y² would be written simply as X**2+Y**2. Expressions of this kind are translated into binary form by a program called a compiler.

… By the 1960s large software projects were notorious for being late, overbudget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a “tar pit” and remarked, “Everyone seems to have been surprised by the stickiness of the problem.”

One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra’s brief letter to the editor titled “Go to statement considered harmful.” Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the goto command, which allows jumps into or out of the middle of a routine). Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs.
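
The three constructs are easy to see in a toy routine with a single entrance and a single exit (the routine itself is just an example for this post):

```python
# Sequencing, alternation, and iteration in one goto-free routine
# with a single entrance and a single exit.
def digit_sum(n: int) -> int:
    n = abs(n)                # sequencing: do A, then B, then C
    total = 0
    while n > 0:              # iteration: repeat until a condition holds
        digit = n % 10
        if digit % 2 == 0:    # alternation: either do A or do B
            total += digit
        else:
            total += digit * 2
        n //= 10
    return total              # the single exit point

print(digit_sum(1234))        # even digits kept, odd doubled: 4 + 6 + 2 + 2 = 14
```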

Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.

Object-oriented programming addresses these issues by packing both data and procedures—both nouns and verbs—into a single object. An object named triangle would have inside it some data structure representing a three-sided shape, but it would also include the procedures (called methods in this context) for acting on the data. To rotate a triangle, you send a message to the triangle object, telling it to rotate itself. Sending and receiving messages is the only way objects communicate with one another; outsiders are not allowed direct access to the data. Because only the object’s own methods know about the internal data structures, it’s easier to keep them in sync.

You define the class triangle just once; individual triangles are created as instances of the class. A mechanism called inheritance takes this idea a step further. You might define a more-general class polygon, which would have triangle as a subclass, along with other subclasses such as quadrilateral, pentagon and hexagon. Some methods would be common to all polygons; one example is the calculation of perimeter, which can be done by adding the lengths of the sides, no matter how many sides there are. If you define the method calculate-perimeter in the class polygon, all the subclasses inherit this code.
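
The passage’s example maps almost directly onto code. In this minimal sketch, the vertex representation and method names are choices of the illustration, not from the article; the point is that perimeter is defined once on the general class and inherited.

```python
# Data and methods packed together; perimeter defined once and inherited.
import math

class Polygon:
    def __init__(self, vertices):
        self.vertices = vertices       # internal data, reached only via methods

    def perimeter(self) -> float:
        """Sum of side lengths: works for any number of sides."""
        vs = self.vertices
        return sum(math.dist(vs[i], vs[(i + 1) % len(vs)])
                   for i in range(len(vs)))

    def rotate(self, angle: float) -> None:
        """'Send a message' telling the shape to rotate itself."""
        c, s = math.cos(angle), math.sin(angle)
        self.vertices = [(x * c - y * s, x * s + y * c)
                         for x, y in self.vertices]

class Triangle(Polygon):
    def __init__(self, a, b, c):
        super().__init__([a, b, c])    # inherits perimeter() and rotate()

t = Triangle((0, 0), (3, 0), (0, 4))
print(t.perimeter())                   # 12.0 (a 3-4-5 triangle)
t.rotate(math.pi / 2)
print(round(t.perimeter()))            # still 12: rotation preserves lengths
```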

Intel: anyone can challenge anyone

From FORTUNE’s “Lessons in Leadership: The Education of Andy Grove”:

[Intel CEO Andy] Grove had never been one to rely on others’ interpretations of reality. … At Intel he fostered a culture in which “knowledge power” would trump “position power.” Anyone could challenge anyone else’s idea, so long as it was about the idea and not the person–and so long as you were ready for the demand “Prove it.” That required data. Without data, an idea was only a story–a representation of reality and thus subject to distortion.

Intel’s ups and downs

From FORTUNE’s “Lessons in Leadership: The Education of Andy Grove”:

By 1983, when Grove distilled much of his thinking in his book High Output Management (still a worthwhile read), he was president of a fast-growing $1.1-billion-a-year corporation, a leading maker of memory chips, whose CEO was Gordon Moore. … What Moore’s Law did not and could not predict was that Japanese firms, too, might master this process and turn memory chips into a commodity. …

Intel kept denying the cliff ahead until its profits went over the edge, plummeting from $198 million in 1984 to less than $2 million in 1985. It was in the middle of this crisis, when many managers would have obsessed about specifics, that Grove stepped outside himself. He and Moore had been agonizing over their dilemma for weeks, he recounts in Only the Paranoid Survive, when something happened: “I looked out the window at the Ferris wheel of the Great America amusement park revolving in the distance when I turned back to Gordon, and I asked, ‘If we got kicked out and the board brought in a new CEO, what do you think he would do?’ Gordon answered without hesitation, ‘He would get us out of memories.’ I stared at him, numb, then said, ‘Why shouldn’t you and I walk out the door, come back, and do it ourselves?'”

… once IBM chose Intel’s microprocessor to be the chip at the heart of its PCs, demand began to explode. Even so, the shift from memory chips was brutally hard–in 1986, Intel fired some 8,000 people and lost more than $180 million on $1.3 billion in sales–the only loss the company has ever posted since its early days as a startup.
