
How doctors measure what percentage of your body is burned

From Daniel Engber’s “How Much of Me Is Burned?” (Slate: 11 July 2006):

In the 1950s, doctors developed an easy way to estimate the ratio of the area of a patient’s burns to the total area of his skin. The system works by assigning standard percentages to major body parts. (Most of these happen to be multiples of nine.) The skin on each arm, for example, covers 9 percent of a patient’s total surface area. Each leg comprises 18 percent, as do the front and back of the torso. The head and neck together make up another 9 percent, and the last bit (or 1 percent) covers the genitalia and perineum. This breakdown makes it easy for doctors to estimate the size of a burn in relation to a body—a burn that covered half the arm would add 4 or 5 percent to the total figure. …

Another method uses the size of a patient’s palm as a reference. As a general rule, the skin on the palm of your hand comprises 0.5 percent of your total surface area. (For children, it’s 1 percent.) A doctor can check the size of a patient’s hand and compare it with the size of a burn to make a quick guess about the percentage.
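
Both estimates reduce to simple addition of weighted body regions. Here is a minimal sketch of that arithmetic; the function and region names are hypothetical, and the percentages are the adult figures quoted above.

```python
# Rough total-body-surface-area (TBSA) burn estimate using the adult rule of nines.
# Illustrative only; real assessments use burn charts and clinical judgment.

RULE_OF_NINES = {            # percent of total skin area, adult values quoted above
    "head_and_neck": 9,
    "arm": 9,                # each arm
    "leg": 18,               # each leg
    "torso_front": 18,
    "torso_back": 18,
    "genitalia_perineum": 1,
}

def estimate_tbsa(burned_fractions):
    """Map each region to the fraction of that region burned; return percent of body burned."""
    return sum(RULE_OF_NINES[region] * frac for region, frac in burned_fractions.items())

# Half of one arm plus a quarter of the front of the torso:
print(estimate_tbsa({"arm": 0.5, "torso_front": 0.25}))   # 4.5 + 4.5 = 9.0 percent
```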


Neal Stephenson on being Isaac Newton

From Laura Miller’s “Everybody loves Spinoza” (Salon: 17 May 2006):

Goldstein’s description [of Spinoza’s conception of God] reminds me of a passage in Neal Stephenson’s historical novel Quicksilver, in which a fictional character has an intimation about a friend, a real genius and contemporary of Spinoza’s: “[He] experienced a faint echo of what it must be like, all the time, to be Isaac Newton: a permanent ongoing epiphany, an endless immersion in lurid radiance, a drowning in light, a ringing of cosmic harmonies in the ears.”


5 reasons people exaggerate risks

From Bruce Schneier’s “Movie Plot Threat Contest: Status Report” (Crypto-Gram Newsletter: 15 May 2006):

In my book, Beyond Fear, I discussed five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.

1. People exaggerate spectacular but rare risks and downplay common risks.

2. People have trouble estimating risks for anything not exactly like their normal situation.

3. Personified risks are perceived to be greater than anonymous risks.

4. People underestimate risks they willingly take and overestimate risks in situations they can’t control.

5. People overestimate risks that are being talked about and remain an object of public scrutiny.


Why airport security fails constantly

From Bruce Schneier’s “Airport Passenger Screening” (Crypto-Gram Newsletter: 15 April 2006):

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns, and 60 percent of (fake) bombs. And recently, testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. …

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces. …

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for — for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.


How virtual machines work

From Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, & Jacob R. Lorch’s “SubVirt: Implementing malware with virtual machines” [PDF]:

A virtual-machine monitor (VMM) manages the resources of the underlying hardware and provides an abstraction of one or more virtual machines [20]. Each virtual machine can run a complete operating system and its applications. Figure 1 shows the architecture used by two modern VMMs (VMware and VirtualPC). Software running within a virtual machine is called guest software (i.e., guest operating systems and guest applications). All guest software (including the guest OS) runs in user mode; only the VMM runs in the most privileged level (kernel mode). The host OS in Figure 1 is used to provide portable access to a wide variety of I/O devices [44].

VMMs export hardware-level abstractions to guest software using emulated hardware. The guest OS interacts with the virtual hardware in the same manner as it would with real hardware (e.g., in/out instructions, DMA), and these interactions are trapped by the VMM and emulated in software. This emulation allows the guest OS to run without modification while maintaining control over the system at the VMM layer.
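
The trap-and-emulate behaviour described here can be caricatured in a few lines. The sketch below is a toy model with hypothetical names, not how VMware or Virtual PC is actually implemented: a made-up VMM catches a privileged I/O instruction from the guest and services it against an emulated device.

```python
# Toy trap-and-emulate loop: a privileged instruction in the guest traps to the VMM,
# which emulates its effect against virtual hardware and then resumes the guest.
# Hypothetical structure for illustration; real VMMs are vastly more involved.

class VirtualSerialPort:
    """Emulated device: its state lives in VMM memory, not in real hardware."""
    def __init__(self):
        self.buffer = []

    def write(self, value):
        self.buffer.append(value)

class ToyVMM:
    def __init__(self):
        self.io_ports = {0x3F8: VirtualSerialPort()}   # emulated I/O port map

    def handle_trap(self, trap):
        # The guest executed a privileged 'out' instruction; emulate it in software,
        # then (conceptually) resume the guest as if real hardware had responded.
        if trap["kind"] == "out":
            self.io_ports[trap["port"]].write(trap["value"])
        # ... other trap kinds: 'in', DMA programming, page-table updates, etc.

vmm = ToyVMM()
vmm.handle_trap({"kind": "out", "port": 0x3F8, "value": 0x41})  # guest writes 'A' to its serial port
print(vmm.io_ports[0x3F8].buffer)                               # [65]: visible only at the VMM layer
```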

A VMM can support multiple OSes on one computer by multiplexing that computer’s hardware and providing the illusion of multiple, distinct virtual computers, each of which can run a separate operating system and its applications. The VMM isolates all resources of each virtual computer through redirection. For example, the VMM can map two virtual disks to different sectors of a shared physical disk, and the VMM can map the physical memory space of each virtual machine to different pages in the real machine’s memory. In addition to multiplexing a computer’s hardware, VMMs also provide a powerful platform for adding services to an existing system. For example, VMMs have been used to debug operating systems and system configurations [30, 49], migrate live machines [40], detect or prevent intrusions [18, 27, 8], and attest for code integrity [17]. These VM services are typically implemented outside the guest they are serving in order to avoid perturbing the guest.

One problem faced by VM services is the difficulty in understanding the states and events inside the guest they are serving; VM services operate at a different level of abstraction from guest software. Software running outside of a virtual machine views low-level virtual-machine state such as disk blocks, network packets, and memory. Software inside the virtual machine interprets this state as high-level abstractions such as files, TCP connections, and variables. This gap between the VMM’s view of data/events and guest software’s view of data/events is called the semantic gap [13].

Virtual-machine introspection (VMI) [18, 27] describes a family of techniques that enables a VM service to understand and modify states and events within the guest. VMI translates variables and guest memory addresses by reading the guest OS and applications’ symbol tables and page tables. VMI uses hardware or software breakpoints to enable a VM service to gain control at specific instruction addresses. Finally, VMI allows a VM service to invoke guest OS or application code. Invoking guest OS code allows the VM service to leverage existing, complex guest code to carry out general-purpose functionality such as reading a guest file from the file cache/disk system. VM services can protect themselves from guest code by disallowing external I/O. They can protect the guest data from perturbation by checkpointing it before changing its state and rolling the guest back later.
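
A heavily simplified sketch of the kind of translation VMI performs: a service running outside the guest resolves a guest virtual address by consulting the guest’s page table, then reads the bytes from guest physical memory. The single-level page-table layout and function names below are assumptions for illustration, not the paper’s API.

```python
# VMI-style address translation, reduced to a toy: walk the guest's page table to turn
# a guest virtual address into an offset into guest physical memory, then read bytes.
# Assumes a hypothetical single-level page table with 4 KiB pages.

PAGE_SIZE = 4096

def read_guest_bytes(guest_phys_mem, guest_page_table, vaddr, length):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = guest_page_table.get(vpn)        # in reality, itself parsed out of guest memory
    if frame is None:
        raise LookupError("guest page not present")
    start = frame * PAGE_SIZE + offset
    return guest_phys_mem[start:start + length]

# Guest "physical memory" as a byte array; virtual page 5 maps to physical frame 2.
mem = bytearray(16 * PAGE_SIZE)
mem[2 * PAGE_SIZE + 0x10:2 * PAGE_SIZE + 0x15] = b"hello"
print(read_guest_bytes(mem, {5: 2}, 5 * PAGE_SIZE + 0x10, 5))   # bytearray(b'hello')
```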


The math behind Flash Worms

From Stuart Staniford, David Moore, Vern Paxson, & Nicholas Weaver’s “The Top Speed of Flash Worms” [PDF] (29 October 2004):

Flash worms follow a precomputed spread tree using prior knowledge of all systems vulnerable to the worm’s exploit. In previous work we suggested that a flash worm could saturate one million vulnerable hosts on the Internet in under 30 seconds [18]. We grossly over-estimated.

In this paper, we revisit the problem in the context of single packet UDP worms (inspired by Slammer and Witty). Simulating a flash version of Slammer, calibrated by current Internet latency measurements and observed worm packet delivery rates, we show that a worm could saturate 95% of one million vulnerable hosts on the Internet in 510 milliseconds. A similar worm using a TCP based service could 95% saturate in 1.3 seconds. …

Since Code Red in July 2001 [11], worms have been of great interest in the security research community. This is because worms can spread so fast that existing signature-based anti-virus and intrusion-prevention defenses risk being irrelevant; signatures cannot be manually generated fast enough …

The premise of a flash worm is that a worm releaser has somehow acquired a list of vulnerable addresses, perhaps by stealthy scanning of the target address space or perhaps by obtaining a database of parties to the vulnerable protocol. The worm releaser, in advance, computes an efficient spread tree and encodes it in the worm. This allows the worm to be far more efficient than a scanning worm; it does not make large numbers of wild guesses for every successful infection. Instead, it successfully infects on most attempts. This makes it less vulnerable to containment defenses based on looking for missed connections [7, 16, 24], or too many connections [20, 25]. …

A difficulty for the flash worm releaser is a lack of robustness if the list of vulnerable addresses is imperfect. Since it is assembled in advance, and networks constantly change, the list is likely to be more-or-less out of date by the time of use. This has two effects. Firstly, a certain proportion of actually vulnerable and reachable machines may not be on the list, thus preventing the worm from saturating as fully as otherwise possible. More seriously, some addresses on the list may not be vulnerable. If such nodes are near the base of the spread tree, they may prevent large numbers of vulnerable machines from being infected by the worm. Very deep spread trees are particularly prone to this. Thus in thinking about flash worms, we need to explore the issue of robustness as well as speed. …

The Slammer worm [10, 22] of January 2003 was the fastest scanning worm to date by far and is likely close to the lower bound on the size of a worm. Data on observed Slammer infections (and on those of the similar Witty worm) provide us with estimates for packet rate and minimum code size in future flash worms. Slammer infected Microsoft’s SQL server. A single UDP packet served as exploit and worm and required no acknowledgment. The size of the data was 376 bytes, giving a 404 byte IP packet. This consisted of the following sections:

• IP header
• UDP header
• Data to overflow buffer and gain control
• Code to find the addresses of needed functions
• Code to initialize a UDP socket
• Code to seed the pseudo-random number generator
• Code to generate a random address
• Code to copy the worm to the address via the socket …

In this paper, we assume that the target vulnerable population is N = 1,000,000 (one million hosts, somewhat larger than the 360,000 infected by Code Red [11]). Thus in much less than a second, the initial host can directly infect a first generation of roughly 5,000 – 50,000 intermediate nodes, leaving each of those with only 20-200 hosts to infect to saturate the population. There would be no need for a third layer in the tree.

This implies that the address list for the intermediate hosts can fit in the same packet as the worm; 200 addresses consume only 800 bytes. A flash version of Slammer need only be slightly different from the original: the address list of nodes to be infected would be carried immediately after the end of the code, and the final loop could traverse that list, sending out packets to infect those nodes (instead of generating pseudo-random addresses). …
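
The arithmetic behind that two-level tree is easy to check. A short sketch using the paper’s round numbers (one million hosts, 4-byte IPv4 addresses); the variable names are illustrative only.

```python
# Back-of-the-envelope check of the two-level spread tree described above.
# Uses the paper's round figures; nothing here is a calibrated measurement.

N = 1_000_000        # vulnerable hosts
ADDR_BYTES = 4       # one IPv4 address

for first_generation in (5_000, 50_000):
    per_intermediate = N // first_generation      # leaves each intermediate must infect
    list_bytes = per_intermediate * ADDR_BYTES    # address list carried alongside the 404-byte worm
    print(f"{first_generation:>6} intermediates -> {per_intermediate:>4} addresses each "
          f"({list_bytes} bytes)")

#   5000 intermediates ->  200 addresses each (800 bytes)
#  50000 intermediates ->    20 addresses each (80 bytes)
```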

The graph indicates clearly that such flash worms can indeed be extraordinarily fast, infecting 95% of hosts in 510ms, and 99% in 1.2s. There is a long tail at the end due to the long tail in Internet latency data; some parts of the Internet are poorly connected and take a few seconds to reach. …

Can these results be extended to TCP services? If so, then our results are more grave; TCP offers worm writers a wealth of additional services to exploit. In this section we explore these issues. We conclude that top-speed propagation is viable for TCP worms, too, at the cost of an extra round-trip in latency to establish the connection and double the bandwidth if we want to quickly recover from loss. …

We believe a TCP worm could be written to be not much larger than Slammer. In addition to that 404 bytes, it needs a few more ioctl calls to set up a low level socket to send crafted SYN packets, and to set up a separate thread to listen for SYN-ACKs and send out copies of the worm. We estimate 600 bytes total. Such a worm could send out SYNs at line rate, confident that the SYN-ACKs would come back slower due to latency spread. The initial node can maintain a big enough buffer for the SYN-ACKs and the secondary nodes only send out a small number of SYNs. Both will likely be limited by the latency of the SYN-ACKs returning rather than the small amount of time required to deliver all the worms at their respective line rates.

To estimate the performance of such a small TCP flash worm, we repeated the Monte Carlo simulation we performed for the UDP worm with the latency increased by a factor of three for the handshake and the outbound delivery rates adjusted for 40 byte SYN packets. The results are shown in Figure 6. This simulation predicts 95% compromise after 1.3s, and 99% compromise after 3.3s. Thus TCP flash worms are a little slower than UDP ones because of the handshake latency, but can still be very fast. …
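
The structure of that Monte Carlo estimate can be reproduced with a toy simulation. The lognormal latency model below is an assumed stand-in, not the paper’s measured Internet data, so the resulting numbers will not match Figure 6; the point is only the shape of the calculation: each leaf’s infection time is the hop from the initial node to its intermediate plus the hop from the intermediate to the leaf, with the TCP variant paying roughly three times the one-way latency per hop for the handshake.

```python
# Toy version of the paper's Monte Carlo: a leaf is infected after
# (latency: initial node -> intermediate) + (latency: intermediate -> leaf),
# and the TCP variant pays ~3x one-way latency per hop for the handshake.
# The lognormal latency distribution is an assumption, NOT the paper's measured data.

import random
random.seed(0)

def one_way_latency_s():
    return random.lognormvariate(-2.3, 0.8)   # ~100 ms median with a heavy tail (assumed)

def saturation_time_s(n_leaves=100_000, handshake_factor=1, quantile=0.95):
    times = sorted(handshake_factor * one_way_latency_s() +
                   handshake_factor * one_way_latency_s()
                   for _ in range(n_leaves))
    return times[int(quantile * n_leaves) - 1]

print(f"UDP-style worm, 95th percentile: {saturation_time_s(handshake_factor=1):.2f} s")
print(f"TCP-style worm, 95th percentile: {saturation_time_s(handshake_factor=3):.2f} s")
```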

It appears that the optimum solution for the attacker – considering the plausible near-term worm defenses – is for a flash worm author to simply ignore the defenses and concentrate on making the worm as fast and reliable as possible, rather than slowing the worm to avoid detection. Any system behind a fully working defense can simply be considered as resistant, which the worm author counters by using the resiliency mechanisms outlined in the previous sections, combined with optimizing for minimum infection time.

Thus, for the defender, the current best hope is to keep the list of vulnerable addresses out of the hands of the attacker. …

The fastest worm seen in the wild so far was Slammer [10]. That was a random scanning worm, but saturated over 90% of vulnerable machines in under 10 minutes, and appears to have mainly been limited by bandwidth. The early exponential spread had an 8.5s time constant.

In this paper, we performed detailed analysis of how long a flash worm might take to spread on the contemporary Internet. These analyses use simulations based on actual data about Internet latencies and observed packet delivery rates by worms. Flash worms can complete their spread extremely quickly – with most infections occurring in much less than a second for single packet UDP worms and only a few seconds for small TCP worms. Anyone designing worm defenses needs to bear these time factors in mind.


The Vitruvian Triad & the Urban Triad

From Andrés Duany’s “Classic Urbanism“:

From time to time there appears a concept of exceptional longevity. In architecture, the pre-eminent instance is the Vitruvian triad of Comoditas, Utilitas, e Venustas. This Roman epigram was propelled into immortality by Lord Burlington’s felicitous translation as Commodity, Firmness and Delight.

It has thus passed down the centuries and remains authoritative, even if not always applied in practice; Commodity: That a building must accommodate its program; Firmness: That it must stand up to the natural elements, among them gravity; Delight: that it must be satisfying to the eye, is with the aberrant exception of the tiny, current avant garde, the ideal of architecture. …

Let me propose the urban triad of Function, Disposition and Configuration as categories that would both describe and “test” the urban performance of a building.

Function describes the use to which the building lends itself, towards the ideal of mixed-use. In urbanism, the range of function at a first cut may include: exclusively residential, primarily residential, primarily commercial, or exclusively commercial. The middle two are the best in urban performance, although the extremes have justification in the urban-to-rural transect. An elaboration should probably differentiate the function at the all-important sidewalk level from the function above.

Disposition describes the location of the building on its lot or site. This may range from a building placed across the frontage of its lot, creating a most urban condition to the rural condition of the building freestanding in the center of its site. Perhaps the easiest way to categorize the disposition of the building is by describing it by its yards: The rearyard building has the building along the frontage, the courtyard building internalizes the space and is just as urban, the sideyard building is the zero-lot line or “Charleston single house” and the edgeyard building is a freestanding object closest to the rural edge of the transect.

The third component of the urban triad is Configuration. This describes the massing, height of a building and, for those who believe that harmony is a tool of urbanism, the architectural syntax and constructional tectonic. It can be argued that the surface of a building is a tool of urbanism no less than its form. Silence of expression is required to achieve the “wall” that defines public space, and that reserves the exalted configuration to differentiate the public building. Harmony in the architectural language is the secret of mixed-use. People seem not to mind variation of function as long as the container looks similar. It is certainly a concern of urbanism.


The way to trick smart people

From Paul’s “The easiest way to fool smart people“:

There’s a saying among con-men that smart people are easier targets, because they don’t think they can be conned.

I’m not sure if that’s true, but there’s one scam that’s almost guaranteed to make smart people switch off their brains and reach for their wallets. It’s a trick that’s used so pervasively in our culture, that once you become aware of it, you start to see it everywhere. …

Most smart people have a hidden weakness and it’s this – they’re absolute suckers for anything that sounds clever.

As soon as you start hitting people with technical terms, fancy graphs, famous names and the like, you’ll immediately increase your credibility. If they’re smart, they’re even more likely to find themselves nodding in agreement. Many intelligent people would rather cut off a finger than admit they don’t know what you’re talking about. …

Even better, they can pretend to be teaching their audience something important. A person who was previously completely ignorant about quantum physics now feels as if they understand something about it – even if that something is absolute baloney. The audience have been fed ideas they’ll now defend even against someone who’s a real expert in that subject. Nobody likes to be told that something they’ve been led to believe is wrong. …

Consultants behave this way because they know that’s how to get a sale. Bombard people with clever-sounding stuff they don’t really understand, and they’ll assume that you’re some kind of genius. It’s a great way of making money.

Stock analysts, economic forecasters, management consultants, futurologists, investment advisors and so on use this tactic all the time. It’s their chief marketing strategy for the simple reason that it works.


The history of solitary confinement

From Daniel Brook’s “A History of Hard Time” (Legal Affairs: January/February 2003):

Dickens wasn’t the first European intellectual who had crossed the Atlantic to visit Eastern State Penitentiary. A decade earlier, Alexis de Tocqueville had been sent by the French government to study the Philadelphia prison. …

What drew the attention of Americans and Europeans was an innovative method of punishment being pioneered at the prison called solitary confinement. While the practice had roots in medieval monasteries, where it was used to punish disobedient monks, solitary confinement came to prominence as a form of criminal punishment in the United States soon after the Revolution. …

In colonial America, capital punishment had been common, and not just for murder – burglary and sodomy could earn an offender the death penalty as well. For less serious offenses, criminals were generally subjected to physical punishments meted out on the public square. In a frontier nation of small towns, public embarrassment was seen as the key to deterring crime. Physical punishment, whether in the form of the stockade or the whipping post, was combined with the psychological punishment of being shamed in front of the community. Jails existed, but they were used mainly to hold criminals before trial and punishment. There were no cells and few rules: Men and women were housed together, and alcohol was often available. …

In 1787, at a soiree held in Benjamin Franklin’s living room, [Dr. Benjamin Rush of Philadelphia, a signatory of the Declaration of Independence & widely regarded as America’s foremost physician] presented an essay titled, “An Enquiry Into the Effects of Public Punishments Upon Criminals, and Upon Society.” Rush declared that “crimes should be punished in private, or not punished at all.” He claimed that public punishment failed to rehabilitate the criminal and risked letting the convict become an object of community sympathy. In lieu of public, physical punishments, Rush endorsed the creation of a “house of repentance.” Grounded in the Quaker principle that each individual is blessed with “Inner Light,” Rush envisioned a place of anonymity, solitude, and silence, where prisoners could dwell on their crimes, repent, and return rehabilitated into society. …

In 1821, the reformers finally convinced the Pennsylvania legislature to approve funding for Eastern State Penitentiary, which would be the largest public building in the country; with a price tag of nearly $800,000, it was likely the most costly one as well. No expense was spared: To prevent disease, each cell in the new prison was equipped with a toilet, a rare luxury at the time. When the penitentiary opened in 1829, President Andrew Jackson was still using an outhouse on the White House lawn.

The principles of the penitentiary system – silence, solitude, surveillance, and anonymity – were incorporated into the architectural plan. Eastern State was designed by John Haviland, a young architect, who proposed a hub-and-spokes model that allowed for constant surveillance. Inmates were housed in 8-by-12-foot cells arranged along a series of cellblocks radiating out from a central observation tower.

Each prisoner remained in his cell at all times, save for a brief daily exercise period held in an individual pen adjoining each cell. Prisoners ate their meals in their cells and did small-scale prison labor there like shoemaking. On the rare occasions when prisoners were allowed to leave their cells, they were prevented from interacting with other prisoners by hoods they were forced to wear to protect their anonymity. They were also forced to use numbers instead of names for the same reason. Silence was maintained at all times in the prison, and reading the Bible was the only activity other than labor that was permitted. Reformers believed that cutting inmates off from the world would foster meditation that would lead to rehabilitation, so visits from family or friends were prohibited. On average, inmates spent two to four years alone in their cells, underneath a single round skylight, known in the prison as the “eye of God.”

The expense of the building limited its influence in the United States, but Eastern State was widely copied in Europe and even in Latin America and Japan, where economic conditions made the model more attractive. Over 300 prisons were built on Eastern State’s hub-and-spokes model, in cities as diverse as London, Paris, Milan, St. Petersburg, and Beijing. Architectural historians consider the hub-and-spokes penitentiary to be the only American building type to have had global influence until the first skyscrapers began to rise in Chicago and New York in the 1880s. …

Dickens, who also interviewed prisoners at Eastern State, was far more skeptical. In his travelogue, American Notes, he described Philadelphia’s system of “rigid, strict, and hopeless solitary confinement” as “cruel and wrong.” …

Dickens didn’t accept that the penitentiary represented human progress over the days of floggings on the public square, or as his prose suggested, even the medieval torture chamber. “I hold this slow and daily tampering with the mysteries of the brain to be immeasurably worse than any torture of the body.” …

In New York, at the Auburn prison near Syracuse and later at Sing Sing in Westchester County, a modified system of solitary confinement was being put into practice. While inmates spent their nights in solitary cells, they worked together silently in a common area during the day. This allowed wardens to set up profitable prison industries that could offset the costs of prison construction. …

Despite this vehement defense of the solitary system, in the period after the Civil War, the regimen at Eastern State was slowly abandoned. … Without enough funding to keep the system running, inmates were frequently doubled up in cells. In 1913, the solitary system was officially abandoned. Solitary confinement became a short-term punishment for misbehaving prisoners rather than the prison’s standard operating procedure. …

More than half of all U.S. prisons in use today were built in the past 25 years, to house a prison population that has risen almost 500 percent over roughly the same period. The United States has the highest incarceration rate in the world. In raw numbers, it has more prisoners than China, a country with over four times as many people. …

Supermax prisons – high-tech, maximum-security facilities – were the answer politicians and corrections departments were looking for to solve the problem of increasing violence in prisons. Following Marion’s lead, corrections departments around the country began building supermax prisons, or adding supermax wings to their existing prisons to handle the growing number of violent prisoners who could not be controlled in the traditional prison system. Today there are 20,000 supermax inmates in the United States, roughly 2 percent of the total prison population, though in some states the proportion is much higher: In Mississippi, 12 percent of prisoners live in supermax units.

The system of punishment in supermax units resembles nothing so much as the system of punishment pioneered at Eastern State. The Pelican Bay Security Housing Unit, which cost California taxpayers a quarter of a billion dollars, is perhaps the most notorious supermax. From the air it looks like a high-tech version of the Philadelphia prison: Its hub-and-spokes design is clearly descended from John Haviland’s 19th-century architectural plan. Inmates in the SHU (known as “the shoe”) are kept in their cells close to 24 hours a day. As at Eastern State, inmates eat in their cells and exercise in isolated attached yards. …

Dr. Stuart Grassian, a Harvard Medical School psychiatrist who was given access to SHU inmates to prepare for providing expert testimony in lawsuits against the California Department of Corrections, has concluded that the regimen in security housing units drives prisoners insane, and he estimates that one-third of all SHU inmates are psychotic. He writes of what he calls “the SHU syndrome,” the symptoms of which include self-mutilation and throwing excrement.

Dr. Terry Kupers, a psychiatrist who has interviewed supermax inmates, writes that a majority of inmates “talk about their inability to concentrate, their heightened anxiety, their intermittent disorientation and confusion, their experience of unreality, and their tendency to strike out at the nearest person when they reach their ‘breaking point.’ ” Even those inmates who don’t become psychotic experience many of these symptoms. Those least likely to become mentally ill in solitary confinement are prisoners who can read, because reading prevents the boredom that can lead to insanity. (The human psyche appears not to have changed since the days of Eastern State, when an inmate told Alexis de Tocqueville that reading the Bible was his “greatest consolation.”) Because roughly 40 percent of U.S. prisoners are functionally illiterate, however, reading can provide solace and sanity to only a fraction of those behind bars.


Road rash, fender vaults, & roof vaults

From Jascha Hoffman’s “Crash Course” (Legal Affairs: July/August 2004):

Typically there are two kinds of injuries [in hit-and-run cases], those from the initial impact, and the ones from hitting and sliding on the asphalt, known as “road rash.” To illustrate the different types of impact a pedestrian can suffer, Rich cued up a series of video clips on his laptop. The first one showed a well-dressed man with a briefcase in each hand caught crossing a busy Manhattan street. Suddenly, a white minivan blindsided him, causing a “fender vault” that tossed the man three feet into the air, still holding one briefcase. A taxi approaching from the opposite direction then launched him into a textbook “roof vault,” sending his remaining briefcase flying and hurling him headfirst onto the pavement. This was not a walk-away accident.


A brief history of American bodysnatching

From Emily Bazelon’s “Grave Offense” (Legal Affairs: July/August 2002):

In December 1882, hundreds of black Philadelphians gathered at the city morgue. They feared that family members whom they had recently buried were, as a reporter put it, “amongst the staring corpses” that lay inside. Six bodies that had been taken from their graves at Lebanon Cemetery, the burial ground for Philadelphia’s African-Americans, had been brought to the morgue after being discovered on the back of a wagon bound for Jefferson Medical College. The cemetery’s black superintendent had admitted that for many years he let three grave robbers, his brother and two white men, steal as many corpses as they could sell to the college for dissection in anatomy classes.

At the morgue, a man asked others to bare their heads and swear on the bodies before them that they would kill the grave robbers. Another man found the body of his 29-year-old brother and screamed. A weeping elderly woman identified one of the corpses as her dead husband. According to the Philadelphia Press, which broke the story, to pay for her husband’s burial she had begged $22 at the wharves where he had once worked.

Medical science lay behind the body snatchings at Lebanon Cemetery and similar crimes throughout the Northeast and Midwest during the 19th century. By the 1820s, anatomy instruction had become central to medical education, but laws of the time, if they allowed for dissection, let medical schools use corpses only of condemned murderers. In their scramble to find other cadavers for students, doctors who taught anatomy competed for the booty of grave robbers—or sent medical students to rob the graves themselves. …

In the early 19th century, doctors were eager to distinguish themselves from midwives and homeopaths, and embraced anatomy as a critical source of their exclusive knowledge. In the words of a speaker at a New York medical society meeting in 1834, a physician who had not dissected a human body was “a disgrace to himself, a pest in society, and would maintain but a level with steam and red pepper quacks.” …

According to Michael Sappol’s recent book, A Traffic of Dead Bodies, Harvard Medical School moved its campus from Cambridge to Boston (where it remains) expecting to get bodies from an almshouse there. …

“Men seem prompted by their very nature to an earnest desire that their deceased friends be decently interred,” explained the grand jury charged with investigating a 1788 dissection-sparked riot in which 5,000 people stormed New York Hospital.

To protect the graves of their loved ones, 19th-century families who could afford it bought sturdy coffins and plots in a churchyard or cemetery guarded by night watchmen. Bodies buried in black cemeteries and paupers’ burial grounds, which often lacked those safeguards, were more vulnerable. In 1827, a black newspaper called Freedom’s Journal instructed readers that they could cheaply guard against body snatching by packing straw into the graves. In 1820s Philadelphia, several medical schools secretly bribed the superintendent of the public graveyard for 12 to 20 cadavers a week during “dissecting season.” He made sure to keep strict watch “to prevent adventurers from robbing him—not to prevent them from emptying the pits,” Philadelphia doctor John D. Godman wrote in 1829.

When a body snatching was detected, it made for fury and headlines. The 1788 New York riot, in which three people were killed, began when an anatomy instructor shooed some children away from his window with the dismembered arm of a corpse, which (legend has it) belonged to the recently buried mother of one of the boys; her body had been stolen from its coffin. In 1824, the body of a farmer’s daughter was found beneath the floor of the cellar of Yale’s medical school. An assistant suspected of the crime was almost tarred and feathered. In 1852, after a woman’s body was found in a cesspool near Cleveland’s medical school, a mob led by her father set fire to the building, wrecking a laboratory and a museum inside. …

In the morning, news spread that the robbers had been taken into custody. An “immense crowd of people surrounded the magistrate’s office and threatened to kill the resurrectionists,” the Press reported. …

The doctors got what they asked for. A new Pennsylvania law, passed in 1883, required officials at every almshouse, prison, morgue, hospital, and public institution in the state to give medical schools corpses that would otherwise be buried at public expense.


The difficulties in establishing time of death

From Jessica Sachs’s “Expiration Date” (Legal Affairs: March/April 2004):

More than two centuries of earnest scientific research have tried to forge better clocks based on rigor, algor, and livor mortis – the progressive phenomena of postmortem muscle stiffening, body cooling, and blood pooling. But instead of honing time-of-death estimates, this research has revealed their vagaries. Two bodies that reached death within minutes of each other can, and frequently do, show marked differences in postmortem time markers. Even the method of testing eye potassium levels, which was recently hailed as the new benchmark for pinpointing time of death, has fallen into disrepute, following autopsies that showed occasional differences in levels in the left and right eye of the same cadaver. …

And the longer a body is dead, the harder it is to figure out when its owner died. In their book The Estimation of Time Since Death in the Early Postmortem Period, the world-renowned experts Claus Henssge and Bernard Knight warn pathologists to surrender any pretensions of doing science beyond the first 24 to 48 hours after death.


Court acceptance of forensic & biometric evidence

From Brendan I. Koerner’s “Under the Microscope” (Legal Affairs: July/August 2002):

The mantra of forensic evidence examination is “ACE-V.” The acronym stands for Analysis, Comparison, Evaluation, and Verification, which forensic scientists compare with the step-by-step method drilled into countless chemistry students. “Instead of hypothesis, data collection, conclusion, we have ACE-V,” says Elaine Pagliaro, an expert at the Connecticut lab who specializes in biochemical analysis. “It’s essentially the same process. It’s just that it grew out of people who didn’t come from a background in the scientific method.” …

Yet for most of the 20th century, courts seldom set limits on what experts could say to juries. The 1923 case Frye v. United States mandated that expert witnesses could discuss any technique that had “gained general acceptance in the particular field in which it belongs.” Courts treated forensic science as if it were as well-founded as biology or physics. …

In 1993, the Supreme Court set a new standard for evidence that took into account the accelerated pace of scientific progress. In a case called Daubert v. Merrell Dow Pharmaceuticals, the plaintiffs wanted to show the jury some novel epidemiological studies to bolster their claim that Merrell Dow’s anti-nausea drug Bendectin caused birth defects. The trial judge didn’t let them. The plaintiff’s evidence, he reasoned, was simply too futuristic to have gained general acceptance.

When the case got to the Supreme Court, the justices seized the opportunity to revolutionize the judiciary’s role in supervising expert testimony. Writing for a unanimous court, Justice Harry Blackmun instructed judges to “ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.” Daubert turned judges into “gatekeepers” responsible for discerning good science from junk before an expert takes the stand. Blackmun suggested that good science must be testable, subject to peer review, and feature a “known or potential rate of error.” …

There are a few exceptions, though. In 1999, Judge Nancy Gertner of the Federal District Court in Massachusetts set limits on the kinds of conclusions a handwriting expert could draw before a jury in United States v. Hines. The expert could point out similarities between the defendant’s handwriting and the writing on a stick-up note, the judge said, but she could not “make any ultimate conclusions on the actual authorship.” The judge questioned “the validity of the field” of handwriting analysis, noting that “one’s handwriting is not at all unique in the sense that it remains the same over time, or unique[ly] separates one individual from another.”

Early this year, Judge Pollak stunned the legal world by similarly reining in fingerprint experts in the murder-for-hire case United States v. Plaza. Pollak was disturbed by a proficiency test finding that 26 percent of the crime labs surveyed in different states did not correctly identify a set of latent prints on the first try. “Even 100 years of ‘adversarial’ testing in court cannot substitute for scientific testing,” he said. He ruled that the experts could show the jury similarities between the defendants’ prints and latent prints found at the crime scenes, but could not say the prints matched. …

… West Virginia University recently offered the nation’s first-ever four-year degree in biometrics …


Failure every 30 years produces better design

From The New York Times‘ “Form Follows Function. Now Go Out and Cut the Grass.“:

Failure, [Henry] Petroski shows, works. Or rather, engineers only learn from things that fail: bridges that collapse, software that crashes, spacecraft that explode. Everything that is designed fails, and everything that fails leads to better design. Next time at least that mistake won’t be made: Aleve won’t be packed in child-proof bottles so difficult to open that they stymie the arthritic patients seeking the pills inside; narrow suspension bridges won’t be built without “stay cables” like the ill-fated Tacoma Narrows Bridge, which was twisted to its destruction by strong winds in 1940.

Successes have fewer lessons to teach. This is one reason, Mr. Petroski points out, that there has been a major bridge disaster every 30 years. Gradually the techniques and knowledge of one generation become taken for granted; premises are no longer scrutinized. So they are re-applied in ambitious projects by creators who no longer recognize these hidden flaws and assumptions.

Mr. Petroski suggests that 30 years – an implicit marker of generational time – is the period between disasters in many specialized human enterprises, the period between, say, the beginning of manned space travel and the Challenger disaster, or the beginnings of nuclear energy and the 1979 accident at Three Mile Island. …

Mr. Petroski cites an epigram of Epictetus: “Everything has two handles – by one of which it ought to be carried and by the other not.”


What can we use instead of gasoline in cars?

From Popular Mechanics‘ “How far can you drive on a bushel of corn?“:

It is East Kansas Agri-Energy’s ethanol facility, one of 100 or so such heartland garrisons in America’s slowly gathering battle to reduce its dependence on fossil fuels. The plant processes about 13 million bushels of corn to produce approximately 36 million gal. of ethanol a year. “That’s enough high-quality motor fuel to replace 55,000 barrels of imported petroleum,” the plant’s manager, Derek Peine, says. …

It takes five barrels of crude oil to produce enough gasoline (nearly 97 gal.) to power a Honda Civic from New York to California. …

Ethanol/E85

E85 is a blend of 85 percent ethanol and 15 percent gasoline. … A gallon of E85 has an energy content of about 80,000 BTU, compared to gasoline’s 124,800 BTU. So about 1.56 gal. of E85 takes you as far as 1 gal. of gas.
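
That 1.56 figure is just the ratio of the two energy densities. A quick sketch of the arithmetic, applied to a hypothetical car that gets 30 mpg on gasoline (the 30 mpg figure is an assumption, not from the article):

```python
# Energy-equivalence arithmetic for E85 vs. gasoline, using the BTU figures quoted above.

GASOLINE_BTU_PER_GAL = 124_800
E85_BTU_PER_GAL = 80_000

ratio = GASOLINE_BTU_PER_GAL / E85_BTU_PER_GAL
print(f"Gallons of E85 per gasoline-gallon equivalent: {ratio:.2f}")   # 1.56

mpg_on_gasoline = 30                      # hypothetical vehicle, not from the article
print(f"Approximate mpg on E85: {mpg_on_gasoline / ratio:.1f}")        # ~19.2
```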

Case For: Ethanol is an excellent, clean-burning fuel, potentially providing more horsepower than gasoline. In fact, ethanol has a higher octane rating (over 100) and burns cooler than gasoline. However, pure alcohol isn’t volatile enough to get an engine started on cold days, hence E85. …

Cynics claim that it takes more energy to grow corn and distill it into alcohol than you can get out of the alcohol. However, according to the DOE, the growing, fermenting and distillation chain actually results in a surplus of energy that ranges from 34 to 66 percent. Moreover, the carbon dioxide (CO2) that an engine produces started out as atmospheric CO2 that the cornstalk captured during growth, making ethanol greenhouse gas neutral. Recent DOE studies note that using ethanol in blends lowers carbon monoxide (CO) and CO2 emissions substantially. In 2005, burning such blends had the same effect on greenhouse gas emissions as removing 1 million cars from American roads. …

One acre of corn can produce 300 gal. of ethanol per growing season. So, in order to replace that 200 billion gal. of petroleum products, American farmers would need to dedicate 675 million acres, or 71 percent of the nation’s 938 million acres of farmland, to growing feedstock. Clearly, ethanol alone won’t kick our fossil fuel dependence–unless we want to replace our oil imports with food imports. …

Biodiesel

Fuels for diesel engines made from sources other than petroleum are known as biodiesel. Among the common sources are vegetable oils, rendered chicken fat and used fry oil. …

Case For: Modern diesel engines can run on 100 percent biodiesel with little degradation in performance compared to petrodiesel because the BTU content of both fuels is similar–120,000 to 130,000 BTU per gallon. In addition, biodiesel burns cleaner than petrodiesel, with reduced emissions. Unlike petrodiesel, biodiesel molecules are oxygen-bearing, and partially support their own combustion.

According to the DOE, pure biodiesel reduces CO emissions by more than 75 percent over petroleum diesel. A blend of 20 percent biodiesel and 80 percent petrodiesel, sold as B20, reduces CO2 emissions by around 15 percent.

Case Against: Pure biodiesel, B100, costs about $3.50–roughly a dollar more per gallon than petrodiesel. And, in low temperatures, higher-concentration blends–B30, B100–turn into waxy solids and do not flow. Special additives or fuel warmers are needed to prevent fuel waxing. …

Electricity

Case For: Vehicles that operate only on electricity require no warmup, run almost silently and have excellent performance up to the limit of their range. Also, electric cars are cheap to “refuel.” At the average price of 10 cents per kwh, it costs around 2 cents per mile. …

A strong appeal of the electric car–and of a hybrid when it’s running on electricity–is that it produces no tailpipe emissions. Even when emissions created by power plants are factored in, electric vehicles emit less than 10 percent of the pollution of an internal-combustion car.

Case Against: Pure electric cars still have limited range, typically no more than 100 to 120 miles. In addition, electrics suffer from slow charging, which, in effect, reduces their usability….

And then there’s the environmental cost. Only 2.3 percent of the nation’s electricity comes from renewable resources; about half is generated in coal-burning plants.

Hydrogen

Hydrogen is the most abundant element on Earth, forming part of many chemical compounds. Pure hydrogen can be made by electrolysis–passing electricity through water. This liberates the oxygen, which can be used for many industrial purposes. Most hydrogen currently is made from petroleum.

Case For: Though hydrogen can fuel a modified internal-combustion engine, most see hydrogen as a way to power fuel cells to move cars electrically. The only byproduct of a hydrogen fuel cell is water.

Case Against: … And, despite the chemical simplicity of electrolysis, producing hydrogen is expensive and energy consuming. It takes about 17 kwh of electricity, which costs about $1.70, to make just 100 cu. ft. of hydrogen. That amount would power a fuel cell vehicle for about 20 miles.


Why are some people really good at some things?

From Stephen J. Dubner & Steven D. Levitt’s “A Star Is Made” (The New York Times):

Anders Ericsson, a 58-year-old psychology professor at Florida State University, … is the ringleader of what might be called the Expert Performance Movement, a loose coalition of scholars trying to answer an important and seemingly primordial question: When someone is very good at a given thing, what is it that actually makes him good? …

In other words, whatever innate differences two people may exhibit in their abilities to memorize, those differences are swamped by how well each person “encodes” the information. And the best way to learn how to encode information meaningfully, Ericsson determined, was a process known as deliberate practice.

Deliberate practice entails more than simply repeating a task – playing a C-minor scale 100 times, for instance, or hitting tennis serves until your shoulder pops out of its socket. Rather, it involves setting specific goals, obtaining immediate feedback and concentrating as much on technique as on outcome. …

Their work, compiled in the “Cambridge Handbook of Expertise and Expert Performance,” a 900-page academic book that will be published next month, makes a rather startling assertion: the trait we commonly call talent is highly overrated. Or, put another way, expert performers – whether in memory or surgery, ballet or computer programming – are nearly always made, not born. And yes, practice does make perfect. …

Ericsson’s research suggests a third cliché as well: when it comes to choosing a life path, you should do what you love – because if you don’t love it, you are unlikely to work hard enough to get very good. Most people naturally don’t like to do things they aren’t “good” at. So they often give up, telling themselves they simply don’t possess the talent for math or skiing or the violin. But what they really lack is the desire to be good and to undertake the deliberate practice that would make them better. …

Ericsson has noted that most doctors actually perform worse the longer they are out of medical school. Surgeons, however, are an exception. That’s because they are constantly exposed to two key elements of deliberate practice: immediate feedback and specific goal-setting.


Our reasons for giving reasons

From Malcolm Gladwell’s “Here’s Why: A sociologist offers an anatomy of explanations“:

In “Why?”, the Columbia University scholar Charles Tilly sets out to make sense of our reasons for giving reasons. …

In Tilly’s view, we rely on four general categories of reasons. The first is what he calls conventions—conventionally accepted explanations. Tilly would call “Don’t be a tattletale” a convention. The second is stories, and what distinguishes a story (“I was playing with my truck, and then Geoffrey came in . . .”) is a very specific account of cause and effect. Tilly cites the sociologist Francesca Polletta’s interviews with people who were active in the civil-rights sit-ins of the nineteen-sixties. Polletta repeatedly heard stories that stressed the spontaneity of the protests, leaving out the role of civil-rights organizations, teachers, and churches. That’s what stories do. As Tilly writes, they circumscribe time and space, limit the number of actors and actions, situate all causes “in the consciousness of the actors,” and elevate the personal over the institutional.

Then there are codes, which are high-level conventions, formulas that invoke sometimes recondite procedural rules and categories. If a loan officer turns you down for a mortgage, the reason he gives has to do with your inability to conform to a prescribed standard of creditworthiness. Finally, there are technical accounts: stories informed by specialized knowledge and authority. An academic history of civil-rights sit-ins wouldn’t leave out the role of institutions, and it probably wouldn’t focus on a few actors and actions; it would aim at giving patient and expert attention to every sort of nuance and detail.

Tilly argues that we make two common errors when it comes to understanding reasons. The first is to assume that some kinds of reasons are always better than others—that there is a hierarchy of reasons, with conventions (the least sophisticated) at the bottom and technical accounts at the top. That’s wrong, Tilly says: each type of reason has its own role.

Tilly’s second point flows from the first, and it’s that the reasons people give aren’t a function of their character—that is, there aren’t people who always favor technical accounts and people who always favor stories. Rather, reasons arise out of situations and roles. …

Reason-giving, Tilly says, reflects, establishes, repairs, and negotiates relationships. The husband who uses a story to explain his unhappiness to his wife—“Ever since I got my new job, I feel like I’ve just been so busy that I haven’t had time for us”—is attempting to salvage the relationship. But when he wants out of the marriage, he’ll say, “It’s not you—it’s me.” He switches to a convention. As his wife realizes, it’s not the content of what he has said that matters. It’s his shift from the kind of reason-giving that signals commitment to the kind that signals disengagement. Marriages thrive on stories. They die on conventions. …

The fact that Timothy’s mother accepts tattling from his father but rejects it from Timothy is not evidence of capriciousness; it just means that a husband’s relationship to his wife gives him access to a reason-giving category that a son’s role does not. …

When we say that two parties in a conflict are “talking past each other,” this is what we mean: that both sides have a legitimate attachment to mutually exclusive reasons. Proponents of abortion often rely on a convention (choice) and a technical account (concerning the viability of a fetus in the first trimester). Opponents of abortion turn the fate of each individual fetus into a story: a life created and then abruptly terminated. Is it any surprise that the issue has proved to be so intractable? If you believe that stories are the most appropriate form of reason-giving, then those who use conventions and technical accounts will seem morally indifferent—regardless of whether you agree with them. And, if you believe that a problem is best adjudicated through conventions or technical accounts, it is hard not to look upon storytellers as sensationalistic and intellectually unserious. …

Tilly argues that these conflicts are endemic to the legal system. Laws are established in opposition to stories. In a criminal trial, we take a complicated narrative of cause and effect and match it to a simple, impersonal code: first-degree murder, or second-degree murder, or manslaughter. The impersonality of codes is what makes the law fair. But it is also what can make the legal system so painful for victims, who find no room for their voices and their anger and their experiences. Codes punish, but they cannot heal.


10 early choices that helped make the Internet successful

From Dan Gillmor’s “10 choices that were critical to the Net’s success“:

1) Make it all work on top of existing networks.

2) Use packets, not circuits.

3) Create a ‘routing’ function.

4) Split the Transmission Control Protocol (TCP) and Internet Protocol (IP) …

5) The National Science Foundation (NSF) funds the University of California-Berkeley, to put TCP/IP into the Unix operating system originally developed by AT&T.

6) CSNET, an early network used by universities, connects with the ARPANET … The connection was for e-mail only, but it led to much more university research on networks and a more general understanding among students, faculty and staff of the value of internetworking.

7) The NSF requires users of the NSFNET to use TCP/IP, not competing protocols.

8) International telecommunications standards bodies reject TCP/IP, then create a separate standard called OSI.

9) The NSF creates an “Acceptable Use Policy” restricting NSFNET use to noncommercial activities.

10) Once things start to build, government stays mostly out of the way.


30 seconds to impress

From The Scotsman‘s “Men, you have 30 seconds to impress women“:

HALF of all women make their minds up within 30 seconds of meeting a man about whether he is potential boyfriend material, according to a study on speed-dating.

The women were on average far quicker at making a decision than the men during some 500 speed dates at an event organised as part of Edinburgh Science Festival.

The scientists behind the research said this showed just how important chat-up lines were in dating. They found that those who were “highly skilled in seduction” used chat-up lines that encouraged their dates to talk about themselves in “an unusual, quirky way”.

The top-rated male’s best line was “If you were on Stars In Their Eyes, who would you be?”, while the top-rated female asked bizarrely: “What’s your favourite pizza topping?”

Failed Casanovas were those who offered up hackneyed comments like “Do you come here often?”, or clumsy attempts to impress, such as “I have a PhD in computing”.

About a third of the speed dates were actually over within the first 30 seconds, but there was a marked difference between the sexes with 45 per cent of women coming to a decision within 30 seconds, compared with only 22 per cent of men.

… Conversation topics were also assessed. Only 9 per cent of pairs who talked about films agreed to meet again, compared with 18 per cent who spoke about the subject found to be the most suitable for dating: travel.

It is thought women’s taste for musicals clashed with the male liking for action films, while talking about “great holidays and dream destinations” made people feel good and appear more attractive to each other.
