
The Pareto Principle & Temperament Dimensions

From David Brooks’ “More Tools For Thinking” (The New York Times: 29 March 2011):

Clay Shirky nominates the Pareto Principle. We have the idea in our heads that most distributions fall along a bell curve (most people are in the middle). But this is not how the world is organized in sphere after sphere. The top 1 percent of the population control 35 percent of the wealth. The top 2 percent of Twitter users send 60 percent of the messages. The top 20 percent of workers in any company will produce a disproportionate share of the value. Shirky points out that these distributions are regarded as anomalies. They are not.
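Shirky's point, that heavy-tailed concentration is the norm rather than an anomaly, is easy to reproduce. A minimal sketch in standard-library Python, comparing a Pareto distribution against a bell curve (the shape parameter 1.16 is the textbook "80/20" value, chosen purely for illustration):

```python
import random

def top_share(values, top_frac=0.01):
    """Fraction of the total held by the top `top_frac` of values."""
    ranked = sorted(values, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

random.seed(42)
# Heavy-tailed "wealth": Pareto draws with shape alpha = 1.16
pareto = [random.paretovariate(1.16) for _ in range(100_000)]
# Bell-curve comparison: normal draws, clipped to stay positive
normal = [max(0.01, random.gauss(100, 15)) for _ in range(100_000)]

print(f"top 1% share, Pareto: {top_share(pareto):.0%}")
print(f"top 1% share, normal: {top_share(normal):.0%}")
```

On a typical run the Pareto top 1 percent holds a large fraction of the total, while the top 1 percent of the bell curve holds only a sliver, which is exactly the contrast Shirky is pointing at.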

Helen Fisher, the great researcher into love and romance, has a provocative entry on “temperament dimensions.” She writes that we have four broad temperament constellations. One, built around the dopamine system, regulates enthusiasm for risk. A second, structured around the serotonin system, regulates sociability. A third, organized around the prenatal testosterone system, regulates attention to detail and aggressiveness. A fourth, organized around the estrogen and oxytocin systems, regulates empathy and verbal fluency.

This is an interesting schema to explain temperament. It would be interesting to see others in the field evaluate whether this is the best way to organize our thinking about our permanent natures.

Richard Ford on how to deal with mythical narratives

From Bonnie Lyons’s interview of Richard Ford in “The Art of Fiction No. 147” (The Paris Review: Fall 1996, No. 140):

…when you start manipulating mythical narratives, whether you blunder into them or you do it by calculation, you’d better—to be in control of your book—reckon with their true potency and wide reference. They haven’t persisted all these years because they represent trivial human matters.

Dan Ariely on irrational decision making

From Dan Ariely’s “Dan Ariely asks, Are we in control of our own decisions?” (TED: 24 June 2009):

I’ll give you a couple more examples on irrational decision making. Imagine I give you a choice. Do you want to go for a weekend to Rome? All expenses paid, hotel, transportation, food, breakfast, a continental breakfast, everything. Or a weekend in Paris? Now, a weekend in Paris, a weekend in Rome, these are different things. They have different food, different culture, different art. Now imagine I added a choice to the set that nobody wanted. Imagine I said, “A weekend in Rome, a weekend in Paris, or having your car stolen?” It’s a funny idea. Because why would having your car stolen, in this set, influence anything? But what if the option to have your car stolen was not exactly like this. What if it was a trip to Rome, all expenses paid, transportation, breakfast. But it doesn’t include coffee in the morning. If you want coffee you have to pay for it yourself. It’s two euros 50. Now in some ways, given that you can have Rome with coffee, why would you possibly want Rome without coffee? It’s like having your car stolen. It’s an inferior option. But guess what happened. The moment you add Rome without coffee, Rome with coffee becomes more popular. And people choose it. The fact that you have Rome without coffee makes Rome with coffee look superior. And not just to Rome without coffee, even superior to Paris.

Here are two examples of this principle. This was an ad from The Economist a few years ago that gave us three choices. An online subscription for 59 dollars. A print subscription for 125. Or you could get both for 125. Now I looked at this and I called up The Economist. And I tried to figure out what were they thinking. And they passed me from one person to another to another. Until eventually I got to a person who was in charge of the website. And I called them up. And they went to check what was going on. The next thing I know, the ad is gone. And no explanation.

So I decided to do the experiment that I would have loved The Economist to do with me. I took this and I gave it to 100 MIT students. I said, “What would you choose?” These are the market shares. Most people wanted the combo deal. Thankfully nobody wanted the dominated option. That means our students can read. But now if you have an option that nobody wants you can take it off. Right? So I printed another version of this. Where I eliminated the middle option. I gave it to another 100 students. Here is what happens. Now the most popular option became the least popular. And the least popular became the most popular.

What was happening was the option that was useless, in the middle, was useless in the sense that nobody wanted it. But it wasn’t useless in the sense that it helped people figure out what they wanted. In fact, relative to the option in the middle, which was get only the print for 125, the print and web for 125 looked like a fantastic deal. And as a consequence, people chose it. The general idea here, by the way, is that we actually don’t know our preferences that well. And because we don’t know our preferences that well we’re susceptible to all of these influences from the external forces. The defaults, the particular options that are presented to us. And so on.

One more example of this. People believe that when we deal with physical attraction, we see somebody, and we know immediately whether we like them or not. Attracted or not. Which is why we have these four-minute dates. So I decided to do this experiment with people. I’ll show you graphic images of people — not real people. The experiment was with people. I showed some people a picture of Tom, and a picture of Jerry. I said “Who do you want to date? Tom or Jerry?” But for half the people I added an ugly version of Jerry. I took Photoshop and I made Jerry slightly less attractive. (Laughter) The other people, I added an ugly version of Tom. And the question was, will ugly Jerry and ugly Tom help their respective, more attractive brothers? The answer was absolutely yes. When ugly Jerry was around, Jerry was popular. When ugly Tom was around, Tom was popular.

Looking at others’ lives for clues to what might have been

From Tim Kreider’s “The Referendum” (The New York Times: 17 September 2009):

The Referendum is a phenomenon typical of (but not limited to) midlife, whereby people, increasingly aware of the finiteness of their time in the world, the limitations placed on them by their choices so far, and the narrowing options remaining to them, start judging their peers’ differing choices with reactions ranging from envy to contempt. The Referendum can subtly poison formerly close and uncomplicated relationships, creating tensions between the married and the single, the childless and parents, careerists and the stay-at-home. It’s exacerbated by the far greater diversity of options available to us now than a few decades ago, when everyone had to follow the same drill. We’re all anxiously sizing up how everyone else’s decisions have worked out to reassure ourselves that our own are vindicated — that we are, in some sense, winning.

It’s especially conspicuous among friends from youth. Young adulthood is an anomalous time in people’s lives; they’re as unlike themselves as they’re ever going to be, experimenting with substances and sex, ideology and religion, trying on different identities before their personalities immutably set. Some people flirt briefly with being freethinking bohemians before becoming their parents. Friends who seemed pretty much indistinguishable from you in your 20s make different choices about family or career, and after a decade or two these initial differences yield such radically divergent trajectories that when you get together again you can only regard each other’s lives with bemused incomprehension.

Yes: the Referendum gets unattractively self-righteous and judgmental. Quite a lot of what passes itself off as a dialogue about our society consists of people trying to justify their own choices as the only right or natural ones by denouncing others’ as selfish or pathological or wrong. So it’s easy to overlook that hidden beneath all this smug certainty is a poignant insecurity, and the naked 3 A.M. terror of regret.

The problem is, we only get one chance at this, with no do-overs. Life is, in effect, a non-repeatable experiment with no control. In his novel about marriage, “Light Years,” James Salter writes: “For whatever we do, even whatever we do not do prevents us from doing its opposite. Acts demolish their alternatives, that is the paradox.” Watching our peers’ lives is the closest we can come to a glimpse of the parallel universes in which we didn’t ruin that relationship years ago, or got that job we applied for, or got on that plane after all. It’s tempting to read other people’s lives as cautionary fables or repudiations of our own.

A colleague of mine once hosted a visiting cartoonist from Scandinavia who was on a promotional tour. My colleague, who has a university job, a wife and children, was clearly a little wistful about the tour, imagining Brussels, Paris, and London, meeting new fans and colleagues and being taken out for beers every night. The cartoonist, meanwhile, looked forlornly around at his host’s pleasant row house and sighed, almost to himself: “I would like to have such a house.”

One of the hardest things to look at in this life is the lives we didn’t lead, the path not taken, potential left unfulfilled. In stories, those who look back — Lot’s wife, Orpheus and Eurydice — are lost. Looking to the side instead, to gauge how our companions are faring, is a way of glancing at a safer reflection of what we cannot directly bear, like Perseus seeing the Gorgon safely mirrored in his shield.

Apple’s role in technology

From Doc Searls’s “The Most Personal Device” (Linux Journal: 1 March 2009):

My friend Keith Hopper made an interesting observation recently. He said one of Apple’s roles in the world is finding categories where progress is logjammed, and opening things up by coming out with a single solution that takes care of everything, from the bottom to the top. Apple did it with graphical computing, with .mp3 players, with on-line music sales and now with smartphones. In each case, it opens up whole new territories that can then be settled and expanded by other products, services and companies. Yes, it’s closed and controlling and the rest of it. But what matters is the new markets that open up.

What Google’s book settlement means

From Robert Darnton’s “Google & the Future of Books” (The New York Review of Books: 12 February 2009):

As the Enlightenment faded in the early nineteenth century, professionalization set in. You can follow the process by comparing the Encyclopédie of Diderot, which organized knowledge into an organic whole dominated by the faculty of reason, with its successor from the end of the eighteenth century, the Encyclopédie méthodique, which divided knowledge into fields that we can recognize today: chemistry, physics, history, mathematics, and the rest. In the nineteenth century, those fields turned into professions, certified by Ph.D.s and guarded by professional associations. They metamorphosed into departments of universities, and by the twentieth century they had left their mark on campuses…

Along the way, professional journals sprouted throughout the fields, subfields, and sub-subfields. The learned societies produced them, and the libraries bought them. This system worked well for about a hundred years. Then commercial publishers discovered that they could make a fortune by selling subscriptions to the journals. Once a university library subscribed, the students and professors came to expect an uninterrupted flow of issues. The price could be ratcheted up without causing cancellations, because the libraries paid for the subscriptions and the professors did not. Best of all, the professors provided free or nearly free labor. They wrote the articles, refereed submissions, and served on editorial boards, partly to spread knowledge in the Enlightenment fashion, but mainly to advance their own careers.

The result stands out on the acquisitions budget of every research library: the Journal of Comparative Neurology now costs $25,910 for a year’s subscription; Tetrahedron costs $17,969 (or $39,739, if bundled with related publications as a Tetrahedron package); the average price of a chemistry journal is $3,490; and the ripple effects have damaged intellectual life throughout the world of learning. Owing to the skyrocketing cost of serials, libraries that used to spend 50 percent of their acquisitions budget on monographs now spend 25 percent or less. University presses, which depend on sales to libraries, cannot cover their costs by publishing monographs. And young scholars who depend on publishing to advance their careers are now in danger of perishing.

The eighteenth-century Republic of Letters had been transformed into a professional Republic of Learning, and it is now open to amateurs—amateurs in the best sense of the word, lovers of learning among the general citizenry. Openness is operating everywhere, thanks to “open access” repositories of digitized articles available free of charge, the Open Content Alliance, the Open Knowledge Commons, OpenCourseWare, the Internet Archive, and openly amateur enterprises like Wikipedia. The democratization of knowledge now seems to be at our fingertips. We can make the Enlightenment ideal come to life in reality.

What provoked these jeremianic-utopian reflections? Google. Four years ago, Google began digitizing books from research libraries, providing full-text searching and making books in the public domain available on the Internet at no cost to the viewer. For example, it is now possible for anyone, anywhere to view and download a digital copy of the 1871 first edition of Middlemarch that is in the collection of the Bodleian Library at Oxford. Everyone profited, including Google, which collected revenue from some discreet advertising attached to the service, Google Book Search. Google also digitized an ever-increasing number of library books that were protected by copyright in order to provide search services that displayed small snippets of the text. In September and October 2005, a group of authors and publishers brought a class action suit against Google, alleging violation of copyright. Last October 28, after lengthy negotiations, the opposing parties announced agreement on a settlement, which is subject to approval by the US District Court for the Southern District of New York.

The settlement creates an enterprise known as the Book Rights Registry to represent the interests of the copyright holders. Google will sell access to a gigantic data bank composed primarily of copyrighted, out-of-print books digitized from the research libraries. Colleges, universities, and other organizations will be able to subscribe by paying for an “institutional license” providing access to the data bank. A “public access license” will make this material available to public libraries, where Google will provide free viewing of the digitized books on one computer terminal. And individuals also will be able to access and print out digitized versions of the books by purchasing a “consumer license” from Google, which will cooperate with the registry for the distribution of all the revenue to copyright holders. Google will retain 37 percent, and the registry will distribute 63 percent among the rightsholders.

Meanwhile, Google will continue to make books in the public domain available for users to read, download, and print, free of charge. Of the seven million books that Google reportedly had digitized by November 2008, one million are works in the public domain; one million are in copyright and in print; and five million are in copyright but out of print. It is this last category that will furnish the bulk of the books to be made available through the institutional license.

Many of the in-copyright and in-print books will not be available in the data bank unless the copyright owners opt to include them. They will continue to be sold in the normal fashion as printed books and also could be marketed to individual customers as digitized copies, accessible through the consumer license for downloading and reading, perhaps eventually on e-book readers such as Amazon’s Kindle.

After reading the settlement and letting its terms sink in—no easy task, as it runs to 134 pages and 15 appendices of legalese—one is likely to be dumbfounded: here is a proposal that could result in the world’s largest library. It would, to be sure, be a digital library, but it could dwarf the Library of Congress and all the national libraries of Europe. Moreover, in pursuing the terms of the settlement with the authors and publishers, Google could also become the world’s largest book business—not a chain of stores but an electronic supply service that could out-Amazon Amazon.

An enterprise on such a scale is bound to elicit reactions of the two kinds that I have been discussing: on the one hand, utopian enthusiasm; on the other, jeremiads about the danger of concentrating power to control access to information.

Google is not a guild, and it did not set out to create a monopoly. On the contrary, it has pursued a laudable goal: promoting access to information. But the class action character of the settlement makes Google invulnerable to competition. Most book authors and publishers who own US copyrights are automatically covered by the settlement. They can opt out of it; but whatever they do, no new digitizing enterprise can get off the ground without winning their assent one by one, a practical impossibility, or without becoming mired down in another class action suit. If approved by the court—a process that could take as much as two years—the settlement will give Google control over the digitizing of virtually all books covered by copyright in the United States.

Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

Google’s record suggests that it will not abuse its double-barreled fiscal-legal power. But what will happen if its current leaders sell the company or retire? The public will discover the answer from the prices that the future Google charges, especially the price of the institutional subscription licenses. The settlement leaves Google free to negotiate deals with each of its clients, although it announces two guiding principles: “(1) the realization of revenue at market rates for each Book and license on behalf of the Rightsholders and (2) the realization of broad access to the Books by the public, including institutions of higher education.”

What will happen if Google favors profitability over access? Nothing, if I read the terms of the settlement correctly. Only the registry, acting for the copyright holders, has the power to force a change in the subscription prices charged by Google, and there is no reason to expect the registry to object if the prices are too high. Google may choose to be generous in its pricing, and I have reason to hope it may do so; but it could also employ a strategy comparable to the one that proved to be so effective in pushing up the price of scholarly journals: first, entice subscribers with low initial rates, and then, once they are hooked, ratchet up the rates as high as the traffic will bear.

RFID security problems

2005

From Brian Krebs’ “Leaving Las Vegas: So Long DefCon and Blackhat” (The Washington Post: 1 August 2005):

DefCon 13 also was notable for being the location where two new world records were set — both involved shooting certain electronic signals unprecedented distances. Los Angeles-based Flexilis set the world record for transmitting data to and from a “passive” radio frequency identification (RFID) card — covering a distance of more than 69 feet. (Active RFID — the kind being integrated into foreign passports, for example — differs from passive RFID in that it emits its own magnetic signal and can only be detected from a much shorter distance.)

The second record set this year at DefCon was pulled off by some teens from Cincinnati, who broke the world record they set last year by building a device capable of maintaining an unamplified, 11-megabit 802.11b wireless Internet connection over a distance of 125 miles (the network actually spanned from Utah into Nevada).

From Andrew Brandt’s “Black Hat, Lynn Settle with Cisco, ISS” (PC World: 29 July 2005):

Security researcher Kevin Mahaffey makes a final adjustment to a series of radio antennas; Mahaffey used the directional antennas in a demonstration during his presentation, “Long Range RFID and its Security Implications.” Mahaffey and two of his colleagues demonstrated how he could increase the “read range” of radio frequency identification (RFID) tags from the typical four to six inches to approximately 50 feet. Mahaffey said the tags could be read at a longer distance, but he wanted to perform the demonstration in the room where he gave the presentation, and that was the greatest distance within the room that he could demonstrate. RFID tags such as the one Mahaffey tested will begin to appear in U.S. passports later this year or next year.

2006

From Joris Evers and Declan McCullagh’s “Researchers: E-passports pose security risk” (CNET: 5 August 2006):

At a pair of security conferences here, researchers demonstrated that passports equipped with radio frequency identification (RFID) tags can be cloned with a laptop equipped with a $200 RFID reader and a similarly inexpensive smart card writer. In addition, they suggested that RFID tags embedded in travel documents could identify U.S. passports from a distance, possibly letting terrorists use them as a trigger for explosives.

At the Black Hat conference, Lukas Grunwald, a researcher with DN-Systems in Hildesheim, Germany, demonstrated that he could copy data stored in an RFID tag from his passport and write the data to a smart card equipped with an RFID chip.

From Kim Zetter’s “Hackers Clone E-Passports” (Wired: 3 August 2006):

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker — Walluf, Germany-based ACG Identification Technologies — but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports — called Golden Reader Tool and made by secunet Security Networks — and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader — which can also act as a writer — and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

Although he can clone the tag, Grunwald says it’s not possible, as far as he can tell, to change data on the chip, such as the name or birth date, without being detected. That’s because the passport uses cryptographic hashes to authenticate the data.

Grunwald’s technique requires a counterfeiter to have physical possession of the original passport for a time. A forger could not surreptitiously clone a passport in a traveler’s pocket or purse because of a built-in privacy feature called Basic Access Control that requires officials to unlock a passport’s RFID chip before reading it. The chip can only be unlocked with a unique key derived from the machine-readable data printed on the passport’s page.

To produce a clone, Grunwald has to program his copycat chip to answer to the key printed on the new passport. Alternatively, he can program the clone to dispense with Basic Access Control, which is an optional feature in the specification.
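The Basic Access Control scheme described above derives the chip's keys from data printed in the passport's machine-readable zone. A simplified sketch of the ICAO 9303 key-derivation idea (the real specification also folds check digits into the MRZ string and adjusts DES key parity, both omitted here; the sample values are illustrative):

```python
import hashlib

def bac_keys(doc_number, birth_date, expiry_date):
    """Derive BAC encryption and MAC keys from machine-readable-zone data.

    Simplified: the real ICAO 9303 scheme pads the document number,
    appends check digits, and adjusts DES key parity bits.
    """
    mrz_info = (doc_number + birth_date + expiry_date).encode("ascii")
    # Key seed: first 16 bytes of SHA-1 over the MRZ information
    seed = hashlib.sha1(mrz_info).digest()[:16]
    # Separate encryption and MAC keys via a 32-bit counter suffix
    k_enc = hashlib.sha1(seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac

k_enc, k_mac = bac_keys("L898902C3", "690806", "940623")
```

This is why Grunwald needs the printed page of the target passport: without the MRZ data there is no way to compute the key the chip expects.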

As planned, U.S. e-passports will contain a web of metal fiber embedded in the front cover of the documents to shield them from unauthorized readers. Though Basic Access Control would keep the chip from yielding useful information to attackers, it would still announce its presence to anyone with the right equipment. The government added the shielding after privacy activists expressed worries that a terrorist could simply point a reader at a crowd and identify foreign travelers.

In theory, with metal fibers in the front cover, nobody can sniff out the presence of an e-passport that’s closed. But [Kevin Mahaffey and John Hering of Flexilis] demonstrated in their video how even if a passport opens only half an inch — such as it might if placed in a purse or backpack — it can reveal itself to a reader at least two feet away.

In addition to cloning passport chips, Grunwald has been able to clone RFID ticket cards used by students at universities to buy cafeteria meals and add money to the balance on the cards.

He and his partners were also able to crash RFID-enabled alarm systems designed to sound when an intruder breaks a window or door to gain entry. Such systems require workers to pass an RFID card over a reader to turn the system on and off. Grunwald found that by manipulating data on the RFID chip he could crash the system, opening the way for a thief to break into the building through a window or door.

And they were able to clone and manipulate RFID tags used in hotel room key cards and corporate access cards and create a master key card to open every room in a hotel, office or other facility. He was able, for example, to clone Mifare, the most commonly used key-access system, designed by Philips Electronics. To create a master key he simply needed two or three key cards for different rooms to determine the structure of the cards. Of the 10 different types of RFID systems he examined that were being used in hotels, none used encryption.

Many of the card systems that did use encryption failed to change the default key that manufacturers program into the access card system before shipping, or they used sample keys that the manufacturer includes in instructions sent with the cards. Grunwald and his partners created a dictionary database of all the sample keys they found in such literature (much of which they found accidentally published on purchasers’ websites) to conduct what’s known as a dictionary attack. When attacking a new access card system, their RFDump program would search the list until it found the key that unlocked a card’s encryption.

“I was really surprised we were able to open about 75 percent of all the cards we collected,” he says.
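The dictionary attack Grunwald describes is conceptually simple: try each published sample key in turn until the card accepts one. A hypothetical sketch (the keys and the simulated card here are invented for illustration, not real vendor defaults):

```python
# Hypothetical list of sample keys harvested from vendor documentation
SAMPLE_KEYS = [b"A0A1A2A3A4A5", b"FFFFFFFFFFFF", b"D3F7D3F7D3F7"]

def try_keys(card_authenticate, candidates):
    """Return the first key the card accepts, or None (a dictionary attack)."""
    for key in candidates:
        if card_authenticate(key):
            return key
    return None

# Simulated card whose installer never changed the shipped sample key
card = lambda key: key == b"FFFFFFFFFFFF"
found = try_keys(card, SAMPLE_KEYS)
```

The 75 percent success rate quoted above is a measure of how rarely installers bothered to replace those shipped keys.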

2009

From Thomas Ricker’s “Video: Hacker war drives San Francisco cloning RFID passports” (Engadget: 2 February 2009):

Using a $250 Motorola RFID reader and antenna connected to his laptop, Chris recently drove around San Francisco reading RFID tags from passports, driver’s licenses, and other identity documents. In just 20 minutes, he found and cloned the passports of two very unaware US citizens.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.
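A telescope operator watching those logs needs little more than a threshold on hourly contact counts to see the flood. A toy sketch, with counts loosely modeled on the figures in the article:

```python
def flags_spike(hourly_counts, baseline, factor=5):
    """Return indices of hours where contacts exceed `factor` x baseline."""
    return [i for i, c in enumerate(hourly_counts) if c > factor * baseline]

# Roughly 3,000 contacts/hour of background noise, climbing toward
# the 115,000/hour the telescope saw the following morning
counts = [3000, 3100, 2900, 8000, 40000, 115000]
alerts = flags_spike(counts, baseline=3000)
```

Because legitimate traffic to the dummy addresses is essentially zero, even a crude threshold like this separates an outbreak from the background of older malware.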

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.
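The scheme described above is what is now called a domain-generation algorithm: the worm and its authors can both compute the day's URL list, but defenders cannot predict it without reverse-engineering the code. A hypothetical sketch of the idea (this is not Conficker's actual algorithm; the hash, seed, and name length are invented for illustration):

```python
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day, n=250, seed=b"worm-secret"):
    """Derive n pseudo-random domain names from the date.

    Both the worm and its authors compute the same list, so the
    authors need to register only one name to issue new instructions.
    """
    urls = []
    for i in range(n):
        material = seed + day.isoformat().encode() + bytes([i % 256, i // 256])
        digest = hashlib.md5(material).hexdigest()
        # Map hex digits onto lowercase letters for the name
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:8])
        urls.append(name + TLDS[int(digest[8], 16) % len(TLDS)])
    return urls

domains = daily_domains(date(2009, 1, 15))
```

Because the list is a deterministic function of the date, registering any one of the day's 250 names ahead of time is enough to reach every infected machine, while defenders must block all 250 every day.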

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

How security experts defended against Conficker Read More »

David Foster Wallace on the importance of writing within formal constraints

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

You’re probably right about appreciating limits. The sixties’ movement in poetry to radical free verse, in fiction to radically experimental recursive forms—their legacy to my generation of would-be artists is at least an incentive to ask very seriously where literary art’s true relation to limits should be. We’ve seen that you can break any or all of the rules without getting laughed out of town, but we’ve also seen the toxicity that anarchy for its own sake can yield. It’s often useful to dispense with standard formulas, of course, but it’s just as often valuable and brave to see what can be done within a set of rules—which is why formal poetry’s so much more interesting to me than free verse. Maybe our touchstone now should be G. M. Hopkins, who made up his “own” set of formal constraints and then blew everyone’s footwear off from inside them. There’s something about free play within an ordered and disciplined structure that resonates for readers. And there’s something about complete caprice and flux that’s deadening.

David Foster Wallace on the importance of writing within formal constraints Read More »

All about freezing to death

Ice mask, C.T. Madigan / photograph by Frank Hurley
Creative Commons License photo credit: State Library of New South Wales collection

From Peter Stark’s “As Freezing Persons Recollect the Snow–First Chill–Then Stupor–Then the Letting Go” (Outside: January 1997):

There is no precise core temperature at which the human body perishes from cold. At Dachau’s cold-water immersion baths, Nazi doctors calculated death to arrive at around 77 degrees Fahrenheit. The lowest recorded core temperature in a surviving adult is 60.8 degrees. For a child it’s lower: In 1994, a two-year-old girl in Saskatchewan wandered out of her house into a minus-40 night. She was found near her doorstep the next morning, limbs frozen solid, her core temperature 57 degrees. She lived.

The cold remains a mystery, more prone to fell men than women, more lethal to the thin and well muscled than to those with avoirdupois, and least forgiving to the arrogant and the unaware.

Were you a Norwegian fisherman or Inuit hunter, both of whom frequently work gloveless in the cold, your chilled hands would open their surface capillaries periodically to allow surges of warm blood to pass into them and maintain their flexibility. This phenomenon, known as the hunter’s response, can elevate a 35-degree skin temperature to 50 degrees within seven or eight minutes.

Other human adaptations to the cold are more mysterious. Tibetan Buddhist monks can raise the skin temperature of their hands and feet by 15 degrees through meditation. Australian aborigines, who once slept on the ground, unclothed, on near-freezing nights, would slip into a light hypothermic state, suppressing shivering until the rising sun rewarmed them.

The exertion that warmed you on the way uphill now works against you: Your exercise-dilated capillaries carry the excess heat of your core to your skin, and your wet clothing dispels it rapidly into the night. The lack of insulating fat over your muscles allows the cold to creep that much closer to your warm blood.

Your temperature begins to plummet. Within 17 minutes it reaches the normal 98.6. Then it slips below.

At 97 degrees, hunched over in your slow search, the muscles along your neck and shoulders tighten in what’s known as pre-shivering muscle tone. Sensors have signaled the temperature control center in your hypothalamus, which in turn has ordered the constriction of the entire web of surface capillaries. Your hands and feet begin to ache with cold.

At 95, you’ve entered the zone of mild hypothermia. You’re now trembling violently as your body attains its maximum shivering response, an involuntary condition in which your muscles contract rapidly to generate additional body heat.

And after this long stop, the skiing itself has become more difficult. By the time you push off downhill, your muscles have cooled and tightened so dramatically that they no longer contract easily, and once contracted, they won’t relax. You’re locked into an ungainly, spread-armed, weak-kneed snowplow.

As you sink back into the snow, shaken, your heat begins to drain away at an alarming rate, your head alone accounting for 50 percent of the loss. The pain of the cold soon pierces your ears so sharply that you root about in the snow until you find your hat and mash it back onto your head.

But even that little activity has been exhausting. You know you should find your glove as well, and yet you’re becoming too weary to feel any urgency. You decide to have a short rest before going on.

An hour passes. At one point, a stray thought says you should start being scared, but fear is a concept that floats somewhere beyond your immediate reach, like that numb hand lying naked in the snow. You’ve slid into the temperature range at which cold renders the enzymes in your brain less efficient. With every one-degree drop in body temperature below 95, your cerebral metabolic rate falls off by 3 to 5 percent. When your core temperature reaches 93, amnesia nibbles at your consciousness.

In the minus-35-degree air, your core temperature falls about one degree every 30 to 40 minutes, your body heat leaching out into the soft, enveloping snow. Apathy at 91 degrees. Stupor at 90.

You’ve now crossed the boundary into profound hypothermia. By the time your core temperature has fallen to 88 degrees, your body has abandoned the urge to warm itself by shivering. Your blood is thickening like crankcase oil in a cold engine. Your oxygen consumption, a measure of your metabolic rate, has fallen by more than a quarter. Your kidneys, however, work overtime to process the fluid overload that occurred when the blood vessels in your extremities constricted and squeezed fluids toward your center. You feel a powerful urge to urinate, the only thing you feel at all.

By 87 degrees you’ve lost the ability to recognize a familiar face, should one suddenly appear from the woods.

At 86 degrees, your heart, its electrical impulses hampered by chilled nerve tissues, becomes arrhythmic. It now pumps less than two-thirds the normal amount of blood. The lack of oxygen and the slowing metabolism of your brain, meanwhile, begin to trigger visual and auditory hallucinations.

At 85 degrees, those freezing to death, in a strange, anguished paroxysm, often rip off their clothes. This phenomenon, known as paradoxical undressing, is common enough that urban hypothermia victims are sometimes initially diagnosed as victims of sexual assault. Though researchers are uncertain of the cause, the most logical explanation is that shortly before loss of consciousness, the constricted blood vessels near the body’s surface suddenly dilate and produce a sensation of extreme heat against the skin.

There’s an adage about hypothermia: “You aren’t dead until you’re warm and dead.”

At about 6:00 the next morning, his friends, having discovered the stalled Jeep, find him, still huddled inches from the buried log, his gloveless hand shoved into his armpit. The flesh of his limbs is waxy and stiff as old putty, his pulse nonexistent, his pupils unresponsive to light. Dead.

But those who understand cold know that even as it deadens, it offers perverse salvation. Heat is a presence: the rapid vibrating of molecules. Cold is an absence: the damping of the vibrations. At absolute zero, minus 459.67 degrees Fahrenheit, molecular motion ceases altogether. It is this slowing that converts gases to liquids, liquids to solids, and renders solids harder. It slows bacterial growth and chemical reactions. In the human body, cold shuts down metabolism. The lungs take in less oxygen, the heart pumps less blood. Under normal temperatures, this would produce brain damage. But the chilled brain, having slowed its own metabolism, needs far less oxygen-rich blood and can, under the right circumstances, survive intact.

Setting her ear to his chest, one of his rescuers listens intently. Seconds pass. Then, faintly, she hears a tiny sound–a single thump, so slight that it might be the sound of her own blood. She presses her ear harder to the cold flesh. Another faint thump, then another.

The slowing that accompanies freezing is, in its way, so beneficial that it is even induced at times. Cardiologists today often use deep chilling to slow a patient’s metabolism in preparation for heart or brain surgery. In this state of near suspension, the patient’s blood flows slowly, his heart rarely beats–or in the case of those on heart-lung machines, doesn’t beat at all; death seems near. But carefully monitored, a patient can remain in this cold stasis, undamaged, for hours.

In fact, many hypothermia victims die each year in the process of being rescued. In “rewarming shock,” the constricted capillaries reopen almost all at once, causing a sudden drop in blood pressure. The slightest movement can send a victim’s heart muscle into wild spasms of ventricular fibrillation. In 1980, 16 shipwrecked Danish fishermen were hauled to safety after an hour and a half in the frigid North Sea. They then walked across the deck of the rescue ship, stepped below for a hot drink, and dropped dead, all 16 of them.

The doctor rapidly issues orders to his staff: intravenous administration of warm saline, the bag first heated in the microwave to 110 degrees. Elevating the core temperature of an average-size male one degree requires adding about 60 kilocalories of heat. A kilocalorie is the amount of heat needed to raise the temperature of one liter of water one degree Celsius. Since a quart of hot soup at 140 degrees offers about 30 kilocalories, the patient curled on the table would need to consume 40 quarts of chicken broth to push his core temperature up to normal. Even the warm saline, infused directly into his blood, will add only 30 kilocalories.
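The article's arithmetic can be checked directly using its own figures (60 kcal per degree of core temperature, 30 kcal per quart of hot soup, 30 kcal from the warm saline). The 20-degree deficit below is an assumption implied by the article's "40 quarts", not a number it states:

```python
KCAL_PER_DEGREE = 60       # heat needed to raise an average male's core 1 degree
KCAL_PER_QUART_SOUP = 30   # heat delivered by a quart of 140-degree soup
KCAL_FROM_SALINE = 30      # heat delivered by the warm saline infusion

degrees_below_normal = 20  # assumed deficit, implied by the article's figures

total_kcal_needed = degrees_below_normal * KCAL_PER_DEGREE   # 1200 kcal
quarts_of_soup = total_kcal_needed / KCAL_PER_QUART_SOUP     # 40.0 quarts
saline_degrees = KCAL_FROM_SALINE / KCAL_PER_DEGREE          # 0.5 degree

print(quarts_of_soup)   # 40.0
print(saline_degrees)   # 0.5
```

The last line is the striking one: the entire saline infusion buys only half a degree, which is why external rewarming alone is so slow.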

Ideally, the doctor would have access to a cardiopulmonary bypass machine, with which he could pump out the victim’s blood, rewarm and oxygenate it, and pump it back in again, safely raising the core temperature as much as one degree every three minutes. But such machines are rarely available outside major urban hospitals.

You’d nod if you could. But you can’t move. All you can feel is throbbing discomfort everywhere. Glancing down to where the pain is most biting, you notice blisters filled with clear fluid dotting your fingers, once gloveless in the snow. During the long, cold hours the tissue froze and ice crystals formed in the tiny spaces between your cells, sucking water from them, blocking the blood supply. You stare at them absently.

“I think they’ll be fine,” a voice from overhead says. “The damage looks superficial. We expect that the blisters will break in a week or so, and the tissue should revive after that.”

If not, you know that your fingers will eventually turn black, the color of bloodless, dead tissue. And then they will be amputated.

You’ve seen that in the infinite reaches of the universe, heat is as glorious and ephemeral as the light of the stars. Heat exists only where matter exists, where particles can vibrate and jump. In the infinite winter of space, heat is tiny; it is the cold that is huge.

All about freezing to death Read More »

Green Dam is easily exploitable


From Scott Wolchok, Randy Yao, and J. Alex Halderman’s “Analysis of the Green Dam Censorware System” (The University of Michigan: 11 June 2009):

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC.

According to press reports, China will soon require all PCs sold in the country to include Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material.

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

Green Dam is easily exploitable Read More »

The watchclock knows where your night watchman is

Detex Watchclock Station
Creative Commons License photo credit: 917press

From Christopher Fahey’s “Who Watches the Watchman?” (GraphPaper: 2 May 2009):

The Detex Newman watchclock was first introduced in 1927 and is still in wide use today.

… What could you possibly do in 1900 to be absolutely sure a night watchman was making his full patrol?

An elegant solution, designed and patented in 1901 by the German engineer A.A. Newman, is called the “watchclock”. It’s an ingenious mechanical device, slung over the shoulder like a canteen and powered by a simple wind-up spring mechanism. It precisely tracks and records a night watchman’s position in both space and time for the duration of every evening. It also generates a detailed, permanent, and verifiable record of each night’s patrol.

What’s so interesting to me about the watchclock is that it’s an early example of interaction design used to explicitly control user behavior. The “user” of the watchclock device is obliged to behave in a strictly delimited fashion.

The key, literally, to the watchclock system is that the watchman is required to “clock in” at a series of perhaps a dozen or more checkpoints throughout the premises. Positioned at each checkpoint is a unique, coded key nestled in a little steel box and secured by a small chain. Each keybox is permanently and discreetly installed in strategically-placed nooks and crannies throughout the building, for example in a broom closet or behind a stairway.

The watchman makes his patrol. He visits every checkpoint and clicks each unique key into the watchclock. Within the device, the clockwork marks the exact time and key-location code to a paper disk or strip. If the watchman visits all checkpoints in order, he will have completed his required patrol route.

The watchman’s supervisor can subsequently unlock the device itself (the watchman himself cannot open the watchclock) and review the paper records to confirm whether the watchman was doing his job.

The watchclock knows where your night watchman is Read More »

Another huge botnet

From Kelly Jackson Higgins’ “Researchers Find Massive Botnet On Nearly 2 Million Infected Consumer, Business, Government PCs” (Dark Reading: 22 April 2009):

Researchers have discovered a major botnet operating out of the Ukraine that has infected 1.9 million machines, including large corporate and government PCs mainly in the U.S.

The botnet, which appears to be larger than the infamous Storm botnet was in its heyday, has infected machines from some 77 government-owned domains — 51 of which are U.S. government ones, according to Ophir Shalitin, marketing director of Finjan, which recently found the botnet. Shalitin says the botnet is controlled by six individuals and is hosted in Ukraine.

Aside from its massive size and scope, what is also striking about the botnet is what its malware can do to an infected machine. The malware lets an attacker read the victim’s email, communicate via HTTP in the botnet, inject code into other processes, visit Websites without the user knowing, and register as a background service on the infected machine, for instance.

Finjan says victims are infected when visiting legitimate Websites containing a Trojan that the company says is detected by only four of 39 anti-malware tools, according to a VirusTotal report run by Finjan researchers.

Around 45 percent of the bots are in the U.S., and the machines run Windows XP. Nearly 80 percent run Internet Explorer; 15 percent, Firefox; 3 percent, Opera; and 1 percent Safari. Finjan says the bots were found in banks and large corporations, as well as consumer machines.

Another huge botnet Read More »

Social software: 5 properties & 3 dynamics

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Certain properties are core to social media in a combination that alters how people engage with one another. I want to discuss five properties of social media and three dynamics. These are the crux of what makes the phenomena we’re seeing so different from unmediated phenomena.

A great deal of sociality is about engaging with publics, but we take for granted certain structural aspects of those publics.

1. Persistence. What you say sticks around. This is great for asynchronicity, not so great when everything you’ve ever said has gone down on your permanent record. …

2. Replicability. You can copy and paste a conversation from one medium to another, adding to the persistent nature of it. This is great for being able to share information, but it is also at the crux of rumor-spreading. Worse: while you can replicate a conversation, it’s much easier to alter what’s been said than to confirm that it’s an accurate portrayal of the original conversation.

3. Searchability. My mother would’ve loved to scream “search” into the air and figure out where I’d run off with friends. She couldn’t; I’m quite thankful. But with social media, it’s quite easy to track someone down or to find someone as a result of searching for content. Search changes the landscape, making information available at our fingertips. This is great in some circumstances, but when trying to avoid those who hold power over you, it may be less than ideal.

4. Scalability. Social media scales things in new ways. Conversations that were intended for just a friend or two might spiral out of control and scale to the entire school or, if it is especially embarrassing, the whole world. …

5. (De)locatability. With the mobile, you are dislocated from any particular point in space, but at the same time, location-based technologies make location much more relevant. This paradox means that we are simultaneously more and less connected to physical space.

Those five properties are intertwined, but their implications have to do with the ways in which they alter social dynamics. Let’s look at three different dynamics that have been reconfigured as a result of social media.

1. Invisible Audiences. We are used to being able to assess the people around us when we’re speaking. We adjust what we’re saying to account for the audience. Social media introduces all sorts of invisible audiences. There are lurkers who are present at the moment but whom we cannot see, but there are also visitors who access our content at a later date or in a different environment than where we first produced them. As a result, we are having to present ourselves and communicate without fully understanding the potential or actual audience. The potential invisible audiences can be stifling. Of course, there’s plenty of room to put your head in the sand and pretend like those people don’t really exist.

2. Collapsed Contexts. Connected to this is the collapsing of contexts. In choosing what to say when, we account for both the audience and the context more generally. Some behaviors are appropriate in one context but not another, in front of one audience but not others. Social media brings all of these contexts crashing into one another and it’s often difficult to figure out what’s appropriate, let alone what can be understood.

3. Blurring of Public and Private. Finally, there’s the blurring of public and private. These distinctions are normally structured around audience and context with certain places or conversations being “public” or “private.” These distinctions are much harder to manage when you have to contend with the shifts in how the environment is organized.

All of this means that we’re forced to contend with a society in which things are being truly reconfigured. So what does this mean? As we are already starting to see, this creates all new questions about context and privacy, about our relationship to space and to the people around us.

Social software: 5 properties & 3 dynamics Read More »

How the fundamentalist thinks

From ScienceDaily’s “Brain Differences Found Between Believers In God And Non-believers” (5 March 2009):

In two studies led by Assistant Psychology Professor Michael Inzlicht, participants performed a Stroop task – a well-known test of cognitive control – while hooked up to electrodes that measured their brain activity.

Compared to non-believers, the religious participants showed significantly less activity in the anterior cingulate cortex (ACC), a portion of the brain that helps modify behavior by signaling when attention and control are needed, usually as a result of some anxiety-producing event like making a mistake. The stronger their religious zeal and the more they believed in God, the less their ACC fired in response to their own errors, and the fewer errors they made.

“You could think of this part of the brain like a cortical alarm bell that rings when an individual has just made a mistake or experiences uncertainty,” says lead author Inzlicht, who teaches and conducts research at the University of Toronto Scarborough. “We found that religious people or even people who simply believe in the existence of God show significantly less brain activity in relation to their own errors. They’re much less anxious and feel less stressed when they have made an error.”

“Obviously, anxiety can be negative because if you have too much, you’re paralyzed with fear,” [Inzlicht] says. “However, it also serves a very useful function in that it alerts us when we’re making mistakes. If you don’t experience anxiety when you make an error, what impetus do you have to change or improve your behaviour so you don’t make the same mistakes again and again?”

How the fundamentalist thinks Read More »

Why cons work on us

From Damien Carrick’s interview with Nicholas Johnson, “The psychology of conmen” (The Law Report: 30 September 2008):

Nicholas Johnson: I think what I love most about con artists and the world of scammers is that they’re criminals who manage to get their victims to hand over their possessions freely. Most thieves and robbers and the like, tend to use force, or deception, in order for them to take things, whereas a con artist manages to get their victim to freely give up their stuff.

The main thing that really makes people susceptible to con artists is the idea that we’re going to get something for nothing. So it really buys into our greed; it buys into sometimes our lust, and at the same time, sometimes even our sense that we’re going to do something good, so we’re going to get a great feeling from helping someone out, we’re going to make some money, we’re going to meet a beautiful girl—it really ties into our basest desires, and that’s what the con artist relies on.

Most con artists rely on this idea that the victim is in control. The victim is the one who is controlling the situation. So a great example of that is the classic Nigerian email scam, the person who writes to you and says, ‘I’ve got this money that I need to get out of the country, and I need your help.’ So you’re in control, you can help them, you can do a good deed, you can make some money, you’ve got this fantastic opportunity, and the con artist needs your help. It’s not the con artist doing you a favour. So really, you feel like you’re the one who’s controlling the situation when really it’s the con artist who knows the real deal.

I think for a lot of con artists they’re very proud of their work, and they like people to know exactly what they’ve gotten away with.

… for many of [the conmen], they really feel like even if they get caught, or even if they don’t get away with it, they feel like they’re giving their victim a good story, you know, something to dine out over, something to discuss down at the pub. They think that’s OK, you can scam somebody out of a couple of hundred bucks, because they’re getting a good story in return.

My all-time favourite one only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.

But there’s all sorts of those homemaker scams, people offering to seal your roof so they say, ‘We’ll put a fresh coat of tar on your roof’, or ‘We’ll re-seal your driveway’. In actual fact all they do is get old black sump oil and smooth it over the roof or smooth it over the driveway. You come home and it looks like wet tar, and so ‘Don’t step on it for 24 hours’, and of course 24 hours later they’re long gone with the money, and you’re left with a sticky, smelly driveway.

Why cons work on us Read More »

Chemically remove bad memories

From Nicholas Carr’s “Remembering to forget” (Rough Type: 22 October 2008):

Slowly but surely, scientists are getting closer to developing a drug that will allow people to eliminate unpleasant memories. The new issue of Neuron features a report from a group of Chinese scientists who were able to use a chemical – the protein alpha-CaM kinase II – to successfully erase memories from the minds of mice. The memory losses, report the authors, are “not caused by disrupting the retrieval access to the stored information but are, rather, due to the active erasure of the stored memories.” The erasure, moreover, “is highly restricted to the memory being retrieved while leaving other memories intact. Therefore, our study reveals a molecular genetic paradigm through which a given memory, such as new or old fear memory, can be rapidly and specifically erased in a controlled and inducible manner in the brain.”

One can think of a whole range of applications, from the therapeutic to the cosmetic to the political.

Chemically remove bad memories Read More »

Conficker creating a new gargantuan botnet

From Asavin Wattanajantra’s “Windows worm could create the ‘world’s biggest botnet’” (IT PRO: 19 January 2009):

The Downadup or “Conficker” worm has grown to over nine million infections over the weekend – up from 2.4 million just four days earlier, according to F-Secure.

The worm has password-cracking capabilities, which often succeed because company passwords sometimes match a predefined password list that the worm carries.

Corporate networks around the world have already been infected by the network worm, which is particularly hard to eradicate as it is able to evolve – making use of a long list of websites – by downloading another version of itself.

Rik Ferguson, solution architect at Trend Micro, told IT PRO that the worm was very difficult to block for security companies as they had to make sure that they blocked every single one of the hundreds of domains that it could download from.
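The "long list of websites" is generated algorithmically: worm and operators both compute the same daily list, so the operators need to register only one of the domains while defenders must block them all. A hedged sketch of a date-seeded domain-generation algorithm (an illustration of the technique only, not Conficker's actual algorithm):

```python
import hashlib
from datetime import date

def generate_domains(day, count=250, tlds=("com", "net", "org")):
    """Illustrative date-seeded domain-generation algorithm (DGA).

    Deterministic in the date, so infected machines and the worm's
    operators independently derive the same daily rendezvous list.
    """
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        name = hashlib.md5(seed).hexdigest()[:10]  # pseudo-random label
        domains.append(f"{name}.{tlds[i % len(tlds)]}")
    return domains

todays_list = generate_domains(date(2009, 1, 19))
print(len(todays_list), todays_list[0])
```

With hundreds of fresh candidate domains every day, blocklisting each one, as Ferguson describes, becomes a constant race for security companies.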

Ferguson said that the worm was creating a staggering number of infections, even if only the most conservative estimates are taken into account. He said: “What’s particularly interesting about this worm is that it is the first hybrid with old school worm infection capabilities and command and control infrastructure.”

Conficker creating a new gargantuan botnet Read More »

US government makes unsafe RFID-laden passports even less safe through business practices

From Bill Gertz’s “Outsourced passports netting govt. profits, risking national security” (The Washington Times: 26 March 2008):

The United States has outsourced the manufacturing of its electronic passports to overseas companies — including one in Thailand that was victimized by Chinese espionage — raising concerns that cost savings are being put ahead of national security, an investigation by The Washington Times has found.

The Government Printing Office’s decision to export the work has proved lucrative, allowing the agency to book more than $100 million in recent profits by charging the State Department more money for blank passports than it actually costs to make them, according to interviews with federal officials and documents obtained by The Times.

The profits have raised questions both inside the agency and in Congress because the law that created GPO as the federal government’s official printer explicitly requires the agency to break even by charging only enough to recover its costs.

Lawmakers said they were alarmed by The Times’ findings and plan to investigate why U.S. companies weren’t used to produce the state-of-the-art passports, one of the crown jewels of American border security.

Officials at GPO, the Homeland Security Department and the State Department played down such concerns, saying they are confident that regular audits and other protections already in place will keep terrorists and foreign spies from stealing or copying the sensitive components to make fake passports.

“Aside from the fact that we have fully vetted and qualified vendors, we also note that the materials are moved via a secure transportation means, including armored vehicles,” GPO spokesman Gary Somerset said.

But GPO Inspector General J. Anthony Ogden, the agency’s internal watchdog, doesn’t share that confidence. He warned in an internal Oct. 12 report that there are “significant deficiencies with the manufacturing of blank passports, security of components, and the internal controls for the process.”

The inspector general’s report said GPO claimed it could not improve its security because of “monetary constraints.” But the inspector general recently told congressional investigators he was unaware that the agency had booked tens of millions of dollars in profits through passport sales that could have been used to improve security, congressional aides told The Times.

GPO is an agency little-known to most Americans, created by Congress almost two centuries ago as a virtual monopoly to print nearly all of the government’s documents … Since 1926, it also has been charged with the job of printing the passports used by Americans to enter and leave the country.

Each new e-passport contains a small computer chip inside the back cover that contains the passport number along with the photo and other personal data of the holder. The data is secured and is transmitted through a tiny wire antenna when it is scanned electronically at border entry points and compared to the actual traveler carrying it.

According to interviews and documents, GPO managers rejected limiting the contracts to U.S.-made computer chip makers and instead sought suppliers from several countries, including Israel, Germany and the Netherlands.

After the computer chips are inserted into the back cover of the passports in Europe, the blank covers are shipped to a factory in Ayutthaya, Thailand, north of Bangkok, to be fitted with a wire Radio Frequency Identification, or RFID, antenna. The blank passports eventually are transported to Washington for final binding, according to the documents and interviews.

The stop in Thailand raises its own security concerns. The Southeast Asian country has battled social instability and terror threats. Anti-government groups backed by Islamists, including al Qaeda, have carried out attacks in southern Thailand and the Thai military took over in a coup in September 2006.

The Netherlands-based company that assembles the U.S. e-passport covers in Thailand, Smartrac Technology Ltd., warned in its latest annual report that, in a worst-case scenario, social unrest in Thailand could lead to a halt in production.

Smartrac divulged in an October 2007 court filing in The Hague that China had stolen its patented technology for e-passport chips, raising additional questions about the security of America’s e-passports.

Transport concerns

A 2005 document obtained by The Times states that GPO was using unsecured FedEx courier services to send blank passports to State Department offices until security concerns were raised and forced GPO to use an armored car company. Even then, the agency proposed using a foreign armored car vendor before State Department diplomatic security officials objected.

Questionable profits

The State Department is now charging Americans $100 or more for new e-passports produced by the GPO, depending on how quickly they are needed. That’s up from a cost of around just $60 in 1998.

Internal agency documents obtained by The Times show each blank passport costs GPO an average of just $7.97 to manufacture and that GPO then charges the State Department about $14.80 for each, a margin of more than 85 percent.

The accounting allowed GPO to make gross profits of more than $90 million from Oct. 1, 2006, through Sept. 30, 2007, on the production of e-passports. The four subsequent months produced an additional $54 million in gross profits.

The agency set aside more than $40 million of those profits to help build a secure backup passport production facility in the South, still leaving a net profit of about $100 million in the last 16 months.

GPO plans to produce 28 million blank passports this year, up from about 9 million five years ago.
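The article's figures hold up to a quick sanity check, using only the numbers quoted above:

```python
# Per-passport markup: $7.97 cost vs. $14.80 charged to State.
cost, charge = 7.97, 14.80
markup = (charge - cost) / cost * 100
print(f"markup: {markup:.0f}%")  # over 85 percent, as reported

# Gross profit: $90M (Oct 2006 - Sept 2007) plus $54M (next four months),
# less the $40M+ set aside for the backup production facility.
gross = 90e6 + 54e6
net = gross - 40e6
print(f"net profit: ${net / 1e6:.0f} million")  # about $100 million, as reported
```

The 85-percent figure is markup over manufacturing cost, not margin on the sale price, which is worth keeping in mind when comparing it to the roughly $100 fee charged to passport applicants.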

US government makes unsafe RFID-laden passports even less safe through business practices Read More »

The end of Storm

From Brian Krebs’ “Atrivo Shutdown Hastened Demise of Storm Worm” (The Washington Post: 17 October 2008):

The infamous Storm worm, which powered a network of thousands of compromised PCs once responsible for sending more than 20 percent of all spam, appears to have died off. Security experts say Storm’s death knell was sounded by the recent shutdown of Atrivo, a California-based ISP that was home to a number of cyber crime operations, including at least three of the master servers used to control the Storm network.

Three out of four of [Storm’s] control servers were located at Atrivo, a.k.a. Intercage, said Joe Stewart, a senior security researcher with Atlanta-based SecureWorks who helped unlock the secrets of the complex Storm network. The fourth server, he said, operated out of Hosting.ua, an Internet provider based in the Ukraine.

Stewart said the final spam run blasted out by Storm was on Sept. 18. Three days later, Atrivo was forced off the Internet after its sole remaining upstream provider — Pacific Internet Exchange (PIE) — decided to stop routing for the troubled ISP. In the weeks leading up to that disconnection, four other upstream providers severed connectivity to Atrivo, following detailed reports from Security Fix and Host Exploit that pointed to a massive amount of spam, malicious software and a host of other cyber criminal operations emanating from it.

Stewart said spam sent by the Storm network had been steadily decreasing throughout 2008, aided in large part by the inclusion of the malware in Microsoft’s malicious software removal tool, which has scrubbed Storm from hundreds of thousands of PCs since last fall. Stewart said it’s impossible to tell whether the Storm worm was disrupted by the Atrivo shutdown or if the worm’s authors pulled the plug themselves and decided to move on. But at least 30,000 systems remain infected with the Storm malware.

The end of Storm Read More »