Ambient awareness & social media

From Clive Thompson’s “Brave New World of Digital Intimacy” (The New York Times Magazine: 5 September 2008):

In essence, Facebook users didn’t think they wanted constant, up-to-the-minute updates on what other people are doing. Yet when they experienced this sort of omnipresent knowledge, they found it intriguing and addictive. Why?

Social scientists have a name for this sort of incessant online contact. They call it “ambient awareness.” It is, they say, very much like being physically near someone and picking up on his mood through the little things he does — body language, sighs, stray comments — out of the corner of your eye. Facebook is no longer alone in offering this sort of interaction online. In the last year, there has been a boom in tools for “microblogging”: posting frequent tiny updates on what you’re doing. The phenomenon is quite different from what we normally think of as blogging, because a blog post is usually a written piece, sometimes quite long: a statement of opinion, a story, an analysis. But these new updates are something different. They’re far shorter, far more frequent and less carefully considered. One of the most popular new tools is Twitter, a Web site and messaging service that allows its two-million-plus users to broadcast to their friends haiku-length updates — limited to 140 characters, as brief as a mobile-phone text message — on what they’re doing. There are other services for reporting where you’re traveling (Dopplr) or for quickly tossing online a stream of the pictures, videos or Web sites you’re looking at (Tumblr). And there are even tools that give your location. When the new iPhone, with built-in tracking, was introduced in July, one million people began using Loopt, a piece of software that automatically tells all your friends exactly where you are.

This is the paradox of ambient awareness. Each little update — each individual bit of social information — is insignificant on its own, even supremely mundane. But taken together, over time, the little snippets coalesce into a surprisingly sophisticated portrait of your friends’ and family members’ lives, like thousands of dots making a pointillist painting. This was never before possible, because in the real world, no friend would bother to call you up and detail the sandwiches she was eating. The ambient information becomes like “a type of E.S.P.,” as Haley described it to me, an invisible dimension floating over everyday life.

“It’s like I can distantly read everyone’s mind,” Haley went on to say. “I love that. I feel like I’m getting to something raw about my friends. It’s like I’ve got this heads-up display for them.” It can also lead to more real-life contact, because when one member of Haley’s group decides to go out to a bar or see a band and Twitters about his plans, the others see it, and some decide to drop by — ad hoc, self-organizing socializing. And when they do socialize face to face, it feels oddly as if they’ve never actually been apart. They don’t need to ask, “So, what have you been up to?” because they already know. Instead, they’ll begin discussing something that one of the friends Twittered that afternoon, as if picking up a conversation in the middle.

You could also regard the growing popularity of online awareness as a reaction to social isolation, the modern American disconnectedness that Robert Putnam explored in his book “Bowling Alone.” The mobile workforce requires people to travel more frequently for work, leaving friends and family behind, and members of the growing army of the self-employed often spend their days in solitude. Ambient intimacy becomes a way to “feel less alone,” as more than one Facebook and Twitter user told me.

The future of news as shown by the 2008 election

From Steven Berlin Johnson’s “Old Growth Media And The Future Of News” (StevenBerlinJohnson.com: 14 March 2009):

The first Presidential election that I followed in an obsessive way was the 1992 election that Clinton won. I was as compulsive a news junkie about that campaign as I was about the Mac in college: every day the Times would have a handful of stories about the campaign stops or debates or latest polls. Every night I would dutifully tune into Crossfire to hear what the punditocracy had to say about the day’s events. I read Newsweek and Time and the New Republic, and scoured the New Yorker for its occasional political pieces. When the debates aired, I’d watch religiously and stay up late soaking in the commentary from the assembled experts.

That was hardly a desert, to be sure. But compare it to the information channels that were available to me following the 2008 election. Everything I relied on in 1992 was still around of course – except for the late, lamented Crossfire – but it was now part of a vast new forest of news, data, opinion, satire – and perhaps most importantly, direct experience. Sites like Talking Points Memo and Politico did extensive direct reporting. Daily Kos provided in-depth surveys and field reports on state races that the Times would never have had the ink to cover. Individual bloggers like Andrew Sullivan responded to each twist in the news cycle; HuffPo culled the most provocative opinion pieces from the rest of the blogosphere. Nate Silver at fivethirtyeight.com did meta-analysis of polling that blew away anything William Schneider dreamed of doing on CNN in 1992. When the economy imploded in September, I followed economist bloggers like Brad DeLong to get their expert take on the candidates’ responses to the crisis. (Yochai Benkler talks about this phenomenon of academics engaging with the news cycle in a smart response here.) I watched the debates with a thousand virtual friends live-Twittering alongside me on the couch. All this was filtered and remixed through the extraordinary political satire of Jon Stewart and Stephen Colbert, which I watched via viral clips on the Web as much as I watched on TV.

What’s more: the ecosystem of political news also included information coming directly from the candidates. Think about the Philadelphia race speech, arguably one of the two or three most important events in the whole campaign. Eight million people watched it on YouTube alone. Now, what would have happened to that speech had it been delivered in 1992? Would any of the networks have aired it in its entirety? Certainly not. It would have been reduced to a minute-long soundbite on the evening news. CNN probably would have aired it live, which might have meant that 500,000 people caught it. Fox News and MSNBC? They didn’t exist yet. A few serious newspapers might have reprinted it in its entirety, which might have added another million to the audience. Online, perhaps someone would have uploaded a transcript to Compuserve or The Well, but that’s about the most we could have hoped for.

There is no question in my mind that the political news ecosystem of 2008 was far superior to that of 1992: I had more information about the state of the race, the tactics of both campaigns, the issues they were wrestling with, the mind of the electorate in different regions of the country. And I had more immediate access to the candidates themselves: their speeches and unscripted exchanges; their body language and position papers.

The old line on this new diversity was that it was fundamentally parasitic: bloggers were interesting, sure, but if the traditional news organizations went away, the bloggers would have nothing to write about, since most of what they did was link to professionally reported stories. Let me be clear: traditional news organizations were an important part of the 2008 ecosystem, no doubt about it. … But no reasonable observer of the political news ecosystem could describe all the new species as parasites on the traditional media. Imagine how many barrels of ink were purchased to print newspaper commentary on Obama’s San Francisco gaffe about people “clinging to their guns and religion.” But the original reporting on that quote didn’t come from the Times or the Journal; it came from a “citizen reporter” named Mayhill Fowler, part of the Off The Bus project sponsored by Jay Rosen’s Newassignment.net and The Huffington Post.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
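
To make the scheme concrete, here is a minimal sketch of a date-seeded domain generator in Python. It illustrates the general technique the article describes (both the worm and its authors derive the same pseudorandom names from the current date, so no list ever has to be transmitted); the hashing, name lengths and seeding below are illustrative assumptions, not Conficker's actual algorithm.

```python
import hashlib
from datetime import date

# The five suffixes mentioned above.
TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day: date, count: int = 250) -> list:
    """Derive `count` pseudorandom domain names from the date alone.
    Anyone who knows the algorithm (the worm and its authors alike)
    can reproduce the identical list for any given day."""
    domains = []
    for i in range(count):
        # Hash the date plus a counter to get a repeatable "random" string.
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Map part of the hex digest onto 8 to 11 lowercase letters.
        length = 8 + int(digest[0], 16) % 4
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[1:1 + length])
        domains.append(name + TLDS[i % len(TLDS)])
    return domains

# Each day yields a fresh list of 250 candidate rendezvous points,
# any one of which the worm's authors might choose to register.
print(daily_domains(date(2008, 11, 21))[:5])
```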

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

A better alternative to text CAPTCHAs

From Rich Gossweiler, Maryam Kamvar, & Shumeet Baluja’s “What’s Up CAPTCHA?: A CAPTCHA Based On Image Orientation” (Google: 20-24 April 2009):

There are several classes of images which can be successfully oriented by computers: for example, images dominated by objects such as faces, cars, pedestrians, sky or grass.

Many images, however, are difficult for computers to orient. For example, indoor scenes have variations in lighting sources, and abstract and close-up images provide the greatest challenge to both computers and people, often because no clear anchor points or lighting sources exist.

The average performance on outdoor photographs, architecture photographs and typical tourist type photographs was significantly higher than the performance on abstract photographs, close-ups and backgrounds. When an analysis of the features used to make the discriminations was done, it was found that the edge features play a significant role.

It is important not to simply select random images for this task. There are many cues which can quickly reveal the upright orientation of an image to automated systems; these images must be filtered out. For example, if typical vacation or snapshot photos are used, automated rotation accuracies can be in the 90% range. The existence of any of the cues in the presented images will severely limit the effectiveness of the approach. Three common cues are listed below:

1. Text: Usually the predominant orientation of text in an image reveals the upright orientation of an image.

2. Faces and People: Most photographs are taken with the face(s) / people upright in the image.

3. Blue skies, green grass, and beige sand: These are all revealing clues, and are present in many travel/tourist photographs found on the web. Extending this beyond color, in general, the sky often has few texture/edges in comparison to the ground. Additional cues found important in human tests include "grass", "trees", "cars", "water" and "clouds".

Second, due to sometimes warped objects, lack of shading and lighting cues, and often unrealistic colors, cartoons also make ideal candidates. … Finally, although we did not alter the content of the image, it may be possible to simply alter the color-mapping, overall lighting curves, and hue/saturation levels to reveal images that appear unnatural but remain recognizable to people.

To normalize the shape and size of the images, we scaled each image to a 180×180 pixel square and we then applied a circular mask to remove the image corners.
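
A rough sketch of that normalization step, written here with the Pillow imaging library; the excerpt above does not include code, so the function name and the choice of Pillow are assumptions for illustration.

```python
from PIL import Image, ImageDraw

def normalize_captcha_image(path: str, size: int = 180) -> Image.Image:
    """Scale an image to a size x size square and black out the corners,
    so that rotating it later leaves no telltale corner or border artifacts."""
    img = Image.open(path).convert("RGB").resize((size, size))

    # Circular alpha mask: a white disc on a black background.
    mask = Image.new("L", (size, size), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, size - 1, size - 1), fill=255)

    # Keep only the pixels inside the disc; the corners become black.
    out = Image.new("RGB", (size, size), (0, 0, 0))
    out.paste(img, (0, 0), mask)
    return out

# Example: normalize_captcha_image("photo.jpg").save("captcha_tile.png")
```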

We have created a system that has sufficiently high human-success rates and sufficiently low computer-success rates. When using three images, the rotational CAPTCHA system results in an 84% human success metric, and a .009% bot-success metric (assuming random guessing). These metrics are based on two variables: the number of images we require a user to rotate and the size of the acceptable error window (the degrees from upright which we still consider to be upright). Predictably, as the number of images shown becomes greater, the probability of correctly solving them all decreases. However, as the error window increases, the probability of correctly solving them increases. The system which results in an 84% human success rate and .009% bot success rate asks the user to rotate three images, each to within 8 degrees of upright (a 16-degree acceptance window in total).

A CAPTCHA system which displayed ≥ 3 images with a ≤ 16-degree error window would achieve a guess success rate of less than 1 in 10,000, a standard acceptable computer success rate for CAPTCHAs.
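
That bot-success figure follows from basic probability if the attacker guesses each rotation uniformly at random; a quick check:

```python
# Chance that a random rotation lands inside the accepted window.
window_degrees = 16                 # 8 degrees on either side of upright
per_image = window_degrees / 360    # ~0.0444
three_images = per_image ** 3       # ~8.8e-5, i.e. about 0.009%

print(f"{per_image:.4f} per image, {three_images:.6f} for three images")
# Less than 1 random guess in 10,000 solves all three images.
```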

In our experiments, users moved a slider to rotate the image to its upright position. On small display devices such as a mobile phone, they could directly manipulate the image using a touch screen, as seen in Figure 12, or rotate it via button presses.

Newspapers are doomed

From Jeff Sigmund’s “Newspaper Web Site Audience Increases More Than Ten Percent In First Quarter To 73.3 Million Visitors” (Newspaper Association of America: 23 April 2009):

Newspaper Web sites attracted more than 73.3 million monthly unique visitors on average (43.6 percent of all Internet users) in the first quarter of 2009, a record number that reflects a 10.5 percent increase over the same period a year ago, according to a custom analysis provided by Nielsen Online for the Newspaper Association of America.

In addition, newspaper Web site visitors generated an average of more than 3.5 billion page views per month throughout the quarter, an increase of 12.8 percent over the same period a year ago (3.1 billion page views).

Contrast that with the article on Craigslist in Wikipedia (1 May 2009):

The site serves over twenty billion page views per month, putting it in 28th place overall among web sites world wide and ninth place overall among web sites in the United States (per Alexa.com on March 27, 2009), with over fifty million unique monthly visitors in the United States alone (per Compete.com on April 7, 2009). As of March 17, 2009 it was ranked 7th on Alexa. With over forty million new classified advertisements each month, Craigslist is the leading classifieds service in any medium. The site receives over one million new job listings each month, making it one of the top job boards in the world.

Even at its best, the entire newspaper industry’s web sites generate less than 1/5 of the page views that Craigslist alone sees each month (roughly 3.5 billion versus more than 20 billion).

German twins commit the perfect crime

From “Twins Suspected in Spectacular Jewelry Heist Set Free” (Spiegel Online International: 19 March 2009):

Saved by their indistinguishable DNA, identical twins suspected in a massive jewelry heist have been set free. Neither could be exclusively linked to the DNA evidence.

German police say at least one of the identical twin brothers Hassan and Abbas O. may have perpetrated a recent multimillion euro jewelry heist in Berlin. But because of their indistinguishable DNA, neither can be individually linked to the crime. Both were set free on Wednesday.

In the early morning hours of February 25, three masked men broke into Germany’s famous luxury department store Kaufhaus Des Westens (KaDeWe). Video cameras show how they climbed into the store’s grand main hall, broke open cabinets and display cases and made off with an estimated €5 million worth of jewelry and watches.

When police found traces of DNA on a glove left at the scene of the crime, it seemed that the criminals responsible for Germany’s most spectacular heist in years would be caught. But the DNA led to not one but two suspects — 27-year-old identical, or monozygotic, twins with near-identical DNA.

German law stipulates that each criminal must be individually proven guilty. The problem in the case of the O. brothers is that their twin DNA is so similar that neither can be exclusively linked to the evidence using current methods of DNA analysis. So even though both have criminal records and may have committed the heist together, Hassan and Abbas O. have been set free.

Why did Thomas Jefferson bring a stuffed moose to France?

From David G. Post’s “Jefferson’s Moose” (Remarks presented at the Stanford Law School Conference on Privacy in Cyberspace: 7 February 2000):

In 1787, Jefferson, then the American Minister to France, had the “complete skeleton, skin & horns of the Moose” shipped to him in Paris and mounted in the lobby of his hotel. One can only imagine the comments made by bemused onlookers and hotel staff.

This was no small undertaking at that time — I suppose it would be no small undertaking even today. It’s not as if he had no other things to do with his time or his money. It’s worth asking: Why did he do it? What could have possessed him?

He wanted, first, to shock. He wanted his French friends to stand back, to gasp, and to say: There really is a new world out there, one that has things in it that we can hardly imagine. He wanted them to have what Lessig called an “aha! moment” in regard to the New World from out of which Jefferson (and his moose) had emerged.

But there was another, more specific, purpose. He wanted to show them that this new world was not a degenerate place. The Comte de Buffon, probably the most celebrated naturalist of the late 18th Century, had propounded just such a theory about the degeneracy of life in the New World. Jefferson described Buffon’s theory this way:

“That the animals common both to the old and new world, are smaller in the latter; that those peculiar to the new, are on a smaller scale; that those which have been domesticated in both, have degenerated in America; and that on the whole the New World exhibits fewer species.”

Though it may be hard to appreciate from our more enlightened 21st century perspective, this was deadly serious stuff — both as science and, more to our point here, as politics; to Jefferson, Buffon’s theory had ominous political implications, for it was, as he put it, “within one step” of the notion that man, too, would degenerate in the New World. Thus, it could and did give a kind of intellectual cover to the notion that man in the New World could not be trusted to govern himself.

Sometimes a picture — or, better yet, a carcass — is worth a thousand words. So out comes the moose; larger than its European counterparts (the reindeer and caribou), its brooding presence in downtown Paris would surely make observers think twice about Buffon’s theory. Jefferson was no fool; he knew full well that one data point does not settle the argument, and he would provide, in his “Notes on the State of Virginia,” a detailed refutation of Buffon’s charge, page after page of careful analysis of the relative sizes of American and European animals.

What passwords do people use? phpBB examples

From Robert Graham’s “PHPBB Password Analysis” (Dark Reading: 6 February 2009):

A popular Website, phpbb.com, was recently hacked. The hacker published approximately 20,000 user passwords from the site. …

This incident is similar to one two years ago when MySpace was hacked, revealing about 30,000 passwords. …

The striking difference between the two incidents is that the phpbb passwords are simpler. MySpace requires that passwords “must be between 6 and 10 characters, and contain at least 1 number or punctuation character.” Most people satisfied this requirement by simply appending “1” to the ends of their passwords. The phpbb site has no such restrictions — the passwords are shorter and rarely contain anything more than a dictionary word.

It’s hard to judge exactly how many passwords are dictionary words. … I ran the phpbb passwords through various dictionary files and came up with a 65% match (for a simple English dictionary) and 94% (for “hacker” dictionaries). …
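
A sketch of how such a match rate can be computed; the file names are placeholders, since the article does not say exactly which wordlists were used.

```python
def dictionary_match_rate(passwords_file: str, dictionary_file: str) -> float:
    """Return the fraction of leaked passwords that appear verbatim
    (case-insensitively) in a given wordlist."""
    with open(dictionary_file, encoding="utf-8", errors="ignore") as f:
        words = {line.strip().lower() for line in f if line.strip()}

    with open(passwords_file, encoding="utf-8", errors="ignore") as f:
        passwords = [line.strip() for line in f if line.strip()]

    hits = sum(1 for p in passwords if p.lower() in words)
    return hits / len(passwords) if passwords else 0.0

# Hypothetical usage:
# print(dictionary_match_rate("phpbb-passwords.txt", "english-words.txt"))
```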

16% of passwords matched a person’s first name. This includes people choosing their own first names or those of their spouses or children. The most popular first names were Joshua, Thomas, Michael, and Charlie. But I wonder if there is something else going on. Joshua, for example, was also the password to the computer in “Wargames” …

14% of passwords were patterns on the keyboard, like “1234,” “qwerty,” or “asdf.” There are a lot of different patterns people choose, like “1qaz2wsx” or “1q2w3e.” I spent a while googling “159357,” trying to figure out how to categorize it, then realized it was a pattern on the numeric keypad. …

4% are variations of the word “password,” such as “passw0rd,” “password1,” or “passwd.” I googled “drowssap,” trying to figure out how to categorize it, until I realized it was “password” spelled backward.

5% of passwords are pop-culture references from TV, movies, and music. These tend to be youth culture (“hannah,” “pokemon,” “tigger”) and geeky (“klingon,” “starwars,” “matrix,” “legolas,” “ironman”). … Some notable pop-culture references are chosen not because they are popular, but because they sound like passwords, such as “ou812” (’80s Van Halen album), “blink182” (’90s pop), “rush2112” (’80s album), and “8675309” (’80s pop song).

4% of passwords appear to reference things nearby. The name “samsung” is a popular password, I think because it’s the brand name on the monitor that people are looking at … Similarly, there are a lot of names of home computers like “dell,” “packard,” “apple,” “pavilion,” “presario,” “compaq,” and so on. …

3% of passwords are “emo” words. Swear words, especially the F-word, are common, but so are various forms of love and hate (like “iloveyou” or “ihateyou”).

3% are “don’t care” words. … A lot of password choices reflect this attitude, either implicitly with “abc123” or “blahblah,” or explicitly with “whatever,” “whocares,” or “nothing.”

1.3% are passwords people saw in movies/TV. This is a small category, consisting only of “letmein,” “trustno1,” “joshua,” and “monkey,” but it accounts for a large percentage of passwords.

1% are sports related. …

Here are the top 20 passwords from the phpbb dataset. You’ll find nothing surprising here; all of them are on this Top 500 list.

3.03% “123456”
2.13% “password”
1.45% “phpbb”
0.91% “qwerty”
0.82% “12345”
0.59% “12345678”
0.58% “letmein”
0.53% “1234”
0.50% “test”
0.43% “123”
0.36% “trustno1”
0.33% “dragon”
0.31% “abc123”
0.31% “123456789”
0.31% “111111”
0.30% “hello”
0.30% “monkey”
0.28% “master”
0.22% “killer”
0.22% “123123”

Notice that whereas “myspace1” was one of the most popular passwords in the MySpace dataset, “phpbb” is one of the most popular passwords in the phpbb dataset.

The password length distribution is as follows:

1 character 0.34%
2 characters 0.54%
3 characters 2.92%
4 characters 12.29%
5 characters 13.29%
6 characters 35.16%
7 characters 14.60%
8 characters 15.50%
9 characters 3.81%
10 characters 1.14%
11 characters 0.22%

Note that phpbb has no requirements for password lengths …
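
Both the top-20 list and the length distribution above are straightforward to reproduce from a raw password dump; a minimal sketch (the file name is a placeholder):

```python
from collections import Counter

def summarize_passwords(path: str, top_n: int = 20) -> None:
    """Print the most common passwords and the password-length distribution,
    each as a percentage of the whole dump."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        passwords = [line.rstrip("\n") for line in f if line.strip()]
    total = len(passwords)

    print("Top passwords:")
    for pw, count in Counter(passwords).most_common(top_n):
        print(f"{100 * count / total:5.2f}%  {pw}")

    print("\nLength distribution:")
    lengths = Counter(len(pw) for pw in passwords)
    for length in sorted(lengths):
        print(f"{length:2d} characters  {100 * lengths[length] / total:5.2f}%")

# summarize_passwords("phpbb-passwords.txt")
```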

Socioeconomic analysis of MySpace & Facebook

From danah boyd’s “Viewing American class divisions through Facebook and MySpace” (danah boyd: 24 June 2007):

When MySpace launched in 2003, it was primarily used by 20/30-somethings (just like Friendster before it). The bands began populating the site by early 2004 and throughout 2004, the average age slowly declined. It wasn’t until late 2004 that teens really started appearing en masse on MySpace and 2005 was the year that MySpace became the “in thing” for teens.

Facebook launched in 2004 as a Harvard-only site. It slowly expanded to welcome people with .edu accounts from a variety of different universities. In mid-2005, Facebook opened its doors to high school students, but it wasn’t that easy to get an account because you needed to be invited. As a result, those who were in college tended to invite those high school students that they liked. Facebook was strongly framed as the “cool” thing that college students did.

In addition to the college framing, the press coverage of MySpace as dangerous and sketchy alienated “good” kids. Facebook seemed to provide an ideal alternative. Parents weren’t nearly as terrified of Facebook because it seemed “safe” thanks to the network-driven structure.

She argues that class divisions in the United States have more to do with lifestyle and social stratification than with income. In other words, all of my anti-capitalist college friends who work in cafes and read Engels are not working class just because they make $14K a year and have no benefits. Class divisions in the United States have more to do with social networks (the real ones, not FB/MS), social capital, cultural capital, and attitudes than income. Not surprisingly, other demographics typically discussed in class terms are also a part of this lifestyle division. Social networks are strongly connected to geography, race, and religion; these are also huge factors in lifestyle divisions and thus “class.”

The goodie two shoes, jocks, athletes, or other “good” kids are now going to Facebook. These kids tend to come from families who emphasize education and going to college. They are part of what we’d call hegemonic society. They are primarily white, but not exclusively. They are in honors classes, looking forward to the prom, and live in a world dictated by after school activities.

MySpace is still home for Latino/Hispanic teens, immigrant teens, “burnouts,” “alternative kids,” “art fags,” punks, emos, goths, gangstas, queer kids, and other kids who didn’t play into the dominant high school popularity paradigm. These are kids whose parents didn’t go to college, who are expected to get a job when they finish high school. These are the teens who plan to go into the military immediately after school. Teens who are really into music or in a band are also on MySpace. MySpace has most of the kids who are socially ostracized at school because they are geeks, freaks, or queers.

In order to demarcate these two groups, let’s call the first group of teens “hegemonic teens” and the second group “subaltern teens.”

Most teens who exclusively use Facebook are familiar with and have an opinion about MySpace. These teens are very aware of MySpace and they often have a negative opinion about it. They see it as gaudy, immature, and “so middle school.” They prefer the “clean” look of Facebook, noting that it is more mature and that MySpace is “so lame.” What hegemonic teens call gaudy can also be labeled as “glitzy” or “bling” or “fly” (or what my generation would call “phat”) by subaltern teens. Terms like “bling” come out of hip-hop culture where showy, sparkly, brash visual displays are acceptable and valued. The look and feel of MySpace resonates far better with subaltern communities than it does with the upwardly mobile hegemonic teens. … That “clean” or “modern” look of Facebook is akin to West Elm or Pottery Barn or any poshy Scandinavian design house (that I admit I’m drawn to) while the more flashy look of MySpace resembles the Las Vegas imagery that attracts millions every year. I suspect that lifestyles have aesthetic values and that these are being reproduced on MySpace and Facebook.

I should note here that aesthetics do divide MySpace users. The look and feel that is acceptable amongst average Latino users is quite different from what you see the subculturally-identified outcasts using. Amongst the emo teens, there’s a push for simple black/white/grey backgrounds and simplistic layouts. While I’m using the term “subaltern teens” to lump together non-hegemonic teens, the lifestyle divisions amongst the subalterns are quite visible on MySpace through the aesthetic choices of the backgrounds. The aesthetics issue is also one of the forces that drives some longer-term users away from MySpace.

Teens from poorer backgrounds who are on MySpace are less likely to know people who go to universities. They are more likely to know people who are older than them, but most of their older friends, cousins, and co-workers are on MySpace. It’s the cool working class thing and it’s the dominant SNS at community colleges. These teens are more likely to be interested in activities like shows and clubs and they find out about them through MySpace. The subaltern teens who are better identified as “outsiders” in a hegemonic community tend to be very aware of Facebook. Their choice to use MySpace instead of Facebook is a rejection of the hegemonic values (and a lack of desire to hang out with the preps and jocks even online).

Class divisions in military use

A month ago, the military banned MySpace but not Facebook. This was a very interesting move because the division in the military reflects the division in high schools. Soldiers are on MySpace; officers are on Facebook. Facebook is extremely popular in the military, but it’s not the SNS of choice for 18-year old soldiers, a group that is primarily from poorer, less educated communities. They are using MySpace. The officers, many of whom have already received college training, are using Facebook. The military ban appears to replicate the class divisions that exist throughout the military. …

MySpace is the primary way that young soldiers communicate with their peers. When I first started tracking soldiers’ MySpace profiles, I had to take a long deep breath. Many of them were extremely pro-war, pro-guns, anti-Arab, anti-Muslim, pro-killing, and xenophobic as hell. Over the last year, I’ve watched more and more profiles emerge from soldiers who aren’t quite sure what they are doing in Iraq. I don’t have the data to confirm whether or not a significant shift has occurred but it was one of those observations that just made me think. And then the ban happened. I can’t help but wonder if part of the goal is to cut off communication between current soldiers and the group that the military hopes to recruit.

Thoughts and meta thoughts

People often ask me if I’m worried about teens today. The answer is yes, but it’s not because of social network sites. With the hegemonic teens, I’m very worried about the stress that they’re under, the lack of mobility and healthy opportunities for play and socialization, and the hyper-scheduling and surveillance. I’m worried about their unrealistic expectations for becoming rich and famous, their lack of work ethic after being pampered for so long, and the lack of opportunities that many of them have to even be economically stable let alone better off than their parents. I’m worried about how locking teens indoors coupled with a fast food/junk food advertising machine has resulted in a decrease in health levels across the board which will just get messy as they are increasingly unable to afford health insurance. When it comes to ostracized teens, I’m worried about the reasons why society has ostracized them and how they will react to ongoing criticism from hegemonic peers. I cringe every time I hear of another Columbine, another Virginia Tech, another site of horror when an outcast teen lashes back at the hegemonic values of society.

I worry about the lack of opportunities available to poor teens from uneducated backgrounds. I’m worried about how Wal-Mart Nation has destroyed many of the opportunities for meaningful working class labor as these youth enter the workforce. I’m worried about what a prolonged war will mean for them. I’m worried about how they’ve been told that to succeed, they must be a famous musician or sports player. I’m worried about how gangs provide the only meaningful sense of community that many of these teens will ever know.

Given the state of what I see in all sorts of neighborhoods, I’m amazed at how well teens are coping and I think that technology has a lot to do with that. Teens are using social network sites to build community and connect with their peers. They are creating publics for socialization. And through it, they are showcasing all of the good, bad, and ugly of today’s teen life.

In the 70s, Paul Willis analyzed British working class youth and he wrote a book called Learning to Labor: How Working Class Kids Get Working Class Jobs. He argued that working class teens will reject hegemonic values because it’s the only way to continue to be a part of the community that they live in. In other words, if you don’t know that you will succeed if you make a run at jumping class, don’t bother – you’ll lose all of your friends and community in the process. His analysis has such strong resonance in American society today. I just wish I knew how to fix it.

Three top botnets

From Kelly Jackson Higgins’ “The World’s Biggest Botnets” (Dark Reading: 9 November 2007):

You know about the Storm Trojan, which is spread by the world’s largest botnet. But what you may not know is there’s now a new peer-to-peer based botnet emerging that could blow Storm away.

“We’re investigating a new peer-to-peer botnet that may wind up rivaling Storm in size and sophistication,” says Tripp Cox, vice president of engineering for startup Damballa, which tracks botnet command and control infrastructures. “We can’t say much more about it, but we can tell it’s distinct from Storm.”

Researchers estimate that there are thousands of botnets in operation today, but only a handful stand out by their sheer size and pervasiveness. Although size gives a botnet muscle and breadth, it can also make it too conspicuous, which is why botnets like Storm fluctuate in size and are constantly finding new ways to cover their tracks to avoid detection. Researchers have different head counts for different botnets, with Storm by far the largest (for now, anyway).

Damballa says its top three botnets are Storm, with 230,000 active members per 24 hour period; Rbot, an IRC-based botnet with 40,000 active members per 24 hour period; and Bobax, an HTTP-based botnet with 24,000 active members per 24 hour period, according to the company.

1. Storm

Size: 230,000 active members per 24 hour period

Type: peer-to-peer

Purpose: Spam, DDOS

Malware: Trojan.Peacomm (aka Nuwar)

Few researchers can agree on Storm’s actual size — while Damballa says it’s over 200,000 bots, Trend Micro says it’s more like 40,000 to 100,000 today. But all researchers say that Storm is a whole new brand of botnet. First, it uses encrypted decentralized, peer-to-peer communication, unlike the traditional centralized IRC model. That makes it tough to kill because you can’t necessarily shut down its command and control machines. And intercepting Storm’s traffic requires cracking the encrypted data.

Storm also uses fast-flux, a round-robin method where infected bot machines (typically home computers) serve as proxies or hosts for malicious Websites. These are constantly rotated, changing their DNS records to prevent their discovery by researchers, ISPs, or law enforcement. And researchers say it’s tough to tell how the command and control communication structure is set up behind the P2P botnet. “Nobody knows how the mother ships are generating their C&C,” Trend Micro’s Ferguson says.
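
A toy illustration of the fast-flux idea described above: each DNS lookup of the malicious domain is answered with a fresh, short-lived subset of infected machines, so any record an investigator captures is stale almost immediately. This is a conceptual sketch only; the addresses, pool size and TTL are made up, and it is not Storm's actual infrastructure code.

```python
import random

# Pool of compromised home machines acting as proxies (documentation addresses).
BOT_POOL = [f"203.0.113.{i}" for i in range(1, 101)]

def fast_flux_answer(domain: str, records: int = 5, ttl_seconds: int = 180) -> dict:
    """Return a DNS-style answer for `domain`: a random subset of bot IPs
    with a short TTL, so every lookup sees a different set of hosts."""
    return {
        "domain": domain,
        "ttl": ttl_seconds,
        "a_records": random.sample(BOT_POOL, records),
    }

# Two consecutive lookups return different proxy sets:
print(fast_flux_answer("malicious-example.com"))
print(fast_flux_answer("malicious-example.com"))
```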

Storm uses a complex combination of malware called Peacomm that includes a worm, rootkit, spam relay, and Trojan.

But researchers don’t know — or can’t say — who exactly is behind Storm, except that it’s likely a fairly small, tightly knit group with a clear business plan. “All roads lead back to Russia,” Trend Micro’s Ferguson says.

“Storm is [the] only thing now that keeps me awake at night and busy,” he says. “It’s professionalized crimeware… They have young, talented programmers apparently. And they write tools to do administrative [tracking], as well as writing cryptographic routines… and another will handle social engineering, and another will write the Trojan downloader, and another is writing the rootkit.”

2. Rbot

Size: 40,000 active members per 24 hour period

Type: IRC

Purpose: DDOS, spam, malicious operations

Malware: Windows worm

Rbot is basically an old-school IRC botnet that uses the Rbot malware kit. It isn’t likely to ever reach Storm size because IRC botnets just can’t scale accordingly. “An IRC server has to be a beefy machine to support anything anywhere close to the size of Peacomm/Storm,” Damballa’s Cox says.

It can disable antivirus software, too. Rbot’s underlying malware uses a backdoor to gain control of the infected machine, installing keyloggers, viruses, and even stealing files from the machine, as well as the usual spam and DDOS attacks.

3. Bobax

Size: 24,000 active members per 24 hour period

Type: HTTP

Purpose: Spam

Malware: Mass-mailing worm

Bobax is specifically for spamming, Cox says, and uses the stealthier HTTP for sending instructions to its bots on who and what to spam. …

According to Symantec, Bobax bores open a back door and downloads files onto the infected machine, and lowers its security settings. It spreads via a buffer overflow vulnerability in Windows, and inserts the spam code into the IE browser so that each time the browser runs, the virus is activated. And Bobax also does some reconnaissance to ensure that its spam runs are efficient: It can do bandwidth and network analysis to determine just how much spam it can send, according to Damballa. “Thus [they] are able to tailor their spamming so as not to tax the network, which helps them avoid detection,” according to company research.

Even more frightening, though, is that some Bobax variants can block access to antivirus and security vendor Websites, a new trend in Website exploitation.

Business models for software

From Brian D’s “The benefits of a monthly recurring revenue model in tough economic times” (37 Signals: 18 December 2008):

At 37signals we sell our web-based products using the monthly subscription model. We also give people a 30-day free trial up front before we bill them for their first month.

We think this model works best all the time, but we believe it works especially well in tough times. When times get tough people obviously look to spend less, but understanding how they spend less has a lot to do with which business models work better than others.

There are lots of business models for software. Here are a few of the most popular:

* Freeware
* Freeware, ad supported
* One-off pay up front, get upgrades free
* One-off pay up front, pay for upgrades
* Subscription (recurring annual)
* Subscription (recurring monthly)
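
One way to see the difference between these models is to compare cumulative revenue from a one-off sale with a monthly subscription that loses some customers to churn; the prices, churn rate and time horizon below are made-up numbers for illustration, not 37signals figures.

```python
def one_off_revenue(customers: int, price: float) -> float:
    """All revenue arrives up front; nothing recurs."""
    return customers * price

def subscription_revenue(customers: int, monthly_fee: float,
                         months: int, monthly_churn: float) -> float:
    """Sum monthly payments from a customer base that shrinks with churn."""
    total, active = 0.0, float(customers)
    for _ in range(months):
        total += active * monthly_fee
        active *= 1 - monthly_churn   # some customers cancel each month
    return total

# Hypothetical numbers: 1,000 customers, $299 one-off vs. $29/month, 3% monthly churn.
print(one_off_revenue(1000, 299))                 # 299000.0
print(subscription_revenue(1000, 29, 24, 0.03))   # roughly 500,000 over two years
```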

Bruce Schneier on wholesale, constant surveillance

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.

How it feels to drown, get decapitated, get electrocuted, and more

From Anna Gosline’s “Death special: How does it feel to die?” (New Scientist: 13 October 2007):

Death comes in many guises, but one way or another it is usually a lack of oxygen to the brain that delivers the coup de grâce. Whether as a result of a heart attack, drowning or suffocation, for example, people ultimately die because their neurons are deprived of oxygen, leading to cessation of electrical activity in the brain – the modern definition of biological death.

If the flow of freshly oxygenated blood to the brain is stopped, through whatever mechanism, people tend to have about 10 seconds before losing consciousness. They may take many more minutes to die, though, with the exact mode of death affecting the subtleties of the final experience.

Drowning

Typically, when a victim realises that they cannot keep their head above water they tend to panic, leading to the classic “surface struggle”. They gasp for air at the surface and hold their breath as they bob beneath, says Tipton. Struggling to breathe, they can’t call for help. Their bodies are upright, arms weakly grasping, as if trying to climb a non-existent ladder from the sea. Studies with New York lifeguards in the 1950s and 1960s found that this stage lasts just 20 to 60 seconds.

When victims eventually submerge, they hold their breath for as long as possible, typically 30 to 90 seconds. After that, they inhale some water, splutter, cough and inhale more. Water in the lungs blocks gas exchange in delicate tissues, while inhaling water also triggers the airway to seal shut – a reflex called a laryngospasm. “There is a feeling of tearing and a burning sensation in the chest as water goes down into the airway. Then that sort of slips into a feeling of calmness and tranquility,” says Tipton, describing reports from survivors.

That calmness represents the beginnings of the loss of consciousness from oxygen deprivation, which eventually results in the heart stopping and brain death.

Heart attack

The most common symptom is, of course, chest pain: a tightness, pressure or squeezing, often described as an “elephant on my chest”, which may be lasting or come and go. This is the heart muscle struggling and dying from oxygen deprivation. Pain can radiate to the jaw, throat, back, belly and arms. Other signs and symptoms include shortness of breath, nausea and cold sweats.

Most victims delay before seeking assistance, waiting an average of 2 to 6 hours. Women are the worst, probably because they are more likely to experience less well-known symptoms, such as breathlessness, back or jaw pain, or nausea, says JoAnn Manson, an epidemiologist at Harvard Medical School.

Even small heart attacks can play havoc with the electrical impulses that control heart muscle contraction, effectively stopping it. In about 10 seconds the person loses consciousness, and minutes later they are dead.

Bleeding to death

People can bleed to death in seconds if the aorta, the major blood vessel leading from the heart, is completely severed, for example, after a severe fall or car accident.

Death could creep up much more slowly if a smaller vein or artery is nicked – even taking hours. Such victims would experience several stages of haemorrhagic shock. The average adult has 5 litres of blood. Losses of around 750 millilitres generally cause few symptoms. Anyone losing 1.5 litres – either through an external wound or internal bleeding – feels weak, thirsty and anxious, and would be breathing fast. By 2 litres, people experience dizziness, confusion and then eventual unconsciousness.

Fire

Long the fate of witches and heretics, burning to death is torture. Hot smoke and flames singe eyebrows and hair and burn the throat and airways, making it hard to breathe. Burns inflict immediate and intense pain through stimulation of the nociceptors – the pain nerves in the skin. To make matters worse, burns also trigger a rapid inflammatory response, which boosts sensitivity to pain in the injured tissues and surrounding areas.

Most people who die in fires do not in fact die from burns. The most common cause of death is inhaling toxic gases – carbon monoxide, carbon dioxide and even hydrogen cyanide – together with the suffocating lack of oxygen. One study of fire deaths in Norway from 1996 found that almost 75 per cent of the 286 people autopsied had died from carbon monoxide poisoning.

Depending on the size of the fire and how close you are to it, concentrations of carbon monoxide could start to cause headache and drowsiness in minutes, eventually leading to unconsciousness. According to the US National Fire Protection Association, 40 per cent of the victims of fatal home fires are knocked out by fumes before they can even wake up.

Decapitation

Beheading, if somewhat gruesome, can be one of the quickest and least painful ways to die – so long as the executioner is skilled, his blade sharp, and the condemned sits still.

Quick it may be, but consciousness is nevertheless believed to continue after the spinal cord is severed. A study in rats in 1991 found that it takes 2.7 seconds for the brain to consume the oxygen from the blood in the head; the equivalent figure for humans has been calculated at 7 seconds.

It took the axeman three attempts to sever the head of Mary Queen of Scots in 1587. He had to finish the job with a knife.

Decades earlier in 1541, Margaret Pole, the Countess of Salisbury, was executed at the Tower of London. She was dragged to the block, but refused to lay her head down. The inexperienced axe man made a gash in her shoulder rather than her neck. According to some reports, she leapt from the block and was chased by the executioner, who struck 11 times before she died.

Electrocution

In accidental electrocutions, usually involving low, household current, the most common cause of death is arrhythmia, stopping the heart dead. Unconsciousness ensues after the standard 10 seconds, says Richard Trohman, a cardiologist at Rush University in Chicago. One study of electrocution deaths in Montreal, Canada found that 92 per cent had probably died from arrhythmia.

Higher currents can produce nearly immediate unconsciousness.

Fall from a height

A high fall is certainly among the speediest ways to die: terminal velocity (no pun intended) is about 200 kilometres per hour, achieved from a height of about 145 metres or more. A study of deadly falls in Hamburg, Germany, found that 75 per cent of victims died in the first few seconds or minutes after landing.

The exact cause of death varies, depending on the landing surface and the person’s posture. People are especially unlikely to arrive at the hospital alive if they land on their head – more common for shorter (under 10 metres) and higher (over 25 metres) falls. A 1981 analysis of 100 suicidal jumps from the Golden Gate Bridge in San Francisco – height: 75 metres, velocity on impact with the water: 120 kilometres per hour – found numerous causes of instantaneous death including massive lung bruising, collapsed lungs, exploded hearts or damage to major blood vessels and lungs through broken ribs.

Survivors of great falls often report the sensation of time slowing down. The natural reaction is to struggle to maintain a feet-first landing, resulting in fractures to the leg bones, lower spinal column and life-threatening broken pelvises. The impact travelling up through the body can also burst the aorta and heart chambers. Yet this is probably still the safest way to land, despite the force being concentrated in a small area: the feet and legs form a “crumple zone” which provides some protection to the major internal organs.

Some experienced climbers or skydivers who have survived a fall report feeling focused, alert and driven to ensure they landed in the best way possible: relaxed, legs bent and, where possible, ready to roll.

Hanging

Suicides and old-fashioned “short drop” executions cause death by strangulation; the rope puts pressure on the windpipe and the arteries to the brain. This can cause unconsciousness in 10 seconds, but it takes longer if the noose is incorrectly sited. Witnesses of public hangings often reported victims “dancing” in pain at the end of the rope, struggling violently as they asphyxiated. Death only ensues after many minutes, as shown by the numerous people being resuscitated after being cut down – even after 15 minutes.

When public executions were outlawed in Britain in 1868, hangmen looked for a less performance-oriented approach. They eventually adopted the “long-drop” method, using a lengthier rope so the victim reached a speed that broke their necks. It had to be tailored to the victim’s weight, however, as too great a force could rip the head clean off, a professionally embarrassing outcome for the hangman.

Despite the public boasting of several prominent executioners in late 19th-century Britain, a 1992 analysis of the remains of 34 prisoners found that in only about half of cases was the cause of death wholly or partly due to spinal trauma. Just one-fifth showed the classic “hangman’s fracture” between the second and third cervical vertebrae. The others died in part from asphyxiation.

Michael Spence, an anthropologist at the University of Western Ontario in London, Canada, has found similar results in US victims. He concluded, however, that even if asphyxiation played a role, the trauma of the drop would have rapidly rendered all of them unconscious. “What the hangmen were looking for was quick cessation of activity,” he says. “And they knew enough about their craft to ensure that happened. The thing they feared most was decapitation.”

Lethal injection

US-government approved, but is it really painless?

Lethal injection was designed in Oklahoma in 1977 as a humane alternative to the electric chair. The state medical examiner and chair of anaesthesiology settled on a series of three drug injections. First comes the anaesthetic thiopental to speed away any feelings of pain, followed by a paralytic agent called pancuronium to stop breathing. Finally potassium chloride is injected, which stops the heart almost instantly.

Each drug is supposed to be administered in a lethal dose, a redundancy to ensure speedy and humane death. However, eyewitnesses have reported inmates convulsing, heaving and attempting to sit up during the procedure, suggesting the cocktail is not always completely effective.

Explosive decompression

In real life there has been just one fatal space depressurisation accident. This occurred on the Russian Soyuz-11 mission in 1971, when a seal leaked upon re-entry into the Earth’s atmosphere; upon landing all three flight crew were found dead from asphyxiation.

Most of our knowledge of depressurisation comes from animal experiments and the experiences of pilots in accidents at very high altitudes. When the external air pressure suddenly drops, the air in the lungs expands, tearing the fragile gas exchange tissues. This is especially damaging if the victim neglects to exhale prior to decompression or tries to hold their breath. Oxygen begins to escape from the blood and lungs.

Experiments on dogs in the 1950s showed that 30 to 40 seconds after the pressure dropped, their bodies began to swell as the water in their tissues vaporised, though the tight seal of their skin prevented them from “bursting”. The heart rate rises initially, then plummets. Bubbles of water vapour form in the blood and travel through the circulatory system, obstructing blood flow. After about a minute, blood effectively stops circulating.

Human survivors of rapid decompression accidents include pilots whose planes lost pressure, or in one case a NASA technician who accidentally depressurised his flight suit inside a vacuum chamber. They often report an initial pain, like being hit in the chest, and may remember feeling air escape from their lungs and being unable to inhale. Time to loss of consciousness was generally less than 15 seconds.

How it feels to drown, get decapitated, get electrocuted, and more Read More »

But we’ve always done it this way …

From James Bennett’s “Let’s talk about Python 3.0” (The B-List: 5 December 2008):

There’s an old joke, so old that I don’t even know for certain where it originated, that’s often used to explain why big corporations do things the way they do. It involves some monkeys, a cage, a banana and a fire hose.

You build a nice big room-sized cage, and in one end of it you put five monkeys. In the other end you put the banana. Then you stand by with the fire hose. Sooner or later one of the monkeys is going to go after the banana, and when it does you turn on the fire hose and spray the other monkeys with it. Replace the banana if needed, then repeat the process. Monkeys are pretty smart, so they’ll figure this out pretty quickly: “If anybody goes for the banana, the rest of us get the hose.” Soon they’ll attack any member of their group who tries to go for the banana.

Once this happens, you take one monkey out of the cage and bring in a new one. The new monkey will come in, try to make friends, then probably go for the banana. And the other monkeys, knowing what this means, will attack him to stop you from using the hose on them. Eventually the new monkey will get the message, and will even start joining in on the attack if somebody else goes for the banana. Once this happens, take another of the original monkeys out of the cage and bring in another new monkey.

After repeating this a few times, there will come a moment when none of the monkeys in the cage have ever been sprayed by the fire hose; in fact, they’ll never even have seen the hose. But they’ll attack any monkey who goes to get the banana. If the monkeys could speak English, and if you could ask them why they attack anyone who goes for the banana, their answer would almost certainly be: “Well, I don’t really know, but that’s how we’ve always done things around here.”

This is a startlingly good analogy for the way lots of corporations do things: once a particular process is entrenched (and especially after a couple rounds of employee turnover), there’s nobody left who remembers why the company does things this way. There’s nobody who stops to think about whether this is still a good way to do things, or whether it was even a good idea way back at the beginning. The process continues through nothing more than inertia, and anyone who suggests a change is likely to end up viciously attacked by monkeys.

But this is also a really good analogy for the way a lot of software works: a function or a class or a library was written, once upon a time, and maybe at the time it was a good idea. Maybe now it’s not such a good idea, and actually causes more problems than it solves, but hey, that’s the way we’ve always done things around here, and who are you to suggest a change? Should I go get the fire hose?

But we’ve always done it this way … Read More »

An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers now number about two dozen, although no one outside Google knows the exact number or their locations. They come online and, under the direction of the Google File System, automatically start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled to Google’s specifications so that servers can be mounted on both their front and back sides. This effectively allows a standard rack, which normally holds 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.
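
A minimal sketch of what that “plug and play” behaviour implies, using invented names and data structures rather than anything from Google’s actual systems: a newly racked machine announces itself to a registry, and pending work is handed to it without human intervention.

    # Hypothetical sketch: a registry of nodes and a trivial work assigner.
    available_nodes = {}          # node name -> number of free worker slots
    pending_work = ["crawl-batch-17", "index-merge-03", "ads-rollup-98"]

    def register_node(name, slots):
        """A newly connected server announces itself; no human steps needed."""
        available_nodes[name] = slots

    def assign_work():
        """Hand pending jobs to whichever nodes report free slots."""
        assignments = []
        for node, slots in available_nodes.items():
            while slots > 0 and pending_work:
                assignments.append((node, pending_work.pop(0)))
                slots -= 1
            available_nodes[node] = slots
        return assignments

    register_node("rack42-server07", slots=2)   # plugged in, detected, used
    print(assign_work())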

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple software products).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo hardware and software are more loosely coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

[Figure: 2005 focuses of Google, MSN, and Yahoo]

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
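
The mechanism described above, replication plus a lookup of surviving copies, can be sketched in a few lines; the log structure and device names below are assumptions for illustration, not Google’s actual design.

    import random

    # Hypothetical replica log: file name -> locations of its copies.
    # The text above says each file is written three to six times.
    replica_log = {
        "index-chunk-0042": ["rack12/disk3", "rack07/disk1", "dc2/rack5/disk0"],
    }

    failed_devices = {"rack12/disk3"}   # e.g. a crashed disk or a whole rack

    def read_file(name):
        """Return a usable replica location, skipping failed devices."""
        copies = replica_log.get(name, [])
        live = [loc for loc in copies if loc not in failed_devices]
        if not live:
            raise IOError(f"no live replica of {name}")
        return random.choice(live)      # any surviving copy will do

    print(read_file("index-chunk-0042"))  # the job carries on from a surviving copy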

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed shortcuts to programming. One example is a library of canned functions Google created to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
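
Arnold’s “canned functions” sound like the programming style Google later described publicly as MapReduce. As a hedged sketch of the idea (the helper below is invented, not Google’s library), the programmer supplies only the per-record logic and a way to combine results, and the canned routine handles the fan-out across processors:

    from multiprocessing import Pool

    def count_words(document):
        """The per-record logic the programmer actually writes."""
        return len(document.split())

    def parallel_map_sum(func, records, workers=4):
        """Stand-in for a canned routine: fan the work out, then combine.
        In Google's real system the fan-out spans thousands of machines."""
        with Pool(workers) as pool:
            return sum(pool.map(func, records))

    if __name__ == "__main__":
        docs = ["the quick brown fox", "jumps over", "the lazy dog"]
        print(parallel_map_sum(count_words, docs))  # 9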

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives, which fail at the rate of one per 1,000 drives per day (a back-of-the-envelope sketch of what that means at scale follows this list).
  • Power supplies, which fail at a lower rate.
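
A back-of-the-envelope sketch of why that drive-failure rate demands smart software rather than manual triage; the fleet size below is an assumed round number, not a figure from the text:

    drives_in_fleet = 100_000    # assumed fleet size, for illustration only
    failure_rate = 1 / 1_000     # one failure per 1,000 drives per day, from the list above

    failures_per_day = drives_in_fleet * failure_rate
    print(failures_per_day)      # 100.0 -> roughly a hundred dead drives every single day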

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.

An analysis of Google’s technology, 2005 Read More »

The NSA and threats to privacy

From James Bamford’s “Big Brother Is Listening” (The Atlantic: April 2006):

This legislation, the 1978 Foreign Intelligence Surveillance Act, established the FISA court—made up of eleven judges handpicked by the chief justice of the United States—as a secret part of the federal judiciary. The court’s job is to decide whether to grant warrants requested by the NSA or the FBI to monitor communications of American citizens and legal residents. The law allows the government up to three days after it starts eavesdropping to ask for a warrant; every violation of FISA carries a penalty of up to five years in prison. From May 18, 1979, when the court opened for business, until the end of 2004, it granted 18,742 NSA and FBI applications; it turned down only four outright.

Such facts worry Jonathan Turley, a George Washington University law professor who worked for the NSA as an intern while in law school in the 1980s. The FISA “courtroom,” hidden away on the top floor of the Justice Department building (because even its location is supposed to be secret), is actually a heavily protected, windowless, bug-proof installation known as a Sensitive Compartmented Information Facility, or SCIF.

It is true that the court has been getting tougher. From 1979 through 2000, it modified only two out of 13,087 warrant requests. But from the start of the Bush administration, in 2001, the number of modifications increased to 179 out of 5,645 requests. Most of those—173—involved what the court terms “substantive modifications.”

Contrary to popular perception, the NSA does not engage in “wiretapping”; it collects signals intelligence, or “sigint.” In contrast to the image we have from movies and television of an FBI agent placing a listening device on a target’s phone line, the NSA intercepts entire streams of electronic communications containing millions of telephone calls and e-mails. It runs the intercepts through very powerful computers that screen them for particular names, telephone numbers, Internet addresses, and trigger words or phrases. Any communications containing flagged information are forwarded by the computer for further analysis.
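
What Bamford describes is, at bottom, bulk filtering of intercepted traffic against watch lists. A minimal sketch of that kind of screening, with an invented message format and watch-list entries that merely echo the anecdotes in this article, might look like this:

    # Invented watch list and intercepts, for illustration only.
    WATCH_LIST = {
        "phones": {"+33-5-0000-0000"},
        "terms": {"microturbo", "c-802", "letter of credit"},
    }

    def flag_for_analysis(message):
        """Return True if the intercept matches any watch-list entry."""
        if message.get("phone") in WATCH_LIST["phones"]:
            return True
        text = message["body"].lower()
        return any(term in text for term in WATCH_LIST["terms"])

    intercepts = [
        {"phone": "+1-555-0100", "body": "Fax re: letter of credit for the engine order"},
        {"phone": "+1-555-0199", "body": "Lunch on Friday?"},
    ]

    flagged = [m for m in intercepts if flag_for_analysis(m)]
    print(len(flagged))  # 1 -> forwarded by the computer for further analysis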

Names and information on the watch lists are shared with the FBI, the CIA, the Department of Homeland Security, and foreign intelligence services. Once a person’s name is in the files, even if nothing incriminating ever turns up, it will likely remain there forever. There is no way to request removal, because there is no way to confirm that a name is on the list.

In December of 1997, in a small factory outside the southern French city of Toulouse, a salesman got caught in the NSA’s electronic web. Agents working for the NSA’s British partner, the Government Communications Headquarters, learned of a letter of credit, valued at more than $1.1 million, issued by Iran’s defense ministry to the French company Microturbo. According to NSA documents, both the NSA and the GCHQ concluded that Iran was attempting to secretly buy from Microturbo an engine for the embargoed C-802 anti-ship missile. Faxes zapping back and forth between Toulouse and Tehran were intercepted by the GCHQ, which sent them on not just to the NSA but also to the Canadian and Australian sigint agencies, as well as to Britain’s MI6. The NSA then sent the reports on the salesman making the Iranian deal to a number of CIA stations around the world, including those in Paris and Bonn, and to the U.S. Commerce Department and the Customs Service. Probably several hundred people in at least four countries were reading the company’s communications.

Such events are central to the current debate involving the potential harm caused by the NSA’s warrantless domestic eavesdropping operation. Even though the salesman did nothing wrong, his name made its way into the computers and onto the watch lists of intelligence, customs, and other secret and law-enforcement organizations around the world. Maybe nothing will come of it. Maybe the next time he tries to enter the United States or Britain he will be denied, without explanation. Maybe he will be arrested. As the domestic eavesdropping program continues to grow, such uncertainties may plague innocent Americans whose names are being run through the supercomputers even though the NSA has not met the established legal standard for a search warrant. It is only when such citizens are turned down while applying for a job with the federal government—or refused when seeking a Small Business Administration loan, or turned back by British customs agents when flying to London on vacation, or even placed on a “no-fly” list—that they will realize that something is very wrong. But they will never learn why.

General Michael Hayden, director of the NSA from 1999 to 2005 and now principal deputy director of national intelligence, noted in 2002 that during the 1990s, e-communications “surpassed traditional communications. That is the same decade when mobile cell phones increased from 16 million to 741 million—an increase of nearly 50 times. That is the same decade when Internet users went from about 4 million to 361 million—an increase of over 90 times. Half as many land lines were laid in the last six years of the 1990s as in the whole previous history of the world. In that same decade of the 1990s, international telephone traffic went from 38 billion minutes to over 100 billion. This year, the world’s population will spend over 180 billion minutes on the phone in international calls alone.”

Intercepting communications carried by satellite is fairly simple for the NSA. The key conduits are the thirty Intelsat satellites that ring the Earth, 22,300 miles above the equator. Many communications from Europe, Africa, and the Middle East to the eastern half of the United States, for example, are first uplinked to an Intelsat satellite and then downlinked to AT&T’s ground station in Etam, West Virginia. From there, phone calls, e-mails, and other communications travel on to various parts of the country. To listen in on that rich stream of information, the NSA built a listening post fifty miles away, near Sugar Grove, West Virginia. Consisting of a group of very large parabolic dishes, hidden in a heavily forested valley and surrounded by tall hills, the post can easily intercept the millions of calls and messages flowing every hour into the Etam station. On the West Coast, high on the edge of a bluff overlooking the Okanogan River, near Brewster, Washington, is the major commercial downlink for communications to and from Asia and the Pacific. Consisting of forty parabolic dishes, it is reportedly the largest satellite antenna farm in the Western Hemisphere. A hundred miles to the south, collecting every whisper, is the NSA’s western listening post, hidden away on a 324,000-acre Army base in Yakima, Washington. The NSA posts collect the international traffic beamed down from the Intelsat satellites over the Atlantic and Pacific. But each also has a number of dishes that appear to be directed at domestic telecommunications satellites.

Until recently, most international telecommunications flowing into and out of the United States traveled by satellite. But faster, more reliable undersea fiber-optic cables have taken the lead, and the NSA has adapted. The agency taps into the cables that don’t reach our shores by using specially designed submarines, such as the USS Jimmy Carter, to attach a complex “bug” to the cable itself. This is difficult, however, and undersea taps are short-lived because the batteries last only a limited time. The fiber-optic transmission cables that enter the United States from Europe and Asia can be tapped more easily at the landing stations where they come ashore. With the acquiescence of the telecommunications companies, it is possible for the NSA to attach monitoring equipment inside the landing station and then run a buried encrypted fiber-optic “backhaul” line to NSA headquarters at Fort Meade, Maryland, where the river of data can be analyzed by supercomputers in near real time.

Tapping into the fiber-optic network that carries the nation’s Internet communications is even easier, as much of the information transits through just a few “switches” (similar to the satellite downlinks). Among the busiest are MAE East (Metropolitan Area Ethernet), in Vienna, Virginia, and MAE West, in San Jose, California, both owned by Verizon. By accessing the switch, the NSA can see who’s e-mailing with whom over the Internet cables and can copy entire messages. Last September, the Federal Communications Commission further opened the door for the agency. The 1994 Communications Assistance for Law Enforcement Act required telephone companies to rewire their networks to provide the government with secret access. The FCC has now extended the act to cover “any type of broadband Internet access service” and the new Internet phone services—and ordered company officials never to discuss any aspect of the program.

The National Security Agency was born in absolute secrecy. Unlike the CIA, which was created publicly by a congressional act, the NSA was brought to life by a top-secret memorandum signed by President Truman in 1952, consolidating the country’s various military sigint operations into a single agency. Even its name was secret, and only a few members of Congress were informed of its existence—and they received no information about some of its most important activities. Such secrecy has lent itself to abuse.

During the Vietnam War, for instance, the agency was heavily involved in spying on the domestic opposition to the government. Many of the Americans on the watch lists of that era were there solely for having protested against the war. … Even so much as writing about the NSA could land a person a place on a watch list.

For instance, during World War I, the government read and censored thousands of telegrams—the e-mail of the day—sent hourly by telegraph companies. Though the end of the war brought with it a reversion to the Radio Act of 1912, which guaranteed the secrecy of communications, the State and War Departments nevertheless joined together in May of 1919 to create America’s first civilian eavesdropping and code-breaking agency, nicknamed the Black Chamber. By arrangement, messengers visited the telegraph companies each morning and took bundles of hard-copy telegrams to the agency’s offices across town. These copies were returned before the close of business that day.

A similar tale followed the end of World War II. In August of 1945, President Truman ordered an end to censorship. That left the Signal Security Agency (the military successor to the Black Chamber, which was shut down in 1929) without its raw intelligence—the telegrams provided by the telegraph companies. The director of the SSA sought access to cable traffic through a secret arrangement with the heads of the three major telegraph companies. The companies agreed to turn all telegrams over to the SSA, under a plan code-named Operation Shamrock. It ran until the government’s domestic spying programs were publicly revealed, in the mid-1970s.

Frank Church, the Idaho Democrat who led the first probe into the National Security Agency, warned in 1975 that the agency’s capabilities

“could be turned around on the American people, and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it is done, is within the reach of the government to know. Such is the capacity of this technology.”

The NSA and threats to privacy Read More »

How Obama raised money in Silicon Valley & using the Net

From Joshua Green’s “The Amazing Money Machine” (The Atlantic: June 2008):

That early fund-raiser [in February 2007] and others like it were important to Obama in several respects. As someone attempting to build a campaign on the fly, he needed money to operate. As someone who dared challenge Hillary Clinton, he needed a considerable amount of it. And as a newcomer to national politics, though he had grassroots appeal, he needed to establish credibility by making inroads to major donors—most of whom, in California as elsewhere, had been locked down by the Clinton campaign.

Silicon Valley was a notable exception. The Internet was still in its infancy when Bill Clinton last ran for president, in 1996, and most of the immense fortunes had not yet come into being; the emerging tech class had not yet taken shape. So, unlike the magnates in California real estate (Walter Shorenstein), apparel (Esprit founder Susie Tompkins Buell), and entertainment (name your Hollywood celeb), who all had long-established loyalty to the Clintons, the tech community was up for grabs in 2007. In a colossal error of judgment, the Clinton campaign never made a serious approach, assuming that Obama would fade and that lack of money and cutting-edge technology couldn’t possibly factor into what was expected to be an easy race. Some of her staff tried to arrange “prospect meetings” in Silicon Valley, but they were overruled. “There was massive frustration about not being able to go out there and recruit people,” a Clinton consultant told me last year. As a result, the wealthiest region of the wealthiest state in the nation was left to Barack Obama.

Furthermore, in Silicon Valley’s unique reckoning, what everyone else considered to be Obama’s major shortcomings—his youth, his inexperience—here counted as prime assets.

[John Roos, Obama’s Northern California finance chair and the CEO of the Palo Alto law firm Wilson Sonsini Goodrich & Rosati]: “… we recognize what great companies have been built on, and that’s ideas, talent, and inspirational leadership.”

The true killer app on My.BarackObama.com is the suite of fund-raising tools. You can, of course, click on a button and make a donation, or you can sign up for the subscription model, as thousands already have, and donate a little every month. You can set up your own page, establish your target number, pound your friends into submission with e-mails to pony up, and watch your personal fund-raising “thermometer” rise. “The idea,” [Joe Rospars, a veteran of Dean’s campaign who had gone on to found an Internet fund-raising company and became Obama’s new-media director] says, “is to give them the tools and have them go out and do all this on their own.”

“What’s amazing,” says Peter Leyden of the New Politics Institute, “is that Hillary built the best campaign that has ever been done in Democratic politics on the old model—she raised more money than anyone before her, she locked down all the party stalwarts, she assembled an all-star team of consultants, and she really mastered this top-down, command-and-control type of outfit. And yet, she’s getting beaten by this political start-up that is essentially a totally different model of the new politics.”

Before leaving Silicon Valley, I stopped by the local Obama headquarters. It was a Friday morning in early March, and the circus had passed through town more than a month earlier, after Obama lost the California primary by nine points. Yet his headquarters was not only open but jammed with volunteers. Soon after I arrived, everyone gathered around a speakerphone, and Obama himself, between votes on the Senate floor, gave a brief hortatory speech telling volunteers to call wavering Edwards delegates in Iowa before the county conventions that Saturday (they took place two months after the presidential caucuses). Afterward, people headed off to rows of computers, put on telephone headsets, and began punching up phone numbers on the Web site, ringing a desk bell after every successful call. The next day, Obama gained nine delegates, including a Clinton delegate.

The most striking thing about all this was that the headquarters is entirely self-sufficient—not a dime has come from the Obama campaign. Instead, everything from the computers to the telephones to the doughnuts and coffee—even the building’s rent and utilities—is user-generated, arranged and paid for by local volunteers. It is one of several such examples across the country, and no other campaign has put together anything that can match this level of self-sufficiency.

But while his rivals continued to depend on big givers, Obama gained more and more small donors, until they finally eclipsed the big ones altogether. In February, the Obama campaign reported that 94 percent of their donations came in increments of $200 or less, versus 26 percent for Clinton and 13 percent for McCain. Obama’s claim of 1,276,000 donors through March is so large that Clinton doesn’t bother to compete; she stopped regularly providing her own number last year.

“If the typical Gore event was 20 people in a living room writing six-figure checks,” Gorenberg told me, “and the Kerry event was 2,000 people in a hotel ballroom writing four-figure checks, this year for Obama we have stadium rallies of 20,000 people who pay absolutely nothing, and then go home and contribute a few dollars online.” Obama himself shrewdly capitalizes on both the turnout and the connectivity of his stadium crowds by routinely asking them to hold up their cell phones and punch in a five-digit number to text their contact information to the campaign—to win their commitment right there on the spot.

How Obama raised money in Silicon Valley & using the Net Read More »

50% of people infected with personality-changing brain parasites from cats

From Carl Zimmer’s “The Return of the Puppet Masters” (Corante: 17 January 2006):

I was investigating the remarkable ability parasites have to manipulate the behavior of their hosts. The lancet fluke Dicrocoelium dendriticum, for example, forces its ant host to clamp itself to the tips of grass blades, where a grazing mammal might eat it. It’s in the fluke’s interest to get eaten, because only by getting into the gut of a sheep or some other grazer can it complete its life cycle. Another fluke, Euhaplorchis californiensis, causes infected fish to shimmy and jump, greatly increasing the chance that wading birds will grab them.

Those parasites were weird enough, but then I got to know Toxoplasma gondii. This single-celled parasite lives in the guts of cats, shedding eggs that can be picked up by rats and other animals that just so happen to be eaten by cats. Toxoplasma forms cysts throughout its intermediate host’s body, including the brain. And yet a Toxoplasma-ridden rat is perfectly healthy. That makes good sense for the parasite, since a cat would not be particularly interested in eating a dead rat. But scientists at Oxford discovered that the parasite changes the rats in one subtle but vital way.

The scientists studied the rats in a six-foot by six-foot outdoor enclosure. They used bricks to turn it into a maze of paths and cells. In each corner of the enclosure they put a nest box along with a bowl of food and water. On each of the nests they added a few drops of a particular odor. On one they added the scent of fresh straw bedding, on another the bedding from a rat’s nest, on another the scent of rabbit urine, on another, the urine of a cat. When they set healthy rats loose in the enclosure, the animals rooted around curiously and investigated the nests. But when they came across the cat odor, they shied away and never returned to that corner. This was no surprise: the odor of a cat triggers a sudden shift in the chemistry of rat brains that brings on intense anxiety. (When researchers test anti-anxiety drugs on rats, they use a whiff of cat urine to make them panic.) The anxiety attack made the healthy rats shy away from the odor and in general made them leery of investigating new things. Better to lie low and stay alive.

Then the researchers put Toxoplasma-carrying rats in the enclosure. Rats carrying the parasite are for the most part indistinguishable from healthy ones. They can compete for mates just as well and have no trouble feeding themselves. The only difference, the researchers found, is that they are more likely to get themselves killed. The scent of a cat in the enclosure didn’t make them anxious, and they went about their business as if nothing was bothering them. They would explore around the odor at least as often as they did anywhere else in the enclosure. In some cases, they even took a special interest in the spot and came back to it over and over again.

The scientists speculated that Toxoplasma was secreting some substance that was altering the patterns of brain activity in the rats. This manipulation likely evolved through natural selection, since parasites that were more likely to end up in cats would leave more offspring.

The Oxford scientists knew that humans can be hosts to Toxoplasma, too. People can become infected with its eggs by handling soil or kitty litter. For most people, the infection causes no harm. Only if a person’s immune system is weak does Toxoplasma grow uncontrollably. That’s why pregnant women are advised not to handle kitty litter, and why toxoplasmosis is a serious risk for people with AIDS. Otherwise, the parasite lives quietly in people’s bodies (and brains). It’s estimated that about half of all people on Earth are infected with Toxoplasma.

Parasitologist Jaroslav Flegr of Charles University in Prague administered psychological questionnaires to people infected with Toxoplasma and controls. Those infected, he found, show a small, but statistically significant, tendency to be more self-reproaching and insecure. Paradoxically, infected women, on average, tend to be more outgoing and warmhearted than controls, while infected men tend to be more jealous and suspicious.

… [E. Fuller Torrey of the Stanley Medical Research Institute in Bethesda, Maryland] and his colleagues had noticed some intriguing links between Toxoplasma and schizophrenia. Infection with the parasite has been associated with damage to a certain class of neurons (astrocytes). So has schizophrenia. Pregnant women with high levels of Toxoplasma antibodies in their blood were more likely to give birth to children who would later develop schizophrenia. Torrey lays out more links in this 2003 paper. While none is a smoking gun, they are certainly food for thought. It’s conceivable that exposure to Toxoplasma causes subtle changes in most people’s personality, but in a small minority, it has more devastating effects.

50% of people infected with personality-changing brain parasites from cats Read More »

Microsoft’s programmers, evaluated by an engineer

From John Wharton’s “The Origins of DOS” (Microprocessor Report: 3 October 1994):

In August of 1981, soon after Microsoft had acquired full rights to 86-DOS, Bill Gates visited Santa Clara in an effort to persuade Intel to abandon a joint development project with DRI and endorse MS-DOS instead. It was I – the Intel applications engineer then responsible for iRMX-86 and other 16-bit operating systems – who was assigned the task of performing a technical evaluation of the 86-DOS software. It was I who first informed Gates that the software he just bought was not, in fact, fully compatible with CP/M 2.2. At the time I had the distinct impression that, until then, he’d thought the entire OS had been cloned.

The strong impression I drew 13 years ago was that Microsoft programmers were untrained, undisciplined, and content merely to replicate other people’s ideas, and that they did not seem to appreciate the importance of defining operating systems and user interfaces with an eye to the future.

Microsoft’s programmers, evaluated by an engineer Read More »

How movies are moved around on botnets

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Figure 2.11 illustrates the use of botnets for selling stolen intellectual property, in this case movies, TV shows, or video. The diagram is based on information from the Pyramid of Internet Piracy created by the Motion Picture Association of America (MPAA) and an actual case. To start the process, a supplier rips a movie or software from an existing DVD or uses a camcorder to record a first run movie in the theaters. These are either burnt to DVDs to be sold on the black market or they are sold or provided to a Release Group. The Release Group is likely to be an organized crime group, excuse me, business associates who wish to invest in the entertainment industry. I am speculating that the Release Group engages (hires) a botnet operator that can meet their delivery and performance specifications. The botherder then commands the botnet clients to retrieve the media from the supplier and store it in a participating botnet client. These botnet clients may be qualified according to their processor speed and the nature of their Internet connection. The huge Internet pipe, fast connection, and lax security at most universities make them a prime target for this form of botnet application. MPAA calls these clusters of high speed locations “Topsites.”

. . .

According to the MPAA, 44 percent of all movie piracy is attributed to college students. Therefore it makes sense that the Release Groups would try to use university botnet clients as Topsites. The next groups in the chain are called Facilitators. They operate Web sites and search engines and act as Internet directories. These may be Web sites for which you pay a monthly fee or a fee per download. Finally, individuals download the films for their own use or list them for download via peer-to-peer sharing applications like Gnutella and BitTorrent.

How movies are moved around on botnets Read More »