
My new book – Google Apps Deciphered – is out!

I’m really proud to announce that my 5th book is now out & available for purchase: Google Apps Deciphered: Compute in the Cloud to Streamline Your Desktop. My other books include:

(I’ve also contributed to two others: Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux and Microsoft Vista for IT Security Professionals.)

Google Apps Deciphered is a guide to setting up Google Apps, migrating to it, customizing it, and using it to improve productivity, communications, and collaboration. I walk you through each leading component of Google Apps individually, and then show you exactly how to make them work together on the Web or by integrating them with your favorite desktop apps. I provide practical insights on Google Apps programs for email, calendaring, contacts, wikis, word processing, spreadsheets, presentations, video, and even Google’s new web browser Chrome. My aim was to collect and present the tips and tricks I’ve gained by using and setting up Google Apps for clients, family, and friends.

Here’s the table of contents:

  • 1: Choosing an Edition of Google Apps
  • 2: Setting Up Google Apps
  • 3: Migrating Email to Google Apps
  • 4: Migrating Contacts to Google Apps
  • 5: Migrating Calendars to Google Apps
  • 6: Managing Google Apps Services
  • 7: Setting Up Gmail
  • 8: Things to Know About Using Gmail
  • 9: Integrating Gmail with Other Software and Services
  • 10: Integrating Google Contacts with Other Software and Services
  • 11: Setting Up Google Calendar
  • 12: Things to Know About Using Google Calendar
  • 13: Integrating Google Calendar with Other Software and Services
  • 14: Things to Know About Using Google Docs
  • 15: Integrating Google Docs with Other Software and Services
  • 16: Setting Up Google Sites
  • 17: Things to Know About Using Google Sites
  • 18: Things to Know About Using Google Talk
  • 19: Things to Know About Using Start Page
  • 20: Things to Know About Using Message Security and Recovery
  • 21: Things to Know About Using Google Video
  • Appendix A: Backing Up Google Apps
  • Appendix B: Dealing with Multiple Accounts
  • Appendix C: Google Chrome: A Browser Built for Cloud Computing

If you want to know more about Google Apps and how to use it, then I know you’ll enjoy and learn from Google Apps Deciphered. You can read about and buy the book at Amazon (http://www.amazon.com/Google-Apps-Deciphered-Compute-Streamline/dp/0137004702) for $26.39. If you have any questions or comments, don’t hesitate to contact me at scott at granneman dot com.


A single medium, with a single search engine, & a single info source

From Nicholas Carr’s “All hail the information triumvirate!” (Rough Type: 22 January 2009):

Today, another year having passed, I did the searches [on Google] again. And guess what:

World War II: #1
Israel: #1
George Washington: #1
Genome: #1
Agriculture: #1
Herman Melville: #1
Internet: #1
Magna Carta: #1
Evolution: #1
Epilepsy: #1

Yes, it’s a clean sweep for Wikipedia.

The first thing to be said is: Congratulations, Wikipedians. You rule. Seriously, it’s a remarkable achievement. Who would have thought that a rag-tag band of anonymous volunteers could achieve what amounts to hegemony over the results of the most popular search engine, at least when it comes to searches for common topics.

The next thing to be said is: what we seem to have here is evidence of a fundamental failure of the Web as an information-delivery service. Three things have happened, in a blink of history’s eye: (1) a single medium, the Web, has come to dominate the storage and supply of information, (2) a single search engine, Google, has come to dominate the navigation of that medium, and (3) a single information source, Wikipedia, has come to dominate the results served up by that search engine. Even if you adore the Web, Google, and Wikipedia – and I admit there’s much to adore – you have to wonder if the transformation of the Net from a radically heterogeneous information source to a radically homogeneous one is a good thing. Is culture best served by an information triumvirate?

It’s hard to imagine that Wikipedia articles are actually the very best source of information for all of the many thousands of topics on which they now appear as the top Google search result. What’s much more likely is that the Web, through its links, and Google, through its search algorithms, have inadvertently set into motion a very strong feedback loop that amplifies popularity and, in the end, leads us all, lemminglike, down the same well-trod path – the path of least resistance. You might call this the triumph of the wisdom of the crowd. I would suggest that it would be more accurately described as the triumph of the wisdom of the mob. The former sounds benign; the latter, less so.


Social networks can be used to manipulate affinity groups

From Ronald A. Cass’ “Madoff Exploited the Jews” (The Wall Street Journal: 18 December 2008):

Steven Spielberg. Elie Wiesel. Mort Zuckerman. Frank Lautenberg. Yeshiva University. As I read the list of people and enterprises reportedly bilked to the tune of $50 billion by Bernard Madoff, I recalled a childhood in which my father received bad news by asking first, “Was it a Jew?” My father coupled sensitivity to anti-Semitism with special sympathy for other Jews. In contrast, Mr. Madoff, it seems, targeted other Jews, drawing them in at least in some measure because of a shared faith.

The Madoff tale is striking in part because it is like stealing from family. Yet frauds that prey on people who share bonds of religion or ethnicity, who travel in the same circles, are quite common. Two years ago the Securities and Exchange Commission issued a warning about “affinity fraud.” The SEC ticked off a series of examples of schemes that were directed at members of a community: Armenian-Americans, Baptist Church members, Jehovah’s Witnesses, African-American church groups, Korean-Americans. In each case, the perpetrator relied on the fact that being from the same community provided a reason to trust the sales pitch, to believe it was plausible that someone from the same background would give you a deal that, if offered by someone without such ties, would sound too good to be true.

The sense of common heritage, of community, also makes it less seemly to ask hard questions. Pressing a fellow parishioner or club member for hard information is like demanding receipts from your aunt — it just doesn’t feel right. Hucksters know that, they play on it, and they count on our trust to make their confidence games work.

The level of affinity and of trust may be especially high among Jews. The Holocaust and generations of anti-Semitic laws and practices around the world made reliance on other Jews, and care for them, a survival instinct. As a result, Jews are often an easy target both for fund-raising appeals and fraud. But affinity plays a role in many groups, making members more trusting of appeals within the group.


DIY genetic engineering

From Marcus Wohlsen’s “Amateurs are trying genetic engineering at home” (AP: 25 December 2008):

Now, tinkerers are working at home with the basic building blocks of life itself.

Using homemade lab equipment and the wealth of scientific knowledge available online, these hobbyists are trying to create new life forms through genetic engineering — a field long dominated by Ph.D.s toiling in university and corporate laboratories.

In her San Francisco dining room lab, for example, 31-year-old computer programmer Meredith L. Patterson is trying to develop genetically altered yogurt bacteria that will glow green to signal the presence of melamine, the chemical that turned Chinese-made baby formula and pet food deadly.

Many of these amateurs may have studied biology in college but have no advanced degrees and are not earning a living in the biotechnology field. Some proudly call themselves “biohackers” — innovators who push technological boundaries and put the spread of knowledge before profits.

In Cambridge, Mass., a group called DIYbio is setting up a community lab where the public could use chemicals and lab equipment, including a used freezer, scored for free off Craigslist, that drops to 80 degrees below zero, the temperature needed to keep many kinds of bacteria alive.

Patterson, the computer programmer, wants to insert the gene for fluorescence into yogurt bacteria, applying techniques developed in the 1970s.

She learned about genetic engineering by reading scientific papers and getting tips from online forums. She ordered jellyfish DNA for a green fluorescent protein from a biological supply company for less than $100. And she built her own lab equipment, including a gel electrophoresis chamber, or DNA analyzer, which she constructed for less than $25, versus more than $200 for a low-end off-the-shelf model.


Social networking and “friendship”

From danah boyd’s “Friends, Friendsters, and MySpace Top 8: Writing Community Into Being on Social Network Sites” (First Monday: December 2006):

John’s reference to “gateway Friends” concerns a specific technological affordance unique to Friendster. Because the company felt it would make the site more intimate, Friendster limits users from surfing to Profiles beyond four degrees (Friends of Friends of Friends of Friends). When people login, they can see how many Profiles are “in their network” where the network is defined by the four degrees. For users seeking to meet new people, growing this number matters. For those who wanted it to be intimate, keeping the number smaller was more important. In either case, the number of people in one’s network was perceived as directly related to the number of friends one had.

“I am happy with the number of friends I have. I can access over 26,000 profiles, which is enough for me!” — Abby

The number of Friends one has definitely affects the size of one’s network but connecting to Collectors plays a much more significant role. Because these “gateway friends” (a.k.a. social network hubs) have lots of Friends who are not connected to each other, they expand the network pretty rapidly. Thus, connecting to Collectors or connecting to people who connect to Collectors opens you up to a large network rather quickly.
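
This “in your network” number is just a breadth-first count of profiles within four hops. A minimal Python sketch (with an invented friend graph; Friendster’s real data model is an assumption here) shows why connecting to a single Collector inflates it so quickly:

    from collections import deque

    def network_size(friends, start, max_degrees=4):
        """Count profiles reachable within max_degrees hops of `start`
        (Friendster's 'in your network' number, as described above)."""
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            person, depth = queue.popleft()
            if depth == max_degrees:
                continue
            for other in friends.get(person, ()):
                if other not in seen:
                    seen.add(other)
                    queue.append((other, depth + 1))
        return len(seen) - 1  # don't count yourself

    # Toy graph: 'abby' knows one Collector with many otherwise-unconnected Friends.
    friends = {
        "abby": {"collector", "mary"},
        "mary": {"abby"},
        "collector": {"abby"} | {f"fan{i}" for i in range(50)},
    }
    print(network_size(friends, "abby"))  # the Collector alone adds 50 profiles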

While Collectors could be anyone interested in amassing many Friends, fake Profiles were developed to aid in this process. These Fakesters included characters, celebrities, objects, icons, institutions, and ideas. For example, Homer Simpson had a Profile alongside Jesus and Brown University. By connecting people with shared interests or affiliations, Fakesters supported networking between like-minded individuals. Because play and connecting were primary incentives for many Fakesters, they welcomed any and all Friends. Likewise, people who wanted access to more people connected to Fakesters. Fakesters helped centralize the network and two Fakesters — Burning Man and Ali G — reached mass popularity with over 10,000 Friends each before the Web site’s creators put an end to their collecting and deleted both accounts. This began the deletion of all Fakesters in what was eventually termed the Fakester Genocide [8].

While Friendster was irritated by fake Profiles, MySpace embraced this practice. One of MySpace’s early strategies was to provide a place for everyone who was rejected from Friendster or who didn’t want to be on a dating site [9]. Bands who had been kicked off of Friendster were some of the earliest MySpace users. Over time, movie stars, politicians, porn divas, comedians, and other celebrities joined the fray. Often, the person behind these Profiles was not the celebrity but a manager. Corporations began creating Profiles for their products and brands. While Friendster eventually began allowing such fake Profiles for a fee, MySpace never charged people for their commercial uses.

Investigating Friendship in LiveJournal, Kate Raynes-Goldie and Fono (2005) found that there was tremendous inconsistency in why people Friended others. They primarily found that Friendship stood for: content, offline facilitator, online community, trust, courtesy, declaration, or nothing. When I asked participants about their practices on Friendster and MySpace, I found very similar incentives. The most common reasons for Friendship that I heard from users [11] were:

1. Actual friends
2. Acquaintances, family members, colleagues
3. It would be socially inappropriate to say no because you know them
4. Having lots of Friends makes you look popular
5. It’s a way of indicating that you are a fan (of that person, band, product, etc.)
6. Your list of Friends reveals who you are
7. Their Profile is cool so being Friends makes you look cool
8. Collecting Friends lets you see more people (Friendster)
9. It’s the only way to see a private Profile (MySpace)
10. Being Friends lets you see someone’s bulletins and their Friends-only blog posts (MySpace)
11. You want them to see your bulletins, private Profile, private blog (MySpace)
12. You can use your Friends list to find someone later
13. It’s easier to say yes than no

These incentives account for a variety of different connections. While the first three reasons all concern people that you know, the rest can explain why people connect to a lot of people that they do not know. Most reveal how technical affordances affect people’s incentives to connect.

Raynes-Goldie and Fono (2005) also found that there is a great deal of social anxiety and drama provoked by Friending in LiveJournal (LJ). In LJ, Friendship does not require reciprocity. Anyone can list anyone else as a Friend; this articulation is public but there is no notification. The value of Friendship on LJ is deeply connected to the privacy settings and subscription processes. The norm on LJ is to read others’ entries through a “Friends page.” This page is an aggregation of all of an individual’s Friends’ posts. When someone posts an LJ entry, they have a choice as to whether the post should be public, private, Friends-only, or available to subgroups of Friends. In this way, it is necessary to be someone’s Friend to have access to Friends-only posts. To locate how the multiple and conflicting views of Friendship cause tremendous conflict and misunderstanding on LJ, Raynes-Goldie and Fono speak of “hyperfriending.” This process is quite similar to what takes place on other social network sites, but there are some differences. Because Friends-only posts are commonplace, not being someone’s Friend is a huge limitation to information access. Furthermore, because reciprocity is not structurally required, there’s a much greater social weight to recognizing someone’s Friendship and reciprocating intentionally. On MySpace and Friendster, there is little to lose by being loose with Friendship and more to gain; the perception is that there is much more to lose on LJ.

While users can scroll through their list of Friends, not all Friends are displayed on the participant’s Profile. Most social network sites display Friends in the order in which their account was created or their last login date. By implementing a “Top 8” feature, MySpace changed the social dynamics around the ordering of Friends. Initially, “Top 8” allowed users to select eight Friends to display on their Profile. More recently, that feature was changed to “Top Friends” as users have more options in how many people they could list [12]. Many users will only list people that they know and celebrities that they admire in their Top Friends, often as a way to both demarcate their identity and signal meaningful relationships with others.

There are many advantages to the Top Friends feature. It allows people to show connections that really say something about who they are. It also serves as a bookmark to the people that matter. By choosing to list the people who one visits the most frequently, simply going to one’s Profile provides a set of valuable links.

“As a kid, you used your birthday party guest list as leverage on the playground. ‘If you let me play I’ll invite you to my birthday party.’ Then, as you grew up and got your own phone, it was all about someone being on your speed dial. Well today it’s the MySpace Top 8. It’s the new dangling carrot for gaining superficial acceptance. Taking someone off your Top 8 is your new passive aggressive power play when someone pisses you off.” — Nadine

There are a handful of social norms that pervade Top 8 culture. Often, the person in the upper left (“1st” position) is a significant other, dear friend, or close family member. Reciprocity is another salient component of Top Friends dynamics. If Susan lists Mary on her Top 8, she expects Mary to reciprocate. To acknowledge this, Mary adds a Comment to Susan’s page saying, “Thanx for puttin me on ur Top 8! I put you on mine 2.” By publicly acknowledging this addition, Mary is making certain Susan’s viewers recognize Mary’s status on Susan’s list. Of course, just being in someone’s list is not always enough. As Samantha explains, “Friends get into fights because they’re not 1st on someone’s Top 8, or somebody else is before them.” While some people are ecstatic to be added, there are many more that are frustrated because they are removed or simply not listed.

The Top Friends feature requires participants to actively signal their relationship with others. Such a system makes it difficult to be vague about who matters the most, although some tried by explaining on their bulletins what theme they are using to choose their Top 8 this week: “my Sagittarius friends,” “my basketball team,” and “people whose initials are BR.” Still others relied on fake Profiles for their Top 8.

The networked nature of impressions does not only affect the viewer — this is how newcomers decided what to present in the first place. When people first joined Friendster, they took cues from the people who invited them. Three specific subcultures dominated the early adopters — bloggers, attendees of the Burning Man [14] festival, and gay men mostly living in New York. If the invitee was a Burner, their Profile would probably be filled with references to the event with images full of half-naked, costumed people running around the desert. As such, newcomers would get the impression that it was a site for Burners and they would create a Profile that displayed that facet of their identity. In deciding who to invite, newcomers would perpetuate the framing by only inviting people who were part of the Burning Man subculture.

Interestingly, because of this process, Burners believed that the site was for Burners, gay men thought it was a gay dating site, and bloggers were ecstatic to have a geek socializing tool. The reason each group got this impression had to do with the way in which context was created on these systems. Rather than having the context dictated by the environment itself, context emerged through Friends networks. As a result, being socialized into Friendster meant connecting to Friends who reinforced the contextual information of early adopters.

The growth of MySpace followed a similar curve. One of the key early adopter groups were hipsters living in the Silverlake neighborhood of Los Angeles. They were passionate about indie rock music and many were musicians, promoters, club goers, etc. As MySpace took hold, long before any press was covering the site, MySpace took off amongst 20/30-something urban socializers, musicians, and teenagers. The latter group may not appear obvious, but teenagers are some of the most active music consumers — they follow music culture avidly, even when they are unable to see the bands play live due to age restrictions. As the site grew, the teenagers and 20/30-somethings pretty much left each other alone, although bands bridged these groups. It was not until the site was sold to News Corp. for US$580 million in the summer of 2005 that the press began covering the phenomenon. The massive press helped it grow larger, penetrating those three demographics more deeply but also attracting new populations, namely adults who are interested in teenagers (parents, teachers, pedophiles, marketers).

When context is defined by whom one Friends, and addressing multiple audiences simultaneously complicates all relationships, people must make hard choices. Joshua Meyrowitz (1985) highlights this problem in reference to television. In the early 1960s, Stokely Carmichael regularly addressed segregated black and white audiences about the values of Black Power. Depending on his audience, he used very different rhetorical styles. As his popularity grew, he began to attract media attention and was invited to speak on TV and radio. Unfortunately, this was more of a curse than a blessing because the audiences he would reach through these mediums included both black and white communities. With no way to reconcile the two different rhetorical styles, he had to choose. In choosing to maintain his roots in front of white listeners, Carmichael permanently alienated white society from the messages of Black Power.

Notes

10. Friendster originally limited users to 150 Friends. It is no accident that they chose 150, as this is the “Dunbar number.” In his research on gossip and grooming, Robin Dunbar argues that there is a cognitive limit to the number of relations that one can maintain. People can only keep gossip with 150 people at any given time (Dunbar, 1998). By capping Friends at 150, Friendster either misunderstood Dunbar or did not realize that their users were actually connecting to friends from the past with whom they are not currently engaging.

12. Eight was the maximum number of Friends that the system initially let people have. Some users figured out how to hack the system to display more Friends; there are entire bulletin boards dedicated to teaching others how to hack this. Consistently, upping the limit was the number one request that the company received. In the spring of 2006, MySpace launched an ad campaign for X-Men. In return for Friending X-Men, users were given the option to have 12, 16, 20, or 24 Friends in their Top Friends section. Millions of users did exactly that. In late June, this feature was introduced to everyone, regardless of Friending X-Men. While eight is no longer the limit, people move between calling it Top 8 or Top Friends. I will use both terms interchangeably, even when the number of Friends might be greater than eight.


Many layers of cloud computing, or just one?

From Nicholas Carr’s “Further musings on the network effect and the cloud” (Rough Type: 27 October 2008):

I think O’Reilly did a nice job of identifying the different layers of the cloud computing business – infrastructure, development platform, applications – and I think he’s right that they’ll have different economic and competitive characteristics. One thing we don’t know yet, though, is whether those layers will in the long run exist as separate industry sectors or whether they’ll collapse into a single supply model. In other words, will the infrastructure suppliers also come to dominate the supply of apps? Google and Microsoft are obviously trying to play across all three layers, while Amazon so far seems content to focus on the infrastructure business and Salesforce is expanding from the apps layer to the development platform layer. The degree to which the layers remain, or don’t remain, discrete business sectors will play a huge role in determining the ultimate shape, economics, and degree of consolidation in cloud computing.

Let me end on a speculative note: There’s one layer in the cloud that O’Reilly failed to mention, and that layer is actually on top of the application layer. It’s what I’ll call the device layer – encompassing all the various appliances people will use to tap the cloud – and it may ultimately come to be the most interesting layer. A hundred years ago, when Tesla, Westinghouse, Insull, and others were building the cloud of that time – the electric grid – companies viewed the effort in terms of the inputs to their business: in particular, the power they needed to run the machines that produced the goods they sold. But the real revolutionary aspect of the electric grid was not the way it changed business inputs – though that was indeed dramatic – but the way it changed business outputs. After the grid was built, we saw an avalanche of new products outfitted with electric cords, many of which were inconceivable before the grid’s arrival. The real fortunes were made by those companies that thought most creatively about the devices that consumers would plug into the grid. Today, we’re already seeing hints of the device layer – of the cloud as output rather than input. Look at the way, for instance, that the little old iPod has shaped the digital music cloud.


Problems with airport security

From Jeffrey Goldberg’s “The Things He Carried” (The Atlantic: November 2008):

Because the TSA’s security regimen seems to be mainly thing-based—most of its 44,500 airport officers are assigned to truffle through carry-on bags for things like guns, bombs, three-ounce tubes of anthrax, Crest toothpaste, nail clippers, Snapple, and so on—I focused my efforts on bringing bad things through security in many different airports, primarily my home airport, Washington’s Reagan National, the one situated approximately 17 feet from the Pentagon, but also in Los Angeles, New York, Miami, Chicago, and at the Wilkes-Barre/Scranton International Airport (which is where I came closest to arousing at least a modest level of suspicion, receiving a symbolic pat-down—all frisks that avoid the sensitive regions are by definition symbolic—and one question about the presence of a Leatherman Multi-Tool in my pocket; said Leatherman was confiscated and is now, I hope, living with the loving family of a TSA employee). And because I have a fair amount of experience reporting on terrorists, and because terrorist groups produce large quantities of branded knickknacks, I’ve amassed an inspiring collection of al-Qaeda T-shirts, Islamic Jihad flags, Hezbollah videotapes, and inflatable Yasir Arafat dolls (really). All these things I’ve carried with me through airports across the country. I’ve also carried, at various times: pocketknives, matches from hotels in Beirut and Peshawar, dust masks, lengths of rope, cigarette lighters, nail clippers, eight-ounce tubes of toothpaste (in my front pocket), bottles of Fiji Water (which is foreign), and, of course, box cutters. I was selected for secondary screening four times—out of dozens of passages through security checkpoints—during this extended experiment. At one screening, I was relieved of a pair of nail clippers; during another, a can of shaving cream.

During one secondary inspection, at O’Hare International Airport in Chicago, I was wearing under my shirt a spectacular, only-in-America device called a “Beerbelly,” a neoprene sling that holds a polyurethane bladder and drinking tube. The Beerbelly, designed originally to sneak alcohol—up to 80 ounces—into football games, can quite obviously be used to sneak up to 80 ounces of liquid through airport security. (The company that manufactures the Beerbelly also makes something called a “Winerack,” a bra that holds up to 25 ounces of booze and is recommended, according to the company’s Web site, for PTA meetings.) My Beerbelly, which fit comfortably over my beer belly, contained two cans’ worth of Bud Light at the time of the inspection. It went undetected. The eight-ounce bottle of water in my carry-on bag, however, was seized by the federal government.

Schneier and I walked to the security checkpoint. “Counterterrorism in the airport is a show designed to make people feel better,” he said. “Only two things have made flying safer: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.” This assumes, of course, that al-Qaeda will target airplanes for hijacking, or target aviation at all. “We defend against what the terrorists did last week,” Schneier said. He believes that the country would be just as safe as it is today if airport security were rolled back to pre-9/11 levels. “Spend the rest of your money on intelligence, investigations, and emergency response.”

We took our shoes off and placed our laptops in bins. Schneier took from his bag a 12-ounce container labeled “saline solution.”

“It’s allowed,” he said. Medical supplies, such as saline solution for contact-lens cleaning, don’t fall under the TSA’s three-ounce rule.

“What’s allowed?” I asked. “Saline solution, or bottles labeled saline solution?”

“Bottles labeled saline solution. They won’t check what’s in it, trust me.”

They did not check. As we gathered our belongings, Schnei­er held up the bottle and said to the nearest security officer, “This is okay, right?” “Yep,” the officer said. “Just have to put it in the tray.”

“Maybe if you lit it on fire, he’d pay attention,” I said, risking arrest for making a joke at airport security. (Later, Schneier would carry two bottles labeled saline solution—24 ounces in total—through security. An officer asked him why he needed two bottles. “Two eyes,” he said. He was allowed to keep the bottles.)

We were in the clear. But what did we prove?

“We proved that the ID triangle is hopeless,” Schneier said.

The ID triangle: before a passenger boards a commercial flight, he interacts with his airline or the government three times—when he purchases his ticket; when he passes through airport security; and finally at the gate, when he presents his boarding pass to an airline agent. It is at the first point of contact, when the ticket is purchased, that a passenger’s name is checked against the government’s no-fly list. It is not checked again, and for this reason, Schneier argued, the process is merely another form of security theater.

“The goal is to make sure that this ID triangle represents one person,” he explained. “Here’s how you get around it. Let’s assume you’re a terrorist and you believe your name is on the watch list.” It’s easy for a terrorist to check whether the government has cottoned on to his existence, Schneier said; he simply has to submit his name online to the new, privately run CLEAR program, which is meant to fast-pass approved travelers through security. If the terrorist is rejected, then he knows he’s on the watch list.

To slip through the only check against the no-fly list, the terrorist uses a stolen credit card to buy a ticket under a fake name. “Then you print a fake boarding pass with your real name on it and go to the airport. You give your real ID, and the fake boarding pass with your real name on it, to security. They’re checking the documents against each other. They’re not checking your name against the no-fly list—that was done on the airline’s computers. Once you’re through security, you rip up the fake boarding pass, and use the real boarding pass that has the name from the stolen credit card. Then you board the plane, because they’re not checking your name against your ID at boarding.”

What if you don’t know how to steal a credit card?

“Then you’re a stupid terrorist and the government will catch you,” he said.

What if you don’t know how to download a PDF of an actual boarding pass and alter it on a home computer?

“Then you’re a stupid terrorist and the government will catch you.”

I couldn’t believe that what Schneier was saying was true—in the national debate over the no-fly list, it is seldom, if ever, mentioned that the no-fly list doesn’t work. “It’s true,” he said. “The gap blows the whole system out of the water.”
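
Schneier’s point is easier to see if you model the ID triangle as three independent checks that never share state. The sketch below is a deliberately simplified model with hypothetical names, not any real airline system; it only restates the gap described above, namely that the watch-list lookup sees nothing but the name on the purchased ticket:

    NO_FLY_LIST = {"Known Terrorist"}

    def purchase_ticket(name_on_ticket):
        """Check 1: the only point where the no-fly list is consulted."""
        assert name_on_ticket not in NO_FLY_LIST, "sale refused"
        return {"name": name_on_ticket}

    def security_checkpoint(id_name, boarding_pass_name):
        """Check 2: compares two documents to each other, never to the watch list."""
        assert id_name == boarding_pass_name, "names don't match"

    def gate(ticket, boarding_pass_name):
        """Check 3: compares the pass to the ticket record, never to an ID."""
        assert boarding_pass_name == ticket["name"], "no such reservation"

    # The gap Schneier describes: no single check ever sees both the traveller's
    # real identity and the no-fly list, so the triangle never pins down one person.
    ticket = purchase_ticket("Innocuous Name")     # watch list sees this name only
    security_checkpoint("Real Name", "Real Name")  # documents only checked against each other
    gate(ticket, "Innocuous Name")                 # pass only checked against the ticket record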


Bruce Schneier on wholesale, constant surveillance

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.


How the Storm botnet defeats anti-virus programs

From Lisa Vaas’ “Storm Worm Botnet Lobotomizing Anti-Virus Programs” (eWeek: 24 October 2007):

According to an Oct. 22 posting by Sophos analyst Richard Cohen, the Storm botnet – Sophos calls it Dorf, and it’s also known as Ecard malware – is dropping files that call a routine that gets Windows to tell it every time a new process is started. The malware checks the process file name against an internal list and kills the ones that match – sometimes. But Storm has taken a new twist: It now would rather leave processes running and just patch entry points of loading processes that might pose a threat to it. Then, when processes such as anti-virus programs run, they simply return a value of 0.

The strategy means that users won’t be alarmed by their anti-virus software not running. Even more ominously, the technique is designed to fool NAC (network access control) systems, which bar insecure clients from registering on a network by checking to see whether a client is running anti-virus software and whether it’s patched.

It’s the latest evidence of why Storm is “the scariest and most substantial threat” security researchers have ever seen, he said. Storm is patient, it’s resilient, it’s adaptive in that it can defeat anti-virus products in multiple ways (programmatically, it changes its signature every 30 minutes), it’s invisible because it comes with a rootkit built in and hides at the kernel level, and it’s clever enough to change every few weeks.

Hence the hush-hush nature of research around Storm. Corman said he can tell us that it’s now accurately pegged at 6 million, but he can’t tell us who came up with the figure, or how. Besides retribution, Storm’s ability to morph means that those who know how to watch it are jealously guarding their techniques. “None of the researchers wanted me to say anything about it,” Corman said. “They’re afraid of retaliation. They fear that if we disclose their unique means of finding information on Storm,” the botnet herder will change tactics yet again and the window into Storm will slam shut.


One group files 99.9% of all complaints about TV content

From Christopher M. Fairman’s “Fuck” (bepress Legal Series: 7 March 2006):

The PTC [Parents Television Council] is a perfect example of the way word taboo is perpetuated. The group’s own irrational word fetish – which they try to then impose on others – fuels unhealthy attitudes toward sex that then furthers the taboo status of the word. See supra notes 119-121 and accompanying text (describing this taboo effect). The PTC has even created a pull-down, web-based form that allows people to file an instant complaint with the FCC about specific broadcasts, apparently without regard to whether you actually saw the program or not. See, e.g., FCC Indecency Complaint Form, https://www.parentstv.org/ptc/action/sweeps/main.asp (last visited Feb. 10, 2006) (allowing instant complaints to be filed against episodes of NCIS, Family Guy, and/or The Vibe Awards). This squeaky wheel of a special interest group literally dominates FCC complaints. Consider this data. In 2003, the PTC was responsible for filing 99.86% of all indecency complaints. In 2004, the figure was up to 99.9%.


CopyBot copies all sorts of items in Second Life

From Glyn Moody’s “The duplicitous inhabitants of Second Life” (The Guardian: 23 November 2006):

What would happen to business and society if you could easily make a copy of anything – not just MP3s and DVDs, but clothes, chairs and even houses? That may not be a problem most of us will have to confront for a while yet, but the 1.5m residents of the virtual world Second Life are already grappling with this issue.

A new program called CopyBot allows Second Life users to duplicate repeatedly certain elements of any object in the vicinity – and sometimes all of it. That’s awkward in a world where such virtual goods can be sold for real money. When CopyBot first appeared, some retailers in Second Life shut up shop, convinced that their virtual goods were about to be endlessly copied and rendered worthless. Others protested, and suggested that in the absence of scarcity, Second Life’s economy would collapse.

Instead of sending a flow of pictures of the virtual world to the user as a series of pixels – something that would be impractical to calculate – the information would be transmitted as a list of basic shapes that were re-created on the user’s PC. For example, a virtual house might be a cuboid with rectangles representing windows and doors, cylinders for the chimney stacks etc.

This meant the local world could be sent in great detail very compactly, but also that the software on the user’s machine had all the information for making a copy of any nearby object. It’s like the web: in order to display a page, the browser receives not an image of the page, but all the underlying HTML code to generate that page, which also means that the HTML of any web page can be copied perfectly. Thus CopyBot – written by a group called libsecondlife as part of an open-source project to create Second Life applications – or something like it was bound to appear one day.
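
Copying follows directly from that design: whatever list of primitives the server streams to a client is, by definition, enough data to rebuild the object. Here is a Python sketch of the idea (the field names are invented, not Second Life’s actual wire protocol):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Primitive:
        shape: str      # "cube", "cylinder", ...
        size: tuple     # (x, y, z)
        texture: str    # texture asset the client must fetch to render

    def render(prims: List[Primitive]):
        """A viewer has to receive every primitive in full just to draw the scene."""
        for p in prims:
            print(f"draw {p.shape} {p.size} with {p.texture}")

    def copybot(prims: List[Primitive]) -> List[Primitive]:
        """'Copying' is just re-emitting what was already sent to you.
        Scripts are the exception: they run server-side and are never transmitted."""
        return [Primitive(p.shape, p.size, p.texture) for p in prims]

    house = [Primitive("cube", (10, 8, 6), "brick"),
             Primitive("cylinder", (1, 1, 4), "stone")]  # chimney
    render(house)
    duplicate = copybot(house)  # a perfect copy, made entirely client-side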

Liberating the economy has led to a boom in creativity, just as Rosedale hoped. It is in constant expansion as people buy virtual land, and every day more than $500,000 (£263,000) is spent buying virtual objects. But the downside is that unwanted copying is potentially a threat to the substantial businesses selling virtual goods that have been built up, and a concern for the real-life companies such as IBM, Adidas and Nissan which are beginning to enter Second Life.

Just as it is probably not feasible to stop “grey goo” – the Second Life equivalent of spam, which takes the form of self-replicating objects malicious “griefers” use to gum up the main servers – so it is probably technically impossible to stop copying. Fortunately, not all aspects of an object can be duplicated. To create complex items – such as a virtual car that can be driven – you use a special programming language to code their realistic behaviour. CopyBot cannot duplicate these programs because they are never passed to the user, but run on the Linden Lab’s computers.

As for the elements that you can copy, such as shape and texture, Rosedale explains: “What we’re going to do is add a lot of attribution. You’ll be able to easily see when an object or texture was first created,” – and hence if something is a later copy.


An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers – now numbering about two dozen, although no one outside Google knows the exact number or their locations. They come online and automatically, under the direction of the Google File System, start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled for Google to hold servers on their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo hardware and software are more loosely-coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineer’s ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
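
This replicate-then-look-up behaviour matches what Google describes publicly in its Google File System paper: copies of each file live on several machines, a master’s metadata records where they are, and a read simply moves to the next replica when one is missing. A minimal sketch of that failover logic, using invented names rather than GFS’s real interfaces:

    # Which servers hold a replica of each file: the master's metadata,
    # i.e. the "log" the Googleplex consults when a copy is unavailable.
    replica_locations = {"index-shard-042": ["server-a", "server-b", "server-c"]}
    live_servers = {"server-b", "server-c"}        # server-a has crashed

    def read(filename: str) -> str:
        for server in replica_locations[filename]:
            if server in live_servers:             # skip dead replicas silently
                return f"contents of {filename} from {server}"
        raise IOError(f"all replicas of {filename} are gone")

    print(read("index-shard-042"))  # served from server-b; no engineer woken at 3 a.m.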

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed short cuts to programming. An example is Google’s creating a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
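
The best-known example of these canned routines is the MapReduce library Google later described in a published paper: the programmer writes a map function and a reduce function, and the framework handles partitioning, scheduling, and retries across the cluster. Here is a toy, single-machine Python sketch of that programming model (the real system runs the same two callbacks across thousands of servers):

    from collections import defaultdict

    def map_word_count(doc: str):
        for word in doc.split():
            yield word, 1                 # the programmer writes only this...

    def reduce_word_count(word, counts):
        return word, sum(counts)          # ...and this; the library does the rest

    def run_mapreduce(docs, mapper, reducer):
        """Stand-in for the cluster framework: shuffle map output by key, then reduce."""
        groups = defaultdict(list)
        for doc in docs:
            for key, value in mapper(doc):
                groups[key].append(value)
        return dict(reducer(k, v) for k, v in groups.items())

    docs = ["the cloud runs the cloud", "cheap hardware smart software"]
    print(run_mapreduce(docs, map_word_count, reduce_word_count))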

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure prone components are:

  • Fans.
  • IDE drives which fail at the rate of one per 1,000 drives per day.
  • Power supplies which fail at a lower rate.

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.


Why people “friend” others on social networks

From danah boyd’s “Facebook’s ‘Privacy Trainwreck’: Exposure, Invasion, and Drama” (8 September 2006):

Why does everyone assume that Friends equals friends? Here are some of the main reasons why people friend other people on social network sites:

1. Because they are actual friends
2. To be nice to people that you barely know (like the folks in your class)
3. To keep face with people that they know but don’t care for
4. As a way of acknowledging someone you think is interesting
5. To look cool because that link has status
6. (MySpace) To keep up with someone’s blog posts, bulletins or other such bits
7. (MySpace) To circumnavigate the “private” problem that you were forced to use cuz of your parents
8. As a substitute for bookmarking or favoriting
9. Cuz it’s easier to say yes than no if you’re not sure


Richard Stallman on why “intellectual property” is a misnomer

From Richard Stallman’s “Transcript of Richard Stallman at the 4th international GPLv3 conference; 23rd August 2006” (FSF Europe: 23 August 2006):

Anyway, the term “intellectual property” is a propaganda term which should never be used, because merely using it, no matter what you say about it, presumes it makes sense. It doesn’t really make sense, because it lumps together several different laws that are more different than similar.

For instance, copyright law and patent law have a little bit in common, but all the details are different and their social effects are different. To try to treat them as they were one thing, is already an error.

To even talk about anything that includes copyright and patent law, means you’re already mistaken. That term systematically leads people into mistakes. But, copyright law and patent law are not the only ones it includes. It also includes trademark law, for instance, which has nothing in common with copyright or patent law. So anyone talking about “quote intellectual property unquote”, is always talking about all of those and many others as well and making nonsensical statements.

So, when you say that you especially object to it when it’s used for Free Software, you’re suggesting it might be a little more legitimate when talking about proprietary software. Yes, software can be copyrighted. And yes, in some countries techniques can be patented. And certainly there can be trademark names for programs, which I think is fine. There’s no problem there. But these are three completely different things, and any attempt to mix them up – any practice which encourages people to lump them together is a terribly harmful practice. We have to totally reject the term “quote intellectual property unquote”. I will not let any excuse convince me to accept the meaningfulness of that term.

When people say “well, what would you call it?”, the answer is that I deny there is an “it” there. There are three, and many more, laws there, and I talk about these laws by their names, and I don’t mix them up.

Richard Stallman on why “intellectual property” is a misnomer Read More »

Richard Stallman on proprietary software

From Richard Stallman’s “Transcript of Richard Stallman at the 4th international GPLv3 conference; 23rd August 2006” (FSF Europe: 23 August 2006):

I hope to see all proprietary software wiped out. That’s what I aim for. That would be a world in which our freedom is respected. A proprietary program is a program that is not free. That is to say, a program that does not respect the user’s essential rights. That’s evil. A proprietary program is part of a predatory scheme where people who don’t value their freedom are drawn into giving it up in order to gain some kind of practical convenience. And then once they’re there, it’s harder and harder to get out. Our goal is to rescue people from this.

Richard Stallman on proprietary software Read More »

Richard Stallman on the 4 freedoms

From Richard Stallman’s “Transcript of Richard Stallman at the 4th international GPLv3 conference; 23rd August 2006” (FSF Europe: 23 August 2006):

Specifically, this refers to four essential freedoms, which are the definition of Free Software.

Freedom zero is the freedom to run the program, as you wish, for any purpose.

Freedom one is the freedom to study the source code and then change it so that it does what you wish.

Freedom two is the freedom to help your neighbour, which is the freedom to distribute, including publication, copies of the program to others when you wish.

Freedom three is the freedom to help build your community, which is the freedom to distribute, including publication, your modified versions, when you wish.

These four freedoms make it possible for users to live an upright, ethical life as a member of a community and enable us individually and collectively to have control over what our software does and thus to have control over our computing.

Richard Stallman on the 4 freedoms Read More »

Debt collection business opens up huge security holes

From Mark Gibbs’ “Debt collectors mining your secrets” (Network World: 19 June 2008):

[Bud Hibbs, a consumer advocate] told me any debt collection company has access to an incredible amount of personal data from hundreds of possible sources and the motivation to mine it.

What intrigued me after talking with Hibbs was how the debt collection business works. It turns out pretty much anyone can set up a collections operation by buying a package of bad debts for around $40,000, hiring collectors who will work on commission, and applying for the appropriate city and state licenses. Once a company is set up, it can buy access to Acxiom and Experian and other databases and start hunting down defaulters.

So, here we have an entire industry dedicated to buying, selling and mining your personal data that has been derived from who knows where. Even better, because the large credit reporting companies use a lot of outsourcing for data entry, much of this data has probably been processed in India or Pakistan where, of course, the data security and integrity are guaranteed.

Hibbs points out that, with no prohibitions on sending data abroad and with the likes of, say, the Russian mafia being interested in the personal information, the probability of identity theft from these foreign data centers is enormous.

Debt collection business opens up huge security holes Read More »

More problems with voting, election 2008

From Ian Urbina’s “High Turnout May Add to Problems at Polling Places” (The New York Times: 3 November 2008):

Two-thirds of voters will mark their choice with a pencil on a paper ballot that is counted by an optical scanning machine, a method considered far more reliable and verifiable than touch screens. But paper ballots bring their own potential problems, voting experts say.

The scanners can break down, leading to delays and confusion for poll workers and voters. And the paper ballots of about a third of all voters will be counted not at the polling place but later at a central county location. That means that if a voter has made an error — not filling in an oval properly, for example, a mistake often made by the kind of novice voters who will be flocking to the polls — it will not be caught until it is too late. As a result, those ballots will be disqualified.

About a fourth of voters will still use electronic machines that offer no paper record to verify that their choice was accurately recorded, even though these machines are vulnerable to hacking and crashes that drop votes. The machines will be used by most voters in Indiana, Kentucky, Pennsylvania, Tennessee, Texas and Virginia. Eight other states, including Georgia, Maryland, New Jersey and South Carolina, will use touch-screen machines with no paper trails.

Florida has switched to its third ballot system in the past three election cycles, and glitches associated with the transition have caused confusion at early voting sites, election officials said. The state went back to using scanned paper ballots this year after touch-screen machines in Sarasota County failed to record any choice for 18,000 voters in a fiercely contested House race in 2006.

Voters in Colorado, Tennessee, Texas and West Virginia have reported using touch-screen machines that at least initially registered their choice for the wrong candidate or party.

Most states have passed laws requiring paper records of every vote cast, which experts consider an important safeguard. But most of them do not have strong audit laws to ensure that machine totals are vigilantly checked against the paper records.

In Ohio, Secretary of State Jennifer Brunner sued the maker of the touch-screen equipment used in half of her state’s 88 counties after an investigation showed that the machines “dropped” votes in recent elections when memory cards were uploaded to computer servers.

A report released last month by several voting rights groups found that eight of the states using touch-screen machines, including Colorado and Virginia, had no guidance or requirement to stock emergency paper ballots at the polls if the machines broke down.

More problems with voting, election 2008 Read More »

Matthew, the blind phone phreaker

From Kevin Poulsen’s “Teenage Hacker Is Blind, Brash and in the Crosshairs of the FBI” (Wired: 29 February 2008):

At 4 in the morning of May 1, 2005, deputies from the El Paso County Sheriff’s Office converged on the suburban Colorado Springs home of Richard Gasper, a TSA screener at the local Colorado Springs Municipal Airport. They were expecting to find a desperate, suicidal gunman holding Gasper and his daughter hostage.

“I will shoot,” the gravelly voice had warned in a phone call to police minutes earlier. “I’m not afraid. I will shoot, and then I will kill myself, because I don’t care.”

But instead of a gunman, it was Gasper himself who stepped into the glare of police floodlights. Deputies ordered Gasper’s hands up and held him for 90 minutes while searching the house. They found no armed intruder, no hostages bound in duct tape. Just Gasper’s 18-year-old daughter and his baffled parents.

A federal Joint Terrorism Task Force would later conclude that Gasper had been the victim of a new type of nasty hoax, called “swatting,” that was spreading across the United States. Pranksters were phoning police with fake murders and hostage crises, spoofing their caller IDs so the calls appear to be coming from inside the target’s home. The result: police SWAT teams rolling to the scene, sometimes bursting into homes, guns drawn.

Now the FBI thinks it has identified the culprit in the Colorado swatting as a 17-year-old East Boston phone phreak known as “Li’l Hacker.” Because he’s underage, Wired.com is not reporting Li’l Hacker’s last name. His first name is Matthew, and he poses a unique challenge to the federal justice system, because he is blind from birth.

Interviews by Wired.com with Matt and his associates, and a review of court documents, FBI reports and audio recordings, paint a picture of a young man with an uncanny talent for quick telephone con jobs. Able to commit vast amounts of information to memory instantly, Matt has mastered the intricacies of telephone switching systems, while developing an innate understanding of human psychology and organizational culture — knowledge that he uses to manipulate his patsies and torment his foes.

Matt says he ordered phone company switch manuals off the internet and paid to have them translated into Braille. He became a regular caller to internal telephone company lines, where he’d masquerade as an employee to perform tricks like tracing telephone calls, getting free phone features, obtaining confidential customer information and disconnecting his rivals’ phones.

It was, relatively speaking, mild stuff. The teen, though, soon fell in with a bad crowd. The party lines were dominated by a gang of half a dozen miscreants who informally called themselves the “Wrecking Crew” and “The Cavalry.”

By then, Matt’s reputation had taken on a life of its own, and tales of some of his hacks — perhaps apocryphal — are now legends. According to Daniels, he hacked his school’s PBX so that every phone would ring at once. Another time, he took control of a hotel elevator, sending it up and down over and over again. One story has it that Matt phoned a telephone company frame room worker at home in the middle of the night, and persuaded him to get out of bed and return to work to disconnect someone’s phone.

Matthew, the blind phone phreaker Read More »

Luddites and e-books

From Clay Shirky’s “The Siren Song of Luddism” (Britannica Blog: 19 June 2007):

…any technology that fixes a problem … threatens the people who profit from the previous inefficiency. However, Gorman omits mentioning the Luddite response: an attempt to halt the spread of mechanical looms which, though beneficial to the general populace, threatened the livelihoods of King Ludd’s band.

… printing was itself enormously disruptive, and many people wanted veto power over its spread as well. Indeed, one of the great Luddites of history (if we can apply the label anachronistically) was Johannes Trithemius, who argued in the late 1400s that the printing revolution be contained, in order to shield scribes from adverse effects.

The uncomfortable fact is that the advantages of paper have become decoupled from the advantages of publishing; a big part of preference for reading on paper is expressed by hitting the print button. As we know from Lyman and Varian’s “How Much Information?” study, “the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.”

The problems with e-books are that they are not radical enough: they dispense with the best aspect of books (paper as a display medium) while simultaneously aiming to disable the best aspects of electronic data (sharability, copyability, searchability, editability).

If we gathered every bit of output from traditional publishers, we could line them up in order of vulnerability to digital evanescence. Reference works were the first to go — phone books, dictionaries, and thesauri have largely gone digital; the encyclopedia is going, as are scholarly journals. Last to go will be novels — it will be some time before anyone reads One Hundred Years of Solitude in any format other than a traditionally printed book. Some time, however, is not forever. The old institutions, and especially publishers and libraries, have been forced to use paper not just for display, for which it is well suited, but also for storage, transport, and categorization, things for which paper is completely terrible. We are now able to recover from those disadvantages, though only by transforming the institutions organized around the older assumptions.

Luddites and e-books Read More »