How to increase donations on non-profit websites

From Jakob Nielsen’s “Donation Usability: Increasing Online Giving to Non-Profits and Charities” (Alertbox: 30 March 2009):

We asked participants what information they want to see on non-profit websites before they decide whether to donate. Their answers fell into 4 broad categories, 2 of which were the most heavily requested:

  • The organization’s mission, goals, objectives, and work.
  • How it uses donations and contributions.

That is: What are you trying to achieve, and how will you spend my money?

Sadly, only 43% of the sites we studied answered the first question on their homepage. Further, only a ridiculously low 4% answered the second question on the homepage. Although organizations typically provided these answers somewhere within the site, users often had problems finding this crucial information.

In choosing between 2 charities, people referred to 5 categories of information. However, an organization’s mission, goals, objectives, and work was by far the most important. Indeed, it was 3.6 times as important as the runner-up issue, which was the organization’s presence in the user’s own community.

Gottman on relationships

From THE MATHEMATICS OF LOVE: A Talk with John Gottman (Edge: 14 April 2004):

Another puzzle I’m working on is just what happens when a baby enters a relationship. Our study shows that the majority (67%) of couples have a precipitous drop in relationship happiness in the first 3 years of their first baby’s life. That’s tragic in terms of the climate of inter-parental hostility and depression that the baby grows up in. That affective climate between parents is the real cradle that holds the baby. And for the majority of families that cradle is unsafe for babies.

So far I believe we’re going to find that respect and affection are essential to all relationships working and contempt destroys them. It may differ from culture to culture how to communicate respect, and how to communicate affection, and how not to do it, but I think we’ll find that those are universal things.

Bob Levenson and I were very surprised when, in 1983, we found that we could actually predict, with over 90 percent accuracy, what was going to happen to a relationship over a three-year period just by examining the couple’s physiology and behavior during a conflict discussion, and later just from an interview about how the couple viewed their past. 90% accuracy!

That was surprising to us. It seemed that when people started talking about a disagreement in a mean-spirited, critical way, treating the problem as just a symptom of their partner’s inadequate character, their partner became defensive, the conflict escalated, and people started getting mean and insulting to one another. That predicted the relationship was going to fall apart. 96% of the time, the way the conflict discussion started in the first 3 minutes determined how it would go for the rest of the discussion. And four years later it was like no time had passed; their interaction style was almost identical. Also, 69% of the time they were talking about the same issues, which we realized then were “perpetual issues” that they would never solve. These were basic personality differences that never went away. She was more extroverted, or she was more of an explorer, or he was more punctual or frugal.

Some couples were caught by the web of these perpetual issues and made each other miserable; they were “gridlocked” like bumper-to-bumper traffic with these issues, while other couples had similar issues but coped with them and had a “dialogue” that even contained laughter and affection. It seemed that relationships last to the extent that you select someone whose annoying personality traits don’t send you into emotional orbit. Once again conventional wisdom was wrong. The big issue wasn’t helping couples resolve their conflicts, but moving them from gridlock to dialogue. And the secret of how to do that turned out to be having each person talk about their dream within the conflict and bringing Viktor Frankl’s existential logotherapy into the marital boxing ring. Once people talked about what they wished for and hoped for in this gridlocked conflict, and the narrative of why this was so important to them, in 86% of the cases they would move from gridlock to dialogue. Again a new door opened. Not all marital conflicts are the same. You can’t teach people a set of skills and just apply them to every issue. Some issues are deeper, they have more meaning. And then it turned out that the very issues that cause the most pain and alienation can also be the greatest sources of intimacy and connection.

Another surprise: we followed couples for as long as 20 years, and we found that there was another kind of couple that didn’t really show up on the radar; they looked fine, they weren’t mean, they didn’t escalate the conflict — but about 16 to 22 years after the wedding they started divorcing. They were often the pillars of their community. They seemed very calm and in control of their lives, and then suddenly they break up. Everyone is shocked and horrified. But we could look back at our early tapes and see the warning signs we had never seen before. Those people were people who just didn’t have very much positive connection. There wasn’t very much affection — and also especially humor — between them.

…These sorts of emotionally disconnected relationships were another important dimension of failed relationships. We learned through them that the quality of the friendship and intimacy affects the nature of conflict in a very big way.

One of the major things we found is that honoring your partner’s dreams is absolutely critical. A lot of times people have incompatible dreams — or they don’t want to honor their partner’s dreams, or they don’t want to yield power, they don’t want to share power. So that explains a lot of times why they don’t really belong together.

Psycho-physiology is an important part of this research. It’s something that Bob Levenson brought to the research initially, and then I got trained in psycho-physiology as well. The reason we’re interested in what is happening in the body is that there’s an intimate connection between what’s happening to the autonomic nervous system and what’s happening in the brain, and how well people can take in information — how well they can just process information, for example, just being able to listen to your partner — that is much harder when your heart rate is above the intrinsic rate of the heart, which is around a hundred to a hundred and five beats a minute for most people with a healthy heart.

At that point we know, from Loren Rowling’s work, that people start secreting adrenalin, and then they get into a state of diffuse physiological arousal (or DPA), so their heart is beating faster, it’s contracting harder, the arteries start getting constricted, blood is drawn away from the periphery into the trunk, the blood supply shuts down to the gut and the kidney, and all kinds of other things are happening — people are sweating, and things are happening in the brain that create a tunnel vision, one in which they perceive everything as a threat and they react as if they have been put in great danger by this conversation.

Men are different. Men have a lot of trouble when they reach a state of vigilance, when they think there’s real danger; they have a lot of trouble calming down. And there’s probably an evolutionary history to that. It functioned very well for our hominid ancestors, anthropologists think, for men to stay physiologically aroused and vigilant, in cooperative hunting and protecting the tribe, which was a role that males had very early in our evolutionary history. Whereas women had the opposite sort of role in terms of survival of the species: those women reproduced more effectively who had the milk let-down reflex, which only happens when oxytocin is secreted in the brain — as any woman who’s been breast-feeding knows, you have to be able to calm down and relax. But oxytocin is also the hormone of affiliation. So women have developed this sort of social order, caring for one another, helping one another, and affiliating, that also allows them to really calm down and have the milk let-down reflex. And so it’s one of nature’s jokes. Women can calm down, men can’t; they stay aroused and vigilant.

Physiology becomes really critical in this whole thing. A provocative finding from Alyson Shapiro’s recent dissertation is that if we take a look at how a couple argues when the woman is in the sixth month of pregnancy, we can predict over half the variation in the three-month-old baby’s vagal tone (the functioning of the vagus nerve, the major nerve of the parasympathetic branch of the autonomic nervous system, which is responsible for establishing calm and focusing attention). That vagus nerve in the baby is eventually going to be working well if the parents, during pregnancy, are fighting with each other constructively. That takes us into fetal development, a whole new realm of inquiry.

You have to study gay and lesbian couples who are committed to each other as well as heterosexual couples who are committed to each other, and try to match things as much as you can, like how long they’ve been together and the quality of their relationship. We’ve done that, and we find that there are two gender differences that really hold up.

One is that if a man presents an issue, to either a man he’s in love with or a woman he’s in love with, the man is angrier presenting the issue. And we find that when a woman receives an issue, either from a woman she loves or a man she loves, she is much more sad than a man would be receiving that same issue. It’s about anger and sadness. Why? Remember, Bowlby taught us that attachment and loss and grief are part of the same system. So women are finely tuned to attaching and connecting, to sadness and loss and grief, while men are attuned to defend, stay vigilant, attack, to anger. My friend Levenson did an acoustic startle study (that’s where you shoot off a blank pistol behind someone’s head when they least expect it). Men had a bigger heart-rate reactivity and took longer to recover, which we would expect, but what’s even more interesting is that when you asked people what they were feeling, women were scared and men were angry.

So that’s probably why those two differences have held up. Physiologically, people find over and over again in heterosexual relationships — and this hasn’t been studied yet in gay and lesbian relationships — that men have a lower flash point for increasing heart-rate arousal, and it takes them longer to recover. And not only that, but when men are trying to recover and calm down, they can’t do it very well because they keep naturally rehearsing thoughts of righteous indignation and feeling like an innocent victim. They maintain their own vigilance and arousal with these thoughts, mostly of getting even, whereas women really can distract themselves and calm down physiologically from being angered or being upset about something. If women could affiliate and secrete oxytocin when they felt afraid, they’d probably calm down even faster.

The future of security

From Bruce Schneier’s “Security in Ten Years” (Crypto-Gram: 15 December 2007):

Bruce Schneier: … The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.

Marcus Ranum: … Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.

Bruce Schneier: … I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

Old botnets dead; new botnets coming

From Joel Hruska’s “Meet Son of Storm, Srizbi 2.0: next-gen botnets come online” (Ars Technica: 15 January 2009):

First the good news: SecureWorks reports that Storm is dead, Bobax/Kraken is moribund, and both Srizbi and Rustock were heavily damaged by the McColo takedown; Srizbi is now all but silent, while Rustock remains viable. That’s three significant botnets taken out and one damaged in a single year; cue (genuine) applause.

The bad news kicks in further down the page, with a fresh list of botnets that need to be watched. Rustock and Mega-D (also known as Ozdok) are still alive and kicking, while newcomers Xarvester and Waledac could cause serious problems in 2009. Xarvester, according to Marshal, may be an updated form of Srizbi; the two share a number of common features, including:

* HTTP command and control over nonstandard ports
* Encrypted template files contain several files needed for spamming
* Bots don’t need to do their own DNS lookups to send spam
* Config files have similar format and data
* Uploads Minidump crash file

Social networking and “friendship”

From danah boyd’s “Friends, Friendsters, and MySpace Top 8: Writing Community Into Being on Social Network Sites” (First Monday: December 2006):

John’s reference to “gateway Friends” concerns a specific technological affordance unique to Friendster. Because the company felt it would make the site more intimate, Friendster limits users from surfing to Profiles beyond four degrees (Friends of Friends of Friends of Friends). When people log in, they can see how many Profiles are “in their network” where the network is defined by the four degrees. For users seeking to meet new people, growing this number matters. For those who wanted it to be intimate, keeping the number smaller was more important. In either case, the number of people in one’s network was perceived as directly related to the number of friends one had.

“I am happy with the number of friends I have. I can access over 26,000 profiles, which is enough for me!” — Abby

The number of Friends one has definitely affects the size of one’s network but connecting to Collectors plays a much more significant role. Because these “gateway friends” (a.k.a. social network hubs) have lots of Friends who are not connected to each other, they expand the network pretty rapidly. Thus, connecting to Collectors or connecting to people who connect to Collectors opens you up to a large network rather quickly.
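boyd’s point about Collectors is essentially a graph property: a user’s “network” is everyone within four hops, so one hub whose many Friends are not connected to each other inflates the count far more than several ordinary Friends do. Here is a minimal sketch in Python; the graph, names, and sizes are invented for illustration, not taken from Friendster data:

```python
from collections import deque

def profiles_in_network(graph, start, max_degrees=4):
    """Count profiles reachable within max_degrees hops (breadth-first search)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == max_degrees:
            continue  # don't expand beyond the four-degree horizon
        for friend in graph.get(person, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return len(seen) - 1  # exclude the user themself

# Two users with three Friends each, but one of abby's Friends is a
# "Collector" hub with 50 mutually unconnected Friends.
friend_lists = {
    "abby": ["b1", "b2", "hub"],
    "carl": ["c1", "c2", "c3"],
    "hub": [f"fan{i}" for i in range(50)],
}

# Friendster Friendship is mutual, so make every edge symmetric.
undirected = {}
for person, friends in friend_lists.items():
    for f in friends:
        undirected.setdefault(person, set()).add(f)
        undirected.setdefault(f, set()).add(person)

print(profiles_in_network(undirected, "abby"))  # → 53: the hub inflates the network
print(profiles_in_network(undirected, "carl"))  # → 3: three ordinary Friends
```

Same number of direct Friends, wildly different network sizes — which is why connecting to a Collector, or to someone who connects to one, “opens you up to a large network rather quickly.”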

While Collectors could be anyone interested in amassing many Friends, fake Profiles were developed to aid in this process. These Fakesters included characters, celebrities, objects, icons, institutions, and ideas. For example, Homer Simpson had a Profile alongside Jesus and Brown University. By connecting people with shared interests or affiliations, Fakesters supported networking between like-minded individuals. Because play and connecting were primary incentives for many Fakesters, they welcomed any and all Friends. Likewise, people who wanted access to more people connected to Fakesters. Fakesters helped centralize the network and two Fakesters — Burning Man and Ali G — reached mass popularity with over 10,000 Friends each before the Web site’s creators put an end to their collecting and deleted both accounts. This began the deletion of all Fakesters in what was eventually termed the Fakester Genocide [8].

While Friendster was irritated by fake Profiles, MySpace embraced this practice. One of MySpace’s early strategies was to provide a place for everyone who was rejected from Friendster or who didn’t want to be on a dating site [9]. Bands who had been kicked off of Friendster were some of the earliest MySpace users. Over time, movie stars, politicians, porn divas, comedians, and other celebrities joined the fray. Often, the person behind these Profiles was not the celebrity but a manager. Corporations began creating Profiles for their products and brands. While Friendster eventually began allowing such fake Profiles for a fee, MySpace never charged people for their commercial uses.

Investigating Friendship in LiveJournal, Kate Raynes-Goldie and Fono (2005) found that there was tremendous inconsistency in why people Friended others. They primarily found that Friendship stood for: content, offline facilitator, online community, trust, courtesy, declaration, or nothing. When I asked participants about their practices on Friendster and MySpace, I found very similar incentives. The most common reasons for Friendship that I heard from users [11] were:

1. Actual friends
2. Acquaintances, family members, colleagues
3. It would be socially inappropriate to say no because you know them
4. Having lots of Friends makes you look popular
5. It’s a way of indicating that you are a fan (of that person, band, product, etc.)
6. Your list of Friends reveals who you are
7. Their Profile is cool so being Friends makes you look cool
8. Collecting Friends lets you see more people (Friendster)
9. It’s the only way to see a private Profile (MySpace)
10. Being Friends lets you see someone’s bulletins and their Friends-only blog posts (MySpace)
11. You want them to see your bulletins, private Profile, private blog (MySpace)
12. You can use your Friends list to find someone later
13. It’s easier to say yes than no

These incentives account for a variety of different connections. While the first three reasons all concern people that you know, the rest can explain why people connect to a lot of people that they do not know. Most reveal how technical affordances affect people’s incentives to connect.

Raynes-Goldie and Fono (2005) also found that there is a great deal of social anxiety and drama provoked by Friending in LiveJournal (LJ). In LJ, Friendship does not require reciprocity. Anyone can list anyone else as a Friend; this articulation is public but there is no notification. The value of Friendship on LJ is deeply connected to the privacy settings and subscription processes. The norm on LJ is to read others’ entries through a “Friends page.” This page is an aggregation of all of an individual’s Friends’ posts. When someone posts an LJ entry, they have a choice as to whether the post should be public, private, Friends-only, or available to subgroups of Friends. In this way, it is necessary to be someone’s Friend to have access to Friends-only posts. To locate how the multiple and conflicting views of Friendship cause tremendous conflict and misunderstanding on LJ, Raynes-Goldie and Fono speak of “hyperfriending.” This process is quite similar to what takes place on other social network sites, but there are some differences. Because Friends-only posts are commonplace, not being someone’s Friend is a huge limitation to information access. Furthermore, because reciprocity is not structurally required, there’s a much greater social weight to recognizing someone’s Friendship and reciprocating intentionally. On MySpace and Friendster, there is little to lose by being loose with Friendship and more to gain; the perception is that there is much more to lose on LJ.
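The LJ access model described above — public, private, Friends-only, or a named subgroup of Friends, with one-way (non-reciprocal) Friend listing — can be sketched as a simple visibility check. The data layout and names below are hypothetical, not LiveJournal’s actual implementation:

```python
def can_view(post, viewer):
    """Decide whether viewer may read post under an LJ-style security model."""
    author = post["author"]
    level = post["security"]
    if level == "public":
        return True
    if viewer == author["name"]:
        return True  # authors always see their own posts
    if level == "private":
        return False
    if level == "friends":
        # One-way listing suffices: the author names the viewer as a Friend;
        # the viewer need not reciprocate.
        return viewer in author["friends"]
    # Otherwise treat the level as a custom Friends subgroup. (In a real
    # system you'd forbid a group literally named "friends" to avoid the
    # collision with the branch above.)
    return viewer in author.get("groups", {}).get(level, set())

alice = {"name": "alice",
         "friends": {"bob", "eve"},
         "groups": {"close": {"bob"}}}

post = {"author": alice, "security": "close"}
print(can_view(post, "bob"))  # → True
print(can_view(post, "eve"))  # → False: a Friend, but not in the "close" subgroup
```

The sketch makes the social stakes concrete: eve is listed as a Friend yet still locked out of subgroup posts, and anyone not on alice’s Friends list at all loses access to everything non-public — exactly the “huge limitation to information access” that makes Friending on LJ so fraught.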

While users can scroll through their list of Friends, not all Friends are displayed on the participant’s Profile. Most social network sites display Friends in the order in which their account was created or their last login date. By implementing a “Top 8” feature, MySpace changed the social dynamics around the ordering of Friends. Initially, “Top 8” allowed users to select eight Friends to display on their Profile. More recently, that feature was changed to “Top Friends” as users have more options in how many people they could list [12]. Many users will only list people that they know and celebrities that they admire in their Top Friends, often as a way to both demarcate their identity and signal meaningful relationships with others.

There are many advantages to the Top Friends feature. It allows people to show connections that really say something about who they are. It also serves as a bookmark to the people that matter. By choosing to list the people who one visits the most frequently, simply going to one’s Profile provides a set of valuable links.

“As a kid, you used your birthday party guest list as leverage on the playground. ‘If you let me play I’ll invite you to my birthday party.’ Then, as you grew up and got your own phone, it was all about someone being on your speed dial. Well today it’s the MySpace Top 8. It’s the new dangling carrot for gaining superficial acceptance. Taking someone off your Top 8 is your new passive aggressive power play when someone pisses you off.” — Nadine

There are a handful of social norms that pervade Top 8 culture. Often, the person in the upper left (“1st” position) is a significant other, dear friend, or close family member. Reciprocity is another salient component of Top Friends dynamics. If Susan lists Mary on her Top 8, she expects Mary to reciprocate. To acknowledge this, Mary adds a Comment to Susan’s page saying, “Thanx for puttin me on ur Top 8! I put you on mine 2.” By publicly acknowledging this addition, Mary is making certain Susan’s viewers recognize Mary’s status on Susan’s list. Of course, just being in someone’s list is not always enough. As Samantha explains, “Friends get into fights because they’re not 1st on someone’s Top 8, or somebody else is before them.” While some people are ecstatic to be added, there are many more that are frustrated because they are removed or simply not listed.

The Top Friends feature requires participants to actively signal their relationship with others. Such a system makes it difficult to be vague about who matters the most, although some tried by explaining on their bulletins what theme they are using to choose their Top 8 this week: “my Sagittarius friends,” “my basketball team,” and “people whose initials are BR.” Still others relied on fake Profiles for their Top 8.

The networked nature of impressions does not only affect the viewer — this is how newcomers decided what to present in the first place. When people first joined Friendster, they took cues from the people who invited them. Three specific subcultures dominated the early adopters — bloggers, attendees of the Burning Man [14] festival, and gay men mostly living in New York. If the invitee was a Burner, their Profile would probably be filled with references to the event, with images full of half-naked, costumed people running around the desert. As such, newcomers would get the impression that it was a site for Burners and would create a Profile that displayed that facet of their identity. In deciding whom to invite, newcomers would perpetuate the framing by only inviting people who were part of the Burning Man subculture.

Interestingly, because of this process, Burners believed that the site was for Burners, gay men thought it was a gay dating site, and bloggers were ecstatic to have a geek socializing tool. The reason each group got this impression had to do with the way in which context was created on these systems. Rather than having the context dictated by the environment itself, context emerged through Friends networks. As a result, being socialized into Friendster meant connecting to Friends that reinforced the contextual information of early adopters.

The growth of MySpace followed a similar curve. One of the key early adopter groups was hipsters living in the Silverlake neighborhood of Los Angeles. They were passionate about indie rock music and many were musicians, promoters, club goers, etc. As MySpace took hold, long before any press was covering the site, MySpace took off amongst 20/30-something urban socializers, musicians, and teenagers. The latter group may not appear obvious, but teenagers are some of the most active music consumers — they follow music culture avidly, even when they are unable to see the bands play live due to age restrictions. As the site grew, the teenagers and 20/30-somethings pretty much left each other alone, although bands bridged these groups. It was not until the site was sold to News Corp. for US$580 million in the summer of 2005 that the press began covering the phenomenon. The massive press helped it grow larger, penetrating those three demographics more deeply but also attracting new populations, namely adults who are interested in teenagers (parents, teachers, pedophiles, marketers).

When context is defined by whom one Friends, and addressing multiple audiences simultaneously complicates all relationships, people must make hard choices. Joshua Meyrowitz (1985) highlights this problem in reference to television. In the early 1960s, Stokely Carmichael regularly addressed segregated black and white audiences about the values of Black Power. Depending on his audience, he used very different rhetorical styles. As his popularity grew, he began to attract media attention and was invited to speak on TV and radio. Unfortunately, this was more of a curse than a blessing because the audiences he would reach through these mediums included both black and white communities. With no way to reconcile the two different rhetorical styles, he had to choose. In choosing to maintain his roots in front of white listeners, Carmichael permanently alienated white society from the messages of Black Power.

Notes

10. Friendster originally limited users to 150 Friends. It is no accident that they chose 150, as this is the “Dunbar number.” In his research on gossip and grooming, Robin Dunbar argues that there is a cognitive limit to the number of relations that one can maintain: people can only maintain gossip with 150 people at any given time (Dunbar, 1998). By capping Friends at 150, Friendster either misunderstood Dunbar or did not realize that their users were actually connecting to friends from the past with whom they were not currently engaging.

12. Eight was the maximum number of Friends that the system initially let people have. Some users figured out how to hack the system to display more Friends; there are entire bulletin boards dedicated to teaching others how to hack this. Consistently, upping the limit was the number one request that the company received. In the spring of 2006, MySpace launched an ad campaign for X-Men. In return for Friending X-Men, users were given the option to have 12, 16, 20, or 24 Friends in their Top Friends section. Millions of users did exactly that. In late June, this feature was introduced to everyone, regardless of Friending X-Men. While eight is no longer the limit, people move between calling it Top 8 or Top Friends. I will use both terms interchangeably, even when the number of Friends might be greater than eight.

Problems with airport security

From Jeffrey Goldberg’s “The Things He Carried” (The Atlantic: November 2008):

Because the TSA’s security regimen seems to be mainly thing-based—most of its 44,500 airport officers are assigned to truffle through carry-on bags for things like guns, bombs, three-ounce tubes of anthrax, Crest toothpaste, nail clippers, Snapple, and so on—I focused my efforts on bringing bad things through security in many different airports, primarily my home airport, Washington’s Reagan National, the one situated approximately 17 feet from the Pentagon, but also in Los Angeles, New York, Miami, Chicago, and at the Wilkes-Barre/Scranton International Airport (which is where I came closest to arousing at least a modest level of suspicion, receiving a symbolic pat-down—all frisks that avoid the sensitive regions are by definition symbolic—and one question about the presence of a Leatherman Multi-Tool in my pocket; said Leatherman was confiscated and is now, I hope, living with the loving family of a TSA employee). And because I have a fair amount of experience reporting on terrorists, and because terrorist groups produce large quantities of branded knickknacks, I’ve amassed an inspiring collection of al-Qaeda T-shirts, Islamic Jihad flags, Hezbollah videotapes, and inflatable Yasir Arafat dolls (really). All these things I’ve carried with me through airports across the country. I’ve also carried, at various times: pocketknives, matches from hotels in Beirut and Peshawar, dust masks, lengths of rope, cigarette lighters, nail clippers, eight-ounce tubes of toothpaste (in my front pocket), bottles of Fiji Water (which is foreign), and, of course, box cutters. I was selected for secondary screening four times—out of dozens of passages through security checkpoints—during this extended experiment. At one screening, I was relieved of a pair of nail clippers; during another, a can of shaving cream.

During one secondary inspection, at O’Hare International Airport in Chicago, I was wearing under my shirt a spectacular, only-in-America device called a “Beerbelly,” a neoprene sling that holds a polyurethane bladder and drinking tube. The Beerbelly, designed originally to sneak alcohol—up to 80 ounces—into football games, can quite obviously be used to sneak up to 80 ounces of liquid through airport security. (The company that manufactures the Beerbelly also makes something called a “Winerack,” a bra that holds up to 25 ounces of booze and is recommended, according to the company’s Web site, for PTA meetings.) My Beerbelly, which fit comfortably over my beer belly, contained two cans’ worth of Bud Light at the time of the inspection. It went undetected. The eight-ounce bottle of water in my carry-on bag, however, was seized by the federal government.

Schneier and I walked to the security checkpoint. “Counterterrorism in the airport is a show designed to make people feel better,” he said. “Only two things have made flying safer: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.” This assumes, of course, that al-Qaeda will target airplanes for hijacking, or target aviation at all. “We defend against what the terrorists did last week,” Schneier said. He believes that the country would be just as safe as it is today if airport security were rolled back to pre-9/11 levels. “Spend the rest of your money on intelligence, investigations, and emergency response.”

We took our shoes off and placed our laptops in bins. Schneier took from his bag a 12-ounce container labeled “saline solution.”

“It’s allowed,” he said. Medical supplies, such as saline solution for contact-lens cleaning, don’t fall under the TSA’s three-ounce rule.

“What’s allowed?” I asked. “Saline solution, or bottles labeled saline solution?”

“Bottles labeled saline solution. They won’t check what’s in it, trust me.”

They did not check. As we gathered our belongings, Schneier held up the bottle and said to the nearest security officer, “This is okay, right?” “Yep,” the officer said. “Just have to put it in the tray.”

“Maybe if you lit it on fire, he’d pay attention,” I said, risking arrest for making a joke at airport security. (Later, Schneier would carry two bottles labeled saline solution—24 ounces in total—through security. An officer asked him why he needed two bottles. “Two eyes,” he said. He was allowed to keep the bottles.)

We were in the clear. But what did we prove?

“We proved that the ID triangle is hopeless,” Schneier said.

The ID triangle: before a passenger boards a commercial flight, he interacts with his airline or the government three times—when he purchases his ticket; when he passes through airport security; and finally at the gate, when he presents his boarding pass to an airline agent. It is at the first point of contact, when the ticket is purchased, that a passenger’s name is checked against the government’s no-fly list. It is not checked again, and for this reason, Schneier argued, the process is merely another form of security theater.

“The goal is to make sure that this ID triangle represents one person,” he explained. “Here’s how you get around it. Let’s assume you’re a terrorist and you believe your name is on the watch list.” It’s easy for a terrorist to check whether the government has cottoned on to his existence, Schneier said; he simply has to submit his name online to the new, privately run CLEAR program, which is meant to fast-pass approved travelers through security. If the terrorist is rejected, then he knows he’s on the watch list.

To slip through the only check against the no-fly list, the terrorist uses a stolen credit card to buy a ticket under a fake name. “Then you print a fake boarding pass with your real name on it and go to the airport. You give your real ID, and the fake boarding pass with your real name on it, to security. They’re checking the documents against each other. They’re not checking your name against the no-fly list—that was done on the airline’s computers. Once you’re through security, you rip up the fake boarding pass, and use the real boarding pass that has the name from the stolen credit card. Then you board the plane, because they’re not checking your name against your ID at boarding.”
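The gap Schneier describes can be sketched as a toy model. Everything below — the function names, the list contents, the check logic — is invented for illustration; it is a caricature of the three checkpoints, not any real system:

```python
# Toy model of the "ID triangle": the no-fly list is consulted only at
# purchase time, while security and the gate each compare a different
# pair of documents -- and no checkpoint compares all three.

NO_FLY_LIST = {"Evil Attacker"}

def purchase_ticket(name):
    """Airline checks the purchase name against the no-fly list --
    the only point where the list is ever consulted. Returns True
    if the sale goes through."""
    return name not in NO_FLY_LIST

def security_check(id_name, boarding_pass_name):
    """Security compares the ID against the boarding pass shown --
    not against the no-fly list, and not against the airline's records."""
    return id_name == boarding_pass_name

def gate_check(boarding_pass_name, ticketed_name):
    """The gate compares the boarding pass against the manifest --
    but never asks for ID again."""
    return boarding_pass_name == ticketed_name

# The attack: buy a ticket under a fake name with a stolen card, then
# print a forged boarding pass carrying the real (listed) name.
real_name, fake_name = "Evil Attacker", "John Doe"
assert purchase_ticket(fake_name)            # fake name isn't on the list
assert security_check(real_name, real_name)  # real ID + forged pass match
assert gate_check(fake_name, fake_name)      # real pass matches the manifest
```

Each checkpoint passes because each one verifies only a local pairing; the listed name is never compared against the name that was screened at purchase.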

What if you don’t know how to steal a credit card?

“Then you’re a stupid terrorist and the government will catch you,” he said.

What if you don’t know how to download a PDF of an actual boarding pass and alter it on a home computer?

“Then you’re a stupid terrorist and the government will catch you.”

I couldn’t believe that what Schneier was saying was true—in the national debate over the no-fly list, it is seldom, if ever, mentioned that the no-fly list doesn’t work. “It’s true,” he said. “The gap blows the whole system out of the water.”

Bruce Schneier on security & crime economics

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Basically, you’re asking if crime pays. Most of the time, it doesn’t, and the problem is the different risk characteristics. If I make a computer security mistake — in a book, for a consulting client, at BT — it’s a mistake. It might be expensive, but I learn from it and move on. As a criminal, a mistake likely means jail time — time I can’t spend earning my criminal living. For this reason, it’s hard to improve as a criminal. And this is why there are more criminal masterminds in the movies than in real life.

Crime has been part of our society since our species invented society, and it’s not going away anytime soon. The real question is, “Why is there so much crime and hacking on the Internet, and why isn’t anyone doing anything about it?”

The answer is in the economics of Internet vulnerabilities and attacks: the organizations that are in the position to mitigate the risks aren’t responsible for the risks. This is an externality, and if you want to fix the problem you need to address it. In this essay (more here), I recommend liabilities; companies need to be liable for the effects of their software flaws. A related problem is that the Internet security market is a lemons market (discussed here), but there are strategies for dealing with that, too.

Bruce Schneier on identity theft

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Identity theft is a problem for two reasons. One, personal identifying information is incredibly easy to get; and two, personal identifying information is incredibly easy to use. Most of our security measures have tried to solve the first problem. Instead, we need to solve the second problem. As long as it’s easy to impersonate someone if you have his data, this sort of fraud will continue to be a major problem.

The basic answer is to stop relying on authenticating the person, and instead authenticate the transaction. Credit cards are a good example of this. Credit card companies spend almost no effort authenticating the person — hardly anyone checks your signature, and you can use your card over the phone, where they can’t even check if you’re holding the card — and spend all their effort authenticating the transaction.
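Schneier’s transaction-centric approach can be illustrated with a small sketch. The scoring rule, thresholds, and record fields below are all invented for the example — real fraud systems are far more elaborate — but the shape is the same: judge the transaction against the account’s history rather than the person’s identity:

```python
# Sketch of "authenticate the transaction, not the person": no identity
# check at all -- instead, flag transactions that depart sharply from
# the account's past behaviour. All thresholds here are arbitrary.

def is_suspicious(txn, history, limit_multiple=3):
    """Return True when a transaction looks unlike the account's history."""
    if not history:
        return txn["amount"] > 500  # arbitrary cold-start threshold
    avg = sum(t["amount"] for t in history) / len(history)
    unusual_amount = txn["amount"] > limit_multiple * avg
    unusual_country = txn["country"] not in {t["country"] for t in history}
    return unusual_amount and unusual_country

history = [{"amount": 40, "country": "US"}, {"amount": 60, "country": "US"}]
assert not is_suspicious({"amount": 55, "country": "US"}, history)
assert is_suspicious({"amount": 900, "country": "RO"}, history)
```

Note that stolen credentials are useless against this kind of check unless the thief also mimics the victim’s spending pattern — which is exactly the point of authenticating transactions instead of people.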

CopyBot copies all sorts of items in Second Life

From Glyn Moody’s “The duplicitous inhabitants of Second Life” (The Guardian: 23 November 2006):

What would happen to business and society if you could easily make a copy of anything – not just MP3s and DVDs, but clothes, chairs and even houses? That may not be a problem most of us will have to confront for a while yet, but the 1.5m residents of the virtual world Second Life are already grappling with this issue.

A new program called CopyBot allows Second Life users to repeatedly duplicate certain elements of any object in the vicinity – and sometimes all of it. That’s awkward in a world where such virtual goods can be sold for real money. When CopyBot first appeared, some retailers in Second Life shut up shop, convinced that their virtual goods were about to be endlessly copied and rendered worthless. Others protested, and suggested that in the absence of scarcity, Second Life’s economy would collapse.

Instead of sending the user the virtual world as a stream of rendered pixels – something that would be impractical to calculate – the information is transmitted as a list of basic shapes that are re-created on the user’s PC. For example, a virtual house might be a cuboid with rectangles representing windows and doors, cylinders for the chimney stacks, etc.

This meant the local world could be sent in great detail very compactly, but also that the software on the user’s machine had all the information for making a copy of any nearby object. It’s like the web: in order to display a page, the browser receives not an image of the page, but all the underlying HTML code to generate that page, which also means that the HTML of any web page can be copied perfectly. Thus CopyBot – written by a group called libsecondlife as part of an open-source project to create Second Life applications – or something like it was bound to appear one day.
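The structural point — that a shape-based protocol hands the client everything needed to copy an object — can be shown in a few lines. The object format below is invented for illustration; it is not Second Life’s actual wire format:

```python
# Why shape-based scene streaming makes copying trivial: the client must
# receive the full geometric description just to render the object, so a
# "CopyBot" needs nothing beyond what rendering already requires.
import copy

house = {
    "type": "cuboid", "size": (10, 8, 6),
    "children": [
        {"type": "rectangle", "role": "door", "size": (1, 2)},
        {"type": "rectangle", "role": "window", "size": (1, 1)},
        {"type": "cylinder", "role": "chimney", "size": (0.5, 2)},
    ],
}

# Duplicating the object is just duplicating its received description.
duplicate = copy.deepcopy(house)
assert duplicate == house and duplicate is not house
```

This mirrors the web analogy in the excerpt: a browser receives HTML, not an image of the page, so the HTML can always be saved and reproduced perfectly.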

Liberating the economy has led to a boom in creativity, just as Rosedale hoped. It is in constant expansion as people buy virtual land, and every day more than $500,000 (£263,000) is spent buying virtual objects. But the downside is that unwanted copying is potentially a threat to the substantial businesses selling virtual goods that have been built up, and a concern for the real-life companies such as IBM, Adidas and Nissan which are beginning to enter Second Life.

Just as it is probably not feasible to stop “grey goo” – the Second Life equivalent of spam, which takes the form of self-replicating objects malicious “griefers” use to gum up the main servers – so it is probably technically impossible to stop copying. Fortunately, not all aspects of an object can be duplicated. To create complex items – such as a virtual car that can be driven – you use a special programming language to code their realistic behaviour. CopyBot cannot duplicate these programs because they are never passed to the user, but run on Linden Lab’s computers.

As for the elements that you can copy, such as shape and texture, Rosedale explains: “What we’re going to do is add a lot of attribution. You’ll be able to easily see when an object or texture was first created,” – and hence if something is a later copy.

An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers now number about two dozen, although no one outside Google knows the exact number or their locations. They come online and automatically, under the direction of the Google File System, start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google’s racks are custom-assembled to hold servers on both their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo, hardware and software are more loosely coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
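The replication-plus-log scheme described above can be sketched in miniature. The class, method names, and replica counts below are an invented simplification for illustration — the excerpt says only that files are written three to six times and that a log records where the copies live:

```python
# Minimal sketch of write-several-replicas, read-via-log: a failed
# device never loses data because the log points at surviving copies.
import random

class ReplicatedStore:
    def __init__(self, n_devices=6):
        self.devices = [dict() for _ in range(n_devices)]
        self.log = {}  # filename -> device indices holding a copy

    def write(self, name, data, replicas=3):
        """Write the file to several distinct devices and log where."""
        targets = random.sample(range(len(self.devices)), replicas)
        for i in targets:
            self.devices[i][name] = data
        self.log[name] = targets

    def read(self, name):
        """Consult the log and use any replica on a surviving device."""
        for i in self.log[name]:
            if self.devices[i] is not None and name in self.devices[i]:
                return self.devices[i][name]
        raise IOError("all replicas lost")

    def fail_device(self, i):
        self.devices[i] = None  # simulate a dead disk or crashed server

store = ReplicatedStore()
store.write("index-shard-7", b"...postings...")
store.fail_device(store.log["index-shard-7"][0])  # lose one replica
assert store.read("index-shard-7") == b"...postings..."
```

The design choice this illustrates: failure handling lives in the software layer, which is what lets the hardware underneath be cheap and individually unreliable.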

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed short cuts to programming. An example is Google’s creating a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
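The “canned function” idea reads like this in practice. The helper below is a hypothetical stand-in — a thread pool plays the role of the Googleplex, and the function name is invented — but it captures the division of labor the excerpt describes: the programmer writes an ordinary map, and the library hides the parallelism:

```python
# Sketch of a canned parallel routine: the caller writes sequential-
# looking code; scheduling, worker management, and result ordering are
# handled inside the library.
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=8):
    """Apply fn to each item using a pool of workers; results come
    back in input order, just like the built-in map."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

assert parallel_map(lambda n: n * n, range(5)) == [0, 1, 4, 9, 16]
```

A programmer at Microsoft or Yahoo, in Arnold’s telling, would have to orchestrate the worker management explicitly; here it is one library call.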

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives, which fail at the rate of one per 1,000 drives per day.
  • Power supplies, which fail at a lower rate.

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.

Richard Stallman on why “intellectual property” is a misnomer

From Richard Stallman’s “Transcript of Richard Stallman at the 4th international GPLv3 conference; 23rd August 2006” (FSF Europe: 23 August 2006):

Anyway, the term “intellectual property” is a propaganda term which should never be used, because merely using it, no matter what you say about it, presumes it makes sense. It doesn’t really make sense, because it lumps together several different laws that are more different than similar.

For instance, copyright law and patent law have a little bit in common, but all the details are different and their social effects are different. To try to treat them as if they were one thing is already an error.

To even talk about anything that includes copyright and patent law, means you’re already mistaken. That term systematically leads people into mistakes. But, copyright law and patent law are not the only ones it includes. It also includes trademark law, for instance, which has nothing in common with copyright or patent law. So anyone talking about “quote intellectual property unquote”, is always talking about all of those and many others as well and making nonsensical statements.

So, when you say that you especially object to it when it’s used for Free Software, you’re suggesting it might be a little more legitimate when talking about proprietary software. Yes, software can be copyrighted. And yes, in some countries techniques can be patented. And certainly there can be trademark names for programs, which I think is fine. There’s no problem there. But these are three completely different things, and any attempt to mix them up – any practice which encourages people to lump them together is a terribly harmful practice. We have to totally reject the term “quote intellectual property unquote”. I will not let any excuse convince me to accept the meaningfulness of that term.

When people say “well, what would you call it?”, the answer is that I deny there is an “it” there. There are three, and many more, laws there, and I talk about these laws by their names, and I don’t mix them up.

More problems with voting, election 2008

From Ian Urbina’s “High Turnout May Add to Problems at Polling Places” (The New York Times: 3 November 2008):

Two-thirds of voters will mark their choice with a pencil on a paper ballot that is counted by an optical scanning machine, a method considered far more reliable and verifiable than touch screens. But paper ballots bring their own potential problems, voting experts say.

The scanners can break down, leading to delays and confusion for poll workers and voters. And the paper ballots of about a third of all voters will be counted not at the polling place but later at a central county location. That means that if a voter has made an error — not filling in an oval properly, for example, a mistake often made by the kind of novice voters who will be flocking to the polls — it will not be caught until it is too late. As a result, those ballots will be disqualified.

About a fourth of voters will still use electronic machines that offer no paper record to verify that their choice was accurately recorded, even though these machines are vulnerable to hacking and crashes that drop votes. The machines will be used by most voters in Indiana, Kentucky, Pennsylvania, Tennessee, Texas and Virginia. Eight other states, including Georgia, Maryland, New Jersey and South Carolina, will use touch-screen machines with no paper trails.

Florida has switched to its third ballot system in the past three election cycles, and glitches associated with the transition have caused confusion at early voting sites, election officials said. The state went back to using scanned paper ballots this year after touch-screen machines in Sarasota County failed to record any choice for 18,000 voters in a fiercely contested House race in 2006.

Voters in Colorado, Tennessee, Texas and West Virginia have reported using touch-screen machines that at least initially registered their choice for the wrong candidate or party.

Most states have passed laws requiring paper records of every vote cast, which experts consider an important safeguard. But most of them do not have strong audit laws to ensure that machine totals are vigilantly checked against the paper records.

In Ohio, Secretary of State Jennifer Brunner sued the maker of the touch-screen equipment used in half of her state’s 88 counties after an investigation showed that the machines “dropped” votes in recent elections when memory cards were uploaded to computer servers.

A report released last month by several voting rights groups found that eight of the states using touch-screen machines, including Colorado and Virginia, had no guidance or requirement to stock emergency paper ballots at the polls if the machines broke down.

Luddites and e-books

From Clay Shirky’s “The Siren Song of Luddism” (Britannica Blog: 19 June 2007):

…any technology that fixes a problem … threatens the people who profit from the previous inefficiency. However, Gorman omits mentioning the Luddite response: an attempt to halt the spread of mechanical looms which, though beneficial to the general populace, threatened the livelihoods of King Ludd’s band.

… printing was itself enormously disruptive, and many people wanted veto power over its spread as well. Indeed, one of the great Luddites of history (if we can apply the label anachronistically) was Johannes Trithemius, who argued in the late 1400s that the printing revolution be contained, in order to shield scribes from adverse effects.

The uncomfortable fact is that the advantages of paper have become decoupled from the advantages of publishing; a big part of preference for reading on paper is expressed by hitting the print button. As we know from Lyman and Varian’s “How Much Information?” study, “the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.”

The problem with e-books is that they are not radical enough: they dispense with the best aspect of books (paper as a display medium) while simultaneously aiming to disable the best aspects of electronic data (sharability, copyability, searchability, editability).

If we gathered every bit of output from traditional publishers, we could line them up in order of vulnerability to digital evanescence. Reference works were the first to go — phone books, dictionaries, and thesauri have largely gone digital; the encyclopedia is going, as are scholarly journals. Last to go will be novels — it will be some time before anyone reads One Hundred Years of Solitude in any format other than a traditionally printed book. Some time, however, is not forever. The old institutions, and especially publishers and libraries, have been forced to use paper not just for display, for which it is well suited, but also for storage, transport, and categorization, things for which paper is completely terrible. We are now able to recover from those disadvantages, though only by transforming the institutions organized around the older assumptions.

George Clinton and the sample troll

From Tim Wu’s “On Copyright’s Authorship Policy” (Internet Archive: 2007):

On May 4, 2001, a one-man corporation named Bridgeport Music, Inc. launched over 500 counts of copyright infringement against more than 800 different artists and labels.1 Bridgeport Music has no employees, and other than copyrights, no reported assets.2 Technically, Bridgeport is a “catalogue company.” Others call it a “sample troll.”

Bridgeport is the owner of valuable copyrights, including many of funk singer George Clinton’s most famous songs – songs which are sampled in a good amount of rap music.3 Bridgeport located every sample of Clinton’s and other copyrights it owned, and sued based on the legal position that any sampling of a sound recording, no matter how minimal or unnoticeable, is still an infringement.

During the course of Bridgeport’s campaign, it has won two important victories. First, the Sixth Circuit, the appellate court for Nashville, adopted Bridgeport’s theory of infringement. In Bridgeport Music, Inc. v. Dimension Films,4 the defendants sampled a single chord from the George Clinton tune “Get Off Your Ass and Jam,” changed the pitch, and looped the sound. Despite the plausible defense that one note is but a de minimis use of the work, the Sixth Circuit ruled for Bridgeport and created a stark rule: any sampling, no matter how minimal or undetectable, is a copyright infringement. Said the court in Bridgeport, “Get a license or do not sample. We do not see this as stifling creativity in any significant way.”5 In 2006 Bridgeport convinced a district court to enjoin the sales of the bestselling Notorious B.I.G. album, Ready to Die, for “illegal sampling.”6 A jury then awarded Bridgeport more than four million dollars in damages.7

The Bridgeport cases have been heavily criticized, and taken as a prime example of copyright’s excesses.8 Yet the deeper problem with the Bridgeport litigation is not necessarily a problem of too much copyright. It can be equally concluded that the ownership of the relevant rights is the root of the problem. George Clinton, the actual composer and recording artist, takes a much different approach to sampling. “When hip-hop came out,” said Clinton in an interview with journalist Rick Karr, “I was glad to hear it, especially when it was our songs – it was a way to get back on the radio.”9 Clinton accepts sampling of his work, and has released a three-CD collection of his sounds for just that purpose.10 The problem is that he doesn’t own many of his most important copyrights. Instead, it is Bridgeport, the one-man company, that owns the rights to Clinton’s work. In the 1970s Bridgeport, through its owner Armen Boladian, managed to seize most of George Clinton’s copyrights and many other valuable rights. In at least a few cases, Boladian assigned the copyrights to Bridgeport by writing a contract and then faking Clinton’s signature.11 As Clinton puts it, “he just stole ‘em.”12 With the copyrights to Clinton’s songs in the hands of Bridgeport – an entity with no vested interest in the works beyond their sheer economic value – the targeting of sampling is not surprising.

1 Tim Wu, Jay-Z Versus the Sample Troll, Slate Magazine, Nov. 16, 2006, http://www.slate.com/id/2153961/.

2 See Bridgeport Music, Inc.’s corporate entity details, Michigan Department of Labor & Economic Growth, available at http://www.dleg.state.mi.us/bcs_corp/dt_corp.asp?id_nbr=190824&name_entity=BRIDGEPORT%20MUSIC,%20INC (last visited Mar. 18, 2007).

3 See Wu, supra note 1.

4 410 F.3d 792 (6th Cir. 2005).

5 Id. at 801.

6 Jeff Leeds, Judge Freezes Notorious B.I.G. Album, N.Y. Times, Mar. 21, 2006, at E2.

7 Id.

8 See, e.g., Matthew R. Broodin, Comment, Bridgeport Music, Inc. v. Dimension Films: The Death of the Substantial Similarity Test in Digital Sampling Copyright Infringement Claims—The Sixth Circuit’s Flawed Attempt at a Bright Line Rule, 6 Minn. J. L. Sci. & Tech. 825 (2005); Jeffrey F. Kersting, Comment, Singing a Different Tune: Was the Sixth Circuit Justified in Changing the Protection of Sound Recordings in Bridgeport Music, Inc. v. Dimension Films?, 74 U. Cin. L. Rev. 663 (2005) (answering the title question in the negative); John Schietinger, Note, Bridgeport Music, Inc. v. Dimension Films: How the Sixth Circuit Missed a Beat on Digital Music Sampling, 55 DePaul L. Rev. 209 (2005).

9 Interview by Rick Karr with George Clinton, at the 5th Annual Future of Music Policy Summit, Wash. D.C. (Sept. 12, 2005), video clip available at http://www.tvworldwide.com/showclip.cfm?ID=6128&clip=2 [hereinafter Clinton Interview].

10 George Clinton, Sample Some of Disc, Sample Some of D.A.T., Vols. 1-3 (1993-94).

11 Sound Generator, George Clinton awarded Funkadelic master recordings (Jun. 6, 2005), http://www.soundgenerator.com/news/showarticle.cfm?articleid=5555.

12 Clinton Interview, supra note 9.

The latest on electronic voting machines

From James Turner’s interview with Dr. Barbara Simons, past President of the Association for Computing Machinery & recent appointee to the Advisory Board of the Federal Election Assistance Commission, at “A 2008 e-Voting Wrapup with Dr. Barbara Simons” (O’Reilly Media: 7 November 2008):

[Note from Scott: headers added by me]

Optical Scan: Good & Bad

And most of the voting in Minnesota was done on precinct-based optical scan machines: a paper ballot that is then fed into the optical scanner at the precinct. And the good thing about that is it gives the voter immediate feedback if there is any problem, such as over-voting, marking more candidates in a contest than allowed.
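The immediate feedback Simons describes amounts to counting the marks in each contest before the scanner accepts the ballot. A minimal sketch in Python, using a hypothetical ballot representation (contest name mapped to the list of marked candidates):

```python
# Minimal sketch of the over-vote check a precinct scanner performs.
# The ballot representation and contest rules here are hypothetical.
def overvoted_contests(ballot, seats):
    """Return the contests with more marks than allowed selections."""
    return [contest for contest, marks in ballot.items()
            if len(marks) > seats.get(contest, 1)]

ballot = {"President": ["X", "Y"],  # two marks in a vote-for-one contest
          "Senator": ["Z"]}
overvotes = overvoted_contests(ballot, {"President": 1, "Senator": 1})
print(overvotes)  # → ['President']
```

Because the check runs at the precinct, the scanner can reject the ballot on the spot and let the voter mark a fresh one; a central-count scanner could only discard the over-voted contest after the fact.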

Well, there are several problems. First of all, as you say, because these things have computers in them they can be mis-programmed; there can be software bugs; you could conceivably have malicious code; you could have the machines give you a different count from the right one. There was a situation back in the 2004 primaries where Gephardt received a large number of votes after he had withdrawn from the race, and this was done using optical scan paper ballots; I don’t know if it was this particular brand or not. When they were recounted it was discovered that that was in fact the wrong result: he had gotten fewer votes. Now I never saw an explanation for what happened, but my guess is that whoever programmed these machines mistakenly assigned the slot that was for Kerry to Gephardt and the slot that was for Gephardt to Kerry. I don’t know if that’s true, but if that did happen I think there’s very little reason to believe it was malicious, because there was really nothing to be gained by doing that. So I think it was just an honest error, but of course errors can occur.

DRE Studies

Ohio conducted a major study of electronic voting machines, called the EVEREST study, commissioned by the current Secretary of State, Jennifer Brunner. This study uncovered huge problems with most of these touch screen voting systems: they were found to be insecure, unreliable, and difficult to use. A similar study had been conducted in California not much earlier, called the Top to Bottom Review, and the Ohio study confirmed all of the problems that had been uncovered in California and found additional ones. So based on that there was a push to get rid of a lot of these machines.

States Using DREs

Maryland and Georgia are entirely touch screen states, and so is New Jersey. In Maryland they’re supposed to replace them with optical scan paper ballots by 2010, but there’s some concern that there may not be the funding to do that. In fact Maryland and Georgia both use Diebold (now called Premier) paperless touch screen voting machines. Georgia started using them in 2002, and that’s the race in which Max Cleland, the Democratic Senator and Vietnam War vet, was defeated; I know that there are some people who questioned the outcome of that race because the polls had shown him winning. And because those machines are paperless, there was no way to check the outcome. Another concern in Georgia in 2002 was that there were last-minute software patches being added to the machines just before the election, and the software patches hadn’t really been inspected by any kind of independent agency.

More on Optical Scans

Well, scanned ballots certainly give you a paper trail, and a good one. In fact it’s not really a paper trail; it’s paper ballots, because they are the ballots. What you want is for it to be easy to audit and recount an election, and I think that’s something people really hadn’t taken into consideration early on, when a lot of these machines were first designed and purchased.

Disabilities

One of the things that was investigated in California when they did the Top to Bottom Review was just how easy it is for people with disabilities to use these touch screen machines. Nobody had ever done that before, and the test results came back very negatively. If you look at the California results they’re very negative on these touch screen machines: in many cases people in wheelchairs had a very difficult time operating them correctly, and people who were blind sometimes had trouble understanding what was being said, or things were said too loudly or too softly, or they would get confused about the instructions, or some of the ways they had for manually inputting their votes were confusing.

There are also things called ballot generating devices, which are not what we generally refer to as touch screen machines, although they can be touch screen. The most widely used one is called the AutoMARK. The way the AutoMARK works is you take a paper ballot, one of these optical scan ballots, and insert it into the AutoMARK, and then it operates much the same way the paperless touch screen machines work. It has a headset so that a blind voter can use it, and it’s possible for somebody in a wheelchair to vote, although in fact you don’t have to use this if you’re in a wheelchair; you can clearly vote optical scan. Somebody who has severe mobility impairments can vote on these machines using a sip-and-puff device, where a sip is a zero or a one and a puff is the opposite, or a yes or a no. The AutoMARK was designed with voters with disabilities in mind from early on, and it fared much better in the California tests. What it does is, at the end, when the voter with disabilities is finished, he or she will say okay, cast my ballot. At that point the AutoMARK simply marks the optical scan ballot; it just marks it. And then you have an optical scan ballot that can be read by an optical scanner. There should be no problems with it, because it’s been generated by a machine, and you have a paper ballot that can be recounted.

Problems with DREs vs Optical Scans

There are a couple of things to keep in mind when thinking about replacing these systems. The first is that the states and localities that buy these direct recording electronic systems, or touch screen systems as they’re called, have to have maintenance contracts with the vendors, because they’re very complicated systems to maintain and of course the software is a secret. Some of these contracts are quite costly, and these are ongoing expenses with these machines. In addition, because they have software in them they have to be securely stored and securely delivered, and those requirements create enormous problems, especially when you have to deliver large numbers of machines to polling places prior to the election. Frequently these machines end up staying in people’s garages or in churches for periods of time when they’re relatively insecure.

And you need far fewer scanners, and the security issues with scanners are not as great, because you can do an audit and a recount. So altogether it just seems to me that the way to go is moving to paper-based optical scan systems with precinct scanners, so that the voter gets feedback on the ballot: if the voter votes twice for President, the ballot is kicked out and the voter can vote a new ballot.

And as I say, there is the AutoMARK for voters with disabilities to use; there’s also another system called Populex, but that’s not as widely used as the AutoMARK. There could be new systems coming forward.

1/2 of DREs Broken in Pennsylvania on Election Day

Editor’s Note: Dr. Simons wrote me later to say: “Many Pennsylvania polling places opened on election day with half or more of their voting machines broken — so they used emergency paper ballots until they could fix their machines.”

Cheating, security, & theft in virtual worlds and online games

From Federico Biancuzzi’s interview with security researchers Greg Hoglund & Gary McGraw, authors of Exploiting Online Games, in “Real Flaws in Virtual Worlds” (SecurityFocus: 20 December 2007):

The more I dug into online game security, the more interesting things became. There are multiple threads intersecting in our book: hackers who cheat in online games and are not detected can make tons of money selling virtual items in the middle market; the law says next to nothing about cheating in online games, so doing so is really not illegal; the kinds of technological attacks and exploits that hackers are using to cheat in online games are an interesting bellwether; software is evolving to look very much like massively distributed online games look today with thick clients and myriad time and state related security problems. [Emphasis added]

In Brazil, a criminal gang even kidnapped a star MMORPG player in order to take away his character, and its associated virtual wealth.

The really interesting thing about online game security is that the attackers are in most cases after software running on their own machine, not software running on somebody else’s box. That’s a real change. Interestingly, the laws we have developed in computer security don’t have much to say about cheating in a game or hacking software on your own PC.

Matching voters with their votes, thanks to voting machines

From Declan McCullagh’s “E-voting predicament: Not-so-secret ballots” (CNET News: 20 August 2007):

Two Ohio activists have discovered that e-voting machines made by Election Systems and Software and used across the country produce time-stamped paper trails that permit the reconstruction of an election’s results — including allowing voter names to be matched to their actual votes.

Ohio law permits anyone to walk into a county election office and obtain two crucial documents: a list of voters in the order they voted, and a time-stamped list of the actual votes. “We simply take the two pieces of paper together, merge them, and then we have which voter voted and in which way,” said James Moyer, a longtime privacy activist and poll worker who lives in Columbus, Ohio.

Once the two documents are merged, it’s easy enough to say that the first voter who signed in is very likely going to be responsible for the first vote cast, and so on.
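The reconstruction Moyer describes is a simple positional merge of the two public documents. A sketch in Python, using hypothetical sample data in place of the real records:

```python
# Two public records (hypothetical data): voters in sign-in order,
# and time-stamped votes in the order they were cast.
voters_in_order = ["A. Smith", "B. Jones", "C. Lee"]
votes_in_order = [("08:01:12.345", "Candidate X"),
                  ("08:03:47.021", "Candidate Y"),
                  ("08:05:30.114", "Candidate X")]

# Positional merge: the first voter to sign in very likely cast
# the first vote, the second voter the second vote, and so on.
reconstructed = [(voter, vote)
                 for voter, (_ts, vote) in zip(voters_in_order, votes_in_order)]
print(reconstructed[0])  # → ('A. Smith', 'Candidate X')
```

The point of the sketch is how little work the attack takes: no cryptography is broken, just two lawfully obtained lists joined by position.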

Other suppliers of electronic voting machines say they do not include time stamps in their products that provide voter-verified paper audit trails. Sequoia Voting Systems and Hart Intercivic both said they don’t. A spokesman for Diebold Election Systems (now Premier Election Solutions) said they don’t, for security and privacy reasons…

David Wagner, a professor of computer science at the University of California, Berkeley, said electronic storage of votes in the order that voters cast them is a recurring problem with e-voting machines.

“This summer I learned that Diebold’s AV-TSX touchscreen voting machine stores a time stamp showing the time which each vote was cast–down to the millisecond–along with the electronic record of that vote,” Wagner said in an e-mail message. “In particular, we discovered this as part of the California top-to-bottom review and reported it in our public report on the Diebold voting system. However, I had no idea that this kind of information was available to the public as a public record.”
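A standard mitigation for the problem Wagner describes (not one proposed in the article itself) is to strip timestamps and shuffle cast-vote records before they are exported: the tally survives unchanged, but the published records can no longer be matched against the sign-in order. A sketch under those assumptions:

```python
import random
from collections import Counter

# Hypothetical cast-vote records as stored by a machine, each
# carrying a millisecond timestamp (the privacy leak at issue).
records = [("08:01:12.345", "Candidate X"),
           ("08:03:47.021", "Candidate Y"),
           ("08:05:30.114", "Candidate X")]

# Drop the timestamps and shuffle before export, so record order
# no longer corresponds to the order voters signed in.
anonymized = [vote for _ts, vote in records]
random.shuffle(anonymized)

# Shuffling does not change the election result.
print(Counter(anonymized)["Candidate X"])  # → 2
```

The design trade-off is that randomized storage discards forensic ordering information, which is exactly the property that makes the records safe to release as public documents.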

A collective action problem: why the cops can’t talk to firemen

From Bruce Schneier’s “First Responders” (Crypto-Gram: 15 September 2007):

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25% of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80% of cities, municipal authorities couldn’t communicate with the FBI, FEMA, and other federal agencies.

The source of the problem is a basic economic one, called the “collective action problem.” A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.

China’s increasing control over American dollars

From James Fallows’ “The $1.4 Trillion Question” (The Atlantic: January/February 2008):

Through the quarter-century in which China has been opening to world trade, Chinese leaders have deliberately held down living standards for their own people and propped them up in the United States. This is the real meaning of the vast trade surplus—$1.4 trillion and counting, going up by about $1 billion per day—that the Chinese government has mostly parked in U.S. Treasury notes. In effect, every person in the (rich) United States has over the past 10 years or so borrowed about $4,000 from someone in the (poor) People’s Republic of China. Like so many imbalances in economics, this one can’t go on indefinitely, and therefore won’t. But the way it ends—suddenly versus gradually, for predictable reasons versus during a panic—will make an enormous difference to the U.S. and Chinese economies over the next few years, to say nothing of bystanders in Europe and elsewhere.
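Fallows’ per-person figure follows from simple division; a quick check, assuming a rough U.S. population of 300 million at the time:

```python
holdings = 1.4e12         # Chinese-held U.S. debt: ~$1.4 trillion
population = 300e6        # rough U.S. population, circa 2008
per_person = holdings / population
print(round(per_person))  # → 4667, i.e. "about $4,000" per American
```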

When the dollar is strong, the following (good) things happen: the price of food, fuel, imports, manufactured goods, and just about everything else (vacations in Europe!) goes down. The value of the stock market, real estate, and just about all other American assets goes up. Interest rates go down—for mortgage loans, credit-card debt, and commercial borrowing. Tax rates can be lower, since foreign lenders hold down the cost of financing the national debt. The only problem is that American-made goods become more expensive for foreigners, so the country’s exports are hurt.

When the dollar is weak, the following (bad) things happen: the price of food, fuel, imports, and so on (no more vacations in Europe) goes up. The value of the stock market, real estate, and just about all other American assets goes down. Interest rates are higher. Tax rates can be higher, to cover the increased cost of financing the national debt. The only benefit is that American-made goods become cheaper for foreigners, which helps create new jobs and can raise the value of export-oriented American firms (winemakers in California, producers of medical devices in New England).

Americans sometimes debate (though not often) whether in principle it is good to rely so heavily on money controlled by a foreign government. The debate has never been more relevant, because America has never before been so deeply in debt to one country. Meanwhile, the Chinese are having a debate of their own—about whether the deal makes sense for them. Certainly China’s officials are aware that their stock purchases prop up 401(k) values, their money-market holdings keep down American interest rates, and their bond purchases do the same thing—plus allow our government to spend money without raising taxes.