
Social networking and “friendship”

From danah boyd’s “Friends, Friendsters, and MySpace Top 8: Writing Community Into Being on Social Network Sites” (First Monday: December 2006)

John’s reference to “gateway Friends” concerns a specific technological affordance unique to Friendster. Because the company felt it would make the site more intimate, Friendster prevents users from surfing to Profiles beyond four degrees (Friends of Friends of Friends of Friends). When people log in, they can see how many Profiles are “in their network,” where the network is defined by those four degrees. For users seeking to meet new people, growing this number mattered; for those who wanted the site to stay intimate, keeping the number small was more important. In either case, the number of people in one’s network was perceived as directly related to the number of Friends one had.

“I am happy with the number of friends I have. I can access over 26,000 profiles, which is enough for me!” — Abby

The number of Friends one has definitely affects the size of one’s network but connecting to Collectors plays a much more significant role. Because these “gateway friends” (a.k.a. social network hubs) have lots of Friends who are not connected to each other, they expand the network pretty rapidly. Thus, connecting to Collectors or connecting to people who connect to Collectors opens you up to a large network rather quickly.
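
The mechanics here are worth making concrete. Neither boyd nor Friendster publishes the traversal logic, but the four-degree rule maps naturally onto a breadth-first search truncated at depth four; the Python sketch below (all names invented for illustration) shows how a count like the one Abby cites would be computed, and why a single link to a Collector inflates it so quickly.

```python
from collections import deque

def network_size(graph, start, max_degrees=4):
    """Count Profiles within `max_degrees` hops of `start`.

    `graph` maps each Profile to the set of its Friends,
    mirroring Friendster's four-degree visibility rule.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == max_degrees:
            continue  # Profiles past four degrees stay invisible
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return len(seen) - 1  # exclude the user themselves

# A chain of ordinary friends adds people a handful at a time,
# but a single edge to a Collector with 500 mutually unconnected
# Friends adds all 500, plus everyone within three hops of them.
```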

While Collectors could be anyone interested in amassing many Friends, fake Profiles were developed to aid in this process. These Fakesters included characters, celebrities, objects, icons, institutions, and ideas. For example, Homer Simpson had a Profile alongside Jesus and Brown University. By connecting people with shared interests or affiliations, Fakesters supported networking between like-minded individuals. Because play and connecting were primary incentives for many Fakesters, they welcomed any and all Friends. Likewise, people who wanted access to more people connected to Fakesters. Fakesters helped centralize the network and two Fakesters — Burning Man and Ali G — reached mass popularity with over 10,000 Friends each before the Web site’s creators put an end to their collecting and deleted both accounts. This began the deletion of all Fakesters in what was eventually termed the Fakester Genocide [8].

While Friendster was irritated by fake Profiles, MySpace embraced this practice. One of MySpace’s early strategies was to provide a place for everyone who was rejected from Friendster or who didn’t want to be on a dating site [9]. Bands who had been kicked off of Friendster were some of the earliest MySpace users. Over time, movie stars, politicians, porn divas, comedians, and other celebrities joined the fray. Often, the person behind these Profiles was not the celebrity but a manager. Corporations began creating Profiles for their products and brands. While Friendster eventually began allowing such fake Profiles for a fee, MySpace never charged people for their commercial uses.

In their investigation of Friendship in LiveJournal, Fono and Raynes-Goldie (2005) found tremendous inconsistency in why people Friended others. They found that Friendship primarily stood for: content, offline facilitator, online community, trust, courtesy, declaration, or nothing. When I asked participants about their practices on Friendster and MySpace, I found very similar incentives. The most common reasons for Friendship that I heard from users [11] were:

1. Actual friends
2. Acquaintances, family members, colleagues
3. It would be socially inappropriate to say no because you know them
4. Having lots of Friends makes you look popular
5. It’s a way of indicating that you are a fan (of that person, band, product, etc.)
6. Your list of Friends reveals who you are
7. Their Profile is cool so being Friends makes you look cool
8. Collecting Friends lets you see more people (Friendster)
9. It’s the only way to see a private Profile (MySpace)
10. Being Friends lets you see someone’s bulletins and their Friends-only blog posts (MySpace)
11. You want them to see your bulletins, private Profile, private blog (MySpace)
12. You can use your Friends list to find someone later
13. It’s easier to say yes than no

These incentives account for a variety of different connections. While the first three reasons all concern people that you know, the rest can explain why people connect to a lot of people that they do not know. Most reveal how technical affordances affect people’s incentives to connect.

Fono and Raynes-Goldie (2005) also found that there is a great deal of social anxiety and drama provoked by Friending in LiveJournal (LJ). In LJ, Friendship does not require reciprocity. Anyone can list anyone else as a Friend; this articulation is public, but there is no notification. The value of Friendship on LJ is deeply connected to the privacy settings and subscription processes. The norm on LJ is to read others’ entries through a “Friends page,” an aggregation of all of an individual’s Friends’ posts. When someone posts an LJ entry, they choose whether the post should be public, private, Friends-only, or available to subgroups of Friends. In this way, it is necessary to be someone’s Friend to have access to Friends-only posts. In examining how the multiple and conflicting views of Friendship cause tremendous conflict and misunderstanding on LJ, Fono and Raynes-Goldie speak of “hyperfriending.” This process is quite similar to what takes place on other social network sites, but there are some differences. Because Friends-only posts are commonplace, not being someone’s Friend is a huge limitation on information access. Furthermore, because reciprocity is not structurally required, there is much greater social weight to recognizing someone’s Friendship and reciprocating intentionally. On MySpace and Friendster, there is little to lose and more to gain by being loose with Friendship; the perception is that there is much more to lose on LJ.
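
Because the excerpt describes LJ’s access rules only in prose, a small sketch may help. This is a hedged reconstruction of the visibility logic, not LiveJournal’s actual code, with `filter_members` standing in for the subgroups of Friends the authors mention. Note that nothing in the model forces reciprocity: only the author’s own Friends list matters.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)
class User:
    name: str
    friends: set = field(default_factory=set)  # one-directional Friending

@dataclass(eq=False)
class Post:
    author: User
    security: str                 # "public" | "private" | "friends" | "filter"
    filter_members: set = field(default_factory=set)

def can_read(post: Post, viewer: User) -> bool:
    """LJ-style visibility check: being on the author's Friends
    list is what unlocks Friends-only posts."""
    if post.security == "public" or viewer is post.author:
        return True
    if post.security == "friends":
        return viewer in post.author.friends
    if post.security == "filter":
        return viewer in post.filter_members
    return False  # "private" and anything unrecognized
```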

While users can scroll through their list of Friends, not all Friends are displayed on the participant’s Profile. Most social network sites display Friends in the order in which their accounts were created or by their last login date. By implementing a “Top 8” feature, MySpace changed the social dynamics around the ordering of Friends. Initially, “Top 8” allowed users to select eight Friends to display on their Profile. More recently, that feature was changed to “Top Friends,” giving users more options in how many people they can list [12]. Many users will list only people that they know and celebrities that they admire in their Top Friends, often as a way to both demarcate their identity and signal meaningful relationships with others.

There are many advantages to the Top Friends feature. It allows people to show connections that really say something about who they are. It also serves as a bookmark to the people who matter. By listing the people one visits most frequently, a user turns their own Profile into a set of valuable links.

“As a kid, you used your birthday party guest list as leverage on the playground. ‘If you let me play I’ll invite you to my birthday party.’ Then, as you grew up and got your own phone, it was all about someone being on your speed dial. Well today it’s the MySpace Top 8. It’s the new dangling carrot for gaining superficial acceptance. Taking someone off your Top 8 is your new passive aggressive power play when someone pisses you off.” — Nadine

There are a handful of social norms that pervade Top 8 culture. Often, the person in the upper left (“1st” position) is a significant other, dear friend, or close family member. Reciprocity is another salient component of Top Friends dynamics. If Susan lists Mary on her Top 8, she expects Mary to reciprocate. To acknowledge this, Mary adds a Comment to Susan’s page saying, “Thanx for puttin me on ur Top 8! I put you on mine 2.” By publicly acknowledging this addition, Mary is making certain Susan’s viewers recognize Mary’s status on Susan’s list. Of course, just being in someone’s list is not always enough. As Samantha explains, “Friends get into fights because they’re not 1st on someone’s Top 8, or somebody else is before them.” While some people are ecstatic to be added, there are many more that are frustrated because they are removed or simply not listed.

The Top Friends feature requires participants to actively signal their relationship with others. Such a system makes it difficult to be vague about who matters the most, although some tried by explaining on their bulletins what theme they were using to choose their Top 8 that week: “my Sagittarius friends,” “my basketball team,” and “people whose initials are BR.” Still others relied on fake Profiles for their Top 8.

The networked nature of impressions does not only affect the viewer — it is also how newcomers decided what to present in the first place. When people first joined Friendster, they took cues from the people who invited them. Three specific subcultures dominated the early adopters — bloggers, attendees of the Burning Man [14] festival, and gay men mostly living in New York. If the inviter was a Burner, their Profile would probably be filled with references to the event, with images full of half-naked, costumed people running around the desert. As such, newcomers would get the impression that it was a site for Burners, and they would create a Profile that displayed that facet of their identity. In deciding whom to invite, these newcomers would perpetuate the framing by inviting only people who were part of the Burning Man subculture.

Interestingly, because of this process, Burners believed that the site was for Burners, gay men thought it was a gay dating site, and bloggers were ecstatic to have a geek socializing tool. Each group got this impression because of the way context was created on these systems. Rather than having the context dictated by the environment itself, context emerged through Friends networks. As a result, being socialized into Friendster meant connecting to Friends who reinforced the contextual information of early adopters.

The growth of MySpace followed a similar curve. One of the key early adopter groups was hipsters living in the Silverlake neighborhood of Los Angeles. They were passionate about indie rock music, and many were musicians, promoters, club goers, etc. Long before any press was covering the site, MySpace took off amongst 20/30-something urban socializers, musicians, and teenagers. The last group may not appear obvious, but teenagers are some of the most active music consumers — they follow music culture avidly, even when they are unable to see the bands play live due to age restrictions. As the site grew, the teenagers and 20/30-somethings pretty much left each other alone, although bands bridged these groups. It was not until the site was sold to News Corp. for US$580 million in the summer of 2005 that the press began covering the phenomenon. The massive press helped it grow larger, penetrating those three demographics more deeply but also attracting new populations, namely adults who are interested in teenagers (parents, teachers, pedophiles, marketers).

When context is defined by whom one Friends, and addressing multiple audiences simultaneously complicates all relationships, people must make hard choices. Joshua Meyrowitz (1985) highlights this problem in reference to television. In the early 1960s, Stokely Carmichael regularly addressed segregated black and white audiences about the values of Black Power. Depending on his audience, he used very different rhetorical styles. As his popularity grew, he began to attract media attention and was invited to speak on TV and radio. Unfortunately, this was more of a curse than a blessing because the audiences he would reach through these mediums included both black and white communities. With no way to reconcile the two different rhetorical styles, he had to choose. In choosing to maintain his roots in front of white listeners, Carmichael permanently alienated white society from the messages of Black Power.

Notes

10. Friendster originally limited users to 150 Friends. It is no accident that they chose 150, as this is the “Dunbar number.” In his research on gossip and grooming, Robin Dunbar argues that there is a cognitive limit to the number of relations that one can maintain. People can only keep gossip with 150 people at any given time (Dunbar, 1998). By capping Friends at 150, Friendster either misunderstood Dunbar or did not realize that their users were actually connecting to friends from the past with whom they are not currently engaging.

12. Eight was the maximum number of Friends that the system initially let people have. Some users figured out how to hack the system to display more Friends; there are entire bulletin boards dedicated to teaching others how to hack this. Consistently, upping the limit was the number one request that the company received. In the spring of 2006, MySpace launched an ad campaign for X-Men. In return for Friending X-Men, users were given the option to have 12, 16, 20, or 24 Friends in their Top Friends section. Millions of users did exactly that. In late June, this feature was introduced to everyone, regardless of Friending X-Men. While eight is no longer the limit, people move between calling it Top 8 or Top Friends. I will use both terms interchangeably, even when the number of Friends might be greater than eight.


Denver International Airport, home to alien reptilians enslaving children in deep dungeons

From Jared Jacang Maher’s “DIA Conspiracies Take Off” (Denver Westword News: 30 August 2007):

Chris from Indianapolis has heard that the tunnels below DIA [Denver International Airport] were constructed as a kind of Noah’s Ark so that five million people could escape the coming earth change; shaken and earnest, he asks how someone might go about getting on the list.

Today, dozens of websites are devoted to the “Denver Airport Conspiracy,” and theorists have even nicknamed the place “Area 52.” Wikipedia presents DIA as a primary example of New World Order symbolism, above the entry about the eyeball/pyramid insignia on the one-dollar bill. And over the past two years, DIA has been the subject of books, articles, documentaries, radio interviews and countless YouTube and forum board postings, all attempting to unlock its mysteries. While the most extreme claim maintains that a massive underground facility exists below the airport where an alien race of reptilian humanoids feeds on missing children while awaiting the date of government-sponsored rapture, all of the assorted theories share a common thread: The key to decoding the truth about DIA and the sinister forces that control our reality is contained within the two Tanguma murals, “In Peace and Harmony With Nature” and “The Children of the World Dream of Peace.”

And not all these theorists are Unabomber-like crackpots uploading their hallucinations from basement lairs. Former BBC media personality David Icke, for example, has written twenty books in his quest to prove that the world is controlled by an elite group of reptilian aliens known as the Babylonian Brotherhood, whose ranks include George W. Bush, Queen Elizabeth II, the Jews and Kris Kristofferson. In various writings, lectures and interviews, he has long argued that DIA is one of many home bases for the otherworldly creatures, a fact revealed in the lizard/alien-faced military figure shown in Tanguma’s murals.

“Denver is scheduled to be the Western headquarters of the US New World Order during martial law take over,” Icke wrote in his 1999 book, The Biggest Secret. “Other contacts who have been underground at the Denver Airport claim that there are large numbers of human slaves, many of them children, working there under the control of the reptilians.”

On the other end of the conspiracy spectrum is anti-vaccination activist Dr. Len Horowitz, who believes that global viruses such as AIDS, Ebola, West Nile, tuberculosis and SARS are actually population-control plots engineered by the government. The former dentist from Florida does not speak about 2012 or reptiles — in fact, he sees Icke’s Jewish alien lizards as a Masonic plot to divert observers from the true earthly enemies: remnants of the Third Reich. He even used the mural’s sword-wielding military figure as the front cover of his 2001 book, Death in the Air.

“The Nazi alien symbolizes the Nazi-fascist links between contemporary population controllers and the military-medical-petrochemical-pharmaceutical cartel largely accountable for Hitler’s rise to power,” Horowitz explained in a 2003 interview with BookWire.

Although conspiracy theories vary widely, they all share three commonalities. “One is the belief that nothing happens by accident,” [Syracuse University professor Michael Barkun, author of the 2006 book A Culture of Conspiracy] points out. “Another is that everything is connected. And a third is that nothing is as it seems.” [Emphasis added]

[Alex] Christopher is a 65-year-old grandmother living in Alabama.

Christopher, on the other hand, was open to hearing anything. A man called her and said he had found an elevator at DIA that led to a corridor that led all the way down into a military base that also contained alien-operated concentration camps. She detailed this theory in her next book, Pandora’s Box II…

And the scale of DIA reflected this desire: It was to be the largest, most modern airport in the world. But almost as soon as ground was broken in 1989, problems cropped up. The massive public-works project was encumbered by design changes, difficult airline negotiations, allegations of cronyism in the contracting process, rumors of mismanagement and real troubles with the $700 million (and eventually abandoned) automated baggage system. Peña’s successor, Wellington Webb, was forced to push back the 1993 opening date three times. By the time DIA finally opened in February 1995, the original $1.5 billion cost had grown to $5.2 billion. Three months after that opening, the Congressional Subcommittee on Aviation held a special hearing on DIA in which one member said the Denver airport represented the “worst in government inefficiency, political behind-the-scenes deal-making, and financial mismanagement.” …

And what looked like a gamble in 1995 seems to have paid off for Denver. Today, DIA is considered one of the world’s most efficient, spacious and technologically advanced airports. It is the fifth-busiest in the nation and tenth-busiest in the world, serving some 50 million passengers in 2006.


An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure “Google’s Fusion: Hardware and Software Engineering” shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub-one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers – now numbering about two dozen, although no one outside Google knows the exact number or their locations – come online and automatically, under the direction of the Google File System, start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled to Google’s specification so that servers can be mounted on both the front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.
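
Arnold does not show how this self-configuration works, and Google has not published it at this level of detail; the sketch below is a generic version of the pattern he describes (a new machine announces itself to a coordinator and then heartbeats so work can be routed to it), with every name and address invented.

```python
import socket
import time

COORDINATOR = ("coordinator.internal", 9000)   # hypothetical address

def announce_and_serve():
    """Plug-and-play in miniature: announce this machine's
    resources, then heartbeat so the cluster can start routing
    work here without human intervention."""
    me = socket.gethostname()
    with socket.create_connection(COORDINATOR) as conn:
        conn.sendall(f"JOIN {me} cpus=2 disks=4\n".encode())
        while True:
            conn.sendall(f"HEARTBEAT {me}\n".encode())
            time.sleep(5)
```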

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo, hardware and software are more loosely coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQL Server can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, setup, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not explicitly describe the architecture of the Googleplex as self-similar, but they provide tantalizing glimpses of an approach to online systems that makes a single server share the features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
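
A minimal sketch of the failover logic just described, assuming a simple in-memory log of replica locations rather than Google’s actual file-system interfaces (the file and location names below are illustrative):

```python
REPLICA_LOG = {
    # file -> locations where copies were written (three to six each)
    "index-shard-042": ["rack17/srv03", "rack02/srv11", "atlanta/srv88"],
}

def read_file(name, is_available, fetch):
    """Try each recorded replica in turn: a dead disk, rack, or
    even data center just means the next copy gets used."""
    for location in REPLICA_LOG[name]:
        if is_available(location):
            return fetch(location)
    raise IOError(f"all replicas of {name} unreachable")
```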

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed shortcuts to programming. An example is Google’s library of canned functions, which makes it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
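
Arnold does not name these canned routines, but the description matches the MapReduce programming model Google described publicly in a 2004 paper: the programmer supplies two small functions and the library owns distribution, parallelism and recovery. A single-machine Python sketch of that contract:

```python
from collections import defaultdict

def mapreduce(records, mapper, reducer):
    """The programmer writes `mapper` and `reducer`; everything
    else (a loop here, thousands of machines at Google) is the
    library's problem."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical example:
docs = ["cheap hardware", "smart software", "cheap smart software"]
print(mapreduce(
    docs,
    mapper=lambda doc: [(word, 1) for word in doc.split()],
    reducer=lambda word, ones: sum(ones),
))  # {'cheap': 2, 'hardware': 1, 'smart': 2, 'software': 2}
```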

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives, which fail at the rate of one per 1,000 drives per day (see the arithmetic after this list).
  • Power supplies, which fail at a lower rate.
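
That drive figure compounds faster than it sounds. Assuming independent failures at one per 1,000 drives per day, the fraction of drives lost within a year is

$$1 - \left(1 - \tfrac{1}{1000}\right)^{365} \approx 1 - 0.694 = 0.306,$$

i.e., roughly three drives in ten per year, which is why the smart-software failover described earlier is a necessity rather than a nicety.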

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.


How con artists use psychology to work

From Paul J. Zak’s “How to Run a Con” (Psychology Today: 13 November 2008):

When I was in high school, I took a job at an ARCO gas station on the outskirts of Santa Barbara, California. At the time, I drove a 1967 Mustang hotrod and thought I might pick up some tips and cheap parts by working around cars after school. You see a lot of interesting things working the night shift in a sketchy neighborhood. I constantly saw people making bad decisions: drunk drivers, gang members, unhappy cops, and con men. In fact, I was the victim of a classic con called “The Pigeon Drop.” If we humans have such big brains, how can we get conned?

Here’s what happened to me. One slow Sunday afternoon, a man comes out of the restroom with a pearl necklace in his hand. “Found it on the bathroom floor,” he says. He follows with “Geez, looks nice. I wonder who lost it?” Just then, the gas station’s phone rings and a man asks if anyone found a pearl necklace that he had purchased as a gift for his wife. He offers a $200 reward for the necklace’s return. I tell him that a customer found it. “OK,” he says, “I’ll be there in 30 minutes.” I give him the ARCO address and he gives me his phone number. The man who found the necklace hears all this but tells me he is running late for a job interview and cannot wait for the other man to arrive.

Huum, what to do? The man with the necklace says, “Why don’t I give you the necklace and we split the reward?” The greed-o-meter goes off in my head, suppressing all rational thought. “Yeah, you give me the necklace to hold and I’ll give you $100,” I suggest. He agrees. Since high school kids working at gas stations don’t have $100, I take money out of the cash drawer to complete the transaction.

You can guess the rest. The man with the lost necklace doesn’t come and never answers my many calls. After about an hour, I call the police. The “pearl” necklace was a two-dollar fake and the number I had been calling went to a pay phone nearby. I had to fess up to my boss and pay back the money with my next paycheck.

Why did this con work? Let’s do some neuroscience. While the primary motivator from my perspective was greed, the pigeon drop cleverly engages THOMAS (The Human Oxytocin Mediated Attachment System). … THOMAS is a powerful brain circuit that releases the neurochemical oxytocin when we are trusted and induces a desire to reciprocate the trust we have been shown–even with strangers.

The key to a con is not that you trust the conman, but that he shows he trusts you. Conmen ply their trade by appearing fragile or needing help, by seeming vulnerable. Because of THOMAS, the human brain makes us feel good when we help others–this is the basis for attachment to family and friends and cooperation with strangers. “I need your help” is a potent stimulus for action.


Why American car companies are in trouble

From Paul Ingrassia’s “How Detroit Drove Into a Ditch” (The Wall Street Journal: 25 October 2008):

This situation doesn’t stem from the recent meltdown in banking and the markets. GM, Ford and Chrysler have been losing billions since 2005, when the U.S. economy was still healthy. The financial crisis does, however, greatly exacerbate Detroit’s woes. As car sales plunge — both in the U.S. and in Detroit’s once-booming overseas markets — it’s becoming nearly impossible for the companies to cut costs fast enough to keep pace with the evaporation of their revenue. All three companies, once the very symbol of American economic might, need new capital, but their options for raising it are limited.

In all this lies a tale of hubris, missed opportunities, disastrous decisions and flawed leadership of almost biblical proportions. In fact, for the last 30 years Detroit has gone astray, repented, gone astray and repented again in a cycle not unlike the Israelites in the Book of Exodus.

Detroit failed to grasp — or at least to address — the fundamental nature of its Japanese competition. Japan’s car companies, and more recently the Germans and Koreans, gained a competitive advantage largely by forging an alliance with American workers.

Detroit, meanwhile, has remained mired in mutual mistrust with the United Auto Workers union. While the suspicion has abated somewhat in recent years, it never has disappeared — which is why Detroit’s factories remain vastly more cumbersome to manage than the factories of foreign car companies in the U.S.

Two incidents in 1936 and 1937 formed this lasting labor-management divide: the sit-down strike at GM’s factories in Flint, Mich., and the Battle of the Overpass in Detroit, in which Ford goons beat up union organizers. But the United Auto Workers prevailed, and as the GM-Ford-Chrysler oligopoly emerged in the 1940s, the union gained a labor monopoly in American auto factories. As costs increased, the companies routinely passed them on to U.S. consumers, who had virtually no alternatives in buying cars.

Nissan, Toyota and other Japanese car companies soon started building factories in America, followed by German and Korean auto makers. There are now 16 foreign-owned assembly plants in the U.S., and many more that build engines, transmissions and other components.

Several years ago Ford even considered dropping cars altogether because they weren’t profitable, and focusing entirely on trucks. Then in 2005, Hurricane Katrina and growing oil demand from China and India sent gasoline prices soaring and SUV sales plunging. GM lost $10.6 billion that year. Ford topped that by losing $12.7 billion in 2006. Last summer Daimler gave up on Chrysler, selling it to private-equity powerhouse Cerberus for about one-fourth of what it had paid to buy Chrysler. Last fall the UAW approved significant wage and benefit concessions, but they won’t kick in until 2010. That might be too late. GM lost $15.5 billion in this year’s second quarter, Ford lost $8.7 billion, and further losses are coming. (Closely held Chrysler, of course, doesn’t report financial results.)


The NSA and threats to privacy

From James Bamford’s “Big Brother Is Listening” (The Atlantic: April 2006):

This legislation, the 1978 Foreign Intelligence Surveillance Act, established the FISA court—made up of eleven judges handpicked by the chief justice of the United States—as a secret part of the federal judiciary. The court’s job is to decide whether to grant warrants requested by the NSA or the FBI to monitor communications of American citizens and legal residents. The law allows the government up to three days after it starts eavesdropping to ask for a warrant; every violation of FISA carries a penalty of up to five years in prison. From May 18, 1979, when the court opened for business, until the end of 2004, it granted 18,742 NSA and FBI applications; it turned down only four outright.

Such facts worry Jonathan Turley, a George Washington University law professor who worked for the NSA as an intern while in law school in the 1980s. The FISA “courtroom,” hidden away on the top floor of the Justice Department building (because even its location is supposed to be secret), is actually a heavily protected, windowless, bug-proof installation known as a Sensitive Compartmented Information Facility, or SCIF.

It is true that the court has been getting tougher. From 1979 through 2000, it modified only two out of 13,087 warrant requests. But from the start of the Bush administration, in 2001, the number of modifications increased to 179 out of 5,645 requests. Most of those—173—involved what the court terms “substantive modifications.”

Contrary to popular perception, the NSA does not engage in “wiretapping”; it collects signals intelligence, or “sigint.” In contrast to the image we have from movies and television of an FBI agent placing a listening device on a target’s phone line, the NSA intercepts entire streams of electronic communications containing millions of telephone calls and e-mails. It runs the intercepts through very powerful computers that screen them for particular names, telephone numbers, Internet addresses, and trigger words or phrases. Any communications containing flagged information are forwarded by the computer for further analysis.
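
Bamford describes this screening only at a high level. Purely as an illustration, and not a claim about the NSA’s actual system, the core operation he sketches is a watch-list match applied to a message stream; every term below is a made-up placeholder.

```python
WATCH_LIST = {"hypothetical name", "+1 555 0100", "trigger phrase"}

def screen(messages):
    """Forward only the messages that contain a flagged name,
    number, address, or trigger phrase for further analysis."""
    for message in messages:
        text = message.lower()
        if any(term in text for term in WATCH_LIST):
            yield message
```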

Names and information on the watch lists are shared with the FBI, the CIA, the Department of Homeland Security, and foreign intelligence services. Once a person’s name is in the files, even if nothing incriminating ever turns up, it will likely remain there forever. There is no way to request removal, because there is no way to confirm that a name is on the list.

In December of 1997, in a small factory outside the southern French city of Toulouse, a salesman got caught in the NSA’s electronic web. Agents working for the NSA’s British partner, the Government Communications Headquarters, learned of a letter of credit, valued at more than $1.1 million, issued by Iran’s defense ministry to the French company Microturbo. According to NSA documents, both the NSA and the GCHQ concluded that Iran was attempting to secretly buy from Microturbo an engine for the embargoed C-802 anti-ship missile. Faxes zapping back and forth between Toulouse and Tehran were intercepted by the GCHQ, which sent them on not just to the NSA but also to the Canadian and Australian sigint agencies, as well as to Britain’s MI6. The NSA then sent the reports on the salesman making the Iranian deal to a number of CIA stations around the world, including those in Paris and Bonn, and to the U.S. Commerce Department and the Customs Service. Probably several hundred people in at least four countries were reading the company’s communications.

Such events are central to the current debate involving the potential harm caused by the NSA’s warrantless domestic eavesdropping operation. Even though the salesman did nothing wrong, his name made its way into the computers and onto the watch lists of intelligence, customs, and other secret and law-enforcement organizations around the world. Maybe nothing will come of it. Maybe the next time he tries to enter the United States or Britain he will be denied, without explanation. Maybe he will be arrested. As the domestic eavesdropping program continues to grow, such uncertainties may plague innocent Americans whose names are being run through the supercomputers even though the NSA has not met the established legal standard for a search warrant. It is only when such citizens are turned down while applying for a job with the federal government—or refused when seeking a Small Business Administration loan, or turned back by British customs agents when flying to London on vacation, or even placed on a “no-fly” list—that they will realize that something is very wrong. But they will never learn why.

General Michael Hayden, director of the NSA from 1999 to 2005 and now principal deputy director of national intelligence, noted in 2002 that during the 1990s, e-communications “surpassed traditional communications. That is the same decade when mobile cell phones increased from 16 million to 741 million—an increase of nearly 50 times. That is the same decade when Internet users went from about 4 million to 361 million—an increase of over 90 times. Half as many land lines were laid in the last six years of the 1990s as in the whole previous history of the world. In that same decade of the 1990s, international telephone traffic went from 38 billion minutes to over 100 billion. This year, the world’s population will spend over 180 billion minutes on the phone in international calls alone.”

Intercepting communications carried by satellite is fairly simple for the NSA. The key conduits are the thirty Intelsat satellites that ring the Earth, 22,300 miles above the equator. Many communications from Europe, Africa, and the Middle East to the eastern half of the United States, for example, are first uplinked to an Intelsat satellite and then downlinked to AT&T’s ground station in Etam, West Virginia. From there, phone calls, e-mails, and other communications travel on to various parts of the country. To listen in on that rich stream of information, the NSA built a listening post fifty miles away, near Sugar Grove, West Virginia. Consisting of a group of very large parabolic dishes, hidden in a heavily forested valley and surrounded by tall hills, the post can easily intercept the millions of calls and messages flowing every hour into the Etam station. On the West Coast, high on the edge of a bluff overlooking the Okanogan River, near Brewster, Washington, is the major commercial downlink for communications to and from Asia and the Pacific. Consisting of forty parabolic dishes, it is reportedly the largest satellite antenna farm in the Western Hemisphere. A hundred miles to the south, collecting every whisper, is the NSA’s western listening post, hidden away on a 324,000-acre Army base in Yakima, Washington. The NSA posts collect the international traffic beamed down from the Intelsat satellites over the Atlantic and Pacific. But each also has a number of dishes that appear to be directed at domestic telecommunications satellites.

Until recently, most international telecommunications flowing into and out of the United States traveled by satellite. But faster, more reliable undersea fiber-optic cables have taken the lead, and the NSA has adapted. The agency taps into the cables that don’t reach our shores by using specially designed submarines, such as the USS Jimmy Carter, to attach a complex “bug” to the cable itself. This is difficult, however, and undersea taps are short-lived because the batteries last only a limited time. The fiber-optic transmission cables that enter the United States from Europe and Asia can be tapped more easily at the landing stations where they come ashore. With the acquiescence of the telecommunications companies, it is possible for the NSA to attach monitoring equipment inside the landing station and then run a buried encrypted fiber-optic “backhaul” line to NSA headquarters at Fort Meade, Maryland, where the river of data can be analyzed by supercomputers in near real time.

Tapping into the fiber-optic network that carries the nation’s Internet communications is even easier, as much of the information transits through just a few “switches” (similar to the satellite downlinks). Among the busiest are MAE East (Metropolitan Area Ethernet), in Vienna, Virginia, and MAE West, in San Jose, California, both owned by Verizon. By accessing the switch, the NSA can see who’s e-mailing with whom over the Internet cables and can copy entire messages. Last September, the Federal Communications Commission further opened the door for the agency. The 1994 Communications Assistance for Law Enforcement Act required telephone companies to rewire their networks to provide the government with secret access. The FCC has now extended the act to cover “any type of broadband Internet access service” and the new Internet phone services—and ordered company officials never to discuss any aspect of the program.

The National Security Agency was born in absolute secrecy. Unlike the CIA, which was created publicly by a congressional act, the NSA was brought to life by a top-secret memorandum signed by President Truman in 1952, consolidating the country’s various military sigint operations into a single agency. Even its name was secret, and only a few members of Congress were informed of its existence—and they received no information about some of its most important activities. Such secrecy has lent itself to abuse.

During the Vietnam War, for instance, the agency was heavily involved in spying on the domestic opposition to the government. Many of the Americans on the watch lists of that era were there solely for having protested against the war. … Even so much as writing about the NSA could land a person a place on a watch list.

For instance, during World War I, the government read and censored thousands of telegrams—the e-mail of the day—sent hourly by telegraph companies. Though the end of the war brought with it a reversion to the Radio Act of 1912, which guaranteed the secrecy of communications, the State and War Departments nevertheless joined together in May of 1919 to create America’s first civilian eavesdropping and code-breaking agency, nicknamed the Black Chamber. By arrangement, messengers visited the telegraph companies each morning and took bundles of hard-copy telegrams to the agency’s offices across town. These copies were returned before the close of business that day.

A similar tale followed the end of World War II. In August of 1945, President Truman ordered an end to censorship. That left the Signal Security Agency (the military successor to the Black Chamber, which was shut down in 1929) without its raw intelligence—the telegrams provided by the telegraph companies. The director of the SSA sought access to cable traffic through a secret arrangement with the heads of the three major telegraph companies. The companies agreed to turn all telegrams over to the SSA, under a plan code-named Operation Shamrock. It ran until the government’s domestic spying programs were publicly revealed, in the mid-1970s.

Frank Church, the Idaho Democrat who led the first probe into the National Security Agency, warned in 1975 that the agency’s capabilities

“could be turned around on the American people, and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it is done, is within the reach of the government to know. Such is the capacity of this technology.”


The Chinese Internet threat

From Shane Harris’ “China’s Cyber-Militia” (National Journal: 31 May 2008):

Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.

One prominent expert told National Journal he believes that China’s People’s Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. “They said that, with confidence, it had been traced back to the PLA.” These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.

Bennett, whose former trade association includes some of the nation’s largest computer-security companies and who has testified before Congress on the vulnerability of information networks, also said that a blackout in February, which affected 3 million customers in South Florida, was precipitated by a cyber-hacker. That outage cut off electricity along Florida’s east coast, from Daytona Beach to Monroe County, and affected eight power-generating stations.

A second information-security expert independently corroborated Bennett’s account of the Florida blackout. According to this individual, who cited sources with direct knowledge of the investigation, a Chinese PLA hacker attempting to map Florida Power & Light’s computer infrastructure apparently made a mistake.

The industry source, who conducts security research for government and corporate clients, said that hackers in China have devoted considerable time and resources to mapping the technology infrastructure of other U.S. companies. That assertion has been backed up by the current vice chairman of the Joint Chiefs of Staff, who said last year that Chinese sources are probing U.S. government and commercial networks.

“The Chinese operate both through government agencies, as we do, but they also operate through sponsoring other organizations that are engaging in this kind of international hacking, whether or not under specific direction. It’s a kind of cyber-militia.… It’s coming in volumes that are just staggering.”

In addition to disruptive attacks on networks, officials are worried about the Chinese using long-established computer-hacking techniques to steal sensitive information from government agencies and U.S. corporations.

Brenner, the U.S. counterintelligence chief, said he knows of “a large American company” whose strategic information was obtained by its Chinese counterparts in advance of a business negotiation. As Brenner recounted the story, “The delegation gets to China and realizes, ‘These guys on the other side of the table know every bottom line on every significant negotiating point.’ They had to have got this by hacking into [the company’s] systems.”

During a trip to Beijing in December 2007, spyware programs designed to clandestinely remove information from personal computers and other electronic equipment were discovered on devices used by Commerce Secretary Carlos Gutierrez and possibly other members of a U.S. trade delegation, according to a computer-security expert with firsthand knowledge of the spyware used. Gutierrez was in China with the Joint Commission on Commerce and Trade, a high-level delegation that includes the U.S. trade representative and that meets with Chinese officials to discuss such matters as intellectual-property rights, market access, and consumer product safety. According to the computer-security expert, the spyware programs were designed to open communications channels to an outside system, and to download the contents of the infected devices at regular intervals. The source said that the computer codes were identical to those found in the laptop computers and other devices of several senior executives of U.S. corporations who also had their electronics “slurped” while on business in China.

The Chinese make little distinction between hackers who work for the government and those who undertake cyber-adventures on its behalf. “There’s a huge pool of Chinese individuals, students, academics, unemployed, whatever it may be, who are, at minimum, not discouraged from trying this out,” said Rodger Baker, a senior China analyst for Stratfor, a private intelligence firm. So-called patriotic-hacker groups have launched attacks from inside China, usually aimed at people they think have offended the country or pose a threat to its strategic interests. At a minimum the Chinese government has done little to shut down these groups, which are typically composed of technologically skilled and highly nationalistic young men.

The military is not waiting for China, or any other nation or hacker group, to strike a lethal cyber-blow. In March, Air Force Gen. Kevin Chilton, the chief of U.S. Strategic Command, said that the Pentagon has its own cyberwar plans. “Our challenge is to define, shape, develop, deliver, and sustain a cyber-force second to none,” Chilton told the Senate Armed Services Committee. He asked appropriators for an “increased emphasis” on the Defense Department’s cyber-capabilities to help train personnel to “conduct network warfare.”

The Air Force is in the process of setting up a Cyberspace Command, headed by a two-star general and comprising about 160 individuals assigned to a handful of bases. As Wired noted in a recent profile, Cyberspace Command “is dedicated to the proposition that the next war will be fought in the electromagnetic spectrum and that computers are military weapons.” The Air Force has launched a TV ad campaign to drum up support for the new command, and to call attention to cyberwar. “You used to need an army to wage a war,” a narrator in the TV spot declares. “Now all you need is an Internet connection.”


If concerts bring money in for the music biz, what happens when concerts get smaller?

From Jillian Cohen’s “The Show Must Go On” (The American: March/April 2008):

You can’t steal a concert. You can’t download the band—or the sweaty fans in the front row, or the merch guy, or the sound tech—to your laptop to take with you. Concerts are not like albums—easy to burn, copy, and give to your friends. If you want to share the concert-going experience, you and your friends all have to buy tickets. For this reason, many in the ailing music industry see concerts as the next great hope to revive their business.

That hope may be a blip that is already fading, to the dismay of the major record labels. CD sales have dropped 25 percent since 2000, and digital downloads haven’t picked up the slack. As layoffs swept the major labels this winter, many industry veterans turned their attention to the concert business, pinning their hopes on live performances as a way to bolster their bottom line.

Concerts might be a short-term fix. As one national concert promoter says, “The road is where the money is.” But in the long run, the music business can’t depend on concert tours for a simple, biological reason: the huge tour profits that have been generated in the last few decades have come from performers who are in their 40s, 50s, and 60s. As these artists get older, they’re unlikely to be replaced, because the industry isn’t investing in new talent development.

When business was good—as it was when CD sales grew through much of the 1990s—music labels saw concert tours primarily as marketing vehicles for albums. Now, they’re seizing on the reverse model. Tours have become a way to market the artist as a brand, with the fan clubs, limited-edition doodads, and other profitable products and services that come with the territory.

“Overall, it’s not a pretty picture for some parts of the industry,” JupiterResearch analyst David Card wrote in November when he released a report on digital music sales. “Labels must act more like management companies, and tap into the broadest collection of revenue streams and licensing as possible,” he said. “Advertising and creative packaging and bundling will have to play a bigger role than they have. And the $3 billion-plus touring business is not exactly up for grabs—it’s already competitive and not very profitable. Music companies of all types need to use the Internet for more cost-effective marketing, and A&R [artist development] risk has to be spread more fairly.”

The ‘Heritage Act’ Dilemma

Even so, belief in the touring business was so strong last fall that Madonna signed over her next ten years to touring company Live Nation—the folks who put on megatours for The Rolling Stones, The Police, and other big headliners—in a deal reportedly worth more than $120 million. The Material Girl’s arrangement with Live Nation is known in the industry as a 360-degree deal. Such deals may give artists a big upfront payout in exchange for allowing record labels or, in Madonna’s case, tour producers to profit from all aspects of their business, including touring, merchandise, sponsorships, and more.

While 360 deals may work for big stars, insiders warn that they’re not a magic bullet that will save record labels from their foundering, top-heavy business model. Some artists have done well by 360 contracts, including alt-metal act Korn and British pop sensation Robbie Williams. With these successes in mind, some tout the deals as a way for labels to recoup money they’re losing from downloads and illegal file sharing. But the artists who are offered megamillions for a piece of their brand already have built it through years of album releases, heavy touring, and careful fan-base development.

Not all these deals are good ones, says Bob McLynn, who manages pop-punk act Fall Out Boy and other young artists through his agency, Crush Management. Labels still have a lot to offer, he says. They pay for recording sessions, distribute CDs, market a band’s music, and put up money for touring, music-video production, and other expenses. But in exchange, music companies now want to profit from more than a band’s albums and recording masters. “The artist owns the brand, and now the labels—because they can’t sell as many albums—are trying to get in on the brand,” McLynn says. “To be honest, if an artist these days is looking for a traditional major-label deal for several hundred thousand dollars, they will have to be willing to give up some of that brand.”

For a young act, such offers may be enticing, but McLynn urges caution. “If they’re not going to give you a lot of money for it, it’s a mistake,” says the manager, who helped build Fall Out Boy’s huge teen fan base through constant touring and Internet marketing, only later signing the band to a big label. “I had someone from a major label ask me recently, ‘Hey, I have this new artist; can we convert the deal to a 360 deal?’” McLynn recalls. “I told him [it would cost] $2 million to consider it. He thought I was crazy; but I’m just saying, how is that crazy? If you want all these extra rights and if this artist does blow up, then that’s the best deal in the world for you. If you’re not taking a risk, why am I going to give you this? And if it’s not a lot of money, you’re not taking a risk.”

A concert-tour company’s margin is about 4 percent, Live Nation CEO Michael Rapino has said, while the take on income from concessions, T-shirts, and other merchandise sold at shows can be much higher. The business had a record-setting year in 2006, which saw The Rolling Stones, Madonna, U2, Barbra Streisand, and other popular, high-priced tours on the road. But in 2007, North American gross concert dollars dropped more than 10 percent to $2.6 billion, according to Billboard statistics. Concert attendance fell by more than 19 percent to 51 million. Fewer people in the stands means less merchandise sold and concession-stand food eaten.

Now add this wrinkle: if you pour tens of millions of dollars into a 360 deal, as major labels and Live Nation have done with their big-name stars, you will need the act to tour for a long time to recoup your investment. “For decades we’ve been fueled by acts from the ’60s,” says Gary Bongiovanni, editor of the touring-industry trade magazine Pollstar. Three decades ago, no one would have predicted that Billy Joel or Rod Stewart would still be touring today, Bongiovanni notes, yet the industry has come to depend on artists such as these, known as “heritage acts.” “They’re the ones that draw the highest ticket prices and biggest crowds for our year-end charts,” he says. Consider the top-grossing tours of 2006 and 2007: veterans such as The Rolling Stones, Rod Stewart, Barbra Streisand, and Roger Waters were joined by comparative youngsters Madonna, U2, and Bon Jovi. Only two of the 20 acts—former Mouseketeers Justin Timberlake and Christina Aguilera—were younger than 30.

These young stars, the ones who are prone to taking what industry observer Bob Lefsetz calls “media shortcuts,” such as appearing on MTV, may have less chance of developing real staying power. Lefsetz, formerly an entertainment lawyer and consultant to major labels, has for 20 years published an industry newsletter (now a blog) called the Lefsetz Letter. “Whatever a future [superstar] act will be, it won’t be as ubiquitous as the acts from the ’60s because we were all listening to Top 40 radio,” he says.

From the 1960s to the 1980s, music fans discovered new music primarily on the radio and purchased albums in record stores. The stations young people listened to might have played rock, country, or soul; but whatever the genre, DJs introduced listeners to the hits of tomorrow and guided them toward retail stores and concert halls.

Today, music is available in so many genres and subgenres, via so many distribution streams—including cell phones, social networking sites, iTunes, Pure Volume, and Limewire—that common ground rarely exists for post–Baby Boom fans. This in turn makes it harder for tour promoters to corral the tens of thousands of ticket holders they need to fill an arena. “More people can make music than ever before. They can get it heard, but it’s such a cacophony of noise that it will be harder to get any notice,” says Lefsetz.

Most major promoters don’t know how to capture young people’s interest and translate it into ticket sales, says Barnet. It’s not that his students don’t listen to music, but that they seek to discover it online, from friends, or via virtual buzz. They’ll go out to clubs and hear bands, but they rarely attend big arena concerts. Promoters typically spend 40 percent to 50 percent of their promotional budgets on radio and newspaper advertising, Barnet says. “High school and college students—what percentage of tickets do they buy? And you’re spending most of your advertising dollars on media that don’t even focus on those demographics.” Conversely, the readers and listeners of traditional media are perfect for high-grossing heritage tours. As long as tickets sell for those events, promoters won’t have to change their approach, Barnet says. Heritage acts also tend to sell more CDs, says Pollstar’s Bongiovanni. “Your average Rod Stewart fan is more likely to walk into a record store, if they can find one, than your average Fall Out Boy fan.”

Personally, [Live Nation’s chairman of global music and global touring, Arthur Fogel] said, he’d been disappointed in the young bands he’d seen open for the headliners on Live Nation’s big tours. Live performance requires a different skill set than studio recording. It’s the difference between playing music and putting on a show, he said. “More often than not, I find young bands get up and play their music but are not investing enough time or energy into creating that show.” It’s incumbent on the industry to find bands that can rise to the next level, he added. “We aren’t seeing that development that’s creating the next generation of stadium headliners. Hopefully that will change.”

Live Nation doesn’t see itself spearheading such a change, though. In an earlier interview with Billboard magazine, Rapino took a dig at record labels’ model of bankrolling ten bands in the hope that one would become a success. “We don’t want to be in the business of pouring tens of millions of dollars into unknown acts, throwing it against the wall and then hoping that enough sticks that we only lose some of our money,” he said. “It’s not part of our business plan to be out there signing 50 or 60 young acts every year.”

And therein lies the rub. If the big dog in the touring pack won’t take responsibility for nurturing new talent and the labels have less capital to invest in artist development, where will the future megatour headliners come from?

Indeed, despite its all-encompassing moniker, the 360 deal isn’t the only option for musicians, nor should it be. Some artists may find they need the distribution reach and bankroll that a traditional big-label deal provides. Others might negotiate with independent labels for profit sharing or licensing arrangements in which they’ll retain more control of their master recordings. Many will earn the bulk of their income from licensing their songs for use on TV shows, movie soundtracks, and video games. Some may take an entirely do-it-yourself approach, in which they’ll write, produce, perform, and distribute all of their own music—and keep any of the profits they make.

There are growing signs of this transition. The Eagles recently partnered with Wal-Mart to give the discount chain exclusive retail-distribution rights to the band’s latest album. Paul McCartney chose to release his most recent record through Starbucks, and last summer Prince gave away his newest CD to London concertgoers and to readers of a British tabloid. And in a move that earned nearly as much ink as Madonna’s 360 deal, rock act Radiohead let fans download its new release directly from the band’s website for whatever price listeners were willing to pay. Though the numbers are debated, one source, ComScore, reported that in the first month 1.2 million people downloaded the album. About 40 percent paid for it, at an average of about $6 each—well above the usual cut an artist would get in royalties. The band also self-released the album in an $80 limited-edition package and, months later, as a CD with traditional label distribution. Such a move wouldn’t work for just any artist. Radiohead had the luxury of a fan base that it developed over more than a dozen years with a major label. But the band’s experiment showed creativity and adaptability.
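The reported figures are easy to turn into implied revenue. Here’s a quick sketch of the arithmetic (ComScore’s disputed numbers, my math):

```python
# What the first-month numbers reported by ComScore would imply.
downloads = 1_200_000    # reported first-month downloads
paying_share = 0.40      # ~40 percent chose to pay something
avg_price = 6.00         # average of about $6 per paying downloader

gross = downloads * paying_share * avg_price
per_download = gross / downloads     # averaged over payers AND free riders

print(f"implied gross:        ${gross:,.0f}")        # $2,880,000
print(f"average per download: ${per_download:.2f}")  # $2.40
# Even averaged across the 60% who paid nothing, ~$2.40 a copy likely
# beats a typical per-album artist royalty -- with no label taking a
# cut on the download, since the band sold directly from its site.
```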


Abuse of “terrorist” investigative powers

From BBC News’ “Council admits spying on family” (10 April 2008):

A council has admitted spying on a family, using laws designed to track criminals and terrorists, to find out if they were really living in a school catchment area.

A couple and their three children were put under surveillance without their knowledge by Poole Borough Council for more than two weeks.

The council admitted using powers under the Regulation of Investigatory Powers Act (RIPA) on six occasions in total.

Three of those were for suspected fraudulent school place applications.

RIPA allows councils to carry out surveillance if they suspect criminal activity.

On its website, the Home Office says: “The Regulation of Investigatory Powers Act (RIPA) legislates for using methods of surveillance and information gathering to help the prevention of crime, including terrorism.”


Bush’s Manicheanism destroyed him

From Glenn Greenwald’s “A tragic legacy: How a good vs. evil mentality destroyed the Bush presidency” (Salon: 20 June 2007):

One of the principal dangers of vesting power in a leader who is convinced of his own righteousness — who believes that, by virtue of his ascension to political power, he has been called to a crusade against Evil — is that the moral imperative driving the mission will justify any and all means used to achieve it. Those who have become convinced that they are waging an epic and all-consuming existential war against Evil cannot, by the very premises of their belief system, accept any limitations — moral, pragmatic, or otherwise — on the methods adopted to triumph in this battle.

Efforts to impose limits on waging war against Evil will themselves be seen as impediments to Good, if not as an attempt to aid and abet Evil. In a Manichean worldview, there is no imperative that can compete with the mission of defeating Evil. The primacy of that mandate is unchallengeable. Hence, there are no valid reasons for declaring off-limits any weapons that can be deployed in service of the war against Evil.

Equally operative in the Manichean worldview is the principle that those who are warriors for a universal Good cannot recognize that the particular means they employ in service of their mission may be immoral or even misguided. The very fact that the instruments they embrace are employed in service of their Manichean mission renders any such objections incoherent. How can an act undertaken in order to strengthen the side of Good, and to weaken the forces of Evil, ever be anything other than Good in itself? Thus, any act undertaken by a warrior of Good in service of the war against Evil is inherently moral for that reason alone.

It is from these premises that the most amoral or even most reprehensible outcomes can be — and often are — produced by political movements and political leaders grounded in universal moral certainties. Intoxicated by his own righteousness and therefore immune from doubt, the Manichean warrior becomes capable of acts of moral monstrousness that would be unthinkable in the absence of such unquestionable moral conviction. One who believes himself to be leading a supreme war against Evil on behalf of Good will be incapable of understanding any claims that he himself is acting immorally.

That is the essence of virtually every argument Bush supporters make regarding terrorism. No matter what objection is raised to the never-ending expansions of executive power, no matter what competing values are touted (due process, the rule of law, the principles our country embodies, how we are perceived around the world), the response will always be that The Terrorists are waging war against us and our overarching priority — one that overrides all others — is to protect ourselves, to triumph over Evil. By definition, then, there can never be any good reason to oppose vesting powers in the government to protect us from The Terrorists because that goal outweighs all others.

But our entire system of government, from its inception, has been based upon a very different calculus — that is, that many things matter besides merely protecting ourselves against threats, and consequently, we are willing to accept risks, even potentially fatal ones, in order to secure those other values. From its founding, America has rejected the worldview of prioritizing physical safety above all else, as such a mentality leads to an impoverished and empty civic life. The premise of America is and always has been that imposing limitations on government power is necessary to secure liberty and avoid tyranny even if it means accepting an increased risk of death as a result. That is the foundational American value.

It is this courageous demand for core liberties even if such liberties provide less than maximum protection from physical risks that has made America bold, brave, and free. Societies driven exclusively or primarily by fear, by avoiding Evil, minimizing risks, and seeking above all else that our government “protects” us, are not free. That is a path that inevitably leads to authoritarianism — an increasingly strong and empowered leader in whom the citizens vest ever-increasing faith and power in exchange for promises of safety. That is most assuredly not the historical ethos of the United States.

The Bill of Rights contains numerous limitations on government power, and many of them render us more vulnerable to threats. If there is a serial killer on the loose in a community, the police would be able to find and apprehend him much more easily if they could simply invade and search everyone’s homes at will and without warning. Nonetheless, the Fourth Amendment expressly prohibits the police from undertaking such searches. It requires both probable cause and a judicial warrant before police may do so, even though such limitations on state power will enable dangerous killers to elude capture.

The scare tactic of telling Americans that every desired expansion of government power is justified by the Evil Terrorist Threat — and that there is no need to worry because the president is Good and will use these powers only to protect us — is effective because it has immediate rhetorical appeal. Most people, especially when placed in fear of potentially fatal threats, are receptive to the argument that maximizing protection is the only thing that matters, and that no abstract concept (such as liberty, or freedom, or due process, or adhering to civilized norms) is worth risking one’s life by accepting heightened levels of vulnerability.

But nothing in life is perfectly safe. Perfect safety is an illusion. When pursued by an individual to the exclusion of all else, it creates a tragically worthless, paralyzed way of life. On the political level, safety as the paramount goal produces tyranny, causing people to vest as much power as possible in the government, without limits, in exchange for the promise of maximum protection.


USA owns 74% of IPv4 addresses

From Stephen Ornes’s “Map: What Does the Internet Look Like?” (Discover: October 2006):

The United States owns 74 percent of the 4 billion available Internet protocol (IP) addresses. China’s stake amounts to little more than that of an American university. Not surprisingly, China is championing the next wave of the Internet, IPv6, which would accommodate 340 trillion trillion trillion IP addresses.
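The address math is easy to sanity-check: IPv4 addresses are 32 bits, IPv6 addresses are 128 bits. A minimal sketch (my arithmetic, applying the article’s 74 percent figure):

```python
# Sanity-checking the quoted address-space figures.
ipv4_total = 2 ** 32                 # 32-bit addresses: 4,294,967,296 (~4 billion)
us_share = int(0.74 * ipv4_total)    # the article's 74 percent claim

ipv6_total = 2 ** 128                # 128-bit addresses under IPv6

print(f"IPv4 total:  {ipv4_total:,}")     # 4,294,967,296
print(f"74% of IPv4: {us_share:,}")       # ~3.18 billion addresses
print(f"IPv6 total:  {ipv6_total:.3e}")   # ~3.403e+38
# 3.4e38 = 340 x 10**36, i.e. "340 trillion trillion trillion."
```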


Wal-Mart’s monopsony power damages its vendors

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

The firm is one of the world’s most intrusive, jealous, fastidious micromanagers, and its aim is nothing less than to remake entirely how its suppliers do business, not least so that it can shift many of its own costs of doing business onto them. In addition to dictating what price its suppliers must accept, Wal-Mart also dictates how they package their products, how they ship those products, and how they gather and process information on the movement of those products. Take, for instance, Levi Strauss & Co. Wal-Mart dictates that its suppliers tell it what price they charge Wal-Mart’s competitors, that they accept payment entirely on Wal-Mart’s terms, and that they share information all the way back to the purchase of raw materials. Take, for instance, Newell Rubbermaid. Wal-Mart controls with whom its suppliers speak, how and where they can sell their goods, and even encourages them to support Wal-Mart in its political fights. Take, for instance, Disney. Wal-Mart all but dictates to suppliers where to manufacture their products, as well as how to design those products and what materials and ingredients to use in those products. Take, for instance, Coca-Cola. [Wal-Mart decided that it did not approve of the artificial sweetener Coca-Cola planned to use in a new line of diet colas. In a response that would have been unthinkable just a few years ago, Coca-Cola yielded to the will of an outside firm and designed a second product to meet Wal-Mart’s decree.] …

Wal-Mart and a growing number of today’s dominant firms, by contrast, are programmed to cut cost faster than price, to slow the introduction of new technologies and techniques, to dictate downward the wages and profits of the millions of people and smaller firms who make and grow what they sell, to break down entire lines of production in the name of efficiency. The effects of this change are clear: We see them in the collapsing profit margins of the firms caught in Wal-Mart’s system. We see them in the fact that of Wal-Mart’s top ten suppliers in 1994, four have sought bankruptcy protection.


The mirror of monopoly: monopsony … which may be worse

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

Popular notions of oligopoly and monopoly tend to focus on the danger that firms, having gained control over a marketplace, will then be able to dictate an unfairly high price, extracting a sort of tax from society as a whole. But what should concern us today even more is a mirror image of monopoly called “monopsony.” Monopsony arises when a firm captures the ability to dictate price to its suppliers, because the suppliers have no real choice other than to deal with that buyer. Not all oligopolists rely on the exercise of monopsony, but a large and growing contingent of today’s largest firms are built to do just that. The ultimate danger of monopsony is that it deprives the firms that actually manufacture products of an adequate return on their investment. In other words, the ultimate danger of monopsony is that, over time, it tends to destroy the machines and skills on which we all rely.

Examples of monopsony can be difficult to pin down, but we are in luck in that today we have one of the best illustrations of monopsony pricing power in economic history: Wal-Mart. There is little need to recount at any length the retailer’s power over America’s marketplace. For our purposes, a few facts will suffice — that one in every five retail sales in America is recorded at Wal-Mart’s cash registers; that the firm’s revenue nearly equals that of the next six retailers combined; that for many goods, Wal-Mart accounts for upward of 30 percent of U.S. sales, and plans to more than double its sales within the next five years.

… The problem is that Wal-Mart, like other monopsonists, does not participate in the market so much as use its power to micromanage the market, carefully coordinating the actions of thousands of firms from a position above the market.
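The supplier squeeze Lynn describes falls out of any textbook monopsony model. Here is a minimal illustration with made-up linear supply-and-demand numbers (mine, not from the article):

```python
# Toy monopsony model: one buyer facing an upward-sloping supply curve.
# Inverse supply from suppliers:   p(q) = a + b*q
# Buyer's marginal value of units: v(q) = c - d*q
a, b, c, d = 2.0, 1.0, 20.0, 1.0

# Competitive benchmark: buy until marginal value equals the price,
# v(q) = p(q)  =>  c - d*q = a + b*q
q_comp = (c - a) / (b + d)        # 9.0 units
p_comp = a + b * q_comp           # $11.00 per unit to suppliers

# Monopsonist: buying one more unit raises the price on *every* unit,
# so it equates marginal value with marginal outlay instead.
# Outlay is p(q)*q = a*q + b*q**2, so marginal outlay = a + 2*b*q.
q_mono = (c - a) / (2 * b + d)    # 6.0 units
p_mono = a + b * q_mono           # $8.00 per unit to suppliers

print(f"competitive market: q = {q_comp:.1f}, supplier price = ${p_comp:.2f}")
print(f"monopsony:          q = {q_mono:.1f}, supplier price = ${p_mono:.2f}")
# With these numbers the monopsonist buys a third less and pays about
# 27% less per unit -- the squeeze on suppliers' returns Lynn describes.
```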


Your job? Waiting in line for others.

From Brian Montopoli’s “The Queue Crew: Waiting in line for a living” (Legal Affairs: January/February 2004):

ON CAPITOL HILL, a placeholder is someone paid by the hour to wait in line. When legislative committees hold hearings, they reserve seats for Congressional staffers, for the press, and for the general public. The general-public seats are the only ones available to the so-called influence peddlers, the Washington lawyers and lobbyists whose livelihood depends on their ability to influence legislation. These seats are first come, first served, which is where the placeholders (also called “stand-ins” or “linestanders”) come in. Since most lobbyists and lawyers seeking to rub shoulders with lawmakers don’t have time to wait in line themselves, they pay others to do it for them.

Rather than use an independent contractor, most influence peddlers secure placeholders through one of the two companies that control about 80 percent of the market: Congressional Services Company and the CVK Group, both of which have rosters of on-call placeholders at the ready. Most of the time, placeholders are asked to wait for just a few hours, often arriving around 5 a.m. to wait for hearings scheduled for 10 a.m. If seats are in great demand, however, placeholders can be asked to get in line several days in advance. Congressional Services charges its clients $32 to $40 per hour for each placeholder, and the placeholders themselves make $10 to $15 an hour. …

For the sake of logistics and appearances, the lines usually form outdoors and stay there until a few hours before a hearing. …

Today, however, most placeholders are not nimble students out to earn a little spending money but older men and women trying to make ends meet. Jim Keegan is one of the “Van Gogh veterans,” a group of placeholders discovered by Congressional Services in 1998 when they were standing in line to get coveted free tickets to the Van Gogh exhibit at the National Gallery of Art. …

Now, he said, he has time to pursue his interests and get paid. “I’ll probably make $2,000 to $3,000 in a good month,” he said. “That’s more than I made at my old job.”

There is a collegial atmosphere among the placeholders – if you leave to go get something to eat, you aren’t going to lose your spot – but simple tasks like going to the bathroom present challenges. During the day, placeholders can go into the Rayburn Building, but after hours they have to make their way over to the public bathrooms at Union Station. Getting sleep is also a problem. Since the lines form on public sidewalks, placeholders are technically not allowed to sit down, and though the Capitol Hill police often ignore them, there are evenings when an overzealous officer will repeatedly wake them up and tell them to stand. …

Once, a group upset over banking regulations brought busloads of protesters to a hearing, only to discover that they wouldn’t be able to get in, thanks to the placeholders. A scuffle ensued, but the placeholders held their ground.

In general, however, most staffers and politicians don’t even notice the placeholders they pass on their way to work. …

Since hearings can be rescheduled or closed to the public at the last minute, the placeholding services insist on getting paid regardless of whether their clients succeed in getting in. Keegan and Herzog’s long wait, for example, ended before they could pass along their spots to their clients: The housing hearing was cancelled because of partisan infighting, and after two days and 20 hours of waiting, the placeholders were sent home on Tuesday at 6:30 p.m.

The next morning, however, after showers and a change of clothes, many of them were back, this time to wait for a healthcare hearing before the Commerce Committee. When I arrived at the Rayburn Building at 9 a.m., over 70 people were waiting to get into the hearing, and by 10, when it was scheduled to start, there were more than 200. The line began around the corner from the hearing room and snaked past elevator banks and Congressional offices. At the front were mostly placeholders, among them a bored-looking young man with red sneakers and a hat worn sideways and a woman in her late 30s wearing a frayed sweatshirt that read “OJ SIMPSON: JUICE ON THE LOOSE.” …

Thirty minutes before the hearing began, the clients started showing up. The placeholders were identified by placards or by assistant managers who worked the line. A bald white man in his 40s with a yellow tie and an expensive suit took his spot and thanked his placeholder. (Congressional rules prohibit tipping.)


Feral cities of the future

From Richard J. Norton’s “Feral cities – The New Strategic Environment” (Naval War College Review: Autumn, 2003):

Imagine a great metropolis covering hundreds of square miles. Once a vital component in a national economy, this sprawling urban environment is now a vast collection of blighted buildings, an immense petri dish of both ancient and new diseases, a territory where the rule of law has long been replaced by near anarchy in which the only security available is that which is attained through brute power. Such cities have been routinely imagined in apocalyptic movies and in certain science-fiction genres, where they are often portrayed as gigantic versions of T. S. Eliot’s Rat’s Alley. Yet this city would still be globally connected. It would possess at least a modicum of commercial linkages, and some of its inhabitants would have access to the world’s most modern communication and computing technologies. It would, in effect, be a feral city.

The putative “feral city” is (or would be) a metropolis with a population of more than a million people in a state whose government has lost the ability to maintain the rule of law within the city’s boundaries yet remains a functioning actor in the greater international system.

In a feral city social services are all but nonexistent, and the vast majority of the city’s occupants have no access to even the most basic health or security assistance. There is no social safety net. Human security is for the most part a matter of individual initiative. Yet a feral city does not descend into complete, random chaos. Some elements, be they criminals, armed resistance groups, clans, tribes, or neighborhood associations, exert various degrees of control over portions of the city. Intercity, city-state, and even international commercial transactions occur, but corruption, avarice, and violence are their hallmarks. A feral city experiences massive levels of disease and creates enough pollution to qualify as an international environmental disaster zone. Most feral cities would suffer from massive urban hypertrophy, covering vast expanses of land. The city’s structures range from once-great buildings symbolic of state power to the meanest shantytowns and slums. Yet even under these conditions, these cities continue to grow, and the majority of occupants do not voluntarily leave.

Feral cities would exert an almost magnetic influence on terrorist organizations. Such megalopolises will provide exceptionally safe havens for armed resistance groups, especially those having cultural affinity with at least one sizable segment of the city’s population. The efficacy and portability of the most modern computing and communication systems allow the activities of a worldwide terrorist, criminal, or predatory and corrupt commercial network to be coordinated and directed with equipment easily obtained on the open market and packed into a minivan. The vast size of a feral city, with its buildings, other structures, and subterranean spaces, would offer nearly perfect protection from overhead sensors, whether satellites or unmanned aerial vehicles. The city’s population represents for such entities a ready source of recruits and a built-in intelligence network. Collecting human intelligence against them in this environment is likely to be a daunting task. Should the city contain airport or seaport facilities, such an organization would be able to import and export a variety of items. The feral city environment will actually make it easier for an armed resistance group that does not already have connections with criminal organizations to make them. The linkage between such groups, once thought to be rather unlikely, is now so commonplace as to elicit no comment.


Secret movies in the Paris underground

From Jon Henley’s “In a secret Paris cavern, the real underground cinema” (The Guardian: 8 September 2004):

Police in Paris have discovered a fully equipped cinema-cum-restaurant in a large and previously uncharted cavern underneath the capital’s chic 16th arrondissement. Officers admit they are at a loss to know who built or used one of Paris’s most intriguing recent discoveries. “We have no idea whatsoever,” a police spokesman said. …

Members of the force’s sports squad, responsible – among other tasks – for policing the 170 miles of tunnels, caves, galleries and catacombs that underlie large parts of Paris, stumbled on the complex while on a training exercise beneath the Palais de Chaillot, across the Seine from the Eiffel Tower.

After entering the network through a drain next to the Trocadero, the officers came across a tarpaulin marked: “Building site, No access.”

Behind that, a tunnel held a desk and a closed-circuit TV camera set to automatically record images of anyone passing. The mechanism also triggered a tape of dogs barking, “clearly designed to frighten people off,” the spokesman said.

Further along, the tunnel opened into a vast 400 sq metre cave some 18m underground, “like an underground amphitheatre, with terraces cut into the rock and chairs”.

There the police found a full-sized cinema screen, projection equipment, and tapes of a wide variety of films, including 1950s film noir classics and more recent thrillers. None of the films were banned or even offensive, the spokesman said.

A smaller cave next door had been turned into an informal restaurant and bar. “There were bottles of whisky and other spirits behind a bar, tables and chairs, a pressure-cooker for making couscous,” the spokesman said.

“The whole thing ran off a professionally installed electricity system and there were at least three phone lines down there.”

Three days later, when the police returned accompanied by experts from the French electricity board to see where the power was coming from, the phone and electricity lines had been cut and a note was lying in the middle of the floor: “Do not,” it said, “try to find us.” …

There exist, however, several secretive bands of so-called cataphiles, who gain access to the tunnels mainly after dark, through drains and ventilation shafts, and hold what in the popular imagination have become drunken orgies but are, by all accounts, innocent underground picnics.

… the Perforating Mexicans, last night told French radio the subterranean cinema was its work.

Film noir in the Parisian catacombs. Secret bars and telephones. Scuttling down drains for secret assignations. “Do not try to find us.” I’m swooning just thinking about it!
