Some reasons why America hasn’t been attacked since 9/11

[Image: The World Trade Center after the 9/11 attacks (via Wikipedia)]

From Timothy Noah’s “Why No More 9/11s?: An interactive inquiry about why America hasn’t been attacked again” (Slate: 5 March 2009):

… I spent the Obama transition asking various terrorism experts why the dire predictions of a 9/11 sequel proved untrue and reviewing the literature on this question. The answers boiled down to eight prevailing theories whose implications range from fairly reassuring to deeply worrying.

I. The Terrorists-Are-Dumb Theory

“Acts of terrorism almost never appear to accomplish anything politically significant,” prominent game theorist Thomas C. Schelling observed nearly two decades ago. Max Abrahms, a pre-doctoral fellow at Stanford’s Center for International Security and Cooperation, reaffirmed that conclusion in a 2006 paper for International Security titled, “Why Terrorism Does Not Work.” Abrahms researched 28 groups designated “foreign terrorist organizations” by the U.S. State Department since 2001, identifying among them a total of 42 objectives. The groups achieved those objectives only 7 percent of the time, Abrahms concluded, and the key variable for success was whether they targeted civilians. Groups that attacked civilian targets more often than military ones “systematically failed to achieve their policy objectives.”

In a 2008 follow-up essay, “What Terrorists Really Want,” Abrahms explained that terrorist groups are typically incapable of maintaining a consistent set of strategic goals, much less achieving them. Then why do they become terrorists? To “develop strong affective ties with fellow terrorists.” It’s fraternal bonds they want, not territory, nor influence, nor even, in most cases, to affirm religious beliefs. If a terrorist group’s demands tend to sound improvised, that’s because they are improvised; what really matters to its members—even its leaders—is that they are a band of brothers. Marc Sageman, a forensic psychiatrist and former Central Intelligence Agency case officer in Afghanistan, collected the biographies of 400 terrorists who’d targeted the United States. He found that fully 88 percent became terrorists not because they wanted to change the world but because they had “friendship/family bonds to the jihad.” Among the 400, Sageman found only four who had “any hint of a [psychological] disorder,” a lower incidence than in the general population. Think the Elks, only more lethal. Cut off from al-Qaida’s top leadership, they are plenty dangerous, but not nearly as task-oriented as we imagine them to be.

II. The Near-Enemy Theory

Jihadis speak of the “near enemy” (apostate regimes in and around the Middle East) and the “far enemy” (the United States and the West generally). The man credited with coining these terms, Mohammed Abd al-Salam Faraj, did so largely to emphasize that it was much more important to attack the near enemy, a principle he upheld by organizing the 1981 assassination of Egyptian President Anwar Sadat. (The Egyptian government affirmed the same principle in executing Faraj.) In 1993, a militant Egyptian group called al-Gama’a al-Islamiyya (“the Islamic Group”), which had extensive ties to al-Qaida, broke with the “near enemy” strategy and bombed the World Trade Center. In 1996, al-Qaida followed suit and formally turned its attention to the far enemy. But according to Fawaz A. Gerges, an international affairs professor at Sarah Lawrence and author of The Far Enemy: Why Jihad Went Global, other jihadist groups around the world never really bought into this shift in priorities. Even al-Gama’a al-Islamiyya had by late 1999 declared a cease-fire, a move that outraged its incarcerated spiritual leader, Omar Abdel-Rahman (“the blind sheikh”), and caused the group to splinter. With the 9/11 attacks, Bin Laden hoped to rally jihadis outside al-Qaida’s orbit to join the battle against the far enemy. Instead, he scared them off.

III. The Melting-Pot Theory

In the absence of other evidence, we must conclude that inside the United States, homegrown, al-Qaida-inspired terrorist conspiracy-mongering seldom advances very far.

That record stands in stark contrast to that of the United Kingdom, which since 9/11 has incubated several very serious terrorism plots inspired or directed by al-Qaida. … Even when it isn’t linked directly to terrorism, Muslim radicalism seems more prevalent—and certainly more visible—inside the United Kingdom, and in Western Europe generally, than it is inside the United States.

Why the difference? Economics may be one reason. American Muslims are better-educated and wealthier than the average American. In Europe, they are poorer and less well-educated than the rest of the population—in Germany, only about 10 percent of the Turkish population attends college. The United States has assimilated Muslims into its society more successfully than Western Europe—and over a longer period. Arabs began migrating to the United States in great numbers during the second half of the 19th century. Western Europe’s Arab migration didn’t start until after World War II, when many arrived as guest workers. In Germany and France, a great many Muslims live in housing projects segregated from the rest of the population. In the United States, Muslims are dispersed more widely. An exception would be Detroit, which has a large Muslim community but not an impoverished one.

The relative dearth of Islamist radicalism in the United States is at least as much a function of American demographics as it is of American exceptionalism. Muslims simply loom smaller in the U.S. population than they do in the populations of many Western European countries. Muslims account for roughly 3 percent of the population in the United Kingdom, 4 percent in Germany, and 9 percent in France. In the United States, they’re closer to 1 percent and are spread over a much larger geographic area. As both immigrants and descendants of immigrants, Muslims are far outnumbered in the United States by Latinos. It’s quite different in Western Europe. Muslims represent the largest single immigrant group in France, Germany, Belgium, the Netherlands (where they constitute a majority of all immigrants), and the United Kingdom (where they constitute a plurality of all immigrants).

Somewhere between one-quarter and one-half of U.S. Muslims are African-American. Historically, American-born black Muslims have felt little kinship with Arab and foreign-born Muslims, and while al-Qaida has sought to recruit black Muslims, “there’s no sign” they’ve met with any success, according to Laurence. … Among foreign-born Muslims in the United States, nearly one-quarter are Shiite—many of them refugees from the 1979 Iranian revolution—and therefore harbor little sympathy for al-Qaida’s Sunni following. Europe’s Muslim population, by contrast, is overwhelmingly Sunni, hailing typically in France from Algeria and Morocco; in Germany from Turkey; and in the United Kingdom from Pakistan and the subcontinent.

All right, then. American Muslims are disinclined to commit acts of terror inside the United States. Why don’t American non-Muslims pick up the slack?

Actually, they do. In April 1995 Timothy McVeigh and Terry Nichols bombed a federal building in Oklahoma City, killing 168 people and injuring 500 more. In April 1996, Ted Kaczynski, the “Unabomber,” was arrested for killing three people and wounding 22 others. In July 1996, a former Army explosives expert named Eric Rudolph set off a bomb at the Olympics in Atlanta, killing one person and injuring 111; later, he set off bombs at two abortion clinics and a nightclub frequented by gay men and women, killing a security guard and injuring 12 others. In September and October 2001, somebody sent anthrax spores to media outlets and government offices, killing five people. The FBI believes it was an Army scientist named Bruce Ivins, who killed himself as the investigation closed in on him. These are just the incidents everybody’s heard of. The point is that domestic terrorism inside the United States is fairly routine. The FBI counted 24 terror incidents inside the United States between 2002 and 2005; all but one were committed by American citizens.

IV. The Burden-Of-Success Theory

In fact, the likelihood of nuclear terrorism isn’t that great. Mueller points out that Russian “suitcase bombs,” which figure prominently in discussions about “loose nukes,” were all built before 1991 and ceased being operable after three years. Enriched uranium is extremely difficult to acquire; over the past decade, Mueller argues, there were only 10 known thefts. The material stolen weighed a combined 16 pounds, which was nowhere near the amount needed to build a bomb. Once the uranium is acquired, building the weapon is simple in theory (anti-nuclear activist Howard Morland published a famous 1979 article about this in the Progressive) but quite difficult in practice, which is why entire countries have had to work decades to acquire the bomb, only sometimes meeting with success. (Plutonium, another fissile material, is sufficiently dangerous and difficult to transport that nonproliferation experts seldom discuss it.)

V. The Flypaper Theory

The 9/11 attacks led to a U.S. invasion of Afghanistan, whose Taliban regime was sheltering al-Qaida. That made sense. Then it led to a U.S. invasion of Iraq. That made no sense. The Bush administration claimed that Iraq’s Saddam Hussein had close ties to al-Qaida. This was based on:

a) allegations made by an American Enterprise Institute scholar named Laurie Mylroie, later discredited;

b) an al-Qaida captive’s confession under threat of torture to Egyptian authorities, later retracted;

c) a false report from Czech intelligence about a Prague meeting between the lead 9/11 hijacker, Mohamed Atta, and an Iraqi intelligence agent;

d) Defense Secretary Donald Rumsfeld’s zany complaint at a Sept. 12, 2001, White House meeting that “there aren’t any good targets in Afghanistan, and there are lots of good targets in Iraq”;

and

e) certain Oedipal preoccupations of President George W. Bush.

VI. The He-Kept-Us-Safe Theory

A White House fact sheet specifies six terror plots “prevented in the United States” on Bush’s watch:

  • an attempt to bomb fuel tanks at JFK airport,
  • a plot to blow up airliners bound for the East Coast,
  • a plan to destroy the tallest skyscraper in Los Angeles,
  • a plot by six al-Qaida-inspired individuals to kill soldiers at Fort Dix Army Base in New Jersey,
  • a plan to attack a Chicago-area shopping mall using grenades,
  • a plot to attack the Sears Tower in Chicago.

The Bush administration deserves at least some credit in each of these instances, but a few qualifications are in order. The most serious terror plot listed was the scheme to blow up airliners headed for the East Coast. That conspiracy, halted in its advanced stages, is why you aren’t allowed to carry liquids and gels onto a plane. As noted in “The Melting-Pot Theory,” it originated in the United Kingdom, which took the lead in the investigation. (The undercover agent who infiltrated the terror group was British.) We also learned in “The Melting-Pot Theory” that the plan to bring down the Sears Tower was termed by the Federal Bureau of Investigation’s deputy director “more aspirational than operational” and that the prosecution ended in a mistrial.

The JFK plot was unrelated to al-Qaida and so technically infeasible that the New York Times, the airport’s hometown newspaper, buried the story on Page A37. The attack on the Library Tower in Los Angeles was planned in October 2001 by 9/11’s architect, Khalid Sheikh Mohammed, who recruited volunteers from South Asia to fly a commercial jetliner into the building. But Michael Scheuer, a veteran al-Qaida expert who was working at the Central Intelligence Agency in 2002, when the arrests were made, told the Voice of America that he never heard about them, and a U.S. government official told the Los Angeles Times that the plot never approached the operational stage. Moreover, as the story of United Flight 93 demonstrated, the tactic of flying passenger planes into buildings—which depended on passengers not conceiving of that possibility—didn’t remain viable even through the morning of 9/11 (“Let’s roll”).

The Fort Dix plot was inspired by, but not directed by, al-Qaida. The five Muslim conspirators from New Jersey, convicted on conspiracy charges in December, watched jihadi videos. They were then foolish enough not only to make one of their own but to bring the tape to Circuit City for transfer to DVD. A teenage clerk tipped off the FBI, which infiltrated the group, sold them automatic weapons, and busted them. The attempted grenade attack on the CherryVale Mall in suburban Chicago was similarly inspired but not directed by al-Qaida. In this instance, the conspirators numbered only two, one of whom was an FBI informant. The other guy was arrested when an undercover FBI agent accepted his offer to trade two stereo speakers for four grenades and a gun. He is now serving a life sentence.

VIII. The Time-Space Theory

The RAND Corp. is headquartered in a blindingly white temple of reason a few blocks from the Pacific Ocean in Santa Monica, Calif. It was here—or rather, next door, in the boxy international-style offices it inhabited for half a century before moving four years ago into a new $100 million structure—that America’s Cold War nuclear strategy of “mutual assured destruction” was dreamed up. Also, the Internet. Created by the Air Force in 1948, the nonprofit RAND would “invent a whole new language in [its] quest for rationality,” Slate’s Fred Kaplan wrote in his 1983 book The Wizards of Armageddon.

RAND is the cradle of rational-choice theory, a rigorously utilitarian mode of thought with applications to virtually every field of social science. Under rational-choice theory, belief systems, historical circumstances, cultural influences, and other nonrational filigree must be removed from consideration in calculating the dynamics of human behavior. There exists only the rational and orderly pursuit of self-interest. It is the religion that governs RAND. …

Lakdawalla and RAND economist Claude Berrebi are co-authors of “How Does Terrorism Risk Vary Across Space and Time?” a 2007 paper.

One goal inherent in the 9/11 attacks was to do harm to the United States. In “The Terrorists-Are-Dumb Theory” and “The Melting-Pot Theory,” we reviewed the considerable harm that the furious U.S. response to 9/11 caused al-Qaida. But that response harmed the United States, too. Nearly 5,000 U.S. troops have died in Iraq and Afghanistan, and more than 15,000 have come home wounded. More than 90,000 Iraqi civilians have been killed and perhaps as many as 10,000 Afghan civilians; in Afghanistan, where fighting has intensified, more than 2,000 civilians died just in the past year. “In Muslim nations, the wars in Afghanistan and particularly Iraq have driven negative ratings [of the United States] nearly off the charts,” the Pew Global Attitudes Project reported in December. Gallup polls conducted between 2006 and 2008 found approval ratings for the U.S. government at 15 percent in the Middle East, 23 percent in Europe, and 34 percent in Asia. To be sure, civilian casualties have harmed al-Qaida’s standing, too, as I noted in “The Terrorists-Are-Dumb Theory.” But to whatever extent al-Qaida hoped to reduce the United States’ standing in the world, and especially in the Middle East: Mission accomplished.

Rational-choice theory is most at home with economics, and here the costs are more straightforward. In March 2008, the Nobel Prize-winning economist Joseph Stiglitz, and Linda Bilmes of Harvard’s Kennedy School of Government, put the Iraq war’s cost at $3 trillion. In October 2008, the Congressional Research Service calculated, more conservatively, an additional $107 billion for the Afghanistan war and another $28 billion for enhanced homeland security since 9/11. According to CRS, for every soldier the United States deploys in Iraq or Afghanistan, the taxpayer spends $390,000. Let me put that another way. Sending a single soldier to Iraq or Afghanistan costs the United States nearly as much as the estimated $500,000 it cost al-Qaida to conduct the entire 9/11 operation. Not a bad return on Bin Laden’s investment, Berrebi says. President Bush left office with a budget deficit of nearly $500 billion, and that’s before most of the deficit spending that most economists think will be required to avoid another Great Depression even begins.

Huck Finn caged

From Nicholas Carr’s “Sivilized” (Rough Type: 27 June 2009):

Michael Chabon, in an elegiac essay in the new edition of the New York Review of Books, rues the loss of the “Wilderness of Childhood” – the unparented, unfenced, only partially mapped territory that was once the scene of youth.

Huck Finn, now fully under the thumb of Miss Watson and the Widow Douglas, spends his unscheduled time wandering the fabricated landscapes of World of Warcraft, seeking adventure.

Various confidence scams, tricks, & frauds

From “List of confidence tricks” (Wikipedia: 3 July 2009):

Get-rich-quick schemes

Get-rich-quick schemes are extremely varied. For example, fake franchises, real estate “sure things”, get-rich-quick books, wealth-building seminars, self-help gurus, sure-fire inventions, useless products, chain letters, fortune tellers, quack doctors, miracle pharmaceuticals, Nigerian money scams, charms and talismans are all used to separate the mark from his money. Variations include the pyramid scheme, Ponzi scheme and Matrix sale.

Count Victor Lustig sold the “money-printing machine” which could copy $100 bills. The client, sensing huge profits, would buy the machines for a high price (usually over $30,000). Over the next twelve hours, the machine would produce just two more $100 bills, but after that it produced only blank paper, as its supply of hidden $100 bills would have become exhausted. This type of scheme is also called the “money box” scheme.

The wire game, as depicted in the movie The Sting, trades on the promise of insider knowledge to beat a gamble, stock trade or other monetary action. In the wire game, a “mob” composed of dozens of grifters simulates a “wire store”, i.e., a place where results from horse races are received by telegram and posted on a large board, while also being read aloud by an announcer. The griftee is given secret foreknowledge of the race results minutes before the race is broadcast, and is therefore able to place a sure bet at the wire store. In reality, of course, the con artists who set up the wire store are the providers of the inside information, and the mark eventually is led to place a large bet, thinking it to be a sure win. At this point, some mistake is made, which actually makes the bet a loss. …

Salting or to salt the mine are terms for a scam in which gems or gold ore are planted in a mine or on the landscape, duping the greedy mark into purchasing shares in a worthless or non-existent mining company.[2] During the Gold Rush, scammers would load shotguns with gold dust and shoot into the sides of the mine to give the appearance of a rich ore, thus “salting the mine”. …

The Spanish Prisoner scam – and its modern variant, the advance fee fraud or Nigerian scam – take advantage of the victim’s greed. The basic premise involves enlisting the mark to aid in retrieving some stolen money from its hiding place. The victim sometimes believes he can cheat the con artists out of their money, but anyone trying this has already fallen for the essential con by believing that the money is there to steal (see also Black money scam). …

Many conmen employ extra tricks to keep the victim from going to the police. A common ploy of investment scammers is to encourage a mark to use money concealed from tax authorities. The mark cannot go to the authorities without revealing that he or she has committed tax fraud. Many swindles involve a minor element of crime or some other misdeed. The mark is made to think that he or she will gain money by helping fraudsters get huge sums out of a country (the classic Nigerian scam); hence marks cannot go to the police without revealing that they planned to commit a crime themselves.

Gold brick scams

Gold brick scams involve selling a tangible item for more than it is worth; named after selling the victim an allegedly golden ingot which turns out to be gold-coated lead.

Pig-in-a-poke originated in the late Middle Ages. The con entails a sale of a (suckling) “pig” in a “poke” (bag). The bag ostensibly contains a live healthy little pig, but actually contains a cat (not particularly prized as a source of meat and, at any rate, quite unlikely to grow into a large hog). If one buys a “pig in a poke” without looking in the bag (the origin of the colloquial English expression meaning “to be a sucker”), one has bought something of less value than was assumed, and has learned firsthand the lesson caveat emptor.

The Thai gem scam involves layers of con men and helpers who tell a tourist in Bangkok of an opportunity to earn money by buying duty-free jewelry and having it shipped back to the tourist’s home country. The mark is driven around the city in a tuk-tuk operated by one of the con men, who ensures that the mark meets one helper after another, until the mark is persuaded to buy the jewelry from a store also operated by the swindlers. The gems are real but significantly overpriced. This scam has been operating for 20 years in Bangkok, and is said to be protected by Thai police and politicians. A similar scam usually runs in parallel for custom-made suits.

Extortion or false-injury tricks

The badger game extortion is often perpetrated on married men. The mark is deliberately lured into a compromising position (a supposed affair, for example), then threatened with public exposure of his acts unless blackmail money is paid.

The Melon Drop is a scam in which the scammer intentionally bumps into the mark and drops a package containing (already broken) glass. He blames the damage on the clumsiness of the mark and demands money in compensation. The con arose when con artists discovered that the Japanese paid large sums of money for watermelons: the scammer would buy a cheap watermelon at a supermarket, then bump into a Japanese tourist and demand a high price in compensation.

Gambling tricks

Three-card Monte, ‘Find The Queen’, the “Three-card Trick”, or “Follow The Lady”, is (except for the props) essentially the same as the probably centuries-older shell game or thimblerig. The trickster shows three playing cards to the audience, one of which is a queen (the “lady”), then places the cards face-down, shuffles them around and invites the audience to bet on which one is the queen. At first the audience is skeptical, so the shill places a bet and the scammer allows him to win. In one variation of the game, the shill will (apparently surreptitiously) peek at the lady, ensuring that the mark also sees the card. This is sometimes enough to entice the audience to place bets, but the trickster uses sleight of hand to ensure that they always lose, unless the conman decides to let them win, hoping to lure them into betting much more. The mark loses whenever the dealer chooses to make him lose. This con appears in the Eric Garcia novel Matchstick Men and is featured in the movie Edmond.

A variation on this scam exists in Barcelona, Spain, but with the addition of a pickpocket. The dealer and shill behave in an overtly obvious manner, attracting a larger audience. When the pickpocket succeeds in stealing from a member of the audience, he signals the dealer. The dealer then shouts the word “agua”, and the three split up. The audience is left believing that “agua” is a code word indicating the police are coming, and that the performance was a failed scam.

In the Football Picks Scam the scammer sends out a tip sheet stating that a game will go one way to 100 potential victims and the other way to another 100. The next week, the 100 or so who received the correct answer are divided into two groups and fed another pick. This is repeated until a small population has (apparently) received a series of supernaturally perfect picks, at which point the final pick is offered for sale. Despite being well-known (it was even described completely on an episode of The Simpsons and used by Derren Brown in “The System”), this scam is run almost continuously in different forms by different operators. The sports picks can also be replaced with securities, or any other random process, in an alternative form. This scam has also been called the inverted pyramid scheme, because of the steadily decreasing population of victims at each stage.
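
The halving arithmetic is easy to check. A minimal sketch in Python (the starting number here is hypothetical, chosen larger than the 200 recipients above so that a usable pool of marks survives five rounds):

  # Each week, half the remaining recipients are told one outcome and half
  # the other; whichever way the game goes, the half holding the losing
  # pick is simply dropped from the mailing list.
  recipients = 3200
  for week in range(1, 6):
      recipients //= 2
      print(f"week {week}: {recipients} marks have seen {week} straight correct picks")

  # 100 marks remain, each having watched five consecutive "perfect"
  # predictions; the sixth pick is the one offered for sale.

No prediction is ever made: the scammer simply discards the losing half each week, which is why the picks can be sports scores, securities, or any other random process.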

Visitors to Las Vegas or other gambling towns often encounter the Barred Winner scam, a form of advance fee fraud performed in person. The artist will approach his mark outside a casino with a stack or bag of high-value casino chips and say that he just won big, but the casino accused him of cheating and threw him out without letting him redeem the chips. The artist asks the mark to go in and cash the chips for him. The artist will often offer a percentage of the winnings to the mark for his trouble. But when the mark agrees, the artist feigns suspicion and asks the mark to put up something of value “for insurance”. The mark agrees, hands over jewelry, a credit card or his wallet, then goes in to cash the chips. When the mark arrives at the cashier, he is informed the chips are fake. The artist, by this time, is long gone with the mark’s valuables.

False reward tricks

The glim-dropper requires several accomplices, one of whom must be a one-eyed man. One grifter goes into a store and pretends he has lost his glass eye. Everyone looks around, but the eye cannot be found. He declares that he will pay a thousand-dollar reward for the return of his eye, leaving contact information. The next day, an accomplice enters the store and pretends to find the eye. The storekeeper (the intended griftee), thinking of the reward, offers to take it and return it to its owner. The finder insists he will return it himself, and demands the owner’s address. Thinking he will lose all chance of the reward, the storekeeper offers a hundred dollars for the eye. The finder bargains him up to $250, and departs. …

The fiddle game uses the pigeon drop technique. A pair of con men work together, one going into an expensive restaurant in shabby clothes, eating, and claiming to have left his wallet at home, which is nearby. As collateral, the con man leaves his only worldly possession, the violin that provides his livelihood. After he leaves, the second con man swoops in, offers an outrageously large amount (for example $50,000) for such a rare instrument, then looks at his watch and runs off to an appointment, leaving his card for the mark to call him when the fiddle-owner returns. The mark’s greed comes into play when the “poor man” comes back, having gotten the money to pay for his meal and redeem his violin. The mark, thinking he has an offer on the table, then buys the violin from the fiddle player (who “reluctantly” sells it eventually for, say, $5,000). The result is the two conmen are $5,000 richer (less the cost of the violin), and the mark is left with a cheap instrument.

Other confidence tricks and techniques

The Landlord Scam advertises an apartment for rent at an attractive price. The con artist, usually someone who is house-sitting or has a short-term sublet at the unit, takes a deposit and first/last month’s rent from every person who views the suite. When move-in day arrives, the con artist is of course gone, and the apartment belongs to none of the angry people carrying boxes.

Change raising is a common short con and involves an offer to change an amount of money with someone, while at the same time taking change or bills back and forth to confuse the person as to how much money is actually being changed. The most common form, “the Short Count”, has been featured prominently in several movies about grifting, notably Nueve Reinas, The Grifters and Paper Moon. A con artist shopping at, say, a gas station is given 80 cents in change because he lacks two dimes to complete the sale (say the sale cost is $19.20 and the con man has a $20 bill). He goes out to his car and returns a short time later with 20 cents, saying that he has found the rest of the change to make a dollar, and asking for a bill so he will not have to carry coins. The confused store clerk agrees, exchanging a dollar bill for the 20 cents the con man returned. In essence, the mark makes change twice.
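
Tracing the cash makes the “change twice” remark concrete. A small ledger sketch in Python, using the amounts from the example above:

  # Positive amounts are cash into the till; negative amounts are cash out.
  steps = [
      ("con man pays for a $19.20 sale with a $20 bill", 20.00),
      ("clerk hands back 80 cents in change", -0.80),
      ("con man returns with 20 cents he 'found'", 0.20),
      ("confused clerk swaps a dollar bill for it", -1.00),
  ]

  till = 0.0
  for description, amount in steps:
      till += amount
      print(f"{description}: {amount:+.2f}")

  # The till holds $18.40 against $19.20 of goods sold: it is 80 cents
  # short, because the clerk made change for the same 80 cents twice.
  print(f"cash taken in: ${till:.2f} for $19.20 of goods")

The honest version of the final swap would have the clerk take back the 80 cents as well as the 20 cents before handing over the dollar bill; the con lives entirely in that skipped step.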

The Beijing tea scam is a famous scam in and around Beijing. The artists (usually female and working in pairs) will approach tourists and try to make friends. After chatting, they will suggest a trip to see a tea ceremony, claiming that they have never been to one before. The tourist is never shown a menu, but assumes that this is how things are done in China. After the ceremony, the bill is presented to the tourist, charging upwards of $100 per head. The artists will then hand over their bills, and the tourists are obliged to follow suit.

The future of news as shown by the 2008 election

From Steven Berlin Johnson’s “Old Growth Media And The Future Of News” (StevenBerlinJohnson.com: 14 March 2009):

The first Presidential election that I followed in an obsessive way was the 1992 election that Clinton won. I was as compulsive a news junkie about that campaign as I was about the Mac in college: every day the Times would have a handful of stories about the campaign stops or debates or latest polls. Every night I would dutifully tune into Crossfire to hear what the punditocracy had to say about the day’s events. I read Newsweek and Time and the New Republic, and scoured the New Yorker for its occasional political pieces. When the debates aired, I’d watch religiously and stay up late soaking in the commentary from the assembled experts.

That was hardly a desert, to be sure. But compare it to the information channels that were available to me following the 2008 election. Everything I relied on in 1992 was still around of course – except for the late, lamented Crossfire – but it was now part of a vast new forest of news, data, opinion, satire – and perhaps most importantly, direct experience. Sites like Talking Points Memo and Politico did extensive direct reporting. Daily Kos provided in-depth surveys and field reports on state races that the Times would never have had the ink to cover. Individual bloggers like Andrew Sullivan responded to each twist in the news cycle; HuffPo culled the most provocative opinion pieces from the rest of the blogosphere. Nate Silver at fivethirtyeight.com did meta-analysis of polling that blew away anything William Schneider dreamed of doing on CNN in 1992. When the economy imploded in September, I followed economist bloggers like Brad DeLong to get their expert take on the candidates’ responses to the crisis. (Yochai Benkler talks about this phenomenon of academics engaging with the news cycle in a smart response here.) I watched the debates with a thousand virtual friends live-Twittering alongside me on the couch. All this was filtered and remixed through the extraordinary political satire of Jon Stewart and Stephen Colbert, which I watched via viral clips on the Web as much as I watched on TV.

What’s more: the ecosystem of political news also included information coming directly from the candidates. Think about the Philadelphia race speech, arguably one of the two or three most important events in the whole campaign. Eight million people watched it on YouTube alone. Now, what would have happened to that speech had it been delivered in 1992? Would any of the networks have aired it in its entirety? Certainly not. It would have been reduced to a minute-long soundbite on the evening news. CNN probably would have aired it live, which might have meant that 500,000 people caught it. Fox News and MSNBC? They didn’t exist yet. A few serious newspapers might have reprinted it in its entirety, which might have added another million to the audience. Online perhaps someone would have uploaded a transcript to CompuServe or The Well, but that’s about the most we could have hoped for.

There is no question in my mind that the political news ecosystem of 2008 was far superior to that of 1992: I had more information about the state of the race, the tactics of both campaigns, the issues they were wrestling with, the mind of the electorate in different regions of the country. And I had more immediate access to the candidates themselves: their speeches and unscripted exchanges; their body language and position papers.

The old line on this new diversity was that it was fundamentally parasitic: bloggers were interesting, sure, but if the traditional news organizations went away, the bloggers would have nothing to write about, since most of what they did was link to professionally reported stories. Let me be clear: traditional news organizations were an important part of the 2008 ecosystem, no doubt about it. … But no reasonable observer of the political news ecosystem could describe all the new species as parasites on the traditional media. Imagine how many barrels of ink were purchased to print newspaper commentary on Obama’s San Francisco gaffe about people “clinging to their guns and religion.” But the original reporting on that quote didn’t come from the Times or the Journal; it came from a “citizen reporter” named Mayhill Fowler, part of the Off The Bus project sponsored by Jay Rosen’s Newassignment.net and The Huffington Post.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
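
In outline, this is what is now called a domain-generation algorithm (DGA). A minimal sketch of the idea in Python (illustrative only: Conficker's real generator, its seeding, and its string-building rules were its own, and every detail below is an assumption):

  import random
  from datetime import date

  TLDS = [".com", ".net", ".org", ".info", ".biz"]

  def daily_domains(day, count=250):
      # Seed the generator with the date, so that anyone running the same
      # code (in particular the worm's authors) can reproduce the exact
      # list of rendezvous addresses for any given day.
      rng = random.Random(day.toordinal())
      domains = []
      for _ in range(count):
          length = rng.randint(5, 11)  # plausible-looking name lengths
          name = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                         for _ in range(length))
          domains.append(name + rng.choice(TLDS))
      return domains

  # Infected machines try all 250 addresses each day; the authors need to
  # register only one of them to deliver new instructions.
  print(daily_domains(date(2009, 1, 15))[:5])

The upgraded strain described later in the piece used the same trick at a larger scale, drawing 500 names a day from a pool of 50,000 spread across 116 suffixes; that scale is what made wholesale blocking impractical.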

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

David Foster Wallace on postmodernism & waiting for the parents to come home

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

For me, the last few years of the postmodern era have seemed a bit like the way you feel when you’re in high school and your parents go on a trip, and you throw a party. You get all your friends over and throw this wild disgusting fabulous party. For a while it’s great, free and freeing, parental authority gone and overthrown, a cat’s-away-let’s-play Dionysian revel. But then time passes and the party gets louder and louder, and you run out of drugs, and nobody’s got any money for more drugs, and things get broken and spilled, and there’s a cigarette burn on the couch, and you’re the host and it’s your house too, and you gradually start wishing your parents would come back and restore some fucking order in your house. It’s not a perfect analogy, but the sense I get of my generation of writers and intellectuals or whatever is that it’s 3:00 A.M. and the couch has several burn-holes and somebody’s thrown up in the umbrella stand and we’re wishing the revel would end. The postmodern founders’ patricidal work was great, but patricide produces orphans, and no amount of revelry can make up for the fact that writers my age have been literary orphans throughout our formative years. We’re kind of wishing some parents would come back. And of course we’re uneasy about the fact that we wish they’d come back—I mean, what’s wrong with us? Are we total pussies? Is there something about authority and limits we actually need? And then the uneasiest feeling of all, as we start gradually to realize that parents in fact aren’t ever coming back—which means we’re going to have to be the parents.

David Foster Wallace on the importance of writing within formal constraints

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

You’re probably right about appreciating limits. The sixties’ movement in poetry to radical free verse, in fiction to radically experimental recursive forms—their legacy to my generation of would-be artists is at least an incentive to ask very seriously where literary art’s true relation to limits should be. We’ve seen that you can break any or all of the rules without getting laughed out of town, but we’ve also seen the toxicity that anarchy for its own sake can yield. It’s often useful to dispense with standard formulas, of course, but it’s just as often valuable and brave to see what can be done within a set of rules—which is why formal poetry’s so much more interesting to me than free verse. Maybe our touchstone now should be G. M. Hopkins, who made up his “own” set of formal constraints and then blew everyone’s footwear off from inside them. There’s something about free play within an ordered and disciplined structure that resonates for readers. And there’s something about complete caprice and flux that’s deadening.

David Foster Wallace on the problems with postmodern irony

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

Irony and cynicism were just what the U.S. hypocrisy of the fifties and sixties called for. That’s what made the early postmodernists great artists. The great thing about irony is that it splits things apart, gets up above them so we can see the flaws and hypocrisies and duplicities. The virtuous always triumph? Ward Cleaver is the prototypical fifties father? “Sure.” Sarcasm, parody, absurdism and irony are great ways to strip off stuff’s mask and show the unpleasant reality behind it. The problem is that once the rules of art are debunked, and once the unpleasant realities the irony diagnoses are revealed and diagnosed, “then” what do we do? Irony’s useful for debunking illusions, but most of the illusion-debunking in the U.S. has now been done and redone. Once everybody knows that equality of opportunity is bunk and Mike Brady’s bunk and Just Say No is bunk, now what do we do? All we seem to want to do is keep ridiculing the stuff. Postmodern irony and cynicism’s become an end in itself, a measure of hip sophistication and literary savvy. Few artists dare to try to talk about ways of working toward redeeming what’s wrong, because they’ll look sentimental and naive to all the weary ironists. Irony’s gone from liberating to enslaving. There’s some great essay somewhere that has a line about irony being the song of the prisoner who’s come to love his cage.

The problem is that, however misprised it’s been, what’s been passed down from the postmodern heyday is sarcasm, cynicism, a manic ennui, suspicion of all authority, suspicion of all constraints on conduct, and a terrible penchant for ironic diagnosis of unpleasantness instead of an ambition not just to diagnose and ridicule but to redeem. You’ve got to understand that this stuff has permeated the culture. It’s become our language; we’re so in it we don’t even see that it’s one perspective, one among many possible ways of seeing. Postmodern irony’s become our environment.

Mine fires that burn for 400 years

[Photo: Centralia - Where there’s smoke.. (CC photo credit: C. Young Photography)]

From Joshua Foer’s “Giant Burning Holes of the World” (Boing Boing: 16 June 2009):

… these sorts of mine fires can stay lit for a very long time. One burned in the city of Zwickau, Germany from 1476 to 1860. Another coal fire in Germany, at a place called Brennender Berg (Burning Mountain), has been smoking continually since 1688!

7 tools of propaganda

From Roger Ebert’s “The O’Reilly Procedure” (Roger Ebert’s Journal: 14 June 2009):

The seven propaganda devices include:

  • Name calling — giving something a bad label to make the audience reject it without examining the evidence;
  • Glittering generalities — the opposite of name calling;
  • Card stacking — the selective use of facts and half-truths;
  • Bandwagon — appeals to the desire, common to most of us, to follow the crowd;
  • Plain folks — an attempt to convince an audience that they, and their ideas, are “of the people”;
  • Transfer — carries over the authority, sanction and prestige of something we respect or despise to something the speaker would want us to accept; and
  • Testimonials — involving a respected (or disrespected) person endorsing or rejecting an idea or person.

The Uncanny Valley, art forgery, & love

[Photo: Apply new wax to old wood (CC photo credit: hans s)]

From Errol Morris’ “Bamboozling Ourselves (Part 2)” (The New York Times: 28 May 2009):

[Errol Morris:] The Uncanny Valley is a concept developed by the Japanese robot scientist Masahiro Mori. It concerns the design of humanoid robots. Mori’s theory is relatively simple. We tend to reject robots that look too much like people. Slight discrepancies and incongruities between what we look like and what they look like disturb us. The closer a robot resembles a human, the more critical we become, the more sensitive to slight discrepancies, variations, imperfections. However, if we go far enough away from the humanoid, then we much more readily accept the robot as being like us. This accounts for the success of so many movie robots — from R2-D2 to WALL-E. They act like humans but they don’t look like humans. There is a region of acceptability — the peaks around The Uncanny Valley, the zone of acceptability that includes completely human and sort of human but not too human. The existence of The Uncanny Valley also suggests that we are programmed by natural selection to scrutinize the behavior and appearance of others. Survival no doubt depends on such an innate ability.

EDWARD DOLNICK: [The art forger Van Meegeren] wants to avoid it. So his big challenge is he wants to paint a picture that other people are going to take as Vermeer, because Vermeer is a brand name, because Vermeer is going to bring him lots of money, if he can get away with it, but he can’t paint a Vermeer. He doesn’t have that skill. So how is he going to paint a picture that doesn’t look like a Vermeer, but that people are going to say, “Oh! It’s a Vermeer?” How’s he going to pull it off? It’s a tough challenge. Now here’s the point of The Uncanny Valley: as your imitation gets closer and closer to the real thing, people think, “Good, good, good!” — but then when it’s very close, when it’s within 1 percent or something, instead of focusing on the 99 percent that is done well, they focus on the 1 percent that you’re missing, and you’re in trouble. Big trouble.

Van Meegeren is trapped in the valley. If he tries for the close copy, an almost exact copy, he’s going to fall short. He’s going to look silly. So what he does instead is rely on the blanks in Vermeer’s career, because hardly anything is known about him; he’s like Shakespeare in that regard. He’ll take advantage of those blanks by inventing a whole new era in Vermeer’s career. No one knows what he was up to all this time. He’ll throw in some Vermeer touches, including a signature, so that people who look at it will be led to think, “Yes, this is a Vermeer.”

Van Meegeren was sometimes careful, other times astonishingly reckless. He could have passed certain tests. What was peculiar, and what was quite startling to me, is that it turned out that nobody ever did any scientific test on Van Meegeren, even the stuff that was available in his day, until after he confessed. And to this day, people hardly ever test pictures, even multi-million dollar ones. And I was so surprised by that that I kept asking, over and over again: why? Why would that be? Before you buy a house, you have someone go through it for termites and the rest. How could it be that when you’re going to lay out $10 million for a painting, you don’t test it beforehand? And the answer is that you don’t test it because, at the point of being about to buy it, you’re in love! You’ve found something. It’s going to be the high mark of your collection; it’s going to be the making of you as a collector. You finally found this great thing. It’s available, and you want it. You want it to be real. You don’t want to have someone let you down by telling you that the painting isn’t what you think it is. It’s like being newly in love. Everything is candlelight and wine. Nobody hires a private detective at that point. It’s only years down the road when things have gone wrong that you say, “What was I thinking? What’s going on here?” The collector and the forger are in cahoots. The forger wants the collector to snap it up, and the collector wants it to be real. You are on the same side. You think that it would be a game of chess or something, you against him. “Has he got the paint right?” “Has he got the canvas?” You’re going to make this checkmark and that checkmark to see if the painting measures up. But instead, both sides are rooting for this thing to be real. If it is real, then you’ve got a masterpiece. If it’s not real, then today is just like yesterday. You’re back where you started, still on the prowl.

David Foster Wallace on rock, the rise of mass media, & the generation gap

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

Rock music itself bores me, usually. The phenomenon of rock interests me, though, because its birth was part of the rise of popular media, which completely changed the ways the U.S. was unified and split. The mass media unified the country geographically for pretty much the first time. Rock helped change the fundamental splits in the U.S. from geographical splits to generational ones. Very few people I talk to understand what “generation gap” ‘s implications really were. Kids loved rock partly because their parents didn’t, and obversely. In a mass mediated nation, it’s no longer North vs. South. It’s under-thirty vs. over-thirty. I don’t think you can understand the sixties and Vietnam and love-ins and LSD and the whole era of patricidal rebellion that helped inspire early postmodern fiction’s whole “We’re-going-to-trash-your-Beaver Cleaver-plasticized-G.O.P.-image-of-life-in-America” attitude without understanding rock ‘n’ roll. Because rock was and is all about busting loose, exceeding limits, and limits are usually set by parents, ancestors, older authorities.

David Foster Wallace on the familiar & the strange

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

If you mean a post-industrial, mediated world, it’s inverted one of fiction’s big historical functions, that of providing data on distant cultures and persons. The first real generalization of human experience that novels tried to accomplish. If you lived in Bumfuck, Iowa, a hundred years ago and had no idea what life was like in India, good old Kipling goes over and presents it to you. … Well, but fiction’s presenting function for today’s reader has been reversed: since the whole global village is now presented as familiar, electronically immediate—satellites, microwaves, intrepid PBS anthropologists, Paul Simon’s Zulu back-ups—it’s almost like we need fiction writers to restore strange things’ ineluctable “strangeness,” to defamiliarize stuff, I guess you’d say.

… For our generation, the entire world seems to present itself as “familiar,” but since that’s of course an illusion in terms of anything really important about people, maybe any “realistic” fiction’s job is opposite what it used to be—no longer making the strange familiar but making the familiar strange again. It seems important to find ways of reminding ourselves that most “familiarity” is mediated and delusive.

Van Gogh on death

From Roger Ebert’s “Go gentle into that good night” (Roger Ebert’s Journal: 2 May 2009):

Van Gogh in Arles wrote this about death:

Looking at the stars always makes me dream, as simply as I dream over the black dots representing towns and villages on a map. Why, I ask myself, shouldn’t the shining dots of the sky be as accessible as the black dots on the map of France? Just as we take a train to get to Tarascon or Rouen, we take death to reach a star. We cannot get to a star while we are alive any more than we can take the train when we are dead. So to me it seems possible that cholera, tuberculosis and cancer are the celestial means of locomotion. Just as steamboats, buses and railways are the terrestrial means. To die quietly of old age would be to go there on foot.

Immortality, poetically

From Roger Ebert’s “Go gentle into that good night” (Roger Ebert’s Journal: 2 May 2009):

And there is Shakespeare, who came as close as any man to immortality. In my plans for life after death, I say, again with Whitman:

I bequeath myself to the dirt to grow from the grass I love,
If you want me again look for me under your boot-soles.

And with Will, the brother in Saul Bellow’s Herzog, I say: Look for me in the weather reports.

Steve Jobs on mediocrity & market share

From Steven Levy’s “OK, Mac, Make a Wish: Apple’s ‘computer for the rest of us’ is, insanely, 20” (Newsweek: 2 February 2004):

If that’s so, then why is the Mac market share, even after Apple’s recent revival, sputtering at a measly 5 percent? Jobs has a theory about that, too. Once a company devises a great product, he says, it has a monopoly in that realm, and concentrates less on innovation than protecting its turf. “The Mac user interface was a 10-year monopoly,” says Jobs. “Who ended up running the company? Sales guys. At the critical juncture in the late ’80s, when they should have gone for market share, they went for profits. They made obscene profits for several years. And their products became mediocre. And then their monopoly ended with Windows 95. They behaved like a monopoly, and it came back to bite them, which always happens.”

Fossils are the lucky ones

From Errol Morris’ “Whose Father Was He? (Part Five)” (The New York Times: 2 April 2009):

I had an opportunity to visit the fossil collections at the Museum of the Rockies in Bozeman, Montana. It was part of a dinosaur fossil-hunting trip with Jack Horner, the premier hunter of T-Rex skeletons. Downstairs in the lab, there was a Triceratops skull sitting on a table. I picked it up and inserted my finger into the brain cavity. (I had read all these stories about how small the Triceratops brain had to have been and I wanted to see for myself.) I said to Jack Horner, “To think that someday somebody will do that with my skull.” And he said, “You should be so lucky. It’s only the privileged few of us who get to be fossils.”

David Foster Wallace on leadership

From David Foster Wallace’s “The Weasel, Twelve Monkeys And The Shrub: Seven Days In The Life Of The Late, Great John McCain” (Rolling Stone: 13 April 2000):

The weird thing is that the word “leader” itself is cliché and boring, but when you come across somebody who actually is a real leader, that person isn’t cliché or boring at all; in fact he’s sort of the opposite of cliché and boring.

Obviously, a real leader isn’t just somebody who has ideas you agree with, nor is it just somebody you happen to think is a good guy. A real leader is somebody who, because of his own particular power and charisma and example, is able to inspire people, with “inspire” being used here in a serious and non-cliché way. A real leader can somehow get us to do certain things that deep down we think are good and want to be able to do but usually can’t get ourselves to do on our own. It’s a mysterious quality, hard to define, but we always know it when we see it, even as kids. You can probably remember seeing it in certain really great coaches, or teachers, or some extremely cool older kid you “looked up to” (interesting phrase) and wanted to be just like. Some of us remember seeing the quality as kids in a minister or rabbi, or a Scoutmaster, or a parent, or a friend’s parent, or a supervisor in a summer job. And yes, all these are “authority figures,” but it’s a special kind of authority. If you’ve ever spent time in the military, you know how incredibly easy it is to tell which of your superiors are real leaders and which aren’t, and how little rank has to do with it. A leader’s real “authority” is a power you voluntarily give him, and you grant him this authority not with resentment or resignation but happily; it feels right. Deep down, you almost always like how a real leader makes you feel, the way you find yourself working harder and pushing yourself and thinking in ways you couldn’t ever get to on your own.

Lincoln was, by all available evidence, a real leader, and Churchill, and Gandhi, and King. Teddy and Franklin Roosevelt, and de Gaulle, and certainly Marshall and maybe Eisenhower. (Of course Hitler was a real leader too, a very powerful one, so you have to watch out; all it is is a weird kind of power.)

Now you have to pay close attention to something that’s going to seem real obvious. There is a difference between a great leader and a great salesman. Because a salesman’s ultimate, overriding motivation is his own self-interest. If you buy what he’s selling, the salesman profits. So even though the salesman may have a very powerful, charismatic, admirable personality, and might even persuade you that buying really is in your interest (and it really might be) — still, a little part of you always knows that what the salesman’s ultimately after is something for himself. And this awareness is painful … although admittedly it’s a tiny pain, more like a twinge, and often unconscious. But if you’re subjected to enough great salesmen and sales pitches and marketing concepts for long enough — like from your earliest Saturday-morning cartoons, let’s say — it is only a matter of time before you start believing deep down that everything is sales and marketing, and that whenever somebody seems like they care about you or about some noble idea or cause, that person is a salesman and really ultimately doesn’t give a shit about you or some cause but really just wants something for himself.

Yes, this is simplistic. All politicians sell, always have. FDR and JFK and MLK and Gandhi were great salesmen. But that’s not all they were. People could smell it. That weird little extra something. It had to do with “character” (which, yes, is also a cliché — suck it up).

Why did Thomas Jefferson bring a stuffed moose to France?

From David G. Post’s “Jefferson’s Moose” (Remarks presented at the Stanford Law School Conference on Privacy in Cyberspace: 7 February 2000):

In 1787, Jefferson, then the American Minister to France, had the “complete skeleton, skin & horns of the Moose” shipped to him in Paris and mounted in the lobby of his hotel. One can only imagine the comments made by bemused onlookers and hotel staff.

This was no small undertaking at that time — I suppose it would be no small undertaking even today. It’s not as if he had no other things to do with his time or his money. It’s worth asking: Why did he do it? What could have possessed him?

He wanted, first, to shock. He wanted his French friends to stand back, to gasp, and to say: There really is a new world out there, one that has things in it that we can hardly imagine. He wanted them to have what Lessig called an “aha! moment” in regard to the New World from out of which Jefferson (and his moose) had emerged.

But there was another, more specific, purpose. He wanted to show them that this new world was not a degenerate place. The Comte de Buffon, probably the most celebrated naturalist of the late 18th Century, had propounded just such a theory about the degeneracy of life in the New World. Jefferson described Buffon’s theory this way:

“That the animals common both to the old and new world, are smaller in the latter; that those peculiar to the new, are on a smaller scale; that those which have been domesticated in both, have degenerated in America; and that on the whole the New World exhibits fewer species.”

Though it may be hard to appreciate from our more enlightened 21st-century perspective, this was deadly serious stuff — both as science and, more to our point here, as politics; to Jefferson, Buffon’s theory had ominous political implications, for it was, as he put it, “within one step” of the notion that man, too, would degenerate in the New World. Thus, it could and did give a kind of intellectual cover to the notion that man in the New World could not be trusted to govern himself.

Sometimes a picture — or, better yet, a carcass — is worth a thousand words. So out comes the moose; larger than its European counterparts (the reindeer and caribou), its brooding presence in downtown Paris would surely make observers think twice about Buffon’s theory. Jefferson was no fool; he knew full well that one data point does not settle the argument, and he would provide, in his “Notes on the State of Virginia,” a detailed refutation of Buffon’s charge, page after page of careful analysis of the relative sizes of American and European animals.

Totalitarian regimes adopt the trappings of religion for themselves

From Steven Weinberg’s “Without God” (The New York Review of Books: 25 September 2008):

It has often been noted that the greatest horrors of the twentieth century were perpetrated by regimes – Hitler’s Germany, Stalin’s Russia, Mao’s China – that while rejecting some or all of the teachings of religion, copied characteristics of religion at its worst: infallible leaders, sacred writings, mass rituals, the execution of apostates, and a sense of community that justified exterminating those outside the community.
