definition

Steve Jobs, genius

From Stephen Fry’s “Steve Jobs” (The New Adventures of Stephen Fry: 6 October 2011):

Henry Ford didn’t invent the motor car, Rockefeller didn’t discover how to crack crude oil into petrol, Disney didn’t invent animation, the Macdonald brothers didn’t invent the hamburger, Martin Luther King didn’t invent oratory, neither Jane Austen, Tolstoy nor Flaubert invented the novel and D. W. Griffith, the Warner Brothers, Irving Thalberg and Steven Spielberg didn’t invent film-making. Steve Jobs didn’t invent computers and he didn’t invent packet switching or the mouse. But he saw that there were no limits to the power that creative combinations of technology and design could accomplish.

I once heard George Melly, on a programme about Louis Armstrong, do that dangerous thing and give his own definition of a genius. “A genius,” he said, “is someone who enters a field and works in it and when they leave it, it is different. By that token, Satchmo was a genius.” I don’t think any reasonable person could deny that Steve Jobs, by that same token, was a genius too.


Umberto Eco on books

From Umberto Eco’s “Vegetal and mineral memory: The future of books” (Al-Ahram Weekly: 20–26 November 2003):

Libraries, over the centuries, have been the most important way of keeping our collective wisdom. They were and still are a sort of universal brain where we can retrieve what we have forgotten and what we still do not know. If you will allow me to use such a metaphor, a library is the best possible imitation, by human beings, of a divine mind, where the whole universe is viewed and understood at the same time. A person able to store in his or her mind the information provided by a great library would emulate in some way the mind of God. In other words, we have invented libraries because we know that we do not have divine powers, but we try to do our best to imitate them. …

First of all, we know that books are not ways of making somebody else think in our place; on the contrary, they are machines that provoke further thoughts. Only after the invention of writing was it possible to write such a masterpiece of spontaneous memory as Proust’s A la Recherche du Temps Perdu. Secondly, if once upon a time people needed to train their memories in order to remember things, after the invention of writing they had also to train their memories in order to remember books. Books challenge and improve memory; they do not narcotise it. …

YET IT IS EXACTLY AT THIS POINT that our unravelling activity must start because by hypertextual structure we usually mean two very different phenomena. First, there is the textual hypertext. In a traditional book one must read from left to right (or right to left, or up to down, according to different cultures) in a linear way. One can obviously skip through the pages, one—once arrived at page 300—can go back to check or re-read something at page 10—but this implies physical labour. In contrast to this, a hypertextual text is a multidimensional network or a maze in which every point or node can be potentially connected with any other node. Second, there is the systemic hypertext. The WWW is the Great Mother of All Hypertexts, a world-wide library where you can, or you will in short time, pick up all the books you wish. The Web is the general system of all existing hypertexts. …

Simply, books have proved to be the most suitable instrument for transmitting information. There are two sorts of book: those to be read and those to be consulted. As far as books-to-be-read are concerned, the normal way of reading them is the one that I would call the ‘detective story way’. You start from page one, where the author tells you that a crime has been committed, you follow every path of the detection process until the end, and finally you discover that the guilty one was the butler. End of the book and end of your reading experience. …

Then there are books to be consulted, like handbooks and encyclopaedias. Encyclopaedias are conceived in order to be consulted and never read from the first to the last page. …

Hypertexts will certainly render encyclopaedias and handbooks obsolete. Yesterday, it was possible to have a whole encyclopaedia on a CD-ROM; today, it is possible to have it on line with the advantage that this permits cross references and the non-linear retrieval of information. …

Books belong to those kinds of instruments that, once invented, have not been further improved because they are already alright, such as the hammer, the knife, spoon or scissors. …

TWO NEW INVENTIONS, however, are on the verge of being industrially exploited. One is printing on demand: after scanning the catalogues of many libraries or publishing houses a reader can select the book he needs, and the operator will push a button, and the machine will print and bind a single copy using the font the reader likes. … Simply put: every book will be tailored according to the desires of the buyer, as happened with old manuscripts.

The second invention is the e-book where by inserting a micro-cassette in the book’s spine or by connecting it to the internet one can have a book printed out in front of us. Even in this case, however, we shall still have a book, though as different from our current ones as ours are different from old manuscripts on parchment, and as the first Shakespeare folio of 1623 is different from the last Penguin edition. Yet, up to now e-books have not proved to be commercially successful as their inventors hoped. … E-books will probably prove to be useful for consulting information, as happens with dictionaries or special documents. …

Indeed, there are a lot of new technological devices that have not made previous ones obsolete. Cars run faster than bicycles, but they have not rendered bicycles obsolete, and no new technological improvements can make a bicycle better than it was before. The idea that a new technology abolishes a previous one is frequently too simplistic. Though after the invention of photography painters did not feel obliged to serve any longer as craftsmen reproducing reality, this did not mean that Daguerre’s invention only encouraged abstract painting. There is a whole tradition in modern painting that could not have existed without photographic models: think, for instance, of hyper-realism. Here, reality is seen by the painter’s eye through the photographic eye. This means that in the history of culture it has never been the case that something has simply killed something else. Rather, a new invention has always profoundly changed an older one. …

The computer creates new modes of production and diffusion of printed documents. …

Today there are new hypertextual poetics according to which even a book-to-read, even a poem, can be transformed to hypertext. At this point we are shifting to question two, since the problem is no longer, or not only, a physical one, but rather one that concerns the very nature of creative activity, of the reading process, and in order to unravel this skein of questions we have first of all to decide what we mean by a hypertextual link. …

In order to understand how texts of this genre can work we should decide whether the textual universe we are discussing is limited and finite, limited but virtually infinite, infinite but limited, or unlimited and infinite.

First of all, we should make a distinction between systems and texts. A system, for instance a linguistic system, is the whole of the possibilities displayed by a given natural language. A finite set of grammatical rules allows the speaker to produce an infinite number of sentences, and every linguistic item can be interpreted in terms of other linguistic or other semiotic items—a word by a definition, an event by an example, an animal or a flower by an image, and so on and so forth. …

Grammars, dictionaries and encyclopaedias are systems: by using them you can produce all the texts you like. But a text itself is not a linguistic or an encyclopaedic system. A given text reduces the infinite or indefinite possibilities of a system to make up a closed universe. If I utter the sentence, ‘This morning I had for breakfast…’, for example, the dictionary allows me to list many possible items, provided they are all organic. But if I definitely produce my text and utter, ‘This morning I had for breakfast bread and butter’, then I have excluded cheese, caviar, pastrami and apples. A text castrates the infinite possibilities of a system. …

Take a fairy tale, like Little Red Riding Hood. The text starts from a given set of characters and situations—a little girl, a mother, a grandmother, a wolf, a wood—and through a series of finite steps arrives at a solution. Certainly, you can read the fairy tale as an allegory and attribute different moral meanings to the events and to the actions of the characters, but you cannot transform Little Red Riding Hood into Cinderella. … This seems trivial, but the radical mistake of many deconstructionists was to believe that you can do anything you want with a text. This is blatantly false. …

Now suppose that a finite and limited text is organised hypertextually by many links connecting given words with other words. In a dictionary or an encyclopaedia the word wolf is potentially connected to every other word that makes up part of its possible definition or description (wolf is connected to animal, to mammal to ferocious, to legs, to fur, to eyes, to woods, to the names of the countries in which wolves exist, etc.). In Little Red Riding Hood, the wolf can be connected only with the textual sections in which it shows up or in which it is explicitly evoked. The series of possible links is finite and limited. How can hypertextual strategies be used to ‘open’ up a finite and limited text?

The first possibility is to make the text physically unlimited, in the sense that a story can be enriched by the successive contributions of different authors and in a double sense, let us say either two-dimensionally or three-dimensionally. By this I mean that given, for instance, Little Red Riding Hood, the first author proposes a starting situation (the girl enters the wood) and different contributors can then develop the story one after the other, for example, by having the girl meet not the wolf but Ali Baba, by having both enter an enchanted castle, having a confrontation with a magic crocodile, and so on, so that the story can continue for years. But the text can also be infinite in the sense that at every narrative disjunction, for instance, when the girl enters the wood, many authors can make many different choices. For one author, the girl may meet Pinocchio, for another she may be transformed into a swan, or enter the Pyramids and discover the treasury of the son of Tutankhamen. …

AT THIS POINT one can raise a question about the survival of the very notion of authorship and of the work of art, as an organic whole. And I want simply to inform my audience that this has already happened in the past without disturbing either authorship or organic wholes. The first example is that of the Italian Commedia dell’arte, in which upon a canovaccio, that is, a summary of the basic story, every performance, depending on the mood and fantasy of the actors, was different from every other so that we cannot identify any single work by a single author called Arlecchino servo di due padroni and can only record an uninterrupted series of performances, most of them definitely lost and all certainly different one from another.

Another example would be a jazz jam session. … What I want to say is that we are already accustomed to the idea of the absence of authorship in popular collective art in which every participant adds something, with experiences of jazz-like unending stories. …

A hypertext can give the illusion of opening up even a closed text: a detective story can be structured in such a way that its readers can select their own solution, deciding at the end if the guilty one should be the butler, the bishop, the detective, the narrator, the author or the reader. They can thus build up their own personal story. Such an idea is not a new one. Before the invention of computers, poets and narrators dreamt of a totally open text that readers could infinitely re-compose in different ways. Such was the idea of Le Livre, as extolled by Mallarmé. Raymond Queneau also invented a combinatorial algorithm by virtue of which it was possible to compose, from a finite set of lines, millions of poems. In the early sixties, Max Saporta wrote and published a novel whose pages could be displaced to compose different stories, and Nanni Balestrini gave a computer a disconnected list of verses that the machine combined in different ways to compose different poems. …
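As a side note on the mechanics: Queneau’s scheme offers several pre-written alternatives for each line position and lets the reader pick one per position, so a small, finite set of lines multiplies into an enormous space of poems. Here is a minimal, hedged Python sketch of that combinatorial idea (the stand-in lines are invented for illustration; they are not Queneau’s):

```python
# A toy Queneau-style combinatorial text: invented stand-in lines, not
# Queneau's. With 3 alternatives for each of 4 line positions, the finite
# set of 12 lines already yields 3**4 = 81 distinct poems.
import itertools
import random

lines = [
    ["The wood is dark tonight,", "The girl forgets the path,", "A wolf recites the news,"],
    ["and every page is wet.", "and grandmother is late.", "and no one minds the gate."],
    ["Who wrote the ending first?", "Who paid the butler's alibi?", "Who taught the moon to lie?"],
    ["The reader did, of course.", "The author never knew.", "The alphabet stays silent."],
]

print("possible poems:", len(list(itertools.product(*lines))))   # 81
print("\n".join(random.choice(options) for options in lines))    # one of them
```

The point Eco goes on to make holds here too: the reader chooses among pre-established sequences rather than among letters, so the freedom is real but bounded.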

All these physically moveable texts give an impression of absolute freedom on the part of the reader, but this is only an impression, an illusion of freedom. The machinery that allows one to produce an infinite text with a finite number of elements has existed for millennia, and this is the alphabet. Using an alphabet with a limited number of letters one can produce billions of texts, and this is exactly what has been done from Homer to the present days. In contrast, a stimulus-text that provides us not with letters, or words, but with pre-established sequences of words, or of pages, does not set us free to invent anything we want. …

At the last borderline of free textuality there can be a text that starts as a closed one, let us say, Little Red Riding Hood or The Arabian Nights, and that I, the reader, can modify according to my inclinations, thus elaborating a second text, which is no longer the same as the original one, whose author is myself, even though the affirmation of my authorship is a weapon against the concept of definite authorship. …

A BOOK OFFERS US A TEXT which, while being open to multiple interpretations, tells us something that cannot be modified. … Alas, with an already written book, whose fate is determined by repressive, authorial decision, we cannot do this. We are obliged to accept fate and to realise that we are unable to change destiny. A hypertextual and interactive novel allows us to practice freedom and creativity, and I hope that such inventive activity will be implemented in the schools of the future. But the already and definitely written novel War and Peace does not confront us with the unlimited possibilities of our imagination, but with the severe laws governing life and death. …


Religion, God, history, morality

From Steve Paulson’s interview with Robert Wright, “God, He’s moody” (Salon: 24 June 2009):

Do you think religions share certain core principles?

Not many. People in the modern world, certainly in America, think of religion as being largely about prescribing moral behavior. But religion wasn’t originally about that at all. To judge by hunter-gatherer religions, religion was not fundamentally about morality before the invention of agriculture. It was trying to figure out why bad things happen and increasing the frequency with which good things happen. Why do you sometimes get earthquakes, storms, disease and get slaughtered? But then sometimes you get nice weather, abundant game and you get to do the slaughtering. Those were the religious questions in the beginning.

And bad things happened because the gods were against you or certain spirits had it out for you?

Yes, you had done something to offend a god or spirit. However, it was not originally a moral lapse. That’s an idea you see as societies get more complex. When you have a small group of hunter-gatherers, a robust moral system is not a big challenge. Everyone knows everybody, so it’s hard to conceal anything you steal. If you mess with somebody too much, there will be payback. Moral regulation is not a big problem in a simple society. But as society got more complex with the invention of agriculture and writing, morality did become a challenge. Religion filled that gap.

For people who claim that Israel was monotheistic from the get-go and its flirtations with polytheism were rare aberrations, it’s interesting that the Jerusalem temple, according to the Bible’s account, had all these other gods being worshiped in it. Asherah was in the temple. She seemed to be a consort or wife of Yahweh. And there were vessels devoted to Baal, the reviled Canaanite god. So Israel was fundamentally polytheistic at this point. Then King Josiah goes on a rampage as he tries to consolidate his own power by wiping out the other gods.

You make the point that the Quran is a different kind of sacred text than the Bible. It was probably written over the course of two decades, while the stories collected in the Bible were written over centuries. That’s why the Bible is such a diverse document.

We think of the Bible as a book, but in ancient times it would have been thought of as a library. There were books written by lots of different people, including a lot of cosmopolitan elites. You also see elements of Greek philosophy. The Quran is just one guy talking. In the Muslim view, he’s mediating the word of God. He’s not especially cosmopolitan. He is, according to Islamic tradition, illiterate. So it’s not surprising that the Quran didn’t have the intellectual diversity and, in some cases, the philosophical depth that you find in the Bible. I do think he was actually a very modern thinker. Muhammad’s argument for why you should be devoted exclusively to this one God is very modern.

Are you also saying we can be religious without believing in God?

By some definitions, yes. It’s hard to find a definition of religion that encompasses everything we call religion. The definition I like comes from William James. He said, “Religious belief consists of the belief that there is an unseen order and that our supreme good lies in harmoniously adjusting to that order.” In that sense, you can be religious without believing in God. In that sense, I’m religious. On the God question, I’m not sure.


David Foster Wallace on fiction’s purpose in dark times

From Larry McCaffery’s “Conversation with David Foster Wallace” (Dalkey Archive Press at the University of Illinois: Summer 1993):

Look man, we’d probably most of us agree that these are dark times, and stupid ones, but do we need fiction that does nothing but dramatize how dark and stupid everything is? In dark times, the definition of good art would seem to be art that locates and applies CPR to those elements of what’s human and magical that still live and glow despite the times’ darkness. Really good fiction could have as dark a worldview as it wished, but it’d find a way both to depict this world and to illuminate the possibilities for being alive and human in it.


David Foster Wallace on David Lynch

From David Foster Wallace’s “David Lynch Keeps His Head” (Premiere: September 1996):

AN ACADEMIC DEFINITION of Lynchian might be that the term “refers to a particular kind of irony where the very macabre and the very mundane combine in such a way as to reveal the former’s perpetual containment within the latter.” But like postmodern or pornographic, Lynchian is one of those Potter Stewart-type words that’s ultimately definable only ostensively – i.e., we know it when we see it. Ted Bundy wasn’t particularly Lynchian, but good old Jeffrey Dahmer, with his victims’ various anatomies neatly separated and stored in his fridge alongside his chocolate milk and Shedd Spread, was thoroughgoingly Lynchian. A recent homicide in Boston, in which the deacon of a South Shore church reportedly gave chase to a vehicle that had cut him off, forced the car off the road, and shot the driver with a high-powered crossbow, was borderline Lynchian. A Rotary luncheon where everybody’s got a comb-over and a polyester sport coat and is eating bland Rotarian chicken and exchanging Republican platitudes with heartfelt sincerity and yet all are either amputees or neurologically damaged or both would be more Lynchian than not.


Defining social media, social software, & Web 2.0

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Social media is the latest buzzword in a long line of buzzwords. It is often used to describe the collection of software that enables individuals and communities to gather, communicate, share, and in some cases collaborate or play. In tech circles, social media has replaced the earlier fave “social software.” Academics still tend to prefer terms like “computer-mediated communication” or “computer-supported cooperative work” to describe the practices that emerge from these tools and the old skool academics might even categorize these tools as “groupwork” tools. Social media is driven by another buzzword: “user-generated content” or content that is contributed by participants rather than editors.

… These tools are part of a broader notion of “Web2.0.” Yet-another-buzzword, Web2.0 means different things to different people.

For the technology crowd, Web2.0 was about a shift in development and deployment. Rather than producing a product, testing it, and shipping it to be consumed by an audience that was disconnected from the developer, Web2.0 was about the perpetual beta. This concept makes all of us giggle, but what this means is that, for technologists, Web2.0 was about constantly iterating the technology as people interacted with it and learning from what they were doing. To make this happen, we saw the rise of technologies that supported real-time interactions, user-generated content, remixing and mashups, APIs and open-source software that allowed mass collaboration in the development cycle. …

For the business crowd, Web2.0 can be understood as hope. Web2.0 emerged out of the ashes of the fallen tech bubble and bust. Scars ran deep throughout Silicon Valley and venture capitalists and entrepreneurs wanted to party like it was 1999. Web2.0 brought energy to this forlorn crowd. At first they were skeptical, but slowly they bought in. As a result, we’ve seen a resurgence of startups, venture capitalists, and conferences. At this point, Web2.0 is sometimes referred to as Bubble2.0, but there’s something to say about “hope” even when the VCs start co-opting that term because they want four more years.

For users, Web2.0 was all about reorganizing web-based practices around Friends. For many users, direct communication tools like email and IM were used to communicate with one’s closest and dearest while online communities were tools for connecting with strangers around shared interests. Web2.0 reworked all of that by allowing users to connect in new ways. While many of the tools may have been designed to help people find others, what Web2.0 showed was that people really wanted a way to connect with those that they already knew in new ways. Even tools like MySpace and Facebook which are typically labeled social networkING sites were never really about networking for most users. They were about socializing inside of pre-existing networks.


Bush, rhetoric, & the exercise of power

From Mark Danner’s “Words in a Time of War: Taking the Measure of the First Rhetoric-Major President” (Tomgram: 10 May 2007):

[Note: This commencement address was given to graduates of the Department of Rhetoric at Zellerbach Hall, University of California, Berkeley, on May 10, 2007]

I give you my favorite quotation from the Bush administration, put forward by the proverbial “unnamed Administration official” and published in the New York Times Magazine by the fine journalist Ron Suskind in October 2004. Here, in Suskind’s recounting, is what that “unnamed Administration official” told him:

“The aide said that guys like me were ‘in what we call the reality-based community,’ which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality.’ I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ‘That’s not the way the world really works anymore,’ he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality — judiciously, as you will — we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors…. and you, all of you, will be left to just study what we do.'”

It was the assumption of this so-called preponderance that lay behind the philosophy of power enunciated by Bush’s Brain [Karl Rove] and that led to an attitude toward international law and alliances that is, in my view, quite unprecedented in American history. That radical attitude is brilliantly encapsulated in a single sentence drawn from the National Security Strategy of the United States of 2003: “Our strength as a nation-state will continue to be challenged by those who employ a strategy of the weak using international fora, judicial processes and terrorism.” Let me repeat that little troika of “weapons of the weak”: international fora (meaning the United Nations and like institutions), judicial processes (meaning courts, domestic and international), and…. terrorism. This strange gathering, put forward by the government of the United States, stems from the idea that power is, in fact, everything. In such a world, courts — indeed, law itself — can only limit the power of the most powerful state. Wielding preponderant power, what need has it for law? The latter must be, by definition, a weapon of the weak. The most powerful state, after all, makes reality.


A definition of cloud computing

From Darryl K. Taft’s “Predictions for the Cloud in 2009” (eWeek: 29 December 2008):

[Peter] Coffee, who is now director of platform research at Salesforce.com, said, “I’m currently using a simple reference model for what a ‘cloud computing’ initiative should try to provide. I’m borrowing from the famous Zero-One-Infinity rule, canonically defined in The Jargon File…”

He continued, “It seems to me that a serious effort at delivering cloud benefits pursues the following ideals—perhaps never quite reaching them, but clearly having them as goals within theoretical possibility: Zero—On-premise[s] infrastructure, acquisition cost, adoption cost and support cost. One—Coherent software environment—not a ‘stack’ of multiple products from different providers. This avoids the chaos of uncoordinated release cycles or deferred upgrades. Infinity—Scalability in response to changing need, integratability/interoperability with legacy assets and other services, and customizability/programmability from data, through logic, up into the user interface without compromising robust multitenancy.”


Problems with airport security

From Jeffrey Goldberg’s “The Things He Carried” (The Atlantic: November 2008):

Because the TSA’s security regimen seems to be mainly thing-based—most of its 44,500 airport officers are assigned to truffle through carry-on bags for things like guns, bombs, three-ounce tubes of anthrax, Crest toothpaste, nail clippers, Snapple, and so on—I focused my efforts on bringing bad things through security in many different airports, primarily my home airport, Washington’s Reagan National, the one situated approximately 17 feet from the Pentagon, but also in Los Angeles, New York, Miami, Chicago, and at the Wilkes-Barre/Scranton International Airport (which is where I came closest to arousing at least a modest level of suspicion, receiving a symbolic pat-down—all frisks that avoid the sensitive regions are by definition symbolic—and one question about the presence of a Leatherman Multi-Tool in my pocket; said Leatherman was confiscated and is now, I hope, living with the loving family of a TSA employee). And because I have a fair amount of experience reporting on terrorists, and because terrorist groups produce large quantities of branded knickknacks, I’ve amassed an inspiring collection of al-Qaeda T-shirts, Islamic Jihad flags, Hezbollah videotapes, and inflatable Yasir Arafat dolls (really). All these things I’ve carried with me through airports across the country. I’ve also carried, at various times: pocketknives, matches from hotels in Beirut and Peshawar, dust masks, lengths of rope, cigarette lighters, nail clippers, eight-ounce tubes of toothpaste (in my front pocket), bottles of Fiji Water (which is foreign), and, of course, box cutters. I was selected for secondary screening four times—out of dozens of passages through security checkpoints—during this extended experiment. At one screening, I was relieved of a pair of nail clippers; during another, a can of shaving cream.

During one secondary inspection, at O’Hare International Airport in Chicago, I was wearing under my shirt a spectacular, only-in-America device called a “Beerbelly,” a neoprene sling that holds a polyurethane bladder and drinking tube. The Beerbelly, designed originally to sneak alcohol—up to 80 ounces—into football games, can quite obviously be used to sneak up to 80 ounces of liquid through airport security. (The company that manufactures the Beerbelly also makes something called a “Winerack,” a bra that holds up to 25 ounces of booze and is recommended, according to the company’s Web site, for PTA meetings.) My Beerbelly, which fit comfortably over my beer belly, contained two cans’ worth of Bud Light at the time of the inspection. It went undetected. The eight-ounce bottle of water in my carry-on bag, however, was seized by the federal government.

Schneier and I walked to the security checkpoint. “Counterterrorism in the airport is a show designed to make people feel better,” he said. “Only two things have made flying safer: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.” This assumes, of course, that al-Qaeda will target airplanes for hijacking, or target aviation at all. “We defend against what the terrorists did last week,” Schneier said. He believes that the country would be just as safe as it is today if airport security were rolled back to pre-9/11 levels. “Spend the rest of your money on intelligence, investigations, and emergency response.”

We took our shoes off and placed our laptops in bins. Schneier took from his bag a 12-ounce container labeled “saline solution.”

“It’s allowed,” he said. Medical supplies, such as saline solution for contact-lens cleaning, don’t fall under the TSA’s three-ounce rule.

“What’s allowed?” I asked. “Saline solution, or bottles labeled saline solution?”

“Bottles labeled saline solution. They won’t check what’s in it, trust me.”

They did not check. As we gathered our belongings, Schnei­er held up the bottle and said to the nearest security officer, “This is okay, right?” “Yep,” the officer said. “Just have to put it in the tray.”

“Maybe if you lit it on fire, he’d pay attention,” I said, risking arrest for making a joke at airport security. (Later, Schneier would carry two bottles labeled saline solution—24 ounces in total—through security. An officer asked him why he needed two bottles. “Two eyes,” he said. He was allowed to keep the bottles.)

We were in the clear. But what did we prove?

“We proved that the ID triangle is hopeless,” Schneier said.

The ID triangle: before a passenger boards a commercial flight, he interacts with his airline or the government three times—when he purchases his ticket; when he passes through airport security; and finally at the gate, when he presents his boarding pass to an airline agent. It is at the first point of contact, when the ticket is purchased, that a passenger’s name is checked against the government’s no-fly list. It is not checked again, and for this reason, Schneier argued, the process is merely another form of security theater.

“The goal is to make sure that this ID triangle represents one person,” he explained. “Here’s how you get around it. Let’s assume you’re a terrorist and you believe your name is on the watch list.” It’s easy for a terrorist to check whether the government has cottoned on to his existence, Schneier said; he simply has to submit his name online to the new, privately run CLEAR program, which is meant to fast-pass approved travelers through security. If the terrorist is rejected, then he knows he’s on the watch list.

To slip through the only check against the no-fly list, the terrorist uses a stolen credit card to buy a ticket under a fake name. “Then you print a fake boarding pass with your real name on it and go to the airport. You give your real ID, and the fake boarding pass with your real name on it, to security. They’re checking the documents against each other. They’re not checking your name against the no-fly list—that was done on the airline’s computers. Once you’re through security, you rip up the fake boarding pass, and use the real boarding pass that has the name from the stolen credit card. Then you board the plane, because they’re not checking your name against your ID at boarding.”

What if you don’t know how to steal a credit card?

“Then you’re a stupid terrorist and the government will catch you,” he said.

What if you don’t know how to download a PDF of an actual boarding pass and alter it on a home computer?

“Then you’re a stupid terrorist and the government will catch you.”

I couldn’t believe that what Schneier was saying was true—in the national debate over the no-fly list, it is seldom, if ever, mentioned that the no-fly list doesn’t work. “It’s true,” he said. “The gap blows the whole system out of the water.”


Richard Stallman on the 4 freedoms

From Richard Stallman’s “Transcript of Richard Stallman at the 4th international GPLv3 conference; 23rd August 2006” (FSF Europe: 23 August 2006):

Specifically, this refers to four essential freedoms, which are the definition of Free Software.

Freedom zero is the freedom to run the program, as you wish, for any purpose.

Freedom one is the freedom to study the source code and then change it so that it does what you wish.

Freedom two is the freedom to help your neighbour, which is the freedom to distribute, including publication, copies of the program to others when you wish.

Freedom three is the freedom to help build your community, which is the freedom to distribute, including publication, your modified versions, when you wish.

These four freedoms make it possible for users to live an upright, ethical life as a member of a community and enable us individually and collectively to have control over what our software does and thus to have control over our computing.


What is Web 2.0?

From Bruce Sterling’s “Viridian Note 00459: Emerging Technology 2006” (The Viridian Design Movement: March 2006):

Here we’ve got the canonical Tim O’Reilly definition of Web 2.0:

“Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an ‘architecture of participation,’ and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.”


Word of the day: creative destruction

From Wikipedia’s “Creative destruction” (13 July 2006):

Creative destruction, introduced by the economist Joseph Schumpeter, describes the process of industrial transformation that accompanies radical innovation. In Schumpeter’s vision of capitalism, innovative entry by entrepreneurs was the force that sustained long-term economic growth, even as it destroyed the value of established companies that enjoyed some degree of monopoly power. …

There are numerous types of innovation generating creative destruction in an industry:

New markets or products
New equipment
New sources of labor and raw materials
New methods of organization or management
New methods of inventory management
New methods of transportation
New methods of communication (e.g., the Internet)
New methods of advertising and marketing
New financial instruments
New ways to lobby politicians or new legal strategies


3 English words with the most meanings

From Tim Bray’s “On Search: Squirmy Words” (29 June 2003):

First of all, the words that have the most variation in meaning and the most collisions with other words are the common ones. In the Oxford English Dictionary, the three words with the longest entries (i.e. largest number of meanings) are “set,” “run,” and “get.”


The politics & basics of Unicode

From Tim Bray’s “On the Goodness of Unicode” (6 April 2003):

Unicode proper is a consortium of technology vendors that, many years ago in a flash of intelligence and public-spiritedness, decided to unify their work with that going on at the ISO. Thus, while there are officially two standards you should care about, Unicode and ISO 10646, through some political/organizational magic they are exactly the same, and if you’re using one you’re also using the other. …

The basics of Unicode are actually pretty simple. It defines a large (and steadily growing) number of characters – just under 100,000 last time I checked. Each character gets a name and a number, for example LATIN CAPITAL LETTER A is 65 and TIBETAN SYLLABLE OM is 3840. Unicode includes a table of useful character properties such as “this is lower case” or “this is a number” or “this is a punctuation mark”.
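The name-and-number pairing Bray describes is easy to poke at from code; here is a small Python sketch using the standard-library unicodedata module to print his two examples and a few of the character properties he mentions:

```python
# Look up the name, number (code point), and properties of a few characters.
import unicodedata

for ch in ["A", "\u0F00"]:           # U+0F00 is TIBETAN SYLLABLE OM (3840)
    print(f"U+{ord(ch):04X}", ord(ch), unicodedata.name(ch))

# Properties like "this is lower case", "this is a number", "this is punctuation":
print("a".islower(), "7".isdigit(), unicodedata.category(","))   # True True Po
```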


Wikipedia defines fascism

From Wikipedia’s “Fascism” (5 July 2006):

Fascism is a radical totalitarian political philosophy that combines elements of corporatism, authoritarianism, extreme nationalism, militarism, anti-rationalism, anti-anarchism, anti-communism and anti-liberalism. …

A recent definition that has attracted much favorable comment is that by Robert O. Paxton:

“Fascism may be defined as a form of political behavior marked by obsessive preoccupation with community decline, humiliation, or victim-hood and by compensatory cults of unity, energy, and purity, in which a mass-based party of committed nationalist militants, working in uneasy but effective collaboration with traditional elites, abandons democratic liberties and pursues with redemptive violence and without ethical or legal restraints goals of internal cleansing and external expansion.” (Anatomy of Fascism, p 218)

Fascism is associated by many scholars with one or more of the following characteristics: a very high degree of nationalism, economic corporatism, a powerful, dictatorial leader who portrays the nation, state or collective as superior to the individuals or groups composing it.


Quick ‘n dirty explanation of onion routing

From Ann Harrison’s “Onion Routing Averts Prying Eyes” (Wired News: 5 August 2004):

Computer programmers are modifying a communications system, originally developed by the U.S. Naval Research Lab, to help Internet users surf the Web anonymously and shield their online activities from corporate or government eyes.

The system is based on a concept called onion routing. It works like this: Messages, or packets of information, are sent through a distributed network of randomly selected servers, or nodes, each of which knows only its predecessor and successor. Messages flowing through this network are unwrapped by a symmetric encryption key at each server that peels off one layer and reveals instructions for the next downstream node. …
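As a rough illustration of that peeling process (a sketch of the general idea only, not Tor’s actual protocol), the following Python snippet wraps a message in one symmetric layer per node and then has each node strip exactly one layer. It assumes the third-party cryptography package and an invented three-node path:

```python
# Toy onion layering: each node holds one symmetric key and peels one layer.
from cryptography.fernet import Fernet

path = ["node-A", "node-B", "node-C"]            # randomly selected in practice
keys = {node: Fernet(Fernet.generate_key()) for node in path}

# The sender builds the onion from the inside out, so the outermost layer
# belongs to the first node on the path.
onion = b"exit:example.org|GET /"                # the innermost payload
for node in reversed(path):
    onion = keys[node].encrypt(b"next:" + onion)

# Each node decrypts with its own key, reads its instruction, and forwards
# the still-encrypted remainder downstream; it never sees the inner layers.
for node in path:
    onion = keys[node].decrypt(onion)
    assert onion.startswith(b"next:")
    onion = onion[len(b"next:"):]

print(onion)   # b'exit:example.org|GET /' (only the last hop sees the payload)
```

In the real system each peeled layer carries the address of the next downstream node, which is what lets each server know only its predecessor and successor.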

The Navy is financing the development of a second-generation onion-routing system called Tor, which addresses many of the flaws in the original design and makes it easier to use. The Tor client behaves like a SOCKS proxy (a common protocol for developing secure communication services), allowing applications like Mozilla, SSH and FTP clients to talk directly to Tor and route data streams through a network of onion routers, without long delays.


How virtual machines work

From Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, & Jacob R. Lorch’s “SubVirt: Implementing malware with virtual machines” [PDF] (IEEE Symposium on Security and Privacy: May 2006):

A virtual-machine monitor (VMM) manages the resources of the underlying hardware and provides an abstraction of one or more virtual machines [20]. Each virtual machine can run a complete operating system and its applications. Figure 1 shows the architecture used by two modern VMMs (VMware and VirtualPC). Software running within a virtual machine is called guest software (i.e., guest operating systems and guest applications). All guest software (including the guest OS) runs in user mode; only the VMM runs in the most privileged level (kernel mode). The host OS in Figure 1 is used to provide portable access to a wide variety of I/O devices [44].

VMMs export hardware-level abstractions to guest software using emulated hardware. The guest OS interacts with the virtual hardware in the same manner as it would with real hardware (e.g., in/out instructions, DMA), and these interactions are trapped by the VMM and emulated in software. This emulation allows the guest OS to run without modification while maintaining control over the system at the VMM layer.
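To make the trap-and-emulate loop concrete, here is a hypothetical Python sketch (not from the paper, and nothing like a production VMM) in which a privileged I/O instruction issued by the guest lands in the VMM, which updates a software model of the device:

```python
# Toy trap-and-emulate: guest I/O instructions are trapped by the "VMM",
# which emulates the device entirely in software.
class VMM:
    def __init__(self):
        self.serial_port_buffer = []          # emulated hardware state

    def trap(self, instruction, port, value=None):
        # Privileged instructions from guest mode land here instead of on
        # real hardware; the VMM updates its software model of the device.
        if instruction == "out":
            self.serial_port_buffer.append(value)
        elif instruction == "in":
            return self.serial_port_buffer.pop(0) if self.serial_port_buffer else 0

def guest_kernel(vmm):
    # Guest code behaves as if it were talking to a real serial port.
    for byte in b"hello":
        vmm.trap("out", port=0x3F8, value=byte)

vmm = VMM()
guest_kernel(vmm)
print(bytes(vmm.serial_port_buffer))          # b'hello', emulated in software
```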

A VMM can support multiple OSes on one computer by multiplexing that computer’s hardware and providing the illusion of multiple, distinct virtual computers, each of which can run a separate operating system and its applications. The VMM isolates all resources of each virtual computer through redirection. For example, the VMM can map two virtual disks to different sectors of a shared physical disk, and the VMM can map the physical memory space of each virtual machine to different pages in the real machine’s memory. In addition to multiplexing a computer’s hardware, VMMs also provide a powerful platform for adding services to an existing system. For example, VMMs have been used to debug operating systems and system configurations [30, 49], migrate live machines [40], detect or prevent intrusions [18, 27, 8], and attest for code integrity [17]. These VM services are typically implemented outside the guest they are serving in order to avoid perturbing the guest.
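The disk example in that paragraph can be sketched as a simple translation table. The code below is a hedged toy (invented names, nothing from VMware or VirtualPC) in which the “VMM” maps each guest’s virtual disk blocks onto a disjoint region of one shared physical disk:

```python
# Toy resource redirection: per-VM virtual blocks map to disjoint regions
# of one shared "physical" disk, so each guest sees a private disk.
PHYSICAL_DISK = bytearray(1 << 20)            # one shared physical disk (1 MiB)
BLOCK = 512

class VirtualDisk:
    def __init__(self, base_block: int, num_blocks: int):
        self.base = base_block                # where this VM's region begins
        self.size = num_blocks

    def _translate(self, vblock: int) -> int:
        if not 0 <= vblock < self.size:
            raise ValueError("guest accessed a block outside its virtual disk")
        return (self.base + vblock) * BLOCK   # redirect to the physical offset

    def write(self, vblock: int, data: bytes) -> None:
        off = self._translate(vblock)
        PHYSICAL_DISK[off:off + len(data)] = data

    def read(self, vblock: int, length: int = BLOCK) -> bytes:
        off = self._translate(vblock)
        return bytes(PHYSICAL_DISK[off:off + length])

# Two guests get disjoint regions of the same physical disk.
vm1_disk = VirtualDisk(base_block=0, num_blocks=1024)
vm2_disk = VirtualDisk(base_block=1024, num_blocks=1024)
vm1_disk.write(0, b"guest 1 boot sector")
vm2_disk.write(0, b"guest 2 boot sector")
print(vm1_disk.read(0, 19), vm2_disk.read(0, 19))
```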

One problem faced by VM services is the difficulty in understanding the states and events inside the guest they are serving; VM services operate at a different level of abstraction from guest software. Software running outside of a virtual machine views low-level virtual-machine state such as disk blocks, network packets, and memory. Software inside the virtual machine interprets this state as high-level abstractions such as files, TCP connections, and variables. This gap between the VMM’s view of data/events and guest software’s view of data/events is called the semantic gap [13].

Virtual-machine introspection (VMI) [18, 27] describes a family of techniques that enables a VM service to understand and modify states and events within the guest. VMI translates variables and guest memory addresses by reading the guest OS and applications’ symbol tables and page tables. VMI uses hardware or software breakpoints to enable a VM service to gain control at specific instruction addresses. Finally, VMI allows a VM service to invoke guest OS or application code. Invoking guest OS code allows the VM service to leverage existing, complex guest code to carry out general-purpose functionality such as reading a guest file from the file cache/disk system. VM services can protect themselves from guest code by disallowing external I/O. They can protect the guest data from perturbation by checkpointing it before changing its state and rolling the guest back later.
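A very reduced sketch of the translation step: the VM service reads the guest’s symbol table and page table from outside and resolves a named variable to bytes of guest physical memory. Everything below (the layout, the names, the single-level page table) is hypothetical and only meant to illustrate the idea:

```python
# Toy VMI-style address translation: symbol -> guest virtual address ->
# guest physical address -> raw bytes, all done from outside the guest.
PAGE = 4096
guest_physical_memory = bytearray(16 * PAGE)

# Structures a VM service would parse out of the guest OS image/memory.
guest_symbol_table = {"current_pid": 0x0000_2004}    # guest virtual address
guest_page_table = {0x0000_2000: 5 * PAGE}           # vpage base -> phys base

def vmi_read(symbol: str, size: int) -> bytes:
    vaddr = guest_symbol_table[symbol]
    vpage, offset = vaddr & ~(PAGE - 1), vaddr & (PAGE - 1)
    paddr = guest_page_table[vpage] + offset          # walk the page table
    return bytes(guest_physical_memory[paddr:paddr + size])

# Pretend the guest stored a PID at that location; the service reads it.
guest_physical_memory[5 * PAGE + 4:5 * PAGE + 8] = (1234).to_bytes(4, "little")
print(int.from_bytes(vmi_read("current_pid", 4), "little"))   # 1234
```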
