internet

Umberto Eco on books

From Umberto Eco’s “Vegetal and mineral memory: The future of books” (Al-Ahram Weekly: 20—26 November 2003):

Libraries, over the centuries, have been the most important way of keeping our collective wisdom. They were and still are a sort of universal brain where we can retrieve what we have forgotten and what we still do not know. If you will allow me to use such a metaphor, a library is the best possible imitation, by human beings, of a divine mind, where the whole universe is viewed and understood at the same time. A person able to store in his or her mind the information provided by a great library would emulate in some way the mind of God. In other words, we have invented libraries because we know that we do not have divine powers, but we try to do our best to imitate them. …

First of all, we know that books are not ways of making somebody else think in our place; on the contrary, they are machines that provoke further thoughts. Only after the invention of writing was it possible to write such a masterpiece of spontaneous memory as Proust’s A la Recherche du Temps Perdu. Secondly, if once upon a time people needed to train their memories in order to remember things, after the invention of writing they had also to train their memories in order to remember books. Books challenge and improve memory; they do not narcotise it. …

YET IT IS EXACTLY AT THIS POINT that our unravelling activity must start because by hypertextual structure we usually mean two very different phenomena. First, there is the textual hypertext. In a traditional book one must read from left to right (or right to left, or up to down, according to different cultures) in a linear way. One can obviously skip through the pages, one—once arrived at page 300—can go back to check or re-read something at page 10—but this implies physical labour. In contrast to this, a hypertextual text is a multidimensional network or a maze in which every point or node can be potentially connected with any other node. Second, there is the systemic hypertext. The WWW is the Great Mother of All Hypertexts, a world-wide library where you can, or you will in short time, pick up all the books you wish. The Web is the general system of all existing hypertexts. …

Simply, books have proved to be the most suitable instrument for transmitting information. There are two sorts of book: those to be read and those to be consulted. As far as books-to-be-read are concerned, the normal way of reading them is the one that I would call the ‘detective story way’. You start from page one, where the author tells you that a crime has been committed, you follow every path of the detection process until the end, and finally you discover that the guilty one was the butler. End of the book and end of your reading experience. …

Then there are books to be consulted, like handbooks and encyclopaedias. Encyclopaedias are conceived in order to be consulted and never read from the first to the last page. …

Hypertexts will certainly render encyclopaedias and handbooks obsolete. Yesterday, it was possible to have a whole encyclopaedia on a CD-ROM; today, it is possible to have it on line with the advantage that this permits cross references and the non-linear retrieval of information. …

Books belong to those kinds of instruments that, once invented, have not been further improved because they are already alright, such as the hammer, the knife, spoon or scissors. …

TWO NEW INVENTIONS, however, are on the verge of being industrially exploited. One is printing on demand: after scanning the catalogues of many libraries or publishing houses a reader can select the book he needs, and the operator will push a button, and the machine will print and bind a single copy using the font the reader likes. … Simply put: every book will be tailored according to the desires of the buyer, as happened with old manuscripts.

The second invention is the e-book where by inserting a micro-cassette in the book’s spine or by connecting it to the internet one can have a book printed out in front of us. Even in this case, however, we shall still have a book, though as different from our current ones as ours are different from old manuscripts on parchment, and as the first Shakespeare folio of 1623 is different from the last Penguin edition. Yet, up to now e-books have not proved to be commercially successful as their inventors hoped. … E-books will probably prove to be useful for consulting information, as happens with dictionaries or special documents. …

Indeed, there are a lot of new technological devices that have not made previous ones obsolete. Cars run faster than bicycles, but they have not rendered bicycles obsolete, and no new technological improvements can make a bicycle better than it was before. The idea that a new technology abolishes a previous one is frequently too simplistic. Though after the invention of photography painters did not feel obliged to serve any longer as craftsmen reproducing reality, this did not mean that Daguerre’s invention only encouraged abstract painting. There is a whole tradition in modern painting that could not have existed without photographic models: think, for instance, of hyper-realism. Here, reality is seen by the painter’s eye through the photographic eye. This means that in the history of culture it has never been the case that something has simply killed something else. Rather, a new invention has always profoundly changed an older one. …

The computer creates new modes of production and diffusion of printed documents. …

Today there are new hypertextual poetics according to which even a book-to-read, even a poem, can be transformed to hypertext. At this point we are shifting to question two, since the problem is no longer, or not only, a physical one, but rather one that concerns the very nature of creative activity, of the reading process, and in order to unravel this skein of questions we have first of all to decide what we mean by a hypertextual link. …

In order to understand how texts of this genre can work we should decide whether the textual universe we are discussing is limited and finite, limited but virtually infinite, infinite but limited, or unlimited and infinite.

First of all, we should make a distinction between systems and texts. A system, for instance a linguistic system, is the whole of the possibilities displayed by a given natural language. A finite set of grammatical rules allows the speaker to produce an infinite number of sentences, and every linguistic item can be interpreted in terms of other linguistic or other semiotic items—a word by a definition, an event by an example, an animal or a flower by an image, and so on and so forth. …

Grammars, dictionaries and encyclopaedias are systems: by using them you can produce all the texts you like. But a text itself is not a linguistic or an encyclopaedic system. A given text reduces the infinite or indefinite possibilities of a system to make up a closed universe. If I utter the sentence, ‘This morning I had for breakfast…’, for example, the dictionary allows me to list many possible items, provided they are all organic. But if I definitely produce my text and utter, ‘This morning I had for breakfast bread and butter’, then I have excluded cheese, caviar, pastrami and apples. A text castrates the infinite possibilities of a system. …

Take a fairy tale, like Little Red Riding Hood. The text starts from a given set of characters and situations—a little girl, a mother, a grandmother, a wolf, a wood—and through a series of finite steps arrives at a solution. Certainly, you can read the fairy tale as an allegory and attribute different moral meanings to the events and to the actions of the characters, but you cannot transform Little Red Riding Hood into Cinderella. … This seems trivial, but the radical mistake of many deconstructionists was to believe that you can do anything you want with a text. This is blatantly false. …

Now suppose that a finite and limited text is organised hypertextually by many links connecting given words with other words. In a dictionary or an encyclopaedia the word wolf is potentially connected to every other word that makes up part of its possible definition or description (wolf is connected to animal, to mammal, to ferocious, to legs, to fur, to eyes, to woods, to the names of the countries in which wolves exist, etc.). In Little Red Riding Hood, the wolf can be connected only with the textual sections in which it shows up or in which it is explicitly evoked. The series of possible links is finite and limited. How can hypertextual strategies be used to ‘open’ up a finite and limited text?

The first possibility is to make the text physically unlimited, in the sense that a story can be enriched by the successive contributions of different authors and in a double sense, let us say either two-dimensionally or three-dimensionally. By this I mean that given, for instance, Little Red Riding Hood, the first author proposes a starting situation (the girl enters the wood) and different contributors can then develop the story one after the other, for example, by having the girl meet not the wolf but Ali Baba, by having both enter an enchanted castle, having a confrontation with a magic crocodile, and so on, so that the story can continue for years. But the text can also be infinite in the sense that at every narrative disjunction, for instance, when the girl enters the wood, many authors can make many different choices. For one author, the girl may meet Pinocchio, for another she may be transformed into a swan, or enter the Pyramids and discover the treasury of the son of Tutankhamen. …

AT THIS POINT one can raise a question about the survival of the very notion of authorship and of the work of art, as an organic whole. And I want simply to inform my audience that this has already happened in the past without disturbing either authorship or organic wholes. The first example is that of the Italian Commedia dell’arte, in which upon a canovaccio, that is, a summary of the basic story, every performance, depending on the mood and fantasy of the actors, was different from every other so that we cannot identify any single work by a single author called Arlecchino servo di due padroni and can only record an uninterrupted series of performances, most of them definitely lost and all certainly different one from another.

Another example would be a jazz jam session. … What I want to say is that we are already accustomed to the idea of the absence of authorship in popular collective art in which every participant adds something, with experiences of jazz-like unending stories. …

A hypertext can give the illusion of opening up even a closed text: a detective story can be structured in such a way that its readers can select their own solution, deciding at the end if the guilty one should be the butler, the bishop, the detective, the narrator, the author or the reader. They can thus build up their own personal story. Such an idea is not a new one. Before the invention of computers, poets and narrators dreamt of a totally open text that readers could infinitely re-compose in different ways. Such was the idea of Le Livre, as extolled by Mallarmé. Raymond Queneau also invented a combinatorial algorithm by virtue of which it was possible to compose, from a finite set of lines, millions of poems. In the early sixties, Max Saporta wrote and published a novel whose pages could be displaced to compose different stories, and Nanni Balestrini gave a computer a disconnected list of verses that the machine combined in different ways to compose different poems. …

All these physically moveable texts give an impression of absolute freedom on the part of the reader, but this is only an impression, an illusion of freedom. The machinery that allows one to produce an infinite text with a finite number of elements has existed for millennia, and this is the alphabet. Using an alphabet with a limited number of letters one can produce billions of texts, and this is exactly what has been done from Homer to the present days. In contrast, a stimulus-text that provides us not with letters, or words, but with pre-established sequences of words, or of pages, does not set us free to invent anything we want. …
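
As an aside, the combinatorial arithmetic behind Queneau’s experiment is easy to make concrete. The sketch below is not his algorithm or his verses, just a toy illustration of the idea Eco describes: a small, finite stock of interchangeable lines multiplies into an enormous number of distinct poems, while the reader’s “freedom” is limited to choosing among pre-established sequences.

import random

# Toy example: 4 line positions, 3 alternatives each, so 3**4 = 81 poems.
# Queneau's scheme, with 10 alternatives for each of 14 lines, yields 10**14.
alternatives = [
    ["The night unfolds its map of distant fires,",
     "A paper boat sets out across the ink,",
     "The clock forgets the hour it was keeping,"],
    ["and every window practices its gold.",
     "and silence gathers in the stairwell's bend.",
     "and rain rehearses names it never learned."],
    ["Somewhere a door is learning how to close;",
     "Somewhere a letter waits inside a drawer;",
     "Somewhere a garden argues with the frost;"],
    ["the reader turns the page, and it is morning.",
     "the reader shuffles lines, and it is morning.",
     "the reader starts again, and it is morning."],
]

def poem_count(alts):
    # Number of distinct poems this finite set of lines can generate.
    total = 1
    for position in alts:
        total *= len(position)
    return total

def random_poem(alts):
    # Compose one poem by picking a line for each position.
    return "\n".join(random.choice(position) for position in alts)

print(poem_count(alternatives), "possible poems")
print(random_poem(alternatives))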

At the last borderline of free textuality there can be a text that starts as a closed one, let us say, Little Red Riding Hood or The Arabian Nights, and that I, the reader, can modify according to my inclinations, thus elaborating a second text, which is no longer the same as the original one, whose author is myself, even though the affirmation of my authorship is a weapon against the concept of definite authorship. …

A BOOK OFFERS US A TEXT which, while being open to multiple interpretations, tells us something that cannot be modified. … Alas, with an already written book, whose fate is determined by repressive, authorial decision, we cannot do this. We are obliged to accept fate and to realise that we are unable to change destiny. A hypertextual and interactive novel allows us to practice freedom and creativity, and I hope that such inventive activity will be implemented in the schools of the future. But the already and definitely written novel War and Peace does not confront us with the unlimited possibilities of our imagination, but with the severe laws governing life and death. …

The email dead drop

From the L.A. Times‘ “Cyberspace Gives Al Qaeda Refuge“:

Simplicity seems to work best. One common method of communicating over the Internet is essentially an e-mail version of the classic dead drop.

Members of a cell are all given the same prearranged username and password for an e-mail account on an Internet service provider, or ISP, such as Hotmail or Yahoo, according to the recent joint report by the Treasury and Justice departments.

One member writes a message, but instead of sending it, he puts it in the ‘draft’ file and then logs off. Someone else can then sign onto the account using the same username and password, read the draft and then delete it.

‘Because the draft was never sent, the ISP does not retain a copy of it and there is no record of it traversing the Internet—it never went anywhere, its recipients came to it,’ the report said.
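
Mechanically there is very little to it. Here is a minimal sketch of the idea, assuming an ordinary IMAP mailbox; the server, account and folder name are hypothetical, not details from the report. Each party logs into the same account, reads whatever is sitting in the Drafts folder, and deletes it; nothing is ever sent.

import imaplib
import email

# Hypothetical shared account; every party knows the same credentials.
HOST = "imap.example.com"
USER = "shared.account@example.com"
PASSWORD = "shared-secret"

def read_and_clear_drafts():
    # Log in, print any saved drafts, then delete them. Because nothing is
    # ever sent, no message travels between the parties' own addresses.
    conn = imaplib.IMAP4_SSL(HOST)
    conn.login(USER, PASSWORD)
    conn.select("Drafts")
    _, data = conn.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        message = email.message_from_bytes(msg_data[0][1])
        print("Draft subject:", message.get("Subject"))
        conn.store(num, "+FLAGS", "\\Deleted")  # mark the draft for removal
    conn.expunge()
    conn.logout()

read_and_clear_drafts()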

Clay Shirky on the changes to publishing & media

From Parul Sehgal’s “Here Comes Clay Shirky” (Publishers Weekly: 21 June 2010):

PW: In April of this year, Wired‘s Kevin Kelly turned a Shirky quote—“Institutions will try to preserve the problem to which they are the solution”—into “the Shirky Principle,” in deference to the simple, yet powerful observation. … Kelly explained, “The Shirky Principle declares that complex solutions, like a company, or an industry, can become so dedicated to the problem they are the solution to, that often they inadvertently perpetuate the problem.”

CS: It is possible to think that the Internet will be a net positive for society while admitting that there are significant downsides—after all, it’s not a revolution if nobody loses.

No one will ever wonder, is there anything amusing for me on the Internet? That is a solved problem. What we should really care about are [the Internet’s] cultural uses.

In Here Comes Everybody I told the story of the Abbot of Sponheim who in 1492 wrote a book saying that if this printing press thing is allowed to expand, what will the scribes do for a living? But it was more important that Europe be literate than for scribes to have a job.

In a world where a book had to be a physical object, charging money was a way to cause more copies to come into circulation. In the digital world, charging money for something is a way to produce fewer copies. There is no way to preserve the status quo and not abandon that value.

Some of it’s the brilliant Upton Sinclair observation: “It’s hard to make a man understand something if his livelihood depends on him not understanding it.” From the laying on of hands of [Italian printer] Aldus Manutius on down, publishing has always been this way. This is a medium where a change to glue-based paperback binding constituted a revolution.

PW: When do you think a similar realization will come to book publishing?

CS: I think someone will make the imprint that bypasses the traditional distribution networks. Right now the big bottleneck is the head buyer at Barnes & Noble. That’s the seawall holding back the flood in publishing. Someone’s going to say, “I can do a business book or a vampire book or a romance novel, whatever, that might sell 60% of the units it would sell if I had full distribution and a multimillion dollar marketing campaign—but I can do it for 1% of the cost.” It has already happened a couple of times with specialty books. The moment of tip happens when enough things get joined up to create their own feedback loop, and the feedback loop in publishing changes when someone at Barnes & Noble says: “We can’t afford not to stock this particular book or series from an independent publisher.” It could be on Lulu, or iUniverse, whatever. And, I feel pretty confident saying it’s going to happen in the next five years.

These are their brilliant plans to save magazines?

From Jeremy W. Peters’ “In Magazine World, a New Crop of Chiefs” (The New York Times: 28 November 2010):

“This is the changing of the guard from an older school to a newer school,” said Justin B. Smith, president of the Atlantic Media Company. The changes, he added, were part of an inevitable evolution in publishing that was perhaps long overdue. “It is quite remarkable that it took until 2010, 15 years after the arrival of the Internet, for a new generation of leaders to emerge.”

At Time, the world’s largest magazine publisher, Mr. Griffin said he wanted to reintroduce the concept of “charging a fair price, and charging consumers who are interested in the product.” In other words, consumers can expect to pay more. “We spent a tremendous amount of money creating original content, original journalism, fact-checking, sending reporters overseas to cover wars,” he said. “You name it. What we’ve got to do as a business is get fair value for that.” Supplementing that approach, Mr. Griffin said, will be new partnerships within Time Warner, Time Inc.’s parent company, that allow magazines to take advantage of the vast film and visual resources at their disposal. One such partnership in the planning stages, he said, is a deal between a major cosmetics company and InStyle to broadcast from the red carpets of big Hollywood events like the Academy Awards and the Screen Actors Guild Awards.

But one thing Mr. Harty said the company was examining: expanding its licensed products. The company already pulls in more than a billion dollars a year selling products with a Better Homes and Gardens license at Wal-Mart stores. It is now planning to sell plants and bulbs with the magazine’s imprimatur directly to consumers. “We have relationships with all these consumers,” Mr. Harty said. “How can we figure out how to sell them goods and services? We believe that’s a key.”

David Pogue’s insights about tech over time

From David Pogue’s “The Lessons of 10 Years of Talking Tech” (The New York Times: 25 November 2010):

As tech decades go, this one has been a jaw-dropper. Since my first column in 2000, the tech world has not so much blossomed as exploded. Think of all the commonplace tech that didn’t even exist 10 years ago: HDTV, Blu-ray, GPS, Wi-Fi, Gmail, YouTube, iPod, iPhone, Kindle, Xbox, Wii, Facebook, Twitter, Android, online music stores, streaming movies and on and on.

With the turkey cooking, this seems like a good moment to review, to reminisce — and to distill some insight from the first decade in the new tech millennium.

Things don’t replace things; they just splinter. I can’t tell you how exhausting it is to keep hearing pundits say that some product is the “iPhone killer” or the “Kindle killer.” Listen, dudes: the history of consumer tech is branching, not replacing.

Things don’t replace things; they just add on. Sooner or later, everything goes on-demand. The last 10 years have brought a sweeping switch from tape and paper storage to digital downloads. Music, TV shows, movies, photos and now books and newspapers. We want instant access. We want it easy.

Some people’s gadgets determine their self-esteem. … Today’s gadgets are intensely personal. Your phone or camera or music player makes a statement, reflects your style and character. No wonder some people interpret criticisms of a product as a criticism of their choices. By extension, it’s a critique of them.

Everybody reads with a lens. … feelings run just as strongly in the tech realm. You can’t use the word “Apple,” “Microsoft” or “Google” in a sentence these days without stirring up emotion.

It’s not that hard to tell the winners from the losers. … There was the Microsoft Spot Watch (2003). This was a wireless wristwatch that could display your appointments and messages — but cost $10 a month, had to be recharged nightly and wouldn’t work outside your home city unless you filled out a Web form in advance.

Some concepts’ time may never come. The same “breakthrough” ideas keep surfacing — and bombing, year after year. For the love of Mike, people, nobody wants videophones!

Teenagers do not want “communicators” that do nothing but send text messages, either (AT&T Ogo, Sony Mylo, Motorola V200). People do not want to surf the Internet on their TV screens (WebTV, AOLTV, Google TV). And give it up on the stripped-down kitchen “Internet appliances” (3Com Audrey, Netpliance i-Opener, Virgin Webplayer). Nobody has ever bought one, and nobody ever will.

Forget about forever — nothing lasts a year. Of the thousands of products I’ve reviewed in 10 years, only a handful are still on the market. Oh, you can find some gadgets whose descendants are still around: iPod, BlackBerry, Internet Explorer and so on.

But it’s mind-frying to contemplate the millions of dollars and person-years that were spent on products and services that now fill the Great Tech Graveyard: Olympus M-Robe. PocketPC. Smart Display. MicroMV. MSN Explorer. Aibo. All those PlaysForSure music players, all those Palm organizers, all those GPS units you had to load up with maps from your computer.

Everybody knows that’s the way tech goes. The trick is to accept your gadget’s obsolescence at the time you buy it…

Nobody can keep up. Everywhere I go, I meet people who express the same reaction to consumer tech today: there’s too much stuff coming too fast. It’s impossible to keep up with trends, to know what to buy, to avoid feeling left behind. They’re right. There’s never been a period of greater technological change. You couldn’t keep up with all of it if you tried.

Australian police: don’t bank online with Windows

From Munir Kotadia’s “NSW Police: Don’t use Windows for internet banking” (ITnews: 9 October 2009):

Consumers wanting to safely connect to their internet banking service should use Linux or the Apple iPhone, according to a detective inspector from the NSW Police, who was giving evidence on behalf of the NSW Government at the public hearing into Cybercrime today in Sydney.

Detective Inspector Bruce van der Graaf from the Computer Crime Investigation Unit told the hearing that he uses two rules to protect himself from cybercriminals when banking online.

The first rule, he said, was to never click on hyperlinks to the banking site and the second was to avoid Microsoft Windows.

“If you are using the internet for a commercial transaction, use a Linux boot up disk – such as Ubuntu or some of the other flavours. Puppylinux is a nice small distribution that boots up fairly quickly.”

Van der Graaf also mentioned the iPhone, which he called “quite safe” for internet banking.

“Another option is the Apple iPhone. It is only capable of running one process at a time so there is really no danger from infection,” he said.

Malware forges online bank statements to hide fraud

From Kim Zetter’s “New Malware Re-Writes Online Bank Statements to Cover Fraud” (Wired: 30 September 2009):

New malware being used by cybercrooks does more than let hackers loot a bank account; it hides evidence of a victim’s dwindling balance by rewriting online bank statements on the fly, according to a new report.

The sophisticated hack uses a Trojan horse program installed on the victim’s machine that alters html coding before it’s displayed in the user’s browser, to either erase evidence of a money transfer transaction entirely from a bank statement, or alter the amount of money transfers and balances.

The ruse buys the crooks time before a victim discovers the fraud, though won’t work if a victim uses an uninfected machine to check his or her bank balance.

The novel technique was employed in August by a gang who targeted customers of leading German banks and stole Euro 300,000 in three weeks, according to Yuval Ben-Itzhak, chief technology officer of computer security firm Finjan.

The victims’ computers are infected with the Trojan, known as URLZone, after visiting compromised legitimate web sites or rogue sites set up by the hackers.

Once a victim is infected, the malware grabs the consumer’s log in credentials to their bank account, then contacts a control center hosted on a machine in Ukraine for further instructions. The control center tells the Trojan how much money to wire transfer, and where to send it. To avoid tripping a bank’s automated anti-fraud detectors, the malware will withdraw random amounts, and check to make sure the withdrawal doesn’t exceed the victim’s balance.

The money gets transferred to the legitimate accounts of unsuspecting money mules who’ve been recruited online for work-at-home gigs, never suspecting that the money they’re allowing to flow through their account is being laundered. The mule transfers the money to the crook’s chosen account. The cyber gang Finjan tracked used each mule only twice, to avoid fraud pattern detection.

The researchers also found statistics in the command tool showing that out of 90,000 visitors to the gang’s rogue and compromised websites, 6,400 were infected with the URLZone trojan. Most of the attacks Finjan observed affected people using Internet Explorer browsers …

Finjan provided law enforcement officials with details about the gang’s activities and says the hosting company for the Ukraine server has since suspended the domain for the command and control center. But Finjan estimates that a gang using the scheme unimpeded could rake in about $7.3 million annually.

What Google’s book settlement means

From Robert Darnton’s “Google & the Future of Books” (The New York Review of Books: 12 February 2009):

As the Enlightenment faded in the early nineteenth century, professionalization set in. You can follow the process by comparing the Encyclopédie of Diderot, which organized knowledge into an organic whole dominated by the faculty of reason, with its successor from the end of the eighteenth century, the Encyclopédie méthodique, which divided knowledge into fields that we can recognize today: chemistry, physics, history, mathematics, and the rest. In the nineteenth century, those fields turned into professions, certified by Ph.D.s and guarded by professional associations. They metamorphosed into departments of universities, and by the twentieth century they had left their mark on campuses…

Along the way, professional journals sprouted throughout the fields, subfields, and sub-subfields. The learned societies produced them, and the libraries bought them. This system worked well for about a hundred years. Then commercial publishers discovered that they could make a fortune by selling subscriptions to the journals. Once a university library subscribed, the students and professors came to expect an uninterrupted flow of issues. The price could be ratcheted up without causing cancellations, because the libraries paid for the subscriptions and the professors did not. Best of all, the professors provided free or nearly free labor. They wrote the articles, refereed submissions, and served on editorial boards, partly to spread knowledge in the Enlightenment fashion, but mainly to advance their own careers.

The result stands out on the acquisitions budget of every research library: the Journal of Comparative Neurology now costs $25,910 for a year’s subscription; Tetrahedron costs $17,969 (or $39,739, if bundled with related publications as a Tetrahedron package); the average price of a chemistry journal is $3,490; and the ripple effects have damaged intellectual life throughout the world of learning. Owing to the skyrocketing cost of serials, libraries that used to spend 50 percent of their acquisitions budget on monographs now spend 25 percent or less. University presses, which depend on sales to libraries, cannot cover their costs by publishing monographs. And young scholars who depend on publishing to advance their careers are now in danger of perishing.

The eighteenth-century Republic of Letters had been transformed into a professional Republic of Learning, and it is now open to amateurs—amateurs in the best sense of the word, lovers of learning among the general citizenry. Openness is operating everywhere, thanks to “open access” repositories of digitized articles available free of charge, the Open Content Alliance, the Open Knowledge Commons, OpenCourseWare, the Internet Archive, and openly amateur enterprises like Wikipedia. The democratization of knowledge now seems to be at our fingertips. We can make the Enlightenment ideal come to life in reality.

What provoked these jeremianic-utopian reflections? Google. Four years ago, Google began digitizing books from research libraries, providing full-text searching and making books in the public domain available on the Internet at no cost to the viewer. For example, it is now possible for anyone, anywhere to view and download a digital copy of the 1871 first edition of Middlemarch that is in the collection of the Bodleian Library at Oxford. Everyone profited, including Google, which collected revenue from some discreet advertising attached to the service, Google Book Search. Google also digitized an ever-increasing number of library books that were protected by copyright in order to provide search services that displayed small snippets of the text. In September and October 2005, a group of authors and publishers brought a class action suit against Google, alleging violation of copyright. Last October 28, after lengthy negotiations, the opposing parties announced agreement on a settlement, which is subject to approval by the US District Court for the Southern District of New York.[2]

The settlement creates an enterprise known as the Book Rights Registry to represent the interests of the copyright holders. Google will sell access to a gigantic data bank composed primarily of copyrighted, out-of-print books digitized from the research libraries. Colleges, universities, and other organizations will be able to subscribe by paying for an “institutional license” providing access to the data bank. A “public access license” will make this material available to public libraries, where Google will provide free viewing of the digitized books on one computer terminal. And individuals also will be able to access and print out digitized versions of the books by purchasing a “consumer license” from Google, which will cooperate with the registry for the distribution of all the revenue to copyright holders. Google will retain 37 percent, and the registry will distribute 63 percent among the rightsholders.

Meanwhile, Google will continue to make books in the public domain available for users to read, download, and print, free of charge. Of the seven million books that Google reportedly had digitized by November 2008, one million are works in the public domain; one million are in copyright and in print; and five million are in copyright but out of print. It is this last category that will furnish the bulk of the books to be made available through the institutional license.

Many of the in-copyright and in-print books will not be available in the data bank unless the copyright owners opt to include them. They will continue to be sold in the normal fashion as printed books and also could be marketed to individual customers as digitized copies, accessible through the consumer license for downloading and reading, perhaps eventually on e-book readers such as Amazon’s Kindle.

After reading the settlement and letting its terms sink in—no easy task, as it runs to 134 pages and 15 appendices of legalese—one is likely to be dumbfounded: here is a proposal that could result in the world’s largest library. It would, to be sure, be a digital library, but it could dwarf the Library of Congress and all the national libraries of Europe. Moreover, in pursuing the terms of the settlement with the authors and publishers, Google could also become the world’s largest book business—not a chain of stores but an electronic supply service that could out-Amazon Amazon.

An enterprise on such a scale is bound to elicit reactions of the two kinds that I have been discussing: on the one hand, utopian enthusiasm; on the other, jeremiads about the danger of concentrating power to control access to information.

Google is not a guild, and it did not set out to create a monopoly. On the contrary, it has pursued a laudable goal: promoting access to information. But the class action character of the settlement makes Google invulnerable to competition. Most book authors and publishers who own US copyrights are automatically covered by the settlement. They can opt out of it; but whatever they do, no new digitizing enterprise can get off the ground without winning their assent one by one, a practical impossibility, or without becoming mired down in another class action suit. If approved by the court—a process that could take as much as two years—the settlement will give Google control over the digitizing of virtually all books covered by copyright in the United States.

Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

Google’s record suggests that it will not abuse its double-barreled fiscal-legal power. But what will happen if its current leaders sell the company or retire? The public will discover the answer from the prices that the future Google charges, especially the price of the institutional subscription licenses. The settlement leaves Google free to negotiate deals with each of its clients, although it announces two guiding principles: “(1) the realization of revenue at market rates for each Book and license on behalf of the Rightsholders and (2) the realization of broad access to the Books by the public, including institutions of higher education.”

What will happen if Google favors profitability over access? Nothing, if I read the terms of the settlement correctly. Only the registry, acting for the copyright holders, has the power to force a change in the subscription prices charged by Google, and there is no reason to expect the registry to object if the prices are too high. Google may choose to be generous in its pricing, and I have reason to hope it may do so; but it could also employ a strategy comparable to the one that proved to be so effective in pushing up the price of scholarly journals: first, entice subscribers with low initial rates, and then, once they are hooked, ratchet up the rates as high as the traffic will bear.

RFID security problems

2005

From Brian Krebs’ “Leaving Las Vegas: So Long DefCon and Blackhat” (The Washington Post: 1 August 2005):

DefCon 13 also was notable for being the location where two new world records were set — both involved shooting certain electronic signals unprecedented distances. Los Angeles-based Flexilis set the world record for transmitting data to and from a “passive” radio frequency identification (RFID) card — covering a distance of more than 69 feet. (Active RFID — the kind being integrated into foreign passports, for example — differs from passive RFID in that it emits its own magnetic signal and can only be detected from a much shorter distance.)

The second record set this year at DefCon was pulled off by some teens from Cincinnati, who broke the world record they set last year by building a device capable of maintaining an unamplified, 11-megabit 802.11b wireless Internet connection over a distance of 125 miles (the network actually spanned from Utah into Nevada).

From Andrew Brandt’s “Black Hat, Lynn Settle with Cisco, ISS” (PC World: 29 July 2005):

Security researcher Kevin Mahaffey makes a final adjustment to a series of radio antennas; Mahaffey used the directional antennas in a demonstration during his presentation, “Long Range RFID and its Security Implications.” Mahaffey and two of his colleagues demonstrated how he could increase the “read range” of radio frequency identification (RFID) tags from the typical four to six inches to approximately 50 feet. Mahaffey said the tags could be read at a longer distance, but he wanted to perform the demonstration in the room where he gave the presentation, and that was the greatest distance within the room that he could demonstrate. RFID tags such as the one Mahaffey tested will begin to appear in U.S. passports later this year or next year.

2006

From Joris Evers and Declan McCullagh’s “Researchers: E-passports pose security risk” (CNET: 5 August 2006):

At a pair of security conferences here, researchers demonstrated that passports equipped with radio frequency identification (RFID) tags can be cloned with a laptop equipped with a $200 RFID reader and a similarly inexpensive smart card writer. In addition, they suggested that RFID tags embedded in travel documents could identify U.S. passports from a distance, possibly letting terrorists use them as a trigger for explosives.

At the Black Hat conference, Lukas Grunwald, a researcher with DN-Systems in Hildesheim, Germany, demonstrated that he could copy data stored in an RFID tag from his passport and write the data to a smart card equipped with an RFID chip.

From Kim Zetter’s “Hackers Clone E-Passports” (Wired: 3 August 2006):

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker — Walluf, Germany-based ACG Identification Technologies — but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports — called Golden Reader Tool and made by secunet Security Networks — and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader — which can also act as a writer — and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

Although he can clone the tag, Grunwald says it’s not possible, as far as he can tell, to change data on the chip, such as the name or birth date, without being detected. That’s because the passport uses cryptographic hashes to authenticate the data.

Grunwald’s technique requires a counterfeiter to have physical possession of the original passport for a time. A forger could not surreptitiously clone a passport in a traveler’s pocket or purse because of a built-in privacy feature called Basic Access Control that requires officials to unlock a passport’s RFID chip before reading it. The chip can only be unlocked with a unique key derived from the machine-readable data printed on the passport’s page.
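
For the curious, the “unique key derived from the machine-readable data” works roughly like this under ICAO’s Basic Access Control scheme. The sketch below is a simplified reading of the public spec, not code from Grunwald’s toolkit, and the sample values are made up: the document number, date of birth and date of expiry (plus their check digits) are hashed, and the first 16 bytes of the digest seed the keys used to unlock the chip.

import hashlib

def bac_key_seed(doc_number, birth_date, expiry_date, check_digits):
    # Simplified ICAO 9303 Basic Access Control derivation: hash the MRZ
    # fields plus their check digits; the first 16 bytes become the key seed.
    mrz_information = (doc_number + check_digits[0] +
                       birth_date + check_digits[1] +
                       expiry_date + check_digits[2])
    return hashlib.sha1(mrz_information.encode("ascii")).digest()[:16]

def derive_key(seed, counter):
    # counter = 1 gives the encryption key, counter = 2 the MAC key; in the
    # real protocol these 16 bytes are used as a two-key 3DES key.
    return hashlib.sha1(seed + counter.to_bytes(4, "big")).digest()[:16]

# Purely hypothetical MRZ values, for illustration only.
seed = bac_key_seed("L898902C3", "740812", "120415", ("6", "2", "9"))
k_enc, k_mac = derive_key(seed, 1), derive_key(seed, 2)
print(k_enc.hex(), k_mac.hex())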

To produce a clone, Grunwald has to program his copycat chip to answer to the key printed on the new passport. Alternatively, he can program the clone to dispense with Basic Access Control, which is an optional feature in the specification.

As planned, U.S. e-passports will contain a web of metal fiber embedded in the front cover of the documents to shield them from unauthorized readers. Though Basic Access Control would keep the chip from yielding useful information to attackers, it would still announce its presence to anyone with the right equipment. The government added the shielding after privacy activists expressed worries that a terrorist could simply point a reader at a crowd and identify foreign travelers.

In theory, with metal fibers in the front cover, nobody can sniff out the presence of an e-passport that’s closed. But [Kevin Mahaffey and John Hering of Flexilis] demonstrated in their video how even if a passport opens only half an inch — such as it might if placed in a purse or backpack — it can reveal itself to a reader at least two feet away.

In addition to cloning passport chips, Grunwald has been able to clone RFID ticket cards used by students at universities to buy cafeteria meals and add money to the balance on the cards.

He and his partners were also able to crash RFID-enabled alarm systems designed to sound when an intruder breaks a window or door to gain entry. Such systems require workers to pass an RFID card over a reader to turn the system on and off. Grunwald found that by manipulating data on the RFID chip he could crash the system, opening the way for a thief to break into the building through a window or door.

And they were able to clone and manipulate RFID tags used in hotel room key cards and corporate access cards and create a master key card to open every room in a hotel, office or other facility. He was able, for example, to clone Mifare, the most commonly used key-access system, designed by Philips Electronics. To create a master key he simply needed two or three key cards for different rooms to determine the structure of the cards. Of the 10 different types of RFID systems he examined that were being used in hotels, none used encryption.

Many of the card systems that did use encryption failed to change the default key that manufacturers program into the access card system before shipping, or they used sample keys that the manufacturer includes in instructions sent with the cards. Grunwald and his partners created a dictionary database of all the sample keys they found in such literature (much of which they found accidentally published on purchasers’ websites) to conduct what’s known as a dictionary attack. When attacking a new access card system, their RFDump program would search the list until it found the key that unlocked a card’s encryption.

“I was really surprised we were able to open about 75 percent of all the cards we collected,” he says.
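
The dictionary attack Grunwald describes is conceptually very simple. Here is a hedged sketch of the pattern, not of RFDump itself; the key list and the try_key callable are stand-ins for the harvested vendor defaults and for whatever reader-specific authentication command a real tool would issue.

# Candidate keys of the kind published as factory defaults or samples in
# vendor documentation (Mifare-style 6-byte keys, values illustrative).
KNOWN_DEFAULT_KEYS = [
    bytes.fromhex("FFFFFFFFFFFF"),
    bytes.fromhex("A0A1A2A3A4A5"),
    bytes.fromhex("000000000000"),
]

def dictionary_attack(try_key, candidate_keys=KNOWN_DEFAULT_KEYS):
    # Return the first candidate key the card accepts, or None. Everything
    # reader- and card-specific is hidden behind the try_key callable, which
    # should attempt one authentication and report success or failure.
    for key in candidate_keys:
        if try_key(key):
            return key
    return None

# Example against a dummy "card" that still uses a factory default key.
dummy_card_key = bytes.fromhex("FFFFFFFFFFFF")
found = dictionary_attack(lambda k: k == dummy_card_key)
print("recovered key:", found.hex() if found else "none")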

2009

From Thomas Ricker’s “Video: Hacker war drives San Francisco cloning RFID passports” (Engadget: 2 February 2009):

Using a $250 Motorola RFID reader and antenna connected to his laptop, Chris recently drove around San Francisco reading RFID tags from passports, driver licenses, and other identity documents. In just 20 minutes, he found and cloned the passports of two very unaware US citizens.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
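
This scheme is what security researchers now call a domain generation algorithm. The sketch below is not Conficker’s actual generator, which was recovered by reverse engineering and seeded differently; it is just an illustration of the general pattern: seed a pseudo-random generator with the date, emit a fixed number of gibberish labels, and append a rotating top-level domain, so that the worm and its operators can compute the same daily list independently.

import datetime
import random
import string

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(date, count=250):
    # Illustrative domain generation algorithm (not Conficker's real one).
    # Seeding the generator with the date means the malware and its operators
    # derive identical lists, so the operators need only register one name.
    rng = random.Random(date.toordinal())
    domains = []
    for _ in range(count):
        length = rng.randint(7, 12)
        label = "".join(rng.choice(string.ascii_lowercase) for _ in range(length))
        domains.append(label + rng.choice(TLDS))
    return domains

print(daily_domains(datetime.date(2009, 1, 1))[:5])  # first 5 of that day's 250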

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

Could Green Dam lead to the largest botnet in history?

From Rob Cottingham’s “From blocking to botnet: Censorship isn’t the only problem with China’s new Internet blocking software” (Social Signal: 10 June 2009):

Any blocking software needs to update itself from time to time: at the very least to freshen its database of forbidden content, and more than likely to fix bugs, add features and improve performance. (Most anti-virus software does this.)

If all the software does is to refresh the list of banned sites, that limits the potential for abuse. But if the software is loading new executable code onto the computer, suddenly there’s the potential for something a lot bigger.

Say you’re a high-ranking official in the Chinese military. And let’s say you have some responsibility for the state’s capacity to wage so-called cyber warfare: digital assaults on an enemy’s technological infrastructure.

It strikes you: there’s a single backdoor into more than 40 million Chinese computers, capable of installing… well, nearly anything you want.

What if you used that backdoor, not just to update blocking software, but to create something else?

Say, the biggest botnet in history?

Still, a botnet 40 million strong (plus the installed base already in place in Chinese schools and other institutions) at the beck and call of the military is potentially a formidable weapon. Even if the Chinese government has no intention today of using Green Dam for anything other than blocking pornography, the temptation to repurpose it for military purposes may prove to be overwhelming.

Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.

Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants. …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.

Newspapers are doomed

From Jeff Sigmund’s “Newspaper Web Site Audience Increases More Than Ten Percent In First Quarter To 73.3 Million Visitors” (Newspaper Association of America: 23 April 2009):

Newspaper Web sites attracted more than 73.3 million monthly unique visitors on average (43.6 percent of all Internet users) in the first quarter of 2009, a record number that reflects a 10.5 percent increase over the same period a year ago, according to a custom analysis provided by Nielsen Online for the Newspaper Association of America.

In addition, newspaper Web site visitors generated an average of more than 3.5 billion page views per month throughout the quarter, an increase of 12.8 percent over the same period a year ago (3.1 billion page views).

Contrast that with the article on Craigslist in Wikipedia (1 May 2009):

The site serves over twenty billion page views per month, putting it in 28th place overall among web sites world wide, ninth place overall among web sites in the United States (per Alexa.com on March 27, 2009), to over fifty million unique monthly visitors in the United States alone (per Compete.com on April 7, 2009). As of March 17, 2009 it was ranked 7th on Alexa. With over forty million new classified advertisements each month, Craigslist is the leading classifieds service in any medium. The site receives over one million new job listings each month, making it one of the top job boards in the world.

Even at its best, the entire newspaper industry's web sites generate less than a fifth of the page views Craigslist serves each month: 3.5 billion versus more than 20 billion.
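As a rough back-of-the-envelope check (the comparison is mine, using only the figures quoted above), the gap works out like this:

```python
# Compare monthly page views using the figures quoted above.
newspaper_sites = 3.5e9   # NAA: average monthly page views across newspaper sites, Q1 2009
craigslist = 20e9         # Wikipedia: "over twenty billion page views per month"

ratio = newspaper_sites / craigslist
print(f"Newspaper sites serve about {ratio:.0%} of Craigslist's monthly page views")
# -> about 18%, i.e. less than a fifth
```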

Newspapers are doomed Read More »

Criminal goods & services sold on the black market

From Ellen Messmer’s “Symantec takes cybercrime snapshot with ‘Underground Economy’ report” (Network World: 24 November 2008):

The “Underground Economy” report [from Symantec] contains a snapshot of online criminal activity observed from July 2007 to June 2008 by a Symantec team monitoring activities in Internet Relay Chat (IRC) and Web-based forums where stolen goods are advertised. Symantec estimates the total value of the goods advertised on what it calls “underground servers” was about $276 million, with credit-card information accounting for 59% of the total.

If that purloined information were successfully exploited, it probably would bring the buyers about $5 billion, according to the report — just a drop in the bucket, points out David Cowings, senior manager of operations at Symantec Security Response.

“Ninety-eight percent of the underground-economy servers have life spans of less than 6 months,” Cowings says. “The smallest IRC server we saw had five channels and 40 users. The largest IRC server network had 28,000 channels and 90,000 users.”

In the one year covered by the report, Symantec’s team observed more than 69,000 distinct advertisers and 44 million total messages online selling illicit credit-card and financial data, but the 10 most active advertisers appeared to account for 11% of the total messages posted and $575,000 in sales.

According to the report, a bank-account credential was selling for $10 to $1,000, depending on the balance and location of the account. Sellers also hawked specific financial sites’ vulnerabilities for an average price of $740, though prices did go as high as $2,999.

In other spots, the average price for a keystroke logger — malware used to capture a victim’s information — was an affordable $23. Attack tools, such as botnets, sold for an average of $225. “For $10, you could host a phishing site on someone’s server or compromised Web site,” Cowings says.

Desktop computer games appeared to be the most-pirated software, accounting for 49% of all file instances that Symantec observed. The second-highest category was utility applications; third-highest was multimedia productivity applications, such as photograph or HTML editors.

Criminal goods & services sold on the black market Read More »

Another huge botnet

From Kelly Jackson Higgins’ “Researchers Find Massive Botnet On Nearly 2 Million Infected Consumer, Business, Government PCs” (Dark Reading: 22 April 2009):

Researchers have discovered a major botnet operating out of the Ukraine that has infected 1.9 million machines, including large corporate and government PCs mainly in the U.S.

The botnet, which appears to be larger than the infamous Storm botnet was in its heyday, has infected machines from some 77 government-owned domains — 51 of which are U.S. government ones, according to Ophir Shalitin, marketing director of Finjan, which recently found the botnet. Shalitin says the botnet is controlled by six individuals and is hosted in Ukraine.

Aside from its massive size and scope, what is also striking about the botnet is what its malware can do to an infected machine. The malware lets an attacker read the victim’s email, communicate via HTTP in the botnet, inject code into other processes, visit Websites without the user knowing, and register as a background service on the infected machine, for instance.

Finjan says victims are infected when visiting legitimate Websites containing a Trojan that the company says is detected by only four of 39 anti-malware tools, according to a VirusTotal report run by Finjan researchers.

Around 45 percent of the bots are in the U.S., and the infected machines run Windows XP. Nearly 80 percent run Internet Explorer; 15 percent, Firefox; 3 percent, Opera; and 1 percent, Safari. Finjan says the bots were found in banks and large corporations, as well as on consumer machines.

Another huge botnet Read More »

Google’s server farm revealed

From Nicholas Carr’s “Google lifts its skirts” (Rough Type: 2 April 2009):

I was particularly surprised to learn that Google rented all its data-center space until 2005, when it built its first center. That implies that The Dalles, Oregon, plant was the company’s first official data smelter. Each of Google’s containers holds 1,160 servers, and the facility’s original server building had 45 containers, which means that it probably was running a total of around 52,000 servers. Since The Dalles plant has three server buildings, that means – and here I’m drawing a speculative conclusion – that it might be running around 150,000 servers altogether.

Here are some more details, from Rich Miller’s report:

The Google facility features a “container hanger” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the hot aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.

Google’s design focused on “power above, water below,” according to [Jimmy] Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the hot aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment …

[Urs] Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.
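Carr's estimate and the power figures in Miller's report are easy to sanity-check. Here is a minimal sketch of the arithmetic, assuming (as Carr speculates) that the other two buildings at The Dalles resemble the original 45-container hangar, and assuming a standard 40-foot shipping container footprint of roughly 8 by 40 feet; neither assumption appears in the quoted reports:

```python
# Sanity-check of the figures quoted above; assumptions are noted inline.
servers_per_container = 1160
containers_per_building = 45
buildings = 3                    # assumption: the other two buildings match the first

first_building = servers_per_container * containers_per_building
total = first_building * buildings
print(first_building)            # 52,200 -- Carr's "around 52,000 servers"
print(total)                     # 156,600 -- his speculative "around 150,000 servers"

container_power_w = 250_000      # "250 kilowatts of power" per container
container_area_sqft = 8 * 40     # assumption: standard 40-ft container footprint
print(container_power_w / container_area_sqft)    # ~781 W/sq ft, matching "more than 780"
print(container_power_w / servers_per_container)  # ~215 W per server
```

The power-density line is what makes the 40-foot-container assumption plausible: 250 kW spread over about 320 square feet comes out just above the 780 watts per square foot that Miller reports.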

Google’s server farm revealed Read More »

Now that the Seattle Post-Intelligencer has switched to the Web …

From William Yardley and Richard Pérez-Peña’s “Seattle Paper Shifts Entirely to the Web” (The New York Times: 16 March 2009):

The P-I, as it is called, will resemble a local Huffington Post more than a traditional newspaper, with a news staff of about 20 people rather than the 165 it had, and a site with mostly commentary, advice and links to other news sites, along with some original reporting.

The new P-I site has recruited some current and former government officials, including a former mayor, a former police chief and the current head of Seattle schools, to write columns, and it will repackage some material from Hearst’s large stable of magazines. It will keep some of the paper’s popular columnists and bloggers and the large number of unpaid local bloggers whose work appears on the site.

Because the newspaper has had no business staff of its own, the new operation plans to hire more than 20 people in areas like ad sales.

Now that the Seattle Post-Intelligencer has switched to the Web … Read More »

Why everyone wants a computer: socializing

From Paul Graham’s “Why TV Lost” (Paul Graham: March 2009):

The somewhat more surprising force was one specific type of innovation: social applications. The average teenage kid has a pretty much infinite capacity for talking to their friends. But they can’t physically be with them all the time. When I was in high school the solution was the telephone. Now it’s social networks, multiplayer games, and various messaging applications. The way you reach them all is through a computer. Which means every teenage kid (a) wants a computer with an Internet connection, (b) has an incentive to figure out how to use it, and (c) spends countless hours in front of it.

This was the most powerful force of all. This was what made everyone want computers. Nerds got computers because they liked them. Then gamers got them to play games on. But it was connecting to other people that got everyone else: that’s what made even grandmas and 14 year old girls want computers.

Why everyone wants a computer: socializing Read More »

The future of TV is the Internet

From Paul Graham’s “Why TV Lost” (Paul Graham: March 2009):

About twenty years ago people noticed computers and TV were on a collision course and started to speculate about what they’d produce when they converged. We now know the answer: computers. It’s clear now that even by using the word “convergence” we were giving TV too much credit. This won’t be convergence so much as replacement. People may still watch things they call “TV shows,” but they’ll watch them mostly on computers.

Whether [TV networks] like it or not, big changes are coming, because the Internet dissolves the two cornerstones of broadcast media: synchronicity and locality. On the Internet, you don’t have to send everyone the same signal, and you don’t have to send it to them from a local source. People will watch what they want when they want it, and group themselves according to whatever shared interest they feel most strongly. Maybe their strongest shared interest will be their physical location, but I’m guessing not. Which means local TV is probably dead. It was an artifact of limitations imposed by old technology.

The future of TV is the Internet Read More »

Facebook & the Dunbar number

From The Economist‘s “Primates on Facebook” (26 February 2009):

Robin Dunbar, an anthropologist who now works at Oxford University, concluded that the cognitive power of the brain limits the size of the social network that an individual of any given species can develop. Extrapolating from the brain sizes and social networks of apes, Dr Dunbar suggested that the size of the human brain allows stable networks of about 148. Rounded to 150, this has become famous as “the Dunbar number”.

Many institutions, from neolithic villages to the maniples of the Roman army, seem to be organised around the Dunbar number. Because everybody knows everybody else, such groups can run with a minimum of bureaucracy. But that does not prove Dr Dunbar’s hypothesis is correct, and other anthropologists, such as Russell Bernard and Peter Killworth, have come up with estimates of almost double the Dunbar number for the upper limit of human groups. Moreover, sociologists also distinguish between a person’s wider network, as described by the Dunbar number or something similar, and his social “core”. Peter Marsden, of Harvard University, found that Americans, even if they socialise a lot, tend to have only a handful of individuals with whom they “can discuss important matters”. A subsequent study found, to widespread concern, that this number is on a downward trend.

The rise of online social networks, with their troves of data, might shed some light on these matters. So The Economist asked Cameron Marlow, the “in-house sociologist” at Facebook, to crunch some numbers. Dr Marlow found that the average number of “friends” in a Facebook network is 120, consistent with Dr Dunbar’s hypothesis, and that women tend to have somewhat more than men. But the range is large, and some people have networks numbering more than 500, so the hypothesis cannot yet be regarded as proven.

What also struck Dr Marlow, however, was that the number of people on an individual’s friend list with whom he (or she) frequently interacts is remarkably small and stable. The more “active” or intimate the interaction, the smaller and more stable the group.

Thus an average man—one with 120 friends—generally responds to the postings of only seven of those friends by leaving comments on the posting individual’s photos, status messages or “wall”. An average woman is slightly more sociable, responding to ten. When it comes to two-way communication such as e-mails or chats, the average man interacts with only four people and the average woman with six. Among those Facebook users with 500 friends, these numbers are somewhat higher, but not hugely so. Men leave comments for 17 friends, women for 26. Men communicate with ten, women with 16.

What mainly goes up, therefore, is not the core network but the number of casual contacts that people track more passively. …

Put differently, people who are members of online social networks are not so much “networking” as they are “broadcasting their lives to an outer tier of acquaintances who aren’t necessarily inside the Dunbar circle,” says Lee Rainie, the director of the Pew Internet & American Life Project, a polling organisation.

Facebook & the Dunbar number Read More »