
Umberto Eco on books

From Umberto Eco’s “Vegetal and mineral memory: The future of books” (Al-Ahram Weekly: 20—26 November 2003):

Libraries, over the centuries, have been the most important way of keeping our collective wisdom. They were and still are a sort of universal brain where we can retrieve what we have forgotten and what we still do not know. If you will allow me to use such a metaphor, a library is the best possible imitation, by human beings, of a divine mind, where the whole universe is viewed and understood at the same time. A person able to store in his or her mind the information provided by a great library would emulate in some way the mind of God. In other words, we have invented libraries because we know that we do not have divine powers, but we try to do our best to imitate them. …

First of all, we know that books are not ways of making somebody else think in our place; on the contrary, they are machines that provoke further thoughts. Only after the invention of writing was it possible to write such a masterpiece of spontaneous memory as Proust’s A la Recherche du Temps Perdu. Secondly, if once upon a time people needed to train their memories in order to remember things, after the invention of writing they had also to train their memories in order to remember books. Books challenge and improve memory; they do not narcotise it. …

YET IT IS EXACTLY AT THIS POINT that our unravelling activity must start because by hypertextual structure we usually mean two very different phenomena. First, there is the textual hypertext. In a traditional book one must read from left to right (or right to left, or up to down, according to different cultures) in a linear way. One can obviously skip through the pages, one—once arrived at page 300—can go back to check or re-read something at page 10—but this implies physical labour. In contrast to this, a hypertextual text is a multidimensional network or a maze in which every point or node can be potentially connected with any other node. Second, there is the systemic hypertext. The WWW is the Great Mother of All Hypertexts, a world-wide library where you can, or you will in short time, pick up all the books you wish. The Web is the general system of all existing hypertexts. …

Simply, books have proved to be the most suitable instrument for transmitting information. There are two sorts of book: those to be read and those to be consulted. As far as books-to-be-read are concerned, the normal way of reading them is the one that I would call the ‘detective story way’. You start from page one, where the author tells you that a crime has been committed, you follow every path of the detection process until the end, and finally you discover that the guilty one was the butler. End of the book and end of your reading experience. …

Then there are books to be consulted, like handbooks and encyclopaedias. Encyclopaedias are conceived in order to be consulted and never read from the first to the last page. …

Hypertexts will certainly render encyclopaedias and handbooks obsolete. Yesterday, it was possible to have a whole encyclopaedia on a CD-ROM; today, it is possible to have it on line with the advantage that this permits cross references and the non-linear retrieval of information. …

Books belong to those kinds of instruments that, once invented, have not been further improved because they are already alright, such as the hammer, the knife, spoon or scissors. …

TWO NEW INVENTIONS, however, are on the verge of being industrially exploited. One is printing on demand: after scanning the catalogues of many libraries or publishing houses a reader can select the book he needs, and the operator will push a button, and the machine will print and bind a single copy using the font the reader likes. … Simply put: every book will be tailored according to the desires of the buyer, as happened with old manuscripts.

The second invention is the e-book where by inserting a micro-cassette in the book’s spine or by connecting it to the internet one can have a book printed out in front of us. Even in this case, however, we shall still have a book, though as different from our current ones as ours are different from old manuscripts on parchment, and as the first Shakespeare folio of 1623 is different from the last Penguin edition. Yet, up to now e-books have not proved to be commercially successful as their inventors hoped. … E-books will probably prove to be useful for consulting information, as happens with dictionaries or special documents. …

Indeed, there are a lot of new technological devices that have not made previous ones obsolete. Cars run faster than bicycles, but they have not rendered bicycles obsolete, and no new technological improvements can make a bicycle better than it was before. The idea that a new technology abolishes a previous one is frequently too simplistic. Though after the invention of photography painters did not feel obliged to serve any longer as craftsmen reproducing reality, this did not mean that Daguerre’s invention only encouraged abstract painting. There is a whole tradition in modern painting that could not have existed without photographic models: think, for instance, of hyper-realism. Here, reality is seen by the painter’s eye through the photographic eye. This means that in the history of culture it has never been the case that something has simply killed something else. Rather, a new invention has always profoundly changed an older one. …

The computer creates new modes of production and diffusion of printed documents. …

Today there are new hypertextual poetics according to which even a book-to-read, even a poem, can be transformed to hypertext. At this point we are shifting to question two, since the problem is no longer, or not only, a physical one, but rather one that concerns the very nature of creative activity, of the reading process, and in order to unravel this skein of questions we have first of all to decide what we mean by a hypertextual link. …

In order to understand how texts of this genre can work we should decide whether the textual universe we are discussing is limited and finite, limited but virtually infinite, infinite but limited, or unlimited and infinite.

First of all, we should make a distinction between systems and texts. A system, for instance a linguistic system, is the whole of the possibilities displayed by a given natural language. A finite set of grammatical rules allows the speaker to produce an infinite number of sentences, and every linguistic item can be interpreted in terms of other linguistic or other semiotic items—a word by a definition, an event by an example, an animal or a flower by an image, and so on and so forth. …

Grammars, dictionaries and encyclopaedias are systems: by using them you can produce all the texts you like. But a text itself is not a linguistic or an encyclopaedic system. A given text reduces the infinite or indefinite possibilities of a system to make up a closed universe. If I utter the sentence, ‘This morning I had for breakfast…’, for example, the dictionary allows me to list many possible items, provided they are all organic. But if I definitely produce my text and utter, ‘This morning I had for breakfast bread and butter’, then I have excluded cheese, caviar, pastrami and apples. A text castrates the infinite possibilities of a system. …

Take a fairy tale, like Little Red Riding Hood. The text starts from a given set of characters and situations—a little girl, a mother, a grandmother, a wolf, a wood—and through a series of finite steps arrives at a solution. Certainly, you can read the fairy tale as an allegory and attribute different moral meanings to the events and to the actions of the characters, but you cannot transform Little Red Riding Hood into Cinderella. … This seems trivial, but the radical mistake of many deconstructionists was to believe that you can do anything you want with a text. This is blatantly false. …

Now suppose that a finite and limited text is organised hypertextually by many links connecting given words with other words. In a dictionary or an encyclopaedia the word wolf is potentially connected to every other word that makes up part of its possible definition or description (wolf is connected to animal, to mammal, to ferocious, to legs, to fur, to eyes, to woods, to the names of the countries in which wolves exist, etc.). In Little Red Riding Hood, the wolf can be connected only with the textual sections in which it shows up or in which it is explicitly evoked. The series of possible links is finite and limited. How can hypertextual strategies be used to ‘open’ up a finite and limited text?

The first possibility is to make the text physically unlimited, in the sense that a story can be enriched by the successive contributions of different authors and in a double sense, let us say either two-dimensionally or three-dimensionally. By this I mean that given, for instance, Little Red Riding Hood, the first author proposes a starting situation (the girl enters the wood) and different contributors can then develop the story one after the other, for example, by having the girl meet not the wolf but Ali Baba, by having both enter an enchanted castle, having a confrontation with a magic crocodile, and so on, so that the story can continue for years. But the text can also be infinite in the sense that at every narrative disjunction, for instance, when the girl enters the wood, many authors can make many different choices. For one author, the girl may meet Pinocchio, for another she may be transformed into a swan, or enter the Pyramids and discover the treasury of the son of Tutankhamen. …

AT THIS POINT one can raise a question about the survival of the very notion of authorship and of the work of art, as an organic whole. And I want simply to inform my audience that this has already happened in the past without disturbing either authorship or organic wholes. The first example is that of the Italian Commedia dell’arte, in which upon a canovaccio, that is, a summary of the basic story, every performance, depending on the mood and fantasy of the actors, was different from every other so that we cannot identify any single work by a single author called Arlecchino servo di due padroni and can only record an uninterrupted series of performances, most of them definitely lost and all certainly different one from another.

Another example would be a jazz jam session. … What I want to say is that we are already accustomed to the idea of the absence of authorship in popular collective art in which every participant adds something, with experiences of jazz-like unending stories. …

A hypertext can give the illusion of opening up even a closed text: a detective story can be structured in such a way that its readers can select their own solution, deciding at the end if the guilty one should be the butler, the bishop, the detective, the narrator, the author or the reader. They can thus build up their own personal story. Such an idea is not a new one. Before the invention of computers, poets and narrators dreamt of a totally open text that readers could infinitely re-compose in different ways. Such was the idea of Le Livre, as extolled by Mallarmé. Raymond Queneau also invented a combinatorial algorithm by virtue of which it was possible to compose, from a finite set of lines, millions of poems. In the early sixties, Max Saporta wrote and published a novel whose pages could be displaced to compose different stories, and Nanni Balestrini gave a computer a disconnected list of verses that the machine combined in different ways to compose different poems. …

All these physically moveable texts give an impression of absolute freedom on the part of the reader, but this is only an impression, an illusion of freedom. The machinery that allows one to produce an infinite text with a finite number of elements has existed for millennia, and this is the alphabet. Using an alphabet with a limited number of letters one can produce billions of texts, and this is exactly what has been done from Homer to the present days. In contrast, a stimulus-text that provides us not with letters, or words, but with pre-established sequences of words, or of pages, does not set us free to invent anything we want. …

At the last borderline of free textuality there can be a text that starts as a closed one, let us say, Little Red Riding Hood or The Arabian Nights, and that I, the reader, can modify according to my inclinations, thus elaborating a second text, which is no longer the same as the original one, whose author is myself, even though the affirmation of my authorship is a weapon against the concept of definite authorship. …

A BOOK OFFERS US A TEXT which, while being open to multiple interpretations, tells us something that cannot be modified. … Alas, with an already written book, whose fate is determined by repressive, authorial decision, we cannot do this. We are obliged to accept fate and to realise that we are unable to change destiny. A hypertextual and interactive novel allows us to practice freedom and creativity, and I hope that such inventive activity will be implemented in the schools of the future. But the already and definitely written novel War and Peace does not confront us with the unlimited possibilities of our imagination, but with the severe laws governing life and death. …


How the Madden NFL videogame was developed

From Patrick Hruby’s “The Franchise: The inside story of how Madden NFL became a video game dynasty” (ESPN: 22 July 2010):

1982

Harvard grad and former Apple employee Trip Hawkins founds video game maker Electronic Arts, in part to create a football game; one year later, the company releases “One-on-One: Dr. J vs. Larry Bird,” the first game to feature licensed sports celebrities. Art imitates life.

1983-84

Hawkins approaches former Oakland Raiders coach and NFL television analyst John Madden to endorse a football game. Madden agrees, but insists on realistic game play with 22 on-screen players, a daunting technical challenge.

1988-90

EA releases the first Madden football game for the Apple II home computer; a subsequent Sega Genesis home console port blends the Apple II game’s realism with control pad-heavy, arcade-style action, becoming a smash hit.

[Image: Madden NFL covers]

You can measure the impact of “Madden” through its sales: as many as 2 million copies in a single week, 85 million copies since the game’s inception and more than $3 billion in total revenue. You can chart the game’s ascent, shoulder to shoulder, alongside the $20 billion-a-year video game industry, which is either co-opting Hollywood (see “Tomb Raider” and “Prince of Persia”) or topping it (opening-week gross of “Call of Duty: Modern Warfare 2”: $550 million; “The Dark Knight”: $204 million).

Some of the pain was financial. Just as EA brought its first games to market in 1983, the home video game industry imploded. In a two-year span, Coleco abandoned the business, Intellivision went from 1,200 employees to five and Atari infamously dumped thousands of unsold game cartridges into a New Mexico landfill. Toy retailers bailed, concluding that video games were a Cabbage Patch-style fad. Even at EA — a hot home computer startup — continued solvency was hardly assured.

In 1988, “John Madden Football” was released for the Apple II computer and became a modest commercial success.

THE STAKES WERE HIGH for a pair of upstart game makers, with a career-making opportunity and a $100,000 development contract on the line. In early 1990, Troy Lyndon and Mike Knox of San Diego-based Park Place Productions met with Hawkins to discuss building a “Madden” game for Sega’s upcoming home video game console, the Genesis. …

Because the game that made “Madden” a phenomenon wasn’t the initial Apple II release, it was the Genesis follow-up, a surprise smash spawned by an entirely different mindset. Hawkins wanted “Madden” to play out like the NFL. Equivalent stats. Similar play charts. Real football.

In 1990, EA had a market cap of about $60 million; three years later, that number swelled to $2 billion.

In 2004, EA paid the NFL a reported $300 million-plus for five years of exclusive rights to teams and players. The deal was later extended to 2013. Just like that, competing games went kaput. The franchise stands alone, triumphant, increasingly encumbered by its outsize success.

Hawkins left EA in the early 1990s to spearhead 3DO, an ill-fated console maker that became a doomed software house. An icy rift between the company and its founder ensued.


The Uncanny Valley, art forgery, & love


From Errol Morris’ “Bamboozling Ourselves (Part 2)” (The New York Times: 28 May 2009):

[Errol Morris:] The Uncanny Valley is a concept developed by the Japanese robot scientist Masahiro Mori. It concerns the design of humanoid robots. Mori’s theory is relatively simple. We tend to reject robots that look too much like people. Slight discrepancies and incongruities between what we look like and what they look like disturb us. The closer a robot resembles a human, the more critical we become, the more sensitive to slight discrepancies, variations, imperfections. However, if we go far enough away from the humanoid, then we much more readily accept the robot as being like us. This accounts for the success of so many movie robots — from R2-D2 to WALL-E. They act like humans but they don’t look like humans. There is a region of acceptability — the peaks around The Uncanny Valley, the zone of acceptability that includes completely human and sort of human but not too human. The existence of The Uncanny Valley also suggests that we are programmed by natural selection to scrutinize the behavior and appearance of others. Survival no doubt depends on such an innate ability.

EDWARD DOLNICK: [The art forger Van Meegeren] wants to avoid it. So his big challenge is he wants to paint a picture that other people are going to take as Vermeer, because Vermeer is a brand name, because Vermeer is going to bring him lots of money, if he can get away with it, but he can’t paint a Vermeer. He doesn’t have that skill. So how is he going to paint a picture that doesn’t look like a Vermeer, but that people are going to say, “Oh! It’s a Vermeer?” How’s he going to pull it off? It’s a tough challenge. Now here’s the point of The Uncanny Valley: as your imitation gets closer and closer to the real thing, people think, “Good, good, good!” — but then when it’s very close, when it’s within 1 percent or something, instead of focusing on the 99 percent that is done well, they focus on the 1 percent that you’re missing, and you’re in trouble. Big trouble.

Van Meegeren is trapped in the valley. If he tries for the close copy, an almost exact copy, he’s going to fall short. He’s going to look silly. So what he does instead is rely on the blanks in Vermeer’s career, because hardly anything is known about him; he’s like Shakespeare in that regard. He’ll take advantage of those blanks by inventing a whole new era in Vermeer’s career. No one knows what he was up to all this time. He’ll throw in some Vermeer touches, including a signature, so that people who look at it will be led to think, “Yes, this is a Vermeer.”

Van Meegeren was sometimes careful, other times astonishingly reckless. He could have passed certain tests. What was peculiar, and what was quite startling to me, is that it turned out that nobody ever did any scientific test on Van Meegeren, even the stuff that was available in his day, until after he confessed. And to this day, people hardly ever test pictures, even multi-million dollar ones. And I was so surprised by that that I kept asking, over and over again: why? Why would that be? Before you buy a house, you have someone go through it for termites and the rest. How could it be that when you’re going to lay out $10 million for a painting, you don’t test it beforehand? And the answer is that you don’t test it because, at the point of being about to buy it, you’re in love! You’ve found something. It’s going to be the high mark of your collection; it’s going to be the making of you as a collector. You finally found this great thing. It’s available, and you want it. You want it to be real. You don’t want to have someone let you down by telling you that the painting isn’t what you think it is. It’s like being newly in love. Everything is candlelight and wine. Nobody hires a private detective at that point. It’s only years down the road when things have gone wrong that you say, “What was I thinking? What’s going on here?” The collector and the forger are in cahoots. The forger wants the collector to snap it up, and the collector wants it to be real. You are on the same side. You think that it would be a game of chess or something, you against him. “Has he got the paint right?” “Has he got the canvas?” You’re going to make this checkmark and that checkmark to see if the painting measures up. But instead, both sides are rooting for this thing to be real. If it is real, then you’ve got a masterpiece. If it’s not real, then today is just like yesterday. You’re back where you started, still on the prowl.


A better alternative to text CAPTCHAs

From Rich Gossweiler, Maryam Kamvar, & Shumeet Baluja’s “What’s Up CAPTCHA?: A CAPTCHA Based On Image Orientation” (Google: 20-24 April 2009):

There are several classes of images which can be successfully oriented by computers. Some objects, such as faces, cars, pedestrians, sky, and grass, provide strong cues to an image’s upright orientation. …

Many images, however, are difficult for computers to orient. For example, indoor scenes have variations in lighting sources, and abstract and close-up images provide the greatest challenge to both computers and people, often because no clear anchor points or lighting sources exist.

The average performance on outdoor photographs, architecture photographs and typical tourist type photographs was significantly higher than the performance on abstract photographs, close-ups and backgrounds. When an analysis of the features used to make the discriminations was done, it was found that the edge features play a significant role.

It is important not to simply select random images for this task. There are many cues which can quickly reveal the upright orientation of an image to automated systems; these images must be filtered out. For example, if typical vacation or snapshot photos are used, automated rotation accuracies can be in the 90% range. The existence of any of the cues in the presented images will severely limit the effectiveness of the approach. Three common cues are listed below:

1. Text: Usually the predominant orientation of text in an image reveals the upright orientation of an image.

2. Faces and People: Most photographs are taken with the face(s) / people upright in the image.

3. Blue skies, green grass, and beige sand: These are all revealing clues, and are present in many travel/tourist photographs found on the web. Extending this beyond color, in general, the sky often has few texture/edges in comparison to the ground. Additional cues found important in human tests include "grass", "trees", "cars", "water" and "clouds".

Second, due to sometimes warped objects, lack of shading and lighting cues, and often unrealistic colors, cartoons also make ideal candidates. … Finally, although we did not alter the content of the image, it may be possible to simply alter the color-mapping, overall lighting curves, and hue/saturation levels to reveal images that appear unnatural but remain recognizable to people.
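
One practical way to implement that filtering step (a minimal sketch of my own, not code from the paper) is to run each candidate image through an off-the-shelf detector for the strongest cue, faces, and discard any image where it fires. The opencv-python package and its stock Haar cascade are assumed here; text and sky/grass cues would need detectors of their own.

```python
# Sketch (not from the paper): reject candidate CAPTCHA images that contain a
# detectable face, since faces are a strong "which way is up" cue for bots.
# Assumes the opencv-python package and its bundled Haar cascade model.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_candidate(path: str) -> bool:
    """Return True if the image contains no detectable face."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False  # unreadable file: skip it
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) == 0
```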

To normalize the shape and size of the images, we scaled each image to a 180×180 pixel square and we then applied a circular mask to remove the image corners.
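
That normalization step can be sketched in a few lines with Pillow; the paper specifies only the 180×180 size and the circular mask, so the fill colour and resampling choices below are my assumptions.

```python
# Sketch of the normalization described above: scale each image to a 180x180
# square, then apply a circular mask so the corners carry no rotation cues.
# Pillow is assumed; fill colour and resampling are my choices, not the paper's.
from PIL import Image, ImageDraw

def normalize(path: str, size: int = 180) -> Image.Image:
    img = Image.open(path).convert("RGB").resize((size, size))
    mask = Image.new("L", (size, size), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, size - 1, size - 1), fill=255)
    out = Image.new("RGB", (size, size), (0, 0, 0))  # masked-out corners go black
    out.paste(img, (0, 0), mask)
    return out
```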

We have created a system that has sufficiently high human-success rates and sufficiently low computer-success rates. When using three images, the rotational CAPTCHA system results in an 84% human success metric, and a .009% bot-success metric (assuming random guessing). These metrics are based on two variables: the number of images we require a user to rotate and the size of the acceptable error window (the degrees from upright which we still consider to be upright). Predictably, as the number of images shown becomes greater, the probability of correctly solving them decreases. However, as the error window increases, the probability of correctly solving them increases. The system which results in an 84% human success rate and .009% bot success rate asks the user to rotate three images, each within 16° of upright (8 degrees on either side of upright).

A CAPTCHA system which displayed ≥ 3 images with a ≤ 16-degree error window would achieve a guess success rate of less than 1 in 10,000, a standard acceptable computer success rate for CAPTCHAs.
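
The arithmetic behind that figure is easy to check (my calculation, not the paper's): a random guess lands inside a 16-degree window with probability 16/360 per image, and the three images are independent.

```python
# Back-of-the-envelope check of the quoted bot-success rate (my arithmetic):
# a random rotation falls within +/-8 degrees of upright with probability
# 16/360 per image; for three independent images the guesses multiply.
p_single = 16 / 360        # ~0.044 per image
p_three = p_single ** 3    # ~8.8e-05, i.e. ~0.009%, under 1 in 10,000
print(f"{p_three:.2e}")    # 8.78e-05
```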

In our experiments, users moved a slider to rotate the image to its upright position. On small display devices such as a mobile phone, they could directly manipulate the image using a touch screen, as seen in Figure 12, or can rotate it via button presses.


Criminal goods & services sold on the black market

From Ellen Messmer’s “Symantec takes cybercrime snapshot with ‘Underground Economy’ report” (Network World: 24 November 2008):

The “Underground Economy” report [from Symantec] contains a snapshot of online criminal activity observed from July 2007 to June 2008 by a Symantec team monitoring activities in Internet Relay Chat (IRC) and Web-based forums where stolen goods are advertised. Symantec estimates the total value of the goods advertised on what it calls “underground servers” was about $276 million, with credit-card information accounting for 59% of the total.

If that purloined information were successfully exploited, it probably would bring the buyers about $5 billion, according to the report — just a drop in the bucket, points out David Cowings, senior manager of operations at Symantec Security Response.

“Ninety-eight percent of the underground-economy servers have life spans of less than 6 months,” Cowings says. “The smallest IRC server we saw had five channels and 40 users. The largest IRC server network had 28,000 channels and 90,000 users.”

In the one year covered by the report, Symantec’s team observed more than 69,000 distinct advertisers and 44 million total messages online selling illicit credit-card and financial data, but the 10 most active advertisers appeared to account for 11% of the total messages posted and $575,000 in sales.

According to the report, a bank-account credential was selling for $10 to $1,000, depending on the balance and location of the account. Sellers also hawked specific financial sites’ vulnerabilities for an average price of $740, though prices did go as high as $2,999.

In other spots, the average price for a keystroke logger — malware used to capture a victim’s information — was an affordable $23. Attack tools, such as botnets, sold for an average of $225. “For $10, you could host a phishing site on someone’s server or compromised Web site,” Cowings says.

Desktop computer games appeared to be the most-pirated software, accounting for 49% of all file instances that Symantec observed. The second-highest category was utility applications; third-highest was multimedia productivity applications, such as photograph or HTML editors.


Crazy anti-terrorism plans that worked

From a Special Operations officer quoted in Tom Ricks’s Inbox (The Washington Post: 5 October 2008):

One of the most interesting operations was the laundry mat [sic]. Having lost many troops and civilians to bombings, the Brits decided they needed to determine who was making the bombs and where they were being manufactured. One bright fellow recommended they operate a laundry and when asked “what the hell he was talking about,” he explained the plan and it was incorporated — to much success.

The plan was simple: Build a laundry and staff it with locals and a few of their own. The laundry would then send out “color coded” special discount tickets, to the effect of “get two loads for the price of one,” etc. The color coding was matched to specific streets and thus when someone brought in their laundry, it was easy to determine the general location from which a city map was coded.

While the laundry was indeed being washed, pressed and dry cleaned, it had one additional cycle — every garment, sheet, glove, pair of pants, was first sent through an analyzer, located in the basement, that checked for bomb-making residue. The analyzer was disguised as just another piece of the laundry equipment; good OPSEC [operational security]. Within a few weeks, multiple positives had shown up, indicating the ingredients of bomb residue, and intelligence had determined which areas of the city were involved. To narrow their target list, [the laundry] simply sent out more specific coupons [numbered] to all houses in the area, and before long they had good addresses. After confirming addresses, authorities with the SAS teams swooped down on the multiple homes and arrested multiple personnel and confiscated numerous assembled bombs, weapons and ingredients. During the entire operation, no one was injured or killed.

By the way, the gentleman also told the story of how [the British] also bugged every new car going into Northern Ireland, and thus knew everything [Sinn Fein leader] Gerry Adams was discussing. They did this because Adams always conducted mobile meetings and always used new cars.

The Israelis have a term for this type of thinking, “Embracing the Meshugganah,” which literally translated means, embrace the craziness, because the crazier the plan, the less likely the adversary will have thought about it, and thus, not have implemented a counter-measure.


Chemically remove bad memories

From Nicholas Carr’s “Remembering to forget” (Rough Type: 22 October 2008):

Slowly but surely, scientists are getting closer to developing a drug that will allow people to eliminate unpleasant memories. The new issue of Neuron features a report from a group of Chinese scientists who were able to use a chemical – the protein alpha-CaM kinase II – to successfully erase memories from the minds of mice. The memory losses, report the authors, are “not caused by disrupting the retrieval access to the stored information but are, rather, due to the active erasure of the stored memories.” The erasure, moreover, “is highly restricted to the memory being retrieved while leaving other memories intact. Therefore, our study reveals a molecular genetic paradigm through which a given memory, such as new or old fear memory, can be rapidly and specifically erased in a controlled and inducible manner in the brain.”

One can think of a whole range of applications, from the therapeutic to the cosmetic to the political.


Conficker creating a new gargantuan botnet

From Asavin Wattanajantra’s “Windows worm could create the ‘world’s biggest botnet’” (IT PRO: 19 January 2009):

The Downadup or “Conficker” worm has increased to over nine million infections over the weekend – increasing from 2.4 million in a four-day period, according to F-Secure.

The worm has password-cracking capabilities, which often succeed because company passwords sometimes match a predefined password list that the worm carries.

Corporate networks around the world have already been infected by the network worm, which is particularly hard to eradicate as it is able to evolve – making use of a long list of websites – by downloading another version of itself.

Rik Ferguson, solution architect at Trend Micro, told IT PRO that the worm was very difficult to block for security companies as they had to make sure that they blocked every single one of the hundreds of domains that it could download from.

Ferguson said that the worm was creating a staggering amount of infections, even if just the most conservative infection estimates are taken into account. He said: “What’s particularly interesting about this worm is that it is the first hybrid with old school worm infection capabilities and command and control infrastructure.”


Why botnet operators do it: profit, politics, & prestige

From Clive Akass’ “Storm worm ‘making millions a day’” (Personal Computer World: 11 February 2008):

The people behind the Storm worm are making millions of pounds a day by using it to generate revenue, according to IBM’s principal web security strategist.

Joshua Corman, of IBM Internet Security Systems, said that in the past it had been assumed that web security attacks were essentially ego driven. But now attackers fell into three camps.

‘I call them my three Ps, profit, politics and prestige,’ he said during a debate at a NetEvents forum in Barcelona.

The Storm worm, which had been around about a year, had been a tremendous financial success because it created a botnet of compromised machines that could be used to launch profitable spam attacks.

Not only do the criminals get money simply for sending out the spam in far greater quantity than could be sent by a single machine, but they also get a cut of any business done off the spam.


Wikipedia, freedom, & changes in production

From Clay Shirky’s “Old Revolutions, Good; New Revolutions, Bad” (Britannica Blog: 14 June 2007):

Gorman’s theory about print – its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.

Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect – the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.

The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise ten-fold or a hundred-fold.

Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.


An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers – now numbering about two dozen, although no one outside Google knows the exact number or their locations. They come online and automatically, under the direction of the Google File System, start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled for Google to hold servers on their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo hardware and software are more loosely-coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineer’s ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

[Figure: 2005 focuses of Google, MSN, and Yahoo]

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
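
The read path described here, replicas plus a lookup log, can be illustrated with a toy sketch. This is my illustration of the pattern, not Google's actual file-system code, and all of the names in it are hypothetical.

```python
# Toy illustration (not Google's GFS code) of the failover read described
# above: look up a file's replica locations in a metadata log, then try each
# replica in turn until one responds. All names here are hypothetical.
from typing import Callable, Dict, List

# Metadata log: file name -> storage devices believed to hold a copy.
replica_log: Dict[str, List[str]] = {
    "chunk-000042": ["rack03/disk7", "rack11/disk2", "rack27/disk5"],
}

def read_file(name: str, fetch: Callable[[str, str], bytes]) -> bytes:
    """Return the file's bytes from the first replica that answers."""
    for device in replica_log[name]:
        try:
            return fetch(device, name)   # may raise if the device or rack is down
        except OSError:
            continue                     # dead replica: fall through to the next copy
    raise OSError(f"all replicas of {name} are unavailable")
```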

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed short cuts to programming. An example is Google’s creating a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
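
Arnold is almost certainly describing what Google later published as MapReduce. The division of labour he sketches, where the programmer writes one small per-item function and a canned routine handles the parallelism, looks roughly like this (a generic Python illustration of the idea, not Google's library):

```python
# Generic illustration of the "canned routine" division of labour described
# above (not Google's library): the programmer supplies one small per-item
# function; the pool machinery handles spreading the work across processors.
from multiprocessing import Pool

def count_words(line: str) -> int:   # the only code the programmer writes
    return len(line.split())

if __name__ == "__main__":
    lines = ["the quick brown fox", "jumps over", "the lazy dog"]
    with Pool() as pool:              # the canned parallel machinery
        totals = pool.map(count_words, lines)
    print(sum(totals))                # 9
```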

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure prone components are:

  • Fans.
  • IDE drives which fail at the rate of one per 1,000 drives per day.
  • Power supplies which fail at a lower rate.

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.


A botnet with a contingency plan

From Gregg Keizer’s “Massive botnet returns from the dead, starts spamming” (Computerworld: 26 November 2008):

A big spam-spewing botnet shut down two weeks ago has been resurrected, security researchers said today, and is again under the control of criminals.

The “Srizbi” botnet returned from the dead late Tuesday, said Fengmin Gong, chief security content officer at FireEye Inc., when the infected PCs were able to successfully reconnect with new command-and-control servers, which are now based in Estonia.

Srizbi was knocked out more than two weeks ago when McColo Corp., a hosting company that had been accused of harboring a wide range of criminal activities, was yanked off the Internet by its upstream service providers. With McColo down, PCs infected with Srizbi and other bot Trojan horses were unable to communicate with their command servers, which had been hosted by McColo. As a result, spam levels dropped precipitously.

But as other researchers noted last week, Srizbi had a fallback strategy. In the end, that strategy paid off for the criminals who control the botnet.

According to Gong, when Srizbi bots were unable to connect with the command-and-control servers hosted by McColo, they tried to connect with new servers via domains that were generated on the fly by an internal algorithm. FireEye reverse-engineered Srizbi, rooted out that algorithm and used it to predict, then preemptively register, several hundred of the possible routing domains.

The domain names, said Gong, were generated on a three-day cycle, and for a while, FireEye was able to keep up — and effectively block Srizbi’s handlers from regaining control.
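
What made FireEye's pre-registration race possible is that a domain-generation algorithm is deterministic: the bots and anyone who has reverse-engineered them derive the same candidate names from the current date. The sketch below is a generic, hypothetical algorithm of that shape, not the actual Srizbi code.

```python
# Generic, hypothetical domain-generation algorithm of the kind described
# above (NOT the actual Srizbi algorithm). Because it is deterministic,
# defenders who know the seed can compute, and register, the same names
# the bots will try during each three-day cycle.
import hashlib
from datetime import date

def candidate_domains(today: date, per_cycle: int = 5, seed: str = "example") -> list:
    cycle = today.toordinal() // 3  # value changes every three days
    return [
        hashlib.md5(f"{seed}-{cycle}-{i}".encode()).hexdigest()[:12] + ".com"
        for i in range(per_cycle)
    ]

print(candidate_domains(date(2008, 11, 25)))
```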

“We have registered a couple hundred domains,” Gong said, “but we made the decision that we cannot afford to spend so much money to keep registering so many [domain] names.”

Once FireEye stopped preempting Srizbi’s makers, the latter swooped in and registered the five domains in the next cycle. Those domains, in turn, pointed Srizbi bots to the new command-and-control servers, which then immediately updated the infected machines to a new version of the malware.


How Obama raised money in Silicon Valley & using the Net

From Joshua Green’s “The Amazing Money Machine” (The Atlantic: June 2008):

That early fund-raiser [in February 2007] and others like it were important to Obama in several respects. As someone attempting to build a campaign on the fly, he needed money to operate. As someone who dared challenge Hillary Clinton, he needed a considerable amount of it. And as a newcomer to national politics, though he had grassroots appeal, he needed to establish credibility by making inroads to major donors—most of whom, in California as elsewhere, had been locked down by the Clinton campaign.

Silicon Valley was a notable exception. The Internet was still in its infancy when Bill Clinton last ran for president, in 1996, and most of the immense fortunes had not yet come into being; the emerging tech class had not yet taken shape. So, unlike the magnates in California real estate (Walter Shorenstein), apparel (Esprit founder Susie Tompkins Buell), and entertainment (name your Hollywood celeb), who all had long-established loyalty to the Clintons, the tech community was up for grabs in 2007. In a colossal error of judgment, the Clinton campaign never made a serious approach, assuming that Obama would fade and that lack of money and cutting-edge technology couldn’t possibly factor into what was expected to be an easy race. Some of her staff tried to arrange “prospect meetings” in Silicon Valley, but they were overruled. “There was massive frustration about not being able to go out there and recruit people,” a Clinton consultant told me last year. As a result, the wealthiest region of the wealthiest state in the nation was left to Barack Obama.

Furthermore, in Silicon Valley’s unique reckoning, what everyone else considered to be Obama’s major shortcomings—his youth, his inexperience—here counted as prime assets.

[John Roos, Obama’s Northern California finance chair and the CEO of the Palo Alto law firm Wilson Sonsini Goodrich & Rosati]: “… we recognize what great companies have been built on, and that’s ideas, talent, and inspirational leadership.”

The true killer app on My.BarackObama.com is the suite of fund-raising tools. You can, of course, click on a button and make a donation, or you can sign up for the subscription model, as thousands already have, and donate a little every month. You can set up your own page, establish your target number, pound your friends into submission with e-mails to pony up, and watch your personal fund-raising “thermometer” rise. “The idea,” [Joe Rospars, a veteran of Dean’s campaign who had gone on to found an Internet fund-raising company and became Obama’s new-media director] says, “is to give them the tools and have them go out and do all this on their own.”

“What’s amazing,” says Peter Leyden of the New Politics Institute, “is that Hillary built the best campaign that has ever been done in Democratic politics on the old model—she raised more money than anyone before her, she locked down all the party stalwarts, she assembled an all-star team of consultants, and she really mastered this top-down, command-and-control type of outfit. And yet, she’s getting beaten by this political start-up that is essentially a totally different model of the new politics.”

Before leaving Silicon Valley, I stopped by the local Obama headquarters. It was a Friday morning in early March, and the circus had passed through town more than a month earlier, after Obama lost the California primary by nine points. Yet his headquarters was not only open but jammed with volunteers. Soon after I arrived, everyone gathered around a speakerphone, and Obama himself, between votes on the Senate floor, gave a brief hortatory speech telling volunteers to call wavering Edwards delegates in Iowa before the county conventions that Saturday (they took place two months after the presidential caucuses). Afterward, people headed off to rows of computers, put on telephone headsets, and began punching up phone numbers on the Web site, ringing a desk bell after every successful call. The next day, Obama gained nine delegates, including a Clinton delegate.

The most striking thing about all this was that the headquarters is entirely self-sufficient—not a dime has come from the Obama campaign. Instead, everything from the computers to the telephones to the doughnuts and coffee—even the building’s rent and utilities—is user-generated, arranged and paid for by local volunteers. It is one of several such examples across the country, and no other campaign has put together anything that can match this level of self-sufficiency.

But while his rivals continued to depend on big givers, Obama gained more and more small donors, until they finally eclipsed the big ones altogether. In February, the Obama campaign reported that 94 percent of their donations came in increments of $200 or less, versus 26 percent for Clinton and 13 percent for McCain. Obama’s claim of 1,276,000 donors through March is so large that Clinton doesn’t bother to compete; she stopped regularly providing her own number last year.

“If the typical Gore event was 20 people in a living room writing six-figure checks,” Gorenberg told me, “and the Kerry event was 2,000 people in a hotel ballroom writing four-figure checks, this year for Obama we have stadium rallies of 20,000 people who pay absolutely nothing, and then go home and contribute a few dollars online.” Obama himself shrewdly capitalizes on both the turnout and the connectivity of his stadium crowds by routinely asking them to hold up their cell phones and punch in a five-digit number to text their contact information to the campaign—to win their commitment right there on the spot.

How the settlers changed America’s ecology, radically

From Charles C. Mann’s “America, Found & Lost” (National Geographic: May 2007):

It is just possible that John Rolfe was responsible for the worms—specifically the common night crawler and the red marsh worm, creatures that did not exist in the Americas before Columbus. Rolfe was a colonist in Jamestown, Virginia, the first successful English colony in North America. Most people know him today, if they know him at all, as the man who married Pocahontas. A few history buffs understand that Rolfe was one of the primary forces behind Jamestown’s eventual success. The worms hint at a third, still more important role: Rolfe inadvertently helped unleash a convulsive and permanent change in the American landscape.

Like many young English blades, Rolfe smoked – or, as the phrase went in those days, “drank” – tobacco, a fad since the Spanish had first carried back samples of Nicotiana tabacum from the Caribbean. Indians in Virginia also drank tobacco, but it was a different species, Nicotiana rustica. Virginia leaf was awful stuff, wrote colonist William Strachey: “poor and weak and of a biting taste.” After arriving in Jamestown in 1610, Rolfe talked a shipmaster into bringing him N. tabacum seeds from Trinidad and Venezuela. Six years later Rolfe returned to England with his wife, Pocahontas, and the first major shipment of his tobacco. “Pleasant, sweet, and strong,” as Rolfe’s friend Ralph Hamor described it, Jamestown’s tobacco was a hit. By 1620 the colony exported up to 50,000 pounds (23,000 kilograms) of it – and at least six times more a decade later. Ships bellied up to Jamestown and loaded up with barrels of tobacco leaves. To balance the weight, sailors dumped out ballast, mostly stones and soil. That dirt almost certainly contained English earthworms.

TWO HUNDRED AND FIFTY MILLION years ago the world contained a single landmass known to scientists as Pangaea. Geologic forces broke this vast expanse into pieces, sundering Eurasia and the Americas. Over time the two halves of the world developed wildly different suites of plants and animals. Columbus’s signal accomplishment was, in the phrase of historian Alfred Crosby, to reknit the torn seams of Pangaea. After 1492, the world’s ecosystems collided and mixed as European vessels carried thousands of species to new homes across the oceans. The Columbian exchange, as Crosby called it, is why there are tomatoes in Italy, oranges in Florida, chocolates in Switzerland, and hot peppers in Thailand. It is arguably the most important event in the history of life since the death of the dinosaurs.

But the largest ecological impact may have been wreaked by a much smaller, seemingly benign domestic animal: the European honeybee. In early 1622, a ship arrived in Jamestown that was a living exhibit of the Columbian exchange. It was loaded with exotic entities for the colonists to experiment with: grapevine cuttings, silkworm eggs, and beehives. Most bees pollinate only a few species; they tend to be fussy about where they live. European honeybees, promiscuous beasts, reside almost anywhere and pollinate almost anything in sight. Quickly, they swarmed from their hives and set up shop throughout the Americas.

If concerts bring money in for the music biz, what happens when concerts get smaller?

From Jillian Cohen’s “The Show Must Go On” (The American: March/April 2008):

You can’t steal a concert. You can’t download the band—or the sweaty fans in the front row, or the merch guy, or the sound tech—to your laptop to take with you. Concerts are not like albums—easy to burn, copy, and give to your friends. If you want to share the concert-going experience, you and your friends all have to buy tickets. For this reason, many in the ailing music industry see concerts as the next great hope to revive their business.

It’s a blip that already is fading, to the dismay of the major record labels. CD sales have dropped 25 percent since 2000 and digital downloads haven’t picked up the slack. As layoffs swept the major labels this winter, many industry veterans turned their attention to the concert business, pinning their hopes on live performances as a way to bolster their bottom line.

Concerts might be a short-term fix. As one national concert promoter says, “The road is where the money is.” But in the long run, the music business can’t depend on concert tours for a simple, biological reason: the huge tour profits that have been generated in the last few decades have come from performers who are in their 40s, 50s, and 60s. As these artists get older, they’re unlikely to be replaced, because the industry isn’t investing in new talent development.

When business was good—as it was when CD sales grew through much of the 1990s—music labels saw concert tours primarily as marketing vehicles for albums. Now, they’re seizing on the reverse model. Tours have become a way to market the artist as a brand, with the fan clubs, limited-edition doodads, and other profitable products and services that come with the territory.

“Overall, it’s not a pretty picture for some parts of the industry,” JupiterResearch analyst David Card wrote in November when he released a report on digital music sales. “Labels must act more like management companies, and tap into the broadest collection of revenue streams and licensing as possible,” he said. “Advertising and creative packaging and bundling will have to play a bigger role than they have. And the $3 billion-plus touring business is not exactly up for grabs—it’s already competitive and not very profitable. Music companies of all types need to use the Internet for more cost-effective marketing, and A&R [artist development] risk has to be spread more fairly.”

The ‘Heritage Act’ Dilemma

Even so, belief in the touring business was so strong last fall that Madonna signed over her next ten years to touring company Live Nation—the folks who put on megatours for The Rolling Stones, The Police, and other big headliners—in a deal reportedly worth more than $120 million. The Material Girl’s arrangement with Live Nation is known in the industry as a 360-degree deal. Such deals may give artists a big upfront payout in exchange for allowing record labels or, in Madonna’s case, tour producers to profit from all aspects of their business, including touring, merchandise, sponsorships, and more.

While 360 deals may work for big stars, insiders warn that they’re not a magic bullet that will save record labels from their foundering, top-heavy business model. Some artists have done well by 360 contracts, including alt-metal act Korn and British pop sensation Robbie Williams. With these successes in mind, some tout the deals as a way for labels to recoup money they’re losing from downloads and illegal file sharing. But the artists who are offered megamillions for a piece of their brand already have built it through years of album releases, heavy touring, and careful fan-base development.

Not all these deals are good ones, says Bob McLynn, who manages pop-punk act Fall Out Boy and other young artists through his agency, Crush Management. Labels still have a lot to offer, he says. They pay for recording sessions, distribute CDs, market a band’s music, and put up money for touring, music-video production, and other expenses. But in exchange, music companies now want to profit from more than a band’s albums and recording masters. “The artist owns the brand, and now the labels—because they can’t sell as many albums—are trying to get in on the brand,” McLynn says. “To be honest, if an artist these days is looking for a traditional major-label deal for several hundred thousand dollars, they will have to be willing to give up some of that brand.”

For a young act, such offers may be enticing, but McLynn urges caution. “If they’re not going to give you a lot of money for it, it’s a mistake,” says the manager, who helped build Fall Out Boy’s huge teen fan base through constant touring and Internet marketing, only later signing the band to a big label. “I had someone from a major label ask me recently, ‘Hey, I have this new artist; can we convert the deal to a 360 deal?’” McLynn recalls. “I told him [it would cost] $2 million to consider it. He thought I was crazy; but I’m just saying, how is that crazy? If you want all these extra rights and if this artist does blow up, then that’s the best deal in the world for you. If you’re not taking a risk, why am I going to give you this? And if it’s not a lot of money, you’re not taking a risk.”

A concert-tour company’s margin is about 4 percent, Live Nation CEO Michael Rapino has said, while the take on income from concessions, T-shirts, and other merchandise sold at shows can be much higher. The business had a record-setting year in 2006, which saw The Rolling Stones, Madonna, U2, Barbra Streisand, and other popular, high-priced tours on the road. But in 2007, North American gross concert dollars dropped more than 10 percent to $2.6 billion, according to Billboard statistics. Concert attendance fell by more than 19 percent to 51 million. Fewer people in the stands means less merchandise sold and concession-stand food eaten.

Now add this wrinkle: if you pour tens of millions of dollars into a 360 deal, as major labels and Live Nation have done with their big-name stars, you will need the act to tour for a long time to recoup your investment. “For decades we’ve been fueled by acts from the ’60s,” says Gary Bongiovanni, editor of the touring-industry trade magazine Pollstar. Three decades ago, no one would have predicted that Billy Joel or Rod Stewart would still be touring today, Bongiovanni notes, yet the industry has come to depend on artists such as these, known as “heritage acts.” “They’re the ones that draw the highest ticket prices and biggest crowds for our year-end charts,” he says. Consider the top-grossing tours of 2006 and 2007: veterans such as The Rolling Stones, Rod Stewart, Barbra Streisand, and Roger Waters were joined by comparative youngsters Madonna, U2, and Bon Jovi. Only two of the 20 acts—former Mouseketeers Justin Timberlake and Christina Aguilera—were younger than 30.

These young stars, the ones who are prone to taking what industry observer Bob Lefsetz calls “media shortcuts,” such as appearing on MTV, may have less chance of developing real staying power. Lefsetz, formerly an entertainment lawyer and consultant to major labels, has for 20 years published an industry newsletter (now a blog) called the Lefsetz Letter. “Whatever a future [superstar] act will be, it won’t be as ubiquitous as the acts from the ’60s because we were all listening to Top 40 radio,” he says.

From the 1960s to the 1980s, music fans discovered new music primarily on the radio and purchased albums in record stores. The stations young people listened to might have played rock, country, or soul; but whatever the genre, DJs introduced listeners to the hits of tomorrow and guided them toward retail stores and concert halls.

Today, music is available in so many genres and subgenres, via so many distribution streams—including cell phones, social networking sites, iTunes, Pure Volume, and Limewire—that common ground rarely exists for post–Baby Boom fans. This in turn makes it harder for tour promoters to corral the tens of thousands of ticket holders they need to fill an arena. “More people can make music than ever before. They can get it heard, but it’s such a cacophony of noise that it will be harder to get any notice,” says Lefsetz.

Most major promoters don’t know how to capture young people’s interest and translate it into ticket sales, says Barnet. It’s not that his students don’t listen to music, but that they seek to discover it online, from friends, or via virtual buzz. They’ll go out to clubs and hear bands, but they rarely attend big arena concerts. Promoters typically spend 40 percent to 50 percent of their promotional budgets on radio and newspaper advertising, Barnet says. “High school and college students—what percentage of tickets do they buy? And you’re spending most of your advertising dollars on media that don’t even focus on those demographics.” Conversely, the readers and listeners of traditional media are perfect for high-grossing heritage tours. As long as tickets sell for those events, promoters won’t have to change their approach, Barnet says. Heritage acts also tend to sell more CDs, says Pollstar’s Bongiovanni. “Your average Rod Stewart fan is more likely to walk into a record store, if they can find one, than your average Fall Out Boy fan.”

Personally, [Live Nation’s chairman of global music and global touring, Arthur Fogel] said, he’d been disappointed in the young bands he’d seen open for the headliners on Live Nation’s big tours. Live performance requires a different skill set from recorded tracks. It’s the difference between playing music and putting on a show, he said. “More often than not, I find young bands get up and play their music but are not investing enough time or energy into creating that show.” It’s incumbent on the industry to find bands that can rise to the next level, he added. “We aren’t seeing that development that’s creating the next generation of stadium headliners. Hopefully that will change.”

Live Nation doesn’t see itself spearheading such a change, though. In an earlier interview with Billboard magazine, Rapino took a dig at record labels’ model of bankrolling ten bands in the hope that one would become a success. “We don’t want to be in the business of pouring tens of millions of dollars into unknown acts, throwing it against the wall and then hoping that enough sticks that we only lose some of our money,” he said. “It’s not part of our business plan to be out there signing 50 or 60 young acts every year.”

And therein lies the rub. If the big dog in the touring pack won’t take responsibility for nurturing new talent and the labels have less capital to invest in artist development, where will the future megatour headliners come from?

Indeed, despite its all-encompassing moniker, the 360 deal isn’t the only option for musicians, nor should it be. Some artists may find they need the distribution reach and bankroll that a traditional big-label deal provides. Others might negotiate with independent labels for profit sharing or licensing arrangements in which they’ll retain more control of their master recordings. Many will earn the bulk of their income from licensing their songs for use on TV shows, movie soundtracks, and video games. Some may take an entirely do-it-yourself approach, in which they’ll write, produce, perform, and distribute all of their own music—and keep any of the profits they make.

There are growing signs of this transition. The Eagles recently partnered with Wal-Mart to give the discount chain exclusive retail-distribution rights to the band’s latest album. Paul McCartney chose to release his most recent record through Starbucks, and last summer Prince gave away his newest CD to London concertgoers and to readers of a British tabloid. And in a move that earned nearly as much ink as Madonna’s 360 deal, rock act Radiohead let fans download its new release directly from the band’s website for whatever price listeners were willing to pay. Though the numbers are debated, one source, ComScore, reported that in the first month 1.2 million people downloaded the album. About 40 percent paid for it, at an average of about $6 each—well above the usual cut an artist would get in royalties. The band also self-released the album in an $80 limited-edition package and, months later, as a CD with traditional label distribution. Such a move wouldn’t work for just any artist. Radiohead had the luxury of a fan base that it developed over more than a dozen years with a major label. But the band’s experiment showed creativity and adaptability.
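Taking the ComScore figures above at face value (the article itself notes the numbers are debated), the back-of-envelope arithmetic is straightforward. This is only a rough sketch using the numbers quoted in the paragraph, not anything from the band's actual accounts:

```python
# Rough back-of-envelope using only the figures quoted above (ComScore, first month).
downloads    = 1_200_000   # people who downloaded the album
paying_share = 0.40        # roughly 40 percent chose to pay something
avg_price    = 6.00        # average of about $6 per paying download

revenue = downloads * paying_share * avg_price
per_download = revenue / downloads   # averaged over payers and free riders alike

print(f"implied first-month revenue: ${revenue:,.0f}")      # ~$2,880,000
print(f"average take per download:   ${per_download:.2f}")  # ~$2.40
```

Even averaged over the 60 percent who paid nothing, that works out to roughly $2.40 per download going more or less directly to the band, which is the sense in which the figure sits "well above the usual cut an artist would get in royalties."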

Details on the Storm & Nugache botnets

From Dennis Fisher’s “Storm, Nugache lead dangerous new botnet barrage” (SearchSecurity.com: 19 December 2007):

[Dave Dittrich, a senior security engineer and researcher at the University of Washington in Seattle], one of the top botnet researchers in the world, has been tracking botnets for close to a decade and has seen it all. But this new piece of malware, which came to be known as Nugache, was a game-changer. With no C&C server to target, bots capable of sending encrypted packets and the possibility of any peer on the network suddenly becoming the de facto leader of the botnet, Nugache, Dittrich knew, would be virtually impossible to stop.
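The passage describes the architecture only in prose, but the point it makes, that a hub-and-spoke botnet dies with its command-and-control server while a peer-to-peer network like Nugache's survives the loss of any single node, can be illustrated with a toy graph simulation. The sketch below is purely illustrative: it assumes nothing about Nugache's real protocol, and the node count and peer-list size are invented for the example.

```python
# Toy resilience comparison: centralized (star) topology vs. peer-to-peer mesh.
# Pure graph arithmetic -- no networking, nothing botnet-specific.
import random
from collections import deque

def reachable(graph, start, removed):
    """Count nodes reachable from `start` by BFS, skipping `removed` nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen and neighbour not in removed:
                seen.add(neighbour)
                queue.append(neighbour)
    return len(seen)

N = 1000

# Centralized model: every node talks only to a single hub (node 0).
star = {i: {0} for i in range(1, N)}
star[0] = set(range(1, N))

# Peer-to-peer model: every node keeps links to a few random peers.
p2p = {i: set() for i in range(N)}
for i in range(N):
    for j in random.sample([x for x in range(N) if x != i], 4):
        p2p[i].add(j)
        p2p[j].add(i)

# Remove the most "important" node in each network and measure what is left.
print("star, hub removed:", reachable(star, 1, removed={0}), "of", N - 1)    # 1
print("p2p, one peer removed:", reachable(p2p, 1, removed={0}), "of", N - 1)  # ~999
```

Taking down the hub strands every node in the star, while the mesh barely notices the loss of any one peer: there is no single point left to target.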

Dittrich and other researchers say that when they analyze the code these malware authors are putting out, what emerges is a picture of a group of skilled, professional software developers learning from their mistakes, improving their code on a weekly basis and making a lot of money in the process.

The way that Storm, Nugache and other similar programs make money for their creators is typically twofold. First and foremost, Storm’s creator controls a massive botnet that he can use to send out spam runs, either for himself or for third parties who pay for the service. Storm-infected PCs have been sending out various spam messages, including pump-and-dump stock scams, pitches for fake medications and highly targeted phishing messages, throughout 2007, and by some estimates were responsible for more than 75% of the spam on the Internet at certain points this year.

Secondly, experts say that Storm’s author has taken to sectioning off his botnet into smaller pieces and then renting those subnets out to other attackers. Estimates of the size of the Storm network have ranged as high as 50 million PCs, but Brandon Enright, a network security analyst at the University of California at San Diego, who wrote a tool called Stormdrain to locate and count infected machines, put the number at closer to 20,000. Dittrich estimates that the size of the Nugache network was roughly equivalent to Enright’s estimates for Storm.

“The Storm network has a team of very smart people behind it. They change it constantly. When the attacks against searching started to be successful, they completely changed how commands are distributed in the network,” said Enright. “If AV adapts, they re-adapt. If attacks by researchers adapt, they re-adapt. If someone tries to DoS their distribution system, they DoS back.”

The other worrisome detail in all of this is that there’s significant evidence that the authors of these various pieces of malware are sharing information and techniques, if not collaborating outright.

“I’m pretty sure that there are tactics being shared between the Nugache and Storm authors,” Dittrich said. “There’s a direct lineage from Sdbot to Rbot to Mytob to Bancos. These guys can just sell the Web front-end to these things and the customers can pick their options and then just hit go.”

Once just a hobby for devious hackers, writing malware is now a profession and its products have helped create a global shadow economy. That infrastructure stretches from the mob-controlled streets of Moscow to the back alleys of Malaysia to the office parks of Silicon Valley. In that regard, Storm, Nugache and the rest are really just the first products off the assembly line, the Model Ts of P2P malware.

Do’s and don’ts for open source software development

From Jono DiCarlo’s “Ten Ways to Make More Humane Open Source Software” (5 October 2007):

Do

  1. Get a Benevolent Dictator
    Someone who has a vision for the UI. Someone who can and will say “no” to features that don’t fit the vision.
  2. Make the Program Usable In Its Default State
    Don’t rely on configurable behavior. It adds complexity, solves little, and most users will never touch it anyway. Usable default behavior is required.
  3. Design Around Tasks
    Figure out the tasks that people want to do with your software. Make those tasks as easy as possible. Kill any feature that gets in the way.
  4. Write a Plug-In Architecture
    It’s the only good solution I’ve seen to the dilemma of providing a complete feature set without bloating the application. [A minimal sketch of this idea follows the two lists.]
  5. User Testing, User Testing, User Testing!!
    Without user testing, you are designing by guesswork and superstition.

Do Not

  1. Develop Without A Vision
    “When someone suggests another feature, we’ll find a place to cram it in!”
  2. Join the Clone Wars
    “Closed-source program X is popular. Let’s just duplicate its interface!”
  3. Leave the UI Design Up To The End User
    “I’m not sure how that should work. I’ll make it a check box on the preferences screen.”
  4. Make the Interface a Thin Veneer over the Underlying Implementation
    “But it’s got a GUI now! That makes it user-friendly, right?”
  5. Treat UI Design as Babysitting Idiots
    “They should all quit whining and read the manual already.”
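Point 4 in the “Do” list is the most concrete piece of engineering advice here, so a small illustration may help. The sketch below is not from DiCarlo’s article; it is a minimal, generic plug-in registry in Python, showing the core idea that the host exposes one small, stable extension point and everything optional lives outside the core.

```python
# Minimal plug-in registry sketch: the host defines one stable extension point,
# plug-ins register themselves against it, and the core application stays lean.
from typing import Callable, Dict

_PLUGINS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator a plug-in uses to hook a text filter into the host."""
    def wrapper(func: Callable[[str], str]) -> Callable[[str], str]:
        _PLUGINS[name] = func
        return func
    return wrapper

def run_pipeline(text: str) -> str:
    """Host-side code: apply every registered plug-in in registration order."""
    for func in _PLUGINS.values():
        text = func(text)
    return text

# --- a plug-in author would ship these in a separate module the host imports ---
@register("shout")
def shout(text: str) -> str:
    return text.upper()

@register("trim")
def trim(text: str) -> str:
    return text.strip()

print(run_pipeline("  hello, world  "))   # -> "HELLO, WORLD"
```

Real systems usually discover plug-ins dynamically (a plug-ins directory, packaging entry points) rather than importing them directly, but the shape is the same: a small interface the host calls through, so new features never have to be crammed into the core.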

Success of The Shawshank Redemption

From John Swansburg’s “The Shawshank Reputation” (Legal Affairs: March/April 2004):

Yet even King didn’t think [The Shawshank Redemption] stood a chance at the box office – and he was right. Though the movie got good reviews, and seven Oscar nominations, Shawshank in its original release grossed only about half of the $35 million it cost to make.

The movie came back from the dead on video. It was the top rental of 1995, and its popularity has not much abated since. The new Zagat film guide, for instance, rated it higher than Annie Hall and a little picture called Citizen Kane. The movie is currently ranked second on the Internet Movie Database’s Top 250 movies poll, behind only The Godfather.

Failure every 30 years produces better design

From The New York Times’ “Form Follows Function. Now Go Out and Cut the Grass.”:

Failure, [Henry] Petroski shows, works. Or rather, engineers only learn from things that fail: bridges that collapse, software that crashes, spacecraft that explode. Everything that is designed fails, and everything that fails leads to better design. Next time at least that mistake won’t be made: Aleve won’t be packed in child-proof bottles so difficult to open that they stymie the arthritic patients seeking the pills inside; narrow suspension bridges won’t be built without “stay cables” like the ill-fated Tacoma Narrows Bridge, which was twisted to its destruction by strong winds in 1940.

Successes have fewer lessons to teach. This is one reason, Mr. Petroski points out, that there has been a major bridge disaster every 30 years. Gradually the techniques and knowledge of one generation become taken for granted; premises are no longer scrutinized. So they are re-applied in ambitious projects by creators who no longer recognize these hidden flaws and assumptions.

Mr. Petroski suggests that 30 years – an implicit marker of generational time – is the period between disasters in many specialized human enterprises, the period between, say, the beginning of manned space travel and the Challenger disaster, or the beginnings of nuclear energy and the 1979 accident at Three Mile Island. …

Mr. Petroski cites an epigram of Epictetus: “Everything has two handles – by one of which it ought to be carried and by the other not.”
