sharing

Clay Shirky on the changes to publishing & media

From Parul Sehgal’s “Here Comes Clay Shirky” (Publishers Weekly: 21 June 2010):

PW: In April of this year, Wired’s Kevin Kelly turned a Shirky quote—“Institutions will try to preserve the problem to which they are the solution”—into “the Shirky Principle,” in deference to the simple, yet powerful observation. … Kelly explained, “The Shirky Principle declares that complex solutions, like a company, or an industry, can become so dedicated to the problem they are the solution to, that often they inadvertently perpetuate the problem.”

CS: It is possible to think that the Internet will be a net positive for society while admitting that there are significant downsides—after all, it’s not a revolution if nobody loses.

No one will ever wonder, is there anything amusing for me on the Internet? That is a solved problem. What we should really care about are [the Internet’s] cultural uses.

In Here Comes Everybody I told the story of the Abbot of Sponheim who in 1492 wrote a book saying that if this printing press thing is allowed to expand, what will the scribes do for a living? But it was more important that Europe be literate than for scribes to have a job.

In a world where a book had to be a physical object, charging money was a way to cause more copies to come into circulation. In the digital world, charging money for something is a way to produce fewer copies. There is no way to preserve the status quo and not abandon that value.

Some of it’s the brilliant Upton Sinclair observation: “It’s hard to make a man understand something if his livelihood depends on him not understanding it.” From the laying on of hands of [Italian printer] Aldus Manutius on down, publishing has always been this way. This is a medium where a change to glue-based paperback binding constituted a revolution.

PW: When do you think a similar realization will come to book publishing?

CS: I think someone will make the imprint that bypasses the traditional distribution networks. Right now the big bottleneck is the head buyer at Barnes & Noble. That’s the seawall holding back the flood in publishing. Someone’s going to say, “I can do a business book or a vampire book or a romance novel, whatever, that might sell 60% of the units it would sell if I had full distribution and a multimillion-dollar marketing campaign—but I can do it for 1 percent of the cost.” It has already happened a couple of times with specialty books. The moment of tip happens when enough things get joined up to create their own feedback loop, and the feedback loop in publishing changes when someone at Barnes & Noble says: “We can’t afford not to stock this particular book or series from an independent publisher.” It could be on Lulu, or iUniverse, whatever. And I feel pretty confident saying it’s going to happen in the next five years.

Unix: An Oral History

From “Unix: An Oral History”:

Multics

Gordon M. Brown

[Multics] was designed to include fault-free continuous operation capabilities, convenient remote terminal access and selective information sharing. One of the most important features of Multics was to follow the trend towards integrated multi-tasking and permit multiple programming environments and different human interfaces under one operating system.

Moreover, two key concepts had been picked up on in the development of Multics that would later serve to define Unix. These were that the less important features of the system introduced more complexity and, conversely, that the most important property of algorithms was simplicity. Ritchie explained this to Mahoney, articulating that:

The relationship of Multics to [the development of Unix] is actually interesting and fairly complicated. There were a lot of cultural things that were sort of taken over wholesale. And these include important things, [such as] the hierarchical file system and tree-structure file system – which incidentally did not get into the first version of Unix on the PDP-7. This is an example of why the whole thing is complicated. But any rate, things like the hierarchical file system, choices of simple things like the characters you use to edit lines as you’re typing, erasing characters were the same as those we had. I guess the most important fundamental thing is just the notion that the basic style of interaction with the machine, the fact that there was the notion of a command line, the notion was an explicit shell program. In fact the name shell came from Multics. A lot of extremely important things were completely internalized, and of course this is the way it is. A lot of that came through from Multics.

The Beginning

Michael Errecart and Cameron Jones

Files to Share

The Unix file system was based almost entirely on the file system for the failed Multics project. The idea was for file sharing to take place with explicitly separated file systems for each user, so that there would be no locking of file tables.

A major part of the answer to this question is that the file system had to be open. The needs of the group dictated that every user had access to every other user’s files, so the Unix system had to be extremely open. This openness is even seen in the password storage, which is not hidden at all but is encrypted. Any user can see all the encrypted passwords, but can only test one solution per second, which makes it extremely time consuming to try to break into the system.
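As an editorial aside: the “open but slow to test” idea is easy to demonstrate with a modern analogue. The sketch below is illustrative only and uses PBKDF2 rather than the original Unix crypt(); the point is that a stored hash can be world-readable so long as each guess is expensive to check.

```python
# Illustrative modern analogue of the "open but slow to test" password scheme
# described above; PBKDF2 stands in for the original Unix crypt().
import hashlib
import os
import time

def slow_hash(password: bytes, salt: bytes) -> bytes:
    # A high iteration count makes each trial cost a noticeable amount of time.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 1_000_000)

salt = os.urandom(16)
stored = slow_hash(b"correct horse", salt)   # this value could sit in a world-readable file

start = time.time()
matches = slow_hash(b"wrong guess", salt) == stored
print(matches, f"{time.time() - start:.2f}s per guess")
```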

The idea of standard input and output for devices eventually found its way into Unix as pipes. Pipes enabled users and programmers to send one function’s output into another function by simply placing a vertical line, a ‘|’ between the two functions. Piping is one of the most distinct features of Unix …
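The pipe mechanism the excerpt describes can be reproduced from any modern scripting language. The sketch below, in Python, wires one process’s output into another’s input the way the shell’s ‘|’ does; the particular commands (ls and wc -l) are just placeholders and assume a Unix-like system.

```python
# A minimal sketch of the pipe mechanism: connect one process's stdout to
# another's stdin, as the shell does for "ls | wc -l".
import subprocess

ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=ls.stdout, stdout=subprocess.PIPE)
ls.stdout.close()                 # allow ls to receive SIGPIPE if wc exits first
output, _ = wc.communicate()
print(output.decode().strip())    # number of directory entries, counted by wc
```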

Language from B to C

… Thompson was intent on having Unix be portable, and the creation of a portable language was intrinsic to this. …

Finding a Machine

Darice Wong & Jake Knerr

… Thompson devoted a month apiece to the shell, editor, assembler, and other software tools. …

Use of Unix started in the patent office of Bell Labs, but by 1972 there were a number of non-research organizations at Bell Labs that were beginning to use Unix for software development. Morgan recalls the importance of text processing in the establishment of Unix. …

Building Unix

Jason Aughenbaugh, Jonathan Jessup, & Nicholas Spicher

The Origin of Pipes

The first edition of Thompson and Ritchie’s The Unix Programmer’s Manual was dated November 3, 1971; however, the idea of pipes is not mentioned until the Version 3 Unix manual, published in February 1973. …

Software Tools

grep was, in fact, one of the first programs that could be classified as a software tool. Thompson designed it at the request of McIlroy, as McIlroy explains:

One afternoon I asked Ken Thompson if he could lift the regular expression recognizer out of the editor and make a one-pass program to do it. He said yes. The next morning I found a note in my mail announcing a program named grep. It worked like a charm. When asked what that funny name meant, Ken said it was obvious. It stood for the editor command that it simulated, g/re/p (global regular expression print)….From that special-purpose beginning, grep soon became a household word. (Something I had to stop myself from writing in the first paragraph above shows how firmly naturalized the idea now is: ‘I used ed to grep out words from the dictionary.’) More than any other single program, grep focused the viewpoint that Kernighan and Plauger christened and formalized in Software Tools: make programs that do one thing and do it well, with as few preconceptions about input syntax as possible.
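For readers who have never seen the program McIlroy is describing, its essence fits in a few lines. This is a toy sketch, not Thompson’s implementation: a one-pass filter that prints every line of standard input matching a regular expression.

```python
# toy_grep.py -- a toy, one-pass, grep-like filter (illustrative only).
# Usage: some_command | python toy_grep.py PATTERN
import re
import sys

pattern = re.compile(sys.argv[1])
for line in sys.stdin:
    if pattern.search(line):      # print matching lines, like g/re/p in ed
        sys.stdout.write(line)
```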

eqn and grep are illustrative of the Unix toolbox philosophy that McIlroy phrases as, “Write programs that do one thing and do it well. Write programs to work together. Write programs that handle text streams, because that is a universal interface.” This philosophy was enshrined in Kernighan and Plauger’s 1976 book, Software Tools, and reiterated in the “Foreword” to the issue of The Bell System Technical Journal that also introduced pipes.

Ethos

Robert Murray-Rust & Malika Seth

McIlroy says,

This is the Unix philosophy. Write programs that do one thing and do it well. Write programs to work together. Write programs that handle text streams, because that is a universal interface.

The dissemination of Unix, with a focus on what went on within Bell Labs

Steve Chen

In 1973, the first Unix applications were installed on a system involved in updating directory information and intercepting calls to numbers that had been changed. This was the first time Unix had been used in supporting an actual, ongoing operating business. Soon, Unix was being used to automate the operations systems at Bell Laboratories. It automated monitoring, supported measurement, and helped to route calls and ensure call quality.

There were numerous reasons for the friendliness the academic world, especially the academic Computer Science community, showed towards Unix. John Stoneback relates a few of these:

Unix came into many CS departments largely because it was the only powerful interactive system that could run on the sort of hardware (PDP-11s) that universities could afford in the mid ’70s. In addition, Unix itself was also very inexpensive. Since source code was provided, it was a system that could be shaped to the requirements of a particular installation. It was written in a language considerably more attractive than assembly, and it was small enough to be studied and understood by individuals. (John Stoneback, “The Collegiate Community,” Unix Review, October 1985, p. 27.)

The key features and characteristics of Unix that held it above other operating systems at the time were its software tools, its portability, its flexibility, and the fact that it was simple, compact, and efficient. The development of Unix in Bell Labs was carried on under a set of principles that the researchers had developed to guide their work. These principles included:

(i) Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

(ii) Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.

(iii) Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.

(iv) Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

(M.D. McIlroy, E.N. Pinson, and B.A. Tague, “Unix Time-Sharing System: Foreword,” The Bell System Technical Journal, July-Aug. 1978, vol. 57, no. 6, part 2, p. 1902.)

Prices for various services and software in the underground

From Tom Espiner’s “Cracking open the cybercrime economy” (CNET News: 14 December 2007):

“Over the years, the criminal elements, the ones who are making money, making millions out of all this online crime, are just getting stronger and stronger. I don’t think we are really winning this war.”

As director of antivirus research for F-Secure, you might expect Mikko Hypponen to overplay the seriousness of the situation. But according to the Finnish company, during 2007 the number of samples of malicious code on its database doubled, having taken 20 years to reach the size it was at the beginning of this year.

“From Trojan creation sites out of Germany and the Eastern bloc, you can purchase kits and support for malware in yearly contracts,” said [David Marcus, security research manager at McAfee Avert Labs]. “They present themselves as a cottage industry which sells tools or creation kits. It’s hard to tell if it’s a conspiracy or a bunch of autonomous individuals who are good at covering their tracks.”

Joe Telafici, director of operations at McAfee’s Avert Labs, said Storm is continuing to evolve. “We’ve seen periodic activity from Storm indicating that it is still actively being maintained. They have actually ripped out core pieces of functionality to modify the obfuscation mechanisms that weren’t working any more. Most people keep changing the wrapper until it gets by (security software)–these guys changed the functionality.”

Peter Gutmann, a security researcher at the University of Auckland, says in a report that malicious software via the affiliate model–in which someone pays others to infect users with spyware and Trojans–has become more prevalent in 2007.

The affiliate model was pioneered by the iframedollars.biz site in 2005, which paid Webmasters 6 cents per infected site. Since then, this has been extended to a “vast number of adware affiliates,” according to Gutmann. For example, one adware supplier pays 30 cents for each install in the United States, 20 cents in Canada, 10 cents in the United Kingdom, and 1 or 2 cents elsewhere.

Hackers also piggyback malicious software on legitimate software. According to Gutmann, versions of coolwebsearch co-install a mail zombie and a keystroke logger, while some peer-to-peer and file-sharing applications come with bundled adware and spyware.

In March, the price quoted on malware sites for the Gozi Trojan, which steals data and sends it to hackers in an encrypted form, was between $1,000 and $2,000 for the basic version. Buyers could purchase add-on services at varying prices starting at $20.

In the 2007 black economy, everything can be outsourced, according to Gutmann. A scammer can buy hosts for a phishing site, buy spam services to lure victims, buy drops to send the money to, and pay a cashier to cash out the accounts. …

Antidetection vendors sell services to malicious-software and botnet vendors, who sell stolen credit card data to middlemen. Those middlemen then sell that information to fraudsters who deal in stolen credit card data and pay a premium for verifiably active accounts. “The money seems to be in the middlemen,” Gutmann says.

One example of this is the Gozi Trojan. According to reports, the malware was available this summer as a service from iFrameBiz and stat482.com, who bought the Trojan from the HangUp team, a group of Russian hackers. The Trojan server was managed by 76service.com, and hosted by the Russian Business Network, which security vendors allege offered “bullet-proof” hosting for phishing sites and other illicit operations.

According to Gutmann, there are many independent malicious-software developers selling their wares online. Private releases can be tailored to individual clients, while vendors offer support services, often bundling antidetection. For example, the private edition of Hav-rat version 1.2, a Trojan written by hacker Havalito, is advertised as being completely undetectable by antivirus companies. If it does get detected then it will be replaced with a new copy that again is supposedly undetectable.

Hackers can buy denial-of-service attacks for $100 per day, while spammers can buy CDs with harvested e-mail addresses. Spammers can also send mail via spam brokers, handled via online forums such as specialham.com and spamforum.biz. In this environment, $1 buys 1,000 to 5,000 credits, while $1,000 buys 10,000 compromised PCs. Credit is deducted when the spam is accepted by the target mail server. The brokers handle spam distribution via open proxies, relays and compromised PCs, while the sending is usually done from the client’s PC using broker-provided software and control information.

Carders, who mainly deal in stolen credit card details, openly publish prices, or engage in private negotiations to decide the price, with some sources giving bulk discounts for larger purchases. The rate for credit card details is approximately $1 for all the details down to the Card Verification Value (CVV); $10 for details with CVV linked to a Social Security number; and $50 for a full bank account.

Wikipedia, freedom, & changes in production

From Clay Shirky’s “Old Revolutions, Good; New Revolutions, Bad” (Britannica Blog: 14 June 2007):

Gorman’s theory about print — its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.

Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect – the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.

The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise ten-fold or a hundred-fold.

Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.

How movies are moved around on botnets

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Figure 2.11 illustrates the use of botnets for selling stolen intellectual property, in this case movies, TV shows, or video. The diagram is based on information from the Pyramid of Internet Piracy created by the Motion Picture Association of America (MPAA) and an actual case. To start the process, a supplier rips a movie or software from an existing DVD or uses a camcorder to record a first-run movie in the theaters. These are either burnt to DVDs to be sold on the black market or they are sold or provided to a Release Group. The Release Group is likely to be an organized crime group, excuse me, business associates who wish to invest in the entertainment industry. I am speculating that the Release Group engages (hires) a botnet operator that can meet their delivery and performance specifications. The botherder then commands the botnet clients to retrieve the media from the supplier and store it in a participating botnet client. These botnet clients may be qualified according to the system processor speed and the nature of the Internet connection. The huge Internet pipe, fast connection, and lax security at most universities make them a prime target for this form of botnet application. MPAA calls these clusters of high-speed locations “Topsites.”

. . .

According to the MPAA, 44 percent of all movie piracy is attributed to college students. Therefore it makes sense that the Release Groups would try to use university botnet clients as Topsites. The next groups in the chain are called Facilitators. They operate Web sites and search engines and act as Internet directories. These may be Web sites for which you pay a monthly fee or a fee per download. Finally, individuals download the films for their own use or make them available via peer-to-peer sharing applications like Gnutella or BitTorrent.

If concerts bring money in for the music biz, what happens when concerts get smaller?

From Jillian Cohen’s “The Show Must Go On” (The American: March/April 2008):

You can’t steal a concert. You can’t download the band—or the sweaty fans in the front row, or the merch guy, or the sound tech—to your laptop to take with you. Concerts are not like albums—easy to burn, copy, and give to your friends. If you want to share the concert-going experience, you and your friends all have to buy tickets. For this reason, many in the ailing music industry see concerts as the next great hope to revive their business.

It’s a blip that already is fading, to the dismay of the major record labels. CD sales have dropped 25 percent since 2000 and digital downloads haven’t picked up the slack. As layoffs swept the major labels this winter, many industry veterans turned their attention to the concert business, pinning their hopes on live performances as a way to bolster their bottom line.

Concerts might be a short-term fix. As one national concert promoter says, “The road is where the money is.” But in the long run, the music business can’t depend on concert tours for a simple, biological reason: the huge tour profits that have been generated in the last few decades have come from performers who are in their 40s, 50s, and 60s. As these artists get older, they’re unlikely to be replaced, because the industry isn’t investing in new talent development.

When business was good—as it was when CD sales grew through much of the 1990s—music labels saw concert tours primarily as marketing vehicles for albums. Now, they’re seizing on the reverse model. Tours have become a way to market the artist as a brand, with the fan clubs, limited-edition doodads, and other profitable products and services that come with the territory.

“Overall, it’s not a pretty picture for some parts of the industry,” JupiterResearch analyst David Card wrote in November when he released a report on digital music sales. “Labels must act more like management companies, and tap into the broadest collection of revenue streams and licensing as possible,” he said. “Advertising and creative packaging and bundling will have to play a bigger role than they have. And the $3 billion-plus touring business is not exactly up for grabs—it’s already competitive and not very profitable. Music companies of all types need to use the Internet for more cost-effective marketing, and A&R [artist development] risk has to be spread more fairly.”

The ‘Heritage Act’ Dilemma

Even so, belief in the touring business was so strong last fall that Madonna signed over her next ten years to touring company Live Nation—the folks who put on megatours for The Rolling Stones, The Police, and other big headliners—in a deal reportedly worth more than $120 million. The Material Girl’s arrangement with Live Nation is known in the industry as a 360-degree deal. Such deals may give artists a big upfront payout in exchange for allowing record labels or, in Madonna’s case, tour producers to profit from all aspects of their business, including touring, merchandise, sponsorships, and more.

While 360 deals may work for big stars, insiders warn that they’re not a magic bullet that will save record labels from their foundering, top-heavy business model. Some artists have done well by 360 contracts, including alt-metal act Korn and British pop sensation Robbie Williams. With these successes in mind, some tout the deals as a way for labels to recoup money they’re losing from downloads and illegal file sharing. But the artists who are offered megamillions for a piece of their brand already have built it through years of album releases, heavy touring, and careful fan-base development.

Not all these deals are good ones, says Bob McLynn, who manages pop-punk act Fall Out Boy and other young artists through his agency, Crush Management. Labels still have a lot to offer, he says. They pay for recording sessions, distribute CDs, market a band’s music, and put up money for touring, music-video production, and other expenses. But in exchange, music companies now want to profit from more than a band’s albums and recording masters. “The artist owns the brand, and now the labels—because they can’t sell as many albums—are trying to get in on the brand,” McLynn says. “To be honest, if an artist these days is looking for a traditional major-label deal for several hundred thousand dollars, they will have to be willing to give up some of that brand.”

For a young act, such offers may be enticing, but McLynn urges caution. “If they’re not going to give you a lot of money for it, it’s a mistake,” says the manager, who helped build Fall Out Boy’s huge teen fan base through constant touring and Internet marketing, only later signing the band to a big label. “I had someone from a major label ask me recently, ‘Hey, I have this new artist; can we convert the deal to a 360 deal?’” McLynn recalls. “I told him [it would cost] $2 million to consider it. He thought I was crazy; but I’m just saying, how is that crazy? If you want all these extra rights and if this artist does blow up, then that’s the best deal in the world for you. If you’re not taking a risk, why am I going to give you this? And if it’s not a lot of money, you’re not taking a risk.”

A concert-tour company’s margin is about 4 percent, Live Nation CEO Michael Rapino has said, while the take on income from concessions, T-shirts, and other merchandise sold at shows can be much higher. The business had a record-setting year in 2006, which saw The Rolling Stones, Madonna, U2, Barbra Streisand, and other popular, high-priced tours on the road. But in 2007, North American gross concert dollars dropped more than 10 percent to $2.6 billion, according to Billboard statistics. Concert attendance fell by more than 19 percent to 51 million. Fewer people in the stands means less merchandise sold and concession-stand food eaten.

Now add this wrinkle: if you pour tens of millions of dollars into a 360 deal, as major labels and Live Nation have done with their big-name stars, you will need the act to tour for a long time to recoup your investment. “For decades we’ve been fueled by acts from the ’60s,” says Gary Bongiovanni, editor of the touring-industry trade magazine Pollstar. Three decades ago, no one would have predicted that Billy Joel or Rod Stewart would still be touring today, Bongiovanni notes, yet the industry has come to depend on artists such as these, known as “heritage acts.” “They’re the ones that draw the highest ticket prices and biggest crowds for our year-end charts,” he says. Consider the top-grossing tours of 2006 and 2007: veterans such as The Rolling Stones, Rod Stewart, Barbra Streisand, and Roger Waters were joined by comparative youngsters Madonna, U2, and Bon Jovi. Only two of the 20 acts—former Mouseketeers Justin Timberlake and Christina Aguilera—were younger than 30.

These young stars, the ones who are prone to taking what industry observer Bob Lefsetz calls “media shortcuts,” such as appearing on MTV, may have less chance of developing real staying power. Lefsetz, formerly an entertainment lawyer and consultant to major labels, has for 20 years published an industry newsletter (now a blog) called the Lefsetz Letter. “Whatever a future [superstar] act will be, it won’t be as ubiquitous as the acts from the ’60s because we were all listening to Top 40 radio,” he says.

From the 1960s to the 1980s, music fans discovered new music primarily on the radio and purchased albums in record stores. The stations young people listened to might have played rock, country, or soul; but whatever the genre, DJs introduced listeners to the hits of tomorrow and guided them toward retail stores and concert halls.

Today, music is available in so many genres and subgenres, via so many distribution streams—including cell phones, social networking sites, iTunes, Pure Volume, and Limewire—that common ground rarely exists for post–Baby Boom fans. This in turn makes it harder for tour promoters to corral the tens of thousands of ticket holders they need to fill an arena. “More people can make music than ever before. They can get it heard, but it’s such a cacophony of noise that it will be harder to get any notice,” says Lefsetz.

Most major promoters don’t know how to capture young people’s interest and translate it into ticket sales, he says. It’s not that his students don’t listen to music, but that they seek to discover it online, from friends, or via virtual buzz. They’ll go out to clubs and hear bands, but they rarely attend big arena concerts. Promoters typically spend 40 percent to 50 percent of their promotional budgets on radio and newspaper advertising, Barnet says. “High school and college students—what percentage of tickets do they buy? And you’re spending most of your advertising dollars on media that don’t even focus on those demographics.” Conversely, the readers and listeners of traditional media are perfect for high-grossing heritage tours. As long as tickets sell for those events, promoters won’t have to change their approach, Barnet says. Heritage acts also tend to sell more CDs, says Pollstar’s Bongiovanni. “Your average Rod Stewart fan is more likely to walk into a record store, if they can find one, than your average Fall Out Boy fan.”

Personally, [Live Nation’s chairman of global music and global touring, Arthur Fogel] said, he’d been disappointed in the young bands he’d seen open for the headliners on Live Nation’s big tours. Live performance requires a different skill set from recorded tracks. It’s the difference between playing music and putting on a show, he said. “More often than not, I find young bands get up and play their music but are not investing enough time or energy into creating that show.” It’s incumbent on the industry to find bands that can rise to the next level, he added. “We aren’t seeing that development that’s creating the next generation of stadium headliners. Hopefully that will change.”

Live Nation doesn’t see itself spearheading such a change, though. In an earlier interview with Billboard magazine, Rapino took a dig at record labels’ model of bankrolling ten bands in the hope that one would become a success. “We don’t want to be in the business of pouring tens of millions of dollars into unknown acts, throwing it against the wall and then hoping that enough sticks that we only lose some of our money,” he said. “It’s not part of our business plan to be out there signing 50 or 60 young acts every year.”

And therein lies the rub. If the big dog in the touring pack won’t take responsibility for nurturing new talent and the labels have less capital to invest in artist development, where will the future megatour headliners come from?

Indeed, despite its all-encompassing moniker, the 360 deal isn’t the only option for musicians, nor should it be. Some artists may find they need the distribution reach and bankroll that a traditional big-label deal provides. Others might negotiate with independent labels for profit sharing or licensing arrangements in which they’ll retain more control of their master recordings. Many will earn the bulk of their income from licensing their songs for use on TV shows, movie soundtracks, and video games. Some may take an entirely do-it-yourself approach, in which they’ll write, produce, perform, and distribute all of their own music—and keep any of the profits they make.

There are growing signs of this transition. The Eagles recently partnered with Wal-Mart to give the discount chain exclusive retail-distribution rights to the band’s latest album. Paul McCartney chose to release his most recent record through Starbucks, and last summer Prince gave away his newest CD to London concertgoers and to readers of a British tabloid. And in a move that earned nearly as much ink as Madonna’s 360 deal, rock act Radiohead let fans download its new release directly from the band’s website for whatever price listeners were willing to pay. Though the numbers are debated, one source, ComScore, reported that in the first month 1.2 million people downloaded the album. About 40 percent paid for it, at an average of about $6 each—well above the usual cut an artist would get in royalties. The band also self-released the album in an $80 limited-edition package and, months later, as a CD with traditional label distribution. Such a move wouldn’t work for just any artist. Radiohead had the luxury of a fan base that it developed over more than a dozen years with a major label. But the band’s experiment showed creativity and adaptability.

Details on the Storm & Nugache botnets

From Dennis Fisher’s “Storm, Nugache lead dangerous new botnet barrage” (SearchSecurity.com: 19 December 2007):

[Dave Dittrich, a senior security engineer and researcher at the University of Washington in Seattle], one of the top botnet researchers in the world, has been tracking botnets for close to a decade and has seen it all. But this new piece of malware, which came to be known as Nugache, was a game-changer. With no C&C server to target, bots capable of sending encrypted packets and the possibility of any peer on the network suddenly becoming the de facto leader of the botnet, Nugache, Dittrich knew, would be virtually impossible to stop.

Dittrich and other researchers say that when they analyze the code these malware authors are putting out, what emerges is a picture of a group of skilled, professional software developers learning from their mistakes, improving their code on a weekly basis and making a lot of money in the process.

The way that Storm, Nugache and other similar programs make money for their creators is typically twofold. First and foremost, Storm’s creator controls a massive botnet that he can use to send out spam runs, either for himself or for third parties who pay for the service. Storm-infected PCs have been sending out various spam messages, including pump-and-dump stock scams, pitches for fake medications and highly targeted phishing messages, throughout 2007, and by some estimates were responsible for more than 75% of the spam on the Internet at certain points this year.

Secondly, experts say that Storm’s author has taken to sectioning off his botnet into smaller pieces and then renting those subnets out to other attackers. Estimates of the size of the Storm network have ranged as high as 50 million PCs, but Brandon Enright, a network security analyst at the University of California at San Diego, who wrote a tool called Stormdrain to locate and count infected machines, put the number at closer to 20,000. Dittrich estimates that the size of the Nugache network was roughly equivalent to Enright’s estimates for Storm.

“The Storm network has a team of very smart people behind it. They change it constantly. When the attacks against searching started to be successful, they completely changed how commands are distributed in the network,” said Enright. “If AV adapts, they re-adapt. If attacks by researchers adapt, they re-adapt. If someone tries to DoS their distribution system, they DoS back.”

The other worrisome detail in all of this is that there’s significant evidence that the authors of these various pieces of malware are sharing information and techniques, if not collaborating outright.

“I’m pretty sure that there are tactics being shared between the Nugache and Storm authors,” Dittrich said. “There’s a direct lineage from Sdbot to Rbot to Mytob to Bancos. These guys can just sell the Web front-end to these things and the customers can pick their options and then just hit go.”

Once just a hobby for devious hackers, writing malware is now a profession and its products have helped create a global shadow economy. That infrastructure stretches from the mob-controlled streets of Moscow to the back alleys of Malaysia to the office parks of Silicon Valley. In that regard, Storm, Nugache and the rest are really just the first products off the assembly line, the Model Ts of P2P malware.

Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users get any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.

From P2P to social sharing

From Clay Shirky’s “File-sharing Goes Social”:

The RIAA has taken us on a tour of networking strategies in the last few years, by constantly changing the environment file-sharing systems operate in. In hostile environments, organisms often adapt to become less energetic but harder to kill, and so it is now. With the RIAA’s waves of legal attacks driving experimentation with decentralized file-sharing tools, file-sharing networks have progressively traded efficiency for resistance to legal attack. …

There are several activities that are both illegal and popular, and these suffer from what economists call high transaction costs. Buying marijuana involves considerably more work than buying roses, in part because every transaction involves risk for both parties, and in part because neither party can rely on the courts for redress from unfair transactions. As a result, the market for marijuana today (or NYC tattoo artists in the 1980s, or gin in the 1920s, etc) involves trusted intermediaries who broker introductions.

These intermediaries act as a kind of social Visa system; in the same way a credit card issuer has a relationship with both buyer and seller, and an incentive to see that transactions go well, an introducer in an illegal transaction has an incentive to make sure that neither side defects from the transaction. And all parties, of course, have an incentive to avoid detection. …

There are many ways to move to such membrane-bounded systems, of course, including retrofitting existing networks to allow sub-groups with controlled membership (possibly using email white-list or IM buddy-list tools); adopting any of the current peer-to-peer tools designed for secure collaboration (e.g. Groove, Shinkuro, WASTE etc); or even going to physical distribution. As Andrew Odlyzko has pointed out, sending disks through the mail can move enough bits in a 24 hour period to qualify as broadband, and there are now file-sharing networks whose members simply snail mail one another mountable drives of music. …
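Odlyzko’s point is easy to check with back-of-the-envelope numbers. The figures below are illustrative assumptions, not his: a single mailed drive delivered within 24 hours carries an effective bandwidth well into broadband territory.

```python
# Rough arithmetic behind the "sneakernet as broadband" observation.
# The drive size is an assumption chosen for illustration.
drive_bytes = 500 * 10**9              # a 500 GB drive
seconds = 24 * 60 * 60                 # delivered in one day
mbit_per_s = drive_bytes * 8 / seconds / 10**6
print(f"{mbit_per_s:.0f} Mbit/s")      # ~46 Mbit/s effective throughput
```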

The disadvantage of social sharing is simple — limited membership means fewer files. The advantage is equally simple — a socially bounded system is more effective than nothing, and safer than Kazaa. …

Cameraphones are different cameras & different phones

From David Pescovitz’s “The Big Picture”:

Mobile researcher John Poisson, CEO of the Fours Initiative, focuses on how cameraphones could revolutionize photography and communication — if people would only start using them more.

As the leader of Sony Corporation’s mobile media research and design groups in Tokyo, John Poisson spent two years focused on how people use cameraphones, and why they don’t use them more often.

TheFeature: What have you learned over the course of your research?

Poisson: People think of the cameraphone as a more convenient tool for digital photography, an extension of the digital camera. That’s missing the mark. The mobile phone is a communications device. The minute you attach a camera to that, and give people the ability to share the content that they’re creating in real time, the dynamic changes significantly.

TheFeature: Aren’t providers already developing applications to take advantage of that shift?

Poisson: Well, we have things like the ability to moblog, to publish pictures to a blog, which is not necessarily the most relevant model to consumers. Those tools are developed by people who understand blogging and apply it in their daily lives. But it ignores the trend that we and Mimi Ito and others are seeing as part of the evolution of photography. If you look at the way people have (historically) used cameras, it started off with portraiture and photographs of record — formalized photographs with a capital “P.” Then as the technology evolved, we had this notion of something called a snapshot, which is much more informal. People could take a higher number of pictures with not so much concern over composition. It was more about capturing an experience than photographing something. The limit of that path was the Polaroid. It was about taking the picture and sharing it instantly. What we have today is the ability to create a kind of distributed digital manifestation of that process.
