The life cycle of a botnet client

From Chapter 2, “Botnets Overview,” of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

What makes a botnet a botnet? In particular, how do you distinguish a botnet client from just another hacker break-in? First, the clients in a botnet must be able to take action without the hacker having to log into their operating system (Windows, UNIX, or Mac OS). Second, many clients must be able to act in a coordinated fashion to accomplish a common goal with little or no intervention from the hacker. If a collection of computers meets both of these criteria, it is a botnet.

The life of a botnet client, or botclient, begins when it has been exploited. A prospective botclient can be exploited via malicious code that a user is tricked into running; attacks against unpatched vulnerabilities; backdoors left by Trojan worms or remote-access Trojans; and password guessing and brute-force access attempts. In this section we’ll discuss each of these routes to exploitation.

Rallying and Securing the Botnet Client

Although the order in the life cycle may vary, at some point early in the life of a new botnet client it must call home, a process called “rallying.” When rallying, the botnet client initiates contact with the botnet Command and Control (C&C) server. Currently, most botnets use IRC for Command and Control.

Rallying is the term for the first time a botnet client logs in to a C&C server. The login may use some form of encryption or authentication to limit the ability of others to eavesdrop on the communications. Some botnets are beginning to encrypt the communicated data.

At this point the new botnet client may request updates. These could be new exploit software or an updated list of C&C server names, IP addresses, and/or channel names. This ensures that the botnet client can still be managed, and can be recovered should the current C&C server be taken offline.

The next order of business is to secure the new client from removal. The client can request the location of the latest anti-antivirus (Anti-A/V) tool from the C&C server. The newly controlled botclient would download this software and execute it to remove the A/V tool, hide from it, or render it ineffective.

Shutting off the A/V tool may raise suspicions if the user is observant. Some botclients will run a DLL that neuters the A/V tool. With an Anti-A/V DLL in place, the A/V tool may appear to be working normally, except that it never detects or reports the files related to the botnet client. The malware may also change the Hosts and LMHosts files so that attempts to contact an A/V vendor for updates will not succeed. Using this method, attempts to contact an A/V vendor can be redirected to a site containing malicious code or can yield a “website or server not found” error.
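
The Hosts-file trick has an easy defensive counterpart. The Python sketch below (the vendor domains and file path are my own illustrative assumptions, not from Schiller’s book) scans a Windows hosts file for entries naming well-known A/V update servers, which legitimate configurations almost never contain; such an entry either blocks updates or redirects them.

```python
# A minimal sketch, assuming these vendor domains and the standard
# Windows hosts path. Legitimate hosts files almost never pin A/V
# update servers, so any match is worth investigating.
AV_DOMAINS = {
    "update.symantec.com",
    "liveupdate.symantec.com",
    "download.mcafee.com",
    "updates.kaspersky.com",
}

def suspicious_hosts_entries(path=r"C:\Windows\System32\drivers\etc\hosts"):
    hits = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # strip comments and whitespace
            if not line:
                continue
            ip, *names = line.split()
            hits += [(ip, n) for n in names if n.lower() in AV_DOMAINS]
    return hits

if __name__ == "__main__":
    for ip, name in suspicious_hosts_entries():
        print(f"hosts file maps {name} -> {ip}")
```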

One tool, hidden32.exe, is used to hide applications that have a GUI from the user. Its use is simple: the botherder creates a batch file that executes hidden32 with the name of the executable to be hidden as its parameter. Another stealthy tool, HideUserv2, adds an invisible user to the administrator group.

Waiting for Orders and Retrieving the Payload

Once secured, the botnet client will listen to the C&C communications channel.

The botnet client will then request the associated payload. The payload is the term I use for the software that carries out the intended function of this particular botnet client.

How the Greek cell phone network was compromised

From Vassilis Prevelakis and Diomidis Spinellis’ “The Athens Affair” (IEEE Spectrum: July 2007):

On 9 March 2005, a 38-year-old Greek electrical engineer named Costas Tsalikidis was found hanged in his Athens loft apartment, an apparent suicide. It would prove to be merely the first public news of a scandal that would roil Greece for months.

The next day, the prime minister of Greece was told that his cellphone was being bugged, as were those of the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.

The victims were customers of Athens-based Vodafone-Panafon, generally known as Vodafone Greece, the country’s largest cellular service provider; Tsalikidis was in charge of network planning at the company.

We now know that the illegally implanted software, which was eventually found in a total of four of Vodafone’s Greek switches, created parallel streams of digitized voice for the tapped phone calls. One stream was the ordinary one, between the two calling parties. The other stream, an exact copy, was directed to other cellphones, allowing the tappers to listen in on the conversations on the cellphones, and probably also to record them. The software also routed location and other information about those phone calls to these shadow handsets via automated text messages.

The day after Tsalikidis’s body was discovered, CEO Koronias met with the director of the Greek prime minister’s political office. Yiannis Angelou, and the minister of public order, Giorgos Voulgarakis. Koronias told them that rogue software used the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones and handed over a list of bugged numbers. Besides the prime minister and his wife, phones belonging to the ministers of national defense, foreign affairs, and justice, the mayor of Athens, and the Greek European Union commissioner were all compromised. Others belonged to members of civil rights organizations, peace activists, and antiglobalization groups; senior staff at the ministries of National Defense, Public Order, Merchant Marine, and Foreign Affairs; the New Democracy ruling party; the Hellenic Navy general staff; and a Greek-American employee at the United States Embassy in Athens.

First, consider how a phone call, yours or a prime minister’s, gets completed. Long before you dial a number on your handset, your cellphone has been communicating with nearby cellular base stations. One of those stations, usually the nearest, has agreed to be the intermediary between your phone and the network as a whole. Your telephone handset converts your words into a stream of digital data that is sent to a transceiver at the base station.

The base station’s activities are governed by a base station controller, a special-purpose computer within the station that allocates radio channels and helps coordinate handovers between the transceivers under its control.

This controller in turn communicates with a mobile switching center that takes phone calls and connects them to call recipients within the same switching center, other switching centers within the company, or special exchanges that act as gateways to foreign networks, routing calls to other telephone networks (mobile or landline). The mobile switching centers are particularly important to the Athens affair because they hosted the rogue phone-tapping software, and it is there that the eavesdropping originated. They were the logical choice, because they are at the heart of the network; the intruders needed to take over only a few of them in order to carry out their attack.

Both the base station controllers and the switching centers are built around a large computer, known as a switch, capable of creating a dedicated communications path between a phone within its network and, in principle, any other phone in the world. Switches are holdovers from the 1970s, an era when powerful computers filled rooms and were built around proprietary hardware and software. Though these computers are smaller nowadays, the system’s basic architecture remains largely unchanged.

Like most phone companies, Vodafone Greece uses the same kind of computer for both its mobile switching centers and its base station controllers—Ericsson’s AXE line of switches. A central processor coordinates the switch’s operations and directs the switch to set up a speech or data path from one phone to another and then routes a call through it. Logs of network activity and billing records are stored on disk by a separate unit, called a management processor.

The key to understanding the hack at the heart of the Athens affair is knowing how the Ericsson AXE allows lawful intercepts—what are popularly called “wiretaps.” Though the details differ from country to country, in Greece, as in most places, the process starts when a law enforcement official goes to a court and obtains a warrant, which is then presented to the phone company whose customer is to be tapped.

Nowadays, all wiretaps are carried out at the central office. In AXE exchanges a remote-control equipment subsystem, or RES, carries out the phone tap by monitoring the speech and data streams of switched calls. It is a software subsystem typically used for setting up wiretaps, which only law officers are supposed to have access to. When the wiretapped phone makes a call, the RES copies the conversation into a second data stream and diverts that copy to a phone line used by law enforcement officials.

Ericsson optionally provides an interception management system (IMS), through which lawful call intercepts are set up and managed. When a court order is presented to the phone company, its operators initiate an intercept by filling out a dialog box in the IMS software. The optional IMS in the operator interface and the RES in the exchange each contain a list of wiretaps: wiretap requests in the case of the IMS, actual taps in the RES. Only IMS-initiated wiretaps should be active in the RES, so a wiretap in the RES without a request for a tap in the IMS is a pretty good indicator that an unauthorized tap has occurred. An audit procedure can be used to find any discrepancies between them.
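
Conceptually, that audit is a set comparison: any tap active in the RES without a matching request in the IMS is unauthorized. A minimal sketch with invented phone numbers (the real audit compares Ericsson AXE data structures, not Python sets, but the logic is the same set difference):

```python
# Invented numbers for illustration only.
ims_requests = {"+30210555001", "+30210555002"}                # warranted taps
res_active = {"+30210555001", "+30210555002", "+30693555999"}  # actually tapped

unauthorized = res_active - ims_requests  # taps nobody asked for
orphaned = ims_requests - res_active      # requests that never activated

for number in sorted(unauthorized):
    print(f"ALERT: active tap on {number} with no matching IMS request")
```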

It took guile and some serious programming chops to manipulate the lawful call-intercept functions in Vodafone’s mobile switching centers. The intruders’ task was particularly complicated because they needed to install and operate the wiretapping software on the exchanges without being detected by Vodafone or Ericsson system administrators. From time to time the intruders needed access to the rogue software to update the lists of monitored numbers and shadow phones. These activities had to be kept off all logs, while the software itself had to be invisible to the system administrators conducting routine maintenance activities. The intruders achieved all these objectives.

The challenge faced by the intruders was to use the RES’s capabilities to duplicate and divert the bits of a call stream without using the dialog-box interface to the IMS, which would create auditable logs of their activities. The intruders pulled this off by installing a series of patches to 29 separate blocks of code, according to Ericsson officials who testified before the Greek parliamentary committee that investigated the wiretaps. This rogue software modified the central processor’s software to directly initiate a wiretap, using the RES’s capabilities. Best of all, for them, the taps were not visible to the operators, because the IMS and its user interface weren’t used.

The full version of the software would have recorded the phone numbers being tapped in an official registry within the exchange. And, as we noted, an audit could then find a discrepancy between the numbers monitored by the exchange and the warrants active in the IMS. But the rogue software bypassed the IMS. Instead, it cleverly stored the bugged numbers in two data areas that were part of the rogue software’s own memory space, which was within the switch’s memory but isolated and not made known to the rest of the switch.

That by itself put the rogue software a long way toward escaping detection. But the perpetrators hid their own tracks in a number of other ways as well. There were a variety of circumstances by which Vodafone technicians could have discovered the alterations to the AXE’s software blocks. For example, they could have taken a listing of all the blocks, which would show all the active processes running within the AXE—similar to the task manager output in Microsoft Windows or the process status (ps) output in Unix. They then would have seen that some processes were active, though they shouldn’t have been. But the rogue software apparently modified the commands that list the active blocks in a way that omitted certain blocks—the ones that related to intercepts—from any such listing.
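
This is the classic cross-view detection idea, and it fits in a few lines: compare what the (possibly hooked) listing command reports against an enumeration obtained out of band; anything present in the raw view but missing from the command’s output is being hidden. A sketch with invented block names:

```python
# Cross-view diff: the hooked command's output versus an out-of-band
# enumeration of what is actually loaded. Block names are invented.
def hidden_blocks(command_listing, raw_enumeration):
    return sorted(set(raw_enumeration) - set(command_listing))

command_view = ["BILLING", "CALLSETUP", "HANDOVER"]           # what the command prints
raw_view = ["BILLING", "CALLSETUP", "HANDOVER", "RES_PATCH"]  # what is really there

print(hidden_blocks(command_view, raw_view))  # ['RES_PATCH']
```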

In addition, the rogue software might have been discovered during a software upgrade or even when Vodafone technicians installed a minor patch. It is standard practice in the telecommunications industry for technicians to verify the existing block contents before performing an upgrade or patch. We don’t know why the rogue software was not detected in this way, but we suspect that the software also modified the operation of the command used to print the checksums—codes that create a kind of signature against which the integrity of the existing blocks can be validated. One way or another, the blocks appeared unaltered to the operators.
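
That verification step reduces to recompute-and-compare. The sketch below assumes block images can be read as bytes and that a trusted baseline of digests was recorded in advance; on the AXE the recomputation was done by the very print command suspected of being patched, which is the whole problem.

```python
import hashlib

# Recompute-and-compare integrity check, assuming readable block images
# and a previously recorded, trusted baseline of digests.
def digest(block_bytes: bytes) -> str:
    return hashlib.sha256(block_bytes).hexdigest()

def altered_blocks(blocks: dict, baseline: dict) -> list:
    """Return names of blocks whose current digest differs from baseline."""
    return [name for name, code in blocks.items()
            if digest(code) != baseline.get(name)]

# Caveat, and the crux of the Athens case: if the command that prints
# the checksums is itself patched to echo baseline values, this check
# passes on altered blocks. Digests must come from an independent tool.
```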

Finally, the software included a back door to allow the perpetrators to control it in the future. This, too, was cleverly constructed to avoid detection. A report by the Hellenic Authority for the Information and Communication Security and Privacy (the Greek abbreviation is ADAE) indicates that the rogue software modified the exchange’s command parser—a routine that accepts commands from a person with system administrator status—so that innocuous commands followed by six spaces would deactivate the exchange’s transaction log and the alarm associated with its deactivation, and allow the execution of commands associated with the lawful interception subsystem. In effect, it was a signal to allow operations associated with the wiretaps but leave no trace of them. It also added a new user name and password to the system, which could be used to obtain access to the exchange.
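
The trigger is simple enough to restate in code. What follows is a conceptual re-creation, not the intruders’ PLEX: a command ending in exactly six spaces silently disables logging and unlocks the interception subsystem, while every other command behaves normally.

```python
# Conceptual re-creation of the trigger ADAE describes. The real code
# lived inside the AXE command parser and was written in PLEX.
class CommandParser:
    def __init__(self):
        self.logging_enabled = True
        self.res_unlocked = False

    def execute(self, raw_command: str):
        if raw_command.endswith(" " * 6):
            # Backdoor path: no log entry, and no alarm about the
            # transaction log being switched off.
            self.logging_enabled = False
            self.res_unlocked = True
            return
        if self.logging_enabled:
            print(f"LOG: {raw_command.strip()}")
        # ...normal command dispatch would follow here...
```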

…Security experts have also discovered other rootkits for general-purpose operating systems, such as Linux, Windows, and Solaris, but to our knowledge this is the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch.

So the investigators painstakingly reconstructed an approximation of the original PLEX source files that the intruders developed. It turned out to be the equivalent of about 6500 lines of code, a surprisingly substantial piece of software.

9 reasons the Storm botnet is different

From Bruce Schneier’s “Gathering ‘Storm’ Superworm Poses Grave Threat to PC Nets” (Wired: 4 October 2007):

Storm represents the future of malware. Let’s look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. …

3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. …

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. …

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect. …

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” … (A sketch of what fast flux looks like to a resolver follows this list.)

6. Storm’s payload — the code it uses to spread — morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.

7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites — anything to entice users to click on a phony link. …

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. …

9. Last month, Storm began attacking anti-spam sites focused on identifying it — spamhaus.org, 419eater and so on — and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
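
The fast flux of item 5 is observable from any vantage point with a resolver: query the same name repeatedly and the address set churns. A minimal sketch, using a placeholder domain rather than a real Storm domain:

```python
import socket
import time

# "flux.example.com" is a placeholder, not a real Storm domain.
# Fast-flux records typically carry very short TTLs, so each round
# of resolution can return a fresh set of compromised front-end hosts.
seen = set()
for _ in range(5):
    try:
        _, _, addrs = socket.gethostbyname_ex("flux.example.com")
    except socket.gaierror:
        break  # the placeholder will not resolve; a real flux domain would
    fresh = set(addrs) - seen
    if fresh:
        print("new addresses this round:", sorted(fresh))
    seen |= set(addrs)
    time.sleep(60)
```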

The Chinese Internet threat

From Shane Harris’ “China’s Cyber-Militia” (National Journal: 31 May 2008):

Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.

One prominent expert told National Journal he believes that China’s People’s Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. “They said that, with confidence, it had been traced back to the PLA.” These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.

Bennett, whose former trade association includes some of the nation’s largest computer-security companies and who has testified before Congress on the vulnerability of information networks, also said that a blackout in February, which affected 3 million customers in South Florida, was precipitated by a cyber-hacker. That outage cut off electricity along Florida’s east coast, from Daytona Beach to Monroe County, and affected eight power-generating stations.

A second information-security expert independently corroborated Bennett’s account of the Florida blackout. According to this individual, who cited sources with direct knowledge of the investigation, a Chinese PLA hacker attempting to map Florida Power & Light’s computer infrastructure apparently made a mistake.

The industry source, who conducts security research for government and corporate clients, said that hackers in China have devoted considerable time and resources to mapping the technology infrastructure of other U.S. companies. That assertion has been backed up by the current vice chairman of the Joint Chiefs of Staff, who said last year that Chinese sources are probing U.S. government and commercial networks.

“The Chinese operate both through government agencies, as we do, but they also operate through sponsoring other organizations that are engaging in this kind of international hacking, whether or not under specific direction. It’s a kind of cyber-militia.… It’s coming in volumes that are just staggering.”

In addition to disruptive attacks on networks, officials are worried about the Chinese using long-established computer-hacking techniques to steal sensitive information from government agencies and U.S. corporations.

Brenner, the U.S. counterintelligence chief, said he knows of “a large American company” whose strategic information was obtained by its Chinese counterparts in advance of a business negotiation. As Brenner recounted the story, “The delegation gets to China and realizes, ‘These guys on the other side of the table know every bottom line on every significant negotiating point.’ They had to have got this by hacking into [the company’s] systems.”

During a trip to Beijing in December 2007, spyware programs designed to clandestinely remove information from personal computers and other electronic equipment were discovered on devices used by Commerce Secretary Carlos Gutierrez and possibly other members of a U.S. trade delegation, according to a computer-security expert with firsthand knowledge of the spyware used. Gutierrez was in China with the Joint Commission on Commerce and Trade, a high-level delegation that includes the U.S. trade representative and that meets with Chinese officials to discuss such matters as intellectual-property rights, market access, and consumer product safety. According to the computer-security expert, the spyware programs were designed to open communications channels to an outside system, and to download the contents of the infected devices at regular intervals. The source said that the computer codes were identical to those found in the laptop computers and other devices of several senior executives of U.S. corporations who also had their electronics “slurped” while on business in China.

The Chinese make little distinction between hackers who work for the government and those who undertake cyber-adventures on its behalf. “There’s a huge pool of Chinese individuals, students, academics, unemployed, whatever it may be, who are, at minimum, not discouraged from trying this out,” said Rodger Baker, a senior China analyst for Stratfor, a private intelligence firm. So-called patriotic-hacker groups have launched attacks from inside China, usually aimed at people they think have offended the country or pose a threat to its strategic interests. At a minimum the Chinese government has done little to shut down these groups, which are typically composed of technologically skilled and highly nationalistic young men.

The military is not waiting for China, or any other nation or hacker group, to strike a lethal cyber-blow. In March, Air Force Gen. Kevin Chilton, the chief of U.S. Strategic Command, said that the Pentagon has its own cyberwar plans. “Our challenge is to define, shape, develop, deliver, and sustain a cyber-force second to none,” Chilton told the Senate Armed Services Committee. He asked appropriators for an “increased emphasis” on the Defense Department’s cyber-capabilities to help train personnel to “conduct network warfare.”

The Air Force is in the process of setting up a Cyberspace Command, headed by a two-star general and comprising about 160 individuals assigned to a handful of bases. As Wired noted in a recent profile, Cyberspace Command “is dedicated to the proposition that the next war will be fought in the electromagnetic spectrum and that computers are military weapons.” The Air Force has launched a TV ad campaign to drum up support for the new command, and to call attention to cyberwar. “You used to need an army to wage a war,” a narrator in the TV spot declares. “Now all you need is an Internet connection.”

Lots of good info about the FBI’s far-reaching wiretapping of US phone systems

From Ryan Singel’s “Point, Click … Eavesdrop: How the FBI Wiretap Net Operates” (Wired News: 29 August 2007):

The FBI has quietly built a sophisticated, point-and-click surveillance system that performs instant wiretaps on almost any communications device, according to nearly a thousand pages of restricted documents newly released under the Freedom of Information Act.

The surveillance system, called DCSNet, for Digital Collection System Network, connects FBI wiretapping rooms to switches controlled by traditional land-line operators, internet-telephony providers and cellular companies. It is far more intricately woven into the nation’s telecom infrastructure than observers suspected.

It’s a “comprehensive wiretap system that intercepts wire-line phones, cellular phones, SMS and push-to-talk systems,” says Steven Bellovin, a Columbia University computer science professor and longtime surveillance expert.

DCSNet is a suite of software that collects, sifts and stores phone numbers, phone calls and text messages. The system directly connects FBI wiretapping outposts around the country to a far-reaching private communications network.

The $10 million DCS-3000 client, also known as Red Hook, handles pen-registers and trap-and-traces, a type of surveillance that collects signaling information — primarily the numbers dialed from a telephone — but no communications content. (Pen registers record outgoing calls; trap-and-traces record incoming calls.)

DCS-6000, known as Digital Storm, captures and collects the content of phone calls and text messages for full wiretap orders.
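
The split between these two systems mirrors the legal distinction between signaling and content. A rough illustration with invented field names: a pen-register or trap-and-trace record carries who called whom and when, while a full-content intercept adds the communication itself.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative data model only; field names are invented.
@dataclass
class SignalingRecord:                 # DCS-3000 territory
    target: str                        # the monitored line
    other_party: str                   # number dialed or number calling in
    direction: str                     # "outgoing" = pen register, "incoming" = trap-and-trace
    timestamp: datetime

@dataclass
class ContentRecord(SignalingRecord):  # DCS-6000 territory; needs a full wiretap order
    payload: bytes                     # recorded audio or message text
```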

A third, classified system, called DCS-5000, is used for wiretaps targeting spies or terrorists.

What DCSNet Can Do

Together, the surveillance systems let FBI agents play back recordings even as they are being captured (like TiVo), create master wiretap files, send digital recordings to translators, track the rough location of targets in real time using cell-tower information, and even stream intercepts outward to mobile surveillance vans.

FBI wiretapping rooms in field offices and undercover locations around the country are connected through a private, encrypted backbone that is separated from the internet. Sprint runs it on the government’s behalf.

The network allows an FBI agent in New York, for example, to remotely set up a wiretap on a cell phone based in Sacramento, California, and immediately learn the phone’s location, then begin receiving conversations, text messages and voicemail pass codes in New York. With a few keystrokes, the agent can route the recordings to language specialists for translation.

The numbers dialed are automatically sent to FBI analysts trained to interpret phone-call patterns, and are transferred nightly, by external storage devices, to the bureau’s Telephone Application Database, where they’re subjected to a type of data mining called link analysis.
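
Link analysis at its simplest treats call records as a graph. A toy version with invented numbers (the bureau’s Telephone Application Database presumably does the same at vastly larger scale): build the call graph, then list everyone within two hops of a target.

```python
from collections import defaultdict

# Invented call-detail records: (caller, callee) pairs.
calls = [
    ("555-0100", "555-0101"),
    ("555-0101", "555-0102"),
    ("555-0100", "555-0103"),
    ("555-0104", "555-0102"),
]

graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)

def within_two_hops(target):
    direct = set(graph[target])
    indirect = set().union(*(graph[n] for n in direct)) - direct - {target}
    return direct, indirect

direct, indirect = within_two_hops("555-0100")
print("direct contacts:", sorted(direct))   # ['555-0101', '555-0103']
print("two hops out:", sorted(indirect))    # ['555-0102']
```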

The numerical scope of DCSNet surveillance is still guarded. But we do know that as telecoms have become more wiretap-friendly, the number of criminal wiretaps alone has climbed from 1,150 in 1996 to 1,839 in 2006. That’s a 60 percent jump. And in 2005, 92 percent of those criminal wiretaps targeted cell phones, according to a report published last year.

These figures include both state and federal wiretaps, and do not include antiterrorism wiretaps, which dramatically expanded after 9/11. They also don’t count the DCS-3000’s collection of incoming and outgoing phone numbers dialed. Far more common than full-blown wiretaps, this level of surveillance requires only that investigators certify that the phone numbers are relevant to an investigation.

In the 1990s, the Justice Department began complaining to Congress that digital technology, cellular phones and features like call forwarding would make it difficult for investigators to continue to conduct wiretaps. Congress responded by passing the Communications Assistance for Law Enforcement Act, or CALEA, in 1994, mandating backdoors in U.S. telephone switches.

CALEA requires telecommunications companies to install only telephone-switching equipment that meets detailed wiretapping standards. Prior to CALEA, the FBI would get a court order for a wiretap and present it to a phone company, which would then create a physical tap of the phone system.

With new CALEA-compliant digital switches, the FBI now logs directly into the telecom’s network. Once a court order has been sent to a carrier and the carrier turns on the wiretap, the communications data on a surveillance target streams into the FBI’s computers in real time.

The released documents suggest that the FBI’s wiretapping engineers are struggling with peer-to-peer telephony provider Skype, which offers no central location to wiretap, and with innovations like caller-ID spoofing and phone-number portability.

Despite its ease of use, the new technology is proving more expensive than a traditional wiretap. Telecoms charge the government an average of $2,200 for a 30-day CALEA wiretap, while a traditional intercept costs only $250, according to the Justice Department inspector general. A federal wiretap order in 2006 cost taxpayers $67,000 on average, according to the most recent U.S. Court wiretap report.

What’s more, under CALEA, the government had to pay to make pre-1995 phone switches wiretap-friendly. The FBI has spent almost $500 million on that effort, but many traditional wire-line switches still aren’t compliant.

Processing all the phone calls sucked in by DCSNet is also costly. At the backend of the data collection, the conversations and phone numbers are transferred to the FBI’s Electronic Surveillance Data Management System, an Oracle SQL database that’s seen a 62 percent growth in wiretap volume over the last three years — and more than 3,000 percent growth in digital files like e-mail. Through 2007, the FBI has spent $39 million on the system, which indexes and analyzes data for agents, translators and intelligence analysts.

If concerts bring money in for the music biz, what happens when concerts get smaller?

From Jillian Cohen’s “The Show Must Go On” (The American: March/April 2008):

You can’t steal a concert. You can’t download the band—or the sweaty fans in the front row, or the merch guy, or the sound tech—to your laptop to take with you. Concerts are not like albums—easy to burn, copy, and give to your friends. If you want to share the concert-going experience, you and your friends all have to buy tickets. For this reason, many in the ailing music industry see concerts as the next great hope to revive their business.

It’s a blip that already is fading, to the dismay of the major record labels. CD sales have dropped 25 percent since 2000 and digital downloads haven’t picked up the slack. As layoffs swept the major labels this winter, many industry veterans turned their attention to the concert business, pinning their hopes on live performances as a way to bolster their bottom line.

Concerts might be a short-term fix. As one national concert promoter says, “The road is where the money is.” But in the long run, the music business can’t depend on concert tours for a simple, biological reason: the huge tour profits that have been generated in the last few decades have come from performers who are in their 40s, 50s, and 60s. As these artists get older, they’re unlikely to be replaced, because the industry isn’t investing in new talent development.

When business was good—as it was when CD sales grew through much of the 1990s—music labels saw concert tours primarily as marketing vehicles for albums. Now, they’re seizing on the reverse model. Tours have become a way to market the artist as a brand, with the fan clubs, limited-edition doodads, and other profitable products and services that come with the territory.

“Overall, it’s not a pretty picture for some parts of the industry,” JupiterResearch analyst David Card wrote in November when he released a report on digital music sales. “Labels must act more like management companies, and tap into the broadest collection of revenue streams and licensing as possible,” he said. “Advertising and creative packaging and bundling will have to play a bigger role than they have. And the $3 billion-plus touring business is not exactly up for grabs—it’s already competitive and not very profitable. Music companies of all types need to use the Internet for more cost-effective marketing, and A&R [artist development] risk has to be spread more fairly.”

The ‘Heritage Act’ Dilemma

Even so, belief in the touring business was so strong last fall that Madonna signed over her next ten years to touring company Live Nation—the folks who put on megatours for The Rolling Stones, The Police, and other big headliners—in a deal reportedly worth more than $120 million. The Material Girl’s arrangement with Live Nation is known in the industry as a 360-degree deal. Such deals may give artists a big upfront payout in exchange for allowing record labels or, in Madonna’s case, tour producers to profit from all aspects of their business, including touring, merchandise, sponsorships, and more.

While 360 deals may work for big stars, insiders warn that they’re not a magic bullet that will save record labels from their foundering, top-heavy business model. Some artists have done well by 360 contracts, including alt-metal act Korn and British pop sensation Robbie Williams. With these successes in mind, some tout the deals as a way for labels to recoup money they’re losing from downloads and illegal file sharing. But the artists who are offered megamillions for a piece of their brand already have built it through years of album releases, heavy touring, and careful fan-base development.

Not all these deals are good ones, says Bob McLynn, who manages pop-punk act Fall Out Boy and other young artists through his agency, Crush Management. Labels still have a lot to offer, he says. They pay for recording sessions, distribute CDs, market a band’s music, and put up money for touring, music-video production, and other expenses. But in exchange, music companies now want to profit from more than a band’s albums and recording masters. “The artist owns the brand, and now the labels—because they can’t sell as many albums—are trying to get in on the brand,” McLynn says. “To be honest, if an artist these days is looking for a traditional major-label deal for several hundred thousand dollars, they will have to be willing to give up some of that brand.”

For a young act, such offers may be enticing, but McLynn urges caution. “If they’re not going to give you a lot of money for it, it’s a mistake,” says the manager, who helped build Fall Out Boy’s huge teen fan base through constant touring and Internet marketing, only later signing the band to a big label. “I had someone from a major label ask me recently, ‘Hey, I have this new artist; can we convert the deal to a 360 deal?’” McLynn recalls. “I told him [it would cost] $2 million to consider it. He thought I was crazy; but I’m just saying, how is that crazy? If you want all these extra rights and if this artist does blow up, then that’s the best deal in the world for you. If you’re not taking a risk, why am I going to give you this? And if it’s not a lot of money, you’re not taking a risk.”

A concert-tour company’s margin is about 4 percent, Live Nation CEO Michael Rapino has said, while the take on income from concessions, T-shirts, and other merchandise sold at shows can be much higher. The business had a record-setting year in 2006, which saw The Rolling Stones, Madonna, U2, Barbra Streisand, and other popular, high-priced tours on the road. But in 2007, North American gross concert dollars dropped more than 10 percent to $2.6 billion, according to Billboard statistics. Concert attendance fell by more than 19 percent to 51 million. Fewer people in the stands means less merchandise sold and concession-stand food eaten.

Now add this wrinkle: if you pour tens of millions of dollars into a 360 deal, as major labels and Live Nation have done with their big-name stars, you will need the act to tour for a long time to recoup your investment. “For decades we’ve been fueled by acts from the ’60s,” says Gary Bongiovanni, editor of the touring-industry trade magazine Pollstar. Three decades ago, no one would have predicted that Billy Joel or Rod Stewart would still be touring today, Bongiovanni notes, yet the industry has come to depend on artists such as these, known as “heritage acts.” “They’re the ones that draw the highest ticket prices and biggest crowds for our year-end charts,” he says. Consider the top-grossing tours of 2006 and 2007: veterans such as The Rolling Stones, Rod Stewart, Barbra Streisand, and Roger Waters were joined by comparative youngsters Madonna, U2, and Bon Jovi. Only two of the 20 acts—former Mouseketeers Justin Timberlake and Christina Aguilera—were younger than 30.

These young stars, the ones who are prone to taking what industry observer Bob Lefsetz calls “media shortcuts,” such as appearing on MTV, may have less chance of developing real staying power. Lefsetz, formerly an entertainment lawyer and consultant to major labels, has for 20 years published an industry newsletter (now a blog) called the Lefsetz Letter. “Whatever a future [superstar] act will be, it won’t be as ubiquitous as the acts from the ’60s because we were all listening to Top 40 radio,” he says.

From the 1960s to the 1980s, music fans discovered new music primarily on the radio and purchased albums in record stores. The stations young people listened to might have played rock, country, or soul; but whatever the genre, DJs introduced listeners to the hits of tomorrow and guided them toward retail stores and concert halls.

Today, music is available in so many genres and subgenres, via so many distribution streams—including cell phones, social networking sites, iTunes, Pure Volume, and Limewire—that common ground rarely exists for post–Baby Boom fans. This in turn makes it harder for tour promoters to corral the tens of thousands of ticket holders they need to fill an arena. “More people can make music than ever before. They can get it heard, but it’s such a cacophony of noise that it will be harder to get any notice,” says Lefsetz.

Most major promoters don’t know how to capture young people’s interest and translate it into ticket sales, he says. It’s not that his students don’t listen to music, but that they seek to discover it online, from friends, or via virtual buzz. They’ll go out to clubs and hear bands, but they rarely attend big arena concerts. Promoters typically spend 40 percent to 50 percent of their promotional budgets on radio and newspaper advertising, Barnet says. “High school and college students—what percentage of tickets do they buy? And you’re spending most of your advertising dollars on media that don’t even focus on those demographics.” Conversely, the readers and listeners of traditional media are perfect for high-grossing heritage tours. As long as tickets sell for those events, promoters won’t have to change their approach, Barnet says. Heritage acts also tend to sell more CDs, says Pollstar’s Bongiovanni. “Your average Rod Stewart fan is more likely to walk into a record store, if they can find one, than your average Fall Out Boy fan.”

Personally, [Live Nation’s chairman of global music and global touring, Arthur Fogel] said, he’d been disappointed in the young bands he’d seen open for the headliners on Live Nation’s big tours. Live performance requires a different skill set from recorded tracks. It’s the difference between playing music and putting on a show, he said. “More often than not, I find young bands get up and play their music but are not investing enough time or energy into creating that show.” It’s incumbent on the industry to find bands that can rise to the next level, he added. “We aren’t seeing that development that’s creating the next generation of stadium headliners. Hopefully that will change.”

Live Nation doesn’t see itself spearheading such a change, though. In an earlier interview with Billboard magazine, Rapino took a dig at record labels’ model of bankrolling ten bands in the hope that one would become a success. “We don’t want to be in the business of pouring tens of millions of dollars into unknown acts, throwing it against the wall and then hoping that enough sticks that we only lose some of our money,” he said. “It’s not part of our business plan to be out there signing 50 or 60 young acts every year.”

And therein lies the rub. If the big dog in the touring pack won’t take responsibility for nurturing new talent and the labels have less capital to invest in artist development, where will the future megatour headliners come from?

Indeed, despite its all-encompassing moniker, the 360 deal isn’t the only option for musicians, nor should it be. Some artists may find they need the distribution reach and bankroll that a traditional big-label deal provides. Others might negotiate with independent labels for profit sharing or licensing arrangements in which they’ll retain more control of their master recordings. Many will earn the bulk of their income from licensing their songs for use on TV shows, movie soundtracks, and video games. Some may take an entirely do-it-yourself approach, in which they’ll write, produce, perform, and distribute all of their own music—and keep any of the profits they make.

There are growing signs of this transition. The Eagles recently partnered with Wal-Mart to give the discount chain exclusive retail-distribution rights to the band’s latest album. Paul McCartney chose to release his most recent record through Starbucks, and last summer Prince gave away his newest CD to London concertgoers and to readers of a British tabloid. And in a move that earned nearly as much ink as Madonna’s 360 deal, rock act Radiohead let fans download its new release directly from the band’s website for whatever price listeners were willing to pay. Though the numbers are debated, one source, ComScore, reported that in the first month 1.2 million people downloaded the album. About 40 percent paid for it, at an average of about $6 each—well above the usual cut an artist would get in royalties. The band also self-released the album in an $80 limited-edition package and, months later, as a CD with traditional label distribution. Such a move wouldn’t work for just any artist. Radiohead had the luxury of a fan base that it developed over more than a dozen years with a major label. But the band’s experiment showed creativity and adaptability.

China’s increasing control over American dollars

From James Fallows’ “The $1.4 Trillion Question” (The Atlantic: January/February 2008):

Through the quarter-century in which China has been opening to world trade, Chinese leaders have deliberately held down living standards for their own people and propped them up in the United States. This is the real meaning of the vast trade surplus—$1.4 trillion and counting, going up by about $1 billion per day—that the Chinese government has mostly parked in U.S. Treasury notes. In effect, every person in the (rich) United States has over the past 10 years or so borrowed about $4,000 from someone in the (poor) People’s Republic of China. Like so many imbalances in economics, this one can’t go on indefinitely, and therefore won’t. But the way it ends—suddenly versus gradually, for predictable reasons versus during a panic—will make an enormous difference to the U.S. and Chinese economies over the next few years, to say nothing of bystanders in Europe and elsewhere.
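
The per-person figure is simple division, assuming a U.S. population of roughly 300 million at the time the article was written:

```python
# Rough check of Fallows's per-person figure.
surplus = 1.4e12      # dollars, mostly parked in U.S. Treasury notes
population = 3.0e8    # assumed U.S. population, circa 2008
print(round(surplus / population))  # ~4667, in the ballpark of "about $4,000"
```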

When the dollar is strong, the following (good) things happen: the price of food, fuel, imports, manufactured goods, and just about everything else (vacations in Europe!) goes down. The value of the stock market, real estate, and just about all other American assets goes up. Interest rates go down—for mortgage loans, credit-card debt, and commercial borrowing. Tax rates can be lower, since foreign lenders hold down the cost of financing the national debt. The only problem is that American-made goods become more expensive for foreigners, so the country’s exports are hurt.

When the dollar is weak, the following (bad) things happen: the price of food, fuel, imports, and so on (no more vacations in Europe) goes up. The value of the stock market, real estate, and just about all other American assets goes down. Interest rates are higher. Tax rates can be higher, to cover the increased cost of financing the national debt. The only benefit is that American-made goods become cheaper for foreigners, which helps create new jobs and can raise the value of export-oriented American firms (winemakers in California, producers of medical devices in New England).

Americans sometimes debate (though not often) whether in principle it is good to rely so heavily on money controlled by a foreign government. The debate has never been more relevant, because America has never before been so deeply in debt to one country. Meanwhile, the Chinese are having a debate of their own—about whether the deal makes sense for them. Certainly China’s officials are aware that their stock purchases prop up 401(k) values, their money-market holdings keep down American interest rates, and their bond purchases do the same thing—plus allow our government to spend money without raising taxes.

Details on the Storm & Nugache botnets

From Dennis Fisher’s “Storm, Nugache lead dangerous new botnet barrage” (SearchSecurity.com: 19 December 2007):

[Dave Dittrich, a senior security engineer and researcher at the University of Washington in Seattle], one of the top botnet researchers in the world, has been tracking botnets for close to a decade and has seen it all. But this new piece of malware, which came to be known as Nugache, was a game-changer. With no C&C server to target, bots capable of sending encrypted packets and the possibility of any peer on the network suddenly becoming the de facto leader of the botnet, Nugache, Dittrich knew, would be virtually impossible to stop.
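
The resilience Dittrich describes can be illustrated with generic gossip, which is all the sketch below is (not Nugache’s actual encrypted protocol): each peer holds only a partial view of the network and merges it with a random neighbor’s each round, so there is no central membership server to seize.

```python
import random

# Generic gossip-style peer exchange. Node names and topology invented.
def gossip_round(peer_lists):
    for node, peers in list(peer_lists.items()):
        if not peers:
            continue
        neighbor = random.choice(sorted(peers))
        # Each side learns the other's contacts; no node ever holds
        # authoritative membership, and any node can relay commands.
        peer_lists[node] = (peers | peer_lists[neighbor] | {neighbor}) - {node}
        peer_lists[neighbor] = (peer_lists[neighbor] | peers | {node}) - {neighbor}

peer_lists = {"a": {"b"}, "b": {"c"}, "c": {"a"}, "d": {"a"}}
for _ in range(3):
    gossip_round(peer_lists)
print(peer_lists)  # every node converges toward knowing all the others
```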

Dittrich and other researchers say that when they analyze the code these malware authors are putting out, what emerges is a picture of a group of skilled, professional software developers learning from their mistakes, improving their code on a weekly basis and making a lot of money in the process.

The way that Storm, Nugache and other similar programs make money for their creators is typically twofold. First and foremost, Storm’s creator controls a massive botnet that he can use to send out spam runs, either for himself or for third parties who pay for the service. Storm-infected PCs have been sending out various spam messages, including pump-and-dump stock scams, pitches for fake medications and highly targeted phishing messages, throughout 2007, and by some estimates were responsible for more than 75% of the spam on the Internet at certain points this year.

Secondly, experts say that Storm’s author has taken to sectioning off his botnet into smaller pieces and then renting those subnets out to other attackers. Estimates of the size of the Storm network have ranged as high as 50 million PCs, but Brandon Enright, a network security analyst at the University of California at San Diego, who wrote a tool called Stormdrain to locate and count infected machines, put the number at closer to 20,000. Dittrich estimates that the size of the Nugache network was roughly equivalent to Enright’s estimates for Storm.

“The Storm network has a team of very smart people behind it. They change it constantly. When the attacks against searching started to be successful, they completely changed how commands are distributed in the network,” said Enright. “If AV adapts, they re-adapt. If attacks by researchers adapt, they re-adapt. If someone tries to DoS their distribution system, they DoS back.”

The other worrisome detail in all of this is that there’s significant evidence that the authors of these various pieces of malware are sharing information and techniques, if not collaborating outright.

“I’m pretty sure that there are tactics being shared between the Nugache and Storm authors,” Dittrich said. “There’s a direct lineage from Sdbot to Rbot to Mytob to Bancos. These guys can just sell the Web front-end to these things and the customers can pick their options and then just hit go.”

Once just a hobby for devious hackers, writing malware is now a profession and its products have helped create a global shadow economy. That infrastructure stretches from the mob-controlled streets of Moscow to the back alleys of Malaysia to the office parks of Silicon Valley. In that regard, Storm, Nugache and the rest are really just the first products off the assembly line, the Model Ts of P2P malware.

Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users gets any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.

Maintaining control in a subdued country

From Louis Menand’s “From the Ashes: A new history of Europe since 1945” (The New Yorker [28 November 2005]: 168):

[Tony Judt, author of Postwar: A History of Europe Since 1945] notes that France, a country with a population of some forty million, was administered by fifteen hundred Nazis, plus six thousand German policemen. A skeleton team sufficed in the Netherlands as well.

Feral cities of the future

From Richard J. Norton’s “Feral cities – The New Strategic Environment” (Naval War College Review: Autumn, 2003):

Imagine a great metropolis covering hundreds of square miles. Once a vital component in a national economy, this sprawling urban environment is now a vast collection of blighted buildings, an immense petri dish of both ancient and new diseases, a territory where the rule of law has long been replaced by near anarchy in which the only security available is that which is attained through brute power. Such cities have been routinely imagined in apocalyptic movies and in certain science-fiction genres, where they are often portrayed as gigantic versions of T. S. Eliot’s Rat’s Alley. Yet this city would still be globally connected. It would possess at least a modicum of commercial linkages, and some of its inhabitants would have access to the world’s most modern communication and computing technologies. It would, in effect, be a feral city.

The putative “feral city” is (or would be) a metropolis with a population of more than a million people in a state the government of which has lost the ability to maintain the rule of law within the city’s boundaries yet remains a functioning actor in the greater international system.

In a feral city social services are all but nonexistent, and the vast majority of the city’s occupants have no access to even the most basic health or security assistance. There is no social safety net. Human security is for the most part a matter of individual initiative. Yet a feral city does not descend into complete, random chaos. Some elements, be they criminals, armed resistance groups, clans, tribes, or neighborhood associations, exert various degrees of control over portions of the city. Intercity, city-state, and even international commercial transactions occur, but corruption, avarice, and violence are their hallmarks. A feral city experiences massive levels of disease and creates enough pollution to qualify as an international environmental disaster zone. Most feral cities would suffer from massive urban hypertrophy, covering vast expanses of land. The city’s structures range from once-great buildings symbolic of state power to the meanest shantytowns and slums. Yet even under these conditions, these cities continue to grow, and the majority of occupants do not voluntarily leave.

Feral cities would exert an almost magnetic influence on terrorist organizations. Such megalopolises will provide exceptionally safe havens for armed resistance groups, especially those having cultural affinity with at least one sizable segment of the city’s population. The efficacy and portability of the most modern computing and communication systems allow the activities of a worldwide terrorist, criminal, or predatory and corrupt commercial network to be coordinated and directed with equipment easily obtained on the open market and packed into a minivan. The vast size of a feral city, with its buildings, other structures, and subterranean spaces, would offer nearly perfect protection from overhead sensors, whether satellites or unmanned aerial vehicles. The city’s population represents for such entities a ready source of recruits and a built-in intelligence network. Collecting human intelligence against them in this environment is likely to be a daunting task. Should the city contain airport or seaport facilities, such an organization would be able to import and export a variety of items. The feral city environment will actually make it easier for an armed resistance group that does not already have connections with criminal organizations to make them. The linkage between such groups, once thought to be rather unlikely, is now so commonplace as to elicit no comment.
