
Cloned trucks used to commit crimes

From Brian Ross’ “Fake FedEx Trucks; When the Drugs Absolutely Have to Get There” (ABC News: 18 January 2008):

Savvy criminals are using some of the country’s most credible logos, including FedEx, Wal-Mart, DirecTV and the U.S. Border Patrol, to create fake trucks to smuggle drugs, money and illegal aliens across the border, according to a report by the Florida Department of Law Enforcement.

Termed “cloned” vehicles, the report also warns that terrorists could use the same fake trucks to gain access to secure areas with hidden weapons.

The report says criminals have been able to easily obtain the necessary vinyl logo markings and signs for $6,000 or less. Authorities say “cosmetically cloned commercial vehicles are not illegal.”

In one case, a truck painted with DirecTV and other markings was pulled over in a routine traffic stop in Mississippi and discovered to be carrying 786 pounds of cocaine.

Police said they became suspicious because the truck carried the markings of DirecTV and several of its rivals. An 800 number on the truck’s rear to report bad driving referred callers to an adult sex chat line.


How the Greek cell phone network was compromised

From Vassilis Prevelakis and Diomidis Spinellis’ “The Athens Affair” (IEEE Spectrum: July 2007):

On 9 March 2005, a 38-year-old Greek electrical engineer named Costas Tsalikidis was found hanged in his Athens loft apartment, an apparent suicide. It would prove to be merely the first public news of a scandal that would roil Greece for months.

The next day, the prime minister of Greece was told that his cellphone was being bugged, as were those of the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.

The victims were customers of Athens-based Vodafone-Panafon, generally known as Vodafone Greece, the country’s largest cellular service provider; Tsalikidis was in charge of network planning at the company.

We now know that the illegally implanted software, which was eventually found in a total of four of Vodafone’s Greek switches, created parallel streams of digitized voice for the tapped phone calls. One stream was the ordinary one, between the two calling parties. The other stream, an exact copy, was directed to other cellphones, allowing the tappers to listen in on the conversations on the cellphones, and probably also to record them. The software also routed location and other information about those phone calls to these shadow handsets via automated text messages.

The day after Tsalikidis’s body was discovered, CEO Koronias met with the director of the Greek prime minister’s political office, Yiannis Angelou, and the minister of public order, Giorgos Voulgarakis. Koronias told them that rogue software used the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones and handed over a list of bugged numbers. Besides the prime minister and his wife, phones belonging to the ministers of national defense, foreign affairs, and justice, the mayor of Athens, and the Greek European Union commissioner were all compromised. Others belonged to members of civil rights organizations, peace activists, and antiglobalization groups; senior staff at the ministries of National Defense, Public Order, Merchant Marine, and Foreign Affairs; the New Democracy ruling party; the Hellenic Navy general staff; and a Greek-American employee at the United States Embassy in Athens.

First, consider how a phone call, yours or a prime minister’s, gets completed. Long before you dial a number on your handset, your cellphone has been communicating with nearby cellular base stations. One of those stations, usually the nearest, has agreed to be the intermediary between your phone and the network as a whole. Your telephone handset converts your words into a stream of digital data that is sent to a transceiver at the base station.

The base station’s activities are governed by a base station controller, a special-purpose computer within the station that allocates radio channels and helps coordinate handovers between the transceivers under its control.

This controller in turn communicates with a mobile switching center that takes phone calls and connects them to call recipients within the same switching center, other switching centers within the company, or special exchanges that act as gateways to foreign networks, routing calls to other telephone networks (mobile or landline). The mobile switching centers are particularly important to the Athens affair because they hosted the rogue phone-tapping software, and it is there that the eavesdropping originated. They were the logical choice, because they are at the heart of the network; the intruders needed to take over only a few of them in order to carry out their attack.

Both the base station controllers and the switching centers are built around a large computer, known as a switch, capable of creating a dedicated communications path between a phone within its network and, in principle, any other phone in the world. Switches are holdovers from the 1970s, an era when powerful computers filled rooms and were built around proprietary hardware and software. Though these computers are smaller nowadays, the system’s basic architecture remains largely unchanged.

Like most phone companies, Vodafone Greece uses the same kind of computer for both its mobile switching centers and its base station controllers—Ericsson’s AXE line of switches. A central processor coordinates the switch’s operations and directs the switch to set up a speech or data path from one phone to another and then routes a call through it. Logs of network activity and billing records are stored on disk by a separate unit, called a management processor.

The key to understanding the hack at the heart of the Athens affair is knowing how the Ericsson AXE allows lawful intercepts—what are popularly called “wiretaps.” Though the details differ from country to country, in Greece, as in most places, the process starts when a law enforcement official goes to a court and obtains a warrant, which is then presented to the phone company whose customer is to be tapped.

Nowadays, all wiretaps are carried out at the central office. In AXE exchanges a remote-control equipment subsystem, or RES, carries out the phone tap by monitoring the speech and data streams of switched calls. It is a software subsystem typically used for setting up wiretaps, which only law officers are supposed to have access to. When the wiretapped phone makes a call, the RES copies the conversation into a second data stream and diverts that copy to a phone line used by law enforcement officials.

Ericsson optionally provides an interception management system (IMS), through which lawful call intercepts are set up and managed. When a court order is presented to the phone company, its operators initiate an intercept by filling out a dialog box in the IMS software. The optional IMS in the operator interface and the RES in the exchange each contain a list of wiretaps: wiretap requests in the case of the IMS, actual taps in the RES. Only IMS-initiated wiretaps should be active in the RES, so a wiretap in the RES without a request for a tap in the IMS is a pretty good indicator that an unauthorized tap has occurred. An audit procedure can be used to find any discrepancies between them.
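
That audit is, at heart, a comparison of two lists. The sketch below illustrates the idea in Python; the data structures, function, and phone numbers are invented for illustration, and a real AXE exchange exposes nothing like this interface:

```python
# Hypothetical illustration of the IMS/RES audit described above.

def audit_wiretaps(ims_requests, res_active_taps):
    """Compare warranted intercepts (IMS) with taps actually running
    in the exchange (RES). A tap in the RES with no matching IMS
    request is a strong sign of an unauthorized intercept."""
    ims = set(ims_requests)
    res = set(res_active_taps)
    unauthorized = res - ims   # running in the switch, never requested
    stale = ims - res          # requested but not running
    return unauthorized, stale

# Example: one number is tapped without any warrant on file.
ims_requests = {"+30 210 1234567"}
res_active_taps = {"+30 210 1234567", "+30 697 1112222"}
unauthorized, _ = audit_wiretaps(ims_requests, res_active_taps)
print(unauthorized)  # {'+30 697 1112222'} -> investigate
```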

It took guile and some serious programming chops to manipulate the lawful call-intercept functions in Vodafone’s mobile switching centers. The intruders’ task was particularly complicated because they needed to install and operate the wiretapping software on the exchanges without being detected by Vodafone or Ericsson system administrators. From time to time the intruders needed access to the rogue software to update the lists of monitored numbers and shadow phones. These activities had to be kept off all logs, while the software itself had to be invisible to the system administrators conducting routine maintenance activities. The intruders achieved all these objectives.

The challenge faced by the intruders was to use the RES’s capabilities to duplicate and divert the bits of a call stream without using the dialog-box interface to the IMS, which would create auditable logs of their activities. The intruders pulled this off by installing a series of patches to 29 separate blocks of code, according to Ericsson officials who testified before the Greek parliamentary committee that investigated the wiretaps. This rogue software modified the central processor’s software to directly initiate a wiretap, using the RES’s capabilities. Best of all, for them, the taps were not visible to the operators, because the IMS and its user interface weren’t used.

The full version of the software would have recorded the phone numbers being tapped in an official registry within the exchange. And, as we noted, an audit could then find a discrepancy between the numbers monitored by the exchange and the warrants active in the IMS. But the rogue software bypassed the IMS. Instead, it cleverly stored the bugged numbers in two data areas that were part of the rogue software’s own memory space, which was within the switch’s memory but isolated and not made known to the rest of the switch.

That by itself put the rogue software a long way toward escaping detection. But the perpetrators hid their own tracks in a number of other ways as well. There were a variety of circumstances by which Vodafone technicians could have discovered the alterations to the AXE’s software blocks. For example, they could have taken a listing of all the blocks, which would show all the active processes running within the AXE—similar to the task manager output in Microsoft Windows or the process status (ps) output in Unix. They then would have seen that some processes were active, though they shouldn’t have been. But the rogue software apparently modified the commands that list the active blocks in a way that omitted certain blocks—the ones that related to intercepts—from any such listing.
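
In other words, the rogue code subverted the listing command itself. Here is a hedged sketch of that hiding trick; the block names and the Python interface are invented, since a real AXE switch works nothing like this:

```python
from collections import namedtuple

Block = namedtuple("Block", ["name", "active"])

HIDDEN = {"RES_PATCH_1", "RES_PATCH_2"}  # invented names for the rogue code

def list_active_blocks(blocks, compromised=False):
    """Mimic a 'show active blocks' command. With the patched
    (compromised) listing, the rogue blocks never appear, so routine
    inspection shows a clean switch."""
    names = [b.name for b in blocks if b.active]
    if compromised:
        names = [n for n in names if n not in HIDDEN]
    return names

blocks = [Block("CALL_SETUP", True), Block("RES_PATCH_1", True)]
print(list_active_blocks(blocks))                    # ['CALL_SETUP', 'RES_PATCH_1']
print(list_active_blocks(blocks, compromised=True))  # ['CALL_SETUP']
```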

In addition, the rogue software might have been discovered during a software upgrade or even when Vodafone technicians installed a minor patch. It is standard practice in the telecommunications industry for technicians to verify the existing block contents before performing an upgrade or patch. We don’t know why the rogue software was not detected in this way, but we suspect that the software also modified the operation of the command used to print the checksums—codes that create a kind of signature against which the integrity of the existing blocks can be validated. One way or another, the blocks appeared unaltered to the operators.
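
The verification step technicians rely on looks conceptually like the sketch below; SHA-256 stands in for Ericsson’s proprietary checksums, and the “compromised” printing routine models the suspected tampering:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Known-good digests would come from the vendor's release media.
original = b"...original block object code..."
reference = checksum(original)

# An honest checksum command exposes tampering immediately:
patched = original + b"rogue wiretap patch"
print(checksum(patched) == reference)  # False

# ...unless the command that *prints* checksums is itself patched to
# report the pristine value, as suspected in the Athens case:
def print_checksum_compromised(data: bytes) -> str:
    return reference  # always report the original digest

print(print_checksum_compromised(patched) == reference)  # True: hidden
```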

Finally, the software included a back door to allow the perpetrators to control it in the future. This, too, was cleverly constructed to avoid detection. A report by the Hellenic Authority for the Information and Communication Security and Privacy (the Greek abbreviation is ADAE) indicates that the rogue software modified the exchange’s command parser—a routine that accepts commands from a person with system administrator status—so that innocuous commands followed by six spaces would deactivate the exchange’s transaction log and the alarm associated with its deactivation, and allow the execution of commands associated with the lawful interception subsystem. In effect, it was a signal to allow operations associated with the wiretaps but leave no trace of them. It also added a new user name and password to the system, which could be used to obtain access to the exchange.
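
Conceptually, the back door worked like the toy parser below. The six-trailing-spaces trigger and the silenced transaction log come from the ADAE report as described above; the command names and logging model are invented for illustration:

```python
TRIGGER = " " * 6  # six spaces after an otherwise innocuous command

class Exchange:
    def __init__(self):
        self.logging = True
        self.transaction_log = []

    def execute(self, command: str):
        if command.endswith(TRIGGER):
            # The covert signal: switch off the transaction log (and,
            # in the real attack, the alarm tied to its deactivation).
            self.logging = False
            command = command.rstrip()
        if self.logging:
            self.transaction_log.append(command)
        # ... normal dispatch of the command would happen here ...

ex = Exchange()
ex.execute("print status")            # logged as usual
ex.execute("print status" + TRIGGER)  # silently disables the log
ex.execute("activate intercept")      # now leaves no trace
print(ex.transaction_log)             # ['print status']
```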

…Security experts have also discovered other rootkits for general-purpose operating systems, such as Linux, Windows, and Solaris, but to our knowledge this is the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch.

So the investigators painstakingly reconstructed an approximation of the original PLEX source files that the intruders developed. It turned out to be the equivalent of about 6500 lines of code, a surprisingly substantial piece of software.


9 reasons the Storm botnet is different

From Bruce Schneier’s “Gathering ‘Storm’ Superworm Poses Grave Threat to PC Nets” (Wired: 4 October 2007):

Storm represents the future of malware. Let’s look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. …

3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. …

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. …

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect. …

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” … [A sketch of what fast flux looks like from the outside follows this list.]

6. Storm’s payload — the code it uses to spread — morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.

7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites — anything to entice users to click on a phony link. …

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. …

9. Last month, Storm began attacking anti-spam sites focused on identifying it — spamhaus.org, 419eater and so on — and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
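
The “fast flux” technique in item 5 is visible from a defender’s vantage point as a hostname whose address records rotate constantly. Below is a minimal observation sketch using only Python’s standard library; the hostname is a placeholder, and a real measurement would use a full DNS library to track the telltale short TTLs as well:

```python
import socket
import time

def observe_flux(hostname: str, samples: int = 5, interval: int = 60):
    """Resolve a hostname repeatedly and accumulate the IPs seen."""
    seen = set()
    for _ in range(samples):
        try:
            _, _, ips = socket.gethostbyname_ex(hostname)
            seen.update(ips)
        except socket.gaierror:
            pass  # lookup failure; fluxy domains churn constantly
        time.sleep(interval)
    return seen

# A stable site accumulates a handful of IPs at most; a fast-flux
# domain can collect dozens of unrelated residential addresses in
# minutes.
# print(observe_flux("fluxy-domain.example"))
```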


The Chinese Internet threat

From Shane Harris’ “China’s Cyber-Militia” (National Journal: 31 May 2008):

Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.

One prominent expert told National Journal he believes that China’s People’s Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. “They said that, with confidence, it had been traced back to the PLA.” These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.

Bennett, whose former trade association includes some of the nation’s largest computer-security companies and who has testified before Congress on the vulnerability of information networks, also said that a blackout in February, which affected 3 million customers in South Florida, was precipitated by a cyber-hacker. That outage cut off electricity along Florida’s east coast, from Daytona Beach to Monroe County, and affected eight power-generating stations.

A second information-security expert independently corroborated Bennett’s account of the Florida blackout. According to this individual, who cited sources with direct knowledge of the investigation, a Chinese PLA hacker attempting to map Florida Power & Light’s computer infrastructure apparently made a mistake.

The industry source, who conducts security research for government and corporate clients, said that hackers in China have devoted considerable time and resources to mapping the technology infrastructure of other U.S. companies. That assertion has been backed up by the current vice chairman of the Joint Chiefs of Staff, who said last year that Chinese sources are probing U.S. government and commercial networks.

“The Chinese operate both through government agencies, as we do, but they also operate through sponsoring other organizations that are engaging in this kind of international hacking, whether or not under specific direction. It’s a kind of cyber-militia.… It’s coming in volumes that are just staggering.”

In addition to disruptive attacks on networks, officials are worried about the Chinese using long-established computer-hacking techniques to steal sensitive information from government agencies and U.S. corporations.

Brenner, the U.S. counterintelligence chief, said he knows of “a large American company” whose strategic information was obtained by its Chinese counterparts in advance of a business negotiation. As Brenner recounted the story, “The delegation gets to China and realizes, ‘These guys on the other side of the table know every bottom line on every significant negotiating point.’ They had to have got this by hacking into [the company’s] systems.”

During a trip to Beijing in December 2007, spyware programs designed to clandestinely remove information from personal computers and other electronic equipment were discovered on devices used by Commerce Secretary Carlos Gutierrez and possibly other members of a U.S. trade delegation, according to a computer-security expert with firsthand knowledge of the spyware used. Gutierrez was in China with the Joint Commission on Commerce and Trade, a high-level delegation that includes the U.S. trade representative and that meets with Chinese officials to discuss such matters as intellectual-property rights, market access, and consumer product safety. According to the computer-security expert, the spyware programs were designed to open communications channels to an outside system, and to download the contents of the infected devices at regular intervals. The source said that the computer codes were identical to those found in the laptop computers and other devices of several senior executives of U.S. corporations who also had their electronics “slurped” while on business in China.

The Chinese make little distinction between hackers who work for the government and those who undertake cyber-adventures on its behalf. “There’s a huge pool of Chinese individuals, students, academics, unemployed, whatever it may be, who are, at minimum, not discouraged from trying this out,” said Rodger Baker, a senior China analyst for Stratfor, a private intelligence firm. So-called patriotic-hacker groups have launched attacks from inside China, usually aimed at people they think have offended the country or pose a threat to its strategic interests. At a minimum the Chinese government has done little to shut down these groups, which are typically composed of technologically skilled and highly nationalistic young men.

The military is not waiting for China, or any other nation or hacker group, to strike a lethal cyber-blow. In March, Air Force Gen. Kevin Chilton, the chief of U.S. Strategic Command, said that the Pentagon has its own cyberwar plans. “Our challenge is to define, shape, develop, deliver, and sustain a cyber-force second to none,” Chilton told the Senate Armed Services Committee. He asked appropriators for an “increased emphasis” on the Defense Department’s cyber-capabilities to help train personnel to “conduct network warfare.”

The Air Force is in the process of setting up a Cyberspace Command, headed by a two-star general and comprising about 160 individuals assigned to a handful of bases. As Wired noted in a recent profile, Cyberspace Command “is dedicated to the proposition that the next war will be fought in the electromagnetic spectrum and that computers are military weapons.” The Air Force has launched a TV ad campaign to drum up support for the new command, and to call attention to cyberwar. “You used to need an army to wage a war,” a narrator in the TV spot declares. “Now all you need is an Internet connection.”


Cheating, security, & theft in virtual worlds and online games

From Federico Biancuzzi’s interview with security researchers Greg Hoglund & Gary McGraw, authors of Exploiting Online Games, in “Real Flaws in Virtual Worlds” (SecurityFocus: 20 December 2007):

The more I dug into online game security, the more interesting things became. There are multiple threads intersecting in our book: hackers who cheat in online games and are not detected can make tons of money selling virtual items in the middle market; the law says next to nothing about cheating in online games, so doing so is really not illegal; the kinds of technological attacks and exploits that hackers are using to cheat in online games are an interesting bellwether; software is evolving to look very much like massively distributed online games look today with thick clients and myriad time and state related security problems. [Emphasis added]

In Brazil, a criminal gang even kidnapped a star MMORPG player in order to take away his character, and its associated virtual wealth.

The really interesting thing about online game security is that the attackers are in most cases after software running on their own machine, not software running on somebody else’s box. That’s a real change. Interestingly, the laws we have developed in computer security don’t have much to say about cheating in a game or hacking software on your own PC.


An analysis of splogs: spam blogs

From Charles C. Mann’s “Spam + Blogs = Trouble” (Wired: September 2006):

Some 56 percent of active English-language blogs are spam, according to a study released in May by Tim Finin, a researcher at the University of Maryland, Baltimore County, and two of his students. “The blogosphere is growing fast,” Finin says. “But the splogosphere is now growing faster.”

A recent survey by Mitesh Vasa, a Virginia-based software engineer and splog researcher, found that in December 2005, Blogger was hosting more than 100,000 sploggers. (Many of these are likely pseudonyms for the same people.)

Some Title, the splog that commandeered my name, was created by Dan Goggins, the proud possessor of a 2005 master’s degree in computer science from Brigham Young University. Working out of his home in a leafy subdivision in Springville, Utah, Goggins, his BYU friend and partner, John Jonas, and their handful of employees operate “a few thousand” splogs. “It’s not that many,” Goggins says modestly. “Some people have a lot of sites.” Trolling the Net, I came across a PowerPoint presentation for a kind of spammers’ conference that details some of the earnings of the Goggins-Jonas partnership. Between August and October of 2005, they made at least $71,136.89.

In addition to creating massive numbers of phony blogs, sploggers sometimes take over abandoned real blogs. More than 10 million of the 12.9 million profiles on Blogger surveyed by splog researcher Vasa in June were inactive, either because the bloggers had stopped blogging or because they never got started.

Not only do sploggers create fake blogs or take over abandoned ones, they use robo-software to flood real blogs with bogus comments that link back to the splog. (“Great post! For more on this subject, click here!”) Statistics compiled by Akismet, a system put together by WordPress developer Mullenweg that tries to filter out blog spam, suggest that more than nine out of 10 comments in the blogosphere are spam.
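
A toy filter conveys the flavor of what a system like Akismet must catch, though the real service uses far richer signals (sender reputation, a large spam corpus); the patterns below are invented:

```python
import re

# Generic praise plus a link is the classic splog-comment signature.
GENERIC_PRAISE = re.compile(r"great post|nice blog|for more on", re.I)

def looks_like_comment_spam(comment: str) -> bool:
    has_link = "http://" in comment or "https://" in comment
    return has_link and bool(GENERIC_PRAISE.search(comment))

print(looks_like_comment_spam(
    "Great post! For more on this subject, click here! http://example.info"))  # True
print(looks_like_comment_spam(
    "I think your second point ignores the base rate."))                       # False
```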

Maryland researcher Finin and his students found that splogs produce about three-quarters of the pings from English-language blogs. Another way of saying this is that the legitimate blogosphere generates about 300,000 posts a day, but the splogosphere emits 900,000, inundating the ping servers.

Another giveaway: Both Some Title and the grave-robbing page it links to had Web addresses in the .info domain. Spammers flock to .info, which was created as an alternative to the crowded .com, because its domain names are cheaper – registrars often let people use them gratis for the first year – which is helpful for those, like sploggers, who buy Internet addresses in bulk. Splogs so commonly have .info addresses that many experts simply assume all blogs from that domain are fake.


Serial-numbered confetti

From Bruce Schneier’s “News” (Crypto-Gram: 15 September 2007):

Taser — yep, that’s the company’s name as well as the product’s name — is now selling a personal-use version of their product. It’s called the Taser C2, and it has an interesting embedded identification technology. Whenever the weapon is fired, it also sprays some serial-number bar-coded confetti, so a firing can be traced to a weapon and — presumably — the owner.
http://www.taser.com/products/consumers/Pages/C2.aspx


A wireless router with 2 networks: 1 secure, 1 open

From Bruce Schneier’s “My Open Wireless Network” (Crypto-Gram: 15 January 2008):

A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either “Bill” or “Linus” mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network. In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It’s a really clever idea.


Anonymity and Netflix

From Bruce Schneier’s “Anonymity and the Netflix Dataset” (Crypto-Gram: 15 January 2008):

The point of the research was to demonstrate how little information is required to de-anonymize information in the Netflix dataset.

What the University of Texas researchers demonstrate is that this process isn’t hard, and doesn’t require a lot of data. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This would certainly hold true for our book reading habits, our internet shopping habits, our telephone habits and our web searching habits.

Other research reaches the same conclusion. Using public anonymous data from the 1990 census, Latanya Sweeney found that 87 percent of the population in the United States, 216 million of 248 million, could likely be uniquely identified by their five-digit ZIP code, combined with their gender and date of birth. About half of the U.S. population is likely identifiable by gender, date of birth and the city, town or municipality in which the person resides. Expanding the geographic scope to an entire county reduces that to a still-significant 18 percent. “In general,” the researchers wrote, “few characteristics are needed to uniquely identify a person.”

Stanford University researchers reported similar results using 2000 census data. It turns out that date of birth, which (unlike birthday month and day alone) sorts people into thousands of different buckets, is incredibly valuable in disambiguating people.
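
A back-of-envelope calculation, sketched below, shows why so few fields suffice. Every input is a rough assumption (an approximate ZIP code count, an 80-year range of birth dates), not an official census figure:

```python
# Rough check on why (ZIP, gender, date of birth) identifies people.
zips = 42_000             # approximate number of US five-digit ZIP codes
genders = 2
birthdates = 365 * 80     # roughly eighty plausible birth years

buckets = zips * genders * birthdates
population = 248_000_000  # the 1990 figure cited above

print(f"{buckets:,} possible combinations")             # ~2.45 billion
print(f"{population / buckets:.2f} people per bucket")  # ~0.10
# With far more buckets than people, most combinations that occur at
# all occur exactly once, consistent with Sweeney's 87 percent.
```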


If concerts bring money in for the music biz, what happens when concerts get smaller?

From Jillian Cohen’s “The Show Must Go On” (The American: March/April 2008):

You can’t steal a concert. You can’t download the band—or the sweaty fans in the front row, or the merch guy, or the sound tech—to your laptop to take with you. Concerts are not like albums—easy to burn, copy, and give to your friends. If you want to share the concert-going experience, you and your friends all have to buy tickets. For this reason, many in the ailing music industry see concerts as the next great hope to revive their business.

It’s a blip that already is fading, to the dismay of the major record labels. CD sales have dropped 25 percent since 2000 and digital downloads haven’t picked up the slack. As layoffs swept the major labels this winter, many industry veterans turned their attention to the concert business, pinning their hopes on live performances as a way to bolster their bottom line.

Concerts might be a short-term fix. As one national concert promoter says, “The road is where the money is.” But in the long run, the music business can’t depend on concert tours for a simple, biological reason: the huge tour profits that have been generated in the last few decades have come from performers who are in their 40s, 50s, and 60s. As these artists get older, they’re unlikely to be replaced, because the industry isn’t investing in new talent development.

When business was good—as it was when CD sales grew through much of the 1990s—music labels saw concert tours primarily as marketing vehicles for albums. Now, they’re seizing on the reverse model. Tours have become a way to market the artist as a brand, with the fan clubs, limited-edition doodads, and other profitable products and services that come with the territory.

“Overall, it’s not a pretty picture for some parts of the industry,” JupiterResearch analyst David Card wrote in November when he released a report on digital music sales. “Labels must act more like management companies, and tap into the broadest collection of revenue streams and licensing as possible,” he said. “Advertising and creative packaging and bundling will have to play a bigger role than they have. And the $3 billion-plus touring business is not exactly up for grabs—it’s already competitive and not very profitable. Music companies of all types need to use the Internet for more cost-effective marketing, and A&R [artist development] risk has to be spread more fairly.”

The ‘Heritage Act’ Dilemma

Even so, belief in the touring business was so strong last fall that Madonna signed over her next ten years to touring company Live Nation—the folks who put on megatours for The Rolling Stones, The Police, and other big headliners—in a deal reportedly worth more than $120 million. The Material Girl’s arrangement with Live Nation is known in the industry as a 360-degree deal. Such deals may give artists a big upfront payout in exchange for allowing record labels or, in Madonna’s case, tour producers to profit from all aspects of their business, including touring, merchandise, sponsorships, and more.

While 360 deals may work for big stars, insiders warn that they’re not a magic bullet that will save record labels from their foundering, top-heavy business model. Some artists have done well by 360 contracts, including alt-metal act Korn and British pop sensation Robbie Williams. With these successes in mind, some tout the deals as a way for labels to recoup money they’re losing from downloads and illegal file sharing. But the artists who are offered megamillions for a piece of their brand already have built it through years of album releases, heavy touring, and careful fan-base development.

Not all these deals are good ones, says Bob McLynn, who manages pop-punk act Fall Out Boy and other young artists through his agency, Crush Management. Labels still have a lot to offer, he says. They pay for recording sessions, distribute CDs, market a band’s music, and put up money for touring, music-video production, and other expenses. But in exchange, music companies now want to profit from more than a band’s albums and recording masters. “The artist owns the brand, and now the labels—because they can’t sell as many albums—are trying to get in on the brand,” McLynn says. “To be honest, if an artist these days is looking for a traditional major-label deal for several hundred thousand dollars, they will have to be willing to give up some of that brand.”

For a young act, such offers may be enticing, but McLynn urges caution. “If they’re not going to give you a lot of money for it, it’s a mistake,” says the manager, who helped build Fall Out Boy’s huge teen fan base through constant touring and Internet marketing, only later signing the band to a big label. “I had someone from a major label ask me recently, ‘Hey, I have this new artist; can we convert the deal to a 360 deal?’” McLynn recalls. “I told him [it would cost] $2 million to consider it. He thought I was crazy; but I’m just saying, how is that crazy? If you want all these extra rights and if this artist does blow up, then that’s the best deal in the world for you. If you’re not taking a risk, why am I going to give you this? And if it’s not a lot of money, you’re not taking a risk.”

A concert-tour company’s margin is about 4 percent, Live Nation CEO Michael Rapino has said, while the take on income from concessions, T-shirts, and other merchandise sold at shows can be much higher. The business had a record-setting year in 2006, which saw The Rolling Stones, Madonna, U2, Barbra Streisand, and other popular, high-priced tours on the road. But in 2007, North American gross concert dollars dropped more than 10 percent to $2.6 billion, according to Billboard statistics. Concert attendance fell by more than 19 percent to 51 million. Fewer people in the stands means less merchandise sold and concession-stand food eaten.

Now add this wrinkle: if you pour tens of millions of dollars into a 360 deal, as major labels and Live Nation have done with their big-name stars, you will need the act to tour for a long time to recoup your investment. “For decades we’ve been fueled by acts from the ’60s,” says Gary Bongiovanni, editor of the touring-industry trade magazine Pollstar. Three decades ago, no one would have predicted that Billy Joel or Rod Stewart would still be touring today, Bongiovanni notes, yet the industry has come to depend on artists such as these, known as “heritage acts.” “They’re the ones that draw the highest ticket prices and biggest crowds for our year-end charts,” he says. Consider the top-grossing tours of 2006 and 2007: veterans such as The Rolling Stones, Rod Stewart, Barbra Streisand, and Roger Waters were joined by comparative youngsters Madonna, U2, and Bon Jovi. Only two of the 20 acts—former Mouseketeers Justin Timberlake and Christina Aguilera—were younger than 30.

These young stars, the ones who are prone to taking what industry observer Bob Lefsetz calls “media shortcuts,” such as appearing on MTV, may have less chance of developing real staying power. Lefsetz, formerly an entertainment lawyer and consultant to major labels, has for 20 years published an industry newsletter (now a blog) called the Lefsetz Letter. “Whatever a future [superstar] act will be, it won’t be as ubiquitous as the acts from the ’60s because we were all listening to Top 40 radio,” he says.

From the 1960s to the 1980s, music fans discovered new music primarily on the radio and purchased albums in record stores. The stations young people listened to might have played rock, country, or soul; but whatever the genre, DJs introduced listeners to the hits of tomorrow and guided them toward retail stores and concert halls.

Today, music is available in so many genres and subgenres, via so many distribution streams—including cell phones, social networking sites, iTunes, Pure Volume, and Limewire—that common ground rarely exists for post–Baby Boom fans. This in turn makes it harder for tour promoters to corral the tens of thousands of ticket holders they need to fill an arena. “More people can make music than ever before. They can get it heard, but it’s such a cacophony of noise that it will be harder to get any notice,” says Lefsetz.

Most major promoters don’t know how to capture young people’s interest and translate it into ticket sales, says Barnet. It’s not that his students don’t listen to music, but that they seek to discover it online, from friends, or via virtual buzz. They’ll go out to clubs and hear bands, but they rarely attend big arena concerts. Promoters typically spend 40 percent to 50 percent of their promotional budgets on radio and newspaper advertising, Barnet says. “High school and college students—what percentage of tickets do they buy? And you’re spending most of your advertising dollars on media that don’t even focus on those demographics.” Conversely, the readers and listeners of traditional media are perfect for high-grossing heritage tours. As long as tickets sell for those events, promoters won’t have to change their approach, Barnet says. Heritage acts also tend to sell more CDs, says Pollstar’s Bongiovanni. “Your average Rod Stewart fan is more likely to walk into a record store, if they can find one, than your average Fall Out Boy fan.”

Personally, [Live Nation’s chairman of global music and global touring, Arthur Fogel] said, he’d been disappointed in the young bands he’d seen open for the headliners on Live Nation’s big tours. Live performance requires a different skill set from recorded tracks. It’s the difference between playing music and putting on a show, he said. “More often than not, I find young bands get up and play their music but are not investing enough time or energy into creating that show.” It’s incumbent on the industry to find bands that can rise to the next level, he added. “We aren’t seeing that development that’s creating the next generation of stadium headliners. Hopefully that will change.”

Live Nation doesn’t see itself spearheading such a change, though. In an earlier interview with Billboard magazine, Rapino took a dig at record labels’ model of bankrolling ten bands in the hope that one would become a success. “We don’t want to be in the business of pouring tens of millions of dollars into unknown acts, throwing it against the wall and then hoping that enough sticks that we only lose some of our money,” he said. “It’s not part of our business plan to be out there signing 50 or 60 young acts every year.”

And therein lies the rub. If the big dog in the touring pack won’t take responsibility for nurturing new talent and the labels have less capital to invest in artist development, where will the future megatour headliners come from?

Indeed, despite its all-encompassing moniker, the 360 deal isn’t the only option for musicians, nor should it be. Some artists may find they need the distribution reach and bankroll that a traditional big-label deal provides. Others might negotiate with independent labels for profit sharing or licensing arrangements in which they’ll retain more control of their master recordings. Many will earn the bulk of their income from licensing their songs for use on TV shows, movie soundtracks, and video games. Some may take an entirely do-it-yourself approach, in which they’ll write, produce, perform, and distribute all of their own music—and keep any of the profits they make.

There are growing signs of this transition. The Eagles recently partnered with Wal-Mart to give the discount chain exclusive retail-distribution rights to the band’s latest album. Paul McCartney chose to release his most recent record through Starbucks, and last summer Prince gave away his newest CD to London concertgoers and to readers of a British tabloid. And in a move that earned nearly as much ink as Madonna’s 360 deal, rock act Radiohead let fans download its new release directly from the band’s website for whatever price listeners were willing to pay. Though the numbers are debated, one source, ComScore, reported that in the first month 1.2 million people downloaded the album. About 40 percent paid for it, at an average of about $6 each—well above the usual cut an artist would get in royalties. The band also self-released the album in an $80 limited-edition package and, months later, as a CD with traditional label distribution. Such a move wouldn’t work for just any artist. Radiohead had the luxury of a fan base that it developed over more than a dozen years with a major label. But the band’s experiment showed creativity and adaptability.


China’s increasing control over American dollars

From James Fallows’ “The $1.4 Trillion Question” (The Atlantic: January/February 2008):

Through the quarter-century in which China has been opening to world trade, Chinese leaders have deliberately held down living standards for their own people and propped them up in the United States. This is the real meaning of the vast trade surplus—$1.4 trillion and counting, going up by about $1 billion per day—that the Chinese government has mostly parked in U.S. Treasury notes. In effect, every person in the (rich) United States has over the past 10 years or so borrowed about $4,000 from someone in the (poor) People’s Republic of China. Like so many imbalances in economics, this one can’t go on indefinitely, and therefore won’t. But the way it ends—suddenly versus gradually, for predictable reasons versus during a panic—will make an enormous difference to the U.S. and Chinese economies over the next few years, to say nothing of bystanders in Europe and elsewhere.

When the dollar is strong, the following (good) things happen: the price of food, fuel, imports, manufactured goods, and just about everything else (vacations in Europe!) goes down. The value of the stock market, real estate, and just about all other American assets goes up. Interest rates go down—for mortgage loans, credit-card debt, and commercial borrowing. Tax rates can be lower, since foreign lenders hold down the cost of financing the national debt. The only problem is that American-made goods become more expensive for foreigners, so the country’s exports are hurt.

When the dollar is weak, the following (bad) things happen: the price of food, fuel, imports, and so on (no more vacations in Europe) goes up. The value of the stock market, real estate, and just about all other American assets goes down. Interest rates are higher. Tax rates can be higher, to cover the increased cost of financing the national debt. The only benefit is that American-made goods become cheaper for foreigners, which helps create new jobs and can raise the value of export-oriented American firms (winemakers in California, producers of medical devices in New England).

Americans sometimes debate (though not often) whether in principle it is good to rely so heavily on money controlled by a foreign government. The debate has never been more relevant, because America has never before been so deeply in debt to one country. Meanwhile, the Chinese are having a debate of their own—about whether the deal makes sense for them. Certainly China’s officials are aware that their stock purchases prop up 401(k) values, their money-market holdings keep down American interest rates, and their bond purchases do the same thing—plus allow our government to spend money without raising taxes.


Details on the Storm & Nugache botnets

From Dennis Fisher’s “Storm, Nugache lead dangerous new botnet barrage” (SearchSecurity.com: 19 December 2007):

[Dave Dittrich, a senior security engineer and researcher at the University of Washington in Seattle], one of the top botnet researchers in the world, has been tracking botnets for close to a decade and has seen it all. But this new piece of malware, which came to be known as Nugache, was a game-changer. With no C&C server to target, bots capable of sending encrypted packets and the possibility of any peer on the network suddenly becoming the de facto leader of the botnet, Nugache, Dittrich knew, would be virtually impossible to stop.

Dittrich and other researchers say that when they analyze the code these malware authors are putting out, what emerges is a picture of a group of skilled, professional software developers learning from their mistakes, improving their code on a weekly basis and making a lot of money in the process.

The way that Storm, Nugache and other similar programs make money for their creators is typically twofold. First and foremost, Storm’s creator controls a massive botnet that he can use to send out spam runs, either for himself or for third parties who pay for the service. Storm-infected PCs have been sending out various spam messages, including pump-and-dump stock scams, pitches for fake medications and highly targeted phishing messages, throughout 2007, and by some estimates were responsible for more than 75% of the spam on the Internet at certain points this year.

Secondly, experts say that Storm’s author has taken to sectioning off his botnet into smaller pieces and then renting those subnets out to other attackers. Estimates of the size of the Storm network have ranged as high as 50 million PCs, but Brandon Enright, a network security analyst at the University of California at San Diego, who wrote a tool called Stormdrain to locate and count infected machines, put the number at closer to 20,000. Dittrich estimates that the size of the Nugache network was roughly equivalent to Enright’s estimates for Storm.

“The Storm network has a team of very smart people behind it. They change it constantly. When the attacks against searching started to be successful, they completely changed how commands are distributed in the network,” said Enright. “If AV adapts, they re-adapt. If attacks by researchers adapt, they re-adapt. If someone tries to DoS their distribution system, they DoS back.”

The other worrisome detail in all of this is that there’s significant evidence that the authors of these various pieces of malware are sharing information and techniques, if not collaborating outright.

“I’m pretty sure that there are tactics being shared between the Nugache and Storm authors,” Dittrich said. “There’s a direct lineage from Sdbot to Rbot to Mytob to Bancos. These guys can just sell the Web front-end to these things and the customers can pick their options and then just hit go.”

Once just a hobby for devious hackers, writing malware is now a profession and its products have helped create a global shadow economy. That infrastructure stretches from the mob-controlled streets of Moscow to the back alleys of Malaysia to the office parks of Silicon Valley. In that regard, Storm, Nugache and the rest are really just the first products off the assembly line, the Model Ts of P2P malware.


Google PageRank explained

From Danny Sullivan’s “What Is Google PageRank? A Guide For Searchers & Webmasters” (Search Engine Land: 26 April 2007):

Let’s start with what Google says. In a nutshell, it considers links to be like votes. In addition, it considers that some votes are more important than others. PageRank is Google’s system of counting link votes and determining which pages are most important based on them. These scores are then used along with many other things to determine if a page will rank well in a search.

PageRank is only a score that represents the importance of a page, as Google estimates it. (By the way, that estimate of importance is considered to be Google’s opinion and protected in the US by the First Amendment. When Google was once sued over altering PageRank scores for some sites, a US court ruled: “PageRanks are opinions–opinions of the significance of particular Web sites as they correspond to a search query…the court concludes Google’s PageRanks are entitled to full constitutional protection.”)
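
The “links as votes” idea fits in a few lines of code. The following is a minimal power-iteration sketch of the published PageRank idea, not Google’s production system; the four-page web and the commonly cited 0.85 damping factor are illustrative:

```python
links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}       # start with equal scores
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # a vote is split
            for target in outlinks:             # among all outlinks
                new[target] += damping * share
        rank = new
    return rank

print(pagerank(links))  # "C" scores highest: it collects the most votes
```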


Virtual kidnappings a problem in Mexico

From Marc Lacey’s “Exploiting Real Fears With ‘Virtual Kidnappings’ ” (The New York Times: 29 April 2008):

MEXICO CITY — The phone call begins with the cries of an anguished child calling for a parent: “Mama! Papa!” The youngster’s sobs are quickly replaced by a husky male voice that means business.

“We’ve got your child,” he says in rapid-fire Spanish, usually adding an expletive for effect and then rattling off a list of demands that might include cash or jewels dropped off at a certain street corner or a sizable deposit made to a local bank.

The twist is that little Pablo or Teresa is safe and sound at school, not duct-taped to a chair in a rundown flophouse somewhere or stuffed in the back of a pirate taxi. But when the cellphone call comes in, that is not at all clear.

This is “virtual kidnapping,” the name being given to Mexico’s latest crime craze, one that has capitalized on the raw nerves of a country that has been terrorized by the real thing for years.

A new hot line set up to deal with the problem of kidnappings in which no one is actually kidnapped received more than 30,000 complaints from last December to the end of February, Joel Ortega, Mexico City’s police chief, announced recently. There have been eight arrests, and 3,415 telephone numbers have been identified as those used by extortionists, he said.

But identifying the phone numbers — they are now listed on a government Web site — has done little to slow the extortion calls. Nearly all the calls are from cellphones, most of them stolen, authorities say.

On top of that, many extortionists are believed to be pulling off the scams from prisons.

Authorities say hundreds of different criminal gangs are engaged in various telephone scams. Besides the false kidnappings, callers falsely tell people they have won cars or money. Sometimes, people are told to turn off their cellphones for an hour so the service can be repaired; then, relatives are called and told that the cellphone’s owner has been kidnapped. Ransom demands have even been made by text message.

Not every call pays off for the criminals, but in many instances — as many as a third of the calls, one study showed — they make off with some valuables. One estimate put the take from telephone scams in Mexico in the last six months at 186.6 million pesos, nearly $20 million.


Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users get any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.


6 reasons why “content” has been devalued

From Jonathan Handel’s “Is Content Worthless?” (The Huffington Post: 11 April 2008):

Everyone focuses on piracy, but there are actually six related reasons for the devaluation of content. The first is supply and demand. Demand — the number of consumers and their available leisure time – is relatively constant, but supply — online content — has grown enormously in the last decade. Some of this is professional content set free from boundaries of time and space, now available worldwide, anytime, and usually at no cost (whether legally or not). Even more is user generated content (UGC) — websites, blogs, YouTube videos — created by non-professionals who don’t care whether they get paid, and who themselves pay little or nothing to create and distribute it.

The second is the loss of physical form. It just seems natural to value a physical thing more highly than something intangible. Physical objects have been with us since the beginning of time; distributable intangible content has not. Perhaps for that reason, we tend to focus on per-unit costs (zero for an intangible such as a movie download), while forgetting about fixed costs (such as the cost of making the movie in the first place). Also, and critically, if you steal something tangible, you deny it to the owner; a purloined DVD is no longer available to the merchant, for instance. But if you misappropriate an intangible, it’s still there for others to use. …

The third reason is that acquiring content is increasingly frictionless. It’s often easier, particularly for young people, to access content on the Internet than through traditional means. …

Fourth is that most new media business models are ad-supported rather than pay per view or subscription. If there’s no cost to the user, why should consumers see the content as valuable, and if some content is free, why not all of it? …

Fifth is market forces in the technology industry. Computers, web services, and consumer electronic devices are more valuable when more content is available. In turn, these products make content more usable by providing new distribution channels. Traditional media companies are slow to adopt these new technologies, for fear of cannibalizing revenue from existing channels and offending powerful distribution partners. In contrast, non-professionals, long denied access to distribution, rush to use the new technologies, as do pirates of professional content. As a result, technological innovation reduces the market share of paid professional content.

Finally, there’s culture. A generation of users has grown up indifferent or hostile to copyright, particularly in music, movies and software.


Amazon’s infrastructure and the cloud

From Spencer Reiss’ “Cloud Computing. Available at Amazon.com Today” (Wired: 21 April 2008):

Almost a third of [Amazon]’s total number of sales last year were made by people selling their stuff through the Amazon machine. The company calls them seller-customers, and there are 1.3 million of them.

Log in to Amazon’s gateway today and more than 100 separate services leap into action, crunching data, comparing alternatives, and constructing a totally customized page (all in about 200 milliseconds).

Developers get a cheap, instant, essentially limitless computing cloud.


I for one welcome our new OS overlords: Google Chrome

As some of you may have heard, Google has announced its own web browser, Chrome. It’s releasing the Windows version today, with Mac & Linux versions to follow.

To educate people about the new browser & its goals, they released a 38-page comic book drawn by the brilliant Scott McCloud. It’s a really good read, but it gets a bit technical at times. However, someone did a “Reader’s Digest” version, which you can read here:

http://technologizer.com/2008/09/01/google-chrome-comic-the-readers-digest-version

I highly encourage you to read it. This browser is doing some very interesting, smart things. And it’s open source, so other browsers can use its code & ideas.

If you want to read the full comic, you can do so here:

http://www.google.com/googlebooks/chrome/

BTW … I don’t think Chrome has the potential of becoming the next big browser; I think instead it has the potential to become the next big operating system. See http://www.techcrunch.com/2008/09/01/meet-chrome-googles-windows-killer/ for more on that.


Dropbox for Linux is coming soon

According to this announcement, a Linux client for Dropbox should be coming out in a week or so:

http://forums.getdropbox.com/topic.php?id=2371&replies=1

I’ve been using Dropbox for several months, and it’s really, really great.

What is it? Watch this video:

http://www.getdropbox.com/screencast

It’s backup and auto-syncing done REALLY well. Best of all, you can sync between more than one computer, even if one is owned by someone else. So I could create a folder, then share it with Robert. It shows up on his machine. If either of us changes files in the folder, those changes are auto-synced with each other.
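
Conceptually, a sync client decides what to transfer by comparing snapshots of a folder’s contents. Here is a minimal sketch of that general idea only; it is not Dropbox’s actual protocol, which syncs incrementally and far more efficiently:

```python
import hashlib
import os

def snapshot(folder: str) -> dict:
    """Map each file (relative path) to a hash of its contents."""
    state = {}
    for root, _, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            state[os.path.relpath(path, folder)] = digest
    return state

def diff(old: dict, new: dict):
    """Compare two snapshots: what to upload, what to delete."""
    changed = {p for p in new if old.get(p) != new[p]}
    deleted = set(old) - set(new)
    return changed, deleted

# Each machine sharing a folder computes snapshots; exchanging the
# diffs keeps every copy of the folder in step.
```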

Very nice.

So check it out when you get a chance. 2 GB are free. After that, you pay a small fee.
