
Evaluating software features

When developing software, it’s important to rank your features, as you can’t do everything, & not everything is worth doing. One way to rank features is to categorize them in order of importance using the following three categories:

  1. Required/Essential/Necessary: Mission critical features that must be present
  2. Preferred/Conditional: Important features & enhancements that bring a better experience & easier management, but can wait until a later release if necessary
  3. Optional/Nice To Have: If resources permit, sure, but otherwise…

Of course, you should also group your features based upon the kinds of features they are. Here’s a suggestion for those groups:

  • User experience
  • Management
  • Security
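
One low-tech way to combine the two ideas, priority categories and feature groups, is to tag every feature with both and sort the backlog accordingly. The sketch below is a minimal illustration in Python; the feature names and groupings are invented.

```python
from dataclasses import dataclass

# Priority categories from above, in order of importance.
PRIORITY = {"Required": 1, "Preferred": 2, "Optional": 3}

@dataclass
class Feature:
    name: str
    group: str     # "User experience", "Management", or "Security"
    priority: str  # "Required", "Preferred", or "Optional"

# Invented features, purely for illustration.
backlog = [
    Feature("Role-based access control", "Security", "Required"),
    Feature("Dark mode", "User experience", "Optional"),
    Feature("Audit logging", "Management", "Preferred"),
    Feature("Guided setup wizard", "User experience", "Required"),
]

# Work the list most-important-first, keeping each group together within a tier.
for f in sorted(backlog, key=lambda f: (PRIORITY[f.priority], f.group)):
    print(f"{f.priority:<10} {f.group:<16} {f.name}")
```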


Wikipedia, freedom, & changes in production

From Clay Shirky’s “Old Revolutions, Good; New Revolutions, Bad” (Britannica Blog: 14 June 2007):

Gorman’s theory about print – its capabilities ushered in an age very different from manuscript culture — is correct, and the same kind of shift is at work today. As with the transition from manuscripts to print, the new technologies offer virtues that did not previously exist, but are now an assumed and permanent part of our intellectual environment. When reproduction, distribution, and findability were all hard, as they were for the last five hundred years, we needed specialists to undertake those jobs, and we properly venerated them for the service they performed. Now those tasks are simpler, and the earlier roles have instead become obstacles to direct access.

Digital and networked production vastly increase three kinds of freedom: freedom of speech, of the press, and of assembly. This perforce increases the freedom of anyone to say anything at any time. This freedom has led to an explosion in novel content, much of it mediocre, but freedom is like that. Critically, this expansion of freedom has not undermined any of the absolute advantages of expertise; the virtues of mastery remain as they were. What has happened is that the relative advantages of expertise are in precipitous decline. Experts the world over have been shocked to discover that they were consulted not as a direct result of their expertise, but often as a secondary effect – the apparatus of credentialing made finding experts easier than finding amateurs, even when the amateurs knew the same things as the experts.

The success of Wikipedia forces a profound question on print culture: how is information to be shared with the majority of the population? This is an especially tough question, as print culture has so manifestly failed at the transition to a world of unlimited perfect copies. Because Wikipedia’s contents are both useful and available, it has eroded the monopoly held by earlier modes of production. Other encyclopedias now have to compete for value to the user, and they are failing because their model mainly commits them to denying access and forbidding sharing. If Gorman wants more people reading Britannica, the choice lies with its management. Were they to allow users unfettered access to read and share Britannica’s content tomorrow, the only interesting question is whether their readership would rise a ten-fold or a hundred-fold.

Britannica will tell you that they don’t want to compete on universality of access or sharability, but this is the lament of the scribe who thinks that writing fast shouldn’t be part of the test. In a world where copies have become cost-free, people who expend their resources to prevent access or sharing are forgoing the principal advantages of the new tools, and this dilemma is common to every institution modeled on the scarcity and fragility of physical copies. Academic libraries, which in earlier days provided a service, have outsourced themselves as bouncers to publishers like Reed-Elsevier; their principal job, in the digital realm, is to prevent interested readers from gaining access to scholarly material.


An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers now number about two dozen, although no one outside Google knows the exact number or their locations. They come online and automatically, under the direction of the Google File System, start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled for Google to hold servers on their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo hardware and software are more loosely-coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
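
The passage describes behavior attributed to the Google File System; the sketch below is not Google’s code, just a minimal illustration of the idea that a read consults a log of replica locations and falls back to another copy when a device is gone. All names and the health check are invented.

```python
# Invented replica "log": each file maps to the nodes holding its 3-6 copies.
replica_log = {
    "/index/chunk-0042": ["rack17-node03", "rack02-node11", "rack55-node07"],
}

def node_is_up(node: str) -> bool:
    # Stand-in for a real health check (heartbeat, RPC ping, etc.).
    return node != "rack17-node03"  # pretend this node just died

def read_chunk(path: str) -> str:
    """Read from the first reachable replica; fail only if every copy is gone."""
    for node in replica_log[path]:
        if node_is_up(node):
            return f"reading {path} from {node}"
    raise IOError(f"all replicas of {path} are unreachable")

print(read_chunk("/index/chunk-0042"))  # quietly falls through to rack02-node11
```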

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed short cuts to programming. An example is Google’s creating a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
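
The “canned routines” Arnold alludes to sound a great deal like the programming model Google described publicly in its 2004 MapReduce paper: the programmer supplies two small functions and the framework handles distribution across the cluster. The single-machine sketch below shows only the shape of that pattern in ordinary Python; it is not Google’s library.

```python
from collections import defaultdict

def map_fn(document: str):
    # Programmer-supplied: emit (word, 1) for every word in a document.
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word: str, counts: list) -> int:
    # Programmer-supplied: combine all values emitted for one key.
    return sum(counts)

def run_job(documents):
    # A real framework would shard work across thousands of machines and
    # handle failures; here the "cluster" is a single loop.
    grouped = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            grouped[key].append(value)
    return {key: reduce_fn(key, values) for key, values in grouped.items()}

print(run_job(["the quick brown fox", "the lazy brown dog"]))
```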

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives, which fail at the rate of one per 1,000 drives per day.
  • Power supplies, which fail at a lower rate.
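
Those rates add up quickly at Google’s scale. A back-of-the-envelope calculation (the fleet size below is an assumption, not a figure from Arnold) shows why the software has to treat component failure as routine rather than as an emergency:

```python
drives = 100_000           # assumed fleet size, for illustration only
daily_rate = 1 / 1_000     # one failure per 1,000 drives per day (from above)

per_day = drives * daily_rate
print(f"about {per_day:.0f} failed drives per day")         # ~100
print(f"about {per_day * 365:.0f} failed drives per year")  # ~36,500
```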

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.


Why American car companies are in trouble

From Paul Ingrassia’s “How Detroit Drove Into a Ditch” (The Wall Street Journal: 25 October 2008):

This situation doesn’t stem from the recent meltdown in banking and the markets. GM, Ford and Chrysler have been losing billions since 2005, when the U.S. economy was still healthy. The financial crisis does, however, greatly exacerbate Detroit’s woes. As car sales plunge — both in the U.S. and in Detroit’s once-booming overseas markets — it’s becoming nearly impossible for the companies to cut costs fast enough to keep pace with the evaporation of their revenue. All three companies, once the very symbol of American economic might, need new capital, but their options for raising it are limited.

In all this lies a tale of hubris, missed opportunities, disastrous decisions and flawed leadership of almost biblical proportions. In fact, for the last 30 years Detroit has gone astray, repented, gone astray and repented again in a cycle not unlike the Israelites in the Book of Exodus.

Detroit failed to grasp — or at least to address — the fundamental nature of its Japanese competition. Japan’s car companies, and more recently the Germans and Koreans, gained a competitive advantage largely by forging an alliance with American workers.

Detroit, meanwhile, has remained mired in mutual mistrust with the United Auto Workers union. While the suspicion has abated somewhat in recent years, it never has disappeared — which is why Detroit’s factories remain vastly more cumbersome to manage than the factories of foreign car companies in the U.S.

Two incidents in 1936 and 1937 formed this lasting labor-management divide: the sit-down strike at GM’s factories in Flint, Mich., and the Battle of the Overpass in Detroit, in which Ford goons beat up union organizers. But the United Auto Workers prevailed, and as the GM-Ford-Chrysler oligopoly emerged in the 1940s, the union gained a labor monopoly in American auto factories. As costs increased, the companies routinely passed them on to U.S. consumers, who had virtually no alternatives in buying cars.

Nissan, Toyota and other Japanese car companies soon started building factories in America, followed by German and Korean auto makers. There are now 16 foreign-owned assembly plants in the U.S., and many more that build engines, transmissions and other components.

Several years ago Ford even considered dropping cars altogether because they weren’t profitable, and focusing entirely on trucks. Then in 2005, Hurricane Katrina and growing oil demand from China and India sent gasoline prices soaring and SUV sales plunging. GM lost $10.6 billion that year. Ford topped that by losing $12.7 billion in 2006. Last summer Daimler gave up on Chrysler, selling it to private-equity powerhouse Cerberus for about one-fourth of what it had paid to buy Chrysler. Last fall the UAW approved significant wage and benefit concessions, but they won’t kick in until 2010. That might be too late. GM lost $15.5 billion in this year’s second quarter, Ford lost $8.7 billion, and further losses are coming. (Closely held Chrysler, of course, doesn’t report financial results.)


How the Greek cell phone network was compromised

From Vassilis Prevelakis and Diomidis Spinellis’ “The Athens Affair” (IEEE Spectrum: July 2007):

On 9 March 2005, a 38-year-old Greek electrical engineer named Costas Tsalikidis was found hanged in his Athens loft apartment, an apparent suicide. It would prove to be merely the first public news of a scandal that would roil Greece for months.

The next day, the prime minister of Greece was told that his cellphone was being bugged, as were those of the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.

The victims were customers of Athens-based Vodafone-Panafon, generally known as Vodafone Greece, the country’s largest cellular service provider; Tsalikidis was in charge of network planning at the company.

We now know that the illegally implanted software, which was eventually found in a total of four of Vodafone’s Greek switches, created parallel streams of digitized voice for the tapped phone calls. One stream was the ordinary one, between the two calling parties. The other stream, an exact copy, was directed to other cellphones, allowing the tappers to listen in on the conversations on the cellphones, and probably also to record them. The software also routed location and other information about those phone calls to these shadow handsets via automated text messages.

The day after Tsalikidis’s body was discovered, Vodafone Greece CEO Koronias met with the director of the Greek prime minister’s political office, Yiannis Angelou, and the minister of public order, Giorgos Voulgarakis. Koronias told them that rogue software used the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones and handed over a list of bugged numbers. Besides the prime minister and his wife, phones belonging to the ministers of national defense, foreign affairs, and justice, the mayor of Athens, and the Greek European Union commissioner were all compromised. Others belonged to members of civil rights organizations, peace activists, and antiglobalization groups; senior staff at the ministries of National Defense, Public Order, Merchant Marine, and Foreign Affairs; the New Democracy ruling party; the Hellenic Navy general staff; and a Greek-American employee at the United States Embassy in Athens.

First, consider how a phone call, yours or a prime minister’s, gets completed. Long before you dial a number on your handset, your cellphone has been communicating with nearby cellular base stations. One of those stations, usually the nearest, has agreed to be the intermediary between your phone and the network as a whole. Your telephone handset converts your words into a stream of digital data that is sent to a transceiver at the base station.

The base station’s activities are governed by a base station controller, a special-purpose computer within the station that allocates radio channels and helps coordinate handovers between the transceivers under its control.

This controller in turn communicates with a mobile switching center that takes phone calls and connects them to call recipients within the same switching center, other switching centers within the company, or special exchanges that act as gateways to foreign networks, routing calls to other telephone networks (mobile or landline). The mobile switching centers are particularly important to the Athens affair because they hosted the rogue phone-tapping software, and it is there that the eavesdropping originated. They were the logical choice, because they are at the heart of the network; the intruders needed to take over only a few of them in order to carry out their attack.

Both the base station controllers and the switching centers are built around a large computer, known as a switch, capable of creating a dedicated communications path between a phone within its network and, in principle, any other phone in the world. Switches are holdovers from the 1970s, an era when powerful computers filled rooms and were built around proprietary hardware and software. Though these computers are smaller nowadays, the system’s basic architecture remains largely unchanged.

Like most phone companies, Vodafone Greece uses the same kind of computer for both its mobile switching centers and its base station controllers—Ericsson’s AXE line of switches. A central processor coordinates the switch’s operations and directs the switch to set up a speech or data path from one phone to another and then routes a call through it. Logs of network activity and billing records are stored on disk by a separate unit, called a management processor.

The key to understanding the hack at the heart of the Athens affair is knowing how the Ericsson AXE allows lawful intercepts—what are popularly called “wiretaps.” Though the details differ from country to country, in Greece, as in most places, the process starts when a law enforcement official goes to a court and obtains a warrant, which is then presented to the phone company whose customer is to be tapped.

Nowadays, all wiretaps are carried out at the central office. In AXE exchanges a remote-control equipment subsystem, or RES, carries out the phone tap by monitoring the speech and data streams of switched calls. It is a software subsystem typically used for setting up wiretaps, which only law officers are supposed to have access to. When the wiretapped phone makes a call, the RES copies the conversation into a second data stream and diverts that copy to a phone line used by law enforcement officials.

Ericsson optionally provides an interception management system (IMS), through which lawful call intercepts are set up and managed. When a court order is presented to the phone company, its operators initiate an intercept by filling out a dialog box in the IMS software. The optional IMS in the operator interface and the RES in the exchange each contain a list of wiretaps: wiretap requests in the case of the IMS, actual taps in the RES. Only IMS-initiated wiretaps should be active in the RES, so a wiretap in the RES without a request for a tap in the IMS is a pretty good indicator that an unauthorized tap has occurred. An audit procedure can be used to find any discrepancies between them.
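
The audit the authors mention is, at bottom, a set comparison: every tap active in the RES should match a warrant-backed request recorded in the IMS. A minimal sketch of that cross-check, with invented phone numbers and no real Ericsson interfaces:

```python
# Invented data: warrant-backed requests in the IMS vs. taps active in the RES.
ims_requests = {"+30 691 111 1111", "+30 692 222 2222"}
res_active_taps = {"+30 691 111 1111", "+30 692 222 2222", "+30 697 777 7777"}

# Any tap present in the exchange but absent from the IMS is unauthorized.
unauthorized = res_active_taps - ims_requests
if unauthorized:
    print("Unauthorized intercepts:", sorted(unauthorized))
else:
    print("RES and IMS agree; no discrepancies found.")
```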

It took guile and some serious programming chops to manipulate the lawful call-intercept functions in Vodafone’s mobile switching centers. The intruders’ task was particularly complicated because they needed to install and operate the wiretapping software on the exchanges without being detected by Vodafone or Ericsson system administrators. From time to time the intruders needed access to the rogue software to update the lists of monitored numbers and shadow phones. These activities had to be kept off all logs, while the software itself had to be invisible to the system administrators conducting routine maintenance activities. The intruders achieved all these objectives.

The challenge faced by the intruders was to use the RES’s capabilities to duplicate and divert the bits of a call stream without using the dialog-box interface to the IMS, which would create auditable logs of their activities. The intruders pulled this off by installing a series of patches to 29 separate blocks of code, according to Ericsson officials who testified before the Greek parliamentary committee that investigated the wiretaps. This rogue software modified the central processor’s software to directly initiate a wiretap, using the RES’s capabilities. Best of all, for them, the taps were not visible to the operators, because the IMS and its user interface weren’t used.

The full version of the software would have recorded the phone numbers being tapped in an official registry within the exchange. And, as we noted, an audit could then find a discrepancy between the numbers monitored by the exchange and the warrants active in the IMS. But the rogue software bypassed the IMS. Instead, it cleverly stored the bugged numbers in two data areas that were part of the rogue software’s own memory space, which was within the switch’s memory but isolated and not made known to the rest of the switch.

That by itself put the rogue software a long way toward escaping detection. But the perpetrators hid their own tracks in a number of other ways as well. There were a variety of circumstances by which Vodafone technicians could have discovered the alterations to the AXE’s software blocks. For example, they could have taken a listing of all the blocks, which would show all the active processes running within the AXE—similar to the task manager output in Microsoft Windows or the process status (ps) output in Unix. They then would have seen that some processes were active, though they shouldn’t have been. But the rogue software apparently modified the commands that list the active blocks in a way that omitted certain blocks—the ones that related to intercepts—from any such listing.

In addition, the rogue software might have been discovered during a software upgrade or even when Vodafone technicians installed a minor patch. It is standard practice in the telecommunications industry for technicians to verify the existing block contents before performing an upgrade or patch. We don’t know why the rogue software was not detected in this way, but we suspect that the software also modified the operation of the command used to print the checksums—codes that create a kind of signature against which the integrity of the existing blocks can be validated. One way or another, the blocks appeared unaltered to the operators.
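
Block verification of this kind boils down to hashing the code actually installed on the switch and comparing the result against vendor-supplied reference values, which is why the intruders also had to subvert the command that prints the checksums. A generic sketch of the idea, not Ericsson’s actual procedure:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Invented blocks: reference values from the vendor vs. what is installed.
reference = {"block_017": checksum(b"original code"),
             "block_018": checksum(b"more original code")}
installed = {"block_017": b"original code",
             "block_018": b"code patched by the intruders"}  # tampered

for name, code in installed.items():
    status = "OK" if checksum(code) == reference[name] else "MODIFIED"
    print(name, status)
```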

Finally, the software included a back door to allow the perpetrators to control it in the future. This, too, was cleverly constructed to avoid detection. A report by the Hellenic Authority for the Information and Communication Security and Privacy (the Greek abbreviation is ADAE) indicates that the rogue software modified the exchange’s command parser—a routine that accepts commands from a person with system administrator status—so that innocuous commands followed by six spaces would deactivate the exchange’s transaction log and the alarm associated with its deactivation, and allow the execution of commands associated with the lawful interception subsystem. In effect, it was a signal to allow operations associated with the wiretaps but leave no trace of them. It also added a new user name and password to the system, which could be used to obtain access to the exchange.

…Security experts have also discovered other rootkits for general-purpose operating systems, such as Linux, Windows, and Solaris, but to our knowledge this is the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch.

So the investigators painstakingly reconstructed an approximation of the original PLEX source files that the intruders developed. It turned out to be the equivalent of about 6500 lines of code, a surprisingly substantial piece of software.


If concerts bring money in for the music biz, what happens when concerts get smaller?

From Jillian Cohen’s “The Show Must Go On” (The American: March/April 2008):

You can’t steal a concert. You can’t download the band—or the sweaty fans in the front row, or the merch guy, or the sound tech—to your laptop to take with you. Concerts are not like albums—easy to burn, copy, and give to your friends. If you want to share the concert-going experience, you and your friends all have to buy tickets. For this reason, many in the ailing music industry see concerts as the next great hope to revive their business.

It’s a blip that already is fading, to the dismay of the major record labels. CD sales have dropped 25 percent since 2000 and digital downloads haven’t picked up the slack. As layoffs swept the major labels this winter, many industry veterans turned their attention to the concert business, pinning their hopes on live performances as a way to bolster their bottom line.

Concerts might be a short-term fix. As one national concert promoter says, “The road is where the money is.” But in the long run, the music business can’t depend on concert tours for a simple, biological reason: the huge tour profits that have been generated in the last few decades have come from performers who are in their 40s, 50s, and 60s. As these artists get older, they’re unlikely to be replaced, because the industry isn’t investing in new talent development.

When business was good—as it was when CD sales grew through much of the 1990s—music labels saw concert tours primarily as marketing vehicles for albums. Now, they’re seizing on the reverse model. Tours have become a way to market the artist as a brand, with the fan clubs, limited-edition doodads, and other profitable products and services that come with the territory.

“Overall, it’s not a pretty picture for some parts of the industry,” JupiterResearch analyst David Card wrote in November when he released a report on digital music sales. “Labels must act more like management companies, and tap into the broadest collection of revenue streams and licensing as possible,” he said. “Advertising and creative packaging and bundling will have to play a bigger role than they have. And the $3 billion-plus touring business is not exactly up for grabs—it’s already competitive and not very profitable. Music companies of all types need to use the Internet for more cost-effective marketing, and A&R [artist development] risk has to be spread more fairly.”

The ‘Heritage Act’ Dilemma

Even so, belief in the touring business was so strong last fall that Madonna signed over her next ten years to touring company Live Nation—the folks who put on megatours for The Rolling Stones, The Police, and other big headliners—in a deal reportedly worth more than $120 million. The Material Girl’s arrangement with Live Nation is known in the industry as a 360-degree deal. Such deals may give artists a big upfront payout in exchange for allowing record labels or, in Madonna’s case, tour producers to profit from all aspects of their business, including touring, merchandise, sponsorships, and more.

While 360 deals may work for big stars, insiders warn that they’re not a magic bullet that will save record labels from their foundering, top-heavy business model. Some artists have done well by 360 contracts, including alt-metal act Korn and British pop sensation Robbie Williams. With these successes in mind, some tout the deals as a way for labels to recoup money they’re losing from downloads and illegal file sharing. But the artists who are offered megamillions for a piece of their brand already have built it through years of album releases, heavy touring, and careful fan-base development.

Not all these deals are good ones, says Bob McLynn, who manages pop-punk act Fall Out Boy and other young artists through his agency, Crush Management. Labels still have a lot to offer, he says. They pay for recording sessions, distribute CDs, market a band’s music, and put up money for touring, music-video production, and other expenses. But in exchange, music companies now want to profit from more than a band’s albums and recording masters. “The artist owns the brand, and now the labels—because they can’t sell as many albums—are trying to get in on the brand,” McLynn says. “To be honest, if an artist these days is looking for a traditional major-label deal for several hundred thousand dollars, they will have to be willing to give up some of that brand.”

For a young act, such offers may be enticing, but McLynn urges caution. “If they’re not going to give you a lot of money for it, it’s a mistake,” says the manager, who helped build Fall Out Boy’s huge teen fan base through constant touring and Internet marketing, only later signing the band to a big label. “I had someone from a major label ask me recently, ‘Hey, I have this new artist; can we convert the deal to a 360 deal?’” McLynn recalls. “I told him [it would cost] $2 million to consider it. He thought I was crazy; but I’m just saying, how is that crazy? If you want all these extra rights and if this artist does blow up, then that’s the best deal in the world for you. If you’re not taking a risk, why am I going to give you this? And if it’s not a lot of money, you’re not taking a risk.”

A concert-tour company’s margin is about 4 percent, Live Nation CEO Michael Rapino has said, while the take on income from concessions, T-shirts, and other merchandise sold at shows can be much higher. The business had a record-setting year in 2006, which saw The Rolling Stones, Madonna, U2, Barbra Streisand, and other popular, high-priced tours on the road. But in 2007, North American gross concert dollars dropped more than 10 percent to $2.6 billion, according to Billboard statistics. Concert attendance fell by more than 19 percent to 51 million. Fewer people in the stands means less merchandise sold and concession-stand food eaten.

Now add this wrinkle: if you pour tens of millions of dollars into a 360 deal, as major labels and Live Nation have done with their big-name stars, you will need the act to tour for a long time to recoup your investment. “For decades we’ve been fueled by acts from the ’60s,” says Gary Bongiovanni, editor of the touring-industry trade magazine Pollstar. Three decades ago, no one would have predicted that Billy Joel or Rod Stewart would still be touring today, Bongiovanni notes, yet the industry has come to depend on artists such as these, known as “heritage acts.” “They’re the ones that draw the highest ticket prices and biggest crowds for our year-end charts,” he says. Consider the top-grossing tours of 2006 and 2007: veterans such as The Rolling Stones, Rod Stewart, Barbra Streisand, and Roger Waters were joined by comparative youngsters Madonna, U2, and Bon Jovi. Only two of the 20 acts—former Mouseketeers Justin Timberlake and Christina Aguilera—were younger than 30.

These young stars, the ones who are prone to taking what industry observer Bob Lefsetz calls “media shortcuts,” such as appearing on MTV, may have less chance of developing real staying power. Lefsetz, formerly an entertainment lawyer and consultant to major labels, has for 20 years published an industry newsletter (now a blog) called the Lefsetz Letter. “Whatever a future [superstar] act will be, it won’t be as ubiquitous as the acts from the ’60s because we were all listening to Top 40 radio,” he says.

From the 1960s to the 1980s, music fans discovered new music primarily on the radio and purchased albums in record stores. The stations young people listened to might have played rock, country, or soul; but whatever the genre, DJs introduced listeners to the hits of tomorrow and guided them toward retail stores and concert halls.

Today, music is available in so many genres and subgenres, via so many distribution streams—including cell phones, social networking sites, iTunes, Pure Volume, and Limewire—that common ground rarely exists for post–Baby Boom fans. This in turn makes it harder for tour promoters to corral the tens of thousands of ticket holders they need to fill an arena. “More people can make music than ever before. They can get it heard, but it’s such a cacophony of noise that it will be harder to get any notice,” says Lefsetz.

Most major promoters don’t know how to capture young people’s interest and translate it into ticket sales, he says. It’s not that his students don’t listen to music, but that they seek to discover it online, from friends, or via virtual buzz. They’ll go out to clubs and hear bands, but they rarely attend big arena concerts. Promoters typically spend 40 percent to 50 percent of their promotional budgets on radio and newspaper advertising, Barnet says. “High school and college students—what percentage of tickets do they buy? And you’re spending most of your advertising dollars on media that don’t even focus on those demographics.” Conversely, the readers and listeners of traditional media are perfect for high-grossing heritage tours. As long as tickets sell for those events, promoters won’t have to change their approach, Barnet says. Heritage acts also tend to sell more CDs, says Pollstar’s Bongiovanni. “Your average Rod Stewart fan is more likely to walk into a record store, if they can find one, than your average Fall Out Boy fan.”

Personally, [Live Nation’s chairman of global music and global touring, Arthur Fogel] said, he’d been disappointed in the young bands he’d seen open for the headliners on Live Nation’s big tours. Live performance requires a different skill set from recorded tracks. It’s the difference between playing music and putting on a show, he said. “More often than not, I find young bands get up and play their music but are not investing enough time or energy into creating that show.” It’s incumbent on the industry to find bands that can rise to the next level, he added. “We aren’t seeing that development that’s creating the next generation of stadium headliners. Hopefully that will change.”

Live Nation doesn’t see itself spearheading such a change, though. In an earlier interview with Billboard magazine, Rapino took a dig at record labels’ model of bankrolling ten bands in the hope that one would become a success. “We don’t want to be in the business of pouring tens of millions of dollars into unknown acts, throwing it against the wall and then hoping that enough sticks that we only lose some of our money,” he said. “It’s not part of our business plan to be out there signing 50 or 60 young acts every year.”

And therein lies the rub. If the big dog in the touring pack won’t take responsibility for nurturing new talent and the labels have less capital to invest in artist development, where will the future megatour headliners come from?

Indeed, despite its all-encompassing moniker, the 360 deal isn’t the only option for musicians, nor should it be. Some artists may find they need the distribution reach and bankroll that a traditional big-label deal provides. Others might negotiate with independent labels for profit sharing or licensing arrangements in which they’ll retain more control of their master recordings. Many will earn the bulk of their income from licensing their songs for use on TV shows, movie soundtracks, and video games. Some may take an entirely do-it-yourself approach, in which they’ll write, produce, perform, and distribute all of their own music—and keep any of the profits they make.

There are growing signs of this transition. The Eagles recently partnered with Wal-Mart to give the discount chain exclusive retail-distribution rights to the band’s latest album. Paul McCartney chose to release his most recent record through Starbucks, and last summer Prince gave away his newest CD to London concertgoers and to readers of a British tabloid. And in a move that earned nearly as much ink as Madonna’s 360 deal, rock act Radiohead let fans download its new release directly from the band’s website for whatever price listeners were willing to pay. Though the numbers are debated, one source, ComScore, reported that in the first month 1.2 million people downloaded the album. About 40 percent paid for it, at an average of about $6 each—well above the usual cut an artist would get in royalties. The band also self-released the album in an $80 limited-edition package and, months later, as a CD with traditional label distribution. Such a move wouldn’t work for just any artist. Radiohead had the luxury of a fan base that it developed over more than a dozen years with a major label. But the band’s experiment showed creativity and adaptability.


His employer’s misconfigured laptop gets him charged with a crime

From Robert McMillan’s “A misconfigured laptop, a wrecked life” (NetworkWorld: 18 June 2008):

When the Commonwealth of Massachusetts issued Michael Fiola a Dell Latitude in November 2006, it set off a chain of events that would cost him his job, his friends and about a year of his life, as he fought criminal charges that he had downloaded child pornography onto the laptop. Last week, prosecutors dropped their year-old case after a state investigation of his computer determined there was insufficient evidence to prove he had downloaded the files.

An initial state investigation had come to the opposite conclusion, and authorities took a second look at Fiola’s case only after he hired a forensic investigator to look at his laptop. What she found was scary, given the gravity of the charges against him: The Microsoft SMS (Systems Management Server) software used to keep his laptop up to date was not functional. Neither was its antivirus protection. And the laptop was crawling with malicious programs that were most likely responsible for the files on his PC.

Fiola had been an investigator with the state’s Department of Industrial Accidents, examining businesses to see whether they had worker’s compensation plans. Over the past two days, however, he’s become a spokesman for people who have had their lives ruined by malicious software.

[Fiola narrates his story:] We had a laptop basically to do our reports instantaneously. If I went to a business and found that they were out of compliance, I would log on and type in a report so it could get back to the home office in Boston immediately. We also used it to research businesses. …

My boss called me into his office at 9 a.m. The director of the Department of Industrial Accidents, my immediate supervisor, and the personnel director were there. They handed me a letter and said, “You are being fired for a violation of the computer usage policy. You have pornography on your computer. You’re fired. Clean out your desk. Let’s go.” …

It was horrible. No paycheck. I lost all my benefits. I lost my insurance. My wife is very, very understanding. She took the bull by the horns and found an attorney. I was just paralyzed, I couldn’t do anything. I can’t describe the feeling to you. I wouldn’t wish this on my worst enemy. It’s just devastating.

If you get in a car accident and you kill somebody, people talk to you afterwards. All our friends abandoned us. The only family that stood by us was my dad, her parents, my stepdaughter and one other good friend of ours. And that was it. Nobody called. We spent many weekends at home just crying. I’m 53 years old and I don’t think I’ve cried as much in my whole life as I did in the past 18 months. …


To solve a problem, you first have to figure out the problem

From Russell L. Ackoff & Daniel Greenberg’s Turning Learning Right Side Up: Putting Education Back on Track (2008):

A classic story illustrates very well the potential cost of placing a problem in a disciplinary box. It involves a multistoried office building in New York. Occupants began complaining about the poor elevator service provided in the building. Waiting times for elevators at peak hours, they said, were excessively long. Several of the tenants threatened to break their leases and move out of the building because of this…

Management authorized a study to determine what would be the best solution. The study revealed that because of the age of the building no engineering solution could be justified economically. The engineers said that management would just have to live with the problem permanently.

The desperate manager called a meeting of his staff, which included a young recently hired graduate in personnel psychology…The young man had not focused on elevator performance but on the fact that people complained about waiting only a few minutes. Why, he asked himself, were they complaining about waiting for only a very short time? He concluded that the complaints were a consequence of boredom. Therefore, he took the problem to be one of giving those waiting something to occupy their time pleasantly. He suggested installing mirrors in the elevator boarding areas so that those waiting could look at each other or themselves without appearing to do so. The manager took up his suggestion. The installation of mirrors was made quickly and at a relatively low cost. The complaints about waiting stopped.

Today, mirrors in elevator lobbies and even on elevators in tall buildings are commonplace.


Examples of tweaking old technologies to add social aspects

From Clay Shirky’s “Group as User: Flaming and the Design of Social Software” (Clay Shirky’s Writings About the Internet: 5 November 2004):

This possibility of adding novel social components to old tools presents an enormous opportunity. To take the most famous example, the Slashdot moderation system puts the ability to rate comments into the hands of the users themselves. The designers took the traditional bulletin board format — threaded posts, sorted by time — and added a quality filter. And instead of assuming that all users are alike, the Slashdot designers created a karma system, to allow them to discriminate in favor of users likely to rate comments in ways that would benefit the community. And, to police that system, they created a meta-moderation system, to solve the ‘Who will guard the guardians’ problem. …

Likewise, Craigslist took the mailing list, and added a handful of simple features with profound social effects. First, all of Craigslist is an enclosure, owned by Craig … Because he has a business incentive to make his list work, he and his staff remove posts if enough readers flag them as inappropriate. …

And, on the positive side, the addition of a “Nominate for ‘Best of Craigslist'” button in every email creates a social incentive for users to post amusing or engaging material. … The only reason you would nominate a post for ‘Best of’ is if you wanted other users to see it — if you were acting in a group context, in other words. …

Jonah Brucker-Cohen’s Bumplist stands out as an experiment in experimenting with the social aspects of mailing lists. Bumplist, whose motto is “an email community for the determined”, is a mailing list for 6 people, which anyone can join. When the 7th user joins, the first is bumped and, if they want to be back on, must re-join, bumping the second user, ad infinitum. … However, it is a vivid illustration of the ways simple changes to well-understood software can produce radically different social effects.

You could easily imagine many such experiments. What would it take, for example, to design a mailing list that was flame-retardant? Once you stop regarding all users as isolated actors, a number of possibilities appear. You could institute induced lag, where, once a user contributed 5 posts in the space of an hour, a cumulative 10 minute delay would be added to each subsequent post. Every post would be delivered eventually, but it would retard the rapid-reply nature of flame wars, introducing a cooling off period for the most vociferous participants.
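
As a sketch of the induced-lag idea, the function below derives a delivery delay from a user’s recent posting history. The 5-post threshold and the 10-minute increment are Shirky’s numbers; the rest is an assumption about how a list server might implement it.

```python
import time

POSTS_BEFORE_LAG = 5        # posts allowed in the last hour before lag kicks in
LAG_INCREMENT = 10 * 60     # 10 extra minutes of delay per post beyond that

def delivery_delay(post_times: list, now: float) -> int:
    """Seconds to hold this user's next post before delivering it to the list."""
    recent = [t for t in post_times if now - t < 3600]
    over_limit = max(0, len(recent) - POSTS_BEFORE_LAG + 1)
    return over_limit * LAG_INCREMENT

# A user who has already posted 6 times in the past hour: the next post waits 20 minutes.
now = time.time()
history = [now - 60 * m for m in (5, 12, 20, 31, 44, 58)]
print(delivery_delay(history, now))  # 1200 seconds
```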

You could institute a kind of thread jail, where every post would include a ‘Worst of’ button, in the manner of Craigslist. Interminable, pointless threads (e.g. Which Operating System Is Objectively Best?) could be sent to thread jail if enough users voted them down. (Though users could obviously change subject headers and evade this restriction, the surprise, first noted by Julian Dibbell, is how often users respect negative communal judgment, even when they don’t respect the negative judgment of individuals. [ See Rape in Cyberspace — search for “aggressively antisocial vibes.”])

You could institute a ‘Get a room!’ feature, where any conversation that involved two users ping-ponging six or more posts (substitute other numbers to taste) would be automatically re-directed to a sub-list, limited to that pair. The material could still be archived, and so accessible to interested lurkers, but the conversation would continue without the attraction of an audience.

You could imagine a similar exercise, working on signal/noise ratios generally, and keying off the fact that there is always a most active poster on mailing lists, who posts much more often than even the second most active, and much much more often than the median poster. Oddly, the most active poster is often not even aware that they occupy this position (seeing ourselves as others see us is difficult in mediated spaces as well), but making them aware of it often causes them to self-moderate. You can imagine flagging all posts by the most active poster, whoever that happened to be, or throttling the maximum number of posts by any user to some multiple of average posting tempo.
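
The last idea reduces to simple bookkeeping: tally posts per member, surface the most active poster, and cap anyone far above the list’s average tempo. In the sketch below the post counts and the 3x multiple are invented; only the general approach comes from the passage.

```python
from statistics import mean

# Invented post counts for one week on a hypothetical list.
post_counts = {"alice": 42, "bob": 11, "carol": 6, "dave": 3, "erin": 2}

average_tempo = mean(post_counts.values())   # 12.8 posts this week
cap = 3 * average_tempo                      # assumed multiple of average tempo

top_poster = max(post_counts, key=post_counts.get)
print(f"Most active poster: {top_poster} ({post_counts[top_poster]} posts)")

for user, count in post_counts.items():
    if count > cap:
        print(f"Flag or throttle {user}: {count} posts exceeds cap of {cap:.1f}")
```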


Ubuntu Hacks available now

The Ubuntu distribution simplifies Linux by providing a sensible collection of applications, an easy-to-use package manager, and lots of fine-tuning, which make it possibly the best Linux for desktops and laptops. Readers of both Linux Journal and TUX Magazine confirmed this by voting Ubuntu as the best Linux distribution in each publication’s 2005 Readers Choice Awards. None of that simplification, however, makes Ubuntu any less fun if you’re a hacker or a power user.

Like all books in the Hacks series, Ubuntu Hacks includes 100 quick tips and tricks for all users of all technical levels. Beginners will appreciate the installation advice and tips on getting the most out of the free applications packaged with Ubuntu, while intermediate and advanced readers will learn the ins-and-outs of power management, wireless roaming, 3D video acceleration, server configuration, and much more.

I contributed 10 of the 100 hacks in this book, including information on the following topics:

  • Encrypt Your Email and Important Files
  • Surf the Web Anonymously
  • Keep Windows Malware off Your System
  • Mount Removable Devices with Persistent Names
  • Mount Remote Directories Securely and Easily
  • Make Videos of Your Tech-Support Questions

I’ve been using K/Ubuntu for over a year (heck, it’s only two years old!), and it’s the best distro I’ve ever used. I was really excited to contribute my 10 hacks to Ubuntu Hacks, as this is definitely a book any advanced Linux user would love.

Buy Ubuntu Hacks from Amazon!


Windows directory services

From David HM Spector’s Unfinished Business Part 2: Closing the Circle (LinuxDevCenter: 7 July 2003):

… an integrated enterprise directory service does give network managers a much greater ability to manage large-scale networks and resources from almost every perspective.

Unlike most UNIX systems, Windows environments are homogeneous. There are three modes of operation in terms of user and resource management in the Windows universe:

1. Stand-alone.
2. Domain membership through a domain controller.
3. Organizational-unit membership in an LDAP-based directory such as Active Directory (or via a third-party directory such as NDS, but those are declining as more organizations switch to AD). …

Three major pieces of software make up the bulk of what Active Directory does:

* LDAP, the Lightweight Directory Access Protocol.
* Kerberos, the authentication system originally developed as part of MIT’s Project Athena (later, the basis for the security components in OSF’s DME).
* A SQL database.

These components interact with the Windows APIs to deliver a one-stop repository for any attribute that can be used to describe a system, a service, a device, users, groups, a relationship, a policy, an authorization, or another relationship in a computing environment. …

LDAP in AD is used to manage:

* DNS addresses
* Workstation and server descriptions
* Printers
* Print queues
* Volume mappings
* Certificates
* Licenses
* Policies (such as ACLs, security policies, etc.)
* Groups
* Users
* Contacts

All of these data are stored in one unified system, which can be broken down relatively easily (with some major caveats) by physical location (site), division, organization unit, or department and workgroup, and managed in a distributed fashion. These data can be replicated for redundancy and performance purposes. All Windows APIs must operate within this system if they are to participate in the network and have access to its resources. Repository data is wrapped up by and authenticated through the use of Kerberos Tickets, which makes the system (again, general Windows caveats applied) secure. …
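
Because Active Directory exposes the repository over LDAP, any LDAP client can query it. Here is a minimal sketch using the third-party Python ldap3 library; the server name, bind credentials, and base DN are placeholders, not values from the article.

```python
from ldap3 import Server, Connection, ALL

# Placeholder host, credentials, and base DN -- substitute your own.
server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=reader,cn=Users,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Fetch the common name and mail attribute of every person-class user object.
conn.search(search_base="dc=example,dc=com",
            search_filter="(&(objectClass=user)(objectCategory=person))",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```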

The most interesting part of this story is that 95% of the hard work has already been done! Microsoft didn’t invent totally new LDAP schemas to make Active Directory as comprehensive as it is — as usual, they embraced and extended the work of others. LDAP schemas already exist, and are publicly available to cover:

* Contact management: The InetOrgPerson schema
* IP Addresses, Users, Server/Workstation Info: The NIS schema
* Kerberos tickets: IETF Kerberos KDC schema

Of course, Microsoft’s own schemas are available for perusal on any Active Directory server (or, if you happen to have a Macintosh OS X box, look in /etc/openldap, for all of Microsoft’s schemas are there). …


The 80/20 rule

From F. John Reh’s “How the 80/20 rule can help you be more effective” (About.com):

In 1906, Italian economist Vilfredo Pareto created a mathematical formula to describe the unequal distribution of wealth in his country, observing that twenty percent of the people owned eighty percent of the wealth. In the late 1940s, Dr. Joseph M. Juran inaccurately attributed the 80/20 Rule to Pareto, calling it Pareto’s Principle. …

Quality Management pioneer Dr. Joseph Juran, working in the US in the 1930s and 40s, recognized a universal principle he called the “vital few and trivial many” and reduced it to writing. …

As a result, Dr. Juran’s observation of the “vital few and trivial many”, the principle that 20 percent of something always are responsible for 80 percent of the results, became known as Pareto’s Principle or the 80/20 Rule. …

The 80/20 Rule means that in anything a few (20 percent) are vital and many (80 percent) are trivial. In Pareto’s case it meant 20 percent of the people owned 80 percent of the wealth. In Juran’s initial work he identified 20 percent of the defects causing 80 percent of the problems. Project Managers know that 20 percent of the work (the first 10 percent and the last 10 percent) consumes 80 percent of your time and resources. You can apply the 80/20 Rule to almost anything, from the science of management to the physical world.

You know 20 percent of your stock takes up 80 percent of your warehouse space and that 80 percent of your stock comes from 20 percent of your suppliers. Also 80 percent of your sales will come from 20 percent of your sales staff. 20 percent of your staff will cause 80 percent of your problems, but another 20 percent of your staff will provide 80 percent of your production. It works both ways.

The value of the Pareto Principle for a manager is that it reminds you to focus on the 20 percent that matters. Of the things you do during your day, only 20 percent really matter. Those 20 percent produce 80 percent of your results.
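
The rule is easy to sanity-check against real numbers: sort items by contribution and see how small a slice of them covers 80 percent of the total. The sales figures below are made up for illustration.

```python
# Invented weekly revenue per salesperson.
sales = {"A": 55_000, "B": 28_000, "C": 6_000, "D": 4_000, "E": 2_500,
         "F": 1_800, "G": 1_200, "H": 800, "I": 450, "J": 250}

total = sum(sales.values())
running, vital_few = 0, []
for name, amount in sorted(sales.items(), key=lambda kv: kv[1], reverse=True):
    running += amount
    vital_few.append(name)
    if running >= 0.8 * total:
        break

print(f"{len(vital_few) / len(sales):.0%} of the staff ({', '.join(vital_few)}) "
      f"produce {running / total:.0%} of the revenue")
```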


Embarrassing email story #1056

From MedZilla’s “Emails ‘gone bad’”:

Another example of embarrassing and damaging emails sent at work comes from an investigation that uncovered 622 emails exchanged between Arapahoe County (Colo.) Clerk and Recorder Tracy K. Baker and his Assistant Chief Deputy Leesa Sale. Of those emails, 570 were sexually explicit. That’s not the only thing Baker’s lawyers are having to explain in court. Seems the emails also revealed Baker might have misused public funds, among other things.


Intel: anyone can challenge anyone

From FORTUNE’s “Lessons in Leadership: The Education of Andy Grove”:

[Intel CEO Andy] Grove had never been one to rely on others’ interpretations of reality. … At Intel he fostered a culture in which “knowledge power” would trump “position power.” Anyone could challenge anyone else’s idea, so long as it was about the idea and not the person–and so long as you were ready for the demand “Prove it.” That required data. Without data, an idea was only a story–a representation of reality and thus subject to distortion.


Incompetent people don’t know it

From The New York Times:

Dunning, a professor of psychology at Cornell, worries about this because, according to his research, most incompetent people do not know that they are incompetent.

On the contrary. People who do things badly, Dunning has found in studies conducted with a graduate student, Justin Kruger, are usually supremely confident of their abilities — more confident, in fact, than people who do things well.
