internet

New Zealand’s new copyright law

From Mark Gibbs’ “New Zealand gets insane copyright law” (Network World: 20 February 2009):

A law was recently passed in New Zealand that has created what many consider to be the world’s harshest copyright enforcement law. This insanity, found in Sections 92A and 92C of New Zealand’s Copyright Amendment Act 2008, establishes – and I am not making this up – a guilt-upon-accusation principle!

Yep, you read that right. This means that anyone accused of “copyright infringement” will get his Internet connection cut off and be treated as guilty until proven innocent.

And if that weren’t enough, this crazy legislation defines anyone providing Internet access as an ISP and makes them responsible for monitoring and cutting off Internet access for anyone who uses their services and is accused of copyright violations. Thus libraries, schools, coffee shops, cafes – anyone offering any kind of Internet access – will be considered ISPs and become responsible and potentially liable.


The end of Storm

From Brian Krebs’ “Atrivo Shutdown Hastened Demise of Storm Worm” (The Washington Post: 17 October 2008):

The infamous Storm worm, which powered a network of thousands of compromised PCs once responsible for sending more than 20 percent of all spam, appears to have died off. Security experts say Storm’s death knell was sounded by the recent shutdown of Atrivo, a California-based ISP that was home to a number of cyber crime operations, including at least three of the master servers used to control the Storm network.

Three out of four of [Storm’s] control servers were located at Atrivo, a.k.a. Intercage, said Joe Stewart, a senior security researcher with Atlanta-based SecureWorks who helped unlock the secrets of the complex Storm network. The fourth server, he said, operated out of Hosting.ua, an Internet provider based in the Ukraine.

Stewart said the final spam run blasted out by Storm was on Sept. 18. Three days later, Atrivo was forced off the Internet after its sole remaining upstream provider — Pacific Internet Exchange (PIE) — decided to stop routing for the troubled ISP. In the weeks leading up to that disconnection, four other upstream providers severed connectivity to Atrivo, following detailed reports from Security Fix and Host Exploit that pointed to a massive amount of spam, malicious software and a host of other cyber criminal operations emanating from it.

Stewart said spam sent by the Storm network had been steadily decreasing throughout 2008, aided in large part by the inclusion of the malware in Microsoft’s malicious software removal tool, which has scrubbed Storm from hundreds of thousands of PCs since last fall. Stewart said it’s impossible to tell whether the Storm worm was disrupted by the Atrivo shutdown or if the worm’s authors pulled the plug themselves and decided to move on. But at least 30,000 systems remain infected with the Storm malware.


The end of Storm?

From “Storm Worm botnet cracked wide open” (Heise Security: 9 January 2009):

A team of researchers from Bonn University and RWTH Aachen University have analysed the notorious Storm Worm botnet, and concluded it certainly isn’t as invulnerable as it once seemed. Quite the reverse, for in theory it can be rapidly eliminated using software developed and at least partially disclosed by Georg Wicherski, Tillmann Werner, Felix Leder and Mark Schlösser. However it seems in practice the elimination process would fall foul of the law.

Over the last two years, Storm Worm has demonstrated how easily organised internet criminals have been able to spread this infection. During that period, the Storm Worm botnet has accumulated more than a million infected computers, known as drones or zombies, obeying the commands of a control server and using peer-to-peer techniques to locate new servers. Even following a big clean-up with Microsoft’s Malicious Software Removal Tool, around 100,000 drones probably still remain. That means the Storm Worm botnet is responsible for a considerable share of the Spam tsunami and for many distributed denial-of-service attacks. It’s astonishing that no one has succeeded in dismantling the network, but these researchers say it isn’t due to technical finesse on the part of the Storm Worm’s developers.

Existing knowledge of the techniques used by the Storm Worm has mainly been obtained by observing the behaviour of infected systems, but the researchers took a different approach to disarm it. They reverse-translated large parts of the machine code of the drone client program and analysed it, taking a particularly close look at the functions for communications between drones and with the server.

Using this background knowledge, they were able to develop their own client, which links itself into the peer-to-peer structure of a Storm Worm network in such a way that queries from other drones, looking for new command servers, can be reliably routed to it. That enables it to divert drones to a new server. The second step was to analyse the protocol for passing commands. The researchers were astonished to find that the server doesn’t have to authenticate itself to clients, so using their knowledge they were able to direct drones to a simple server. The latter could then issue commands to the test Storm worm drones in the laboratory so that, for example, they downloaded a specific program from a server, perhaps a special cleaning program, and ran it. The students then went on to write such a program.
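To make the significance of that missing authentication step concrete, here is a minimal Python sketch. It is purely illustrative, not the real Storm protocol: every name, key and message in it is hypothetical. It contrasts a drone that obeys whatever server it is routed to with one that would demand commands signed with a key only the operator holds.

# Hypothetical sketch (not the real Storm protocol): why an unauthenticated
# command channel can be hijacked, and how a signature check would prevent it.
import hmac
import hashlib

OPERATOR_KEY = b"secret-known-only-to-the-real-operator"  # hypothetical

def execute(command: bytes) -> None:
    # Placeholder for "download this program and run it".
    print(f"executing: {command!r}")

def naive_drone_handle(command: bytes) -> None:
    """A drone that trusts any server it is pointed at -- the weakness
    the researchers exploited to push their own commands."""
    execute(command)  # runs whatever the 'server' says, no questions asked

def hardened_drone_handle(command: bytes, signature: bytes) -> None:
    """A drone that only obeys commands signed with the operator's key.
    The drones analysed in the article did NOT do this."""
    expected = hmac.new(OPERATOR_KEY, command, hashlib.sha256).digest()
    if hmac.compare_digest(expected, signature):
        execute(command)
    # otherwise: silently drop the unauthenticated command

if __name__ == "__main__":
    # An impostor server (for example, a cleanup server) can drive the
    # naive drone, but not the hardened one.
    naive_drone_handle(b"download http://example.invalid/cleaner && run")
    hardened_drone_handle(b"download http://example.invalid/cleaner && run",
                          b"not-a-valid-signature")

The diversion described above works precisely because the deployed drones behave like the naive version.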

The team has not yet taken the final step of putting the whole thing into action with a genuine Storm Worm botnet in the wild. From a legal point of view, that could involve many problems. Any unauthorised access to third-party computers could be regarded as tampering with data, which is punishable under § 303a of the German Penal Code. That paragraph threatens up to two years’ imprisonment for unlawfully deleting, suppressing, making unusable or changing third-party data, although this legal process would only come into effect if there were a criminal complaint from an injured party, or special public interest in the prosecution of the crime.

Besides risks of coming up against the criminal law, there is also a danger of civil claims for damages by the owners of infected PCs, because the operation might cause collateral damage. There are almost certain to be configurations in which the cleaning goes wrong, perhaps disabling computers so they won’t run any more. Botnet operators could also be expected to strike back, causing further damage.


Largest botnet as of 2006: 1.5 M machines

From Gregg Keizer’s “Dutch Botnet Bigger Than Expected” (InformationWeek: 21 October 2005):

Dutch prosecutors who last month arrested a trio of young men for creating a large botnet allegedly used to extort a U.S. company, steal identities, and distribute spyware now say they bagged bigger prey: a botnet of 1.5 million machines.

According to Wim de Bruin, a spokesman for the Public Prosecution Service (Openbaar Ministerie, or OM), when investigators at GOVCERT.NL, the Netherlands’ Computer Emergency Response Team, and several Internet service providers began dismantling the botnet, they discovered it consisted of about 1.5 million compromised computers, 15 times the 100,000 PCs first thought.

The three suspects, ages 19, 22, and 27, were arrested Oct. 6 …

The trio supposedly used the Toxbot Trojan horse to infect the vast number of machines, easily the largest controlled by arrested attackers.


Why botnet operators do it: profit, politics, & prestige

From Clive Akass’ “Storm worm ‘making millions a day’” (Personal Computer World: 11 February 2008):

The people behind the Storm worm are making millions of pounds a day by using it to generate revenue, according to IBM’s principal web security strategist.

Joshua Corman, of IBM Internet Security Systems, said that in the past it had been assumed that web security attacks were essentially ego-driven. But now attackers fell into three camps.

‘I call them my three Ps, profit, politics and prestige,’ he said during a debate at a NetEvents forum in Barcelona.

The Storm worm, which had been around for about a year, had been a tremendous financial success because it created a botnet of compromised machines that could be used to launch profitable spam attacks.

Not only do the criminals get money simply for sending out spam in far greater quantity than a single machine could manage, but they also get a cut of any business done off the spam.


Srizbi, Bobax, & Storm – the rankings

From Gregg Keizer’s “RSA – Top botnets control 1M hijacked computers” (Computerworld: 4 October 2008):

Joe Stewart, director of malware research at SecureWorks, presented his survey at the RSA Conference, which opened Monday in San Francisco. The survey ranked the top 11 botnets that send spam; by extrapolating their size, Stewart estimated the bots on his list control just over a million machines and are capable of flooding the Internet with more than 100 billion spam messages every day.

The botnet at the top of the chart is Srizbi. According to Stewart, this botnet — which also goes by the names “Cbeplay” and “Exchanger” — has an estimated 315,000 bots and can blast out 60 billion messages a day.

While it may not have gotten the publicity that Storm has during the last year, it’s built around a much more substantial collection of hijacked computers, said Stewart. In comparison, Storm’s botnet counts just 85,000 machines, only 35,000 of which are set up to send spam. Storm, in fact, is No. 5 on Stewart’s list.

“Storm is pretty insignificant at this point,” said Stewart. “It got all this attention, so Microsoft added it to its malicious software detection tool [in September 2007], and that’s removed hundreds of thousands of compromised PCs from the botnet.”

The second-largest botnet is “Bobax,” which boasts an estimated 185,000 hacked systems in its collection. Able to spam approximately nine billion messages a day, Bobax has been around for some time, but recently has been in the news again, albeit under one of its several aliases.


Number of bots drops 20% on Christmas

From Robert Lemos’ “Bot-infected PCs get a refresh” (SecurityFocus: 28 December 2006):

On Christmas day, the number of bots tracked by the Shadowserver group dropped nearly 20 percent.

The dramatic decrease in weekly totals–from more than 500,000 infected systems to less than 400,000 computers–puzzled researchers. The Internet Storm Center, a threat monitoring group managed by the SANS Institute, confirmed a drop of about 10 percent.

One of the Internet Storm Center’s network monitoring volunteers posited that the decrease was due to the large number of computers given as gifts this Christmas. The systems running Microsoft Windows XP will be using Service Pack 2, which also means the firewall will be on by default, adding an additional hurdle for bot herders looking to reclaim their drones.

“Many of the infected machines are turned off, the new shiny ones have not been infected, and the Internet is momentarily a safer place,” Marcus Sachs, director of the ISC, stated in a diary entry. “But like you said, give it a few weeks and we’ll be right back to where we started from.”


1/4 of all Internet computers part of a botnet?

From Nate Anderson’s “Vint Cerf: one quarter of all computers part of a botnet” (Ars Technica: 25 January 2007):

The BBC’s Tim Weber, who was in the audience of an Internet panel featuring Vint Cerf, Michael Dell, John Markoff of the New York Times, and Jon Zittrain of Oxford, came away most impressed by the botnet statistics. Cerf told his listeners that approximately 600 million computers are connected to the Internet, and that 150 million of them might be participants in a botnet—nearly all of them unwilling victims. Weber remarks that “in most cases the owners of these computers have not the slightest idea what their little beige friend in the study is up to.”

In September 2006, security research firm Arbor Networks announced that it was now seeing botnet-based denial of service attacks capable of generating an astonishing 10-20Gbps of junk data. The company notes that when major attacks of this sort begin, ISPs often do exactly what the attacker wants them to do: take the target site offline.


How ARP works

From Chris Sanders’ “Packet School 201 – Part 1 (ARP)” (Completely Full of I.T.: 23 December 2007):

The basic idea behind ARP is for a machine to broadcast its IP address and MAC address to all of the clients in its broadcast domain in order to find out the MAC address associated with a particular IP address. Basically put, it looks like this:

Computer A – “Hey everybody, my IP address is XX.XX.XX.XX, and my MAC address is XX:XX:XX:XX:XX:XX. I need to send something to whoever has the IP address XX.XX.XX.XX, but I don’t know what their hardware address is. Will whoever has this IP address please respond back with their MAC address?”

All of the other computers that receive the broadcast will simply ignore it; however, the one that does have the requested IP address will send its MAC address to Computer A. With this information in hand, the exchange of data can begin.

Computer B – “Hey Computer A. I am who you are looking for with the IP address of XX.XX.XX.XX. My MAC address is XX:XX:XX:XX:XX:XX.”

One of the best ways I’ve seen this concept described is through the limousine driver analogy. If you have ever flown, then chances are when you get off of a plane, you have seen a limo driver standing with a sign bearing someone’s last name. Here, the driver knows the name of the person he is picking up, but doesn’t know what they look like. The driver holds up the sign so that everyone can see it. All of the people getting off of the plane see the sign, and if it isn’t them, they simply ignore it. The person whose name is on the card, however, sees it, approaches the driver, and identifies himself.
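For readers who want to watch the exchange on a real network, here is a minimal sketch using the third-party scapy library (an assumption of this example, not something from the article). The IP address is a placeholder for whichever host you want to resolve; run it with root privileges on your own LAN.

# A minimal sketch of the ARP exchange described above, using the third-party
# scapy library (pip install scapy). The target IP is a placeholder.
from scapy.all import ARP, Ether, srp

def resolve_mac(target_ip: str, timeout: float = 2.0) -> str | None:
    # Computer A: "Will whoever has this IP address please respond back
    # with their MAC address?" -- an ARP who-has request, broadcast to all.
    request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip)
    answered, _ = srp(request, timeout=timeout, verbose=False)

    # Computer B: "I am who you are looking for ... my MAC address is ..."
    for _, reply in answered:
        return reply[ARP].hwsrc  # the responder's MAC address
    return None  # nobody on the segment claimed that IP

if __name__ == "__main__":
    print(resolve_mac("192.168.1.10"))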


The future of security

From Bruce Schneier’s “Security in Ten Years” (Crypto-Gram: 15 December 2007):

Bruce Schneier: … The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.

Marcus Ranum: … Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.

Bruce Schneier: … I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.


A single medium, with a single search engine, & a single info source

From Nicholas Carr’s “All hail the information triumvirate!” (Rough Type: 22 January 2009):

Today, another year having passed, I did the searches [on Google] again. And guess what:

World War II: #1
Israel: #1
George Washington: #1
Genome: #1
Agriculture: #1
Herman Melville: #1
Internet: #1
Magna Carta: #1
Evolution: #1
Epilepsy: #1

Yes, it’s a clean sweep for Wikipedia.

The first thing to be said is: Congratulations, Wikipedians. You rule. Seriously, it’s a remarkable achievement. Who would have thought that a rag-tag band of anonymous volunteers could achieve what amounts to hegemony over the results of the most popular search engine, at least when it comes to searches for common topics.

The next thing to be said is: what we seem to have here is evidence of a fundamental failure of the Web as an information-delivery service. Three things have happened, in a blink of history’s eye: (1) a single medium, the Web, has come to dominate the storage and supply of information, (2) a single search engine, Google, has come to dominate the navigation of that medium, and (3) a single information source, Wikipedia, has come to dominate the results served up by that search engine. Even if you adore the Web, Google, and Wikipedia – and I admit there’s much to adore – you have to wonder if the transformation of the Net from a radically heterogeneous information source to a radically homogeneous one is a good thing. Is culture best served by an information triumvirate?

It’s hard to imagine that Wikipedia articles are actually the very best source of information for all of the many thousands of topics on which they now appear as the top Google search result. What’s much more likely is that the Web, through its links, and Google, through its search algorithms, have inadvertently set into motion a very strong feedback loop that amplifies popularity and, in the end, leads us all, lemminglike, down the same well-trod path – the path of least resistance. You might call this the triumph of the wisdom of the crowd. I would suggest that it would be more accurately described as the triumph of the wisdom of the mob. The former sounds benign; the latter, less so.


Those who know how to fix know how to destroy as well

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

This is true in many aspects of our society. Here’s what I said in my book, Secrets and Lies (page 389): “As technology becomes more complicated, society’s experts become more specialized. And in almost every area, those with the expertise to build society’s infrastructure also have the expertise to destroy it. Ask any doctor how to poison someone untraceably, and he can tell you. Ask someone who works in aircraft maintenance how to drop a 747 out of the sky without getting caught, and he’ll know. Now ask any Internet security professional how to take down the Internet, permanently. I’ve heard about half a dozen different ways, and I know I haven’t exhausted the possibilities.”


Bruce Schneier on security & crime economics

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Basically, you’re asking if crime pays. Most of the time, it doesn’t, and the problem is the different risk characteristics. If I make a computer security mistake — in a book, for a consulting client, at BT — it’s a mistake. It might be expensive, but I learn from it and move on. As a criminal, a mistake likely means jail time — time I can’t spend earning my criminal living. For this reason, it’s hard to improve as a criminal. And this is why there are more criminal masterminds in the movies than in real life.

Crime has been part of our society since our species invented society, and it’s not going away anytime soon. The real question is, “Why is there so much crime and hacking on the Internet, and why isn’t anyone doing anything about it?”

The answer is in the economics of Internet vulnerabilities and attacks: the organizations that are in the position to mitigate the risks aren’t responsible for the risks. This is an externality, and if you want to fix the problem you need to address it. In this essay (more here), I recommend liabilities; companies need to be liable for the effects of their software flaws. A related problem is that the Internet security market is a lemons market (discussed here), but there are strategies for dealing with that, too.


An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure “Google’s Fusion: Hardware and Software Engineering” shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub-one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers now number about two dozen, although no one outside Google knows the exact number or their locations. They come online and, under the direction of the Google File System, automatically start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled for Google to hold servers on their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo, hardware and software are more loosely coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQL Server can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

[Figure: 2005 focuses of Google, MSN, and Yahoo]

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
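As a rough illustration of that read path (a toy sketch only, not Google’s actual file-system code; the chunk names, node names and failure are invented), the idea is simply that the metadata lists several replicas per chunk, and a reader that finds one copy unavailable falls through to the next.

# Toy sketch of the read path described above: every chunk is stored on
# several machines, and a dead replica just means trying the next copy.
class ReplicaUnavailable(Exception):
    pass

# Hypothetical metadata "log": chunk id -> machines holding a copy (3-6 each).
CHUNK_LOCATIONS = {
    "index-chunk-0042": ["rack12/node07", "rack30/node81", "rack55/node02"],
}

def fetch_from(node: str, chunk_id: str) -> bytes:
    """Placeholder for a network read; raises if the node is down."""
    if node == "rack12/node07":           # pretend this machine just died
        raise ReplicaUnavailable(node)
    return f"<contents of {chunk_id} from {node}>".encode()

def read_chunk(chunk_id: str) -> bytes:
    for node in CHUNK_LOCATIONS[chunk_id]:
        try:
            return fetch_from(node, chunk_id)
        except ReplicaUnavailable:
            continue                      # the loss of one device is irrelevant
    raise RuntimeError(f"all replicas of {chunk_id} unavailable")

if __name__ == "__main__":
    print(read_chunk("index-chunk-0042"))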

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed shortcuts to programming. An example is Google’s creation of a library of canned functions that makes it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
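The passage does not name the library, but the pattern it describes is essentially what Google published as MapReduce: the programmer supplies only the per-item logic, and a canned routine spreads the work across processors. A loose stand-in using nothing more than Python’s standard multiprocessing pool looks like this (the function names are mine, not Google’s):

# Stand-in for the "canned routine" pattern described above: the programmer
# writes only the per-document function; the pool handles the parallelism.
from collections import Counter
from multiprocessing import Pool

def count_words(document: str) -> Counter:
    """The only code the application programmer writes."""
    return Counter(document.lower().split())

def word_counts(documents: list[str]) -> Counter:
    with Pool() as pool:                      # the "canned" parallel machinery
        partial_counts = pool.map(count_words, documents)
    total = Counter()
    for counts in partial_counts:             # combine the partial results
        total.update(counts)
    return total

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox jumps"]
    print(word_counts(docs).most_common(3))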

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives which fail at the rate of one per 1,000 drives per day.
  • Power supplies which fail at a lower rate.

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.


Matthew, the blind phone phreaker

From Kevin Poulsen’s “Teenage Hacker Is Blind, Brash and in the Crosshairs of the FBI” (Wired: 29 February 2008):

At 4 in the morning of May 1, 2005, deputies from the El Paso County Sheriff’s Office converged on the suburban Colorado Springs home of Richard Gasper, a TSA screener at the local Colorado Springs Municipal Airport. They were expecting to find a desperate, suicidal gunman holding Gasper and his daughter hostage.

“I will shoot,” the gravelly voice had warned, in a phone call to police minutes earlier. “I’m not afraid. I will shoot, and then I will kill myself, because I don’t care.”

But instead of a gunman, it was Gasper himself who stepped into the glare of police floodlights. Deputies ordered Gasper’s hands up and held him for 90 minutes while searching the house. They found no armed intruder, no hostages bound in duct tape. Just Gasper’s 18-year-old daughter and his baffled parents.

A federal Joint Terrorism Task Force would later conclude that Gasper had been the victim of a new type of nasty hoax, called “swatting,” that was spreading across the United States. Pranksters were phoning police with fake murders and hostage crises, spoofing their caller IDs so the calls appear to be coming from inside the target’s home. The result: police SWAT teams rolling to the scene, sometimes bursting into homes, guns drawn.

Now the FBI thinks it has identified the culprit in the Colorado swatting as a 17-year-old East Boston phone phreak known as “Li’l Hacker.” Because he’s underage, Wired.com is not reporting Li’l Hacker’s last name. His first name is Matthew, and he poses a unique challenge to the federal justice system, because he is blind from birth.

Interviews by Wired.com with Matt and his associates, and a review of court documents, FBI reports and audio recordings, paint a picture of a young man with an uncanny talent for quick telephone con jobs. Able to commit vast amounts of information to memory instantly, Matt has mastered the intricacies of telephone switching systems, while developing an innate understanding of human psychology and organizational culture — knowledge that he uses to manipulate his patsies and torment his foes.

Matt says he ordered phone company switch manuals off the internet and paid to have them translated into Braille. He became a regular caller to internal telephone company lines, where he’d masquerade as an employee to perform tricks like tracing telephone calls, getting free phone features, obtaining confidential customer information and disconnecting his rivals’ phones.

It was, relatively speaking, mild stuff. The teen, though, soon fell in with a bad crowd. The party lines were dominated by a gang of half-a-dozen miscreants who informally called themselves the “Wrecking Crew” and “The Cavalry.”

By then, Matt’s reputation had taken on a life of its own, and tales of some of his hacks — perhaps apocryphal — are now legends. According to Daniels, he hacked his school’s PBX so that every phone would ring at once. Another time, he took control of a hotel elevator, sending it up and down over and over again. One story has it that Matt phoned a telephone company frame room worker at home in the middle of the night, and persuaded him to get out of bed and return to work to disconnect someone’s phone.


A botnet with a contingency plan

From Gregg Keizer’s “Massive botnet returns from the dead, starts spamming” (Computerworld: 26 November 2008):

A big spam-spewing botnet shut down two weeks ago has been resurrected, security researchers said today, and is again under the control of criminals.

The “Srizbi” botnet returned from the dead late Tuesday, said Fengmin Gong, chief security content officer at FireEye Inc., when the infected PCs were able to successfully reconnect with new command-and-control servers, which are now based in Estonia.

Srizbi was knocked out more than two weeks ago when McColo Corp., a hosting company that had been accused of harboring a wide range of criminal activities, was yanked off the Internet by its upstream service providers. With McColo down, PCs infected with Srizbi and other bot Trojan horses were unable to communicate with their command servers, which had been hosted by McColo. As a result, spam levels dropped precipitously.

But as other researchers noted last week, Srizbi had a fallback strategy. In the end, that strategy paid off for the criminals who control the botnet.

According to Gong, when Srizbi bots were unable to connect with the command-and-control servers hosted by McColo, they tried to connect with new servers via domains that were generated on the fly by an internal algorithm. FireEye reverse-engineered Srizbi, rooted out that algorithm and used it to predict, then preemptively register, several hundred of the possible routing domains.

The domain names, said Gong, were generated on a three-day cycle, and for a while, FireEye was able to keep up — and effectively block Srizbi’s handlers from regaining control.

“We have registered a couple hundred domains,” Gong said, “but we made the decision that we cannot afford to spend so much money to keep registering so many [domain] names.”

Once FireEye stopped preempting Srizbi’s makers, the latter swooped in and registered the five domains in the next cycle. Those domains, in turn, pointed Srizbi bots to the new command-and-control servers, which then immediately updated the infected machines to a new version of the malware.
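To see why pre-registration was possible at all, consider a generic domain-generation sketch in Python. This is emphatically not Srizbi’s actual algorithm; the seed, hash and domain format here are invented. The point it illustrates is the one the article turns on: because the candidate names are a pure function of the date, the bots and anyone who has reverse-engineered the routine compute exactly the same list for each three-day cycle.

# Hypothetical domain-generation sketch -- NOT Srizbi's real algorithm.
# Both a bot and a defender who knows the routine can compute the same
# candidate domains for any given three-day cycle.
import hashlib
from datetime import date

def candidate_domains(today: date, count: int = 5) -> list[str]:
    cycle = today.toordinal() // 3            # changes every three days
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{cycle}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")  # e.g. "a3f09c1b77de.com"
    return domains

if __name__ == "__main__":
    # A bot would try these in order until one resolves to a live
    # command-and-control server; a defender could register them first.
    print(candidate_domains(date(2008, 11, 25)))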


The NSA and threats to privacy

From James Bamford’s “Big Brother Is Listening” (The Atlantic: April 2006):

This legislation, the 1978 Foreign Intelligence Surveillance Act, established the FISA court—made up of eleven judges handpicked by the chief justice of the United States—as a secret part of the federal judiciary. The court’s job is to decide whether to grant warrants requested by the NSA or the FBI to monitor communications of American citizens and legal residents. The law allows the government up to three days after it starts eavesdropping to ask for a warrant; every violation of FISA carries a penalty of up to five years in prison. Between May 18, 1979, when the court opened for business, and the end of 2004, it granted 18,742 NSA and FBI applications; it turned down only four outright.

Such facts worry Jonathan Turley, a George Washington University law professor who worked for the NSA as an intern while in law school in the 1980s. The FISA “courtroom,” hidden away on the top floor of the Justice Department building (because even its location is supposed to be secret), is actually a heavily protected, windowless, bug-proof installation known as a Sensitive Compartmented Information Facility, or SCIF.

It is true that the court has been getting tougher. From 1979 through 2000, it modified only two out of 13,087 warrant requests. But from the start of the Bush administration, in 2001, the number of modifications increased to 179 out of 5,645 requests. Most of those—173—involved what the court terms “substantive modifications.”

Contrary to popular perception, the NSA does not engage in “wiretapping”; it collects signals intelligence, or “sigint.” In contrast to the image we have from movies and television of an FBI agent placing a listening device on a target’s phone line, the NSA intercepts entire streams of electronic communications containing millions of telephone calls and e-mails. It runs the intercepts through very powerful computers that screen them for particular names, telephone numbers, Internet addresses, and trigger words or phrases. Any communications containing flagged information are forwarded by the computer for further analysis.
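Stripped of its scale and secrecy, the screening step amounts to matching a stream of intercepts against a watch list. Here is a toy Python sketch of that idea; the terms, message format and function names are invented for illustration and reflect nothing about actual NSA systems.

# Toy sketch of the screening step described above: a stream of messages is
# matched against a watch list, and hits are set aside for further analysis.
from dataclasses import dataclass

WATCH_LIST = {"microturbo", "c-802", "+98 21 555 0100"}   # hypothetical entries

@dataclass
class Message:
    sender: str
    recipient: str
    body: str

def flagged(msg: Message) -> bool:
    text = f"{msg.sender} {msg.recipient} {msg.body}".lower()
    return any(term in text for term in WATCH_LIST)

def screen(stream):
    """Yield only the messages forwarded for further analysis."""
    for msg in stream:
        if flagged(msg):
            yield msg

if __name__ == "__main__":
    sample = [Message("a@example.com", "b@example.com", "lunch on friday?"),
              Message("sales@example.com", "buyer@example.org",
                      "re: Microturbo engine quote")]
    for hit in screen(sample):
        print("forwarded for analysis:", hit)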

Names and information on the watch lists are shared with the FBI, the CIA, the Department of Homeland Security, and foreign intelligence services. Once a person’s name is in the files, even if nothing incriminating ever turns up, it will likely remain there forever. There is no way to request removal, because there is no way to confirm that a name is on the list.

In December of 1997, in a small factory outside the southern French city of Toulouse, a salesman got caught in the NSA’s electronic web. Agents working for the NSA’s British partner, the Government Communications Headquarters, learned of a letter of credit, valued at more than $1.1 million, issued by Iran’s defense ministry to the French company Microturbo. According to NSA documents, both the NSA and the GCHQ concluded that Iran was attempting to secretly buy from Microturbo an engine for the embargoed C-802 anti-ship missile. Faxes zapping back and forth between Toulouse and Tehran were intercepted by the GCHQ, which sent them on not just to the NSA but also to the Canadian and Australian sigint agencies, as well as to Britain’s MI6. The NSA then sent the reports on the salesman making the Iranian deal to a number of CIA stations around the world, including those in Paris and Bonn, and to the U.S. Commerce Department and the Customs Service. Probably several hundred people in at least four countries were reading the company’s communications.

Such events are central to the current debate involving the potential harm caused by the NSA’s warrantless domestic eavesdropping operation. Even though the salesman did nothing wrong, his name made its way into the computers and onto the watch lists of intelligence, customs, and other secret and law-enforcement organizations around the world. Maybe nothing will come of it. Maybe the next time he tries to enter the United States or Britain he will be denied, without explanation. Maybe he will be arrested. As the domestic eavesdropping program continues to grow, such uncertainties may plague innocent Americans whose names are being run through the supercomputers even though the NSA has not met the established legal standard for a search warrant. It is only when such citizens are turned down while applying for a job with the federal government—or refused when seeking a Small Business Administration loan, or turned back by British customs agents when flying to London on vacation, or even placed on a “no-fly” list—that they will realize that something is very wrong. But they will never learn why.

General Michael Hayden, director of the NSA from 1999 to 2005 and now principal deputy director of national intelligence, noted in 2002 that during the 1990s, e-communications “surpassed traditional communications. That is the same decade when mobile cell phones increased from 16 million to 741 million—an increase of nearly 50 times. That is the same decade when Internet users went from about 4 million to 361 million—an increase of over 90 times. Half as many land lines were laid in the last six years of the 1990s as in the whole previous history of the world. In that same decade of the 1990s, international telephone traffic went from 38 billion minutes to over 100 billion. This year, the world’s population will spend over 180 billion minutes on the phone in international calls alone.”

Intercepting communications carried by satellite is fairly simple for the NSA. The key conduits are the thirty Intelsat satellites that ring the Earth, 22,300 miles above the equator. Many communications from Europe, Africa, and the Middle East to the eastern half of the United States, for example, are first uplinked to an Intelsat satellite and then downlinked to AT&T’s ground station in Etam, West Virginia. From there, phone calls, e-mails, and other communications travel on to various parts of the country. To listen in on that rich stream of information, the NSA built a listening post fifty miles away, near Sugar Grove, West Virginia. Consisting of a group of very large parabolic dishes, hidden in a heavily forested valley and surrounded by tall hills, the post can easily intercept the millions of calls and messages flowing every hour into the Etam station. On the West Coast, high on the edge of a bluff overlooking the Okanogan River, near Brewster, Washington, is the major commercial downlink for communications to and from Asia and the Pacific. Consisting of forty parabolic dishes, it is reportedly the largest satellite antenna farm in the Western Hemisphere. A hundred miles to the south, collecting every whisper, is the NSA’s western listening post, hidden away on a 324,000-acre Army base in Yakima, Washington. The NSA posts collect the international traffic beamed down from the Intelsat satellites over the Atlantic and Pacific. But each also has a number of dishes that appear to be directed at domestic telecommunications satellites.

Until recently, most international telecommunications flowing into and out of the United States traveled by satellite. But faster, more reliable undersea fiber-optic cables have taken the lead, and the NSA has adapted. The agency taps into the cables that don’t reach our shores by using specially designed submarines, such as the USS Jimmy Carter, to attach a complex “bug” to the cable itself. This is difficult, however, and undersea taps are short-lived because the batteries last only a limited time. The fiber-optic transmission cables that enter the United States from Europe and Asia can be tapped more easily at the landing stations where they come ashore. With the acquiescence of the telecommunications companies, it is possible for the NSA to attach monitoring equipment inside the landing station and then run a buried encrypted fiber-optic “backhaul” line to NSA headquarters at Fort Meade, Maryland, where the river of data can be analyzed by supercomputers in near real time.

Tapping into the fiber-optic network that carries the nation’s Internet communications is even easier, as much of the information transits through just a few “switches” (similar to the satellite downlinks). Among the busiest are MAE East (Metropolitan Area Ethernet), in Vienna, Virginia, and MAE West, in San Jose, California, both owned by Verizon. By accessing the switch, the NSA can see who’s e-mailing with whom over the Internet cables and can copy entire messages. Last September, the Federal Communications Commission further opened the door for the agency. The 1994 Communications Assistance for Law Enforcement Act required telephone companies to rewire their networks to provide the government with secret access. The FCC has now extended the act to cover “any type of broadband Internet access service” and the new Internet phone services—and ordered company officials never to discuss any aspect of the program.

The National Security Agency was born in absolute secrecy. Unlike the CIA, which was created publicly by a congressional act, the NSA was brought to life by a top-secret memorandum signed by President Truman in 1952, consolidating the country’s various military sigint operations into a single agency. Even its name was secret, and only a few members of Congress were informed of its existence—and they received no information about some of its most important activities. Such secrecy has lent itself to abuse.

During the Vietnam War, for instance, the agency was heavily involved in spying on the domestic opposition to the government. Many of the Americans on the watch lists of that era were there solely for having protested against the war. … Even so much as writing about the NSA could land a person a place on a watch list.

For instance, during World War I, the government read and censored thousands of telegrams—the e-mail of the day—sent hourly by telegraph companies. Though the end of the war brought with it a reversion to the Radio Act of 1912, which guaranteed the secrecy of communications, the State and War Departments nevertheless joined together in May of 1919 to create America’s first civilian eavesdropping and code-breaking agency, nicknamed the Black Chamber. By arrangement, messengers visited the telegraph companies each morning and took bundles of hard-copy telegrams to the agency’s offices across town. These copies were returned before the close of business that day.

A similar tale followed the end of World War II. In August of 1945, President Truman ordered an end to censorship. That left the Signal Security Agency (the military successor to the Black Chamber, which was shut down in 1929) without its raw intelligence—the telegrams provided by the telegraph companies. The director of the SSA sought access to cable traffic through a secret arrangement with the heads of the three major telegraph companies. The companies agreed to turn all telegrams over to the SSA, under a plan code-named Operation Shamrock. It ran until the government’s domestic spying programs were publicly revealed, in the mid-1970s.

Frank Church, the Idaho Democrat who led the first probe into the National Security Agency, warned in 1975 that the agency’s capabilities

“could be turned around on the American people, and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it is done, is within the reach of the government to know. Such is the capacity of this technology.”


George Clinton and the sample troll

From Tim Wu’s “On Copyright’s Authorship Policy” (Internet Archive: 2007):

On May 4, 2001, a one-man corporation named Bridgeport Music, Inc. launched over 500 counts of copyright infringement against more than 800 different artists and labels.[1] Bridgeport Music has no employees, and other than copyrights, no reported assets.[2] Technically, Bridgeport is a “catalogue company.” Others call it a “sample troll.”

Bridgeport is the owner of valuable copyrights, including many of funk singer George Clinton’s most famous songs – songs which are sampled in a good amount of rap music.[3] Bridgeport located every sample of Clinton’s and other copyrights it owned, and sued based on the legal position that any sampling of a sound recording, no matter how minimal or unnoticeable, is still an infringement.

During the course of Bridgeport’s campaign, it has won two important victories. First, the Sixth Circuit, the appellate court for Nashville, adopted Bridgeport’s theory of infringement. In Bridgeport Music, Inc. v. Dimension Films,[4] the defendants sampled a single chord from the George Clinton tune “Get Off Your Ass and Jam,” changed the pitch, and looped the sound. Despite the plausible defense that one note is but a de minimis use of the work, the Sixth Circuit ruled for Bridgeport and created a stark rule: any sampling, no matter how minimal or undetectable, is a copyright infringement. Said the court in Bridgeport, “Get a license or do not sample. We do not see this as stifling creativity in any significant way.”[5] In 2006 Bridgeport convinced a district court to enjoin the sales of the bestselling Notorious B.I.G. album, Ready to Die, for “illegal sampling.”[6] A jury then awarded Bridgeport more than four million dollars in damages.[7]

The Bridgeport cases have been heavily criticized, and taken as a prime example of copyright’s excesses.[8] Yet the deeper problem with the Bridgeport litigation is not necessarily a problem of too much copyright. It can be equally concluded that the ownership of the relevant rights is the root of the problem. George Clinton, the actual composer and recording artist, takes a much different approach to sampling. “When hip-hop came out,” said Clinton in an interview with journalist Rick Karr, “I was glad to hear it, especially when it was our songs – it was a way to get back on the radio.”[9] Clinton accepts sampling of his work, and has released a three-CD collection of his sounds for just that purpose.[10] The problem is that he doesn’t own many of his most important copyrights. Instead, it is Bridgeport, the one-man company, that owns the rights to Clinton’s work. In the 1970s Bridgeport, through its owner Armen Boladian, managed to seize most of George Clinton’s copyrights and many other valuable rights. In at least a few cases, Boladian assigned the copyrights to Bridgeport by writing a contract and then faking Clinton’s signature.[11] As Clinton puts it, “he just stole ’em.”[12] With the copyrights to Clinton’s songs in the hands of Bridgeport – an entity with no vested interest in the works beyond their sheer economic value – the targeting of sampling is not surprising.

1 Tim Wu, Jay-Z Versus the Sample Troll, Slate Magazine, Nov. 16, 2006, http://www.slate.com/id/2153961/.

2 See Bridgeport Music, Inc.’s corporate entity details, Michigan Department of Labor & Economic Growth, available at http://www.dleg.state.mi.us/bcs_corp/dt_corp.asp?id_nbr=190824&name_entity=BRIDGEPORT%20MUSIC,%20INC (last visited Mar. 18, 2007).

3 See Wu, supra note 1.

4 410 F.3d 792 (6th Cir. 2005).

5 Id. at 801.

6 Jeff Leeds, Judge Freezes Notorious B.I.G. Album, N.Y. Times, Mar. 21, 2006, at E2.

7 Id.

8 See, e.g., Matthew R. Broodin, Comment, Bridgeport Music, Inc. v. Dimension Films: The Death of the Substantial Similarity Test in Digital Sampling Copyright Infringement Claims—The Sixth Circuit’s Flawed Attempt at a Bright Line Rule, 6 Minn. J. L. Sci. & Tech. 825 (2005); Jeffrey F. Kersting, Comment, Singing a Different Tune: Was the Sixth Circuit Justified in Changing the Protection of Sound Recordings in Bridgeport Music, Inc. v. Dimension Films?, 74 U. Cin. L. Rev. 663 (2005) (answering the title question in the negative); John Schietinger, Note, Bridgeport Music, Inc. v. Dimension Films: How the Sixth Circuit Missed a Beat on Digital Music Sampling, 55 DePaul L. Rev. 209 (2005).

9 Interview by Rick Karr with George Clinton, at the 5th Annual Future of Music Policy Summit, Wash. D.C. (Sept. 12, 2005), video clip available at http://www.tvworldwide.com/showclip.cfm?ID=6128&clip=2 [hereinafter Clinton Interview].

10 George Clinton, Sample Some of Disc, Sample Some of D.A.T., Vols. 1-3 (1993-94).

11 Sound Generator, George Clinton awarded Funkadelic master recordings (Jun. 6, 2005), http://www.soundgenerator.com/news/showarticle.cfm?articleid=5555.

12 Clinton Interview, supra note 9.

George Clinton and the sample troll Read More »

How Obama raised money in Silicon Valley & using the Net

From Joshua Green’s “The Amazing Money Machine” (The Atlantic: June 2008):

That early fund-raiser [in February 2007] and others like it were important to Obama in several respects. As someone attempting to build a campaign on the fly, he needed money to operate. As someone who dared challenge Hillary Clinton, he needed a considerable amount of it. And as a newcomer to national politics, though he had grassroots appeal, he needed to establish credibility by making inroads to major donors—most of whom, in California as elsewhere, had been locked down by the Clinton campaign.

Silicon Valley was a notable exception. The Internet was still in its infancy when Bill Clinton last ran for president, in 1996, and most of the immense fortunes had not yet come into being; the emerging tech class had not yet taken shape. So, unlike the magnates in California real estate (Walter Shorenstein), apparel (Esprit founder Susie Tompkins Buell), and entertainment (name your Hollywood celeb), who all had long-established loyalty to the Clintons, the tech community was up for grabs in 2007. In a colossal error of judgment, the Clinton campaign never made a serious approach, assuming that Obama would fade and that lack of money and cutting-edge technology couldn’t possibly factor into what was expected to be an easy race. Some of her staff tried to arrange “prospect meetings” in Silicon Valley, but they were overruled. “There was massive frustration about not being able to go out there and recruit people,” a Clinton consultant told me last year. As a result, the wealthiest region of the wealthiest state in the nation was left to Barack Obama.

Furthermore, in Silicon Valley’s unique reckoning, what everyone else considered to be Obama’s major shortcomings—his youth, his inexperience—here counted as prime assets.

[John Roos, Obama’s Northern California finance chair and the CEO of the Palo Alto law firm Wilson Sonsini Goodrich & Rosati]: “… we recognize what great companies have been built on, and that’s ideas, talent, and inspirational leadership.”

The true killer app on My.BarackObama.com is the suite of fund-raising tools. You can, of course, click on a button and make a donation, or you can sign up for the subscription model, as thousands already have, and donate a little every month. You can set up your own page, establish your target number, pound your friends into submission with e-mails to pony up, and watch your personal fund-raising “thermometer” rise. “The idea,” [Joe Rospars, a veteran of Dean’s campaign who had gone on to found an Internet fund-raising company and became Obama’s new-media director] says, “is to give them the tools and have them go out and do all this on their own.”
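
The mechanics described there are simple enough to sketch. The snippet below is purely illustrative Python, not the actual My.BarackObama.com code; the names (PersonalPage, target_usd, thermometer_pct) and the data model are assumptions made up for the example, which just shows how a personal page with a self-set target and a rising “thermometer” might be represented.

from dataclasses import dataclass, field

@dataclass
class Donation:
    donor_email: str
    amount_usd: float
    recurring: bool = False  # the "subscription model": a small amount every month

@dataclass
class PersonalPage:
    owner: str
    target_usd: float
    donations: list[Donation] = field(default_factory=list)

    def donate(self, donor_email: str, amount_usd: float, recurring: bool = False) -> None:
        self.donations.append(Donation(donor_email, amount_usd, recurring))

    def raised(self) -> float:
        return sum(d.amount_usd for d in self.donations)

    def thermometer_pct(self) -> float:
        # The "thermometer" is just progress toward the owner's self-set target.
        return min(100.0, 100.0 * self.raised() / self.target_usd)

# Example: a supporter sets a $1,000 target and leans on two friends to give.
page = PersonalPage(owner="supporter@example.org", target_usd=1000)
page.donate("friend1@example.org", 50)
page.donate("friend2@example.org", 25, recurring=True)
print(f"${page.raised():.0f} raised, thermometer at {page.thermometer_pct():.0f}%")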

“What’s amazing,” says Peter Leyden of the New Politics Institute, “is that Hillary built the best campaign that has ever been done in Democratic politics on the old model—she raised more money than anyone before her, she locked down all the party stalwarts, she assembled an all-star team of consultants, and she really mastered this top-down, command-and-control type of outfit. And yet, she’s getting beaten by this political start-up that is essentially a totally different model of the new politics.”

Before leaving Silicon Valley, I stopped by the local Obama headquarters. It was a Friday morning in early March, and the circus had passed through town more than a month earlier, after Obama lost the California primary by nine points. Yet his headquarters was not only open but jammed with volunteers. Soon after I arrived, everyone gathered around a speakerphone, and Obama himself, between votes on the Senate floor, gave a brief hortatory speech telling volunteers to call wavering Edwards delegates in Iowa before the county conventions that Saturday (they took place two months after the presidential caucuses). Afterward, people headed off to rows of computers, put on telephone headsets, and began punching up phone numbers on the Web site, ringing a desk bell after every successful call. The next day, Obama gained nine delegates, including a Clinton delegate.

The most striking thing about all this was that the headquarters is entirely self-sufficient—not a dime has come from the Obama campaign. Instead, everything from the computers to the telephones to the doughnuts and coffee—even the building’s rent and utilities—is user-generated, arranged and paid for by local volunteers. It is one of several such examples across the country, and no other campaign has put together anything that can match this level of self-sufficiency.

But while his rivals continued to depend on big givers, Obama gained more and more small donors, until they finally eclipsed the big ones altogether. In February, the Obama campaign reported that 94 percent of their donations came in increments of $200 or less, versus 26 percent for Clinton and 13 percent for McCain. Obama’s claim of 1,276,000 donors through March is so large that Clinton doesn’t bother to compete; she stopped regularly providing her own number last year.

“If the typical Gore event was 20 people in a living room writing six-figure checks,” Gorenberg told me, “and the Kerry event was 2,000 people in a hotel ballroom writing four-figure checks, this year for Obama we have stadium rallies of 20,000 people who pay absolutely nothing, and then go home and contribute a few dollars online.” Obama himself shrewdly capitalizes on both the turnout and the connectivity of his stadium crowds by routinely asking them to hold up their cell phones and punch in a five-digit number to text their contact information to the campaign—to win their commitment right there on the spot.
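
The text-to-donate moment at the end of that passage comes down to capturing whoever messages a short code. Here is a minimal, hypothetical Python sketch of that capture step; the keyword-plus-ZIP message format, the in-memory store, and the reply text are all assumptions for illustration, not details of the campaign’s actual system.

import re

# Hypothetical in-memory store of supporter contacts keyed by phone number.
supporters: dict[str, dict] = {}

def handle_inbound_sms(sender_phone: str, body: str) -> str:
    """Record a supporter who texted the (hypothetical) short code at a rally.

    The assumed format is a keyword plus an optional ZIP code, e.g. "HOPE 94303",
    so that follow-up contact can be routed to local organizers.
    """
    match = re.match(r"^\s*(?P<keyword>[A-Za-z]+)\s*(?P<zip>\d{5})?\s*$", body)
    if not match:
        return "Sorry, we didn't catch that. Text HOPE plus your ZIP code."
    supporters[sender_phone] = {
        "keyword": match.group("keyword").upper(),
        "zip": match.group("zip"),
    }
    return "Thanks for joining! We'll be in touch."

# Example: someone at a rally texts the short code from a cell phone.
print(handle_inbound_sms("+15550001234", "HOPE 94303"))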

How Obama raised money in Silicon Valley & using the Net Read More »

How movies are moved around on botnets

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Figure 2.11 illustrates the use of botnets for selling stolen intellectual property, in this case movies, TV shows, or video. The diagram is based on information from the Pyramid of Internet Piracy created by the Motion Picture Association of America (MPAA) and an actual case. To start the process, a supplier rips a movie or software from an existing DVD or uses a camcorder to record a first-run movie in the theaters. These are either burnt to DVDs to be sold on the black market or they are sold or provided to a Release Group. The Release Group is likely to be an organized crime group, excuse me, business associates who wish to invest in the entertainment industry. I am speculating that the Release Group engages (hires) a botnet operator that can meet their delivery and performance specifications. The botherder then commands the botnet clients to retrieve the media from the supplier and store it in a participating botnet client. These botnet clients may be qualified according to the system processor speed and the nature of the Internet connection. The huge Internet pipe, fast connection, and lax security at most universities make them a prime target for this form of botnet application. MPAA calls these clusters of high-speed locations “Topsites.”

. . .

According to the MPAA, 44 percent of all movie piracy is attributed to college students. Therefore, it makes sense that the Release Groups would try to use university botnet clients as Topsites. The next groups in the chain are called Facilitators. They operate Web sites and search engines and act as Internet directories. These may be Web sites for which you pay a monthly fee or a fee per download. Finally, individuals download the films for their own use or list them via peer-to-peer sharing applications like Gnutella or BitTorrent.
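
Read together, the two passages describe a five-tier distribution chain. Purely as a reading aid, that pyramid is restated below as a small, non-operational Python model; the role names follow the text, but the layout and one-line descriptions are editorial shorthand, not anything taken from Schiller’s book or the MPAA diagram.

from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    SUPPLIER = auto()       # rips a DVD or camcords a first-run movie
    RELEASE_GROUP = auto()  # group that finances and directs distribution
    TOPSITE = auto()        # high-bandwidth storage host, often a university machine
    FACILITATOR = auto()    # web site or search engine indexing the content
    DOWNLOADER = auto()     # end user, possibly re-sharing over P2P

@dataclass
class Tier:
    role: Role
    description: str

# The chain as described above, top to bottom.
PIRACY_PYRAMID = [
    Tier(Role.SUPPLIER, "creates the initial copy"),
    Tier(Role.RELEASE_GROUP, "pays a botherder to distribute it"),
    Tier(Role.TOPSITE, "stores the copy on fast, poorly secured hosts"),
    Tier(Role.FACILITATOR, "indexes it for a monthly or per-download fee"),
    Tier(Role.DOWNLOADER, "downloads it or re-lists it on Gnutella or BitTorrent"),
]

for tier in PIRACY_PYRAMID:
    print(f"{tier.role.name}: {tier.description}")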

How movies are moved around on botnets Read More »