Nicholas Carr’s cloud koan
From Nicholas Carr’s “Cloud koan” (Rough Type: 1 October 2009):
Not everything will move into the cloud, but the cloud will move into everything.
From Darryl K. Taft’s “Predictions for the Cloud in 2009” (eWeek: 29 December 2008):
[Peter] Coffee, who is now director of platform research at Salesforce.com, said, “I’m currently using a simple reference model for what a ‘cloud computing’ initiative should try to provide. I’m borrowing from the famous Zero-One-Infinity rule, canonically defined in The Jargon File…”
He continued, “It seems to me that a serious effort at delivering cloud benefits pursues the following ideals—perhaps never quite reaching them, but clearly having them as goals within theoretical possibility:
- Zero—On-premise[s] infrastructure, acquisition cost, adoption cost and support cost.
- One—Coherent software environment—not a ‘stack’ of multiple products from different providers. This avoids the chaos of uncoordinated release cycles or deferred upgrades.
- Infinity—Scalability in response to changing need, integratability/interoperability with legacy assets and other services, and customizability/programmability from data, through logic, up into the user interface without compromising robust multitenancy.”
A definition of cloud computing
From Nicholas Carr’s “Further musings on the network effect and the cloud” (Rough Type: 27 October 2008):
I think O’Reilly did a nice job of identifying the different layers of the cloud computing business – infrastructure, development platform, applications – and I think he’s right that they’ll have different economic and competitive characteristics. One thing we don’t know yet, though, is whether those layers will in the long run exist as separate industry sectors or whether they’ll collapse into a single supply model. In other words, will the infrastructure suppliers also come to dominate the supply of apps? Google and Microsoft are obviously trying to play across all three layers, while Amazon so far seems content to focus on the infrastructure business and Salesforce is expanding from the apps layer to the development platform layer. The degree to which the layers remain, or don’t remain, discrete business sectors will play a huge role in determining the ultimate shape, economics, and degree of consolidation in cloud computing.
Let me end on a speculative note: There’s one layer in the cloud that O’Reilly failed to mention, and that layer is actually on top of the application layer. It’s what I’ll call the device layer – encompassing all the various appliances people will use to tap the cloud – and it may ultimately come to be the most interesting layer. A hundred years ago, when Tesla, Westinghouse, Insull, and others were building the cloud of that time – the electric grid – companies viewed the effort in terms of the inputs to their business: in particular, the power they needed to run the machines that produced the goods they sold. But the real revolutionary aspect of the electric grid was not the way it changed business inputs – though that was indeed dramatic – but the way it changed business outputs. After the grid was built, we saw an avalanche of new products outfitted with electric cords, many of which were inconceivable before the grid’s arrival. The real fortunes were made by those companies that thought most creatively about the devices that consumers would plug into the grid. Today, we’re already seeing hints of the device layer – of the cloud as output rather than input. Look at the way, for instance, that the little old iPod has shaped the digital music cloud.
Many layers of cloud computing, or just one?
From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):
The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.
The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.
…
How Google Is Different from MSN and Yahoo
Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.
Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.
1. Google data centers now number about two dozen, although no one outside Google knows the exact number or their locations. They come online and, under the direction of the Google File System, automatically start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.
2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.
3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.
4. Google’s racks are custom-assembled to hold servers on both their front and back sides. This effectively allows a standard rack, which normally holds 40 pizza box servers, to hold 80.
5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.
6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.
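To make the plug-and-play idea in item 6 concrete, here is a minimal Python sketch of how a newly racked machine might announce itself to a cluster coordinator and then heartbeat so the rest of the system knows the capacity exists. The coordinator URL, message format, and intervals are assumptions for illustration only; the book does not describe Google’s actual registration protocol.

```python
# Illustrative sketch only: a worker node announcing itself to a cluster
# coordinator so new capacity is discovered without human intervention.
# The coordinator URL and message format are assumptions for this example.
import json
import socket
import time
import urllib.request

COORDINATOR = "http://coordinator.example.internal:8000/register"  # hypothetical
HEARTBEAT_SECONDS = 30

def announce(role: str = "worker") -> None:
    """Register this machine with the coordinator, then heartbeat forever."""
    payload = {
        "host": socket.gethostname(),
        "role": role,
        "cpus": 2,                    # commodity two-processor box, per the text
        "registered_at": time.time(),
    }
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        COORDINATOR, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)       # initial registration
    while True:
        time.sleep(HEARTBEAT_SECONDS)
        urllib.request.urlopen(req)   # re-send as a liveness heartbeat

if __name__ == "__main__":
    announce()
```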
Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company, nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser-supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple software products).
Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo, hardware and software are more loosely coupled. Two examples will illustrate these differences.
Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.
…
Several observations are warranted:
1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.
2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQL Server can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.
3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.
…
Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.
Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.
There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.
…
The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.
The Technology Precepts
… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.
Cheap Hardware and Smart Software
Google approaches the problem of reducing the costs of hardware, setup, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.
In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.
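As a rough illustration of that “smart software” idea, the Python sketch below shows the basic pattern: notice which machines have stopped heartbeating and move their tasks onto machines that are still alive, with no human in the loop. The names and the heartbeat threshold are invented for this example; nothing here is Google’s implementation.

```python
# Illustrative sketch only: reschedule work away from failed machines so no
# one is paged at 3 a.m. Names and thresholds are assumptions for the example.
import time
from typing import Dict, List

HEARTBEAT_TIMEOUT = 60  # seconds without a heartbeat before a node is presumed dead

def reschedule_failed_work(
    last_heartbeat: Dict[str, float],    # node -> timestamp of last heartbeat
    assignments: Dict[str, List[str]],   # node -> task ids currently assigned
) -> Dict[str, List[str]]:
    """Move tasks off nodes that have stopped heartbeating onto live nodes."""
    now = time.time()
    live = [n for n, ts in last_heartbeat.items() if now - ts < HEARTBEAT_TIMEOUT]
    dead = [n for n in assignments if n not in live]
    if not live:
        raise RuntimeError("no live nodes available")
    for node in dead:
        orphaned = assignments.pop(node, [])
        for i, task in enumerate(orphaned):
            # round-robin the orphaned tasks across the surviving nodes
            assignments.setdefault(live[i % len(live)], []).append(task)
    return assignments
```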
The focus on low-cost, commodity hardware and smart software is part of the Google culture.
…
Logical Architecture
Google’s technical papers do not explicitly describe the architecture of the Googleplex as self-similar, but they provide tantalizing glimpses of an approach to online systems in which a single server shares the features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.
…
The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform the side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle the necessary housekeeping chores for usage tracking and billing.
…
When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.
…
In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
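The failover behavior described here can be pictured with a short sketch. This is a generic read-with-replica-fallback pattern in Python, not the Google File System’s code; the lookup table merely stands in for the “log” of replica locations the passage mentions.

```python
# Illustrative sketch only: read a file stored as several replicas, falling
# back to the next known copy when one is unreachable.
from typing import Callable, Dict, List

def read_with_failover(
    filename: str,
    replica_locations: Dict[str, List[str]],   # filename -> servers holding a copy
    fetch: Callable[[str, str], bytes],        # fetch(server, filename) -> bytes
) -> bytes:
    """Try each replica in turn; fail only if every copy is unavailable."""
    errors = []
    for server in replica_locations.get(filename, []):
        try:
            return fetch(server, filename)
        except OSError as exc:                 # replica down or unreachable
            errors.append((server, exc))
            continue
    raise IOError(f"all replicas of {filename!r} failed: {errors}")
```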
…
Speed and Then More Speed
…
Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.
Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.
…
… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.
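A back-of-envelope calculation shows why a figure like 2,000 megabytes a second is plausible for a cluster built from ordinary disks, provided reads are spread across many drives in parallel. Both numbers below are assumptions chosen for illustration, not figures from Arnold’s book.

```python
# Back-of-envelope arithmetic only; both inputs are assumptions.
per_drive_mb_s = 35           # assumed sustained read rate of one commodity drive
drives_read_in_parallel = 60  # assumed number of drives serving a cluster read

aggregate_mb_s = per_drive_mb_s * drives_read_in_parallel
print(f"aggregate read rate ~ {aggregate_mb_s} MB/s")   # ~ 2,100 MB/s
```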
…
Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed shortcuts to programming. An example is Google’s creation of a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
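To convey the flavor of such a “canned routine,” without claiming anything about Google’s actual library, here is a small Python sketch in the same spirit: the programmer writes only the per-chunk function, calls one helper, and the helper spreads the work across the machine’s processors and merges the results.

```python
# Illustrative sketch only: a "canned" helper that hides parallelism from the
# programmer. This is ordinary Python multiprocessing, not Google's library.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def count_words(chunk: str) -> Counter:
    """The per-chunk work the programmer writes; parallelism is someone else's job."""
    return Counter(chunk.split())

def parallel_word_count(chunks: list[str]) -> Counter:
    """The canned routine: fan chunks out across processors, merge the results."""
    with Pool() as pool:
        partial_counts = pool.map(count_words, chunks)
    return reduce(lambda a, b: a + b, partial_counts, Counter())

if __name__ == "__main__":
    print(parallel_word_count(["cheap hardware smart software", "smart software wins"]))
```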
…
Eliminate or Reduce Certain System Expenses
Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.
…
Drawbacks of the Googleplex
…
The Laws of Physics: Heat and Power 101
…
In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.
The most failure-prone components are:
- Fans.
- IDE drives, which fail at the rate of one per 1,000 drives per day (a rough sense of what that means at data-center scale follows this list).
- Power supplies, which fail at a lower rate.
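Taken at face value, that drive-failure rate implies a steady trickle of dead disks in any large facility. The arithmetic below uses the 10,000-servers-per-center figure quoted earlier; the one-drive-per-server figure is an assumption, not something the book states.

```python
# Back-of-envelope arithmetic only; drives-per-server is an assumption.
servers_per_center = 10_000    # "10,000 or more Google computers", per the text
drives_per_server = 1          # assumed: one commodity IDE drive per pizza box
failure_rate_per_drive_day = 1 / 1000

drives = servers_per_center * drives_per_server
print(f"expected drive failures per day ~ {drives * failure_rate_per_drive_day:.0f}")
# -> about 10 failed drives every day in a single large data center
```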
Leveraging the Googleplex
…
Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:
1. Google is fast anywhere in the world.
2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.
3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.
4. Google’s operating and scaling costs are lower than those of most other firms offering similar services.
5. Google squeezes more work out of programmers and engineers by design.
6. Google does not break down, or at least it has not gone offline since 2000.
7. Google’s Googleplex can deliver desktop-server applications now.
8. Google’s applications install and update without burdening the user with gory details and messy crashes.
9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.
An analysis of Google’s technology, 2005
From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):
Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:
1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.
This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users get any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.
That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.
So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.
In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.
2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.
The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.
3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.
This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.
Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.
This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.
Tim O’Reilly defines cloud computing
From Spencer Reiss’ “Cloud Computing. Available at Amazon.com Today” (Wired: 21 April 2008):
Almost a third of [Amazon]’s total number of sales last year were made by people selling their stuff through the Amazon machine. The company calls them seller-customers, and there are 1.3 million of them.
…
Log in to Amazon’s gateway today and more than 100 separate services leap into action, crunching data, comparing alternatives, and constructing a totally customized page (all in about 200 milliseconds).
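The pattern behind that sentence, fanning one page request out to many backend services and assembling whatever returns within a tight latency budget, can be sketched briefly. The Python below is a generic illustration; the service names and the 200 ms budget echo the quote, but nothing here describes Amazon’s actual implementation.

```python
# Illustrative sketch only: call many services in parallel and build the page
# from whatever answers arrive within the latency budget.
from concurrent.futures import ThreadPoolExecutor, TimeoutError, as_completed

PAGE_BUDGET_SECONDS = 0.200   # "about 200 milliseconds", per the quote

def call_service(name: str) -> tuple[str, str]:
    # Stand-in for a real RPC; each service returns a fragment of the page.
    return name, f"<div>{name} fragment</div>"

def build_page(services: list[str]) -> str:
    fragments = {}
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = {pool.submit(call_service, s): s for s in services}
        try:
            for future in as_completed(futures, timeout=PAGE_BUDGET_SECONDS):
                name, html = future.result()
                fragments[name] = html
        except TimeoutError:
            pass  # services that missed the budget are left out of the page
    return "\n".join(fragments[s] for s in services if s in fragments)

if __name__ == "__main__":
    print(build_page(["recommendations", "pricing", "reviews", "similar-items"]))
```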
…
Developers get a cheap, instant, essentially limitless computing cloud.
Amazon’s infrastructure and the cloud