October 2008

Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter SmugMug nor any of its users get any benefit from the fact that thousands of other application developers now have their work hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher-level APIs for AWS are being developed by third parties. In the offerings of companies like Heroku, RightScale, and Engine Yard (not based on AWS, but on its own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

2. Platform as a Service. One step up from pure utility computing are platforms like Google App Engine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers there are, the better it is for both Salesforce and other application developers. I don’t see that with App Engine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, Twitter, Flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of Gmail or Google Docs and Spreadsheets as “cloud applications” in a way that they don’t think of Google Search or Google Maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say, Yahoo! Search or Microsoft Live Maps). And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on Craigslist or eBay is more powerful than a listing on an individual blog, why a listing on Amazon is more powerful than a listing on Joe’s bookstore, and why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.

6 reasons why “content” has been devalued

From Jonathan Handel’s “Is Content Worthless?” (The Huffington Post: 11 April 2008):

Everyone focuses on piracy, but there are actually six related reasons for the devaluation of content. The first is supply and demand. Demand — the number of consumers and their available leisure time — is relatively constant, but supply — online content — has grown enormously in the last decade. Some of this is professional content set free from the boundaries of time and space, now available worldwide, anytime, and usually at no cost (whether legally or not). Even more is user-generated content (UGC) — websites, blogs, YouTube videos — created by non-professionals who don’t care whether they get paid, and who themselves pay little or nothing to create and distribute it.

The second is the loss of physical form. It just seems natural to value a physical thing more highly than something intangible. Physical objects have been with us since the beginning of time; distributable intangible content has not. Perhaps for that reason, we tend to focus on per-unit costs (zero for an intangible such as a movie download), while forgetting about fixed costs (such as the cost of making the movie in the first place). Also, and critically, if you steal something tangible, you deny it to the owner; a purloined DVD is no longer available to the merchant, for instance. But if you misappropriate an intangible, it’s still there for others to use. …

The third reason is that acquiring content is increasingly frictionless. It’s often easier, particularly for young people, to access content on the Internet than through traditional means. …

Fourth is that most new media business models are ad-supported rather than pay per view or subscription. If there’s no cost to the user, why should consumers see the content as valuable, and if some content is free, why not all of it? …

Fifth is market forces in the technology industry. Computers, web services, and consumer electronic devices are more valuable when more content is available. In turn, these products make content more usable by providing new distribution channels. Traditional media companies are slow to adopt these new technologies, for fear of cannibalizing revenue from existing channels and offending powerful distribution partners. In contrast, non-professionals, long denied access to distribution, rush to use the new technologies, as do pirates of professional content. As a result, technological innovation reduces the market share of paid professional content.

Finally, there’s culture. A generation of users has grown up indifferent or hostile to copyright, particularly in music, movies and software.

Thinking like an engineer; thinking like a security pro

From Bruce Schneier’s “Inside the Twisted Mind of the Security Professional” (Wired: 20 March 2008):

This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.

His employer’s misconfigured laptop gets him charged with a crime

From Robert McMillan’s “A misconfigured laptop, a wrecked life” (NetworkWorld: 18 June 2008):

When the Commonwealth of Massachusetts issued Michael Fiola a Dell Latitude in November 2006, it set off a chain of events that would cost him his job, his friends and about a year of his life, as he fought criminal charges that he had downloaded child pornography onto the laptop. Last week, prosecutors dropped their year-old case after a state investigation of his computer determined there was insufficient evidence to prove he had downloaded the files.

An initial state investigation had come to the opposite conclusion, and authorities took a second look at Fiola’s case only after he hired a forensic investigator to look at his laptop. What she found was scary, given the gravity of the charges against him: The Microsoft SMS (Systems Management Server) software used to keep his laptop up to date was not functional. Neither was its antivirus protection. And the laptop was crawling with malicious programs that were most likely responsible for the files on his PC.

Fiola had been an investigator with the state’s Department of Industrial Accidents, examining businesses to see whether they had worker’s compensation plans. Over the past two days, however, he’s become a spokesman for people who have had their lives ruined by malicious software.

[Fiola narrates his story:] We had a laptop basically to do our reports instantaneously. If I went to a business and found that they were out of compliance, I would log on and type in a report so it could get back to the home office in Boston immediately. We also used it to research businesses. …

My boss called me into his office at 9 a.m. The director of the Department of Industrial Accidents, my immediate supervisor, and the personnel director were there. They handed me a letter and said, “You are being fired for a violation of the computer usage policy. You have pornography on your computer. You’re fired. Clean out your desk. Let’s go.” …

It was horrible. No paycheck. I lost all my benefits. I lost my insurance. My wife is very, very understanding. She took the bull by the horns and found an attorney. I was just paralyzed, I couldn’t do anything. I can’t describe the feeling to you. I wouldn’t wish this on my worst enemy. It’s just devastating.

If you get in a car accident and you kill somebody, people talk to you afterwards. All our friends abandoned us. The only family that stood by us was my dad, her parents, my stepdaughter and one other good friend of ours. And that was it. Nobody called. We spent many weekends at home just crying. I’m 53 years old and I don’t think I’ve cried as much in my whole life as I did in the past 18 months. …

Bush’s Manicheanism destroyed him

From Glenn Greenwald’s “A tragic legacy: How a good vs. evil mentality destroyed the Bush presidency” (Salon: 20 June 2007):

One of the principal dangers of vesting power in a leader who is convinced of his own righteousness — who believes that, by virtue of his ascension to political power, he has been called to a crusade against Evil — is that the moral imperative driving the mission will justify any and all means used to achieve it. Those who have become convinced that they are waging an epic and all-consuming existential war against Evil cannot, by the very premises of their belief system, accept any limitations — moral, pragmatic, or otherwise — on the methods adopted to triumph in this battle.

Efforts to impose limits on waging war against Evil will themselves be seen as impediments to Good, if not as an attempt to aid and abet Evil. In a Manichean worldview, there is no imperative that can compete with the mission of defeating Evil. The primacy of that mandate is unchallengeable. Hence, there are no valid reasons for declaring off-limits any weapons that can be deployed in service of the war against Evil.

Equally operative in the Manichean worldview is the principle that those who are warriors for a universal Good cannot recognize that the particular means they employ in service of their mission may be immoral or even misguided. The very fact that the instruments they embrace are employed in service of their Manichean mission renders any such objections incoherent. How can an act undertaken in order to strengthen the side of Good, and to weaken the forces of Evil, ever be anything other than Good in itself? Thus, any act undertaken by a warrior of Good in service of the war against Evil is inherently moral for that reason alone.

It is from these premises that the most amoral or even most reprehensible outcomes can be — and often are — produced by political movements and political leaders grounded in universal moral certainties. Intoxicated by his own righteousness and therefore immune from doubt, the Manichean warrior becomes capable of acts of moral monstrousness that would be unthinkable in the absence of such unquestionable moral conviction. One who believes himself to be leading a supreme war against Evil on behalf of Good will be incapable of understanding any claims that he himself is acting immorally.

That is the essence of virtually every argument Bush supporters make regarding terrorism. No matter what objection is raised to the never-ending expansions of executive power, no matter what competing values are touted (due process, the rule of law, the principles our country embodies, how we are perceived around the world), the response will always be that The Terrorists are waging war against us and our overarching priority — one that overrides all others — is to protect ourselves, to triumph over Evil. By definition, then, there can never be any good reason to oppose vesting powers in the government to protect us from The Terrorists because that goal outweighs all others.

But our entire system of government, from its inception, has been based upon a very different calculus — that is, that many things matter besides merely protecting ourselves against threats, and consequently, we are willing to accept risks, even potentially fatal ones, in order to secure those other values. From its founding, America has rejected the worldview of prioritizing physical safety above all else, as such a mentality leads to an impoverished and empty civic life. The premise of America is and always has been that imposing limitations on government power is necessary to secure liberty and avoid tyranny even if it means accepting an increased risk of death as a result. That is the foundational American value.

It is this courageous demand for core liberties, even if such liberties provide less than maximum protection from physical risks, that has made America bold, brave, and free. Societies driven exclusively or primarily by a fear of Evil, by minimizing risks, and by seeking above all else that our government “protects” us are not free. That is a path that inevitably leads to authoritarianism — an increasingly strong and empowered leader in whom the citizens vest ever-increasing faith and power in exchange for promises of safety. That is most assuredly not the historical ethos of the United States.

The Bill of Rights contains numerous limitations on government power, and many of them render us more vulnerable to threats. If there is a serial killer on the loose in a community, the police would be able to find and apprehend him much more easily if they could simply invade and search everyone’s homes at will and without warning. Nonetheless, the Fourth Amendment expressly prohibits the police from undertaking such searches. It requires both probable cause and a judicial warrant before police may do so, even though such limitations on state power will enable dangerous killers to elude capture.

The scare tactic of telling Americans that every desired expansion of government power is justified by the Evil Terrorist Threat — and that there is no need to worry because the president is Good and will use these powers only to protect us — is effective because it has immediate rhetorical appeal. Most people, especially when placed in fear of potentially fatal threats, are receptive to the argument that maximizing protection is the only thing that matters, and that no abstract concept (such as liberty, or freedom, or due process, or adhering to civilized norms) is worth risking one’s life by accepting heightened levels of vulnerability.

But nothing in life is perfectly safe. Perfect safety is an illusion. When pursued by an individual to the exclusion of all else, it creates a tragically worthless, paralyzed way of life. On the political level, safety as the paramount goal produces tyranny, causing people to vest as much power as possible in the government, without limits, in exchange for the promise of maximum protection.

How technologies have changed politics, & how Obama uses tech

From Marc Ambinder’s “HisSpace” (The Atlantic: June 2008):

Improvements to the printing press helped Andrew Jackson form and organize the Democratic Party, and he courted newspaper editors and publishers, some of whom became members of his Cabinet, with a zeal then unknown among political leaders. But the postal service, which was coming into its own as he reached for the presidency, was perhaps even more important to his election and public image. Jackson’s exploits in the War of 1812 became well known thanks in large measure to the distribution network that the postal service had created, and his 1828 campaign—among the first to distribute biographical pamphlets by mail—reinforced his heroic image. As president, he turned the office of postmaster into a patronage position, expanded the postal network further—the historian Richard John has pointed out that by the middle of Jackson’s first term, there were 2,000 more postal workers in America than soldiers in the Army—and used it to keep his populist base rallied behind him.

Abraham Lincoln became a national celebrity, according to the historian Allen Guelzo’s new book, Lincoln and Douglas: The Debates That Defined America, when transcripts of those debates were reprinted nationwide in newspapers, which were just then reaching critical mass in distribution beyond the few Eastern cities where they had previously flourished. Newspapers enabled Lincoln, an odd-looking man with a reed-thin voice, to become a viable national candidate …

Franklin Delano Roosevelt used radio to make his case for a dramatic redefinition of government itself, quickly mastering the informal tone best suited to the medium. In his fireside chats, Roosevelt reached directly into American living rooms at pivotal moments of his presidency. His talks—which by turns soothed, educated, and pressed for change—held the New Deal together.

And of course John F. Kennedy famously rode into the White House thanks in part to the first televised presidential debate in U.S. history, in which his keen sense of the medium’s visual impact, plus a little makeup, enabled him to fashion the look of a winner (especially when compared with a pale and haggard Richard Nixon). Kennedy used TV primarily to create and maintain his public image, not as a governing tool, but he understood its strengths and limitations before his peers did …

[Obama’s] speeches play well on YouTube, which allows for more than the five-second sound bites that have characterized the television era. And he recognizes the importance of transparency and consistency at a time when access to everything a politician has ever said is at the fingertips of every voter. But as Joshua Green notes in the preceding pages, Obama has truly set himself apart by his campaign’s use of the Internet to organize support. No other candidate in this or any other election has ever built a support network like Obama’s. The campaign’s 8,000 Web-based affinity groups, 750,000 active volunteers, and 1,276,000 donors have provided him with an enormous financial and organizational advantage in the Democratic primary.

What Obama seems to promise is, at its outer limits, a participatory democracy in which the opportunities for participation have been radically expanded. He proposes creating a public, Google-like database of every federal dollar spent. He aims to post every piece of non-emergency legislation online for five days before he signs it so that Americans can comment. A White House blog—also with comments—would be a near certainty. Overseeing this new apparatus would be a chief technology officer.

There is some precedent for Obama’s vision. The British government has already used the Web to try to increase interaction with its citizenry, to limited effect. In November 2006, it established a Web site for citizens seeking redress from their government, http://petitions.pm.gov.uk/. More than 29,000 petitions have since been submitted, and about 9.5 percent of Britons have signed at least one of them. The petitions range from the class-conscious (“Order a independent report to identify reasons that the living conditions of working class people are poor in relation to higher classes”) to the parochial (“We the undersigned petition the Prime Minister to re-open sunderland ice rink”).

Correcting wrong info reinforces false beliefs

From Jonathan M. Gitlin’s “Does ideology trump facts? Studies say it often does” (Ars Technica: 24 September 2008):

We like to think that people will be well informed before making important decisions, such as who to vote for, but the truth is that’s not always the case. Being uninformed is one thing, but having a population that’s actively misinformed presents problems when it comes to participating in the national debate, or the democratic process. If the findings of some political scientists are right, attempting to correct misinformation might do nothing more than reinforce the false belief.

This sort of misinformation isn’t hypothetical; in 2003 a study found that viewers of Fox News were significantly more misinformed about the Iraq war, with far greater percentages of viewers erroneously believing that Iraq possessed WMDs or that there was a credible link between the 9/11 attack and Saddam Hussein than those who got their news from other outlets like NPR and PBS. This has led to the rise of websites like FactCheck and SourceWatch.

Saying that correcting misinformation does little more than reinforce a false belief is a pretty controversial proposal, but the claim is based on a number of studies that examine the effect of political or ideological bias on fact correction. In the studies, volunteers were shown news items or political adverts that contained misinformation, followed by a correction. For example, a study by John Bullock of Yale showed volunteers a political ad created by NARAL that linked Justice John Roberts to a violent anti-abortion group, followed by news that the ad had been withdrawn. Interestingly, Democratic participants had a worse opinion of Roberts after being shown the ad, even after they were told it was false.

Over half (56 percent) of Democratic subjects disapproved of Roberts before seeing the misinformation. That rose to 80 percent after the ad, and even after the misinformation was corrected, 72 percent of Democratic subjects still had a negative opinion. Republican volunteers, on the other hand, showed only a small increase in disapproval after watching the misinformation (11 percent before vs. 14 percent after).

An elderly Eskimo & his unusual knife

From Wade Davis’ “Wade Davis: an Inuit elder and his shit knife” (Boing Boing: 26 September 2008):

The Inuit didn’t fear the cold; they took advantage of it. During the 1950s the Canadian government forced the Inuit into settlements. A family from Arctic Bay told me this fantastic story of their grandfather who refused to go. The family, fearful for his life, took away all of his tools and all of his implements, thinking that would force him into the settlement. But instead, he just slipped out of an igloo on a cold Arctic night, pulled down his caribou and sealskin trousers, and defecated into his hand. As the feces began to freeze, he shaped it into the form of an implement. And when the blade started to take shape, he put a spray of saliva along the leading edge to sharpen it. That’s when what they call the “shit knife” took form. He used it to butcher a dog. Skinned the dog with it. Improvised a sled with the dog’s rib cage, and then, using the skin, he harnessed up an adjacent living dog. He put the shit knife in his belt and disappeared into the night.

How to run a command repeatedly

You can use the watch command, but unfortunately it isn’t available for Mac OS X, at least not from Apple. Sveinbjorn Thordarson (great name!) has a version of watch that you can download and compile on your OS X box. It’s available at http://www.sveinbjorn.org/watch_macosx.
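
Once compiled, invocation should look much like the standard (procps) watch found on Linux; the line below uses that version’s syntax, and the port’s options may differ, so check its documentation:

watch -n 1 foo

The -n flag sets the refresh interval in seconds; without it, watch reruns the command every 2 seconds by default.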

Or, you can use this shell script:

while true; do foo; sleep 1; done

This will run foo, wait one second, and repeat, until you press Ctrl-C to cancel the loop.
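
If you want something a little closer to watch (clearing the screen each time and taking the interval as an argument), here is a minimal sketch in plain POSIX shell. The script name repeat.sh and its interface are my own invention, just for illustration:

#!/bin/sh
# repeat.sh: run a command over and over, watch-style.
# Usage: ./repeat.sh <seconds> <command> [args...]
interval="$1"
shift
while true; do
    clear              # redraw from the top of the screen, as watch does
    date               # show when the command last ran
    "$@"               # run the supplied command with its arguments
    sleep "$interval"
done

For example, ./repeat.sh 5 ls -l redraws a directory listing every five seconds.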
