David Pogue’s insights about tech over time

From David Pogue’s “The Lessons of 10 Years of Talking Tech” (The New York Times: 25 November 2010):

As tech decades go, this one has been a jaw-dropper. Since my first column in 2000, the tech world has not so much blossomed as exploded. Think of all the commonplace tech that didn’t even exist 10 years ago: HDTV, Blu-ray, GPS, Wi-Fi, Gmail, YouTube, iPod, iPhone, Kindle, Xbox, Wii, Facebook, Twitter, Android, online music stores, streaming movies and on and on.

With the turkey cooking, this seems like a good moment to review, to reminisce — and to distill some insight from the first decade in the new tech millennium.

Things don’t replace things; they just splinter. I can’t tell you how exhausting it is to keep hearing pundits say that some product is the “iPhone killer” or the “Kindle killer.” Listen, dudes: the history of consumer tech is branching, not replacing.

Sooner or later, everything goes on-demand. The last 10 years have brought a sweeping switch from tape and paper storage to digital downloads. Music, TV shows, movies, photos and now books and newspapers. We want instant access. We want it easy.

Some people’s gadgets determine their self-esteem. … Today’s gadgets are intensely personal. Your phone or camera or music player makes a statement, reflects your style and character. No wonder some people interpret criticisms of a product as a criticism of their choices. By extension, it’s a critique of them.

Everybody reads with a lens. … feelings run just as strongly in the tech realm. You can’t use the word “Apple,” “Microsoft” or “Google” in a sentence these days without stirring up emotion.

It’s not that hard to tell the winners from the losers. … There was the Microsoft Spot Watch (2003). This was a wireless wristwatch that could display your appointments and messages — but cost $10 a month, had to be recharged nightly and wouldn’t work outside your home city unless you filled out a Web form in advance.

Some concepts’ time may never come. The same “breakthrough” ideas keep surfacing — and bombing, year after year. For the love of Mike, people, nobody wants videophones!

Teenagers do not want “communicators” that do nothing but send text messages, either (AT&T Ogo, Sony Mylo, Motorola V200). People do not want to surf the Internet on their TV screens (WebTV, AOLTV, Google TV). And give it up on the stripped-down kitchen “Internet appliances” (3Com Audrey, Netpliance i-Opener, Virgin Webplayer). Nobody has ever bought one, and nobody ever will.

Forget about forever — nothing lasts a year. Of the thousands of products I’ve reviewed in 10 years, only a handful are still on the market. Oh, you can find some gadgets whose descendants are still around: iPod, BlackBerry, Internet Explorer and so on.

But it’s mind-frying to contemplate the millions of dollars and person-years that were spent on products and services that now fill the Great Tech Graveyard: Olympus M-Robe. PocketPC. Smart Display. MicroMV. MSN Explorer. Aibo. All those PlaysForSure music players, all those Palm organizers, all those GPS units you had to load up with maps from your computer.

Everybody knows that’s the way tech goes. The trick is to accept your gadget’s obsolescence at the time you buy it…

Nobody can keep up. Everywhere I go, I meet people who express the same reaction to consumer tech today: there’s too much stuff coming too fast. It’s impossible to keep up with trends, to know what to buy, to avoid feeling left behind. They’re right. There’s never been a period of greater technological change. You couldn’t keep up with all of it if you tried.


Ambient awareness & social media

From Clive Thompson’s “Brave New World of Digital Intimacy” (The New York Times Magazine: 5 September 2008):

In essence, Facebook users didn’t think they wanted constant, up-to-the-minute updates on what other people are doing. Yet when they experienced this sort of omnipresent knowledge, they found it intriguing and addictive. Why?

Social scientists have a name for this sort of incessant online contact. They call it “ambient awareness.” It is, they say, very much like being physically near someone and picking up on his mood through the little things he does — body language, sighs, stray comments — out of the corner of your eye.

Facebook is no longer alone in offering this sort of interaction online. In the last year, there has been a boom in tools for “microblogging”: posting frequent tiny updates on what you’re doing. The phenomenon is quite different from what we normally think of as blogging, because a blog post is usually a written piece, sometimes quite long: a statement of opinion, a story, an analysis. But these new updates are something different. They’re far shorter, far more frequent and less carefully considered.

One of the most popular new tools is Twitter, a Web site and messaging service that allows its two-million-plus users to broadcast to their friends haiku-length updates — limited to 140 characters, as brief as a mobile-phone text message — on what they’re doing. There are other services for reporting where you’re traveling (Dopplr) or for quickly tossing online a stream of the pictures, videos or Web sites you’re looking at (Tumblr). And there are even tools that give your location. When the new iPhone, with built-in tracking, was introduced in July, one million people began using Loopt, a piece of software that automatically tells all your friends exactly where you are.
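The 140-character constraint Thompson describes is simple enough to sketch in a few lines. This is an illustration only: the limit is from the article, but the truncation behavior and function names here are my own assumptions, not Twitter's actual logic.

```python
# Illustrative sketch of the 140-character status limit described above.
# The truncation behavior is a hypothetical convenience, not Twitter's API.

TWEET_LIMIT = 140

def fits_in_tweet(text: str) -> bool:
    """Return True if an update fits within the classic 140-character limit."""
    return len(text) <= TWEET_LIMIT

def truncate_for_tweet(text: str, ellipsis: str = "…") -> str:
    """Trim an overlong update so it fits, marking the cut with an ellipsis."""
    if fits_in_tweet(text):
        return text
    return text[: TWEET_LIMIT - len(ellipsis)] + ellipsis

print(fits_in_tweet("Eating a sandwich."))   # True
print(len(truncate_for_tweet("x" * 200)))    # 140
```

The brevity is the point: anything that survives the limit is exactly the kind of "supremely mundane" snippet the essay says adds up to ambient awareness.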

This is the paradox of ambient awareness. Each little update — each individual bit of social information — is insignificant on its own, even supremely mundane. But taken together, over time, the little snippets coalesce into a surprisingly sophisticated portrait of your friends’ and family members’ lives, like thousands of dots making a pointillist painting. This was never before possible, because in the real world, no friend would bother to call you up and detail the sandwiches she was eating. The ambient information becomes like “a type of E.S.P.,” as Haley described it to me, an invisible dimension floating over everyday life.

“It’s like I can distantly read everyone’s mind,” Haley went on to say. “I love that. I feel like I’m getting to something raw about my friends. It’s like I’ve got this heads-up display for them.” It can also lead to more real-life contact, because when one member of Haley’s group decides to go out to a bar or see a band and Twitters about his plans, the others see it, and some decide to drop by — ad hoc, self-organizing socializing. And when they do socialize face to face, it feels oddly as if they’ve never actually been apart. They don’t need to ask, “So, what have you been up to?” because they already know. Instead, they’ll begin discussing something that one of the friends Twittered that afternoon, as if picking up a conversation in the middle.

You could also regard the growing popularity of online awareness as a reaction to social isolation, the modern American disconnectedness that Robert Putnam explored in his book “Bowling Alone.” The mobile workforce requires people to travel more frequently for work, leaving friends and family behind, and members of the growing army of the self-employed often spend their days in solitude. Ambient intimacy becomes a way to “feel less alone,” as more than one Facebook and Twitter user told me.


The future of news as shown by the 2008 election

From Steven Berlin Johnson’s “Old Growth Media And The Future Of News” (StevenBerlinJohnson.com: 14 March 2009):

The first Presidential election that I followed in an obsessive way was the 1992 election that Clinton won. I was as compulsive a news junkie about that campaign as I was about the Mac in college: every day the Times would have a handful of stories about the campaign stops or debates or latest polls. Every night I would dutifully tune into Crossfire to hear what the punditocracy had to say about the day’s events. I read Newsweek and Time and the New Republic, and scoured the New Yorker for its occasional political pieces. When the debates aired, I’d watch religiously and stay up late soaking in the commentary from the assembled experts.

That was hardly a desert, to be sure. But compare it to the information channels that were available to me following the 2008 election. Everything I relied on in 1992 was still around of course – except for the late, lamented Crossfire – but it was now part of a vast new forest of news, data, opinion, satire – and perhaps most importantly, direct experience. Sites like Talking Points Memo and Politico did extensive direct reporting. Daily Kos provided in-depth surveys and field reports on state races that the Times would never have had the ink to cover. Individual bloggers like Andrew Sullivan responded to each twist in the news cycle; HuffPo culled the most provocative opinion pieces from the rest of the blogosphere. Nate Silver at fivethirtyeight.com did meta-analysis of polling that blew away anything William Schneider dreamed of doing on CNN in 1992. When the economy imploded in September, I followed economist bloggers like Brad DeLong to get their expert take on the candidates’ responses to the crisis. (Yochai Benkler talks about this phenomenon of academics engaging with the news cycle in a smart response here.) I watched the debates with a thousand virtual friends live-Twittering alongside me on the couch. All this was filtered and remixed through the extraordinary political satire of Jon Stewart and Stephen Colbert, which I watched via viral clips on the Web as much as I watched on TV.

What’s more: the ecosystem of political news also included information coming directly from the candidates. Think about the Philadelphia race speech, arguably one of the two or three most important events in the whole campaign. Eight million people watched it on YouTube alone. Now, what would have happened to that speech had it been delivered in 1992? Would any of the networks have aired it in its entirety? Certainly not. It would have been reduced to a minute-long soundbite on the evening news. CNN probably would have aired it live, which might have meant that 500,000 people caught it. Fox News and MSNBC? They didn’t exist yet. A few serious newspapers might have reprinted it in its entirety, which might have added another million to the audience. Online, perhaps someone would have uploaded a transcript to Compuserve or The Well, but that’s about the most we could have hoped for.

There is no question in my mind that the political news ecosystem of 2008 was far superior to that of 1992: I had more information about the state of the race, the tactics of both campaigns, the issues they were wrestling with, the mind of the electorate in different regions of the country. And I had more immediate access to the candidates themselves: their speeches and unscripted exchanges; their body language and position papers.

The old line on this new diversity was that it was fundamentally parasitic: bloggers were interesting, sure, but if the traditional news organizations went away, the bloggers would have nothing to write about, since most of what they did was link to professionally reported stories. Let me be clear: traditional news organizations were an important part of the 2008 ecosystem, no doubt about it. … But no reasonable observer of the political news ecosystem could describe all the new species as parasites on the traditional media. Imagine how many barrels of ink were purchased to print newspaper commentary on Obama’s San Francisco gaffe about people “clinging to their guns and religion.” But the original reporting on that quote didn’t come from the Times or the Journal; it came from a “citizen reporter” named Mayhill Fowler, part of the Off The Bus project sponsored by Jay Rosen’s Newassignment.net and The Huffington Post.


Interviewed for an article about mis-uses of Twitter

The Saint Louis Beacon published an article on 27 April 2009 titled “Tweets from the jury box aren’t amusing”, about legal “cases across the country where jurors have used cell phones, BlackBerrys and other devices to comment – sometimes minute by minute or second by second on Twitter, for instance – on what they are doing, hearing and seeing during jury duty.” In it, I was quoted as follows:

The small mobile devices so prevalent today can encourage a talkative habit that some people may find hard to break.

“People get so used to doing it all the time,” said Scott Granneman, an adjunct professor of communications and journalism at Washington University. “They don’t stop to think that being on a jury means they should stop. It’s an etiquette thing.”

Moreover, he added, the habit can become so ingrained in some people – even those on juries who are warned repeatedly they should not Tweet or text or talk about the case they are on – that they make excuses.

“It’s habitual,” Granneman said. “They say ‘It’s just my friends and family reading this.’ They don’t know the whole world is following this.

“Anybody can go to any Twitter page. There may be only eight people following you, but anybody can go to anyone’s page and anybody can reTweet – forward someone’s page: ‘Oh my God, the defense attorney is so stupid.’ That can go on and on and on.”


Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users get any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.
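The defining feature of O'Reilly's first category is the billing model itself: pay only for the hours and gigabytes you actually consume. A minimal sketch of that pay-as-you-go arithmetic, with hypothetical rates (the numbers are not Amazon's actual prices):

```python
# A minimal sketch of pay-as-you-go utility pricing as described above.
# The default rates are hypothetical, chosen only for illustration.

def utility_cost(instance_hours: float, gb_stored: float,
                 hourly_rate: float = 0.10, storage_rate: float = 0.15) -> float:
    """Bill only for what was used: compute hours plus GB-months of storage.

    There is no fixed fee; cost scales linearly with consumption, which is
    what distinguishes utility computing from owning (or leasing) servers.
    """
    return instance_hours * hourly_rate + gb_stored * storage_rate

# A small site running one instance for a 720-hour month, storing 50 GB:
print(round(utility_cost(720, 50), 2))
```

The absence of any fixed term in that formula is why, as O'Reilly notes, the advantage at this layer is being the low-cost provider rather than any network effect.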

2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.
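One common formalization of the network effects O'Reilly points to is Metcalfe's law, which he does not name here, so treat this as my own illustrative assumption: a network's potential value grows with the number of possible user-to-user connections, n(n−1)/2, while costs tend to grow only linearly in n.

```python
# Illustrative sketch (an assumption, not from the original text):
# Metcalfe's law counts the distinct user-to-user links in a network,
# which grows quadratically while the user count grows linearly.

def possible_connections(n_users: int) -> int:
    """Number of distinct pairwise links among n users: n * (n - 1) / 2."""
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1000):
    print(n, possible_connections(n))
```

A tenfold increase in users yields roughly a hundredfold increase in possible connections, which is why each user's data becomes more valuable "in the same space as other users' data," as the excerpt above argues for craigslist, eBay, and Amazon listings.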
