David Pogue’s insights about tech over time

From David Pogue’s “The Lessons of 10 Years of Talking Tech” (The New York Times: 25 November 2010):

As tech decades go, this one has been a jaw-dropper. Since my first column in 2000, the tech world has not so much blossomed as exploded. Think of all the commonplace tech that didn’t even exist 10 years ago: HDTV, Blu-ray, GPS, Wi-Fi, Gmail, YouTube, iPod, iPhone, Kindle, Xbox, Wii, Facebook, Twitter, Android, online music stores, streaming movies and on and on.

With the turkey cooking, this seems like a good moment to review, to reminisce — and to distill some insight from the first decade in the new tech millennium.

Things don’t replace things; they just splinter. I can’t tell you how exhausting it is to keep hearing pundits say that some product is the “iPhone killer” or the “Kindle killer.” Listen, dudes: the history of consumer tech is branching, not replacing.

Things don’t replace things; they just add on. Sooner or later, everything goes on-demand. The last 10 years have brought a sweeping switch from tape and paper storage to digital downloads. Music, TV shows, movies, photos and now books and newspapers. We want instant access. We want it easy.

Some people’s gadgets determine their self-esteem. … Today’s gadgets are intensely personal. Your phone or camera or music player makes a statement, reflects your style and character. No wonder some people interpret criticisms of a product as a criticism of their choices. By extension, it’s a critique of them.

Everybody reads with a lens. … feelings run just as strongly in the tech realm. You can’t use the word “Apple,” “Microsoft” or “Google” in a sentence these days without stirring up emotion.

It’s not that hard to tell the winners from the losers. … There was the Microsoft Spot Watch (2003). This was a wireless wristwatch that could display your appointments and messages — but cost $10 a month, had to be recharged nightly and wouldn’t work outside your home city unless you filled out a Web form in advance.

Some concepts’ time may never come. The same “breakthrough” ideas keep surfacing — and bombing, year after year. For the love of Mike, people, nobody wants videophones!

Teenagers do not want “communicators” that do nothing but send text messages, either (AT&T Ogo, Sony Mylo, Motorola V200). People do not want to surf the Internet on their TV screens (WebTV, AOLTV, Google TV). And give it up on the stripped-down kitchen “Internet appliances” (3Com Audrey, Netpliance i-Opener, Virgin Webplayer). Nobody has ever bought one, and nobody ever will.

Forget about forever — nothing lasts a year. Of the thousands of products I’ve reviewed in 10 years, only a handful are still on the market. Oh, you can find some gadgets whose descendants are still around: iPod, BlackBerry, Internet Explorer and so on.

But it’s mind-frying to contemplate the millions of dollars and person-years that were spent on products and services that now fill the Great Tech Graveyard: Olympus M-Robe. PocketPC. Smart Display. MicroMV. MSN Explorer. Aibo. All those PlaysForSure music players, all those Palm organizers, all those GPS units you had to load up with maps from your computer.

Everybody knows that’s the way tech goes. The trick is to accept your gadget’s obsolescence at the time you buy it…

Nobody can keep up. Everywhere I go, I meet people who express the same reaction to consumer tech today: there’s too much stuff coming too fast. It’s impossible to keep up with trends, to know what to buy, to avoid feeling left behind. They’re right. There’s never been a period of greater technological change. You couldn’t keep up with all of it if you tried.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.
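The domain-generation trick above can be sketched in a few lines. This is not Conficker’s actual algorithm — the hash construction, the seeding from the date, and the letter-mapping are illustrative assumptions — but it shows why both sides could compute the same daily list in advance:

```python
# A toy domain-generation algorithm (DGA) sketch. NOT Conficker's real
# code: the MD5-based derivation and date seeding are assumptions made
# for illustration only.
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day: date, count: int = 250) -> list[str]:
    """Derive `count` pseudo-random domain names from the date.

    The worm and its authors run the same deterministic code, so both
    know today's 250 candidate URLs in advance -- but defenders only
    learn them by reverse-engineering the generator."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map hash characters to a short, meaningless string of letters.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:8])
        domains.append(name + TLDS[i % len(TLDS)])
    return domains

print(daily_domains(date(2009, 1, 15))[:3])
```

The authors need to register only one of each day’s names; anyone trying to block them must handle all 250, every day, forever.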

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL that was on the list Porras had produced but – those involved decline to say why – had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.
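The arithmetic behind “nigh-on impossible” is easy to sketch. In this hedged illustration (the seeding scheme and placeholder names are assumptions, not Conficker’s code), each copy of the worm draws the same random 500 names per day from the encoded pool of 50,000:

```python
# Sketch of the April variant's lookup strategy described above: poll a
# random 500 of 50,000 encoded candidate domains each day. Seeding and
# names here are illustrative assumptions, not Conficker's actual code.
import random

POOL_SIZE = 50_000
DAILY_PROBES = 500

def daily_probe_set(pool: list[str], day_seed: int) -> list[str]:
    rng = random.Random(day_seed)     # same seed -> same 500 in every copy
    return rng.sample(pool, DAILY_PROBES)

pool = [f"domain{i:05d}.example" for i in range(POOL_SIZE)]
probes = daily_probe_set(pool, day_seed=20090401)

# Asymmetry: the authors must register ONE name from the day's 500;
# defenders must pre-empt all 50,000, daily, across ~116 registries.
print(len(probes))
```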

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

From Philip Larkin’s “Aubade”

From Philip Larkin’s “Aubade”:

I work all day, and get half drunk at night.
Waking at four to soundless dark, I stare.
In time the curtain edges will grow light.
Till then I see what’s really always there:
Unresting death, a whole day nearer now,
Making all thought impossible but how
And where and when I shall myself die.
Arid interrogation: yet the dread
Of dying, and being dead,
Flashes afresh to hold and horrify.

The mind blanks at the glare. Not in remorse
– The good not used, the love not given, time
Torn off unused – nor wretchedly because
An only life can take so long to climb
Clear of its wrong beginnings, and may never:
But at the total emptiness forever,
The sure extinction that we travel to
And shall be lost in always. Not to be here,
Not to be anywhere,
And soon; nothing more terrible, nothing more true.

This is a special way of being afraid
No trick dispels. Religion used to try,
That vast moth-eaten musical brocade
Created to pretend we never die,
And specious stuff that says no rational being
Can fear a thing it cannot feel, not seeing
That this is what we fear – no sight, no sound,
No touch or taste or smell, nothing to think with,
Nothing to love or link with,
The anaesthetic from which none come round.

And so it stays just on the edge of vision,
A small unfocused blur, a standing chill
That slows each impulse down to indecision.
Most things may never happen: this one will,
And realisation of it rages out
In furnace fear when we are caught without
People or drink. Courage is no good:
It means not scaring others. Being brave
Lets no-one off the grave.
Death is no different whined at than withstood.

David Foster Wallace on Kafka

From David Foster Wallace’s “Laughing With Kafka” (Harper’s Magazine: July 1998, pg. 26):

… the really central Kafka joke – that the horrific struggle to establish a human self results in a self whose humanity is inseparable from that horrific struggle. That our endless and impossible journey toward home is in fact our home.

ODF compared & contrasted with OOXML

From Sam Hiser’s “Achieving Openness: A Closer Look at ODF and OOXML” (14 June 2007):

An open, XML-based standard for displaying and storing data files (text documents, spreadsheets, and presentations) offers a new and promising approach to data storage and document exchange among office applications. A comparison of the two XML-based formats–OpenDocument Format (“ODF”) and Office Open XML (“OOXML”)–across widely accepted “openness” criteria has revealed substantial differences, including the following:

  • ODF is developed and maintained in an open, multi-vendor, multi-stakeholder process that protects against control by a single organization. OOXML is less open in its development and maintenance, despite being submitted to a formal standards body, because control of the standard ultimately rests with one organization.
  • ODF is the only openly available standard, published fully in a document that is freely available and easy to comprehend. This openness is reflected in the number of competing applications in which ODF is already implemented. Unlike ODF, OOXML’s complexity, extraordinary length, technical omissions, and single-vendor dependencies combine to make alternative implementation unattractive as well as legally and practically impossible.
  • ODF is the only format unencumbered by intellectual property rights (IPR) restrictions on its use in other software, as certified by the Software Freedom Law Center. Conversely, many elements designed into the OOXML formats but left undefined in the OOXML specification require behaviors upon document files that only Microsoft Office applications can provide. This makes data inaccessible and breaks work group productivity whenever alternative software is used.
  • ODF offers interoperability with ODF-compliant applications on most of the common operating system platforms. OOXML is designed to operate fully within the Microsoft environment only. Though it will work elegantly across the many products in the Microsoft catalog, OOXML ignores accepted standards and best practices regarding its use of XML.

Overall, a comparison of both formats reveals significant differences in their levels of openness. While ODF is revealed as sufficiently open across all four key criteria, OOXML shows relative weakness in each criterion and exhibits fundamental flaws that undermine its candidacy as a global standard.

Offline copy protection in games

From Adam Swiderski’s “A History of Copy Protection” (Edge: 9 June 2008):

Fortunately, the games industry is creative, and thus it was that the offline copy protection was born and flourished. One of its most prevalent forms was an in-game quiz that would require gamers to refer to the manual for specific information – you’d be asked, for example, to enter the third word in the fourth paragraph on page 14. Some titles took a punishing approach to this little Q & A: SSI’s Star Command required a documentation check prior to each in-game save, while Master of Orion would respond to a failed manual check by gradually becoming so difficult that it was impossible to win. Perhaps the most notorious example of this method is Sierra’s King’s Quest III, in which lengthy passages of potion recipes and other information had to be reproduced from the manual. One typo, and you were greeted with a “Game Over” screen.
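The manual-check mechanic described above amounts to a lookup table the player can only consult on paper. A minimal sketch, with an invented answer table (real games shipped the answers only in the printed manual):

```python
# Hedged sketch of an in-game documentation check. The page/paragraph/
# word coordinates and answers below are invented for illustration.
MANUAL_ANSWERS = {
    (14, 4, 3): "dragon",   # page 14, paragraph 4, third word
    (7, 2, 1): "castle",
}

def documentation_check(page: int, paragraph: int, word: int,
                        answer: str) -> bool:
    """Return True if the player typed the word the manual holds."""
    expected = MANUAL_ANSWERS.get((page, paragraph, word))
    return expected is not None and answer.strip().lower() == expected

# A game like Star Command would run this before each save and refuse
# to continue (or, like Master of Orion, quietly punish) on failure.
assert documentation_check(14, 4, 3, " Dragon ")
assert not documentation_check(14, 4, 3, "wizard")
```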

Other developers eschewed straight manual checks for in-box tools and items that were more integrated into the games with which they shipped, especially once photocopiers became more accessible and allowed would-be pirates to quickly and easily duplicate documentation. LucasArts made a name for itself in this field, utilizing such gems as the Monkey Island series’ multi-level code wheels. Other games, like Maniac Mansion and Indiana Jones and the Last Crusade shipped with the kind of color-masked text one would find in old-school decoder rings; the documents could not be reproduced by the photocopiers of the day and would require the application of a transparent red plastic filter in order to get at their contents.

A very brief history of programming

From Brian Hayes’ “The Post-OOP Paradigm”:

The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when “the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” Half a century later, we’re still debugging.

The very first programs were written in pure binary notation: Both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine’s memory. Before you could call a subroutine, you had to calculate its address.

The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming.
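The assembler’s two jobs — translating mnemonics into binary and calculating addresses for the programmer — can be sketched in miniature. The four-instruction machine and its encoding below are invented for illustration, not any real architecture:

```python
# A toy two-pass assembler for an invented 4-instruction machine.
# Pass 1 resolves label addresses; pass 2 emits binary words.
OPCODES = {"load": 0b00, "store": 0b01, "add": 0b10, "sub": 0b11}

def assemble(lines: list[str]) -> list[int]:
    # Pass 1: record the address of every label (lines ending in ":").
    labels, address = {}, 0
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = address
        else:
            address += 1
    # Pass 2: one word per instruction -- opcode in the high bits,
    # operand (literal address or resolved label) in the low bits.
    words = []
    for line in lines:
        if line.endswith(":"):
            continue
        mnemonic, operand = line.split()
        value = labels.get(operand)
        if value is None:
            value = int(operand)
        words.append((OPCODES[mnemonic] << 6) | value)
    return words

program = ["start:", "load 5", "add 6", "store 7"]
print([f"{w:08b}" for w in assemble(program)])
```

Here the computer is indeed “recruited to help with its own programming”: the programmer writes `load 5`, and the assembler does the bookkeeping.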

Assembly language was a crucial early advance, but still the programmer had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x²+y² might require dozens of assembly-language instructions. Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x²+y² would be written simply as X**2+Y**2. Expressions of this kind are translated into binary form by a program called a compiler.

… By the 1960s large software projects were notorious for being late, overbudget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a “tar pit” and remarked, “Everyone seems to have been surprised by the stickiness of the problem.”

One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra’s brief letter to the editor titled “Go to statement considered harmful.” Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the goto command, which allows jumps into or out of the middle of a routine). Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs.
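The three sanctioned constructs can be shown in a few lines of Python (which, fittingly, has no goto at all). Per the Böhm–Jacopini result, these idioms suffice to express essentially any program:

```python
# The three structured-programming idioms: sequencing, alternation,
# iteration. A trivial example: sum the non-negative numbers in a list.
def sum_nonnegative(items: list[int]) -> int:
    # Sequencing: do A, then B, then C -- statements run in order.
    total = 0
    skipped = 0
    # Iteration: repeat the body until a condition (end of list) is met.
    for item in items:
        # Alternation: either do A or do B.
        if item >= 0:
            total += item
        else:
            skipped += 1
    return total

print(sum_nonnegative([3, -1, 4]))
```

Note the single entrance and single exit: control flows in at the top and out at the `return`, with no jumps into or out of the middle.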

Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.

Object-oriented programming addresses these issues by packing both data and procedures—both nouns and verbs—into a single object. An object named triangle would have inside it some data structure representing a three-sided shape, but it would also include the procedures (called methods in this context) for acting on the data. To rotate a triangle, you send a message to the triangle object, telling it to rotate itself. Sending and receiving messages is the only way objects communicate with one another; outsiders are not allowed direct access to the data. Because only the object’s own methods know about the internal data structures, it’s easier to keep them in sync.

You define the class triangle just once; individual triangles are created as instances of the class. A mechanism called inheritance takes this idea a step further. You might define a more-general class polygon, which would have triangle as a subclass, along with other subclasses such as quadrilateral, pentagon and hexagon. Some methods would be common to all polygons; one example is the calculation of perimeter, which can be done by adding the lengths of the sides, no matter how many sides there are. If you define the method calculate-perimeter in the class polygon, all the subclasses inherit this code.
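The object model of the last two paragraphs can be sketched directly. The class and method names follow the article’s example; representing a shape as a list of side lengths plus an orientation angle is an assumption made for brevity:

```python
# Sketch of the polygon/triangle example described above.
class Polygon:
    def __init__(self, sides: list[float]):
        self._sides = sides      # encapsulated: outsiders use methods,
        self._angle = 0.0        # never the data directly

    def calculate_perimeter(self) -> float:
        # Defined once here; every subclass inherits it, since summing
        # side lengths works no matter how many sides there are.
        return sum(self._sides)

    def rotate(self, degrees: float) -> None:
        # "Send a message to the object, telling it to rotate itself."
        self._angle = (self._angle + degrees) % 360.0

class Triangle(Polygon):
    def __init__(self, a: float, b: float, c: float):
        super().__init__([a, b, c])

class Quadrilateral(Polygon):
    def __init__(self, a: float, b: float, c: float, d: float):
        super().__init__([a, b, c, d])

t = Triangle(3, 4, 5)
t.rotate(90)
print(t.calculate_perimeter())   # inherited unchanged from Polygon
```

Because only `Polygon`’s own methods touch `_sides` and `_angle`, the internal representation could later change without affecting any code that merely sends messages to triangles.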

Commanding the waves to stop


From Wikipedia’s “Canute the Great“:

[King Canute (994/995 – November 12, 1035)] is perhaps best remembered for the legend of how he commanded the waves to go back. According to the legend, he grew tired of flattery from his courtiers. When one such flatterer gushed that the king could even command the obedience of the sea, Canute proved him wrong by practical demonstration at Bosham, his point being that even a king’s powers have limits. Unfortunately, this legend is usually misunderstood to mean that he believed himself so powerful that the natural elements would obey him, and that his failure to command the tides only made him look foolish. It is quite possible that the legend is simply pro-Canute propaganda.