Tools vs. tasks

From Adam Fields’s blog post, "Unthrilled with the Office 12 UI":

Over many years of designing custom content management interfaces for lots of people to use, it became crystal clear that there’s a huge difference between a “tool” and a “task”. A tool is a function that lets the user do something, but a task is a function that lets the user accomplish something.

In my experience, most successful content management interfaces are primarily task-based. When the user sits down in front of the computer, the goal is to get something done, not just use some tools. Tasks are for most people (beginners and power users alike), but tools are for power users. If you know what you want to do, but it doesn’t fit nicely into the framework of getting something done, you need a tool. Tasks should be the default.

The network is central

The centrality of the network in modern life, thought, & technology is apparent in that both ends of the use stack – users & developers – now think of the network as a given. A developer doesn’t graft networking on to software later in the process; instead, software is built assuming the network. Users expect networking in their apps as a matter of course, even if they don’t realize they’re relying on a network.

The ACLU on monopoly control by ISPs

From the ACLU’s No Competition: How Monopoly Control of the Broadband Internet Threatens Free Speech:

Common carriage policy requires that a network owner – in this case, a telephone company – not discriminate against information by halting, slowing, or otherwise tampering with the transfer of any data. The purpose of common carriage is to prevent a network owner from leveraging its control over the pipeline for communication to gain power or control over the actual information, products and services that flow through it. This is not a new concept; for well over a century it has been applied in ways that have been central to the economic development of our nation, including canal systems, public highways, and the telegraph. And common carriage has been applied to the telephone system since the early 20th century, requiring it to serve all users in an equitable and nondiscriminatory fashion.

2. Cable networks are not open

Unlike phone companies, cable television providers do not have to provide nondiscriminatory access to their TV subscribers, because cable TV is not subject to the common carrier regulatory regime. As a result, the content that cable TV companies deliver is largely under their control. …

3. Cable providers wield total control over Internet use

… Cable providers are under no obligation to remain a neutral pipe for content over an end-to-end Internet – and have many incentives for interfering with that pipe:

Basic control of the service. Providers of course have control over the fundamentals of a customer’s Internet connection. For example, they can restrict the number of computers that a customer connects to the cable modem through a home network. They can control the overall speed and reliability of a customer’s online experience. And they can set the price for various levels of high-speed access.

Control over applications. Providers can block their customers from using particular applications, such as video conferencing, Internet telephony, and virtual private networks …

Control over access to content. Even more frightening is the growing ability of cable providers to interfere with content. … That is like the phone company being allowed to own restaurants and then provide good service and clear signals to customers who call Domino’s and frequent busy signals, disconnects and static for those calling Pizza Hut. …

Ability to force-feed content. Cable providers can also use their monopoly power to force-feed content to customers by requiring them to access the Internet through a particular home page containing material selected by the cable company. …

Ability to violate privacy. Finally, a cable provider’s absolute control over its network gives it the technical capacity to record everything its customers do online, down to the smallest mouse click. In February 2002, the nation’s third largest cable company, Comcast, without notification to its customers, began to track their Web browsing. …

According to data provided by the National Cable and Telecommunications Association, the top five cable companies in the United States control 75% of the market; if the proposed merger between Comcast and AT&T is approved, only four companies will control that 75%, with approximately 35% of all cable in the US controlled by Comcast alone. …

The FCC, meanwhile, decided in April 2002 to classify broadband Internet service over cable as an "interstate information service." That technical redefinition would mean that cable broadband could be completely exempt from federal regulation such as interconnection and common carriage requirements, as well as from oversight by local cable franchising authorities. …

In fact, the Internet would never have exploded into American life the way it has without regulations issued by the FCC that curbed the power of the telephone companies in ways that the agency is now refusing to do for cable:

  • In 1975, the FCC issued a landmark regulation preventing telephone companies from blocking their customers from attaching their own equipment to the phone network. If the agency had decided this issue the other way, regular Americans would not have been able to use computer modems, and the Internet as we know it never could have been created.
  • In 1980, the agency set out rules that required telephone companies to offer "data services" through separate affiliates because they would have had both the ability and the incentive to use their control of the telephone network to discriminate against unaffiliated, competing data services.
  • In 1983, the FCC issued a regulation preventing telephone companies from charging ISPs by the minute for their use of the local telephone network; if they had allowed such charges, consumers would have to pay per-minute fees for Internet access. That would have slowed the growth of the Internet, as such fees have done in Europe.

6 distinct food consumers

From "Lies, Deep Fries and Statistics", at Ockham’s Razor:

So why is that, if so many people state that they are concerned about GM foods?

An indication of why has been provided by Environics International, a Canadian company which has done some cluster graphs on consumer attitudes to food and whose research translates well into Australia. The general finding of its research shows that attitudes towards GM foods are more driven by general attitudes towards food than attitudes towards gene technology.

They have defined six distinct consumer segments:

  • Food Elites, who prefer to eat organics and the best foods and will pay for them (about 1 in 10 of the population).
  • Naturalists, who prefer to buy from markets rather than supermarkets (about 1 in 8).
  • Fearful Shoppers, who have concerns about most foods, predominantly older consumers (about 1 in 5).
  • Nutrition Seekers, who treat food as fuel for the body (about 1 in 5).
  • Date Code Diligent, who read labels but generally only look at the use-by date and fat content, predominantly younger women (about 1 in 8).
  • The Unconcerned, who don’t really care too much about what they eat, predominantly younger men (about 1 in 8).

Those top three, the food elites, the naturalists and the fearful shoppers, are concerned about many food issues and also concerned about GM foods. The bottom three, the nutrition seekers, the date code diligent and the unconcerned have specific concerns only, or aren’t too concerned about foods at all and are not concerned about GM foods.

What makes a great hacker?

From Paul Graham’s "Great Hackers":

… In programming, as in many fields, the hard part isn’t solving problems, but deciding what problems to solve. …

What do hackers want? Like all craftsmen, hackers like good tools. In fact, that’s an understatement. Good hackers find it unbearable to use bad tools. They’ll simply refuse to work on projects with the wrong infrastructure. …

Great hackers also generally insist on using open source software. Not just because it’s better, but because it gives them more control. Good hackers insist on control. This is part of what makes them good hackers: when something’s broken, they need to fix it. …

After software, the most important tool to a hacker is probably his office. Big companies think the function of office space is to express rank. But hackers use their offices for more than that: they use their office as a place to think in. And if you’re a technology company, their thoughts are your product. So making hackers work in a noisy, distracting environment is like having a paint factory where the air is full of soot. …

Indeed, these statistics about Cobol or Java being the most popular language can be misleading. What we ought to look at, if we want to know what tools are best, is what hackers choose when they can choose freely – that is, in projects of their own. When you ask that question, you find that open source operating systems already have a dominant market share, and the number one language is probably Perl. …

Along with good tools, hackers want interesting projects. …

This is an area where managers can make a difference. Like a parent saying to a child, I bet you can’t clean up your whole room in ten minutes, a good manager can sometimes redefine a problem as a more interesting one. Steve Jobs seems to be particularly good at this, in part simply by having high standards. …

Along with interesting problems, what good hackers like is other good hackers. Great hackers tend to clump together …

When I was in grad school I used to hang around the MIT AI Lab occasionally. It was kind of intimidating at first. Everyone there spoke so fast. But after a while I learned the trick of speaking fast. You don’t have to think any faster; just use twice as many words to say everything. …

I’ve found that people who are great at something are not so much convinced of their own greatness as mystified at why everyone else seems so incompetent. …

The key to being a good hacker may be to work on what you like. When I think about the great hackers I know, one thing they have in common is the extreme difficulty of making them work on anything they don’t want to. I don’t know if this is cause or effect; it may be both. …

The best hackers tend to be smart, of course, but that’s true in a lot of fields. Is there some quality that’s unique to hackers? I asked some friends, and the number one thing they mentioned was curiosity. I’d always supposed that all smart people were curious – that curiosity was simply the first derivative of knowledge. But apparently hackers are particularly curious, especially about how things work. That makes sense, because programs are in effect giant descriptions of how things work.

Several friends mentioned hackers’ ability to concentrate– their ability, as one put it, to ‘tune out everything outside their own heads.’ …


It’s hard to say exactly what constitutes research in the computer world, but as a first approximation, it’s software that doesn’t have users.

The printed book results in more handwritten mss

From “William Caxton”, at The Science Show:

More than 500 years later a copy of Caxton’s first edition of Chaucer became the most expensive book ever sold, knocked down at auction in the 1990s for 4.6 million pounds. But in the 15th Century, the obvious appeal of the newly printed books lay in their value for money. Books became so commonplace indeed, that some snobs employed scribes to copy Caxton’s printed editions back into manuscript, while both church and government became alarmed at the access to new ideas that the printing press offered to a widening public. [Emphasis added]

The first printed English books

From “William Caxton”, at The Science Show:

In 1474, his History of Troy, his own book, became the first book to be printed in English, and two years later he brought his press to England, setting up shop near the Chapter House in the precinct of Westminster Abbey, where parliament met. Caxton had an eye for a good location. Along the route between the palace of Westminster and the Chapter House shuffled lawyers, churchmen, courtiers, MPs – the book-buying elite of England. The former cloth trader also had an eye for a best seller. The second book he printed was about chess, The Game and Playe of the Chesse. Then came in fairly quick succession a French-English dictionary, a translation of Aesop’s fables, several popular romances, Malory’s tale of Camelot in Le Morte d’Arthur, some school textbooks, a history of England, an encyclopaedia entitled The Mirror of the World and Chaucer’s bawdy evergreen collection, The Canterbury Tales.

Cringely on patents, trademarks, & copyright

From Robert X. Cringely’s “Patently Absurd: Patent Reform Legislation in Congress Amounts to Little More Than a ‘Get Out of Jail Free’ Card for Microsoft”:

There are several forms of intellectual property protected by U.S. law. Among these are patents, trademarks, and copyrights. The goal of all three forms of protection is to encourage hard work through the granting of some economic exclusivity, and thereby helping the nation by growing the economy and through the good works made possible by new inventions. Trademarks reduce ambiguity in marketing and promotion. Copyrights protect artistic and intellectual expression. And patents protect ideas. Of these three categories of intellectual property, the ones recently subject to reform efforts are copyrights and patents, and each of these seems to be headed in a different direction, though for generally the same reason.

Copyright law is being tightened at the behest of big publishers and especially big record and movie companies. The Digital Millennium Copyright Act, for example, makes it a crime to defeat copy protection of CDs and DVDs, thus helping to preserve the property rights of these companies. At the end of some artistic productivity chain, it is supposed to protect the rest of us, too, most notably by encouraging the record and movie companies to make more records and movies, which we will in turn be discouraged from copying illegally.

Patent reform works the other way. Where we are tightening copyrights to help big companies, we are loosening patents, also to help big companies. Certainly it isn’t to help you or me.

TIPpies ‘n flow

From Cory Doctorow’s transcript of Danny O’Brien’s “Life Hacks Live” speech:

Technically Inexperienced People (TIPpies) are NEVER in a flow state. If you try to help people who are battling their computers, they’re never concentrating on their task, never in flow.

Brian Eno on the MSFT Windows 95 sound

Brian Eno composed the famous sound that plays when you start up Windows 95 (Don’t remember it? You can download it here.). Here’s what he had to say about composing it:

The idea came up at the time when I was completely bereft of ideas. I’d been working on my own music for a while and was quite lost, actually. And I really appreciated someone coming along and saying, ‘Here’s a specific problem — solve it.’ The thing from the agency said, ‘We want a piece of music that is inspiring, universal, blah-blah, da-da-da, optimistic, futuristic, sentimental, emotional,’ this whole list of adjectives, and then at the bottom it said ‘and it must be 3¼ seconds long.’ I thought this was so funny and an amazing thought to actually try to make a little piece of music. It’s like making a tiny little jewel. In fact, I made 84 pieces. I got completely into this world of tiny, tiny little pieces of music. I was so sensitive to microseconds at the end of this that it really broke a logjam in my own work. Then when I’d finished that and I went back to working with pieces that were like three minutes long, it seemed like oceans of time.

My first book – Don’t Click on the Blue E! – is out!

For all those surfers who have slowly grown disenchanted with Microsoft’s Internet Explorer web browser, Don’t Click on the Blue E! from O’Reilly is here to help. It offers non-technical users a convenient roadmap for switching to a better web browser – Firefox.

The only book that covers the switch to Firefox, Don’t Click on the Blue E! is a must for anyone who wants to browse faster, more securely, and more efficiently. It takes readers through the process step-by-step, so it’s easy to understand. Schools, non-profits, businesses, and individuals can all benefit from this how-to guide.

Firefox includes most of the features that browser users are familiar with, along with several new features other browsers don’t have, such as a bookmarks toolbar and window tabs that allow users to quickly switch among several web sites. There is also the likelihood of better security with Firefox.

All indications say that Firefox is more than just a passing fad. With USA Today and Forbes Magazine hailing it as superior to Internet Explorer, Firefox is clearly the web browser of the future. In fact, roughly 22% of the market already uses Firefox for browsing.

Don’t Click on the Blue E! has been written exclusively for this growing audience. With its straightforward approach, it helps people harness this emerging technology so they can enjoy a superior – and safer – browsing experience.

Read two sample excerpts: Counteracting Web Annoyances (651 kb PDF) & Safety and Security (252 kb PDF).

Translated into Japanese!

Buy Don’t Click on the Blue E! from Amazon!

SSL in depth

I host Web sites, but I’ve only recently [2004] had to start implementing SSL, the Secure Sockets Layer, which turns http into https. I’ve been on the lookout for a good overview of SSL that explains why it is implemented as it is, and I think I’ve finally found one: Chris Shiflett’s HTTP Developer’s Handbook: 18. Secure Sockets Layer is a chapter from Shiflett’s book posted on his web site, and boy, is it good.

SSL has dramatically changed the way people use the Web, and it provides a very good solution to many of the Web’s shortcomings, most importantly:

  • Data integrity – SSL can help ensure that data (HTTP messages) cannot be changed while in transit.
  • Data confidentiality – SSL provides strong cryptographic techniques used to encrypt HTTP messages.
  • Identification – SSL can offer reasonable assurance as to the identity of a Web server. It can also be used to validate the identity of a client, but this is less common.
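The three properties above are visible even in a few lines of code. As a minimal sketch (not taken from Shiflett’s chapter), Python’s standard ssl module enforces the identification property by default, and wrapping a plain TCP socket is what upgrades http to https; the host name used below is purely illustrative:

```python
import socket
import ssl

# The default context enforces "identification": the server's
# certificate must chain to a trusted CA and match its hostname.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a TCP socket is what turns http into https: everything
# sent over `tls` is encrypted (confidentiality) and protected
# against tampering in transit (integrity).
def fetch_status_line(host: str = "example.com") -> str:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(1024).decode("latin-1").splitlines()[0]
```

If the certificate is expired, self-signed, or issued for a different host, `wrap_socket` raises an `ssl.SSLCertVerificationError` rather than silently talking to an impostor – which is exactly the assurance the identification bullet describes.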

Shiflett is a clear technical writer, and if this chapter is any indication, the rest of his book may be worth buying.

Mozilla fixes a bug … fast

One of the arguments anti-open-source folks often advance is that open source has just as many security holes as closed source software. On top of that, they go on to say that once open source software is as widely used as its closed source equivalents, it will suffer just as many attacks. I’ve argued before that this is a wrong-headed attitude, at least as far as email viruses are concerned, and I think the fact that Apache is the most widely used Web server in the world, yet sees only a fraction of the constant stream of security disasters that IIS does, pretty much belies the argument.

Now a blogger named sacarny has created a timeline detailing a vulnerability that was found in Mozilla and the time it took to fix it. It starts on July 7, at 13:46 GMT, and ends on July 8, at 21:57 GMT – in other words, it took about 32 hours for the Mozilla developers to fix a serious hole. And best of all, the whole process was open and documented. Sure, open source has bugs – all software does – but they tend to get fixed. Fast.

BSD vs. Linux

As a Linux user, I don’t have a lot of daily experience using BSD. Oh sure, I use it on a couple of servers that I rent, but I’ve never used it on the desktop. And while I understand the concepts, history, and ideas behind Linux very well (although there’s always more to learn), I don’t really know that much about BSD. So it was a delight to read BSD vs. Linux.

“It’s been my impression that the BSD communit{y,ies}, in general, understand Linux far better than the Linux communit{y,ies} understand BSD. I have a few theories on why that is, but that’s not really relevant. I think a lot of Linux people get turned off BSD because they don’t really understand how and why it’s put together. Thus, this rant; as a BSD person, I want to try to explain how BSD works in a way that Linux people can absorb.”

In particular, I thought the contrast between the non-unified nature of Linux and the unified nature of BSD was pretty darn fascinating. As the author points out, this is not to criticize Linux – it’s just the way it is. It’s not a value judgment. Here’s the author on BSD:

“By contrast, BSD has always had a centralized development model. There’s always been an entity that’s “in charge” of the system. BSD doesn’t use GNU ls or GNU libc, it uses BSD’s ls and BSD’s libc, which are direct descendants of the ls and libc that were in the CSRG-distributed BSD releases. They’ve never been developed or packaged independently. You can’t go ‘download BSD libc’ somewhere, because in the BSD world, libc by itself is meaningless. ls by itself is meaningless. The kernel by itself is meaningless. The system as a whole is one piece, not a bunch of little pieces.”

11 pages of really interesting, well-explained analysis. If you’re a Linux user, go read it. You’ll learn about the other great open source OS.