What patents on life has wrought

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

The Supreme Court’s decision in 1980 to allow for the patenting of living organisms opened the spigots to individual claims of ownership over everything from genes and protein receptors to biochemical pathways and processes. Soon, research scientists were swooping into patent offices around the world with “invention” disclosures that weren’t so much products or processes as they were simply knowledge–or research tools to further knowledge.

The problem is, once it became clear that individuals could own little parcels of biology or chemistry, the common domain of scientific exchange–that dynamic place where theories are introduced, then challenged, and ultimately improved–began to shrink. What’s more, as the number of claims grows, so do the overlapping claims and legal challenges. …

In October 1990 a researcher named Mary-Claire King at the University of California at Berkeley told the world that there was a breast-cancer susceptibility gene–and that it was on chromosome 17. Several other groups, sifting through 30 million base pairs of nucleotides to find the precise location of the gene, helped narrow the search with each new discovery. Then, in the spring of 1994, a team led by Mark Skolnick at the University of Utah beat everyone to the punch–identifying a gene with 5,592 base pairs that codes for a protein nearly 1,900 amino acids long. Skolnick’s team rushed to file a patent application and was issued title to the discovery three years later.

By all accounts the science was a collective effort. The NIH had funded scores of investigative teams around the country and given nearly 1,200 separate research grants to learn everything there was to learn about the genetics of breast cancer.

The patent, however, is licensed to one company–Skolnick’s. Myriad Genetics, a company the researcher founded in 1991, now insists on doing all U.S. testing for the presence of unknown mutations in the two related genes, BRCA1 and BRCA2. Those who have a mutation in either gene have as high as an 86% chance of getting cancer, say experts. The cost for the complete two-gene analysis: $2,975.

Critics say that Myriad’s ultrarestrictive licensing of the technology–one funded not only by federal dollars but also aided by the prior discoveries of hundreds of other scientists–is keeping the price of the test artificially high. Skolnick, 59, claims that the price is justified by his company’s careful analysis of thousands of base pairs of DNA, each of which is prone to a mutation or deletion, and by its educational outreach programs.


1980 Bayh-Dole Act created the biotech industry … & turned universities into businesses

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

For a century or more, the white-hot core of American innovation has been basic science. And the foundation of basic science has been the fluid exchange of ideas at the nation’s research universities. It has always been a surprisingly simple equation: Let scientists do their thing and share their work–and industry picks up the spoils. Academics win awards, companies make products, Americans benefit from an ever-rising standard of living.

That equation still holds, with the conspicuous exception of medical research. In this one area, something alarming has been happening over the past 25 years: Universities have evolved from public trusts into something closer to venture capital firms. What used to be a scientific community of free and open debate now often seems like a litigious scrum of data-hoarding and suspicion. And what’s more, Americans are paying for it through the nose. …

From 1992 to September 2003, pharmaceutical companies tied up the federal courts with 494 patent suits. That’s more than the number filed in the computer hardware, aerospace, defense, and chemical industries combined. Those legal expenses are part of a giant, hidden “drug tax”–a tax that has to be paid by someone. And that someone, as you’ll see below, is you. You don’t get the tab all at once, of course. It shows up in higher drug costs, higher tuition bills, higher taxes–and tragically, fewer medical miracles.

So how did we get to this sorry place? It was one piece of federal legislation that you’ve probably never heard of–a 1980 tweak to the U.S. patent and trademark law known as the Bayh-Dole Act. That single law, named for its sponsors, Senators Birch Bayh and Bob Dole, in essence transferred the title of all discoveries made with the help of federal research grants to the universities and small businesses where they were made.

Prior to the law’s enactment, inventors could always petition the government for the patent rights to their own work, though the rules were different at each federal agency; some 20 different statutes governed patent policy. The law simplified the “technology transfer” process and, more important, changed the legal presumption about who ought to own and develop new ideas–private enterprise as opposed to Uncle Sam. The new provisions encouraged academic institutions to seek out the clever ideas hiding in the backs of their research cupboards and to pursue licenses with business. And it told them to share some of the take with the actual inventors.

On the face of it, Bayh-Dole makes sense. Indeed, supporters say the law helped create the $43-billion-a-year biotech industry and has brought valuable drugs to market that otherwise would never have seen the light of day. What’s more, say many scholars, the law has created megaclusters of entrepreneurial companies–each an engine for high-paying, high-skilled jobs–all across the land.

That all sounds wonderful. Except that Bayh-Dole’s impact wasn’t so much in the industry it helped create, but rather in its unintended consequence–a legal frenzy that’s diverting scientists from doing science. …

A 1979 audit of government-held patents showed that fewer than 5% of some 28,000 discoveries–all of them made with the help of taxpayer money–had been developed, because no company was willing to risk the capital to commercialize them without owning title. …

A dozen schools–notably MIT, Stanford, the University of California, Johns Hopkins, and the University of Wisconsin–already had campus offices to work out licensing arrangements with government agencies and industry. But within a few years Technology Licensing Offices (or TLOs) were sprouting up everywhere. In 1979, American universities received 264 patents. By 1991, when a new organization, the Association of University Technology Managers, began compiling data, North American institutions (including colleges, research institutes, and hospitals) had filed 1,584 new U.S. patent applications and negotiated 1,229 licenses with industry–netting $218 million in royalties. By 2003 such institutions had filed five times as many new patent applications; they’d done 4,516 licensing deals and raked in over $1.3 billion in income. And on top of all that, 374 brand-new companies had sprouted from the wells of university research. That meant jobs pouring back into the community …

The anecdotal reports, fun “discovery stories” in alumni magazines, and numbers from the yearly AUTM surveys suggested that the academic productivity marvel had spread far and wide. But that’s hardly the case. Roughly a third of the new discoveries and more than half of all university licensing income in 2003 derived from just ten schools–MIT, Stanford, the usual suspects. They are, for the most part, the institutions that were pursuing “technology transfer” long before Bayh-Dole. …

Court dockets are now clogged with university patent claims. In 2002, North American academic institutions spent over $200 million in litigation (though some of that was returned in judgments)–more than five times the amount spent in 1991. Stanford Law School professor emeritus John Barton notes, in a 2000 study published in Science, that the indicator that correlates most closely with the rise in university patents is the number of intellectual-property lawyers. (Universities also spent $142 million on lobbying over the past six years.) …

So what do universities do with all their cash? That depends. Apart from the general guidelines provided by Bayh-Dole, which indicate the proceeds must be used for “scientific research or education,” there are no instructions. “These are unrestricted dollars that they can use, and so they’re worth a lot more than other dollars,” says University of Michigan law professor Rebecca Eisenberg, who has written extensively about the legislation. The one thing no school seems to use the money for is tuition–which apparently has little to do with “scientific research or education.” Meanwhile, the cost of university tuition has soared at more than twice the rate of inflation from 1980 to 2005.


What is serious news reporting?

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

Serious reporting is based in verified fact passed through mature professional judgment. It has integrity. It engages readers – there’s that word again, readers – with compelling stories and it appeals to their human capacity for reason. This is the information that people need so they can make good life decisions and good citizenship decisions. Serious reporting is far from grim and solemn and off-putting. It is accessible and relevant to its readers. And the best serious reporting is a joy to read.

Serious reporting emanates largely from responsible local dailies and national and foreign reporting by big news organizations, print and broadcast. But the reporting all these institutions do is diminishing. With fewer reporters chasing the news, there is less and less variety in the stories citizens see and hear. The media that are booming, especially cable news and blogs, do precious little serious reporting. Or they do it for specialized audiences.


Neil Postman: the medium is the metaphor for the way we think

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

In the late 1980s the late Neil Postman wrote an enduringly important book called Amusing Ourselves to Death. In it he says that Marshall McLuhan only came close to getting it right in his famous adage, that the medium is the message. Postman corrects McLuhan by saying that the medium is the metaphor – a metaphor for the way we think. Written narrative that people can read, Postman goes on, is a metaphor for thinking logically. And he says that image media bypass reason and go straight to the emotions. The image media are a metaphor for not thinking logically. Images disable thinking, so unless people read and use their reason democracy is disabled as well.


“Have you ever been admitted to a mental institution?”

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

And then there were [Walter] Annenberg’s political shenanigans – he shamelessly used his news columns [in The Philadelphia Inquirer] to embarrass candidates who dared to run against his favorites. One day in 1966 a Democrat named Milton Shapp held a press conference while running for governor and Annenberg’s hand-picked political reporter asked him only one question. The question was, “Mr. Shapp, have you ever been admitted to a mental institution?” “Why no,” Shapp responded, and went away scratching his head about this odd question. The next morning he didn’t need to scratch his head any more. A five-column front page Inquirer headline read, “Shapp Denies Mental Institution Stay.” I’m not making this up. I’ve seen the clipping – a friend used to have a framed copy above his desk. Those were not the good old days.


Wal-Mart’s monopsony power damages its vendors

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

Instead, the firm is also one of the world’s most intrusive, jealous, fastidious micromanagers, and its aim is nothing less than to remake entirely how its suppliers do business, not least so that it can shift many of its own costs of doing business onto them. In addition to dictating what price its suppliers must accept, Wal-Mart also dictates how they package their products, how they ship those products, and how they gather and process information on the movement of those products. Take, for instance, Levi Strauss & Co. Wal-Mart dictates that its suppliers tell it what price they charge Wal-Mart’s competitors, that they accept payment entirely on Wal-Mart’s terms, and that they share information all the way back to the purchase of raw materials. Take, for instance, Newell Rubbermaid. Wal-Mart controls with whom its suppliers speak, how and where they can sell their goods, and even encourages them to support Wal-Mart in its political fights. Take, for instance, Disney. Wal-Mart all but dictates to suppliers where to manufacture their products, as well as how to design those products and what materials and ingredients to use in those products. Take, for instance, Coca-Cola [… Wal-Mart decided that it did not approve of the artificial sweetener Coca-Cola planned to use in a new line of diet colas. In a response that would have been unthinkable just a few years ago, Coca-Cola yielded to the will of an outside firm and designed a second product to meet Wal-Mart’s decree.]. …

Wal-Mart and a growing number of today’s dominant firms, by contrast, are programmed to cut cost faster than price, to slow the introduction of new technologies and techniques, to dictate downward the wages and profits of the millions of people and smaller firms who make and grow what they sell, to break down entire lines of production in the name of efficiency. The effects of this change are clear: We see them in the collapsing profit margins of the firms caught in Wal-Mart’s system. We see them in the fact that of Wal-Mart’s top ten suppliers in 1994, four have sought bankruptcy protection.


Antitrust suits led to vertical integration & the IT revolution

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

As the industrial scholar Alfred D. Chandler has noted, the vertically integrated firm — which dominated the American economy for most of the last century — was to a great degree the product of antitrust enforcement. When Theodore Roosevelt began to limit the ability of large companies to grow horizontally, many responded by buying outside suppliers and integrating their operations into vertical lines of production. Many also set up internal research labs to improve existing products and develop new ones. Antitrust law later played a huge role in launching the information revolution. During the Cold War, the Justice Department routinely used antitrust suits to force high-tech firms to share the technologies they had developed. Targeted firms like IBM, RCA, AT&T, and Xerox spilled many thousands of patents onto the market, where they were available to any American competitor for free.


The mirror of monopoly: monopsony … which may be worse

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

Popular notions of oligopoly and monopoly tend to focus on the danger that firms, having gained control over a marketplace, will then be able to dictate an unfairly high price, extracting a sort of tax from society as a whole. But what should concern us today even more is a mirror image of monopoly called “monopsony.” Monopsony arises when a firm captures the ability to dictate price to its suppliers, because the suppliers have no real choice other than to deal with that buyer. Not all oligopolists rely on the exercise of monopsony, but a large and growing contingent of today’s largest firms are built to do just that. The ultimate danger of monopsony is that it deprives the firms that actually manufacture products of an adequate return on their investment. In other words, the ultimate danger of monopsony is that, over time, it tends to destroy the machines and skills on which we all rely.

Examples of monopsony can be difficult to pin down, but we are in luck in that today we have one of the best illustrations of monopsony pricing power in economic history: Wal-Mart. There is little need to recount at any length the retailer’s power over America’s marketplace. For our purposes, a few facts will suffice — that one in every five retail sales in America is recorded at Wal-Mart’s cash registers; that the firm’s revenue nearly equals that of the next six retailers combined; that for many goods, Wal-Mart accounts for upward of 30 percent of U.S. sales, and plans to more than double its sales within the next five years.

… The problem is that Wal-Mart, like other monopsonists, does not participate in the market so much as use its power to micromanage the market, carefully coordinating the actions of thousands of firms from a position above the market.


Corporate consolidation reigns in American business, & that’s a problem

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

It is now twenty-five years since the Reagan Administration eviscerated America’s century-long tradition of antitrust enforcement. For a generation, big firms have enjoyed almost complete license to use brute economic force to grow only bigger. And so today we find ourselves in a world dominated by immense global oligopolies that every day further limit the flexibility of our economy and our personal freedom within it. There are still many instances of intense competition — just ask General Motors.

But since the great opening of global markets in the early 1990s, the tendency within most of the systems we rely on for manufactured goods, processed commodities, and basic services has been toward ever more extreme consolidation. Consider raw materials: three firms control almost 75 percent of the global market in iron ore. Consider manufacturing services: Owens-Illinois has rolled up roughly half the global capacity to supply glass containers. We see extreme consolidation in heavy equipment: General Electric builds 60 percent of large gas turbines as well as 60 percent of large wind turbines. In processed materials: Corning produces 60 percent of the glass for flat-screen televisions. Even in sneakers: Nike and Adidas split a 60-percent share of the global market. Consolidation reigns in banking, meatpacking, oil refining, and grains. It holds even in eyeglasses, a field in which the Italian firm Luxottica has captured control over five of the six national outlets in the U.S. market.

The stakes could not be higher. In systems where oligopolies rule unchecked by the state, competition itself is transformed from a free-for-all into a kind of private-property right, a license to the powerful to fence off entire marketplaces, there to pit supplier against supplier, community against community, and worker against worker, for their own private gain. When oligopolies rule unchecked by the state, what is perverted is the free market itself, and our freedom as individuals within the economy and ultimately within our political system as well.


AACS, next-gen encryption for DVDs

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

AACS relies on the well-established AES (with 128-bit keys) to safeguard the disc data. Just like DVD players, HD DVD and Blu-ray drives will come with a set of Device Keys handed out to the manufacturers by AACS LA. Unlike the CSS encryption used in DVDs, though, AACS has a built-in method for revoking sets of keys that are cracked and made public. AACS-encrypted discs will feature a Media Key Block that all players need to access in order to get the key needed to decrypt the video files on the disc. The MKB can be updated by AACS LA to prevent certain sets of Device Keys from functioning with future titles – a feature that AACS dubs “revocation.” …
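The revocation mechanism is easier to see in miniature. Below is a minimal Python sketch of the idea, with every name invented and a toy XOR “cipher” standing in for AES and for AACS’s actual Media Key Block processing; it only shows how wrapping each title’s media key under the non-revoked device keys locks leaked players out of future discs.

```python
import hashlib
import os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stand-in for AES-128 key wrapping (XOR with a key-derived pad). NOT real crypto."""
    pad = (hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1))[:len(plaintext)]
    return bytes(a ^ b for a, b in zip(plaintext, pad))

toy_decrypt = toy_encrypt  # XOR wrapping is its own inverse

def build_media_key_block(media_key: bytes, device_keys: dict, revoked: set) -> dict:
    """The licensing authority wraps the per-title media key under every device-key
    set that has not been revoked; revoked players find nothing they can unwrap."""
    return {dev_id: toy_encrypt(key, media_key)
            for dev_id, key in device_keys.items() if dev_id not in revoked}

def player_get_media_key(mkb: dict, dev_id: str, dev_key: bytes):
    """A player processes the MKB with its own keys to recover the media key."""
    wrapped = mkb.get(dev_id)
    return toy_decrypt(dev_key, wrapped) if wrapped is not None else None

# Example: the key set of "player_B" has leaked, so new titles revoke it.
device_keys = {"player_A": os.urandom(16), "player_B": os.urandom(16)}
media_key = os.urandom(16)
mkb = build_media_key_block(media_key, device_keys, revoked={"player_B"})

assert player_get_media_key(mkb, "player_A", device_keys["player_A"]) == media_key
assert player_get_media_key(mkb, "player_B", device_keys["player_B"]) is None
```

Older discs pressed before the revocation still play on the compromised device; only titles mastered afterward shut it out.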

AACS also supports a new feature called the Image Constraint Token. When set, the ICT will force video output to be degraded over analog connections. ICT has so far gone unused, though this could change at any time. …

While AACS is used by both HD disc formats, the Blu-ray Disc Association (BDA) has added some features of its own to make the format “more secure” than HD DVD. The additions are BD+ and ROM Mark; though both are designed to thwart pirates, they work quite differently.

While the generic AACS spec includes key revocation, BD+ actually allows the BDA to update the entire encryption system once players have already shipped. Should encryption be cracked, new discs will include information that will alter the players’ decryption code. …
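BD+ is generally described as adding a small virtual machine to players that runs content code shipped on each disc, which can inspect the player and repair deliberately transformed content. The Python sketch below is only a loose illustration of that idea, with all names invented: a disc can supply a fix-up routine that supersedes the player’s built-in descrambling path, so a crack of the old path stops mattering for new releases.

```python
def builtin_descramble(data: bytes) -> bytes:
    """The player firmware's original routine (a toy transform, standing in for decryption)."""
    return bytes(b ^ 0xFF for b in data)

def play_disc(sectors: bytes, disc_fixup=None) -> bytes:
    """If the disc supplies fix-up code, the player runs it; otherwise it uses the built-in path."""
    descramble = disc_fixup if disc_fixup is not None else builtin_descramble
    return descramble(sectors)

# A newer disc ships a routine that changes how its own content is descrambled.
def new_disc_fixup(data: bytes) -> bytes:
    return bytes((b ^ 0xFF) ^ 0xA5 for b in data)

old_title = play_disc(b"\x12\x34")                             # firmware path
new_title = play_disc(b"\x12\x34", disc_fixup=new_disc_fixup)  # disc-supplied path
```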

The other new technology, ROM Mark, affects the manufacturing of Blu-ray discs. All Blu-ray mastering equipment must be licensed by the BDA, and they will ensure that all of it carries ROM Mark technology. Whenever a legitimate disc is created, it is given a “unique and undetectable identifier.” It’s not undetectable to the player, though, and players can refuse to play discs without a ROM Mark. The BDA has the optimistic hope that this will keep industrial-scale piracy at bay. We’ll see.


How DVD encryption (CSS) works … or doesn’t

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

DVD players are factory-built with a set of keys. When a DVD is inserted, the player runs through every key it knows until one unlocks the disc. Once this disc key is known, the player uses it to retrieve a title key from the disc. This title key actually allows the player to unscramble the disc’s contents.
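The ladder of keys just described can be pictured with a small Python sketch. Everything here is illustrative: the names are invented and a hash-based XOR wrap stands in for the real 40-bit CSS cipher; the point is only the structure of player keys unlocking a disc key, which unlocks a title key, which unscrambles the content.

```python
import hashlib
import os

def wrap(key: bytes, secret: bytes) -> bytes:
    """Toy key wrapping (XOR with a key-derived pad). NOT the real CSS stream cipher."""
    pad = (hashlib.sha256(key).digest() * (len(secret) // 32 + 1))[:len(secret)]
    return bytes(a ^ b for a, b in zip(secret, pad))

unwrap = wrap  # XOR wrapping is its own inverse

def press_disc(licensed_player_keys, disc_key: bytes, title_key: bytes, movie: bytes) -> dict:
    """A disc carries the disc key wrapped under every licensed player key,
    the title key wrapped under the disc key, and the scrambled content."""
    return {
        "disc_key_blocks": [wrap(pk, disc_key) for pk in licensed_player_keys],
        "disc_key_check": hashlib.sha256(disc_key).digest(),  # lets a player verify a guess
        "title_key_block": wrap(disc_key, title_key),
        "content": wrap(title_key, movie),
    }

def play(disc: dict, my_player_keys):
    """The player runs through its keys until one yields the disc key, then walks
    the ladder down: disc key -> title key -> unscrambled content."""
    for pk in my_player_keys:
        for block in disc["disc_key_blocks"]:
            candidate = unwrap(pk, block)
            if hashlib.sha256(candidate).digest() == disc["disc_key_check"]:
                title_key = unwrap(candidate, disc["title_key_block"])
                return unwrap(title_key, disc["content"])
    return None

licensed = [os.urandom(5) for _ in range(3)]       # CSS player keys were only 40 bits
disc = press_disc(licensed, os.urandom(5), os.urandom(5), b"movie bits")
assert play(disc, [licensed[1]]) == b"movie bits"  # a licensed player succeeds
assert play(disc, [os.urandom(5)]) is None         # an unknown key fails, short of brute force
```

The 40-bit key length in the real scheme is what made the brute-force attack described below practical even on late-1990s hardware.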

The decryption process might have been formidable when first drawn up, but it had begun to look weak even by 1999. Frank Stevenson, who published a good breakdown of the technology, estimated at that time that a 450MHz Pentium III could crack the code in only 18 seconds – and that’s without even having a player key in the first place. In other words, a simple brute-force attack could crack the code at runtime, assuming that users were patient enough to wait up to 18 seconds. With today’s technology, of course, the same crack would be trivial.

Once the code was cracked, the genie was out of the bottle. CSS descramblers proliferated …

Because the CSS system could not be updated once in the field, the entire system was all but broken. Attempts to patch the system (such as Macrovision’s “RipGuard”) met with limited success, and DVDs today remain easy to copy using a multitude of freely available tools.


Where we are technically with DRM

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

The attacks on FairPlay have been enlightening because of what they illustrate about the current state of DRM. They show, for instance, that modern DRM schemes are difficult to bypass, ignore, or strip out with a few lines of code. In contrast to older “patches” of computer software (which would generally bypass a program’s authorization routine), the encryption on modern media files is pervasive. All of the software mentioned has still required Apple’s decoding technology to unscramble the song files; there is no hack that can simply strip the files clean without help, and the ciphers are complex enough to make brute-force cracks difficult.

Apple’s response has also been a reminder that cracking an encryption scheme once will no longer be enough in the networked era. Each time that its DRM has been bypassed, Apple has been able to push out updates to its customers that render the hacks useless (or at least make them more difficult to achieve).


Apple iTunes Music Store applies DRM after download

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

A third approach [to subverting Apple’s DRM] came from PyMusique, software originally written so that Linux users could access the iTunes Music Store. The software took advantage of the fact that iTMS transmits DRM-free songs to its customers and relies on iTunes to add that gooey layer of DRM goodness at the client end. PyMusique emulates iTunes and serves as a front end to the store, allowing users to browse and purchase music. When songs are downloaded, however, the program “neglects” to apply the FairPlay DRM.
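The architectural flaw is simple enough to sketch. This is only a conceptual illustration with invented function names, not the actual iTMS protocol: the store returns the purchased file without protection and trusts the client to add FairPlay before saving, so a client that skips that step ends up with a clean file.

```python
def store_download(track_id: str) -> bytes:
    """The store hands back the purchased audio with no DRM applied."""
    return b"AAC audio for " + track_id.encode()

def apply_fairplay(data: bytes) -> bytes:
    """Stand-in for the client-side FairPlay wrapping step."""
    return b"DRM(" + data + b")"

def official_client(track_id: str) -> bytes:
    return apply_fairplay(store_download(track_id))   # DRM bolted on after download

def pymusique_style_client(track_id: str) -> bytes:
    return store_download(track_id)                    # simply "neglects" the DRM step
```

Enforcement that happens only on the client is enforcement the client’s owner can decline to run.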


To combat phishing, change browser design philosophy

From Federico Biancuzzi’s “Phishing with Rachna Dhamija” (SecurityFocus: 19 June 2006):

We discovered that existing security cues are ineffective, for three reasons:

1. The indicators are ignored (23% of participants in our study did not look at the address bar, status bar, or any SSL indicators).

2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn’t realize that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.

3. The security indicators are trivial to spoof. Many users can’t distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate. This attack fooled more than 80% of participants. …

Currently, I’m working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study that shows that simply showing a user’s history information (for example, “you’ve been to this website many times” or “you’ve never submitted this form before”) can significantly increase a user’s ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I’ve been investigating is techniques to help users recover from errors and to identify when errors are real, or when they are simulated. Many attacks rely on users not being able to make this distinction.

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

Rachna Dhamija: I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customized security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user’s focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password. …

With security skins, we were trying to solve not user authentication, but the reverse problem – server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they have mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like “Server X is authenticated”, or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user’s browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can’t be replayed by an attacker and it won’t reveal anything useful about the user’s password. …
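A toy version of the matching-image idea, assuming an invented pattern generator rather than the actual Dynamic Security Skins visual hash: both endpoints derive the same deterministic “abstract art” from the key they negotiated, and a spoofed page that never ran the authentication cannot predict what the user’s trusted window will show.

```python
import hashlib
import random

def abstract_image(shared_key: bytes, size: int = 8):
    """Deterministic pattern derived from the negotiated key: same key, same picture."""
    seed = int.from_bytes(hashlib.sha256(shared_key).digest(), "big")
    rng = random.Random(seed)
    palette = " .:*#"
    return ["".join(rng.choice(palette) for _ in range(size)) for _ in range(size)]

session_key = b"key agreed during mutual authentication"
browser_view = abstract_image(session_key)   # drawn in the trusted password window
server_view = abstract_image(session_key)    # drawn by the authenticated site
assert browser_view == server_view           # images match, so it is safe to proceed

spoofed_view = abstract_image(b"attacker's guess")
assert spoofed_view != browser_view          # a spoofed page shows the wrong picture
```

Because the pattern changes with each authentication, capturing one image gives an attacker nothing to replay and reveals nothing about the password.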

Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can’t simulate is an interface he can’t predict. This is the principle that DSS relies on. We should make it easy for users to personalize their interfaces. Look at how popular screensavers, ringtones, and application skins are – users clearly enjoy the ability to personalize their interfaces. We can take advantage of this fact to build spoof resistant interfaces.


1% create, 10% comment, 89% just use

From Charles Arthur’s “What is the 1% rule?” (Guardian Unlimited: 20 July 2006):

It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will “interact” with it (commenting or offering improvements) and the other 89 will just view it.

It’s a meme that emerges strongly in statistics from YouTube, which in just 18 months has gone from zero to 60% of all online video viewing.

The numbers are revealing: each day there are 100 million downloads and 65,000 uploads – which as Antony Mayfield (at http://open.typepad.com/open) points out, is 1,538 downloads per upload – and 20m unique users per month.

That puts the “creator to consumer” ratio at just 0.5%, but it’s early days yet …
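A quick check of Mayfield’s arithmetic, using the per-day figures quoted above:

```python
downloads_per_day = 100_000_000
uploads_per_day = 65_000
print(round(downloads_per_day / uploads_per_day))   # ~1538 downloads for every upload
```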

50% of all Wikipedia article edits are done by 0.7% of users, and more than 70% of all articles have been written by just 1.8% of all users, according to the Church of the Customer blog (http://customerevangelists.typepad.com/blog/).

Earlier metrics garnered from community sites suggested that about 80% of content was produced by 20% of the users, but the growing number of data points is creating a clearer picture of how Web 2.0 groups need to think. For instance, a site that demands too much interaction and content generation from users will see nine out of 10 people just pass by.

Bradley Horowitz of Yahoo points out that much the same applies at Yahoo: in Yahoo Groups, the discussion lists, “1% of the user population might start a group; 10% of the user population might participate actively, and actually author content, whether starting a thread or responding to a thread-in-progress; 100% of the user population benefits from the activities of the above groups,” he noted on his blog (www.elatable.com/blog/?p=5) in February.


Open source turns software into a service industry

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

It’s not necessarily going to be an easy transition. Open source turns software into a service industry. Service-provider firms (think of medical and legal practices) can’t be scaled up by injecting more capital into them; those that try only scale up their fixed costs, overshoot their revenue base, and starve to death. The choices come down to singing for your supper (getting paid through tips and donations), running a corner shop (a small, low-overhead service business), or finding a wealthy patron (some large firm that needs to use and modify open-source software for its business purposes).


Differences between Macintosh & Unix programmers

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

Macintosh programmers are all about the user experience. They’re architects and decorators. They design from the outside in, asking first “What kind of interaction do we want to support?” and then building the application logic behind it to meet the demands of the user-interface design. This leads to programs that are very pretty and infrastructure that is weak and rickety. In one notorious example, as late as Release 9 the MacOS memory manager sometimes required the user to deallocate memory by manually chucking out exited but still-resident programs. Unix people are viscerally revolted by this kind of mal-design; they don’t understand how Macintosh people could live with it.

By contrast, Unix people are all about infrastructure. We are plumbers and stonemasons. We design from the inside out, building mighty engines to solve abstractly defined problems (like “How do we get reliable packet-stream delivery from point A to point B over unreliable hardware and links?”). We then wrap thin and often profoundly ugly interfaces around the engines. The commands date(1), find(1), and ed(1) are notorious examples, but there are hundreds of others. Macintosh people are viscerally revolted by this kind of mal-design; they don’t understand how Unix people can live with it. …

In many ways this kind of parochialism has served us well. We are the keepers of the Internet and the World Wide Web. Our software and our traditions dominate serious computing, the applications where 24/7 reliability and minimal downtime is a must. We really are extremely good at building solid infrastructure; not perfect by any means, but there is no other software technical culture that has anywhere close to our track record, and it is one to be proud of. …

To non-technical end users, the software we build tends to be either bewildering and incomprehensible, or clumsy and condescending, or both at the same time. Even when we try to do the user-friendliness thing as earnestly as possible, we’re woefully inconsistent at it. Many of the attitudes and reflexes we’ve inherited from old-school Unix are just wrong for the job. Even when we want to listen to and help Aunt Tillie, we don’t know how — we project our categories and our concerns onto her and give her ‘solutions’ that she finds as daunting as her problems.


The first movie theater

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

APRIL 16, 1902: The Movies

Motion pictures seemed destined to become a passing fad. Only a few years after Edison’s first crude newsreels were screened — mostly in penny arcades, alongside carnival games and other cheap attractions — the novelty had worn off, and Americans were flocking back to live vaudeville.

Then, in spring 1902, Thomas L. Tally opened his Electric Theater in Los Angeles, a radical new venture devoted to movies and other high-tech devices of the era, like audio recordings.

“Tally was the first person to offer a modern multimedia entertainment experience to the American public,” says the film historian Marc Wanamaker. Before long, his successful movie palace produced imitators nationally, which would become known as “nickelodeons.”


The day FDR was almost assassinated

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

FEB. 15, 1933: The Wobbly Chair

It should have been an easy shot: five rounds at 25 feet. But the gunman, Giuseppe Zangara, an anarchist, lost his balance atop a wobbly chair, and instead of hitting President-elect Franklin D. Roosevelt, he fatally wounded the mayor of Chicago, who was shaking hands with F.D.R.

Had Roosevelt been assassinated, his conservative Texas running mate, John Nance Garner, would most likely have come to power. “The New Deal, the move toward internationalism – these would never have happened,” says Alan Brinkley of Columbia University. “It would have changed the history of the world in the 20th century. I don’t think the Kennedy assassination changed things as much as Roosevelt’s would have.”


The date Silicon Valley (& Intel) was born

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

SEPT. 18, 1957: Revolt of the Nerds

Fed up with their boss, eight lab workers walked off the job on this day in Mountain View, Calif. Their employer, William Shockley, had decided not to continue research into silicon-based semiconductors; frustrated, they decided to undertake the work on their own. The researchers — who would become known as “the traitorous eight” — went on to invent the microprocessor (and to found Intel, among other companies). “Sept. 18 was the birth date of Silicon Valley, of the electronics industry and of the entire digital age,” says Mr. Shockley’s biographer, Joel Shurkin.
