
Types of open source licenses

From Eric Steven Raymond’s “Varieties of Open-Source Licensing” (The Art of Unix Programming: 19 September 2003):

MIT or X Consortium License

The loosest kind of free-software license is one that grants unrestricted rights to copy, use, modify, and redistribute modified copies as long as a copy of the copyright and license terms is retained in all modified versions. But when you accept this license you do give up the right to sue the maintainers. …

BSD Classic License

The next least restrictive kind of license grants unrestricted rights to copy, use, modify, and redistribute modified copies as long as a copy of the copyright and license terms is retained in all modified versions, and an acknowledgment is made in advertising or documentation associated with the package. Grantee has to give up the right to sue the maintainers. … Note that in mid-1999 the Office of Technology Transfer of the University of California rescinded the advertising clause in the BSD license. …

Artistic License

The next most restrictive kind of license grants unrestricted rights to copy, use, and locally modify. It allows redistribution of modified binaries, but restricts redistribution of modified sources in ways intended to protect the interests of the authors and the free-software community. …

General Public License

The GNU General Public License (and its derivative, the Library or “Lesser” GPL) is the single most widely used free-software license. Like the Artistic License, it allows redistribution of modified sources provided the modified files bear “prominent notice”.

The GPL requires that any program containing parts that are under GPL be wholly GPLed. (The exact circumstances that trigger this requirement are not perfectly clear to everybody.)

These extra requirements actually make the GPL more restrictive than any of the other commonly used licenses. …

Mozilla Public License

The Mozilla Public License supports software that is open source, but may be linked with closed-source modules or extensions. It requires that the distributed software (“Covered Code”) remain open, but permits add-ons called through a defined API to remain closed. …


The airplane graveyard

From Patrick Smith’s “Ask the pilot” (Salon: 4 August 2006):

The wing is shorn off. It lies upside down in the dirt amid a cluster of desert bushes. The flaps and slats are ripped away, and a nest of pipes sprouts from the engine attachment pylon like the flailing innards of some immense dead beast. Several yards to the west, the center fuselage has come to rest inverted, the cabin cracked open like an eggshell. Inside, shattered rows of overhead bins are visible through a savage tangle of cables, wires, ducts and insulation. Seats are flung everywhere, still attached to one another in smashed-up units of two and three. I come to a pair of first-class chairs, crushed beneath the remains of a thousand-pound bulkhead. In the distance, the plane’s tail sits upright in a gesture of mutilated repose, twisted sharply to one side. High on the fin, the blue and white logo remains visible, save for a large vacant portion where the rudder used to be. …

I’m taking in one of the aviation world’s most curious and fascinating places, the “boneyard” at Mojave Airport in California, 70 miles north of Los Angeles.

The Mojave Desert is a barren place, a region of forbidding rocky hills and centuries-old Joshua trees. But it’s also an area with a rich aerospace history. Edwards Air Force Base and the U.S. Navy’s China Lake weapons station are both here, as well as the airport in Palmdale, where the Lockheed L-1011 was built. The Mojave Airport, officially known as the Mojave Airport and Civilian Aerospace Test Center, is the first FAA-licensed “spaceport” in the United States, home to a burgeoning commercial spacecraft industry. It’s a spot for ingenuity and innovation, you could say. But for hundreds of commercial jetliners, it is also the end of the road.

Of several aircraft scrap yards and storage facilities, including others in Arizona, Oklahoma and elsewhere in California, Mojave is arguably the most famous. …

There are upward of 200 planes at Mojave, though the number rises and falls as hulls are destroyed — or returned to service. Not all of the inventory is permanently grounded or slated for destruction. Neither are the planes necessarily old. Aircraft are taken out of service for a host of reasons, and age, strictly speaking, isn’t always one of them. The west side of the airport is where most of the newer examples are parked. MD-80s, Fokker 100s and an assortment of later-model 737s line the sunbaked apron in a state of semiretirement, waiting for potential buyers. They wear the standard uniform of prolonged storage: liveries blotted out, intakes and sensor probes wrapped and covered to protect them from the ravages of climate — and from the thousands of desert jackrabbits that make their homes here. A few of the ships are literally brand new, flown straight to Mojave from the assembly line to await reassignment after a customer changed its plans. …

The scrap value of a carcass is anywhere from $15,000 to $30,000.

“New arrivals, as it were, tend to come in bunches,” explains Mike Potter, one of several Mojave proprietors. …

Before they’re broken up, jets are scavenged for any useful or valuable parts. Control surfaces — ailerons, rudders, slats and elevators — have been carefully removed. Radomes — the nose-cone assemblies that conceal a plane’s radar — are another item noticeable by their absence. And, almost without exception, engines have been carted away for use elsewhere, in whole or in part. Potter has a point about being careful out here, for the boneyard floor is an obstacle course of random, twisted, dangerously sharp detritus. Curiously, I notice hundreds of discarded oxygen masks, their plastic face cups bearing the gnaw marks of jackrabbits. Some of the jets are almost fully skeletonized, and much of what used to rest inside is now scattered across the ground. …

Near the eastern perimeter sits a mostly intact Continental Airlines 747. This is one of Potter’s birds, deposited here in 1999. A hundred-million-dollar plane, ultimately worth about 25 grand for the recyclers. …


How to wiretap

From Seth David Schoen’s “Wiretapping vulnerabilities” (Vitanuova: 9 March 2006):

Traditional wiretap threat model: the risks are detection of the tap, and obfuscation of content of communication. …

POTS is basically the same as it was 100 years ago — with central offices and circuit-switching. A phone from 100 years ago will pretty much still work today. “Telephones are a remarkable example of engineering optimization” because they were built to work with very minimal requirements: just two wires between CO and the end subscriber, don’t assume that the subscriber has power, don’t assume that the subscriber has anything else. There is a DC current loop that provides 48 V DC power. The current loop determines the hook switch state. There’s also audio signalling for in-band signalling from phone to CO — or from CO to phone — or for voice. It all depends on context and yet all these things are multiplexed over two wires, including the hook state and the audio signalling and the voice traffic.

If you wanted to tap this, you could do it in three different ways:

* Via the local loop (wired or wireless/cellular).
* Via the CO switch (software programming).
* Via trunk interception (e.g. fiber, microwave, satellite) with demultiplexing.

How do law enforcement agencies (LEAs) do it? Almost always at the local loop or CO. (By contrast, intelligence agencies are more likely to try to tap trunks.)


Info about the Internet Archive

From The Internet Archive’s “Orphan Works Reply Comments” (9 May 2005):

The Internet Archive stores over 500 terabytes of ephemeral web pages, books, and moving images, adding an additional twenty-five terabytes each month. The short life span and immense quantity of these works prompt a solution that provides immediate and efficient preservation and access to orphaned ephemeral works. For instance, the average lifespan of a webpage is 100 days before it undergoes alteration or permanent deletion, and there are an average of fifteen links on a webpage.


The real solution to identity theft: bank liability

From Bruce Schneier’s “Mitigating Identity Theft” (Crypto-Gram: 15 April 2005):

The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. …

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit to the criminal in the victim’s name. …

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. …

The second issue is the ease with which a criminal can use personal data to commit fraud. …

Proposed fixes tend to concentrate on the first issue — making personal data harder to steal — whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

… That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

… The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions.


Examples of tweaking old technologies to add social aspects

From Clay Shirky’s “Group as User: Flaming and the Design of Social Software” (Clay Shirky’s Writings About the Internet: 5 November 2004):

This possibility of adding novel social components to old tools presents an enormous opportunity. To take the most famous example, the Slashdot moderation system puts the ability to rate comments into the hands of the users themselves. The designers took the traditional bulletin board format — threaded posts, sorted by time — and added a quality filter. And instead of assuming that all users are alike, the Slashdot designers created a karma system, to allow them to discriminate in favor of users likely to rate comments in ways that would benefit the community. And, to police that system, they created a meta-moderation system, to solve the ‘Who will guard the guardians’ problem. …
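
As a rough illustration of the quality-filter and karma ideas (a minimal sketch only, not Slashdot's actual moderation code; the threshold, karma weights, and data shapes are invented for the example):

```python
# Toy comment filter in the spirit described above: raters with higher karma
# move a comment's score more, and readers only see comments whose weighted
# score clears a threshold they choose. Illustrative, not Slashdot's system.
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    ratings: list = field(default_factory=list)  # (rater_karma, +1 or -1)

    def score(self) -> float:
        # High-karma raters count for more than brand-new accounts.
        return sum(karma * vote for karma, vote in self.ratings)

def visible_comments(comments, threshold=1.0):
    return [c for c in comments if c.score() >= threshold]

# usage
thread = [
    Comment("alice", "Insightful point about licensing."),
    Comment("bob", "FIRST POST!!!"),
]
thread[0].ratings += [(2.0, +1), (1.0, +1)]   # upvoted by trusted raters
thread[1].ratings += [(0.5, -1), (2.0, -1)]   # downvoted, mostly by a trusted rater
for c in visible_comments(thread, threshold=1.0):
    print(c.author, c.text)
```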

Likewise, Craigslist took the mailing list, and added a handful of simple features with profound social effects. First, all of Craigslist is an enclosure, owned by Craig … Because he has a business incentive to make his list work, he and his staff remove posts if enough readers flag them as inappropriate. …

And, on the positive side, the addition of a “Nominate for ‘Best of Craigslist’” button in every email creates a social incentive for users to post amusing or engaging material. … The only reason you would nominate a post for ‘Best of’ is if you wanted other users to see it — if you were acting in a group context, in other words. …

Jonah Brucker-Cohen’s Bumplist stands out as an experiment in experimenting with the social aspects of mailing lists. Bumplist, whose motto is “an email community for the determined”, is a mailing list for 6 people, which anyone can join. When the 7th user joins, the first is bumped and, if they want to be back on, must re-join, bumping the second user, ad infinitum. … However, it is a vivid illustration of the ways simple changes to well-understood software can produce radically different social effects.

You could easily imagine many such experiments. What would it take, for example, to design a mailing list that was flame-retardant? Once you stop regarding all users as isolated actors, a number of possibilities appear. You could institute induced lag, where, once a user contributed 5 posts in the space of an hour, a cumulative 10 minute delay would be added to each subsequent post. Every post would be delivered eventually, but it would retard the rapid-reply nature of flame wars, introducing a cooling off period for the most vociferous participants.
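
A minimal sketch of the induced-lag idea (the 5-posts-per-hour trigger and the 10-minute increment come from the paragraph above; the data structures and the function name are invented for illustration):

```python
# Induced lag: once a user has posted 5 times within an hour, each further
# post is delayed by an extra 10 minutes, cumulatively. Posts are still
# delivered, but rapid-fire exchanges slow down.
import time
from collections import defaultdict, deque

WINDOW = 3600        # one hour, in seconds
FREE_POSTS = 5       # posts allowed per window before lag kicks in
LAG_STEP = 600       # 10 minutes of extra delay per post over the limit

recent_posts = defaultdict(deque)   # user -> timestamps of recent posts

def delivery_delay(user, now=None):
    """Seconds to hold this user's next post before sending it to the list."""
    now = time.time() if now is None else now
    stamps = recent_posts[user]
    while stamps and now - stamps[0] > WINDOW:
        stamps.popleft()                 # forget posts outside the window
    stamps.append(now)
    over = max(0, len(stamps) - FREE_POSTS)
    return over * LAG_STEP               # cumulative: 10 min, then 20 min, ...

# usage: the 6th post within the hour gets 600 s of lag, the 7th gets 1200 s
for i in range(7):
    print(i + 1, delivery_delay("carol", now=i * 60))
```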

You could institute a kind of thread jail, where every post would include a ‘Worst of’ button, in the manner of Craigslist. Interminable, pointless threads (e.g. Which Operating System Is Objectively Best?) could be sent to thread jail if enough users voted them down. (Though users could obviously change subject headers and evade this restriction, the surprise, first noted by Julian Dibbell, is how often users respect negative communal judgment, even when they don’t respect the negative judgment of individuals. [ See Rape in Cyberspace — search for “aggressively antisocial vibes.”])

You could institute a ‘Get a room!’ feature, where any conversation that involved two users ping-ponging six or more posts (substitute other numbers to taste) would be automatically re-directed to a sub-list, limited to that pair. The material could still be archived, and so accessible to interested lurkers, but the conversation would continue without the attraction of an audience.
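
A sketch of how the ping-pong detection might work (the six-post trigger comes from the paragraph above; treating strict alternation between two authors as “ping-ponging” is an added assumption):

```python
# 'Get a room!': if the last N posts in a thread alternate between the same
# two participants, the pair would be redirected to a private sub-list.
# Here we only detect the condition and report the pair.
PING_PONG_LIMIT = 6

def should_get_a_room(thread_authors, limit=PING_PONG_LIMIT):
    """thread_authors: list of author names in the thread, oldest first."""
    tail = thread_authors[-limit:]
    if len(tail) < limit:
        return None
    pair = set(tail)
    if len(pair) != 2:
        return None
    # require strict alternation, so a third poster breaks the streak
    if all(tail[i] != tail[i + 1] for i in range(len(tail) - 1)):
        return tuple(sorted(pair))
    return None

# usage
print(should_get_a_room(["dave", "erin", "dave", "erin", "dave", "erin"]))
# -> ('dave', 'erin'): this exchange would be moved to a pair-only sub-list
print(should_get_a_room(["dave", "erin", "frank", "erin", "dave", "erin"]))
# -> None
```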

You could imagine a similar exercise, working on signal/noise ratios generally, and keying off the fact that there is always a most active poster on mailing lists, who posts much more often than even the second most active, and much much more often than the median poster. Oddly, the most active poster is often not even aware that they occupy this position (seeing ourselves as others see us is difficult in mediated spaces as well), but making them aware of it often causes them to self-moderate. You can imagine flagging all posts by the most active poster, whoever that happened to be, or throttling the maximum number of posts by any user to some multiple of average posting tempo.
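
And a sketch of that last pair of ideas, flagging the most active poster and capping posts at a multiple of the typical tempo (the factor of three and the idea of counting over a week are invented for the example):

```python
# Flag the most active poster and compute a per-user cap set at a multiple of
# the median poster's tempo, as suggested above. The cap multiple and the
# counting window are arbitrary choices for this sketch.
from collections import Counter
from statistics import median

def activity_report(posts, cap_multiple=3):
    """posts: list of author names for some window (say, the past week)."""
    counts = Counter(posts)
    top_author, top_count = counts.most_common(1)[0]
    typical = median(counts.values())
    cap = max(1, int(cap_multiple * typical))
    return {
        "most_active": top_author,
        "their_posts": top_count,
        "median_posts": typical,
        "suggested_cap": cap,
        "over_cap": [a for a, n in counts.items() if n > cap],
    }

# usage: gail posted 40 times against a median of 9, so she is flagged
# and sits well over the suggested cap of 27 posts
week = ["gail"] * 40 + ["hank"] * 12 + ["ivy"] * 6 + ["jo"] * 2
print(activity_report(week))
```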


Clay Shirky on flaming & how to combat it

From Clay Shirky’s “Group as User: Flaming and the Design of Social Software” (Clay Shirky’s Writings About the Internet: 5 November 2004):

Learning From Flame Wars

Mailing lists were the first widely available piece of social software. … Mailing lists were also the first widely analyzed virtual communities. …

Flame wars are not surprising; they are one of the most reliable features of mailing list practice. If you assume a piece of software is for what it does, rather than what its designer’s stated goals were, then mailing list software is, among other things, a tool for creating and sustaining heated argument. …

… although the environment in which a mailing list runs is computers, the environment in which a flame war runs is people. …

The user’s mental model of a word processor is of limited importance — if a word processor supports multiple columns, users can create multiple columns; if not, then not. The users’ mental model of social software, on the other hand, matters enormously. For example, ‘personal home pages’ and weblogs are very similar technically — both involve local editing and global hosting. The difference between them was mainly in the user’s conception of the activity. …

… The cumulative effect is to make maximizing individual flexibility a priority, even when that may produce conflict with the group goals.

Netiquette and Kill Files

The first general response to flaming was netiquette. Netiquette was a proposed set of behaviors that assumed that flaming was caused by (who else?) individual users. If you could explain to each user what was wrong with flaming, all users would stop.

This mostly didn’t work. The problem was simple — the people who didn’t know netiquette needed it most. They were also the people least likely to care about the opinion of others …

… Addressing the flamer directly works not because he realizes the error of his ways, but because it deprives him of an audience. Flaming is not just personal expression, it is a kind of performance, brought on in a social context.

… People behave differently in groups, and while momentarily engaging them one-on-one can have a calming effect, that is a change in social context, rather than some kind of personal conversion. …

Another standard answer to flaming has been the kill file, sometimes called a bozo filter, which is a list of posters whose comments you want filtered by the software before you see them. …

… And although people have continually observed (for thirty years now) that “if everyone just ignores user X, he will go away,” the logic of collective action makes that outcome almost impossible to orchestrate — it only takes a couple of people rising to bait to trigger a flame war, and the larger the group, the more difficult it is to enforce the discipline required of all members.

The Tragedy of the Conversational Commons

Briefly stated, the tragedy of the commons occurs when a group holds a resource, but each of the individual members has an incentive to overuse it. …

In the case of mailing lists (and, again, other shared conversational spaces), the commonly held resource is communal attention. The group as a whole has an incentive to keep the signal-to-noise ratio high and the conversation informative, even when contentious. Individual users, though, have an incentive to maximize expression of their point of view, as well as maximizing the amount of communal attention they receive. It is a deep curiosity of the human condition that people often find negative attention more satisfying than inattention, and the larger the group, the likelier someone is to act out to get that sort of attention.

However, proposed responses to flaming have consistently steered away from group-oriented solutions and towards personal ones. …

Weblog and Wiki Responses

… Weblogs are relatively flame-free because they provide little communal space. In economic parlance, weblogs solve the tragedy of the commons through enclosure, the subdividing and privatizing of common space. …

Like weblogs, wikis also avoid the tragedy of the commons, but they do so by going to the other extreme. Instead of everything being owned, nothing is. Whereas a mailing list has individual and inviolable posts but communal conversational space, in wikis, even the writing is communal. … it is actually easier to repair damage than to cause it. …

Weblogs and wikis are proof that you can have broadly open discourse without suffering from hijacking by flamers, by creating a social structure that encourages or deflects certain behaviors.


The neutron bomb as the most moral weapon possible

From Charles Platt’s “The Profits of Fear” (August 2005):

Sam Cohen might have remained relatively unknown, troubled by ethical lapses in government and the military but unable to do anything about them, if he had not visited Seoul in 1951, during the Korean war. In the aftermath of bombing sorties he witnessed scenes of intolerable devastation. Civilians wandered like zombies through the ruins of a city in which all services had ceased. Children were drinking water from gutters that were being used as sewers. “I’d seen countless pictures of Hiroshima by then,” Cohen recalls, “and what I saw in Seoul was precious little different. . . . The question I asked of myself was something like: If we’re going to go on fighting these damned fool wars in the future, shelling and bombing cities to smithereens and wrecking the lives of their surviving inhabitants, might there be some kind of nuclear weapon that could avoid all this?”

Here was a singularly odd idea: To re-engineer the most inhumane and destructive weapon of all time, so that it would _reduce_ human suffering. Cohen’s unique achievement was to prove that this could in fact be done.

His first requirement was that wars should be fought as they had been historically, confining their damage to military combatants while towns and cities remained undamaged and their civilian inhabitants remained unscathed. …

Ideally he wanted to reduce blast damage to zero, to eliminate the wholesale demolition of civilian housing, services, and amenities that he had witnessed in Seoul. He saw a way to achieve this if a fusion reaction released almost all of its energy as radiation. Moreover, if this radiation consisted of neutrons, which carry no charge, it would not poison the environment with residual radioactivity.

The bomb would still kill people–but this was the purpose of all weapons. _If_ wars were liable to recur (which Cohen thought was probable), soldiers were going to use weapons of some kind against each other, and everyone would benefit if the weapons minimized pain and suffering while ending the conflict as rapidly as possible.

Cohen came up with a design for a warhead about one-tenth as powerful as the atomic bombs dropped on Japan. If it was detonated at 3,000 feet above ground level, its blast effects would be negligible while its neutron radiation would be powerful enough to cause death within a circle about one mile in diameter. This was the battlefield weapon that came to be known as the neutron bomb.

Such a weapon obviously would be more civilized than large-scale hydrogen bombs, and would also be more humane than conventional bombs, because it would create an all-or-nothing, live-or-die scenario in which no one would be wounded. A stream of neutrons cannot maim people. It will not burn their flesh, spill their blood, or break their bones. Those who receive a non-lethal dose will recover after a period of intense nausea and diarrhea, and Cohen estimated that their risk of subsequent cancer would be no greater than the risk we experience as a result of exposure to second-hand cigarette smoke. As for the rest, death would come relatively quickly, primarily from shock to the central nervous system. As he put it in his typically candid style, “I doubt whether the agony an irradiated soldier goes through in the process of dying is any worse than that produced by having your body charred to a crisp by napalm, your guts being ripped apart by shrapnel, your lungs blown in by concussion weapons, and all those other sweet things that happen when conventional weapons (which are preferred and anointed by our official policy) are used.”

After assessing every aspect and implication of his concept, he reached his modest conclusion: “The neutron bomb has to be the most moral weapon ever invented.”


Prescription drug spending has vastly increased in 25 years

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

Whatever the answer, it’s clear who pays for it. You do. You pay in the form of vastly higher drug prices and health-care insurance. Americans spent $179 billion on prescription drugs in 2003. That’s up from … wait for it … $12 billion in 1980 [when the Bayh-Dole Act was passed]. That’s a 13% hike, year after year, for two decades. Of course, what you don’t pay as a patient you pay as a taxpayer. The U.S. government picks up the tab for one in three Americans by way of Medicare, Medicaid, the military, and other programs. According to the provisions of Bayh-Dole, the government gets a royalty-free use, forever, of its funded inventions. It has never tried to collect. You might say the taxpayers pay for the hat–and have it handed to them.
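
A quick back-of-the-envelope check of the compounding claim, using only the two figures quoted above ($12 billion in 1980, $179 billion in 2003):

```python
# Implied annual growth rate of prescription-drug spending, 1980 -> 2003.
start, end, years = 12e9, 179e9, 2003 - 1980   # 23 years
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # about 12.5% per year, roughly the 13% annual hike cited
```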


What patents on life has wrought

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

The Supreme Court’s decision in 1980 to allow for the patenting of living organisms opened the spigots to individual claims of ownership over everything from genes and protein receptors to biochemical pathways and processes. Soon, research scientists were swooping into patent offices around the world with “invention” disclosures that weren’t so much products or processes as they were simply knowledge–or research tools to further knowledge.

The problem is, once it became clear that individuals could own little parcels of biology or chemistry, the common domain of scientific exchange–that dynamic place where theories are introduced, then challenged, and ultimately improved–begins to shrink. What’s more, as the number of claims grows, so do the overlapping claims and legal challenges. …

In October 1990 a researcher named Mary-Claire King at the University of California at Berkeley told the world that there was a breast-cancer susceptibility gene–and that it was on chromosome 17. Several other groups, sifting through 30 million base pairs of nucleotides to find the precise location of the gene, helped narrow the search with each new discovery. Then, in the spring of 1994, a team led by Mark Skolnick at the University of Utah beat everyone to the punch–identifying a gene with 5,592 base pairs that codes for a protein nearly 1,900 amino acids long. Skolnick’s team rushed to file a patent application and was issued title to the discovery three years later.

By all accounts the science was a collective effort. The NIH had funded scores of investigative teams around the country and given nearly 1,200 separate research grants to learn everything there was to learn about the genetics of breast cancer.

The patent, however, is licensed to one company–Skolnick’s. Myriad Genetics, a company the researcher founded in 1991, now insists on doing all U.S. testing for the presence of unknown mutations in the two related genes, BRCA1 and BRCA2. Those who have a mutation in either gene have as high as an 86% chance of getting cancer, say experts. The cost for the complete two-gene analysis: $2,975.

Critics say that Myriad’s ultrarestrictive licensing of the technology–one funded not only by federal dollars but also aided by the prior discoveries of hundreds of other scientists–is keeping the price of the test artificially high. Skolnick, 59, claims that the price is justified by his company’s careful analysis of thousands of base pairs of DNA, each of which is prone to a mutation or deletion, and by its educational outreach programs.


1980 Bayh-Dole Act created the biotech industry … & turned universities into businesses

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

For a century or more, the white-hot core of American innovation has been basic science. And the foundation of basic science has been the fluid exchange of ideas at the nation’s research universities. It has always been a surprisingly simple equation: Let scientists do their thing and share their work–and industry picks up the spoils. Academics win awards, companies make products, Americans benefit from an ever-rising standard of living.

That equation still holds, with the conspicuous exception of medical research. In this one area, something alarming has been happening over the past 25 years: Universities have evolved from public trusts into something closer to venture capital firms. What used to be a scientific community of free and open debate now often seems like a litigious scrum of data-hoarding and suspicion. And what’s more, Americans are paying for it through the nose. …

From 1992 to September 2003, pharmaceutical companies tied up the federal courts with 494 patent suits. That’s more than the number filed in the computer hardware, aerospace, defense, and chemical industries combined. Those legal expenses are part of a giant, hidden “drug tax”–a tax that has to be paid by someone. And that someone, as you’ll see below, is you. You don’t get the tab all at once, of course. It shows up in higher drug costs, higher tuition bills, higher taxes–and tragically, fewer medical miracles.

So how did we get to this sorry place? It was one piece of federal legislation that you’ve probably never heard of–a 1980 tweak to the U.S. patent and trademark law known as the Bayh-Dole Act. That single law, named for its sponsors, Senators Birch Bayh and Bob Dole, in essence transferred the title of all discoveries made with the help of federal research grants to the universities and small businesses where they were made.

Prior to the law’s enactment, inventors could always petition the government for the patent rights to their own work, though the rules were different at each federal agency; some 20 different statutes governed patent policy. The law simplified the “technology transfer” process and, more important, changed the legal presumption about who ought to own and develop new ideas–private enterprise as opposed to Uncle Sam. The new provisions encouraged academic institutions to seek out the clever ideas hiding in the backs of their research cupboards and to pursue licenses with business. And it told them to share some of the take with the actual inventors.

On the face of it, Bayh-Dole makes sense. Indeed, supporters say the law helped create the $43-billion-a-year biotech industry and has brought valuable drugs to market that otherwise would never have seen the light of day. What’s more, say many scholars, the law has created megaclusters of entrepreneurial companies–each an engine for high-paying, high-skilled jobs–all across the land.

That all sounds wonderful. Except that Bayh-Dole’s impact wasn’t so much in the industry it helped create, but rather in its unintended consequence–a legal frenzy that’s diverting scientists from doing science. …

A 1979 audit of government-held patents showed that fewer than 5% of some 28,000 discoveries–all of them made with the help of taxpayer money–had been developed, because no company was willing to risk the capital to commercialize them without owning title. …

A dozen schools–notably MIT, Stanford, the University of California, Johns Hopkins, and the University of Wisconsin–already had campus offices to work out licensing arrangements with government agencies and industry. But within a few years Technology Licensing Offices (or TLOs) were sprouting up everywhere. In 1979, American universities received 264 patents. By 1991, when a new organization, the Association of University Technology Managers, began compiling data, North American institutions (including colleges, research institutes, and hospitals) had filed 1,584 new U.S. patent applications and negotiated 1,229 licenses with industry–netting $218 million in royalties. By 2003 such institutions had filed five times as many new patent applications; they’d done 4,516 licensing deals and raked in over $1.3 billion in income. And on top of all that, 374 brand-new companies had sprouted from the wells of university research. That meant jobs pouring back into the community …

The anecdotal reports, fun “discovery stories” in alumni magazines, and numbers from the yearly AUTM surveys suggested that the academic productivity marvel had spread far and wide. But that’s hardly the case. Roughly a third of the new discoveries and more than half of all university licensing income in 2003 derived from just ten schools–MIT, Stanford, the usual suspects. They are, for the most part, the institutions that were pursuing “technology transfer” long before Bayh-Dole. …

Court dockets are now clogged with university patent claims. In 2002, North American academic institutions spent over $200 million in litigation (though some of that was returned in judgments)–more than five times the amount spent in 1991. Stanford Law School professor emeritus John Barton notes, in a 2000 study published in Science, that the indicator that correlates most perfectly with the rise in university patents is the number of intellectual-property lawyers. (Universities also spent $142 million on lobbying over the past six years.) …

So what do universities do with all their cash? That depends. Apart from the general guidelines provided by Bayh-Dole, which indicate the proceeds must be used for “scientific research or education,” there are no instructions. “These are unrestricted dollars that they can use, and so they’re worth a lot more than other dollars,” says University of Michigan law professor Rebecca Eisenberg, who has written extensively about the legislation. The one thing no school seems to use the money for is tuition–which apparently has little to do with “scientific research or education.” Meanwhile, the cost of university tuition has soared at a rate more than twice as high as inflation from 1980 to 2005.


What is serious news reporting?

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

Serious reporting is based in verified fact passed through mature professional judgment. It has integrity. It engages readers – there’s that word again, readers – with compelling stories and it appeals to their human capacity for reason. This is the information that people need so they can make good life decisions and good citizenship decisions. Serious reporting is far from grim and solemn and off-putting. It is accessible and relevant to its readers. And the best serious reporting is a joy to read.

Serious reporting emanates largely from responsible local dailies and national and foreign reporting by big news organizations, print and broadcast. But the reporting all these institutions do is diminishing. With fewer reporters chasing the news, there is less and less variety in the stories citizens see and hear. The media that are booming, especially cable news and blogs, do precious little serious reporting. Or they do it for specialized audiences.


Neil Postman: the medium is the metaphor for the way we think

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

In the late 1980s the late Neil Postman wrote an enduringly important book called Amusing Ourselves to Death. In it he says that Marshall McLuhan only came close to getting it right in his famous adage, that the medium is the message. Postman corrects McLuhan by saying that the medium is the metaphor – a metaphor for the way we think. Written narrative that people can read, Postman goes on, is a metaphor for thinking logically. And he says that image media bypass reason and go straight to the emotions. The image media are a metaphor for not thinking logically. Images disable thinking, so unless people read and use their reason democracy is disabled as well.


Antitrust suits led to vertical integration & the IT revolution

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

As the industrial scholar Alfred D. Chandler has noted, the vertically integrated firm — which dominated the American economy for most of the last century — was to a great degree the product of antitrust enforcement. When Theodore Roosevelt began to limit the ability of large companies to grow horizontally, many responded by buying outside suppliers and integrating their operations into vertical lines of production. Many also set up internal research labs to improve existing products and develop new ones. Antitrust law later played a huge role in launching the information revolution. During the Cold War, the Justice Department routinely used antitrust suits to force high-tech firms to share the technologies they had developed. Targeted firms like IBM, RCA, AT&T, and Xerox spilled many thousands of patents onto the market, where they were available to any American competitor for free.


AACS, next-gen encryption for DVDs

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

AACS relies on the well-established AES (with 128-bit keys) to safeguard the disc data. Just like DVD players, HD DVD and Blu-ray drives will come with a set of Device Keys handed out to the manufacturers by AACS LA. Unlike the CSS encryption used in DVDs, though, AACS has a built-in method for revoking sets of keys that are cracked and made public. AACS-encrypted discs will feature a Media Key Block that all players need to access in order to get the key needed to decrypt the video files on the disc. The MKB can be updated by AACS LA to prevent certain sets of Device Keys from functioning with future titles – a feature that AACS dubs “revocation.” …
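
To make the revocation mechanic concrete, here is a toy sketch (an illustration of the idea only, not the real AACS subset-difference scheme; the hash-based key wrapping and the table layout are invented stand-ins):

```python
# Media Key Block idea, heavily simplified: the licensing authority publishes
# the media key wrapped under every *non-revoked* device key. A revoked
# player finds no usable entry and cannot decrypt newly pressed discs.
import hashlib
import secrets

def pad_for(device_key: bytes) -> bytes:
    # Stand-in for "encrypt under this device key" (real AACS uses AES-128).
    return hashlib.sha256(device_key + b"mkb").digest()[:16]

def build_mkb(device_keys, revoked, media_key):
    mkb = {}
    for dev_id, dev_key in device_keys.items():
        if dev_id in revoked:
            continue                       # omit entries for revoked players
        pad = pad_for(dev_key)
        mkb[dev_id] = bytes(a ^ b for a, b in zip(media_key, pad))
    return mkb

def recover_media_key(dev_id, dev_key, mkb):
    entry = mkb.get(dev_id)
    if entry is None:
        return None                        # this device-key set was revoked
    pad = pad_for(dev_key)
    return bytes(a ^ b for a, b in zip(entry, pad))

# usage: player1's keys leak, so new discs simply exclude it
keys = {f"player{i}": secrets.token_bytes(16) for i in range(3)}
media_key = secrets.token_bytes(16)
mkb = build_mkb(keys, revoked={"player1"}, media_key=media_key)
assert recover_media_key("player0", keys["player0"], mkb) == media_key
assert recover_media_key("player1", keys["player1"], mkb) is None
```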

AACS also supports a new feature called the Image Constraint Token. When set, the ICT will force video output to be degraded over analog connections. ICT has so far gone unused, though this could change at any time. …

While AACS is used by both HD disc formats, the Blu-ray Disc Association (BDA) has added some features of its own to make the format “more secure” than HD DVD. The additions are BD+ and ROM Mark; though both are designed to thwart pirates, they work quite differently.

While the generic AACS spec includes key revocation, BD+ actually allows the BDA to update the entire encryption system once players have already shipped. Should encryption be cracked, new discs will include information that will alter the players’ decryption code. …

The other new technology, ROM Mark, affects the manufacturing of Blu-ray discs. All Blu-ray mastering equipment must be licensed by the BDA, and they will ensure that all of it carries ROM Mark technology. Whenever a legitimate disc is created, it is given a “unique and undetectable identifier.” It’s not undetectable to the player, though, and players can refuse to play discs without a ROM Mark. The BDA has the optimistic hope that this will keep industrial-scale piracy at bay. We’ll see.


How DVD encryption (CSS) works … or doesn’t

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

DVD players are factory-built with a set of keys. When a DVD is inserted, the player runs through every key it knows until one unlocks the disc. Once this disc key is known, the player uses it to retrieve a title key from the disc. This title key actually allows the player to unscramble the disc’s contents.
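
A toy sketch of that layered flow (purely illustrative: real CSS uses a proprietary 40-bit stream cipher and a fixed set of licensed player keys, and the hash-based scrambler below is a stand-in):

```python
# The player tries each of its factory-installed player keys against the
# disc's key block until one yields the disc key, then uses the disc key to
# unwrap the title key that actually unscrambles the content.
import hashlib
import secrets

def wrap(key, value):
    # Stand-in keyed scrambler; XOR with a key-derived pad.
    pad = hashlib.sha256(key + b"css-toy").digest()[: len(value)]
    return bytes(a ^ b for a, b in zip(value, pad))

unwrap = wrap   # XOR with the same pad both wraps and unwraps

# Disc authoring: the disc key is wrapped under every licensed player key,
# plus a self-check entry so a candidate disc key can be verified.
player_keys = [secrets.token_bytes(5) for _ in range(4)]    # 40-bit keys
disc_key, title_key = secrets.token_bytes(5), secrets.token_bytes(5)
key_block = {
    "check": wrap(disc_key, disc_key),
    "slots": [wrap(pk, disc_key) for pk in player_keys],
    "title": wrap(disc_key, title_key),
}

def recover_title_key(my_keys, key_block):
    for k in my_keys:                      # try every key the player was built with
        for slot in key_block["slots"]:
            candidate = unwrap(k, slot)
            if wrap(candidate, candidate) == key_block["check"]:
                return unwrap(candidate, key_block["title"])
    return None

# usage: a player holding one licensed key recovers the title key
print(recover_title_key([secrets.token_bytes(5), player_keys[2]], key_block) == title_key)
```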

The decryption process might have been formidable when first drawn up, but it had begun to look weak even by 1999. Frank Stevenson, who published a good breakdown of the technology, estimated at that time that a 450MHz Pentium III could crack the code in only 18 seconds – and that’s without even having a player key in the first place. In other words, a simple brute-force attack could crack the code at runtime, assuming that users were patient enough to wait up to 18 seconds. With today’s technology, of course, the same crack would be trivial.

Once the code was cracked, the genie was out of the bottle. CSS descramblers proliferated …

Because the CSS system could not be updated once in the field, the entire system was all but broken. Attempts to patch the system (such as Macrovision’s “RipGuard”) met with limited success, and DVDs today remain easy to copy using a multitude of freely available tools.


Where we are technically with DRM

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

The attacks on FairPlay have been enlightening because of what they illustrate about the current state of DRM. They show, for instance, that modern DRM schemes are difficult to bypass, ignore, or strip out with a few lines of code. In contrast to older “patches” of computer software (which would generally bypass a program’s authorization routine), the encryption on modern media files is pervasive. All of the software mentioned has still required Apple’s decoding technology to unscramble the song files; there is no simple hack that can strip the files clean without help, and the ciphers are complex enough to make brute-force cracks difficult.

Apple’s response has also been a reminder that cracking an encryption scheme once will no longer be enough in the networked era. Each time that its DRM has been bypassed, Apple has been able to push out updates to its customers that render the hacks useless (or at least make them more difficult to achieve).


Apple iTunes Music Store applies DRM after download

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

A third approach [to subverting Apple’s DRM] came from PyMusique, software originally written so that Linux users could access the iTunes Music Store. The software took advantage of the fact that iTMS transmits DRM-free songs to its customers and relies on iTunes to add that gooey layer of DRM goodness at the client end. PyMusique emulates iTunes and serves as a front end to the store, allowing users to browse and purchase music. When songs are downloaded, however, the program “neglects” to apply the FairPlay DRM.


To combat phishing, change browser design philosophy

From Federico Biancuzzi’s “Phishing with Rachna Dhamija” (SecurityFocus: 19 June 2006):

We discovered that existing security cues are ineffective, for three reasons:

1. The indicators are ignored (23% of participants in our study did not look at the address bar, status bar, or any SSL indicators).

2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn’t realize that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.

3. The security indicators are trivial to spoof. Many users can’t distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate. This attack fooled more than 80% of participants. …

Currently, I’m working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study that shows that simply showing a user’s history information (for example, “you’ve been to this website many times” or “you’ve never submitted this form before”) can significantly increase a user’s ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I’ve been investigating is techniques to help users recover from errors and to identify when errors are real, or when they are simulated. Many attacks rely on users not being able to make this distinction.
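
A minimal sketch of that history-based cue (the hostnames, storage format, and warning text are invented; a real implementation would hook the browser’s own history store and form-submission path):

```python
# Before a password form is submitted, check how familiar the destination
# host is and surface that to the user, in the spirit of "you've never
# submitted this form before."
from urllib.parse import urlparse

visit_counts = {"www.mybank.example": 212, "news.example.org": 58}  # from history
form_history = {"www.mybank.example"}    # hosts we've sent credentials to before

def familiarity_warning(form_action_url: str) -> str:
    host = urlparse(form_action_url).hostname or ""
    visits = visit_counts.get(host, 0)
    if host in form_history:
        return f"You have submitted this form before ({visits} visits to {host})."
    if visits == 0:
        return f"Warning: you have never visited {host} before. Possible spoof."
    return f"Note: you have visited {host} {visits} times but never sent it a password."

# usage
print(familiarity_warning("https://www.mybank.example/login"))
print(familiarity_warning("https://www.rnybank.example/login"))  # look-alike domain
```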

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

Rachna Dhamija: I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customized security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user’s focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password. …

With security skins, we were trying to solve not user authentication, but the reverse problem – server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they have mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like “Server X is authenticated”, or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user’s browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can’t be replayed by an attacker and it won’t reveal anything useful about the user’s password. …
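
A sketch of the matching-image idea (not the actual Dynamic Security Skins visual-hash algorithm; it only shows how two parties holding the same negotiated key can independently render the same abstract pattern):

```python
# Both the browser and the server derive an abstract pattern deterministically
# from the session key they negotiated. If mutual authentication succeeded,
# the two patterns match; an attacker without the key cannot predict them.
import hashlib

def abstract_pattern(session_key: bytes, size: int = 8) -> str:
    """Render a small grid of '#'/'.' cells derived from the key."""
    digest = hashlib.sha256(session_key + b"security-skin").digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    rows = [
        "".join("#" if bits[r * size + c] == "1" else "." for c in range(size))
        for r in range(size)
    ]
    return "\n".join(rows)

# usage: browser and server render independently; the user compares the images
shared_key = b"negotiated-session-key"
print(abstract_pattern(shared_key))          # shown in the trusted login window
assert abstract_pattern(shared_key) == abstract_pattern(shared_key)
```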

Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can’t simulate is an interface he can’t predict. This is the principle that DSS relies on. We should make it easy for users to personalize their interfaces. Look at how popular screensavers, ringtones, and application skins are – users clearly enjoy the ability to personalize their interfaces. We can take advantage of this fact to build spoof resistant interfaces.


1% create, 10% comment, 89% just use

From Charles Arthur’s “What is the 1% rule?” (Guardian Unlimited: 20 July 2006):

It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will “interact” with it (commenting or offering improvements) and the other 89 will just view it.

It’s a meme that emerges strongly in statistics from YouTube, which in just 18 months has gone from zero to 60% of all online video viewing.

The numbers are revealing: each day there are 100 million downloads and 65,000 uploads – which as Antony Mayfield (at http://open.typepad.com/open) points out, is 1,538 downloads per upload – and 20m unique users per month.
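
A one-line check of the downloads-per-upload figure from the numbers quoted:

```python
# 100 million downloads a day against 65,000 uploads a day
print(round(100_000_000 / 65_000))   # -> 1538 downloads per upload
```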

That puts the “creator to consumer” ratio at just 0.5%, but it’s early days yet …

50% of all Wikipedia article edits are done by 0.7% of users, and more than 70% of all articles have been written by just 1.8% of all users, according to the Church of the Customer blog (http://customerevangelists.typepad.com/blog/).

Earlier metrics garnered from community sites suggested that about 80% of content was produced by 20% of the users, but the growing number of data points is creating a clearer picture of how Web 2.0 groups need to think. For instance, a site that demands too much interaction and content generation from users will see nine out of 10 people just pass by.

Bradley Horowitz of Yahoo points out that much the same applies at Yahoo: in Yahoo Groups, the discussion lists, “1% of the user population might start a group; 10% of the user population might participate actively, and actually author content, whether starting a thread or responding to a thread-in-progress; 100% of the user population benefits from the activities of the above groups,” he noted on his blog (www.elatable.com/blog/?p=5) in February.
