technology

The airplane graveyard

From Patrick Smith’s “Ask the pilot” (Salon: 4 August 2006):

The wing is shorn off. It lies upside down in the dirt amid a cluster of desert bushes. The flaps and slats are ripped away, and a nest of pipes sprouts from the engine attachment pylon like the flailing innards of some immense dead beast. Several yards to the west, the center fuselage has come to rest inverted, the cabin cracked open like an eggshell. Inside, shattered rows of overhead bins are visible through a savage tangle of cables, wires, ducts and insulation. Seats are flung everywhere, still attached to one another in smashed-up units of two and three. I come to a pair of first-class chairs, crushed beneath the remains of a thousand-pound bulkhead. In the distance, the plane’s tail sits upright in a gesture of mutilated repose, twisted sharply to one side. High on the fin, the blue and white logo remains visible, save for a large vacant portion where the rudder used to be. …

I’m taking in one of the aviation world’s most curious and fascinating places, the “boneyard” at Mojave Airport in California, 70 miles north of Los Angeles.

The Mojave Desert is a barren place, a region of forbidding rocky hills and centuries-old Joshua trees. But it’s also an area with a rich aerospace history. Edwards Air Force Base and the U.S. Navy’s China Lake weapons station are both here, as well as the airport in Palmdale, where the Lockheed L-1011 was built. The Mojave Airport, officially known as the Mojave Airport and Civilian Aerospace Test Center, is the first FAA-licensed “spaceport” in the United States, home to a burgeoning commercial spacecraft industry. It’s a spot for ingenuity and innovation, you could say. But for hundreds of commercial jetliners, it is also the end of the road.

Of several aircraft scrap yards and storage facilities, including others in Arizona, Oklahoma and elsewhere in California, Mojave is arguably the most famous. …

There are upward of 200 planes at Mojave, though the number rises and falls as hulls are destroyed — or returned to service. Not all of the inventory is permanently grounded or slated for destruction. Neither are the planes necessarily old. Aircraft are taken out of service for a host of reasons, and age, strictly speaking, isn’t always one of them. The west side of the airport is where most of the newer examples are parked. MD-80s, Fokker 100s and an assortment of later-model 737s line the sunbaked apron in a state of semiretirement, waiting for potential buyers. They wear the standard uniform of prolonged storage: liveries blotted out, intakes and sensor probes wrapped and covered to protect them from the ravages of climate — and from the thousands of desert jackrabbits that make their homes here. A few of the ships are literally brand new, flown straight to Mojave from the assembly line to await reassignment after a customer changed its plans. …

The scrap value of a carcass is anywhere from $15,000 to $30,000.

“New arrivals, as it were, tend to come in bunches,” explains Mike Potter, one of several Mojave proprietors. …

Before they’re broken up, jets are scavenged for any useful or valuable parts. Control surfaces — ailerons, rudders, slats and elevators — have been carefully removed. Radomes — the nose-cone assemblies that conceal a plane’s radar — are another item noticeable by their absence. And, almost without exception, engines have been carted away for use elsewhere, in whole or in part. Potter has a point about being careful out here, for the boneyard floor is an obstacle course of random, twisted, dangerously sharp detritus. Curiously, I notice hundreds of discarded oxygen masks, their plastic face cups bearing the gnaw marks of jackrabbits. Some of the jets are almost fully skeletonized, and much of what used to rest inside is now scattered across the ground. …

Near the eastern perimeter sits a mostly intact Continental Airlines 747. This is one of Potter’s birds, deposited here in 1999. A hundred-million-dollar plane, ultimately worth about 25 grand for the recyclers. …


How to wiretap

From Seth David Schoen’s “Wiretapping vulnerabilities” (Vitanuova: 9 March 2006):

Traditional wiretap threat model: the risks are detection of the tap, and obfuscation of content of communication. …

POTS is basically the same as it was 100 years ago, with central offices and circuit switching; a phone from 100 years ago will pretty much still work today. “Telephones are a remarkable example of engineering optimization” because they were built to work with very minimal requirements: just two wires between the CO and the end subscriber, with no assumption that the subscriber has power or anything else. A DC current loop supplies 48 V of power, and the loop current determines the hook-switch state. The same pair also carries audio, used for in-band signalling from phone to CO or from CO to phone, and for voice. It all depends on context, yet the hook state, the signalling, and the voice traffic are all multiplexed over those two wires.
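As a toy illustration of that current-loop multiplexing, here is a minimal sketch of inferring hook state from the DC loop current. The threshold values are illustrative assumptions, not exact telco specs:

```python
# Illustrative thresholds (real loop-current specs vary by telco and loop length)
ON_HOOK_MAX_MA = 5     # negligible loop current: handset on hook
OFF_HOOK_MIN_MA = 20   # typical loop current once the hook switch closes

def hook_state(loop_current_ma: float) -> str:
    """Infer hook state from the DC current on the two-wire local loop."""
    if loop_current_ma < ON_HOOK_MAX_MA:
        return "on-hook"
    if loop_current_ma >= OFF_HOOK_MIN_MA:
        return "off-hook"
    return "indeterminate"  # transition / ringing region

print(hook_state(0.2))   # on-hook
print(hook_state(25.0))  # off-hook
```

The point of the sketch is the one in the quote: a single measurable property of the same two wires that carry the audio also signals state.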

If you wanted to tap this: you could do it in three different ways.

* Via the local loop (wired or wireless/cellular).
* Via the CO switch (software programming).
* Via trunk interception (e.g. fiber, microwave, satellite) with demultiplexing.

How do LEAs do it? Almost always at local loop or CO. (By contrast, intelligence agencies are more likely to try to tap trunks.)


Info about the Internet Archive

From The Internet Archive’s “Orphan Works Reply Comments” (9 May 2005):

The Internet Archive stores over 500 terabytes of ephemeral web pages, books, and moving images, adding an additional twenty-five terabytes each month. The short life span and immense quantity of these works prompt a solution that provides immediate and efficient preservation of, and access to, orphaned ephemeral works. For instance, the average lifespan of a webpage is 100 days before it undergoes alteration or permanent deletion, and there are an average of fifteen links on a webpage.
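As a back-of-the-envelope check on those 2005 figures (the arithmetic here is mine, not the Archive's):

```python
current_tb = 500          # terabytes stored (per the 2005 filing)
growth_tb_per_month = 25  # terabytes added per month

# How long until the collection doubles at that rate?
months_to_double = current_tb / growth_tb_per_month
print(months_to_double)          # 20.0 months to add another 500 TB
print(growth_tb_per_month * 12)  # 300 TB of new material per year
```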


The real solution to identity theft: bank liability

From Bruce Schneier’s “Mitigating Identity Theft” (Crypto-Gram: 15 April 2005):

The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. …

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit to the criminal in the victim’s name. …

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. …

The second issue is the ease with which a criminal can use personal data to commit fraud. …

Proposed fixes tend to concentrate on the first issue — making personal data harder to steal — whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

… That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

… The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions.


When people feel secure, they’re easier targets

From Bruce Schneier’s “Burglars and ‘Feeling Secure’” (Crypto-Gram: 15 January 2005):

This quote is from “Confessions of a Master Jewel Thief,” by Bill Mason (Villard, 2003): “Nothing works more in a thief’s favor than people feeling secure. That’s why places that are heavily alarmed and guarded can sometimes be the easiest targets. The single most important factor in security — more than locks, alarms, sensors, or armed guards — is attitude. A building protected by nothing more than a cheap combination lock but inhabited by people who are alert and risk-aware is much safer than one with the world’s most sophisticated alarm system whose tenants assume they’re living in an impregnable fortress.”

The author, a burglar, found that luxury condos were an excellent target. Although they had much more security technology than other buildings, they were vulnerable because no one believed a thief could get through the lobby.


Examples of tweaking old technologies to add social aspects

From Clay Shirky’s “Group as User: Flaming and the Design of Social Software” (Clay Shirky’s Writings About the Internet: 5 November 2004):

This possibility of adding novel social components to old tools presents an enormous opportunity. To take the most famous example, the Slashdot moderation system puts the ability to rate comments into the hands of the users themselves. The designers took the traditional bulletin board format — threaded posts, sorted by time — and added a quality filter. And instead of assuming that all users are alike, the Slashdot designers created a karma system, to allow them to discriminate in favor of users likely to rate comments in ways that would benefit the community. And, to police that system, they created a meta-moderation system, to solve the ‘Who will guard the guardians’ problem. …

Likewise, Craigslist took the mailing list, and added a handful of simple features with profound social effects. First, all of Craigslist is an enclosure, owned by Craig … Because he has a business incentive to make his list work, he and his staff remove posts if enough readers flag them as inappropriate. …

And, on the positive side, the addition of a “Nominate for ‘Best of Craigslist’” button in every email creates a social incentive for users to post amusing or engaging material. … The only reason you would nominate a post for ‘Best of’ is if you wanted other users to see it — if you were acting in a group context, in other words. …

Jonah Brucker-Cohen’s Bumplist stands out as an experiment in experimenting with the social aspects of mailing lists. Bumplist, whose motto is “an email community for the determined”, is a mailing list for 6 people, which anyone can join. When the 7th user joins, the first is bumped and, if they want to be back on, must re-join, bumping the second user, ad infinitum. … However, it is a vivid illustration of the ways simple changes to well-understood software can produce radically different social effects.

You could easily imagine many such experiments. What would it take, for example, to design a mailing list that was flame-retardant? Once you stop regarding all users as isolated actors, a number of possibilities appear. You could institute induced lag, where, once a user contributed 5 posts in the space of an hour, a cumulative 10 minute delay would be added to each subsequent post. Every post would be delivered eventually, but it would retard the rapid-reply nature of flame wars, introducing a cooling off period for the most vociferous participants.
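The induced-lag rule can be sketched in a few lines. The 5-posts-per-hour threshold and the cumulative 10-minute step come from the essay; the data structures, and the simplification that the penalty never decays, are my assumptions:

```python
from collections import defaultdict, deque

WINDOW = 3600   # one hour, in seconds
FREE_POSTS = 5  # posts allowed per window before lag kicks in
LAG_STEP = 600  # 10 minutes of additional delay per excess post

recent = defaultdict(deque)  # user -> timestamps of posts inside the window
penalty = defaultdict(int)   # user -> cumulative delivery delay, in seconds

def delivery_delay(user: str, now: float) -> int:
    """Return how long to hold this post before delivering it to the list."""
    times = recent[user]
    while times and now - times[0] > WINDOW:  # expire old posts
        times.popleft()
    times.append(now)
    if len(times) > FREE_POSTS:
        penalty[user] += LAG_STEP  # the delay is cumulative, per the essay
    return penalty[user]

# Seven rapid posts: the sixth is held 10 minutes, the seventh 20.
delays = [delivery_delay("alice", t) for t in range(7)]
print(delays)  # [0, 0, 0, 0, 0, 600, 1200]
```

Every post is still delivered eventually; only the rapid-reply loop of a flame war is slowed down.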

You could institute a kind of thread jail, where every post would include a ‘Worst of’ button, in the manner of Craigslist. Interminable, pointless threads (e.g. Which Operating System Is Objectively Best?) could be sent to thread jail if enough users voted them down. (Though users could obviously change subject headers and evade this restriction, the surprise, first noted by Julian Dibbell, is how often users respect negative communal judgment, even when they don’t respect the negative judgment of individuals. [ See Rape in Cyberspace — search for “aggressively antisocial vibes.”])

You could institute a ‘Get a room!’ feature, where any conversation that involved two users ping-ponging six or more posts (substitute other numbers to taste) would be automatically re-directed to a sub-list, limited to that pair. The material could still be archived, and so accessible to interested lurkers, but the conversation would continue without the attraction of an audience.
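A hedged sketch of what that ‘Get a room!’ detector might look like: the six-post ping-pong threshold is the essay's, everything else (function name, data representation) is my assumption:

```python
def needs_a_room(authors: list[str], limit: int = 6) -> bool:
    """True if the last `limit` posts are a strict two-person ping-pong."""
    if len(authors) < limit:
        return False
    tail = authors[-limit:]
    # Exactly two participants, strictly alternating
    return len(set(tail)) == 2 and all(a != b for a, b in zip(tail, tail[1:]))

thread = ["ann", "bob", "ann", "bob", "ann", "bob"]
print(needs_a_room(thread))                   # True -> redirect to a sub-list
print(needs_a_room(["ann", "bob", "carol"]))  # False
```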

You could imagine a similar exercise, working on signal/noise ratios generally, and keying off the fact that there is always a most active poster on mailing lists, who posts much more often than even the second most active, and much much more often than the median poster. Oddly, the most active poster is often not even aware that they occupy this position (seeing ourselves as others see us is difficult in mediated spaces as well), but making them aware of it often causes them to self-moderate. You can imagine flagging all posts by the most active poster, whoever that happened to be, or throttling the maximum number of posts by any user to some multiple of average posting tempo.
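That throttling idea reduces to a simple check against the list's average tempo. The multiplier and the example counts below are illustrative assumptions, not values from the essay:

```python
from statistics import mean

def allowed_today(posts_today: dict[str, int], user: str,
                  multiple: float = 3.0) -> bool:
    """May `user` post again, given everyone's post counts so far today?"""
    avg = mean(posts_today.values())
    return posts_today[user] < multiple * avg

counts = {"alice": 40, "bob": 6, "carol": 4, "dave": 2}
print(allowed_today(counts, "alice"))  # False: 40 exceeds 3 * avg(13) = 39
print(allowed_today(counts, "bob"))    # True
```

Note how the cap floats with the group: a list where everyone is busy tolerates a busier top poster.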


What bots do and how they work

From The Honeynet Project & Research Alliance’s “Know your Enemy: Tracking Botnets” (13 March 2005):

After successful exploitation, a bot uses Trivial File Transfer Protocol (TFTP), File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP), or CSend (an IRC extension to send files to other users, comparable to DCC) to transfer itself to the compromised host. The binary is started, and tries to connect to the hard-coded master IRC server. Often a dynamic DNS name is provided … rather than a hard-coded IP address, so the bot can be easily relocated. … Using a specially crafted nickname like USA|743634 or [UrX]-98439854 the bot tries to join the master’s channel, sometimes using a password to keep strangers out of the channel. …

Afterwards, the server accepts the bot as a client and sends it RPL_ISUPPORT, RPL_MOTDSTART, RPL_MOTD, RPL_ENDOFMOTD or ERR_NOMOTD. Replies starting with RPL_ contain information for the client; for example, RPL_ISUPPORT tells the client which features the server understands and RPL_MOTD indicates the Message Of The Day (MOTD). …

On RPL_ENDOFMOTD or ERR_NOMOTD, the bot will try to join its master’s channel with the provided password …

The bot receives the topic of the channel and interprets it as a command: …

The first topic tells the bot to spread further with the help of the LSASS vulnerability. … the second example of a possible topic instructs the bot to download a binary from the web and execute it … And if the topic does not contain any instructions for the bot, then it does nothing but idle in the channel, awaiting commands. That is fundamental for most current bots: they do not spread if they are not told to spread in their master’s channel.
Upon successful exploitation the bot will message the owner about it, if it has been advised to do so. …

Then the IRC server (also called IRC daemon, abbreviated IRCd) will provide the channel’s userlist. But most botnet owners have modified the IRCd to send just the channel operators, to save traffic and disguise the number of bots in the channel. …
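The join sequence described above, seen from the client side, can be sketched roughly as follows. The numerics (376/422) are standard IRC reply codes; the channel name and password are placeholders, not values from the paper:

```python
from typing import Optional

RPL_ENDOFMOTD = "376"  # end of MOTD: registration is complete
ERR_NOMOTD = "422"     # no MOTD configured: also means registration is done

def next_action(server_line: str, channel: str = "#chan",
                key: str = "secret") -> Optional[str]:
    """Given one raw IRC server line, return the command to send next, if any."""
    parts = server_line.split()
    # server lines look like ":server <numeric> <nick> :trailing text"
    if len(parts) >= 2 and parts[1] in (RPL_ENDOFMOTD, ERR_NOMOTD):
        return f"JOIN {channel} {key}"
    return None

print(next_action(":irc.example.net 372 nick :- message of the day"))
# None (keep waiting)
print(next_action(":irc.example.net 376 nick :End of /MOTD command."))
# JOIN #chan secret
```

This is ordinary IRC client behavior; what distinguishes a bot is what it does after the join, i.e. treating the channel topic as a command.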

The controller of a botnet has to authenticate himself to take control over the bots. …

… the “-s” switch in the last example tells the bots to be silent when authenticating their master. …

… Once an attacker is authenticated, they can do whatever they want with the bots … The IRC server that is used to connect all bots is in most cases a compromised box. … Only beginners start a botnet on a normal IRCd. It is just too obvious you are doing something nasty if you have 1,200 clients named rbot-<6-digits> reporting scanning results in a channel. Two different IRC server software implementations are commonly used to run a botnet: Unreal IRCd and ConferenceRoom:

  • Unreal IRCd (http://www.unrealircd.com/) is cross-platform and can thus be used to easily link machines running Windows and Linux. The IRC server software is stripped down and modified to fit the botnet owner’s needs. Common modifications we have noticed are stripping “JOIN”, “PART” and “QUIT” messages on channels to avoid unnecessary traffic. … able to serve 80,000 bots …
  • ConferenceRoom (http://www.webmaster.com/) is a commercial IRCd solution, but people who run botnets typically use a cracked version. …


Different types of Bots

From The Honeynet Project & Research Alliance’s “Know your Enemy: Tracking Botnets” (13 March 2005):

… some of the more widespread and well-known bots.

  • Agobot/Phatbot/Forbot/XtremBot

    … best known bot. … more than 500 known different versions of Agobot … written in C++ with cross-platform capabilities and the source code is put under the GPL. … structured in a very modular way, and it is very easy to add commands or scanners for other vulnerabilities … uses libpcap (a packet sniffing library) and Perl Compatible Regular Expressions (PCRE) to sniff and sort traffic. … can use NTFS Alternate Data Streams (ADS) and offers rootkit capabilities like file and process hiding to hide its own presence … reverse engineering this malware is harder since it includes functions to detect debuggers (e.g. SoftICE and OllyDbg) and virtual machines (e.g. VMware and Virtual PC). … the only bot that utilized a control protocol other than IRC. A fork using the distributed, organized WASTE chat network is available.

  • SDBot/RBot/UrBot/UrXBot/…

    This family of malware is at the moment the most active one … seven derivatives … written in very poor C and also published under the GPL.

  • mIRC-based Bots – GT-Bots

    We subsume all mIRC-based bots as GT-bots … GT is an abbreviation for Global Threat and this is the common name used for all mIRC-scripted bots. … mIRC-scripts, often having the extension “.mrc”, are used to control the bot.

  • DSNX Bots

    Dataspy Network X (DSNX) bot is written in C++ and has a convenient plugin interface. … code is published under the GPL. … one major disadvantage: the default version does not come with any spreaders.

  • Q8 Bots

    only 926 lines of C-code. … written for Unix/Linux systems.

  • kaiten

    … lacks a spreader too, and is also written for Unix/Linux systems. The weak user authentication makes it very easy to hijack a botnet running with kaiten. The bot itself consists of just one file.

  • Perl-based bots

    … very small and contain in most cases only a few hundred lines of code. They offer only a rudimentary set of commands (most often DDoS-attacks) … used on Unix-based systems.


Uses of botnets

From The Honeynet Project & Research Alliance’s “Know your Enemy: Tracking Botnets” (13 March 2005):

“A botnet is comparable to compulsory military service for windows boxes” – Stromberg

… Based on the data we captured, the possibilities to use botnets can be categorized as listed below. …

  1. Distributed Denial-of-Service Attacks

    Most commonly implemented and also very often used are TCP SYN and UDP flood attacks. Script kiddies apparently consider DDoS an appropriate solution to every social problem. … run commercial DDoS attacks against competing corporations … DDoS attacks are not limited to web servers, virtually any service available on the Internet can be the target of such an attack. … very specific attacks, such as running exhausting search queries on bulletin boards or recursive HTTP-floods on the victim’s website.

  2. Spamming

    open a SOCKS v4/v5 proxy … send massive amounts of bulk email … harvest email-addresses … phishing-mails

  3. Sniffing Traffic

    use a packet sniffer to watch for interesting clear-text data passing by a compromised machine. … If a machine is compromised more than once and is also a member of more than one botnet, packet sniffing allows the attacker to gather the key information of the other botnet. Thus it is possible to “steal” another botnet.

  4. Keylogging
  5. Spreading new malware

    In most cases, botnets are used to spread new bots. … spreading an email virus using a botnet is a very nice idea

  6. Installing Advertisement Addons and Browser Helper Objects (BHOs)

    setting up a fake website with some advertisements … these clicks can be “automated” so that instantly a few thousand bots click on the pop-ups. … hijacks the start-page of a compromised machine so that the “clicks” are executed each time the victim uses the browser.

  7. Google AdSense abuse

    … leveraging his botnet to click on these advertisements in an automated fashion, thus artificially incrementing the click counter.

  8. Attacking IRC Chat Networks

    attacks against Internet Relay Chat (IRC) networks. … so called “clone attack”: In this kind of attack, the controller orders each bot to connect a large number of clones to the victim IRC network.

  9. Manipulating online polls/games

    Online polls/games are getting more and more attention and it is rather easy to manipulate them with botnets.

  10. Mass identity theft

    Bogus emails (“phishing mails”) … also host multiple fake websites pretending to be Ebay, PayPal, or a bank …


Who runs botnets?

From The Honeynet Project & Research Alliance’s “Know your Enemy: Tracking Botnets” (13 March 2005):

An event that is not that unusual is that somebody steals a botnet from someone else. … bots are often “secured” by some sensitive information, e.g. channel name or server password. If one is able to obtain all this information, he is able to update the bots within another botnet to another bot binary, thus stealing the bots from another botnet. …

Something which is interesting, but rarely seen, is botnet owners discussing issues in their bot channel. …

Our observations showed that botnets are often run by young males with surprisingly limited programming skills. … we also observed some more advanced attackers: these persons seldom join the control channel. They use only one-character nicks, issue a command and leave afterwards. The updates of the bots they run are very professional. Probably these people use the botnets for commercial purposes and “sell” the services. A low percentage use their botnets for financial gain. …

Another possibility is to install special software to steal information. We had one very interesting case in which attackers stole Diablo 2 items from the compromised computers and sold them on eBay. … Some botnets are used to send spam: you can rent a botnet. The operators give you a SOCKS v4 server list with the IP addresses of the hosts and the ports their proxy runs on. …

… some attackers are highly skilled and organized, potentially belonging to well organized crime structures. Leveraging the power of several thousand bots, it is viable to take down almost any website or network instantly. Even in unskilled hands, it should be obvious that botnets are a loaded and powerful weapon.


Prescription drug spending has vastly increased in 25 years

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

Whatever the answer, it’s clear who pays for it. You do. You pay in the form of vastly higher drug prices and health-care insurance. Americans spent $179 billion on prescription drugs in 2003. That’s up from … wait for it … $12 billion in 1980 [when the Bayh-Dole Act was passed]. That’s a 13% hike, year after year, for two decades. Of course, what you don’t pay as a patient you pay as a taxpayer. The U.S. government picks up the tab for one in three Americans by way of Medicare, Medicaid, the military, and other programs. According to the provisions of Bayh-Dole, the government gets a royalty-free use, forever, of its funded inventions. It has never tried to collect. You might say the taxpayers pay for the hat–and have it handed to them.
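As a quick sanity check on the quoted growth figure (the arithmetic here is mine, not Fortune's):

```python
# What compound yearly growth rate turns $12B in 1980 into $179B in 2003?
start, end, years = 12.0, 179.0, 2003 - 1980

rate = (end / start) ** (1 / years) - 1
print(round(rate * 100, 1))  # roughly 12.5% a year, close to the quoted 13%
```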


What patents on life has wrought

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

The Supreme Court’s decision in 1980 to allow for the patenting of living organisms opened the spigots to individual claims of ownership over everything from genes and protein receptors to biochemical pathways and processes. Soon, research scientists were swooping into patent offices around the world with “invention” disclosures that weren’t so much products or processes as they were simply knowledge–or research tools to further knowledge.

The problem is, once it became clear that individuals could own little parcels of biology or chemistry, the common domain of scientific exchange–that dynamic place where theories are introduced, then challenged, and ultimately improved–began to shrink. What’s more, as the number of claims grows, so do the overlapping claims and legal challenges. …

In October 1990 a researcher named Mary-Claire King at the University of California at Berkeley told the world that there was a breast-cancer susceptibility gene–and that it was on chromosome 17. Several other groups, sifting through 30 million base pairs of nucleotides to find the precise location of the gene, helped narrow the search with each new discovery. Then, in the spring of 1994, a team led by Mark Skolnick at the University of Utah beat everyone to the punch–identifying a gene with 5,592 base pairs that codes for a protein nearly 1,900 amino acids long. Skolnick’s team rushed to file a patent application and was issued title to the discovery three years later.

By all accounts the science was a collective effort. The NIH had funded scores of investigative teams around the country and given nearly 1,200 separate research grants to learn everything there was to learn about the genetics of breast cancer.

The patent, however, is licensed to one company–Skolnick’s. Myriad Genetics, a company the researcher founded in 1991, now insists on doing all U.S. testing for the presence of unknown mutations in the two related genes, BRCA1 and BRCA2. Those who have a mutation in either gene have as high as an 86% chance of getting cancer, say experts. The cost for the complete two-gene analysis: $2,975.

Critics say that Myriad’s ultrarestrictive licensing of the technology–one funded not only by federal dollars but also aided by the prior discoveries of hundreds of other scientists–is keeping the price of the test artificially high. Skolnick, 59, claims that the price is justified by his company’s careful analysis of thousands of base pairs of DNA, each of which is prone to a mutation or deletion, and by its educational outreach programs.


1980 Bayh-Dole Act created the biotech industry … & turned universities into businesses

From Clifton Leaf’s “The Law of Unintended Consequences” (Fortune: 19 September 2005):

For a century or more, the white-hot core of American innovation has been basic science. And the foundation of basic science has been the fluid exchange of ideas at the nation’s research universities. It has always been a surprisingly simple equation: Let scientists do their thing and share their work–and industry picks up the spoils. Academics win awards, companies make products, Americans benefit from an ever-rising standard of living.

That equation still holds, with the conspicuous exception of medical research. In this one area, something alarming has been happening over the past 25 years: Universities have evolved from public trusts into something closer to venture capital firms. What used to be a scientific community of free and open debate now often seems like a litigious scrum of data-hoarding and suspicion. And what’s more, Americans are paying for it through the nose. …

From 1992 to September 2003, pharmaceutical companies tied up the federal courts with 494 patent suits. That’s more than the number filed in the computer hardware, aerospace, defense, and chemical industries combined. Those legal expenses are part of a giant, hidden “drug tax”–a tax that has to be paid by someone. And that someone, as you’ll see below, is you. You don’t get the tab all at once, of course. It shows up in higher drug costs, higher tuition bills, higher taxes–and tragically, fewer medical miracles.

So how did we get to this sorry place? It was one piece of federal legislation that you’ve probably never heard of–a 1980 tweak to the U.S. patent and trademark law known as the Bayh-Dole Act. That single law, named for its sponsors, Senators Birch Bayh and Bob Dole, in essence transferred the title of all discoveries made with the help of federal research grants to the universities and small businesses where they were made.

Prior to the law’s enactment, inventors could always petition the government for the patent rights to their own work, though the rules were different at each federal agency; some 20 different statutes governed patent policy. The law simplified the “technology transfer” process and, more important, changed the legal presumption about who ought to own and develop new ideas–private enterprise as opposed to Uncle Sam. The new provisions encouraged academic institutions to seek out the clever ideas hiding in the backs of their research cupboards and to pursue licenses with business. And it told them to share some of the take with the actual inventors.

On the face of it, Bayh-Dole makes sense. Indeed, supporters say the law helped create the $43-billion-a-year biotech industry and has brought valuable drugs to market that otherwise would never have seen the light of day. What’s more, say many scholars, the law has created megaclusters of entrepreneurial companies–each an engine for high-paying, high-skilled jobs–all across the land.

That all sounds wonderful. Except that Bayh-Dole’s impact wasn’t so much in the industry it helped create, but rather in its unintended consequence–a legal frenzy that’s diverting scientists from doing science. …

A 1979 audit of government-held patents showed that fewer than 5% of some 28,000 discoveries–all of them made with the help of taxpayer money–had been developed, because no company was willing to risk the capital to commercialize them without owning title. …

A dozen schools–notably MIT, Stanford, the University of California, Johns Hopkins, and the University of Wisconsin–already had campus offices to work out licensing arrangements with government agencies and industry. But within a few years Technology Licensing Offices (or TLOs) were sprouting up everywhere. In 1979, American universities received 264 patents. By 1991, when a new organization, the Association of University Technology Managers, began compiling data, North American institutions (including colleges, research institutes, and hospitals) had filed 1,584 new U.S. patent applications and negotiated 1,229 licenses with industry–netting $218 million in royalties. By 2003 such institutions had filed five times as many new patent applications; they’d done 4,516 licensing deals and raked in over $1.3 billion in income. And on top of all that, 374 brand-new companies had sprouted from the wells of university research. That meant jobs pouring back into the community …

The anecdotal reports, fun “discovery stories” in alumni magazines, and numbers from the yearly AUTM surveys suggested that the academic productivity marvel had spread far and wide. But that’s hardly the case. Roughly a third of the new discoveries and more than half of all university licensing income in 2003 derived from just ten schools–MIT, Stanford, the usual suspects. They are, for the most part, the institutions that were pursuing “technology transfer” long before Bayh-Dole. …

Court dockets are now clogged with university patent claims. In 2002, North American academic institutions spent over $200 million in litigation (though some of that was returned in judgments)–more than five times the amount spent in 1991. Stanford Law School professor emeritus John Barton notes, in a 2000 study published in Science, that the indicator that correlates most closely with the rise in university patents is the number of intellectual-property lawyers. (Universities also spent $142 million on lobbying over the past six years.) …

So what do universities do with all their cash? That depends. Apart from the general guidelines provided by Bayh-Dole, which indicate the proceeds must be used for “scientific research or education,” there are no instructions. “These are unrestricted dollars that they can use, and so they’re worth a lot more than other dollars,” says University of Michigan law professor Rebecca Eisenberg, who has written extensively about the legislation. The one thing no school seems to use the money for is tuition–which apparently has little to do with “scientific research or education.” Meanwhile, the cost of university tuition soared at more than twice the rate of inflation from 1980 to 2005.


Neil Postman: the medium is the metaphor for the way we think

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

In the late 1980s the late Neil Postman wrote an enduringly important book called Amusing Ourselves to Death. In it he says that Marshall McLuhan only came close to getting it right in his famous adage that the medium is the message. Postman corrects McLuhan by saying that the medium is the metaphor – a metaphor for the way we think. Written narrative that people can read, Postman goes on, is a metaphor for thinking logically. And he says that image media bypass reason and go straight to the emotions. The image media are a metaphor for not thinking logically. Images disable thinking, so unless people read and use their reason, democracy is disabled as well.


Antitrust suits led to vertical integration & the IT revolution

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

As the industrial scholar Alfred D. Chandler has noted, the vertically integrated firm — which dominated the American economy for most of the last century — was to a great degree the product of antitrust enforcement. When Theodore Roosevelt began to limit the ability of large companies to grow horizontally, many responded by buying outside suppliers and integrating their operations into vertical lines of production. Many also set up internal research labs to improve existing products and develop new ones. Antitrust law later played a huge role in launching the information revolution. During the Cold War, the Justice Department routinely used antitrust suits to force high-tech firms to share the technologies they had developed. Targeted firms like IBM, RCA, AT&T, and Xerox spilled many thousands of patents onto the market, where they were available to any American competitor for free.


AACS, next-gen encryption for DVDs

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

AACS relies on the well-established AES (with 128-bit keys) to safeguard the disc data. Just like DVD players, HD DVD and Blu-ray drives will come with a set of Device Keys handed out to the manufacturers by AACS LA. Unlike the CSS encryption used in DVDs, though, AACS has a built-in method for revoking sets of keys that are cracked and made public. AACS-encrypted discs will feature a Media Key Block that all players need to access in order to get the key needed to decrypt the video files on the disc. The MKB can be updated by AACS LA to prevent certain sets of Device Keys from functioning with future titles – a feature that AACS dubs “revocation.” …
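The Device Key / Media Key Block flow described above can be sketched in miniature. This is a toy model only: real AACS uses AES-128 and a subset-difference tree of Device Keys, and every name and the XOR "cipher" below are illustrative stand-ins.

```python
import hashlib

# Toy model of AACS-style key revocation. The MKB wraps the media key
# under every still-licensed device key; a revoked key set simply finds
# no usable entry and cannot decrypt future titles.

MEDIA_KEY = b"media-key-for-this-title"

def xor_wrap(device_key: bytes, payload: bytes) -> bytes:
    """Stand-in for encrypting/decrypting the media key under a device key."""
    pad = hashlib.sha256(device_key).digest()
    return bytes(a ^ b for a, b in zip(payload.ljust(32, b"\0"), pad))

def make_mkb(device_keys, revoked):
    """Media Key Block: the media key wrapped under each non-revoked device key."""
    return {kid: xor_wrap(key, MEDIA_KEY)
            for kid, key in device_keys.items() if kid not in revoked}

def player_unlock(kid, device_key, mkb):
    """A player recovers the media key only if its key set is still licensed."""
    entry = mkb.get(kid)
    if entry is None:
        return None  # revoked: no future titles will play
    return xor_wrap(device_key, entry).rstrip(b"\0")

device_keys = {"vendor-A": b"secret-A", "leaked-B": b"secret-B"}
mkb = make_mkb(device_keys, revoked={"leaked-B"})
```

Revocation here is just the absence of a usable MKB entry for a compromised key set on newly pressed discs, which is essentially the effect AACS LA's "revocation" achieves.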

AACS also supports a new feature called the Image Constraint Token. When set, the ICT will force video output to be degraded over analog connections. ICT has so far gone unused, though this could change at any time. …

While AACS is used by both HD disc formats, the Blu-ray Disc Association (BDA) has added some features of its own to make the format “more secure” than HD DVD. The additions are BD+ and ROM Mark; though both are designed to thwart pirates, they work quite differently.

While the generic AACS spec includes key revocation, BD+ actually allows the BDA to update the entire encryption system once players have already shipped. Should encryption be cracked, new discs will include information that will alter the players’ decryption code. …
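The renewable-security idea behind BD+ can be illustrated with a toy model. Real BD+ ships bytecode for a security virtual machine embedded in licensed players; here the "patch" carried on a disc is simply a replacement Python function, and all names are invented for illustration.

```python
# Toy model of BD+-style renewable security: a disc can alter the
# player's decryption code before playback, so a cracked scheme can be
# replaced on players that have already shipped.

class Player:
    def __init__(self):
        # Factory-installed scheme (stand-in), assumed cracked after shipping.
        self.decrypt = lambda data: data[::-1]

    def load_disc(self, patch, payload):
        if patch is not None:
            self.decrypt = patch  # the disc updates the player's decryption code
        return self.decrypt(payload)

# A post-crack disc ships both the patch and content keyed to the new scheme.
def new_scheme(data):
    return bytes(b ^ 0x5A for b in data)

payload = bytes(b ^ 0x5A for b in b"feature film")
player = Player()
plaintext = player.load_disc(new_scheme, payload)
```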

The other new technology, ROM Mark, affects the manufacturing of Blu-ray discs. All Blu-ray mastering equipment must be licensed by the BDA, which will ensure that all of it carries ROM Mark technology. Whenever a legitimate disc is created, it is given a “unique and undetectable identifier.” It’s not undetectable to the player, though, and players can refuse to play discs without a ROM Mark. The BDA has the optimistic hope that this will keep industrial-scale piracy at bay. We’ll see.


How DVD encryption (CSS) works … or doesn’t

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

DVD players are factory-built with a set of keys. When a DVD is inserted, the player runs through every key it knows until one unlocks the disc. Once this disc key is known, the player uses it to retrieve a title key from the disc. This title key actually allows the player to unscramble the disc’s contents.
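The key ladder just described (player key unlocks disc key, disc key unlocks title key) can be sketched as follows. Real CSS uses a proprietary 40-bit cipher; the XOR stand-in here only illustrates the unlock sequence, and all key values are invented.

```python
import hashlib

# Toy model of the CSS key ladder: player key -> disc key -> title key.

def xor_crypt(key: bytes, blob: bytes) -> bytes:
    """Stand-in cipher: XOR against a hash-derived pad (its own inverse)."""
    pad = hashlib.sha256(key).digest()[:len(blob)]
    return bytes(a ^ b for a, b in zip(blob, pad))

DISC_KEY = b"disc-key"
TITLE_KEY = b"titlekey"

# The disc carries the disc key wrapped under many possible player keys,
# a check value for verifying a correct unwrap, and the wrapped title key.
licensed_player_keys = [b"player-A", b"player-B", b"player-C"]
disc_key_slots = [xor_crypt(pk, DISC_KEY) for pk in licensed_player_keys]
disc_key_check = hashlib.sha256(DISC_KEY).digest()
wrapped_title_key = xor_crypt(DISC_KEY, TITLE_KEY)

def player_get_title_key(my_key: bytes):
    # Step 1: try every slot until one unwraps to a verified disc key.
    for slot in disc_key_slots:
        candidate = xor_crypt(my_key, slot)
        if hashlib.sha256(candidate).digest() == disc_key_check:
            # Step 2: the disc key unwraps the title key, which
            # actually unscrambles the disc's contents.
            return xor_crypt(candidate, wrapped_title_key)
    return None

title_key = player_get_title_key(b"player-B")
```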

The decryption process might have been formidable when first drawn up, but it had begun to look weak even by 1999. Frank Stevenson, who published a good breakdown of the technology, estimated at that time that a 450 MHz Pentium III could crack the code in only 18 seconds – and that’s without even having a player key in the first place. In other words, a simple brute-force attack could crack the code at runtime, assuming that users were patient enough to wait up to 18 seconds. With today’s technology, of course, the same crack would be trivial.
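A quick back-of-envelope check puts Stevenson's figure in perspective: 18 seconds on a 450 MHz CPU is a budget of roughly 8 billion clock cycles, far too few to naively search CSS's 40-bit keyspace, so the attack must exploit structural weaknesses in the cipher rather than raw key search.

```python
# Back-of-envelope: the 18-second crack cannot be naive brute force.
cycle_budget = 18 * 450e6        # seconds * cycles per second on the P3
naive_keyspace = 2 ** 40         # CSS uses 40-bit keys
cycles_per_key = cycle_budget / naive_keyspace
print(f"{cycle_budget:.2e} cycles / {naive_keyspace:.2e} keys "
      f"= {cycles_per_key:.4f} cycles per key if searched naively")
```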

Once the code was cracked, the genie was out of the bottle. CSS descramblers proliferated …

Because the CSS system could not be updated once in the field, the entire system was all but broken. Attempts to patch the system (such as Macrovision’s “RipGuard”) met with limited success, and DVDs today remain easy to copy using a multitude of freely available tools.


Where we are technically with DRM

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

The attacks on FairPlay have been enlightening because of what they illustrate about the current state of DRM. They show, for instance, that modern DRM schemes are difficult to bypass, ignore, or strip out with a few lines of code. In contrast to older “patches” of computer software (where you would generally bypass a program’s authorization routine), the encryption on modern media files is pervasive. All of the software mentioned has still required Apple’s decoding technology to unscramble the song files; there is no hack that can simply strip the files clean without help, and the ciphers are complex enough to make brute-force cracks difficult.

Apple’s response has also been a reminder that cracking an encryption scheme once will no longer be enough in the networked era. Each time that its DRM has been bypassed, Apple has been able to push out updates to its customers that render the hacks useless (or at least make them more difficult to achieve).


Apple iTunes Music Store applies DRM after download

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

A third approach [to subverting Apple’s DRM] came from PyMusique, software originally written so that Linux users could access the iTunes Music Store. The software took advantage of the fact that iTMS transmits DRM-free songs to its customers and relies on iTunes to add that gooey layer of DRM goodness at the client end. PyMusique emulates iTunes and serves as a front end to the store, allowing users to browse and purchase music. When songs are downloaded, however, the program “neglects” to apply the FairPlay DRM.
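The flaw PyMusique exploited reduces to a single skipped step. The sketch below is a toy illustration, not Apple's or PyMusique's actual code; every function name is invented.

```python
# Toy model of client-side DRM application: if the store ships the clear
# file and trusts the client to wrap it, "protection" is one skipped call.

def apply_fairplay(song: bytes) -> bytes:
    return b"DRM:" + song                    # stand-in for FairPlay wrapping

def official_client_download(song_from_store: bytes) -> bytes:
    return apply_fairplay(song_from_store)   # iTunes wraps at the client end

def emulated_client_download(song_from_store: bytes) -> bytes:
    return song_from_store                   # a PyMusique-style client skips it
```

The design lesson is general: any enforcement step performed on hardware or software the user controls can simply be omitted by a replacement client.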


To combat phishing, change browser design philosophy

From Federico Biancuzzi’s “Phishing with Rachna Dhamija” (SecurityFocus: 19 June 2006):

We discovered that existing security cues are ineffective, for three reasons:

1. The indicators are ignored (23% of participants in our study did not look at the address bar, status bar, or any SSL indicators).

2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn’t realize that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.

3. The security indicators are trivial to spoof. Many users can’t distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate. This attack fooled more than 80% of participants. …

Currently, I’m working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study that shows that simply showing a user’s history information (for example, “you’ve been to this website many times” or “you’ve never submitted this form before”) can significantly increase a user’s ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I’ve been investigating is techniques to help users recover from errors and to identify when errors are real, or when they are simulated. Many attacks rely on users not being able to make this distinction.

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

Rachna Dhamija: I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customized security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user’s focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password. …

With security skins, we were trying to solve not user authentication, but the reverse problem – server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they have mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like “Server X is authenticated”, or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user’s browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can’t be replayed by an attacker and it won’t reveal anything useful about the user’s password. …
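The matching-image idea can be sketched in a few lines. The published DSS proposal renders an abstract "Random Art" image from a value derived during authentication; this toy version, with invented names, instead expands the negotiated key into a small grayscale pixel grid that both browser and server can draw for the user to compare.

```python
import hashlib
import hmac

def skin_pixels(session_key: bytes, size: int = 8):
    """Deterministically expand a shared key into a size x size pixel grid."""
    stream = b""
    counter = 0
    while len(stream) < size * size:
        stream += hmac.new(session_key, counter.to_bytes(4, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return [[stream[r * size + c] for c in range(size)] for r in range(size)]

# Same negotiated key -> identical images. An attacker who never completed
# the mutual authentication cannot predict what the user's browser will draw,
# and a captured image reveals nothing useful and cannot be replayed.
browser_image = skin_pixels(b"negotiated-session-key")
server_image = skin_pixels(b"negotiated-session-key")
phisher_image = skin_pixels(b"some-guess")
```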

Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can’t simulate is an interface he can’t predict. This is the principle that DSS relies on. We should make it easy for users to personalize their interfaces. Look at how popular screensavers, ringtones, and application skins are – users clearly enjoy the ability to personalize their interfaces. We can take advantage of this fact to build spoof-resistant interfaces.
