
Security decisions are often made for non-security reasons

From Bruce Schneier’s Crypto-Gram of 15 July 2004:

There was a single guard watching the X-ray machine’s monitor, and a line of people putting their bags onto the machine. The people themselves weren’t searched at all. Even worse, no guard was watching the people. So when I walked with everyone else in line and just didn’t put my bag onto the machine, no one noticed.

It was all good fun, and I very much enjoyed describing this to FinCorp’s VP of Corporate Security. He explained to me that he got a $5 million rate reduction from his insurance company by installing that X-ray machine and having some dogs sniff around the building a couple of times a week.

I thought the building’s security was a waste of money. It was actually a source of corporate profit.

The point of this story is one that I’ve made in ‘Beyond Fear’ and many other places: security decisions are often made for non-security reasons.


The real digital divide: knowing how to use what you have & not knowing

From Howard Rheingold’s interview in “Howard Rheingold’s Latest Connection” (Business Week: 11 August 2004):

Here’s where Wikipedia fits in. It used to be if you were a kid in a village in India or a village in northern Canada in the winter, maybe you could get to a place where they have a few books once in a while. Now, if you have a telephone, you can get a free encyclopedia. You have access to the world’s knowledge. Knowing how to use that is a barrier. The divide increasingly is not so much between those who have and those who don’t, but those who know how to use what they have and those who don’t.


The Pareto Principle & Temperament Dimensions

From David Brooks’ “More Tools For Thinking” (The New York Times: 29 March 2011):

Clay Shirky nominates the Pareto Principle. We have the idea in our heads that most distributions fall along a bell curve (most people are in the middle). But this is not how the world is organized in sphere after sphere. The top 1 percent of the population control 35 percent of the wealth. The top 2 percent of Twitter users send 60 percent of the messages. The top 20 percent of workers in any company will produce a disproportionate share of the value. Shirky points out that these distributions are regarded as anomalies. They are not.
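Shirky's point is easy to demonstrate numerically. The sketch below (plain Python; the shape parameter 1.16 is my assumption, chosen because an ideal Pareto distribution with that shape gives roughly the classic 80/20 split) samples power-law-distributed "wealth" and measures how much of the total the top 20 percent hold:

```python
import random

random.seed(42)

# Sample 100,000 "wealth" values from a Pareto distribution.
# In theory, with shape = 1.16 the top fraction p of holders owns
# about p ** (1 - 1/shape) of the total, i.e. the top 20% own ~80%.
shape = 1.16
values = sorted((random.paretovariate(shape) for _ in range(100_000)),
                reverse=True)

top_20 = values[: len(values) // 5]
share = sum(top_20) / sum(values)
print(f"Top 20% hold {share:.0%} of the total")
```

Run the same calculation on a bell-curve (normal) sample and the top 20 percent hold only slightly more than 20 percent, which is exactly the contrast Shirky is drawing.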

Helen Fisher, the great researcher into love and romance, has a provocative entry on “temperament dimensions.” She writes that we have four broad temperament constellations. One, built around the dopamine system, regulates enthusiasm for risk. A second, structured around the serotonin system, regulates sociability. A third, organized around the prenatal testosterone system, regulates attention to detail and aggressiveness. A fourth, organized around the estrogen and oxytocin systems, regulates empathy and verbal fluency.

This is an interesting schema to explain temperament. It would be interesting to see others in the field evaluate whether this is the best way to organize our thinking about our permanent natures.


Clay Shirky on the changes to publishing & media

From Parul Sehgal’s “Here Comes Clay Shirky” (Publishers Weekly: 21 June 2010):

PW: In April of this year, Wired‘s Kevin Kelly turned a Shirky quote—“Institutions will try to preserve the problem to which they are the solution”—into “the Shirky Principle,” in deference to the simple, yet powerful observation. … Kelly explained, “The Shirky Principle declares that complex solutions, like a company, or an industry, can become so dedicated to the problem they are the solution to, that often they inadvertently perpetuate the problem.”

CS: It is possible to think that the Internet will be a net positive for society while admitting that there are significant downsides—after all, it’s not a revolution if nobody loses.

No one will ever wonder, is there anything amusing for me on the Internet? That is a solved problem. What we should really care about are [the Internet’s] cultural uses.

In Here Comes Everybody I told the story of the Abbot of Sponheim who in 1492 wrote a book saying that if this printing press thing is allowed to expand, what will the scribes do for a living? But it was more important that Europe be literate than for scribes to have a job.

In a world where a book had to be a physical object, charging money was a way to cause more copies to come into circulation. In the digital world, charging money for something is a way to produce fewer copies. There is no way to preserve the status quo and not abandon that value.

Some of it’s the brilliant Upton Sinclair observation: “It’s hard to make a man understand something if his livelihood depends on him not understanding it.” From the laying on of hands of [Italian printer] Aldus Manutius on down, publishing has always been this way. This is a medium where a change to glue-based paperback binding constituted a revolution.

PW: When do you think a similar realization will come to book publishing?

CS: I think someone will make the imprint that bypasses the traditional distribution networks. Right now the big bottleneck is the head buyer at Barnes & Noble. That’s the seawall holding back the flood in publishing. Someone’s going to say, “I can do a business book or a vampire book or a romance novel, whatever, that might sell 60% of the units it would sell if I had full distribution and a multimillion dollar marketing campaign—but I can do it for 1% of the cost.” It has already happened a couple of times with specialty books. The moment of tip happens when enough things get joined up to create their own feedback loop, and the feedback loop in publishing changes when someone at Barnes & Noble says: “We can’t afford not to stock this particular book or series from an independent publisher.” It could be on Lulu, or iUniverse, whatever. And, I feel pretty confident saying it’s going to happen in the next five years.


These are their brilliant plans to save magazines?

From Jeremy W. Peters’ “In Magazine World, a New Crop of Chiefs” (The New York Times: 28 November 2010):

“This is the changing of the guard from an older school to a newer school,” said Justin B. Smith, president of the Atlantic Media Company. The changes, he added, were part of an inevitable evolution in publishing that was perhaps long overdue. “It is quite remarkable that it took until 2010, 15 years after the arrival of the Internet, for a new generation of leaders to emerge.”

At Time, the world’s largest magazine publisher, Mr. Griffin said he wanted to reintroduce the concept of “charging a fair price, and charging consumers who are interested in the product.” In other words, consumers can expect to pay more. “We spent a tremendous amount of money creating original content, original journalism, fact-checking, sending reporters overseas to cover wars,” he said. “You name it. What we’ve got to do as a business is get fair value for that.” Supplementing that approach, Mr. Griffin said, will be new partnerships within Time Warner, Time Inc.’s parent company, that allow magazines to take advantage of the vast film and visual resources at their disposal. One such partnership in the planning stages, he said, is a deal between a major cosmetics company and InStyle to broadcast from the red carpets of big Hollywood events like the Academy Awards and the Screen Actors Guild Awards.

But one thing Mr. Harty said the company was examining: expanding its licensed products. The company already pulls in more than a billion dollars a year selling products with a Better Homes and Gardens license at Wal-Mart stores. It is now planning to sell plants and bulbs with the magazine’s imprimatur directly to consumers. “We have relationships with all these consumers,” Mr. Harty said. “How can we figure out how to sell them goods and services? We believe that’s a key.”


Shelby Foote on how the Civil War changed the gender of the teaching profession

From Carter Coleman, Donald Faulkner, & William Kennedy’s interview of Shelby Foote in “The Art of Fiction No. 158” (The Paris Review: Summer 1999, No. 151):

About the time that war started I think roughly eighty-five or ninety percent of the teachers in this country were men. After the war was over something like eighty-five to ninety percent of teachers were women.


My response to the news that “Reader, Acrobat Patches Plug 23 Security Holes”

I sent this email out earlier today to friends & students:

For the love of Pete, people, if you use Adobe Acrobat Reader, update it.

http://krebsonsecurity.com/2010/10/reader-acrobat-patches-plug-23-security-holes/

But here’s a better question: why are you using Adobe Reader in the first place? It’s one of the WORST programs for security you can have on your computer. And most of the time, you just don’t need it!

If you use Windows, give Foxit Reader (http://www.foxitsoftware.com/pdf/reader/) a whirl. It’s free!

If you use a Mac, you already have a great PDF reader installed with your operating system: Preview. Use it.

The ONLY reason to use Adobe Reader is to fill out tax forms. When I need to do that, I download Adobe Reader, download the PDFs from the gubmint, fill out the PDFs, send ’em to the Feds & the State, & then remove Adobe Reader. I encourage others to do the same.


How the Madden NFL videogame was developed

From Patrick Hruby’s “The Franchise: The inside story of how Madden NFL became a video game dynasty” (ESPN: 22 July 2010):

1982

Harvard grad and former Apple employee Trip Hawkins founds video game maker Electronic Arts, in part to create a football game; one year later, the company releases “One-on-One: Dr. J vs. Larry Bird,” the first game to feature licensed sports celebrities. Art imitates life.

1983-84

Hawkins approaches former Oakland Raiders coach and NFL television analyst John Madden to endorse a football game. Madden agrees, but insists on realistic game play with 22 on-screen players, a daunting technical challenge.

1988-90

EA releases the first Madden football game for the Apple II home computer; a subsequent Sega Genesis home console port blends the Apple II game’s realism with control pad-heavy, arcade-style action, becoming a smash hit.


You can measure the impact of “Madden” through its sales: as many as 2 million copies in a single week, 85 million copies since the game’s inception and more than $3 billion in total revenue. You can chart the game’s ascent, shoulder to shoulder, alongside the $20 billion-a-year video game industry, which is either co-opting Hollywood (see “Tomb Raider” and “Prince of Persia”) or topping it (opening-week gross of “Call of Duty: Modern Warfare 2”: $550 million; “The Dark Knight”: $204 million).

Some of the pain was financial. Just as EA brought its first games to market in 1983, the home video game industry imploded. In a two-year span, Coleco abandoned the business, Intellivision went from 1,200 employees to five and Atari infamously dumped thousands of unsold game cartridges into a New Mexico landfill. Toy retailers bailed, concluding that video games were a Cabbage Patch-style fad. Even at EA — a hot home computer startup — continued solvency was hardly assured.

In 1988, “John Madden Football” was released for the Apple II computer and became a modest commercial success.

THE STAKES WERE HIGH for a pair of upstart game makers, with a career-making opportunity and a $100,000 development contract on the line. In early 1990, Troy Lyndon and Mike Knox of San Diego-based Park Place Productions met with Hawkins to discuss building a “Madden” game for Sega’s upcoming home video game console, the Genesis. …

Because the game that made “Madden” a phenomenon wasn’t the initial Apple II release, it was the Genesis follow-up, a surprise smash spawned by an entirely different mindset. Hawkins wanted “Madden” to play out like the NFL. Equivalent stats. Similar play charts. Real football.

In 1990, EA had a market cap of about $60 million; three years later, that number swelled to $2 billion.

In 2004, EA paid the NFL a reported $300 million-plus for five years of exclusive rights to teams and players. The deal was later extended to 2013. Just like that, competing games went kaput. The franchise stands alone, triumphant, increasingly encumbered by its outsize success.

Hawkins left EA in the early 1990s to spearhead 3DO, an ill-fated console maker that became a doomed software house. An icy rift between the company and its founder ensued.


Refusing a technology defines you

From Sander Duivestein’s “Penny Thoughts on the Technium” (The Technium: 1 December 2009):

I’m interested in how people personally decide to refuse a technology. I’m interested in that process, because I think it will happen more and more as the number of technologies keeps increasing. The only way we can sort out our identity is by not using technology. It used to be that you defined yourself by what you use. Now you define yourself by what you don’t use. So I’m interested in that process.


Ambient awareness & social media

From Clive Thompson’s “Brave New World of Digital Intimacy” (The New York Times Magazine: 5 September 2008):

In essence, Facebook users didn’t think they wanted constant, up-to-the-minute updates on what other people are doing. Yet when they experienced this sort of omnipresent knowledge, they found it intriguing and addictive. Why?

Social scientists have a name for this sort of incessant online contact. They call it “ambient awareness.” It is, they say, very much like being physically near someone and picking up on his mood through the little things he does — body language, sighs, stray comments — out of the corner of your eye. Facebook is no longer alone in offering this sort of interaction online. In the last year, there has been a boom in tools for “microblogging”: posting frequent tiny updates on what you’re doing. The phenomenon is quite different from what we normally think of as blogging, because a blog post is usually a written piece, sometimes quite long: a statement of opinion, a story, an analysis. But these new updates are something different. They’re far shorter, far more frequent and less carefully considered. One of the most popular new tools is Twitter, a Web site and messaging service that allows its two-million-plus users to broadcast to their friends haiku-length updates — limited to 140 characters, as brief as a mobile-phone text message — on what they’re doing. There are other services for reporting where you’re traveling (Dopplr) or for quickly tossing online a stream of the pictures, videos or Web sites you’re looking at (Tumblr). And there are even tools that give your location. When the new iPhone, with built-in tracking, was introduced in July, one million people began using Loopt, a piece of software that automatically tells all your friends exactly where you are.

This is the paradox of ambient awareness. Each little update — each individual bit of social information — is insignificant on its own, even supremely mundane. But taken together, over time, the little snippets coalesce into a surprisingly sophisticated portrait of your friends’ and family members’ lives, like thousands of dots making a pointillist painting. This was never before possible, because in the real world, no friend would bother to call you up and detail the sandwiches she was eating. The ambient information becomes like “a type of E.S.P.,” as Haley described it to me, an invisible dimension floating over everyday life.

“It’s like I can distantly read everyone’s mind,” Haley went on to say. “I love that. I feel like I’m getting to something raw about my friends. It’s like I’ve got this heads-up display for them.” It can also lead to more real-life contact, because when one member of Haley’s group decides to go out to a bar or see a band and Twitters about his plans, the others see it, and some decide to drop by — ad hoc, self-organizing socializing. And when they do socialize face to face, it feels oddly as if they’ve never actually been apart. They don’t need to ask, “So, what have you been up to?” because they already know. Instead, they’ll begin discussing something that one of the friends Twittered that afternoon, as if picking up a conversation in the middle.

You could also regard the growing popularity of online awareness as a reaction to social isolation, the modern American disconnectedness that Robert Putnam explored in his book “Bowling Alone.” The mobile workforce requires people to travel more frequently for work, leaving friends and family behind, and members of the growing army of the self-employed often spend their days in solitude. Ambient intimacy becomes a way to “feel less alone,” as more than one Facebook and Twitter user told me.


Bernie Madoff & the 1st worldwide Ponzi scheme

From Diana B. Henriques’s “Madoff Scheme Kept Rippling Outward, Across Borders” (The New York Times: 20 December 2008):

But whatever else Mr. Madoff’s game was, it was certainly this: The first worldwide Ponzi scheme — a fraud that lasted longer, reached wider and cut deeper than any similar scheme in history, entirely eclipsing the puny regional ambitions of Charles Ponzi, the Boston swindler who gave his name to the scheme nearly a century ago.

Regulators say Mr. Madoff himself estimated that $50 billion in personal and institutional wealth from around the world was gone. … Before it evaporated, it helped finance Mr. Madoff’s coddled lifestyle, with a Manhattan apartment, a beachfront mansion in the Hamptons, a small villa overlooking Cap d’Antibes on the French Riviera, a Mayfair office in London and yachts in New York, Florida and the Mediterranean.

In 1960, as Wall Street was just shaking off its postwar lethargy and starting to buzz again, Bernie Madoff (pronounced MAY-doff) set up his small trading firm. His plan was to make a business out of trading lesser-known over-the-counter stocks on the fringes of the traditional stock market. He was just 22, a graduate of Hofstra University on Long Island.

By 1989, Mr. Madoff’s firm was handling more than 5 percent of the trading volume on the august New York Stock Exchange …

And in 1990, he became the nonexecutive chairman of the Nasdaq market, which at the time was operated as a committee of the National Association of Securities Dealers.

His rise on Wall Street was built on his belief in a visionary notion that seemed bizarre to many at the time: That stocks could be traded by people who never saw each other but were connected only by electronics.

In the mid-1970s, he had spent over $250,000 to upgrade the computer equipment at the Cincinnati Stock Exchange, where he began offering to buy and sell stocks that were listed on the Big Board. The exchange, in effect, was transformed into the first all-electronic computerized stock exchange.

He also invested in new electronic trading technology for his firm, making it cheaper for brokerage firms to fill their stock orders. He eventually gained a large amount of business from big firms like A. G. Edwards & Sons, Charles Schwab & Company, Quick & Reilly and Fidelity Brokerage Services.

By the end of the technology bubble in 2000, his firm was the largest market maker on the Nasdaq electronic market, and he was a member of the Securities Industry Association, now known as the Securities Industry and Financial Markets Association, Wall Street’s principal lobbying arm.


Australian police: don’t bank online with Windows

From Munir Kotadia’s “NSW Police: Don’t use Windows for internet banking” (ITnews: 9 October 2009):

Consumers wanting to safely connect to their internet banking service should use Linux or the Apple iPhone, according to a detective inspector from the NSW Police, who was giving evidence on behalf of the NSW Government at the public hearing into Cybercrime today in Sydney.

Detective Inspector Bruce van der Graaf from the Computer Crime Investigation Unit told the hearing that he uses two rules to protect himself from cybercriminals when banking online.

The first rule, he said, was to never click on hyperlinks to the banking site and the second was to avoid Microsoft Windows.

“If you are using the internet for a commercial transaction, use a Linux boot up disk – such as Ubuntu or some of the other flavours. Puppylinux is a nice small distribution that boots up fairly quickly.”

Van der Graaf also mentioned the iPhone, which he called “quite safe” for internet banking.

“Another option is the Apple iPhone. It is only capable of running one process at a time so there is really no danger from infection,” he said.


Coppola on changes in the movie industry

From Bloomberg’s “Francis Ford Coppola Sees Cinema World Falling Apart: Interview” (12 October 2009):

“The cinema as we know it is falling apart,” says Francis Ford Coppola.

“It’s a period of incredible change,” says the director of “The Godfather” and “Apocalypse Now.” “We used to think of six, seven big film companies. Every one of them is under great stress now. Probably two or three will go out of business and the others will just make certain kind of films like ‘Harry Potter’ — basically trying to make ‘Star Wars’ over and over again, because it’s a business.”

“Cinema is losing the public’s interest,” says Coppola, “because there is so much it has to compete with to get people’s time.”

The profusion of leisure activities; the availability of movies on copied DVD and on the Internet; and news becoming entertainment are reshaping the industry, he says. Companies have combined businesses as customers turn to cheap downloads rather than visit shops or movie theaters.

“I think the cinema is going to live off into something more related to a live performance in which the filmmaker is there, like the conductor of an opera used to be,” Coppola says. “Cinema can be interactive, every night it can be a little different.”


Apple’s role in technology


From Doc Searls’s “The Most Personal Device” (Linux Journal: 1 March 2009):

My friend Keith Hopper made an interesting observation recently. He said one of Apple’s roles in the world is finding categories where progress is logjammed, and opening things up by coming out with a single solution that takes care of everything, from the bottom to the top. Apple did it with graphical computing, with .mp3 players, with on-line music sales and now with smartphones. In each case, it opens up whole new territories that can then be settled and expanded by other products, services and companies. Yes, it’s closed and controlling and the rest of it. But what matters is the new markets that open up.


What Google’s book settlement means


From Robert Darnton’s “Google & the Future of Books” (The New York Review of Books: 12 February 2009):

As the Enlightenment faded in the early nineteenth century, professionalization set in. You can follow the process by comparing the Encyclopédie of Diderot, which organized knowledge into an organic whole dominated by the faculty of reason, with its successor from the end of the eighteenth century, the Encyclopédie méthodique, which divided knowledge into fields that we can recognize today: chemistry, physics, history, mathematics, and the rest. In the nineteenth century, those fields turned into professions, certified by Ph.D.s and guarded by professional associations. They metamorphosed into departments of universities, and by the twentieth century they had left their mark on campuses…

Along the way, professional journals sprouted throughout the fields, subfields, and sub-subfields. The learned societies produced them, and the libraries bought them. This system worked well for about a hundred years. Then commercial publishers discovered that they could make a fortune by selling subscriptions to the journals. Once a university library subscribed, the students and professors came to expect an uninterrupted flow of issues. The price could be ratcheted up without causing cancellations, because the libraries paid for the subscriptions and the professors did not. Best of all, the professors provided free or nearly free labor. They wrote the articles, refereed submissions, and served on editorial boards, partly to spread knowledge in the Enlightenment fashion, but mainly to advance their own careers.

The result stands out on the acquisitions budget of every research library: the Journal of Comparative Neurology now costs $25,910 for a year’s subscription; Tetrahedron costs $17,969 (or $39,739, if bundled with related publications as a Tetrahedron package); the average price of a chemistry journal is $3,490; and the ripple effects have damaged intellectual life throughout the world of learning. Owing to the skyrocketing cost of serials, libraries that used to spend 50 percent of their acquisitions budget on monographs now spend 25 percent or less. University presses, which depend on sales to libraries, cannot cover their costs by publishing monographs. And young scholars who depend on publishing to advance their careers are now in danger of perishing.

The eighteenth-century Republic of Letters had been transformed into a professional Republic of Learning, and it is now open to amateurs—amateurs in the best sense of the word, lovers of learning among the general citizenry. Openness is operating everywhere, thanks to “open access” repositories of digitized articles available free of charge, the Open Content Alliance, the Open Knowledge Commons, OpenCourseWare, the Internet Archive, and openly amateur enterprises like Wikipedia. The democratization of knowledge now seems to be at our fingertips. We can make the Enlightenment ideal come to life in reality.

What provoked these jeremianic-utopian reflections? Google. Four years ago, Google began digitizing books from research libraries, providing full-text searching and making books in the public domain available on the Internet at no cost to the viewer. For example, it is now possible for anyone, anywhere to view and download a digital copy of the 1871 first edition of Middlemarch that is in the collection of the Bodleian Library at Oxford. Everyone profited, including Google, which collected revenue from some discreet advertising attached to the service, Google Book Search. Google also digitized an ever-increasing number of library books that were protected by copyright in order to provide search services that displayed small snippets of the text. In September and October 2005, a group of authors and publishers brought a class action suit against Google, alleging violation of copyright. Last October 28, after lengthy negotiations, the opposing parties announced agreement on a settlement, which is subject to approval by the US District Court for the Southern District of New York.[2]

The settlement creates an enterprise known as the Book Rights Registry to represent the interests of the copyright holders. Google will sell access to a gigantic data bank composed primarily of copyrighted, out-of-print books digitized from the research libraries. Colleges, universities, and other organizations will be able to subscribe by paying for an “institutional license” providing access to the data bank. A “public access license” will make this material available to public libraries, where Google will provide free viewing of the digitized books on one computer terminal. And individuals also will be able to access and print out digitized versions of the books by purchasing a “consumer license” from Google, which will cooperate with the registry for the distribution of all the revenue to copyright holders. Google will retain 37 percent, and the registry will distribute 63 percent among the rightsholders.
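The 37/63 division is simple to express in code. This helper (the function name is mine, not from the settlement) applies the quoted percentages to a gross license payment, working in integer cents so the two shares always sum exactly to the original amount:

```python
def split_license_revenue(gross_cents: int) -> tuple[int, int]:
    """Split a license payment per the settlement terms quoted above:
    Google retains 37%, and the Book Rights Registry distributes the
    remaining 63% among the rightsholders. Integer cents avoid
    floating-point rounding drift."""
    google = gross_cents * 37 // 100
    registry = gross_cents - google  # the remainder goes to the registry
    return google, registry

print(split_license_revenue(100_000))  # a $1,000.00 payment -> (37000, 63000)
```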

Meanwhile, Google will continue to make books in the public domain available for users to read, download, and print, free of charge. Of the seven million books that Google reportedly had digitized by November 2008, one million are works in the public domain; one million are in copyright and in print; and five million are in copyright but out of print. It is this last category that will furnish the bulk of the books to be made available through the institutional license.

Many of the in-copyright and in-print books will not be available in the data bank unless the copyright owners opt to include them. They will continue to be sold in the normal fashion as printed books and also could be marketed to individual customers as digitized copies, accessible through the consumer license for downloading and reading, perhaps eventually on e-book readers such as Amazon’s Kindle.

After reading the settlement and letting its terms sink in—no easy task, as it runs to 134 pages and 15 appendices of legalese—one is likely to be dumbfounded: here is a proposal that could result in the world’s largest library. It would, to be sure, be a digital library, but it could dwarf the Library of Congress and all the national libraries of Europe. Moreover, in pursuing the terms of the settlement with the authors and publishers, Google could also become the world’s largest book business—not a chain of stores but an electronic supply service that could out-Amazon Amazon.

An enterprise on such a scale is bound to elicit reactions of the two kinds that I have been discussing: on the one hand, utopian enthusiasm; on the other, jeremiads about the danger of concentrating power to control access to information.

Google is not a guild, and it did not set out to create a monopoly. On the contrary, it has pursued a laudable goal: promoting access to information. But the class action character of the settlement makes Google invulnerable to competition. Most book authors and publishers who own US copyrights are automatically covered by the settlement. They can opt out of it; but whatever they do, no new digitizing enterprise can get off the ground without winning their assent one by one, a practical impossibility, or without becoming mired down in another class action suit. If approved by the court—a process that could take as much as two years—the settlement will give Google control over the digitizing of virtually all books covered by copyright in the United States.

Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

Google’s record suggests that it will not abuse its double-barreled fiscal-legal power. But what will happen if its current leaders sell the company or retire? The public will discover the answer from the prices that the future Google charges, especially the price of the institutional subscription licenses. The settlement leaves Google free to negotiate deals with each of its clients, although it announces two guiding principles: “(1) the realization of revenue at market rates for each Book and license on behalf of the Rightsholders and (2) the realization of broad access to the Books by the public, including institutions of higher education.”

What will happen if Google favors profitability over access? Nothing, if I read the terms of the settlement correctly. Only the registry, acting for the copyright holders, has the power to force a change in the subscription prices charged by Google, and there is no reason to expect the registry to object if the prices are too high. Google may choose to be generous in its pricing, and I have reason to hope it may do so; but it could also employ a strategy comparable to the one that proved to be so effective in pushing up the price of scholarly journals: first, entice subscribers with low initial rates, and then, once they are hooked, ratchet up the rates as high as the traffic will bear.

What Google’s book settlement means Read More »

RFID dust

RFID dust from Hitachi

From David Becker’s “Hitachi Develops RFID Powder” (Wired: 15 February 2007):

[Hitachi] recently showed a prototype of an RFID chip measuring .05 millimeters square and 5 microns thick, about the size of a grain of sand. They expect to have ‘em on the market in two or three years.

The chips are packed with 128 bits of static memory, enough to hold a 38-digit ID number.

The size makes the new chips ideal for embedding in paper, where they could verify the legitimacy of currency or event tickets. Implantation under the skin would be trivial…
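
As a quick sanity check on those figures (my arithmetic, not the article’s): 128 bits really is enough to hold any 38-digit decimal ID, since 10^38 is smaller than 2^128.

```python
# 128 bits can represent integers up to 2**128 - 1,
# which is a 39-digit decimal number, so any 38-digit
# ID number (< 10**38) fits with room to spare.
max_128 = 2**128 - 1
largest_38_digit_id = 10**38 - 1

print(len(str(max_128)))               # 39 decimal digits
print(largest_38_digit_id <= max_128)  # True
```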

RFID dust Read More »

RFID security problems

Old British passport cover
Creative Commons License photo credit: sleepymyf

2005

From Brian Krebs’ “Leaving Las Vegas: So Long DefCon and Blackhat” (The Washington Post: 1 August 2005):

DefCon 13 also was notable for being the location where two new world records were set — both involved shooting certain electronic signals unprecedented distances. Los Angeles-based Flexilis set the world record for transmitting data to and from a “passive” radio frequency identification (RFID) card — covering a distance of more than 69 feet. (Passive RFID — the kind being integrated into foreign passports, for example — differs from active RFID in that it has no power source of its own and normally can be detected only from a much shorter distance.)

The second record set this year at DefCon was pulled off by some teens from Cincinnati, who broke the world record they set last year by building a device capable of maintaining an unamplified, 11-megabit 802.11b wireless Internet connection over a distance of 125 miles (the network actually spanned from Utah into Nevada).

From Andrew Brandt’s “Black Hat, Lynn Settle with Cisco, ISS” (PC World: 29 July 2005):

Security researcher Kevin Mahaffey makes a final adjustment to a series of radio antennas; Mahaffey used the directional antennas in a demonstration during his presentation, “Long Range RFID and its Security Implications.” Mahaffey and two of his colleagues demonstrated how he could increase the “read range” of radio frequency identification (RFID) tags from the typical four to six inches to approximately 50 feet. Mahaffey said the tags could be read at a longer distance, but he wanted to perform the demonstration in the room where he gave the presentation, and that was the greatest distance within the room that he could demonstrate. RFID tags such as the one Mahaffey tested will begin to appear in U.S. passports later this year or next year.

2006

From Joris Evers and Declan McCullagh’s “Researchers: E-passports pose security risk” (CNET: 5 August 2006):

At a pair of security conferences here, researchers demonstrated that passports equipped with radio frequency identification (RFID) tags can be cloned with a laptop equipped with a $200 RFID reader and a similarly inexpensive smart card writer. In addition, they suggested that RFID tags embedded in travel documents could identify U.S. passports from a distance, possibly letting terrorists use them as a trigger for explosives.

At the Black Hat conference, Lukas Grunwald, a researcher with DN-Systems in Hildesheim, Germany, demonstrated that he could copy data stored in an RFID tag from his passport and write the data to a smart card equipped with an RFID chip.

From Kim Zetter’s “Hackers Clone E-Passports” (Wired: 3 August 2006):

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker — Walluf, Germany-based ACG Identification Technologies — but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports — called Golden Reader Tool and made by secunet Security Networks — and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader — which can also act as a writer — and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

Although he can clone the tag, Grunwald says it’s not possible, as far as he can tell, to change data on the chip, such as the name or birth date, without being detected. That’s because the passport uses cryptographic hashes to authenticate the data.

Grunwald’s technique requires a counterfeiter to have physical possession of the original passport for a time. A forger could not surreptitiously clone a passport in a traveler’s pocket or purse because of a built-in privacy feature called Basic Access Control that requires officials to unlock a passport’s RFID chip before reading it. The chip can only be unlocked with a unique key derived from the machine-readable data printed on the passport’s page.

To produce a clone, Grunwald has to program his copycat chip to answer to the key printed on the new passport. Alternatively, he can program the clone to dispense with Basic Access Control, which is an optional feature in the specification.
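
The Basic Access Control key mentioned above is derived, per ICAO Doc 9303, from the machine-readable zone printed in the passport: the document number, birth date, and expiry date (each with its check digit) are hashed with SHA-1 to produce a seed, from which encryption and MAC keys are derived. A minimal sketch of that derivation (field formatting and DES parity adjustment omitted):

```python
import hashlib

def bac_seed_key(mrz_info: str) -> bytes:
    """K_seed per ICAO Doc 9303: the first 16 bytes of SHA-1 over
    the MRZ information (document number, birth date, and expiry
    date, each followed by its check digit)."""
    return hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]

def derive_key(k_seed: bytes, counter: int) -> bytes:
    """Key material: SHA-1(K_seed || c) truncated to 16 bytes.
    c = 1 yields the encryption key, c = 2 the MAC key
    (the 3DES parity-bit adjustment is omitted here)."""
    c = counter.to_bytes(4, "big")
    return hashlib.sha1(k_seed + c).digest()[:16]
```

This is why the chip cannot be read surreptitiously from a closed passport: without the printed MRZ data, an attacker cannot compute the key the chip demands.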

As planned, U.S. e-passports will contain a web of metal fiber embedded in the front cover of the documents to shield them from unauthorized readers. Though Basic Access Control would keep the chip from yielding useful information to attackers, it would still announce its presence to anyone with the right equipment. The government added the shielding after privacy activists expressed worries that a terrorist could simply point a reader at a crowd and identify foreign travelers.

In theory, with metal fibers in the front cover, nobody can sniff out the presence of an e-passport that’s closed. But [Kevin Mahaffey and John Hering of Flexilis] demonstrated in their video how even if a passport opens only half an inch — such as it might if placed in a purse or backpack — it can reveal itself to a reader at least two feet away.

In addition to cloning passport chips, Grunwald has been able to clone RFID ticket cards used by students at universities to buy cafeteria meals and add money to the balance on the cards.

He and his partners were also able to crash RFID-enabled alarm systems designed to sound when an intruder breaks a window or door to gain entry. Such systems require workers to pass an RFID card over a reader to turn the system on and off. Grunwald found that by manipulating data on the RFID chip he could crash the system, opening the way for a thief to break into the building through a window or door.

And they were able to clone and manipulate RFID tags used in hotel room key cards and corporate access cards and create a master key card to open every room in a hotel, office or other facility. He was able, for example, to clone Mifare, the most commonly used key-access system, designed by Philips Electronics. To create a master key he simply needed two or three key cards for different rooms to determine the structure of the cards. Of the 10 different types of RFID systems he examined that were being used in hotels, none used encryption.

Many of the card systems that did use encryption failed to change the default key that manufacturers program into the access card system before shipping, or they used sample keys that the manufacturer includes in instructions sent with the cards. Grunwald and his partners created a dictionary database of all the sample keys they found in such literature (much of which they found accidentally published on purchasers’ websites) to conduct what’s known as a dictionary attack. When attacking a new access card system, their RFDump program would search the list until it found the key that unlocked a card’s encryption.

“I was really surprised we were able to open about 75 percent of all the cards we collected,” he says.
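
The dictionary attack Grunwald describes can be sketched in a few lines. This is a hypothetical illustration, not RFDump’s actual code: walk a list of manufacturer default and sample keys until the card accepts one. The `try_key` callback stands in for whatever card-authentication call a real RFID toolkit exposes; the first two keys below are genuinely well-known Mifare Classic defaults.

```python
# Widely published default/sample keys of the kind harvested
# from vendor documentation (and purchasers' websites).
DEFAULT_KEYS = [
    bytes.fromhex("FFFFFFFFFFFF"),  # Mifare Classic transport key
    bytes.fromhex("A0A1A2A3A4A5"),  # widely published sample key
    bytes.fromhex("000000000000"),
]

def dictionary_attack(try_key, keys=DEFAULT_KEYS):
    """Return the first key the card accepts, or None if the
    dictionary is exhausted."""
    for key in keys:
        if try_key(key):
            return key
    return None
```

Against cards still carrying a factory or sample key, a list like this succeeds immediately, which is consistent with Grunwald opening roughly 75 percent of the cards he collected.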

2009

From Thomas Ricker’s “Video: Hacker war drives San Francisco cloning RFID passports” (Engadget: 2 February 2009):

Using a $250 Motorola RFID reader and antenna connected to his laptop, Chris recently drove around San Francisco reading RFID tags from passports, driver licenses, and other identity documents. In just 20 minutes, he found and cloned the passports of two very unaware US citizens.

RFID security problems Read More »

Huck Finn caged

From Nicholas Carr’s “Sivilized” (Rough Type: 27 June 2009):

Michael Chabon, in an elegiac essay in the new edition of the New York Review of Books, rues the loss of the “Wilderness of Childhood” – the unparented, unfenced, only partially mapped territory that was once the scene of youth.

Huck Finn, now fully under the thumb of Miss Watson and the Widow Douglas, spends his unscheduled time wandering the fabricated landscapes of World of Warcraft, seeking adventure.

Huck Finn caged Read More »

The future of news as shown by the 2008 election

From Steven Berlin Johnson’s “Old Growth Media And The Future Of News” (StevenBerlinJohnson.com: 14 March 2009):

The first Presidential election that I followed in an obsessive way was the 1992 election that Clinton won. I was as compulsive a news junkie about that campaign as I was about the Mac in college: every day the Times would have a handful of stories about the campaign stops or debates or latest polls. Every night I would dutifully tune into Crossfire to hear what the punditocracy had to say about the day’s events. I read Newsweek and Time and the New Republic, and scoured the New Yorker for its occasional political pieces. When the debates aired, I’d watch religiously and stay up late soaking in the commentary from the assembled experts.

That was hardly a desert, to be sure. But compare it to the information channels that were available to me following the 2008 election. Everything I relied on in 1992 was still around of course – except for the late, lamented Crossfire – but it was now part of a vast new forest of news, data, opinion, satire – and perhaps most importantly, direct experience. Sites like Talking Points Memo and Politico did extensive direct reporting. Daily Kos provided in-depth surveys and field reports on state races that the Times would never have had the ink to cover. Individual bloggers like Andrew Sullivan responded to each twist in the news cycle; HuffPo culled the most provocative opinion pieces from the rest of the blogosphere. Nate Silver at fivethirtyeight.com did meta-analysis of polling that blew away anything William Schneider dreamed of doing on CNN in 1992. When the economy imploded in September, I followed economist bloggers like Brad DeLong to get their expert take on the candidates’ responses to the crisis. (Yochai Benkler talks about this phenomenon of academics engaging with the news cycle in a smart response here.) I watched the debates with a thousand virtual friends live-Twittering alongside me on the couch. All this was filtered and remixed through the extraordinary political satire of Jon Stewart and Stephen Colbert, which I watched via viral clips on the Web as much as I watched on TV.

What’s more: the ecosystem of political news also included information coming directly from the candidates. Think about the Philadelphia race speech, arguably one of the two or three most important events in the whole campaign. Eight million people watched it on YouTube alone. Now, what would have happened to that speech had it been delivered in 1992? Would any of the networks have aired it in its entirety? Certainly not. It would have been reduced to a minute-long soundbite on the evening news. CNN probably would have aired it live, which might have meant that 500,000 people caught it. Fox News and MSNBC? They didn’t exist yet. A few serious newspapers might have reprinted it in its entirety, which might have added another million to the audience. Online perhaps someone would have uploaded a transcript to Compuserve or The Well, but that’s about the most we could have hoped for.

There is no question in my mind that the political news ecosystem of 2008 was far superior to that of 1992: I had more information about the state of the race, the tactics of both campaigns, the issues they were wrestling with, the mind of the electorate in different regions of the country. And I had more immediate access to the candidates themselves: their speeches and unscripted exchanges; their body language and position papers.

The old line on this new diversity was that it was fundamentally parasitic: bloggers were interesting, sure, but if the traditional news organizations went away, the bloggers would have nothing to write about, since most of what they did was link to professionally reported stories. Let me be clear: traditional news organizations were an important part of the 2008 ecosystem, no doubt about it. … But no reasonable observer of the political news ecosystem could describe all the new species as parasites on the traditional media. Imagine how many barrels of ink were purchased to print newspaper commentary on Obama’s San Francisco gaffe about people “clinging to their guns and religion.” But the original reporting on that quote didn’t come from the Times or the Journal; it came from a “citizen reporter” named Mayhill Fowler, part of the Off The Bus project sponsored by Jay Rosen’s Newassignment.net and The Huffington Post.

The future of news as shown by the 2008 election Read More »