teaching

Neil Postman: the medium is the metaphor for the way we think

From Tom Stites’s “Guest Posting: Is Media Performance Democracy’s Critical Issue?” (Center for Citizen Media: Blog: 3 July 2006):

In the late 1980s the late Neil Postman wrote an enduringly important book called Amusing Ourselves to Death. In it he says that Marshall McLuhan only came close to getting it right in his famous adage that the medium is the message. Postman corrects McLuhan by saying that the medium is the metaphor – a metaphor for the way we think. Written narrative that people can read, Postman goes on, is a metaphor for thinking logically. And he says that image media bypass reason and go straight to the emotions. The image media are a metaphor for not thinking logically. Images disable thinking, so unless people read and use their reason, democracy is disabled as well.

Antitrust suits led to vertical integration & the IT revolution

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

As the industrial scholar Alfred D. Chandler has noted, the vertically integrated firm — which dominated the American economy for most of the last century — was to a great degree the product of antitrust enforcement. When Theodore Roosevelt began to limit the ability of large companies to grow horizontally, many responded by buying outside suppliers and integrating their operations into vertical lines of production. Many also set up internal research labs to improve existing products and develop new ones. Antitrust law later played a huge role in launching the information revolution. During the Cold War, the Justice Department routinely used antitrust suits to force high-tech firms to share the technologies they had developed. Targeted firms like IBM, RCA, AT&T, and Xerox spilled many thousands of patents onto the market, where they were available to any American competitor for free.

AACS, next-gen encryption for DVDs

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

AACS relies on the well-established AES (with 128-bit keys) to safeguard the disc data. Just like DVD players, HD DVD and Blu-ray drives will come with a set of Device Keys handed out to the manufacturers by AACS LA. Unlike the CSS encryption used in DVDs, though, AACS has a built-in method for revoking sets of keys that are cracked and made public. AACS-encrypted discs will feature a Media Key Block that all players need to access in order to get the key needed to decrypt the video files on the disc. The MKB can be updated by AACS LA to prevent certain sets of Device Keys from functioning with future titles – a feature that AACS dubs “revocation.” …

AACS also supports a new feature called the Image Constraint Token. When set, the ICT will force video output to be degraded over analog connections. ICT has so far gone unused, though this could change at any time. …

While AACS is used by both HD disc formats, the Blu-ray Disc Association (BDA) has added some features of its own to make the format “more secure” than HD DVD. The additions are BD+ and ROM Mark; though both are designed to thwart pirates, they work quite differently.

While the generic AACS spec includes key revocation, BD+ actually allows the BDA to update the entire encryption system once players have already shipped. Should encryption be cracked, new discs will include information that will alter the players’ decryption code. …

The other new technology, ROM Mark, affects the manufacturing of Blu-ray discs. All Blu-ray mastering equipment must be licensed by the BDA, and they will ensure that all of it carries ROM Mark technology. Whenever a legitimate disc is created, it is given a “unique and undetectable identifier.” It’s not undetectable to the player, though, and players can refuse to play discs without a ROM Mark. The BDA has the optimistic hope that this will keep industrial-scale piracy at bay. We’ll see.
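
To make the revocation mechanism above a bit more concrete, here is a deliberately simplified sketch. Real AACS uses AES-128 with a subset-difference broadcast-encryption tree, not the flat table and toy cipher below; every name and value here is illustrative only. The point is simply that the Media Key Block carries the media key wrapped under each non-revoked set of Device Keys, so a revoked player finds nothing it can unwrap on newer discs.

```python
# Toy model of AACS-style revocation via a Media Key Block (MKB).
# Real AACS uses AES-128 and subset-difference broadcast encryption;
# this flat table and hash-based "cipher" only illustrate the idea
# that revoked Device Keys simply stop finding a usable MKB entry.
import os
from hashlib import sha256

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in for AES: XOR with a key-derived stream (its own inverse)."""
    stream = sha256(key + b"keystream").digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def make_mkb(media_key: bytes, device_keys: dict, revoked: set) -> dict:
    """Licensing-authority side: wrap the media key for every non-revoked device."""
    return {dev: toy_cipher(k, media_key)
            for dev, k in device_keys.items() if dev not in revoked}

def recover_media_key(dev: str, dev_key: bytes, mkb: dict):
    """Player side: a revoked device finds no MKB entry it can unwrap."""
    return toy_cipher(dev_key, mkb[dev]) if dev in mkb else None

device_keys = {"player_A": os.urandom(16), "player_B": os.urandom(16)}
old_disc_mkb = make_mkb(os.urandom(16), device_keys, revoked=set())
new_disc_mkb = make_mkb(os.urandom(16), device_keys, revoked={"player_B"})

print(recover_media_key("player_B", device_keys["player_B"], old_disc_mkb) is not None)  # True
print(recover_media_key("player_B", device_keys["player_B"], new_disc_mkb))              # None
```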

How DVD encryption (CSS) works … or doesn’t

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

DVD players are factory-built with a set of keys. When a DVD is inserted, the player runs through every key it knows until one unlocks the disc. Once this disc key is known, the player uses it to retrieve a title key from the disc. This title key actually allows the player to unscramble the disc’s contents.

The decryption process might have been formidable when first drawn up, but it had begun to look weak even by 1999. Frank Stevenson, who published a good breakdown of the technology, estimated at that time that a 450MHz Pentium III could crack the code in only 18 seconds – and that’s without even having a player key in the first place. In other words, a simple brute force attack could crack the code at runtime, assuming that users were patient enough to wait up to 18 seconds. With today’s technology, of course, the same crack would be trivial.

Once the code was cracked, the genie was out of the bottle. CSS descramblers proliferated …

Because the CSS system could not be updated once in the field, the entire system was all but broken. Attempts to patch the system (such as Macrovision’s “RipGuard”) met with limited success, and DVDs today remain easy to copy using a multitude of freely available tools.
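
As a rough illustration of the key ladder described above (a player key unlocks the disc key, the disc key unlocks the title key, and the title key unscrambles the content), here is a toy sketch. Real CSS is a proprietary 40-bit cipher, which is exactly why brute force was feasible; the hash-based stand-ins and all key values below are made up.

```python
# Toy model of the CSS-style key ladder: player key -> disc key -> title key.
# Real CSS is a proprietary 40-bit cipher; these stand-ins are illustrative only.
from hashlib import sha256

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric stand-in cipher: XOR with a key-derived stream."""
    stream = sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def press_disc(player_keys, disc_key, title_key, content):
    """Authoring side: lock the disc key to every licensed player key."""
    return {
        "disc_key_blocks": [xor_cipher(pk, disc_key) for pk in player_keys],
        "enc_title_key": xor_cipher(disc_key, title_key),
        "enc_content": xor_cipher(title_key, content),
        "check": sha256(disc_key).digest(),  # lets a player verify a candidate disc key
    }

def play(player_key, disc):
    """Player side: try our key against each block until a disc key verifies."""
    for block in disc["disc_key_blocks"]:
        candidate = xor_cipher(player_key, block)
        if sha256(candidate).digest() == disc["check"]:
            title_key = xor_cipher(candidate, disc["enc_title_key"])
            return xor_cipher(title_key, disc["enc_content"])
    return None  # none of this player's keys unlock the disc

disc = press_disc([b"player-key-1", b"player-key-2"],
                  b"disc-key!", b"title-key", b"movie bits")
print(play(b"player-key-2", disc))    # b'movie bits'
print(play(b"unlicensed-key", disc))  # None
```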

Where we are technically with DRM

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

The attacks on FairPlay have been enlightening because of what they illustrate about the current state of DRM. They show, for instance, that modern DRM schemes are difficult to bypass, ignore, or strip out with a few lines of code. In contrast to older “patches” of computer software (where you would generally bypass a program’s authorization routine), the encryption on modern media files is pervasive. All of the software mentioned has still required Apple’s decoding technology to unscramble the song files; there is no simple hack that can strip the files clean without help, and the ciphers are complex enough to make brute-force cracks difficult.

Apple’s response has also been a reminder that cracking an encryption scheme once will no longer be enough in the networked era. Each time that its DRM has been bypassed, Apple has been able to push out updates to its customers that render the hacks useless (or at least make them more difficult to achieve).

Apple iTunes Music Store applies DRM after download

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

A third approach [to subverting Apple’s DRM] came from PyMusique, software originally written so that Linux users could access the iTunes Music Store. The software took advantage of the fact that iTMS transmits DRM-free songs to its customers and relies on iTunes to add that gooey layer of DRM goodness at the client end. PyMusique emulates iTunes and serves as a front end to the store, allowing users to browse and purchase music. When songs are downloaded, however, the program “neglects” to apply the FairPlay DRM.
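
In other words, the protection was applied on the honor system. Here is a minimal sketch of that structural weakness (all function names are hypothetical stand-ins, not Apple's or PyMusique's actual interfaces): if the store ships plain files and trusts the client to add the DRM wrapper locally, an alternative client can simply decline to do so.

```python
# Toy illustration of client-side DRM application. The store delivers the
# purchased track unprotected and relies on the official client to wrap it;
# nothing stops a replacement client from skipping that step.
# All names are hypothetical stand-ins, not real iTMS/PyMusique code.

def store_deliver(track_id: str) -> bytes:
    """Hypothetical store endpoint: returns the purchased track without DRM."""
    return b"plain AAC data for " + track_id.encode()

def apply_drm_wrapper(data: bytes) -> bytes:
    """Stand-in for the client-side FairPlay-style wrapping step."""
    return b"[locked]" + data

def official_client_download(track_id: str) -> bytes:
    return apply_drm_wrapper(store_deliver(track_id))  # wrapping happens only here...

def alternate_client_download(track_id: str) -> bytes:
    return store_deliver(track_id)                     # ...so it can simply be skipped

print(official_client_download("song-123"))   # b'[locked]plain AAC data for song-123'
print(alternate_client_download("song-123"))  # b'plain AAC data for song-123'
```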

To combat phishing, change browser design philosophy

From Federico Biancuzzi’s “Phishing with Rachna Dhamija” (SecurityFocus: 19 June 2006):

We discovered that existing security cues are ineffective, for three reasons:

1. The indicators are ignored (23% of participants in our study did not look at the address bar, status bar, or any SSL indicators).

2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn’t realize that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.

3. The security indicators are trivial to spoof. Many users can’t distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate. This attack fooled more than 80% of participants. …

Currently, I’m working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study that shows that simply showing a user’s history information (for example, “you’ve been to this website many times” or “you’ve never submitted this form before”) can significantly increase a user’s ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I’ve been investigating is techniques to help users recover from errors and to identify when errors are real, or when they are simulated. Many attacks rely on users not being able to make this distinction.

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

Rachna Dhamija: I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customized security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user’s focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password. …

With security skins, we were trying to solve not user authentication, but the reverse problem – server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they have mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like “Server X is authenticated”, or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user’s browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can’t be replayed by an attacker and it won’t reveal anything useful about the user’s password. …

Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can’t simulate is an interface he can’t predict. This is the principle that DSS relies on. We should make it easy for users to personalize their interfaces. Look at how popular screensavers, ringtones, and application skins are – users clearly enjoy the ability to personalize their interfaces. We can take advantage of this fact to build spoof resistant interfaces.
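
A minimal sketch of the matching-images idea described above, assuming both endpoints already hold a per-session shared secret from a successful mutual authentication (Dhamija's proposal renders a visual hash such as Random Art; the hash-derived character grid below is just a stand-in, meant only to show that the two authenticated parties, and nobody else, can produce the same pattern).

```python
# Sketch of the "matching images" idea behind Dynamic Security Skins: both
# endpoints derive a visual pattern from a per-session shared secret, so the
# patterns match only if mutual authentication actually succeeded. A simple
# hash-derived character grid stands in for a real visual hash here.
import hmac, hashlib, os

def skin_pattern(session_secret: bytes, rows: int = 4, cols: int = 8) -> str:
    """Derive a small abstract pattern (stand-in for a visual hash) from the secret."""
    digest = hmac.new(session_secret, b"security-skin", hashlib.sha256).digest()
    glyphs = " .:-=+*#%@"
    cells = [glyphs[b % len(glyphs)] for b in digest[: rows * cols]]
    return "\n".join("".join(cells[r * cols:(r + 1) * cols]) for r in range(rows))

# After a mutually-authenticating key exchange, browser and server hold the same
# session secret, so both render the same "skin" and the user sees a match.
shared_secret = os.urandom(32)
print(skin_pattern(shared_secret) == skin_pattern(shared_secret))   # True: match

# A phishing site never learns that secret, so whatever it shows cannot match.
print(skin_pattern(shared_secret) == skin_pattern(os.urandom(32)))  # False
```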

1% create, 10% comment, 89% just use

From Charles Arthur’s “What is the 1% rule?” (Guardian Unlimited: 20 July 2006):

It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will “interact” with it (commenting or offering improvements) and the other 89 will just view it.

It’s a meme that emerges strongly in statistics from YouTube, which in just 18 months has gone from zero to 60% of all online video viewing.

The numbers are revealing: each day there are 100 million downloads and 65,000 uploads – which as Antony Mayfield (at http://open.typepad.com/open) points out, is 1,538 downloads per upload – and 20m unique users per month.

That puts the “creator to consumer” ratio at just 0.5%, but it’s early days yet …

50% of all Wikipedia article edits are done by 0.7% of users, and more than 70% of all articles have been written by just 1.8% of all users, according to the Church of the Customer blog (http://customerevangelists.typepad.com/blog/).

Earlier metrics garnered from community sites suggested that about 80% of content was produced by 20% of the users, but the growing number of data points is creating a clearer picture of how Web 2.0 groups need to think. For instance, a site that demands too much interaction and content generation from users will see nine out of 10 people just pass by.

Bradley Horowitz of Yahoo points out that much the same applies at Yahoo: in Yahoo Groups, the discussion lists, “1% of the user population might start a group; 10% of the user population might participate actively, and actually author content, whether starting a thread or responding to a thread-in-progress; 100% of the user population benefits from the activities of the above groups,” he noted on his blog (www.elatable.com/blog/?p=5) in February.
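
For what it's worth, the YouTube arithmetic quoted above is easy to re-derive from the article's own figures:

```python
# Re-deriving the ratios from the figures quoted in the article.
daily_views = 100_000_000
daily_uploads = 65_000

print(round(daily_views / daily_uploads))           # ~1538 views per upload
print(round(daily_uploads / daily_views * 100, 3))  # ~0.065% uploads per view
# The piece puts the "creator to consumer" ratio at 0.5%; it doesn't spell out
# the exact basis for that figure, so it isn't reproduced here.
```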

Open source turns software into a service industry

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

It’s not necessarily going to be an easy transition. Open source turns software into a service industry. Service-provider firms (think of medical and legal practices) can’t be scaled up by injecting more capital into them; those that try only scale up their fixed costs, overshoot their revenue base, and starve to death. The choices come down to singing for your supper (getting paid through tips and donations), running a corner shop (a small, low-overhead service business), or finding a wealthy patron (some large firm that needs to use and modify open-source software for its business purposes).

Differences between Macintosh & Unix programmers

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

Macintosh programmers are all about the user experience. They’re architects and decorators. They design from the outside in, asking first “What kind of interaction do we want to support?” and then building the application logic behind it to meet the demands of the user-interface design. This leads to programs that are very pretty and infrastructure that is weak and rickety. In one notorious example, as late as Release 9 the MacOS memory manager sometimes required the user to manually deallocate memory by manually chucking out exited but still-resident programs. Unix people are viscerally revolted by this kind of mal-design; they don’t understand how Macintosh people could live with it.

By contrast, Unix people are all about infrastructure. We are plumbers and stonemasons. We design from the inside out, building mighty engines to solve abstractly defined problems (like “How do we get reliable packet-stream delivery from point A to point B over unreliable hardware and links?”). We then wrap thin and often profoundly ugly interfaces around the engines. The commands date(1), find(1), and ed(1) are notorious examples, but there are hundreds of others. Macintosh people are viscerally revolted by this kind of mal-design; they don’t understand how Unix people can live with it. …

In many ways this kind of parochialism has served us well. We are the keepers of the Internet and the World Wide Web. Our software and our traditions dominate serious computing, the applications where 24/7 reliability and minimal downtime is a must. We really are extremely good at building solid infrastructure; not perfect by any means, but there is no other software technical culture that has anywhere close to our track record, and it is one to be proud of. …

To non-technical end users, the software we build tends to be either bewildering and incomprehensible, or clumsy and condescending, or both at the same time. Even when we try to do the user-friendliness thing as earnestly as possible, we’re woefully inconsistent at it. Many of the attitudes and reflexes we’ve inherited from old-school Unix are just wrong for the job. Even when we want to listen to and help Aunt Tillie, we don’t know how — we project our categories and our concerns onto her and give her ‘solutions’ that she finds as daunting as her problems.

The first movie theater

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

APRIL 16, 1902: The Movies

Motion pictures seemed destined to become a passing fad. Only a few years after Edison’s first crude newsreels were screened — mostly in penny arcades, alongside carnival games and other cheap attractions — the novelty had worn off, and Americans were flocking back to live vaudeville.

Then, in spring 1902, Thomas L. Tally opened his Electric Theater in Los Angeles, a radical new venture devoted to movies and other high-tech devices of the era, like audio recordings.

“Tally was the first person to offer a modern multimedia entertainment experience to the American public,” says the film historian Marc Wanamaker. Before long, his successful movie palace produced imitators nationally, which would become known as “nickelodeons.”

The date Silicon Valley (& Intel) was born

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

SEPT. 18, 1957: Revolt of the Nerds

Fed up with their boss, eight lab workers walked off the job on this day in Mountain View, Calif. Their employer, William Shockley, had decided not to continue research into silicon-based semiconductors; frustrated, they decided to undertake the work on their own. The researchers — who would become known as “the traitorous eight” — went on to invent the microprocessor (and to found Intel, among other companies). “Sept. 18 was the birth date of Silicon Valley, of the electronics industry and of the entire digital age,” says Mr. Shockley’s biographer, Joel Shurkin.

DRM converts copyrights into trade secrets

From Mark Sableman’s “Copyright reformers pose tough questions” (St. Louis Journalism Review: June 2005):

It goes by the name “digital rights management” – the effort, already very successful, to give content owners the right to lock down their works technologically. It is what Washington University law professor Charles McManis has characterized as attaching absolute “trade secret” property-type rights to the content formerly subject to the copyright balance between private rights and public use.

Macaulay in 1841 on the problems of the copyright monopoly

From Thomas Babington Macaulay’s “A Speech Delivered In The House Of Commons On The 5th Of February 1841” (Prime Palaver #4: 1 September 2001):

The question of copyright, Sir, like most questions of civil prudence, is neither black nor white, but grey. The system of copyright has great advantages and great disadvantages; and it is our business to ascertain what these are, and then to make an arrangement under which the advantages may be as far as possible secured, and the disadvantages as far as possible excluded. …

We have, then, only one resource left. We must betake ourselves to copyright, be the inconveniences of copyright what they may. Those inconveniences, in truth, are neither few nor small. Copyright is monopoly, and produces all the effects which the general voice of mankind attributes to monopoly. …

I believe, Sir, that I may with safety take it for granted that the effect of monopoly generally is to make articles scarce, to make them dear, and to make them bad. … Thus, then, stands the case. It is good that authors should be remunerated; and the least exceptionable way of remunerating them is by a monopoly. Yet monopoly is an evil. For the sake of the good we must submit to the evil; but the evil ought not to last a day longer than is necessary for the purpose of securing the good. …

For consider this; the evil effects of the monopoly are proportioned to the length of its duration. But the good effects for the sake of which we bear with the evil effects are by no means proportioned to the length of its duration. A monopoly of sixty years produces twice as much evil as a monopoly of thirty years, and thrice as much evil as a monopoly of twenty years. But it is by no means the fact that a posthumous monopoly of sixty years gives to an author thrice as much pleasure and thrice as strong a motive as a posthumous monopoly of twenty years. On the contrary, the difference is so small as to be hardly perceptible. We all know how faintly we are affected by the prospect of very distant advantages, even when they are advantages which we may reasonably hope that we shall ourselves enjoy. But an advantage that is to be enjoyed more than half a century after we are dead, by somebody, we know not by whom, perhaps by somebody unborn, by somebody utterly unconnected with us, is really no motive at all to action. …

Dr Johnson died fifty-six years ago. If the law were what my honourable and learned friend wishes to make it, somebody would now have the monopoly of Dr Johnson’s works. Who that somebody would be it is impossible to say; but we may venture to guess. I guess, then, that it would have been some bookseller, who was the assign of another bookseller, who was the grandson of a third bookseller, who had bought the copyright from Black Frank, the doctor’s servant and residuary legatee, in 1785 or 1786. Now, would the knowledge that this copyright would exist in 1841 have been a source of gratification to Johnson? Would it have stimulated his exertions? Would it have once drawn him out of his bed before noon? Would it have once cheered him under a fit of the spleen? Would it have induced him to give us one more allegory, one more life of a poet, one more imitation of Juvenal? I firmly believe not. I firmly believe that a hundred years ago, when he was writing our debates for the Gentleman’s Magazine, he would very much rather have had twopence to buy a plate of shin of beef at a cook’s shop underground. Considered as a reward to him, the difference between a twenty years’ and sixty years’ term of posthumous copyright would have been nothing or next to nothing. But is the difference nothing to us? I can buy Rasselas for sixpence; I might have had to give five shillings for it. I can buy the Dictionary, the entire genuine Dictionary, for two guineas, perhaps for less; I might have had to give five or six guineas for it. Do I grudge this to a man like Dr Johnson? Not at all. Show me that the prospect of this boon roused him to any vigorous effort, or sustained his spirits under depressing circumstances, and I am quite willing to pay the price of such an object, heavy as that price is. But what I do complain of is that my circumstances are to be worse, and Johnson’s none the better; that I am to give five pounds for what to him was not worth a farthing.

Paradigm shifts explained

From Kim Stanley Robinson’s “Imagining Abrupt Climate Change : Terraforming Earth” (Amazon Shorts: 31 July 2005):

… paradigm shifts are exciting moments in science’s ongoing project of self-improvement, making itself more accurately mapped to reality as it is discovered and teased out; this process of continual recalibration and improvement is one of the most admirable parts of science, which among other things is a most powerful and utopian set of mental habits; an attitude toward reality that I have no hesitation in labeling a kind of worship or devotion. And in this ongoing communal act of devotion, paradigm shifts are very good at revealing how science is conducted, in part because each one represents a little (or big) crisis of understanding.

As Thomas Kuhn described the process in his seminal book The Structure of Scientific Revolutions, workers in the various branches of science build over time an interconnected construct of concepts and beliefs that allow them to interpret the data from their experiments, and fit them into a larger picture of the world that makes the best sense of the evidence at hand. What is hoped for is a picture that, if anyone else were to question it, and follow the train of reasoning and all the evidence used to support it, they too would agree with it. This is one of the ways science is interestingly utopian; it attempts to say things that everyone looking at the same evidence would agree to.

So, using this paradigm, always admitted to be a work in progress, scientists then conduct what Kuhn calls “normal science,” elucidating further aspects of reality by using the paradigm to structure their questions and their answers. Sometimes paradigms are useful for centuries; other times, for shorter periods. Then it often happens that scientists in the course of doing “normal science” begin to get evidence from the field that cannot be explained within the paradigm that has been established. At first such “anomalies” are regarded as suspect in themselves, precisely because they don’t fit the paradigm. They’re oddities, and something might be wrong with them as such. Thus they are ignored, or tossed aside, or viewed with suspicion, or in some other way bracketed off. Eventually, if enough of them pile up, and they seem similar in kind, or otherwise solid as observations, attempts might be made to explain them within the old paradigm, by tweaking or re-interpreting the paradigm itself, without actually throwing the paradigm out entirely.

For instance, when it was found that Newtonian laws of gravitation could not account for the speed of Mercury, which was moving a tiny bit faster than it ought to have been, even though Newton’s laws accounted for all the other planets extremely well, at first some astronomers suggested there might be another planet inside the orbit of Mercury, too close to the Sun for us to see. They even gave this potential planet a name, Vulcan; but they couldn’t see it, and calculations revealed that this hypothetical Vulcan still would not explain the discrepancy in Mercury’s motion. The discrepancy remained an anomaly, and was real enough and serious enough to cast the whole Newtonian paradigm into doubt among the small group of people who worried about it and wondered what could be causing it.

It was Einstein who then proposed that Mercury moved differently than predicted because spacetime itself curved around masses, and near the huge mass of the Sun the effect was large enough to be noticeable.

Whoah! This was a rather mind-bogglingly profound explanation for a little orbital discrepancy in Mercury; but Einstein also made a new prediction and suggested an experiment; if his explanation were correct, then light too would bend in the gravity well around the sun, and so the light of a star would appear from behind the sun a little bit before the astronomical tables said that it should. The proposed experiment presented some observational difficulties, but a few years later it was accomplished during a total eclipse of the sun, and the light of a certain star appeared before it ought to have by just the degree Einstein had predicted. And so Einstein’s concepts concerning spacetime began to be accepted and elaborated, eventually forming a big part of the paradigm known as the “standard model,” within which new kinds of “normal science” in physics and astronomy could be done. …

The CIA’s ‘black sites’ hide terror suspects around the world

From Dana Priest’s “CIA Holds Terror Suspects in Secret Prisons” (The Washington Post: 2 November 2005):

The CIA has been hiding and interrogating some of its most important al Qaeda captives at a Soviet-era compound in Eastern Europe, according to U.S. and foreign officials familiar with the arrangement.

The secret facility is part of a covert prison system set up by the CIA nearly four years ago that at various times has included sites in eight countries, including Thailand, Afghanistan and several democracies in Eastern Europe, as well as a small center at the Guantanamo Bay prison in Cuba, according to current and former intelligence officials and diplomats from three continents.

The hidden global internment network is a central element in the CIA’s unconventional war on terrorism. It depends on the cooperation of foreign intelligence services, and on keeping even basic information about the system secret from the public, foreign officials and nearly all members of Congress charged with overseeing the CIA’s covert actions.

The existence and locations of the facilities — referred to as “black sites” in classified White House, CIA, Justice Department and congressional documents — are known to only a handful of officials in the United States and, usually, only to the president and a few top intelligence officers in each host country. …

Virtually nothing is known about who is kept in the facilities, what interrogation methods are employed with them, or how decisions are made about whether they should be detained or for how long.

While the Defense Department has produced volumes of public reports and testimony about its detention practices and rules after the abuse scandals at Iraq’s Abu Ghraib prison and at Guantanamo Bay, the CIA has not even acknowledged the existence of its black sites. To do so, say officials familiar with the program, could open the U.S. government to legal challenges, particularly in foreign courts, and increase the risk of political condemnation at home and abroad. …

Although the CIA will not acknowledge details of its system, intelligence officials defend the agency’s approach, arguing that the successful defense of the country requires that the agency be empowered to hold and interrogate suspected terrorists for as long as necessary and without restrictions imposed by the U.S. legal system or even by the military tribunals established for prisoners held at Guantanamo Bay. …

It is illegal for the government to hold prisoners in such isolation in secret prisons in the United States, which is why the CIA placed them overseas, according to several former and current intelligence officials and other U.S. government officials. Legal experts and intelligence officials said that the CIA’s internment practices also would be considered illegal under the laws of several host countries, where detainees have rights to have a lawyer or to mount a defense against allegations of wrongdoing. …

More than 100 suspected terrorists have been sent by the CIA into the covert system, according to current and former U.S. intelligence officials and foreign sources. This figure, a rough estimate based on information from sources who said their knowledge of the numbers was incomplete, does not include prisoners picked up in Iraq.

The detainees break down roughly into two classes, the sources said.

About 30 are considered major terrorism suspects and have been held under the highest level of secrecy at black sites financed by the CIA and managed by agency personnel, including those in Eastern Europe and elsewhere, according to current and former intelligence officers and two other U.S. government officials. Two locations in this category — in Thailand and on the grounds of the military prison at Guantanamo Bay — were closed in 2003 and 2004, respectively.

A second tier — which these sources believe includes more than 70 detainees — is a group considered less important, with less direct involvement in terrorism and having limited intelligence value. These prisoners, some of whom were originally taken to black sites, are delivered to intelligence services in Egypt, Jordan, Morocco, Afghanistan and other countries, a process sometimes known as “rendition.” While the first-tier black sites are run by CIA officers, the jails in these countries are operated by the host nations, with CIA financial assistance and, sometimes, direction. …

The top 30 al Qaeda prisoners exist in complete isolation from the outside world. Kept in dark, sometimes underground cells, they have no recognized legal rights, and no one outside the CIA is allowed to talk with or even see them, or to otherwise verify their well-being, said current and former U.S. and foreign government and intelligence officials. …

Among the first steps was to figure out where the CIA could secretly hold the captives. One early idea was to keep them on ships in international waters, but that was discarded for security and logistics reasons.

CIA officers also searched for a setting like Alcatraz Island. They considered the virtually unvisited islands in Lake Kariba in Zambia, which were edged with craggy cliffs and covered in woods. But poor sanitary conditions could easily lead to fatal diseases, they decided, and besides, they wondered, could the Zambians be trusted with such a secret? …

The largest CIA prison in Afghanistan was code-named the Salt Pit. It was also the CIA’s substation and was first housed in an old brick factory outside Kabul. In November 2002, an inexperienced CIA case officer allegedly ordered guards to strip naked an uncooperative young detainee, chain him to the concrete floor and leave him there overnight without blankets. He froze to death, according to four U.S. government officials. The CIA officer has not been charged in the death. …

The CIA program’s original scope was to hide and interrogate the two dozen or so al Qaeda leaders believed to be directly responsible for the Sept. 11 attacks, or who posed an imminent threat, or had knowledge of the larger al Qaeda network. But as the volume of leads pouring into the CTC from abroad increased, and the capacity of its paramilitary group to seize suspects grew, the CIA began apprehending more people whose intelligence value and links to terrorism were less certain, according to four current and former officials.

The original standard for consigning suspects to the invisible universe was lowered or ignored, they said. “They’ve got many, many more who don’t reach any threshold,” one intelligence official said.

Just how big is YouTube?

From Reuters’s “YouTube serves up 100 mln videos a day” (16 July 2006):

YouTube, the leader in Internet video search, said on Sunday viewers are now watching more than 100 million videos per day on its site, marking the surge in demand for its “snack-sized” video fare.

Since springing from out of nowhere late last year, YouTube has come to hold the leading position in online video with 29 percent of the U.S. multimedia entertainment market, according to the latest weekly data from Web measurement site Hitwise.

YouTube videos account for 60 percent of all videos watched online, the company said. …

In June, 2.5 billion videos were watched on YouTube, which is based in San Mateo, California and has just over 30 employees. More than 65,000 videos are now uploaded daily to YouTube, up from around 50,000 in May, the company said.

YouTube boasts nearly 20 million unique users per month, according to Nielsen//NetRatings, another Internet audience measurement firm.

What kinds of spam are effective?

From Alex Mindlin’s “Seems Somebody Is Clicking on That Spam” (The New York Times: 3 July 2006):

Spam messages promoting pornography are 280 times as effective in getting recipients to click on them as messages advertising pharmacy drugs, which are the next most effective type of spam.

The third most successful variety is spam advertising Rolex watches, 0.0075 percent of which get clicked on, according to an analysis by CipherTrust, a large manufacturer of devices that protect networks from spam and viruses.

NSA spying: Project Shamrock & Echelon

From Kim Zetter’s “The NSA is on the line — all of them” (Salon: 15 May 2006):

As fireworks showered New York Harbor [in 1976], the country was debating a three-decades-long agreement between Western Union and other telecommunications companies to surreptitiously supply the NSA, on a daily basis, with all telegrams sent to and from the United States. The similarity between that earlier program and the most recent one is remarkable, with one exception — the NSA now owns vastly improved technology to sift through and mine massive amounts of data it has collected in what is being described as the world’s single largest database of personal information. And, according to Aid, the mining goes far beyond our phone lines.

The controversy over Project Shamrock in 1976 ultimately led Congress to pass the 1978 Foreign Intelligence Surveillance Act and other privacy and communication laws designed to prevent commercial companies from working in cahoots with the government to conduct wholesale secret surveillance on their customers. But as stories revealed last week, those safeguards had little effect in preventing at least three telecommunications companies from repeating history. …

[Intelligence historian Matthew Aid] compared the agency’s current data mining to Project Shamrock and Echelon, the code name for an NSA computer system that for many years analyzed satellite communication signals outside the U.S., and generated its own controversy when critics claimed that in addition to eavesdropping on enemy communication, the satellites were eavesdropping on allies’ domestic phone and e-mail conversations. …

If you want some historical perspective look at Operation Shamrock, which collapsed in 1975 because [Rep.] Bella Abzug [D-NY] subpoenaed the heads of Western Union and the other telecommunications giants and put them in witness chairs, and they all admitted that they had cooperated with the NSA for the better part of 40 years by supplying cables and telegrams.

The newest system being added to the NSA infrastructure, by the way, is called Project Trailblazer, which was initiated in 2002 and which was supposed to go online about now but is fantastically over budget and way behind schedule. Trailblazer is designed to copy the new forms of telecommunications — fiber optic cable traffic, cellphone communication, BlackBerry and Internet e-mail traffic. …

Echelon, in fact, is nothing more than a VAX microcomputer that was manufactured in the early 1970s by Digital Equipment Corp., and was used at six satellite intercept stations [to filter and sort data collected from the satellites and distribute it to analysts]. The computer has long since been obsolete. Since 9/11, whatever plans were in place to modernize Echelon have been put on hold. The NSA does in fact have a global intercept network, but they just call it the intercept collection infrastructure. They don’t have a code name or anything sexy to describe it, and it didn’t do domestic spying.

OnStar: the numbers

From PR Newswire’s “OnStar Achieves Another First as Winner of Good Housekeeping’s ‘Good Buy’ Award for Best Service” (3 December 2004):

Each month on average, OnStar receives about 700 airbag notifications and 11,000 emergency assistance calls, which include 4,000 Good Samaritan calls for a variety of emergency situations. In addition, each month OnStar advisors respond to an average of 500 stolen vehicle location requests, 20,000 requests for roadside assistance, 36,000 remote door-unlock requests and 19,000 GM Goodwrench remote diagnostics requests.
