Wal-Mart’s monopsony power damages its vendors

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

Instead, the firm is also one of the world’s most intrusive, jealous, fastidious micromanagers, and its aim is nothing less than to remake entirely how its suppliers do business, not least so that it can shift many of its own costs of doing business onto them. In addition to dictating what price its suppliers must accept, Wal-Mart also dictates how they package their products, how they ship those products, and how they gather and process information on the movement of those products. Take, for instance, Levi Strauss & Co. Wal-Mart dictates that its suppliers tell it what price they charge Wal-Mart’s competitors, that they accept payment entirely on Wal-Mart’s terms, and that they share information all the way back to the purchase of raw materials. Take, for instance, Newell Rubbermaid. Wal-Mart controls with whom its suppliers speak, how and where they can sell their goods, and even encourages them to support Wal-Mart in its political fights. Take, for instance, Disney. Wal-Mart all but dictates to suppliers where to manufacture their products, as well as how to design those products and what materials and ingredients to use in those products. Take, for instance, Coca-Cola [… Wal-Mart decided that it did not approve of the artificial sweetener Coca-Cola planned to use in a new line of diet colas. In a response that would have been unthinkable just a few years ago, Coca-Cola yielded to the will of an outside firm and designed a second product to meet Wal-Mart’s decree.]. …

Wal-Mart and a growing number of today’s dominant firms, by contrast, are programmed to cut cost faster than price, to slow the introduction of new technologies and techniques, to dictate downward the wages and profits of the millions of people and smaller firms who make and grow what they sell, to break down entire lines of production in the name of efficiency. The effects of this change are clear: We see them in the collapsing profit margins of the firms caught in Wal-Mart’s system. We see them in the fact that of Wal-Mart’s top ten suppliers in 1994, four have sought bankruptcy protection.

Antitrust suits led to vertical integration & the IT revolution

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

As the industrial scholar Alfred D. Chandler has noted, the vertically integrated firm — which dominated the American economy for most of the last century — was to a great degree the product of antitrust enforcement. When Theodore Roosevelt began to limit the ability of large companies to grow horizontally, many responded by buying outside suppliers and integrating their operations into vertical lines of production. Many also set up internal research labs to improve existing products and develop new ones. Antitrust law later played a huge role in launching the information revolution. During the Cold War, the Justice Department routinely used antitrust suits to force high-tech firms to share the technologies they had developed. Targeted firms like IBM, RCA, AT&T, and Xerox spilled many thousands of patents onto the market, where they were available to any American competitor for free.

The mirror of monopoly: monopsony … which may be worse

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

Popular notions of oligopoly and monopoly tend to focus on the danger that firms, having gained control over a marketplace, will then be able to dictate an unfairly high price, extracting a sort of tax from society as a whole. But what should concern us today even more is a mirror image of monopoly called “monopsony.” Monopsony arises when a firm captures the ability to dictate price to its suppliers, because the suppliers have no real choice other than to deal with that buyer. Not all oligopolists rely on the exercise of monopsony, but a large and growing contingent of today’s largest firms are built to do just that. The ultimate danger of monopsony is that it deprives the firms that actually manufacture products from obtaining an adequate return on their investment. In other words, the ultimate danger of monopsony is that, over time, it tends to destroy the machines and skills on which we all rely.
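
The mechanism Lynn describes can be stated in standard textbook terms (the notation below is mine, not the article’s): a monopsonist buying quantity Q along an upward-sloping supply curve p(Q) pays total expenditure E(Q) = p(Q)Q, so each additional unit costs more than the going price:

    \[
      ME(Q) \;=\; \frac{dE}{dQ} \;=\; p(Q) + Q\,p'(Q) \;>\; p(Q).
    \]

The buyer therefore stops where its marginal value MV(Q) meets ME(Q) rather than p(Q), purchasing less than the competitive quantity and paying suppliers less than the competitive price; the wedge between the two curves is the margin squeezed out of the firms “that actually manufacture products.”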

Examples of monopsony can be difficult to pin down, but we are in luck in that today we have one of the best illustrations of monopsony pricing power in economic history: Wal-Mart. There is little need to recount at any length the retailer’s power over America’s marketplace. For our purposes, a few facts will suffice — that one in every five retail sales in America is recorded at Wal-Mart’s cash registers; that the firm’s revenue nearly equals that of the next six retailers combined; that for many goods, Wal-Mart accounts for upward of 30 percent of U.S. sales, and plans to more than double its sales within the next five years.

… The problem is that Wal-Mart, like other monopsonists, does not participate in the market so much as use its power to micromanage the market, carefully coordinating the actions of thousands of firms from a position above the market.

Corporate consolidation reigns in American business, & that’s a problem

From Barry C. Lynn’s “The Case for Breaking Up Wal-Mart” (Harper’s: 24 July 2006):

It is now twenty-five years since the Reagan Administration eviscerated America’s century-long tradition of antitrust enforcement. For a generation, big firms have enjoyed almost complete license to use brute economic force to grow only bigger. And so today we find ourselves in a world dominated by immense global oligopolies that every day further limit the flexibility of our economy and our personal freedom within it. There are still many instances of intense competition — just ask General Motors.

But since the great opening of global markets in the early 1990s, the tendency within most of the systems we rely on for manufactured goods, processed commodities, and basic services has been toward ever more extreme consolidation. Consider raw materials: three firms control almost 75 percent of the global market in iron ore. Consider manufacturing services: Owens-Illinois has rolled up roughly half the global capacity to supply glass containers. We see extreme consolidation in heavy equipment: General Electric builds 60 percent of large gas turbines as well as 60 percent of large wind turbines. In processed materials: Corning produces 60 percent of the glass for flat-screen televisions. Even in sneakers: Nike and Adidas split a 60-percent share of the global market. Consolidation reigns in banking, meatpacking, oil refining, and grains. It holds even in eyeglasses, a field in which the Italian firm Luxottica has captured control over five of the six national outlets in the U.S. market.

The stakes could not be higher. In systems where oligopolies rule unchecked by the state, competition itself is transformed from a free-for-all into a kind of private-property right, a license to the powerful to fence off entire marketplaces, there to pit supplier against supplier, community against community, and worker against worker, for their own private gain. When oligopolies rule unchecked by the state, what is perverted is the free market itself, and our freedom as individuals within the economy and ultimately within our political system as well.

AACS, next-gen encryption for DVDs

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

AACS relies on the well-established AES (with 128-bit keys) to safeguard the disc data. Just like DVD players, HD DVD and Blu-ray drives will come with a set of Device Keys handed out to the manufacturers by AACS LA. Unlike the CSS encryption used in DVDs, though, AACS has a built-in method for revoking sets of keys that are cracked and made public. AACS-encrypted discs will feature a Media Key Block that all players need to access in order to get the key needed to decrypt the video files on the disc. The MKB can be updated by AACS LA to prevent certain sets of Device Keys from functioning with future titles – a feature that AACS dubs “revocation.” …
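
Anderson’s description of the Media Key Block reduces to a simple structure: the disc’s media key is wrapped separately under every still-trusted set of Device Keys, and “revocation” just means omitting the entries for leaked sets on future pressings. Below is a toy sketch of that structure only; real AACS wraps keys with AES-128 through a tree-based broadcast-encryption scheme, and every identifier and the stand-in cipher here are invented:

    import hashlib

    def toy_wrap(key: bytes, secret: bytes) -> bytes:
        # Stand-in cipher: XOR against a SHA-256-derived keystream.
        # Real AACS uses AES-128; this only illustrates the MKB layout.
        stream = hashlib.sha256(key).digest()
        return bytes(a ^ b for a, b in zip(secret, stream))

    toy_unwrap = toy_wrap  # XOR is its own inverse

    def make_mkb(media_key: bytes, device_keys: dict, revoked: set) -> dict:
        # One wrapped copy of the media key per trusted device-key set;
        # revoked sets simply get no entry on newly pressed discs.
        return {dev_id: toy_wrap(dk, media_key)
                for dev_id, dk in device_keys.items()
                if dev_id not in revoked}

    def player_media_key(mkb: dict, dev_id: str, dev_key: bytes):
        entry = mkb.get(dev_id)
        return toy_unwrap(dev_key, entry) if entry else None  # None = revoked

    device_keys = {"vendorA": b"A" * 16, "vendorB": b"B" * 16}
    mkb = make_mkb(b"media-key-secret", device_keys, revoked={"vendorB"})

    print(player_media_key(mkb, "vendorA", device_keys["vendorA"]))  # recovers key
    print(player_media_key(mkb, "vendorB", device_keys["vendorB"]))  # None: revoked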

AACS also supports a new feature called the Image Constraint Token. When set, the ICT will force video output to be degraded over analog connections. ICT has so far gone unused, though this could change at any time. …

While AACS is used by both HD disc formats, the Blu-ray Disc Association (BDA) has added some features of its own to make the format “more secure” than HD DVD. The additions are BD+ and ROM Mark; though both are designed to thwart pirates, they work quite differently.

While the generic AACS spec includes key revocation, BD+ actually allows the BDA to update the entire encryption system once players have already shipped. Should encryption be cracked, new discs will include information that will alter the players’ decryption code. …

The other new technology, ROM Mark, affects the manufacturing of Blu-ray discs. All Blu-ray mastering equipment must be licensed by the BDA, and they will ensure that all of it carries ROM Mark technology. Whenever a legitimate disc is created, it is given a “unique and undetectable identifier.” It’s not undetectable to the player, though, and players can refuse to play discs without a ROM Mark. The BDA has the optimistic hope that this will keep industrial-scale piracy at bay. We’ll see.

How DVD encryption (CSS) works … or doesn’t

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

DVD players are factory-built with a set of keys. When a DVD is inserted, the player runs through every key it knows until one unlocks the disc. Once this disc key is known, the player uses it to retrieve a title key from the disc. This title key actually allows the player to unscramble the disc’s contents.
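
The tiered scheme described above (built-in player keys unlock a disc key, which in turn unlocks a title key) can be sketched in a few lines. This is structural only: the real CSS cipher is a weak 40-bit stream cipher, players verify candidate disc keys against a check value stored on the disc, and all names below are invented:

    import hashlib

    def toy_cipher(key: bytes, data: bytes) -> bytes:
        # Stand-in for CSS's 40-bit stream cipher: XOR with a hash keystream.
        stream = hashlib.sha256(key).digest()
        return bytes(a ^ b for a, b in zip(data, stream))

    # Mastering side: the disc key is wrapped under every licensed player key;
    # the title key is wrapped under the disc key.
    player_keys = [b"player-key-%03d" % i for i in range(3)]
    disc_key = b"disc-key-secret!"
    disc_key_block = [toy_cipher(pk, disc_key) for pk in player_keys]
    wrapped_title_key = toy_cipher(disc_key, b"title-key-secret")

    # Player side: try each built-in key against each block entry until one
    # yields a disc key that passes the disc's check value.
    def unlock(block, my_keys, looks_valid):
        for pk in my_keys:
            for entry in block:
                candidate = toy_cipher(pk, entry)
                if looks_valid(candidate):
                    return candidate
        return None

    found = unlock(disc_key_block, [player_keys[1]],
                   looks_valid=lambda k: k == disc_key)  # stand-in check
    print(toy_cipher(found, wrapped_title_key))  # b'title-key-secret'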

The decryption process might have been formidable when first drawn up, but it had begun to look weak even by 1999. Frank Stevenson, who published a good breakdown of the technology, estimated at that time that a 450MHz Pentium III could crack the code in only 18 seconds – and that’s without even having a player key in the first place. In other words, a simple brute force attack could crack the code at runtime, assuming that users were patient enough to wait up to 18 seconds. With today’s technology, of course, the same crack would be trivial.

Once the code was cracked, the genie was out of the bottle. CSS descramblers proliferated …

Because the CSS system could not be updated once in the field, the entire system was all but broken. Attempts to patch the system (such as Macrovision’s “RipGuard”) met with limited success, and DVDs today remain easy to copy using a multitude of freely available tools.

Where we are technically with DRM

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

The attacks on FairPlay have been enlightening because of what they illustrate about the current state of DRM. They show, for instance, that modern DRM schemes are difficult to bypass, ignore, or strip out with a few lines of code. In contrast to older “patches” of computer software (where you would generally bypass a program’s authorization routine), the encryption on modern media files is pervasive. All of the software mentioned has still required Apple’s decoding technology to unscramble the song files; there is no simple hack that can simply strip the files clean without help, and the ciphers are complex enough to make brute-force cracks difficult.

Apple’s response has also been a reminder that cracking an encryption scheme once will no longer be enough in the networked era. Each time that its DRM has been bypassed, Apple has been able to push out updates to its customers that render the hacks useless (or at least make them more difficult to achieve).

Apple iTunes Music Store applies DRM after download

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

A third approach [to subverting Apple’s DRM] came from PyMusique, software originally written so that Linux users could access the iTunes Music Store. The software took advantage of the fact that iTMS transmits DRM-free songs to its customers and relies on iTunes to add that gooey layer of DRM goodness at the client end. PyMusique emulates iTunes and serves as a front end to the store, allowing users to browse and purchase music. When songs are downloaded, however, the program “neglects” to apply the FairPlay DRM.
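
The architectural point deserves emphasis: the store shipped unprotected bytes and trusted the official client to lock them afterward. A minimal model of that trust boundary (all names and the stand-in cipher are invented; this mirrors the flow described above, not Apple’s actual protocol):

    # Store side: delivers the purchased track with no protection applied.
    def store_download(track_id: str) -> bytes:
        return b"AUDIO:" + track_id.encode()  # stand-in for the real file

    def fairplay_like_encrypt(data: bytes) -> bytes:
        return bytes(b ^ 0x5A for b in data)  # toy cipher, not FairPlay

    def official_client(track_id: str) -> bytes:
        # The sanctioned client adds the DRM layer on the user's machine.
        return fairplay_like_encrypt(store_download(track_id))

    def alternative_client(track_id: str) -> bytes:
        # A client speaking the same store protocol can simply keep the
        # plaintext; the server never sees the difference.
        return store_download(track_id)

    print(official_client("song42"))     # scrambled bytes
    print(alternative_client("song42"))  # b'AUDIO:song42'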

To combat phishing, change browser design philosophy

From Federico Biancuzzi’s “Phishing with Rachna Dhamija” (SecurityFocus: 19 June 2006):

We discovered that existing security cues are ineffective, for three reasons:

1. The indicators are ignored (23% of participants in our study did not look at the address bar, status bar, or any SSL indicators).

2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn’t realize that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.

3. The security indicators are trivial to spoof. Many users can’t distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate. This attack fooled more than 80% of participants. …

Currently, I’m working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study that shows that simply showing a user’s history information (for example, “you’ve been to this website many times” or “you’ve never submitted this form before”) can significantly increase a user’s ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I’ve been investigating is techniques to help users recover from errors and to identify when errors are real, or when they are simulated. Many attacks rely on users not being able to make this distinction.
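
That history-based cue is easy to prototype: count page loads and form submissions per origin, and warn when a form appears on an origin the user has never visited. A rough sketch (not Dhamija’s implementation; the thresholds and messages here are invented):

    from collections import Counter
    from urllib.parse import urlparse

    visits = Counter()        # origin -> page loads seen
    form_submits = Counter()  # origin -> forms previously submitted

    def record_visit(url: str):
        visits[urlparse(url).netloc] += 1

    def message_before_submit(url: str) -> str:
        origin = urlparse(url).netloc
        if form_submits[origin]:
            return f"You've submitted this form before ({form_submits[origin]} times)."
        if visits[origin] > 5:
            return "You've been to this website many times, but never submitted this form."
        return "WARNING: you've never been to this website before."

    for _ in range(6):
        record_visit("https://mybank.example/login")

    print(message_before_submit("https://mybank.example/login"))
    # -> frequent visitor, first-time form
    print(message_before_submit("https://rnybank.example/login"))  # lookalike domain
    # -> first-visit warning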

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

Rachna Dhamija: I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customized security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user’s focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password. …

With security skins, we were trying to solve not user authentication, but the reverse problem – server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they have mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like “Server X is authenticated”, or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user’s browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can’t be replayed by an attacker and it won’t reveal anything useful about the user’s password. …
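
A minimal sketch of the matching-image idea: both endpoints derive the same abstract pattern deterministically from the freshly negotiated session key, so a phisher who never completed the mutual authentication cannot render the right image. (The published DSS design generates its images with a visual-hash technique based on Random Art; the crude ASCII pattern below is only a stand-in.)

    import hashlib

    def pattern_from_key(session_key: bytes, width: int = 16, height: int = 8) -> str:
        # Expand the key into enough deterministic bytes, then map each byte
        # to a glyph. Same key in, same "image" out; without the key the
        # pattern is unpredictable.
        digest, counter = b"", 0
        while len(digest) < width * height:
            digest += hashlib.sha256(session_key + counter.to_bytes(4, "big")).digest()
            counter += 1
        glyphs = " .:+*#@"
        rows = ["".join(glyphs[b % len(glyphs)] for b in digest[y * width:(y + 1) * width])
                for y in range(height)]
        return "\n".join(rows)

    # After a successful mutual authentication, both sides hold the same key:
    shared = hashlib.sha256(b"freshly negotiated session secret").digest()
    assert pattern_from_key(shared) == pattern_from_key(shared)  # skins match
    print(pattern_from_key(shared))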

Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can’t simulate is an interface he can’t predict. This is the principle that DSS relies on. We should make it easy for users to personalize their interfaces. Look at how popular screensavers, ringtones, and application skins are – users clearly enjoy the ability to personalize their interfaces. We can take advantage of this fact to build spoof resistant interfaces.

1% create, 10% comment, 89% just use

From Charles Arthur’s “What is the 1% rule?” (Guardian Unlimited: 20 July 2006):

It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will “interact” with it (commenting or offering improvements) and the other 89 will just view it.

It’s a meme that emerges strongly in statistics from YouTube, which in just 18 months has gone from zero to 60% of all online video viewing.

The numbers are revealing: each day there are 100 million downloads and 65,000 uploads – which as Antony Mayfield (at http://open.typepad.com/open) points out, is 1,538 downloads per upload – and 20m unique users per month.

That puts the “creator to consumer” ratio at just 0.5%, but it’s early days yet …
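
The quoted figures are easy to check (the piece doesn’t spell out how it derives the 0.5% creator-to-consumer ratio, so the sketch below stops at what’s directly verifiable):

    downloads_per_day = 100_000_000
    uploads_per_day = 65_000

    print(round(downloads_per_day / uploads_per_day))  # 1538 views per upload
    print(100 * uploads_per_day / downloads_per_day)   # uploads = 0.065% of views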

50% of all Wikipedia article edits are done by 0.7% of users, and more than 70% of all articles have been written by just 1.8% of all users, according to the Church of the Customer blog (http://customerevangelists.typepad.com/blog/).

Earlier metrics garnered from community sites suggested that about 80% of content was produced by 20% of the users, but the growing number of data points is creating a clearer picture of how Web 2.0 groups need to think. For instance, a site that demands too much interaction and content generation from users will see nine out of 10 people just pass by.

Bradley Horowitz of Yahoo points out that much the same applies at Yahoo: in Yahoo Groups, the discussion lists, “1% of the user population might start a group; 10% of the user population might participate actively, and actually author content, whether starting a thread or responding to a thread-in-progress; 100% of the user population benefits from the activities of the above groups,” he noted on his blog (www.elatable.com/blog/?p=5) in February.

Open source turns software into a service industry

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

It’s not necessarily going to be an easy transition. Open source turns software into a service industry. Service-provider firms (think of medical and legal practices) can’t be scaled up by injecting more capital into them; those that try only scale up their fixed costs, overshoot their revenue base, and starve to death. The choices come down to singing for your supper (getting paid through tips and donations), running a corner shop (a small, low-overhead service business), or finding a wealthy patron (some large firm that needs to use and modify open-source software for its business purposes).

Differences between Macintosh & Unix programmers

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

Macintosh programmers are all about the user experience. They’re architects and decorators. They design from the outside in, asking first “What kind of interaction do we want to support?” and then building the application logic behind it to meet the demands of the user-interface design. This leads to programs that are very pretty and infrastructure that is weak and rickety. In one notorious example, as late as Release 9 the MacOS memory manager sometimes required the user to manually deallocate memory by manually chucking out exited but still-resident programs. Unix people are viscerally revolted by this kind of mal-design; they don’t understand how Macintosh people could live with it.

By contrast, Unix people are all about infrastructure. We are plumbers and stonemasons. We design from the inside out, building mighty engines to solve abstractly defined problems (like “How do we get reliable packet-stream delivery from point A to point B over unreliable hardware and links?”). We then wrap thin and often profoundly ugly interfaces around the engines. The commands date(1), find(1), and ed(1) are notorious examples, but there are hundreds of others. Macintosh people are viscerally revolted by this kind of mal-design; they don’t understand how Unix people can live with it. …

In many ways this kind of parochialism has served us well. We are the keepers of the Internet and the World Wide Web. Our software and our traditions dominate serious computing, the applications where 24/7 reliability and minimal downtime is a must. We really are extremely good at building solid infrastructure; not perfect by any means, but there is no other software technical culture that has anywhere close to our track record, and it is one to be proud of. …

To non-technical end users, the software we build tends to be either bewildering and incomprehensible, or clumsy and condescending, or both at the same time. Even when we try to do the user-friendliness thing as earnestly as possible, we’re woefully inconsistent at it. Many of the attitudes and reflexes we’ve inherited from old-school Unix are just wrong for the job. Even when we want to listen to and help Aunt Tillie, we don’t know how — we project our categories and our concerns onto her and give her ‘solutions’ that she finds as daunting as her problems.

The first movie theater

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

APRIL 16, 1902: The Movies

Motion pictures seemed destined to become a passing fad. Only a few years after Edison’s first crude newsreels were screened — mostly in penny arcades, alongside carnival games and other cheap attractions — the novelty had worn off, and Americans were flocking back to live vaudeville.

Then, in spring 1902, Thomas L. Tally opened his Electric Theater in Los Angeles, a radical new venture devoted to movies and other high-tech devices of the era, like audio recordings.

“Tally was the first person to offer a modern multimedia entertainment experience to the American public,” says the film historian Marc Wanamaker. Before long, his successful movie palace produced imitators nationally, which would become known as “nickelodeons.”

The date Silicon Valley (& Intel) was born

From Adam Goodheart’s “10 Days That Changed History” (The New York Times: 2 July 2006):

SEPT. 18, 1957: Revolt of the Nerds

Fed up with their boss, eight lab workers walked off the job on this day in Mountain View, Calif. Their employer, William Shockley, had decided not to continue research into silicon-based semiconductors; frustrated, they decided to undertake the work on their own. The researchers — who would become known as “the traitorous eight” — went on to invent the microprocessor (and to found Intel, among other companies). “Sept. 18 was the birth date of Silicon Valley, of the electronics industry and of the entire digital age,” says Mr. Shockley’s biographer, Joel Shurkin.

DRM converts copyrights into trade secrets

From Mark Sableman’s “Copyright reformers pose tough questions” (St. Louis Journalism Review: June 2005):

It goes by the name “digital rights management” – the effort, already very successful, to give content owners the right to lock down their works technologically. It is what Washington University law professor Charles McManis has characterized as attaching absolute “trade secret” property-type rights to the content formerly subject to the copyright balance between private rights and public use.

Macaulay in 1841: copyright a tax on readers

From Thomas Babington Macaulay’s “A Speech Delivered In The House Of Commons On The 5th Of February 1841” (Prime Palaver #4: 1 September 2001):

The principle of copyright is this. It is a tax on readers for the purpose of giving a bounty to writers. The tax is an exceedingly bad one; it is a tax on one of the most innocent and most salutary of human pleasures; and never let us forget, that a tax on innocent pleasures is a premium on vicious pleasures. I admit, however, the necessity of giving a bounty to genius and learning. In order to give such a bounty, I willingly submit even to this severe and burdensome tax. Nay, I am ready to increase the tax, if it can be shown that by so doing I should proportionally increase the bounty. My complaint is, that my honourable and learned friend doubles, triples, quadruples, the tax, and makes scarcely any perceptible addition to the bounty.

The real purposes of the American school

From John Taylor Gatto’s “Against School” (Harper’s Magazine: September 2003):

Mass schooling of a compulsory nature really got its teeth into the United States between 1905 and 1915, though it was conceived of much earlier and pushed for throughout most of the nineteenth century. The reason given for this enormous upheaval of family life and cultural traditions was, roughly speaking, threefold:

1) To make good people.
2) To make good citizens.
3) To make each person his or her personal best.

These goals are still trotted out today on a regular basis, and most of us accept them in one form or another as a decent definition of public education’s mission, however short schools actually fall in achieving them. But we are dead wrong. Compounding our error is the fact that the national literature holds numerous and surprisingly consistent statements of compulsory schooling’s true purpose. We have, for example, the great H. L. Mencken, who wrote in The American Mercury for April 1924 that the aim of public education is not

to fill the young of the species with knowledge and awaken their intelligence. . . . Nothing could be further from the truth. The aim . . . is simply to reduce as many individuals as possible to the same safe level, to breed and train a standardized citizenry, to put down dissent and originality. That is its aim in the United States . . . and that is its aim everywhere else.

[Alexander Inglis, author of the 1918 book Principles of Secondary Education], for whom a lecture in education at Harvard is named, makes it perfectly clear that compulsory schooling on this continent was intended to be just what it had been for Prussia in the 1820s: a fifth column into the burgeoning democratic movement that threatened to give the peasants and the proletarians a voice at the bargaining table. Modern, industrialized, compulsory schooling was to make a sort of surgical incision into the prospective unity of these underclasses. Divide children by subject, by age-grading, by constant rankings on tests, and by many other more subtle means, and it was unlikely that the ignorant mass of mankind, separated in childhood, would ever reintegrate into a dangerous whole.

Inglis breaks down the purpose – the actual purpose – of modern schooling into six basic functions, any one of which is enough to curl the hair of those innocent enough to believe the three traditional goals listed earlier:

1) The adjustive or adaptive function. Schools are to establish fixed habits of reaction to authority. This, of course, precludes critical judgment completely. It also pretty much destroys the idea that useful or interesting material should be taught, because you can’t test for reflexive obedience until you know whether you can make kids learn, and do, foolish and boring things.

2) The integrating function. This might well be called “the conformity function,” because its intention is to make children as alike as possible. People who conform are predictable, and this is of great use to those who wish to harness and manipulate a large labor force.

3) The diagnostic and directive function. School is meant to determine each student’s proper social role. This is done by logging evidence mathematically and anecdotally on cumulative records. As in “your permanent record.” Yes, you do have one.

4) The differentiating function. Once their social role has been “diagnosed,” children are to be sorted by role and trained only so far as their destination in the social machine merits – and not one step further. So much for making kids their personal best.

5) The selective function. This refers not to human choice at all but to Darwin’s theory of natural selection as applied to what he called “the favored races.” In short, the idea is to help things along by consciously attempting to improve the breeding stock. Schools are meant to tag the unfit – with poor grades, remedial placement, and other punishments – clearly enough that their peers will accept them as inferior and effectively bar them from the reproductive sweepstakes. That’s what all those little humiliations from first grade onward were intended to do: wash the dirt down the drain.

6) The propaedeutic function. The societal system implied by these rules will require an elite group of caretakers. To that end, a small fraction of the kids will quietly be taught how to manage this continuing project, how to watch over and control a population deliberately dumbed down and declawed in order that government might proceed unchallenged and corporations might never want for obedient labor. …

Class may frame the proposition, as when Woodrow Wilson, then president of Princeton University, said the following to the New York City School Teachers Association in 1909: “We want one class of persons to have a liberal education, and we want another class of persons, a very much larger class, of necessity, in every society, to forgo the privileges of a liberal education and fit themselves to perform specific difficult manual tasks.” …

Now, you needn’t have studied marketing to know that there are two groups of people who can always be convinced to consume more than they need to: addicts and children. School has done a pretty good job of turning our children into addicts, but it has done a spectacular job of turning our children into children. Again, this is no accident. Theorists from Plato to Rousseau to our own Dr. Inglis knew that if children could be cloistered with other children, stripped of responsibility and independence, encouraged to develop only the trivializing emotions of greed, envy, jealousy, and fear, they would grow older but never truly grow up. …

Now for the good news. Once you understand the logic behind modern schooling, its tricks and traps are fairly easy to avoid. School trains children to be employees and consumers; teach your own to be leaders and adventurers. School trains children to obey reflexively; teach your own to think critically and independently. Well-schooled kids have a low threshold for boredom; help your own to develop an inner life so that they’ll never be bored. Urge them to take on the serious material, the grown-up material, in history, literature, philosophy, music, art, economics, theology – all the stuff schoolteachers know well enough to avoid. Challenge your kids with plenty of solitude so that they can learn to enjoy their own company, to conduct inner dialogues. Well-schooled people are conditioned to dread being alone, and they seek constant companionship through the TV, the computer, the cell phone, and through shallow friendships quickly acquired and quickly abandoned. Your children should have a more meaningful life, and they can.

First, though, we must wake up to what our schools really are: laboratories of experimentation on young minds, drill centers for the habits and attitudes that corporate society demands. Mandatory education serves children only incidentally; its real purpose is to turn them into servants. Don’t let your own have their childhoods extended, not even for a day. If David Farragut could take command of a captured British warship as a preteen, if Thomas Edison could publish a broadsheet at the age of twelve, if Ben Franklin could apprentice himself to a printer at the same age (then put himself through a course of study that would choke a Yale senior today), there’s no telling what your own kids could do. After a long life, and thirty years in the public school trenches, I’ve concluded that genius is as common as dirt. We suppress our genius only because we haven’t yet figured out how to manage a population of educated men and women. The solution, I think, is simple and glorious. Let them manage themselves.

Why software is difficult to create … & will always be difficult

From Frederick P. Brooks, Jr.’s “No Silver Bullet: Essence and Accidents of Software Engineering” (Computer: Vol. 20, No. 4 [April 1987] pp. 10-19):

The familiar software project, at least as seen by the nontechnical manager, has something of this character; it is usually innocent and straightforward, but is capable of becoming a monster of missed schedules, blown budgets, and flawed products. So we hear desperate cries for a silver bullet–something to make software costs drop as rapidly as computer hardware costs do.

But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity. …

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract in that such a conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems. …

Let us consider the inherent properties of this irreducible essence of modern software systems: complexity, conformity, changeability, and invisibility.

Complexity. Software entities are more complex for their size than perhaps any other human construct because no two parts are alike (at least above the statement level). …

Many of the classic problems of developing software products derive from this essential complexity and its nonlinear increases with size. From the complexity comes the difficulty of communication among team members, which leads to product flaws, cost overruns, schedule delays. From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability. From complexity of function comes the difficulty of invoking function, which makes programs hard to use. From complexity of structure comes the difficulty of extending programs to new functions without creating side effects. From complexity of structure come the unvisualized states that constitute security trapdoors.

Not only technical problems, but management problems as well come from the complexity. It makes overview hard, thus impeding conceptual integrity. It makes it hard to find and control all the loose ends. It creates the tremendous learning and understanding burden that makes personnel turnover a disaster.

Conformity. … No such faith comforts the software engineer. Much of the complexity that he must master is arbitrary complexity, forced without rhyme or reason by the many human institutions and systems to which his interfaces must conform. …

Changeability. … All successful software gets changed. Two processes are at work. First, as a software product is found to be useful, people try it in new cases at the edge of or beyond the original domain. The pressures for extended function come chiefly from users who like the basic function and invent new uses for it.

Second, successful software survives beyond the normal life of the machine vehicle for which it is first written. If not new computers, then at least new disks, new displays, new printers come along; and the software must be conformed to its new vehicles of opportunity. …

Invisibility. Software is invisible and unvisualizable. …

The reality of software is not inherently embedded in space. Hence, it has no ready geometric representation in the way that land has maps, silicon chips have diagrams, computers have connectivity schematics. As soon as we attempt to diagram software structure, we find it to constitute not one, but several, general directed graphs superimposed one upon another. The several graphs may represent the flow of control, the flow of data, patterns of dependency, time sequence, name-space relationships. These graphs are usually not even planar, much less hierarchical. …
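
That claim is concrete even at toy scale: a three-statement program already has a control-flow graph and a data-flow graph that disagree, so no single diagram captures the structure. A small illustration using plain adjacency lists:

    # Three statements:
    #   s1: x = read()
    #   s2: y = 5
    #   s3: print(x + y)

    # Control flow follows textual order:
    control_flow = {"s1": ["s2"], "s2": ["s3"], "s3": []}

    # Data flow follows def-use chains: s3 needs x from s1 and y from s2,
    # while s1 and s2 are independent of each other.
    data_flow = {"s1": ["s3"], "s2": ["s3"], "s3": []}

    # Two different directed graphs over the same nodes, superimposed on one
    # program; add dependency and timing graphs and the picture only gets
    # harder to draw, which is Brooks's point.
    print(control_flow == data_flow)  # False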

Past Breakthroughs Solved Accidental Difficulties

If we examine the three steps in software technology development that have been most fruitful in the past, we discover that each attacked a different major difficulty in building software, but that those difficulties have been accidental, not essential, difficulties. …

High-level languages. Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. …

What does a high-level language accomplish? It frees a program from much of its accidental complexity. …

Time-sharing. Time-sharing brought a major improvement in the productivity of programmers and in the quality of their product, although not so large as that brought by high-level languages.

Time-sharing attacks a quite different difficulty. Time-sharing preserves immediacy, and hence enables one to maintain an overview of complexity. …

Unified programming environments. Unix and Interlisp, the first integrated programming environments to come into widespread use, seem to have improved productivity by integral factors. Why?

They attack the accidental difficulties that result from using individual programs together, by providing integrated libraries, unified file formats, and pipes and filters. As a result, conceptual structures that in principle could always call, feed, and use one another can indeed easily do so in practice.

Ridiculous trademark and fair use stories

From Mark Sableman’s “Copyright reformers pose tough questions” (St. Louis Journalism Review: June 2005):

Kembrew McLeod of the University of Iowa explained how as a graduate student he applied for a federal trademark registration on the phrase “freedom of expression” as a joke, not really expecting that even a green-eye-shaded trademark examiner would approve it. The result? He got the trademark registration – and his certificate appears on the frontispiece of his current book about the abuse of intellectual property – a book titled, “Freedom of Expression™.” …

Victor Navasky, editor of The Nation magazine, told the story of his copyright case, which became a U.S. Supreme Court landmark – a story that from his perspective involved his use of only a tiny newsworthy portion of Gerald Ford’s memoirs, a book that he considered “designed to put you to sleep.” The resulting whirlwind lawsuit, however, put no one to sleep, and led to a 1985 decision that made copyright “fair use” determinations more difficult than ever. …

Just how big is YouTube?

From Reuters’s “YouTube serves up 100 mln videos a day” (16 July 2006):

YouTube, the leader in Internet video search, said on Sunday viewers are now watching more than 100 million videos per day on its site, marking the surge in demand for its “snack-sized” video fare.

Since springing out of nowhere late last year, YouTube has come to hold the leading position in online video with 29 percent of the U.S. multimedia entertainment market, according to the latest weekly data from Web measurement site Hitwise.

YouTube videos account for 60 percent of all videos watched online, the company said. …

In June, 2.5 billion videos were watched on YouTube, which is based in San Mateo, California and has just over 30 employees. More than 65,000 videos are now uploaded daily to YouTube, up from around 50,000 in May, the company said.

YouTube boasts nearly 20 million unique users per month, according to Nielsen//NetRatings, another Internet audience measurement firm.
