Microsoft’s real customers

From James Fallows’s “Inside the Leviathan: A short and stimulating brush with Microsoft’s corporate culture” (The Atlantic: February 2000):

Financial analysts have long recognized that Microsoft’s profit really comes from two sources. One is operating systems (Windows, in all its varieties), and the other is the Office suite of programs. Everything else — Flight Simulator, Slate, MSNBC, mice and keyboards — is financially meaningless. What these two big categories have in common is that individuals are not the significant customers. Operating systems are sold mainly to computer companies such as Dell and Compaq, which pass them pre-loaded to individual consumers. And the main paying customers for Office are big corporations (or what the high-tech world calls LORGs, for “large-size organizations”), which may buy thousands of “seats” for their employees at hundreds of dollars apiece. Product planning, therefore, is focused with admirable clarity on those whose decisions really matter to Microsoft — the information-technology manager at Chevron or the U.S. Department of Agriculture, for example — rather than some writer with an idea about how to make his colleagues happier with a program.

The future of news as shown by the 2008 election

From Steven Berlin Johnson’s “Old Growth Media And The Future Of News” (StevenBerlinJohnson.com: 14 March 2009):

The first Presidential election that I followed in an obsessive way was the 1992 election that Clinton won. I was as compulsive a news junkie about that campaign as I was about the Mac in college: every day the Times would have a handful of stories about the campaign stops or debates or latest polls. Every night I would dutifully tune into Crossfire to hear what the punditocracy had to say about the day’s events. I read Newsweek and Time and the New Republic, and scoured the New Yorker for its occasional political pieces. When the debates aired, I’d watch religiously and stay up late soaking in the commentary from the assembled experts.

That was hardly a desert, to be sure. But compare it to the information channels that were available to me following the 2008 election. Everything I relied on in 1992 was still around, of course – except for the late, lamented Crossfire – but it was now part of a vast new forest of news, data, opinion, satire – and perhaps most importantly, direct experience. Sites like Talking Points Memo and Politico did extensive direct reporting. Daily Kos provided in-depth surveys and field reports on state races that the Times would never have had the ink to cover. Individual bloggers like Andrew Sullivan responded to each twist in the news cycle; HuffPo culled the most provocative opinion pieces from the rest of the blogosphere. Nate Silver at fivethirtyeight.com did meta-analysis of polling that blew away anything William Schneider dreamed of doing on CNN in 1992. When the economy imploded in September, I followed economist bloggers like Brad DeLong to get their expert take on the candidates’ responses to the crisis. (Yochai Benkler talks about this phenomenon of academics engaging with the news cycle in a smart response here.) I watched the debates with a thousand virtual friends live-Twittering alongside me on the couch. All this was filtered and remixed through the extraordinary political satire of Jon Stewart and Stephen Colbert, which I watched via viral clips on the Web as much as I watched on TV.

What’s more: the ecosystem of political news also included information coming directly from the candidates. Think about the Philadelphia race speech, arguably one of the two or three most important events in the whole campaign. Eight million people watched it on YouTube alone. Now, what would have happened to that speech had it been delivered in 1992? Would any of the networks have aired it in its entirety? Certainly not. It would have been reduced to a minute-long soundbite on the evening news. CNN probably would have aired it live, which might have meant that 500,000 people caught it. Fox News and MSNBC? They didn’t exist yet. A few serious newspapers might have reprinted it in its entirety, which might have added another million to the audience. Online perhaps someone would have uploaded a transcript to Compuserve or The Well, but that’s about the most we could have hoped for.

There is no question in my mind that the political news ecosystem of 2008 was far superior to that of 1992: I had more information about the state of the race, the tactics of both campaigns, the issues they were wrestling with, the mind of the electorate in different regions of the country. And I had more immediate access to the candidates themselves: their speeches and unscripted exchanges; their body language and position papers.

The old line on this new diversity was that it was fundamentally parasitic: bloggers were interesting, sure, but if the traditional news organizations went away, the bloggers would have nothing to write about, since most of what they did was link to professionally reported stories. Let me be clear: traditional news organizations were an important part of the 2008 ecosystem, no doubt about it. … But no reasonable observer of the political news ecosystem could describe all the new species as parasites on the traditional media. Imagine how many barrels of ink were purchased to print newspaper commentary on Obama’s San Francisco gaffe about people “clinging to their guns and religion.” But the original reporting on that quote didn’t come from the Times or the Journal; it came from a “citizen reporter” named Mayhill Fowler, part of the Off The Bus project sponsored by Jay Rosen’s NewAssignment.Net and The Huffington Post.

How to increase donations on non-profit websites

From Jakob Nielsen’s “Donation Usability: Increasing Online Giving to Non-Profits and Charities” (Alertbox: 30 March 2009):

We asked participants what information they want to see on non-profit websites before they decide whether to donate. Their answers fell into 4 broad categories, 2 of which were the most heavily requested:

  • The organization’s mission, goals, objectives, and work.
  • How it uses donations and contributions.

That is: What are you trying to achieve, and how will you spend my money?

Sadly, only 43% of the sites we studied answered the first question on their homepage. Further, only a ridiculously low 4% answered the second question on the homepage. Although organizations typically provided these answers somewhere within the site, users often had problems finding this crucial information.

In choosing between 2 charities, people referred to 5 categories of information. However, an organization’s mission, goals, objectives, and work was by far the most important. Indeed, it was 3.6 times as important as the runner-up issue, which was the organization’s presence in the user’s own community.

ODF compared & contrasted with OOXML

From Sam Hiser’s “Achieving Openness: A Closer Look at ODF and OOXML” (ONLamp.com: 14 June 2007):

An open, XML-based standard for displaying and storing data files (text documents, spreadsheets, and presentations) offers a new and promising approach to data storage and document exchange among office applications. A comparison of the two XML-based formats, OpenDocument Format (“ODF”) and Office Open XML (“OOXML”), across widely accepted “openness” criteria has revealed substantial differences, including the following:

  • ODF is developed and maintained in an open, multi-vendor, multi-stakeholder process that protects against control by a single organization. OOXML is less open in its development and maintenance, despite being submitted to a formal standards body, because control of the standard ultimately rests with one organization.
  • ODF is the only openly available standard, published fully in a document that is freely available and easy to comprehend. This openness is reflected in the number of competing applications in which ODF is already implemented. Unlike ODF, OOXML’s complexity, extraordinary length, technical omissions, and single-vendor dependencies combine to make alternative implementation unattractive as well as legally and practically impossible.
  • ODF is the only format unencumbered by intellectual property rights (IPR) restrictions on its use in other software, as certified by the Software Freedom Law Center. Conversely, many elements designed into the OOXML formats but left undefined in the OOXML specification require behaviors upon document files that only Microsoft Office applications can provide. This makes data inaccessible and breaks work group productivity whenever alternative software is used.
  • ODF offers interoperability with ODF-compliant applications on most of the common operating system platforms. OOXML is designed to operate fully within the Microsoft environment only. Though it will work elegantly across the many products in the Microsoft catalog, OOXML ignores accepted standards and best practices regarding its use of XML.

Overall, a comparison of both formats reveals significant differences in their levels of openness. While ODF is revealed as sufficiently open across all four key criteria, OOXML shows relative weakness in each criterion and exhibits fundamental flaws that undermine its candidacy as a global standard.
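To make the openness point concrete: an ODF text document (.odt) is simply a ZIP archive of publicly documented XML files, which is why independent implementations are feasible. Here is a minimal Python sketch, assuming a hypothetical file named example.odt, that reads the paragraph text out of such a document using only the standard library:

    import zipfile
    import xml.etree.ElementTree as ET

    # The ODF text namespace, as defined in the OASIS specification.
    ODF_TEXT_NS = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}"

    def extract_paragraphs(odt_path):
        """Yield the plain text of each paragraph in an ODF text document."""
        with zipfile.ZipFile(odt_path) as odt:
            # content.xml holds the document body.
            root = ET.fromstring(odt.read("content.xml"))
            for para in root.iter(ODF_TEXT_NS + "p"):
                yield "".join(para.itertext())

    for text in extract_paragraphs("example.odt"):
        print(text)

Nothing here depends on a vendor SDK; the published schema and ordinary tools suffice, which is exactly the property the openness criteria above are probing.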

The future of security

From Bruce Schneier’s “Security in Ten Years” (Crypto-Gram: 15 December 2007):

Bruce Schneier: … The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.

Marcus Ranum: … Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.

Bruce Schneier: … I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

Bruce Schneier on security & crime economics

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Basically, you’re asking if crime pays. Most of the time, it doesn’t, and the problem is the different risk characteristics. If I make a computer security mistake — in a book, for a consulting client, at BT — it’s a mistake. It might be expensive, but I learn from it and move on. As a criminal, a mistake likely means jail time — time I can’t spend earning my criminal living. For this reason, it’s hard to improve as a criminal. And this is why there are more criminal masterminds in the movies than in real life.

Crime has been part of our society since our species invented society, and it’s not going away anytime soon. The real question is, “Why is there so much crime and hacking on the Internet, and why isn’t anyone doing anything about it?”

The answer is in the economics of Internet vulnerabilities and attacks: the organizations that are in the position to mitigate the risks aren’t responsible for the risks. This is an externality, and if you want to fix the problem you need to address it. In this essay (more here), I recommend liabilities; companies need to be liable for the effects of their software flaws. A related problem is that the Internet security market is a lemons market (discussed here), but there are strategies for dealing with that, too.

Matthew, the blind phone phreaker

From Kevin Poulsen’s “Teenage Hacker Is Blind, Brash and in the Crosshairs of the FBI” (Wired: 29 February 2008):

At 4 in the morning of May 1, 2005, deputies from the El Paso County Sheriff’s Office converged on the suburban Colorado Springs home of Richard Gasper, a TSA screener at the local Colorado Springs Municipal Airport. They were expecting to find a desperate, suicidal gunman holding Gasper and his daughter hostage.

“I will shoot,” the gravelly voice had warned, in a phone call to police minutes earlier. “I’m not afraid. I will shoot, and then I will kill myself, because I don’t care.”

But instead of a gunman, it was Gasper himself who stepped into the glare of police floodlights. Deputies ordered Gasper’s hands up and held him for 90 minutes while searching the house. They found no armed intruder, no hostages bound in duct tape. Just Gasper’s 18-year-old daughter and his baffled parents.

A federal Joint Terrorism Task Force would later conclude that Gasper had been the victim of a new type of nasty hoax, called “swatting,” that was spreading across the United States. Pranksters were phoning police with fake murders and hostage crises, spoofing their caller IDs so the calls appeared to be coming from inside the target’s home. The result: police SWAT teams rolling to the scene, sometimes bursting into homes, guns drawn.

Now the FBI thinks it has identified the culprit in the Colorado swatting as a 17-year-old East Boston phone phreak known as “Li’l Hacker.” Because he’s underage, Wired.com is not reporting Li’l Hacker’s last name. His first name is Matthew, and he poses a unique challenge to the federal justice system, because he is blind from birth.

Interviews by Wired.com with Matt and his associates, and a review of court documents, FBI reports and audio recordings, paint a picture of a young man with an uncanny talent for quick telephone con jobs. Able to commit vast amounts of information to memory instantly, Matt has mastered the intricacies of telephone switching systems, while developing an innate understanding of human psychology and organizational culture — knowledge that he uses to manipulate his patsies and torment his foes.

Matt says he ordered phone company switch manuals off the internet and paid to have them translated into Braille. He became a regular caller to internal telephone company lines, where he’d masquerade as an employee to perform tricks like tracing telephone calls, getting free phone features, obtaining confidential customer information and disconnecting his rivals’ phones.

It was, relatively speaking, mild stuff. The teen, though, soon fell in with a bad crowd. The party lines were dominated by a gang of half-a-dozen miscreants who informally called themselves the “Wrecking Crew” and “The Cavalry.”

By then, Matt’s reputation had taken on a life of its own, and tales of some of his hacks — perhaps apocryphal — are now legends. According to Daniels, he hacked his school’s PBX so that every phone would ring at once. Another time, he took control of a hotel elevator, sending it up and down over and over again. One story has it that Matt phoned a telephone company frame room worker at home in the middle of the night, and persuaded him to get out of bed and return to work to disconnect someone’s phone.

The NSA and threats to privacy

From James Bamford’s “Big Brother Is Listening” (The Atlantic: April 2006):

This legislation, the 1978 Foreign Intelligence Surveillance Act, established the FISA court—made up of eleven judges handpicked by the chief justice of the United States—as a secret part of the federal judiciary. The court’s job is to decide whether to grant warrants requested by the NSA or the FBI to monitor communications of American citizens and legal residents. The law allows the government up to three days after it starts eavesdropping to ask for a warrant; every violation of FISA carries a penalty of up to five years in prison. Between May 18, 1979, when the court opened for business, and the end of 2004, it granted 18,742 NSA and FBI applications; it turned down only four outright.

Such facts worry Jonathan Turley, a George Washington University law professor who worked for the NSA as an intern while in law school in the 1980s. The FISA “courtroom,” hidden away on the top floor of the Justice Department building (because even its location is supposed to be secret), is actually a heavily protected, windowless, bug-proof installation known as a Sensitive Compartmented Information Facility, or SCIF.

It is true that the court has been getting tougher. From 1979 through 2000, it modified only two out of 13,087 warrant requests. But from the start of the Bush administration, in 2001, the number of modifications increased to 179 out of 5,645 requests. Most of those—173—involved what the court terms “substantive modifications.”

Contrary to popular perception, the NSA does not engage in “wiretapping”; it collects signals intelligence, or “sigint.” In contrast to the image we have from movies and television of an FBI agent placing a listening device on a target’s phone line, the NSA intercepts entire streams of electronic communications containing millions of telephone calls and e-mails. It runs the intercepts through very powerful computers that screen them for particular names, telephone numbers, Internet addresses, and trigger words or phrases. Any communications containing flagged information are forwarded by the computer for further analysis.
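To make this screening step concrete, here is a minimal Python sketch of dictionary-based flagging over a stream of messages; the watch-list terms (borrowed from the Microturbo anecdote below) and the intercepts are invented for illustration, and real sigint filtering is far more sophisticated:

    # Invented watch list; real systems match names, numbers, and phrases.
    WATCH_LIST = {"microturbo", "c-802"}

    def flag_messages(messages, watch_list=WATCH_LIST):
        """Yield (message, matched terms) for messages containing flagged terms."""
        for msg in messages:
            hits = set(msg.lower().split()) & watch_list
            if hits:
                yield msg, hits

    intercepts = [
        "Letter of credit issued for C-802 engine order",
        "Lunch at noon?",
    ]
    for msg, hits in flag_messages(intercepts):
        print("FLAGGED (" + ", ".join(sorted(hits)) + "): " + msg)

Only the first message is forwarded for analysis; the second passes through unflagged, which is the point of screening millions of communications down to a reviewable few.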

Names and information on the watch lists are shared with the FBI, the CIA, the Department of Homeland Security, and foreign intelligence services. Once a person’s name is in the files, even if nothing incriminating ever turns up, it will likely remain there forever. There is no way to request removal, because there is no way to confirm that a name is on the list.

In December of 1997, in a small factory outside the southern French city of Toulouse, a salesman got caught in the NSA’s electronic web. Agents working for the NSA’s British partner, the Government Communications Headquarters, learned of a letter of credit, valued at more than $1.1 million, issued by Iran’s defense ministry to the French company Microturbo. According to NSA documents, both the NSA and the GCHQ concluded that Iran was attempting to secretly buy from Microturbo an engine for the embargoed C-802 anti-ship missile. Faxes zapping back and forth between Toulouse and Tehran were intercepted by the GCHQ, which sent them on not just to the NSA but also to the Canadian and Australian sigint agencies, as well as to Britain’s MI6. The NSA then sent the reports on the salesman making the Iranian deal to a number of CIA stations around the world, including those in Paris and Bonn, and to the U.S. Commerce Department and the Customs Service. Probably several hundred people in at least four countries were reading the company’s communications.

Such events are central to the current debate involving the potential harm caused by the NSA’s warrantless domestic eavesdropping operation. Even though the salesman did nothing wrong, his name made its way into the computers and onto the watch lists of intelligence, customs, and other secret and law-enforcement organizations around the world. Maybe nothing will come of it. Maybe the next time he tries to enter the United States or Britain he will be denied, without explanation. Maybe he will be arrested. As the domestic eavesdropping program continues to grow, such uncertainties may plague innocent Americans whose names are being run through the supercomputers even though the NSA has not met the established legal standard for a search warrant. It is only when such citizens are turned down while applying for a job with the federal government—or refused when seeking a Small Business Administration loan, or turned back by British customs agents when flying to London on vacation, or even placed on a “no-fly” list—that they will realize that something is very wrong. But they will never learn why.

General Michael Hayden, director of the NSA from 1999 to 2005 and now principal deputy director of national intelligence, noted in 2002 that during the 1990s, e-communications “surpassed traditional communications. That is the same decade when mobile cell phones increased from 16 million to 741 million—an increase of nearly 50 times. That is the same decade when Internet users went from about 4 million to 361 million—an increase of over 90 times. Half as many land lines were laid in the last six years of the 1990s as in the whole previous history of the world. In that same decade of the 1990s, international telephone traffic went from 38 billion minutes to over 100 billion. This year, the world’s population will spend over 180 billion minutes on the phone in international calls alone.”

Intercepting communications carried by satellite is fairly simple for the NSA. The key conduits are the thirty Intelsat satellites that ring the Earth, 22,300 miles above the equator. Many communications from Europe, Africa, and the Middle East to the eastern half of the United States, for example, are first uplinked to an Intelsat satellite and then downlinked to AT&T’s ground station in Etam, West Virginia. From there, phone calls, e-mails, and other communications travel on to various parts of the country. To listen in on that rich stream of information, the NSA built a listening post fifty miles away, near Sugar Grove, West Virginia. Consisting of a group of very large parabolic dishes, hidden in a heavily forested valley and surrounded by tall hills, the post can easily intercept the millions of calls and messages flowing every hour into the Etam station. On the West Coast, high on the edge of a bluff overlooking the Okanogan River, near Brewster, Washington, is the major commercial downlink for communications to and from Asia and the Pacific. Consisting of forty parabolic dishes, it is reportedly the largest satellite antenna farm in the Western Hemisphere. A hundred miles to the south, collecting every whisper, is the NSA’s western listening post, hidden away on a 324,000-acre Army base in Yakima, Washington. The NSA posts collect the international traffic beamed down from the Intelsat satellites over the Atlantic and Pacific. But each also has a number of dishes that appear to be directed at domestic telecommunications satellites.

Until recently, most international telecommunications flowing into and out of the United States traveled by satellite. But faster, more reliable undersea fiber-optic cables have taken the lead, and the NSA has adapted. The agency taps into the cables that don’t reach our shores by using specially designed submarines, such as the USS Jimmy Carter, to attach a complex “bug” to the cable itself. This is difficult, however, and undersea taps are short-lived because the batteries last only a limited time. The fiber-optic transmission cables that enter the United States from Europe and Asia can be tapped more easily at the landing stations where they come ashore. With the acquiescence of the telecommunications companies, it is possible for the NSA to attach monitoring equipment inside the landing station and then run a buried encrypted fiber-optic “backhaul” line to NSA headquarters at Fort Meade, Maryland, where the river of data can be analyzed by supercomputers in near real time.

Tapping into the fiber-optic network that carries the nation’s Internet communications is even easier, as much of the information transits through just a few “switches” (similar to the satellite downlinks). Among the busiest are MAE East (Metropolitan Area Ethernet), in Vienna, Virginia, and MAE West, in San Jose, California, both owned by Verizon. By accessing the switch, the NSA can see who’s e-mailing with whom over the Internet cables and can copy entire messages. Last September, the Federal Communications Commission further opened the door for the agency. The 1994 Communications Assistance for Law Enforcement Act required telephone companies to rewire their networks to provide the government with secret access. The FCC has now extended the act to cover “any type of broadband Internet access service” and the new Internet phone services—and ordered company officials never to discuss any aspect of the program.

The National Security Agency was born in absolute secrecy. Unlike the CIA, which was created publicly by a congressional act, the NSA was brought to life by a top-secret memorandum signed by President Truman in 1952, consolidating the country’s various military sigint operations into a single agency. Even its name was secret, and only a few members of Congress were informed of its existence—and they received no information about some of its most important activities. Such secrecy has lent itself to abuse.

During the Vietnam War, for instance, the agency was heavily involved in spying on the domestic opposition to the government. Many of the Americans on the watch lists of that era were there solely for having protested against the war. … Even so much as writing about the NSA could land a person on a watch list.

For instance, during World War I, the government read and censored thousands of telegrams—the e-mail of the day—sent hourly by telegraph companies. Though the end of the war brought with it a reversion to the Radio Act of 1912, which guaranteed the secrecy of communications, the State and War Departments nevertheless joined together in May of 1919 to create America’s first civilian eavesdropping and code-breaking agency, nicknamed the Black Chamber. By arrangement, messengers visited the telegraph companies each morning and took bundles of hard-copy telegrams to the agency’s offices across town. These copies were returned before the close of business that day.

A similar tale followed the end of World War II. In August of 1945, President Truman ordered an end to censorship. That left the Signal Security Agency (the military successor to the Black Chamber, which was shut down in 1929) without its raw intelligence—the telegrams provided by the telegraph companies. The director of the SSA sought access to cable traffic through a secret arrangement with the heads of the three major telegraph companies. The companies agreed to turn all telegrams over to the SSA, under a plan code-named Operation Shamrock. It ran until the government’s domestic spying programs were publicly revealed, in the mid-1970s.

Frank Church, the Idaho Democrat who led the first probe into the National Security Agency, warned in 1975 that the agency’s capabilities

“could be turned around on the American people, and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it is done, is within the reach of the government to know. Such is the capacity of this technology.”

How technologies have changed politics, & how Obama uses tech

From Marc Ambinder’s “HisSpace” (The Atlantic: June 2008):

Improvements to the printing press helped Andrew Jackson form and organize the Democratic Party, and he courted newspaper editors and publishers, some of whom became members of his Cabinet, with a zeal then unknown among political leaders. But the postal service, which was coming into its own as he reached for the presidency, was perhaps even more important to his election and public image. Jackson’s exploits in the War of 1812 became well known thanks in large measure to the distribution network that the postal service had created, and his 1828 campaign—among the first to distribute biographical pamphlets by mail—reinforced his heroic image. As president, he turned the office of postmaster into a patronage position, expanded the postal network further—the historian Richard John has pointed out that by the middle of Jackson’s first term, there were 2,000 more postal workers in America than soldiers in the Army—and used it to keep his populist base rallied behind him.

Abraham Lincoln became a national celebrity, according to the historian Allen Guelzo’s new book, Lincoln and Douglas: The Debates That Defined America, when transcripts of those debates were reprinted nationwide in newspapers, which were just then reaching critical mass in distribution beyond the few Eastern cities where they had previously flourished. Newspapers enabled Lincoln, an odd-looking man with a reed-thin voice, to become a viable national candidate …

Franklin Delano Roosevelt used radio to make his case for a dramatic redefinition of government itself, quickly mastering the informal tone best suited to the medium. In his fireside chats, Roosevelt reached directly into American living rooms at pivotal moments of his presidency. His talks—which by turns soothed, educated, and pressed for change—held the New Deal together.

And of course John F. Kennedy famously rode into the White House thanks in part to the first televised presidential debate in U.S. history, in which his keen sense of the medium’s visual impact, plus a little makeup, enabled him to fashion the look of a winner (especially when compared with a pale and haggard Richard Nixon). Kennedy used TV primarily to create and maintain his public image, not as a governing tool, but he understood its strengths and limitations before his peers did …

[Obama’s] speeches play well on YouTube, which allows for more than the five-second sound bites that have characterized the television era. And he recognizes the importance of transparency and consistency at a time when access to everything a politician has ever said is at the fingertips of every voter. But as Joshua Green notes in the preceding pages, Obama has truly set himself apart by his campaign’s use of the Internet to organize support. No other candidate in this or any other election has ever built a support network like Obama’s. The campaign’s 8,000 Web-based affinity groups, 750,000 active volunteers, and 1,276,000 donors have provided him with an enormous financial and organizational advantage in the Democratic primary.

What Obama seems to promise is, at its outer limits, a participatory democracy in which the opportunities for participation have been radically expanded. He proposes creating a public, Google-like database of every federal dollar spent. He aims to post every piece of non-emergency legislation online for five days before he signs it so that Americans can comment. A White House blog—also with comments—would be a near certainty. Overseeing this new apparatus would be a chief technology officer.

There is some precedent for Obama’s vision. The British government has already used the Web to try to increase interaction with its citizenry, to limited effect. In November 2006, it established a Web site for citizens seeking redress from their government, http://petitions.pm.gov.uk/. More than 29,000 petitions have since been submitted, and about 9.5 percent of Britons have signed at least one of them. The petitions range from the class-conscious (“Order a independent report to identify reasons that the living conditions of working class people are poor in relation to higher classes”) to the parochial (“We the undersigned petition the Prime Minister to re-open sunderland ice rink”).

The 80/20 rule

From F. John Reh’s “How the 80/20 rule can help you be more effective” (About.com):

In 1906, Italian economist Vilfredo Pareto created a mathematical formula to describe the unequal distribution of wealth in his country, observing that twenty percent of the people owned eighty percent of the wealth. In the late 1940s, Dr. Joseph M. Juran inaccurately attributed the 80/20 Rule to Pareto, calling it Pareto’s Principle. …

Quality Management pioneer Dr. Joseph Juran, working in the US in the 1930s and ’40s, recognized a universal principle he called the “vital few and trivial many” and reduced it to writing. …

As a result, Dr. Juran’s observation of the “vital few and trivial many”, the principle that 20 percent of something is always responsible for 80 percent of the results, became known as Pareto’s Principle or the 80/20 Rule. …

The 80/20 Rule means that in anything a few (20 percent) are vital and many (80 percent) are trivial. In Pareto’s case it meant 20 percent of the people owned 80 percent of the wealth. In Juran’s initial work he identified 20 percent of the defects causing 80 percent of the problems. Project Managers know that 20 percent of the work (the first 10 percent and the last 10 percent) consumes 80 percent of your time and resources. You can apply the 80/20 Rule to almost anything, from the science of management to the physical world.

You know 20 percent of your stock takes up 80 percent of your warehouse space and that 80 percent of your stock comes from 20 percent of your suppliers. Also, 80 percent of your sales will come from 20 percent of your sales staff. Twenty percent of your staff will cause 80 percent of your problems, but another 20 percent of your staff will provide 80 percent of your production. It works both ways.

The value of the Pareto Principle for a manager is that it reminds you to focus on the 20 percent that matters. Of the things you do during your day, only 20 percent really matter. Those 20 percent produce 80 percent of your results.
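As a minimal illustration of putting the rule to work, the following Python sketch (with invented sales figures) finds the “vital few” items that account for 80 percent of a total:

    def vital_few(totals, threshold=0.80):
        """Return the items that together account for `threshold` of the total."""
        ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
        target = threshold * sum(totals.values())
        vital, running = [], 0.0
        for item, value in ranked:
            vital.append(item)
            running += value
            if running >= target:
                break
        return vital

    sales = {"A": 500, "B": 300, "C": 90, "D": 50, "E": 30, "F": 20, "G": 10}
    top = vital_few(sales)
    print(f"{len(top)}/{len(sales)} items produce 80% of sales: {top}")

Here two of seven items cover the 80 percent threshold, the kind of lopsided distribution the rule predicts.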

Feral cities of the future

From Richard J. Norton’s “Feral cities – The New Strategic Environment” (Naval War College Review: Autumn 2003):

Imagine a great metropolis covering hundreds of square miles. Once a vital component in a national economy, this sprawling urban environment is now a vast collection of blighted buildings, an immense petri dish of both ancient and new diseases, a territory where the rule of law has long been replaced by near anarchy in which the only security available is that which is attained through brute power. Such cities have been routinely imagined in apocalyptic movies and in certain science-fiction genres, where they are often portrayed as gigantic versions of T. S. Eliot’s Rat’s Alley. Yet this city would still be globally connected. It would possess at least a modicum of commercial linkages, and some of its inhabitants would have access to the world’s most modern communication and computing technologies. It would, in effect, be a feral city.

The putative “feral city” is (or would be) a metropolis with a population of more than a million people in a state the government of which has lost the ability to maintain the rule of law within the city’s boundaries yet remains a functioning actor in the greater international system.

In a feral city social services are all but nonexistent, and the vast majority of the city’s occupants have no access to even the most basic health or security assistance. There is no social safety net. Human security is for the most part a matter of individual initiative. Yet a feral city does not descend into complete, random chaos. Some elements, be they criminals, armed resistance groups, clans, tribes, or neighborhood associations, exert various degrees of control over portions of the city. Intercity, city-state, and even international commercial transactions occur, but corruption, avarice, and violence are their hallmarks. A feral city experiences massive levels of disease and creates enough pollution to qualify as an international environmental disaster zone. Most feral cities would suffer from massive urban hypertrophy, covering vast expanses of land. The city’s structures range from once-great buildings symbolic of state power to the meanest shantytowns and slums. Yet even under these conditions, these cities continue to grow, and the majority of occupants do not voluntarily leave.

Feral cities would exert an almost magnetic influence on terrorist organizations. Such megalopolises will provide exceptionally safe havens for armed resistance groups, especially those having cultural affinity with at least one sizable segment of the city’s population. The efficacy and portability of the most modern computing and communication systems allow the activities of a worldwide terrorist, criminal, or predatory and corrupt commercial network to be coordinated and directed with equipment easily obtained on the open market and packed into a minivan. The vast size of a feral city, with its buildings, other structures, and subterranean spaces, would offer nearly perfect protection from overhead sensors, whether satellites or unmanned aerial vehicles. The city’s population represents for such entities a ready source of recruits and a built-in intelligence network. Collecting human intelligence against them in this environment is likely to be a daunting task. Should the city contain airport or seaport facilities, such an organization would be able to import and export a variety of items. The feral city environment will actually make it easier for an armed resistance group that does not already have connections with criminal organizations to make them. The linkage between such groups, once thought to be rather unlikely, is now so commonplace as to elicit no comment.
