Face recognition software as an example of “function creep”

From Technology Review’s “Creepy Functions”:

Consider one example of function creep. The Electoral Commission of Uganda has retained Viisage Technology to implement a face recognition system capable of enrolling 10 million voters in 60 days. The goal is to reduce voter registration fraud. But Woodward notes that the system might also be put to work fingering political opponents of the regime. And Uganda probably isn’t the first country that springs to mind when someone says “due process” or “civil rights.”

From Technology Review’s “Big Brother Logs On”:

Take the fact that the faces of a large portion of the driving population are becoming digitized by motor vehicles agencies and placed into databases, says Steinhardt. It isn’t much of a stretch to extend the system to a Big Brother-like nationwide identification and tracking network. Or consider that the Electoral Commission of Uganda has retained Viisage Technology to implement a “turnkey face recognition system” capable of enrolling 10 million voter registrants within 60 days. By generating a database containing the faceprint of every one of the country’s registered voters, and combining it with algorithms able to scour all 10 million images within six seconds to find a match, the commission hopes to reduce voter registration fraud. But once such a database is compiled, notes John Woodward, a former CIA operations officer who managed spies in several Asian countries and who’s now an analyst with the Rand Corporation, it could be employed for tracking and apprehending known or suspected political foes. Woodward calls that “function creep.”
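The matching step Woodward describes can be pictured as a nearest-neighbor search over a table of faceprints. Below is a minimal Python sketch, assuming faceprints are fixed-length embedding vectors produced by some face-recognition model; the 128-dimension size, the 100,000-row database, and the similarity threshold are illustrative stand-ins, not details of Viisage’s actual system.

```python
# Minimal sketch of faceprint matching against an enrolled database (illustrative only).
import numpy as np

EMBEDDING_DIM = 128       # hypothetical faceprint length
MATCH_THRESHOLD = 0.9     # hypothetical cosine-similarity cutoff

def normalize(v):
    """Scale each row to unit length so dot products become cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Enrolled database: one faceprint per registrant (Uganda's would hold 10 million rows).
rng = np.random.default_rng(0)
database = normalize(rng.standard_normal((100_000, EMBEDDING_DIM)).astype(np.float32))

def find_match(probe):
    """Return the index of the best-matching enrollee, or None if nothing clears the threshold.

    A single matrix-vector product compares the probe against every enrolled faceprint,
    which is why scanning millions of records can take only seconds.
    """
    scores = database @ normalize(probe)
    best = int(np.argmax(scores))
    return best if scores[best] >= MATCH_THRESHOLD else None

# A slightly noisy re-capture of enrollee 42 should still match record 42.
probe = database[42] + 0.02 * rng.standard_normal(EMBEDDING_DIM)
print(find_match(probe))
```

The point of the sketch is that the lookup itself is generic: once the table exists, nothing distinguishes a voter-deduplication query from a search for a political opponent, which is exactly the repurposing Woodward calls function creep.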

Open source breathalyzers

From Bruce Schneier’s “DUI Cases Thrown Out Due to Closed-Source Breathalyzer”:

According to the article: “Hundreds of cases involving breath-alcohol tests have been thrown out by Seminole County judges in the past five months because the test’s manufacturer will not disclose how the machines work.”

This is the right decision. Throughout history, the government has had to make the choice: prosecute, or keep your investigative methods secret. They couldn’t have both. If they wanted to keep their methods secret, they had to give up on prosecution.

People have the right to confront their accuser. People have a right to examine the evidence against them, and to contest the validity of that evidence.

10 early choices that helped make the Internet successful

From Dan Gillmor’s “10 choices that were critical to the Net’s success”:

1) Make it all work on top of existing networks.

2) Use packets, not circuits.

3) Create a ‘routing’ function.

4) Split the Transmission Control Protocol (TCP) and Internet Protocol (IP) … (illustrated in the sketch after this list)

5) The National Science Foundation (NSF) funds the University of California, Berkeley, to put TCP/IP into the Unix operating system originally developed by AT&T.

6) CSNET, an early network used by universities, connects with the ARPANET … The connection was for e-mail only, but it led to much more university research on networks and a more general understanding among students, faculty and staff of the value of internetworking.

7) The NSF requires users of the NSFNET to use TCP/IP, not competing protocols.

8) International telecommunications standards bodies reject TCP/IP, then create a separate standard called OSI.

9) The NSF creates an “Acceptable Use Policy” restricting NSFNET use to noncommercial activities.

10) Once things start to build, government stays mostly out of the way.
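Choice 4 is easier to appreciate with a concrete illustration. The Python sketch below is a hedged example, not something from Gillmor’s article: it uses the standard socket module to show what the split buys, namely that IP supplies one common addressing layer while different transports (reliable TCP streams, connectionless UDP datagrams) are built independently on top of it. The host and port numbers are placeholders.

```python
# Sketch of the TCP/IP split (choice 4): one IP addressing layer, two transports on top.
import socket

HOST = "127.0.0.1"   # the same IP-layer address serves both transports

# Connection-oriented, reliable byte stream layered on IP.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect((HOST, 8080))   # would run TCP's handshake and retransmission machinery

# Connectionless datagrams layered on the very same IP (each one an independent packet, per choice 2).
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", (HOST, 9999))

tcp.close()
udp.close()
```

Because IP stays thin and transport-agnostic, new transports and applications could later be added without changing the underlying network, which is much of what made the layering decision pay off.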

Monopolies & Internet innovation

From Andrew Odlyzko’s “Pricing and Architecture of the Internet: Historical Perspectives from Telecommunications and Transportation”:

The power to price discriminate, especially for a monopolist, is like the power of taxation, something that can be used to destroy. There are many governments that are interested in controlling Internet traffic for political or other reasons, and are interfering (with various degrees of success) with the end-to-end principle. However, in most democratic societies, the pressure to change the architecture of the Internet is coming primarily from economic concerns, trying to extract more revenues from users. This does not necessarily threaten political liberty, but it does impede innovation. If some new protocol or service is invented, gains from its use could be appropriated by the carriers if they could impose special charges for it.

The power of price discrimination was well understood in ancient times, even if the economic concept was not defined. As the many historical vignettes presented before show, differential pricing was frequently allowed, but only to a controlled degree. The main concern in the early days was about general fairness and about service providers leveraging their control of a key facility into control over other businesses. Personal discrimination was particularly hated, and preference was given to general rules applying to broad classes (such as student or senior citizen discounts today). Very often bounds on charges were imposed to limit price discrimination. …

Openness, non-discrimination, and the end-to-end principle have contributed greatly to the success of the Internet, by allowing innovation to flourish. Service providers have traditionally been very poor in introducing services that mattered and even in forecasting where their profits would come from. Sometimes this was because of ignorance, as in the failure of WAP and success of SMS, both of which came as great surprises to the wireless industry, even though this should have been the easiest thing to predict [55]. Sometimes it was because the industry tried to control usage excessively. For example, services such as Minitel have turned out to be disappointments for their proponents largely because of the built-in limitations. We can also recall the attempts by the local telephone monopolies in the mid- to late 1990s to impose special fees on Internet access calls. Various studies were trotted out about the harm that long Internet calls were causing to the network. In retrospect, though, Internet access was a key source of the increased revenues and profits at the local telcos in the late 1990s. Since the main value of the phone was its accessibility at any time, long Internet calls led to installation of second lines that were highly profitable for service providers. (The average length of time that a phone line was in use remained remarkably constant during that period [49].)

Much of the progress in telecommunications over the last couple of decades was due to innovations by users. The “killer apps” on the Internet (email, Web, browser, search engines, and Napster) were all invented by end users, not by carriers. (Even email was specifically not designed into the ARPANET, the progenitor of the Internet, and its dominance came as a surprise [55].)

Arguments against the Web’s ungovernability

From Technology Review’s “Taming the Web”:

Nonetheless, the claim that the Internet is ungovernable by its nature is more of a hope than a fact. It rests on three widely accepted beliefs, each of which has become dogma to webheads. First, the Net is said to be too international to oversee: there will always be some place where people can set up a server and distribute whatever they want. Second, the Net is too interconnected to fence in: if a single person has something, he or she can instantly make it available to millions of others. Third, the Net is too full of hackers: any effort at control will invariably be circumvented by the world’s army of amateur tinkerers, who will then spread the workaround everywhere.

Unfortunately, current evidence suggests that two of the three arguments for the Net’s uncontrollability are simply wrong; the third, though likely to be correct, is likely to be irrelevant. In consequence, the world may well be on the path to a more orderly electronic future, one in which the Internet can and will be controlled. If so, the important question is not whether the Net can be regulated and monitored, but how and by whom. …

As Swaptor shows, the Net can be accessed from anywhere in theory, but as a practical matter, most out-of-the-way places don’t have the requisite equipment. And even if people do actually locate their services in a remote land, they can be easily discovered. …

Rather than being composed of an uncontrollable, shapeless mass of individual rebels, Gnutella-type networks have identifiable, centralized targets that can easily be challenged, shut down or sued. Obvious targets are the large backbone machines, which, according to peer-to-peer developers, can be identified by sending out multiple searches and requests. By tracking the answers and the number of hops they take between computers, it is possible not only to identify the Internet addresses of important sites but also to pinpoint their locations within the network.
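The probing technique described above can be sketched in a few lines. The toy Python example below is an illustration under assumptions, not the peer-to-peer developers’ actual tooling: it floods queries from several vantage points across a small hand-built overlay graph, records which peers relay the replies, and ranks peers by how often they sit on reply paths. In a real measurement the overlay would be live Gnutella-style peers and the output would be their Internet addresses.

```python
# Toy sketch: find the well-connected "backbone" peers by flooding queries and
# counting which nodes keep showing up as relays on the reply paths.
from collections import Counter, deque

# Adjacency list for a tiny hand-made overlay; two hub nodes carry most links.
overlay = {
    "A": ["HUB1"], "B": ["HUB1"], "C": ["HUB1"],
    "D": ["HUB2"], "E": ["HUB2"],
    "HUB1": ["A", "B", "C", "HUB2"],
    "HUB2": ["D", "E", "HUB1"],
}

def reply_paths(source, ttl=4):
    """Breadth-first flood from `source`; return the relay path to every reachable peer.

    `ttl` is a crude hop limit, playing the role of a Gnutella query TTL.
    """
    paths = {source: [source]}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if len(paths[node]) > ttl:
            continue
        for nxt in overlay[node]:
            if nxt not in paths:
                paths[nxt] = paths[node] + [nxt]
                queue.append(nxt)
    return paths

relay_counts = Counter()
for probe_source in ["A", "D", "E"]:      # several vantage points, as the article describes
    for path in reply_paths(probe_source).values():
        relay_counts.update(path[1:-1])   # interior nodes are the ones relaying replies

print(relay_counts.most_common(2))        # the hub nodes dominate the relay counts
```

Ranked this way, the heavily connected hub machines surface almost immediately, which is why they make such convenient targets for a lawsuit or a shutdown order.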

Once central machines have been identified, companies and governments have a potent legal weapon against them: their Internet service providers. …

In other words, those who claim that the Net cannot be controlled because the world’s hackers will inevitably break any protection scheme are not taking into account that the Internet runs on hardware – and that this hardware is, in large part, the product of marketing decisions, not technological givens.

Embarrassing email story #1056

From MedZilla’s “Emails ‘gone bad’“:

Another example of embarrassing and damaging emails sent at work comes from an investigation that uncovered 622 emails exchanged between Arapahoe County (Colo.) Clerk and Recorder Tracy K. Baker and his Assistant Chief Deputy Leesa Sale. Of those emails, 570 were sexually explicit. That’s not the only thing Baker’s lawyers are having to explain in court. Seems the emails also revealed Baker might have misused public funds, among other things.

Feral cities of the future

From Richard J. Norton’s “Feral Cities – The New Strategic Environment” (Naval War College Review, Autumn 2003):

Imagine a great metropolis covering hundreds of square miles. Once a vital component in a national economy, this sprawling urban environment is now a vast collection of blighted buildings, an immense petri dish of both ancient and new diseases, a territory where the rule of law has long been replaced by near anarchy in which the only security available is that which is attained through brute power. Such cities have been routinely imagined in apocalyptic movies and in certain science-fiction genres, where they are often portrayed as gigantic versions of T. S. Eliot’s Rat’s Alley. Yet this city would still be globally connected. It would possess at least a modicum of commercial linkages, and some of its inhabitants would have access to the world’s most modern communication and computing technologies. It would, in effect, be a feral city.

The putative “feral city” is (or would be) a metropolis with a population of more than a million people in a state the government of which has lost the ability to maintain the rule of law within the city’s boundaries yet remains a functioning actor in the greater international system.

In a feral city social services are all but nonexistent, and the vast majority of the city’s occupants have no access to even the most basic health or security assistance. There is no social safety net. Human security is for the most part a matter of individual initiative. Yet a feral city does not descend into complete, random chaos. Some elements, be they criminals, armed resistance groups, clans, tribes, or neighborhood associations, exert various degrees of control over portions of the city. Intercity, city-state, and even international commercial transactions occur, but corruption, avarice, and violence are their hallmarks. A feral city experiences massive levels of disease and creates enough pollution to qualify as an international environmental disaster zone. Most feral cities would suffer from massive urban hypertrophy, covering vast expanses of land. The city’s structures range from once-great buildings symbolic of state power to the meanest shantytowns and slums. Yet even under these conditions, these cities continue to grow, and the majority of occupants do not voluntarily leave.

Feral cities would exert an almost magnetic influence on terrorist organizations. Such megalopolises will provide exceptionally safe havens for armed resistance groups, especially those having cultural affinity with at least one sizable segment of the city’s population. The efficacy and portability of the most modern computing and communication systems allow the activities of a worldwide terrorist, criminal, or predatory and corrupt commercial network to be coordinated and directed with equipment easily obtained on the open market and packed into a minivan. The vast size of a feral city, with its buildings, other structures, and subterranean spaces, would offer nearly perfect protection from overhead sensors, whether satellites or unmanned aerial vehicles. The city’s population represents for such entities a ready source of recruits and a built-in intelligence network. Collecting human intelligence against them in this environment is likely to be a daunting task. Should the city contain airport or seaport facilities, such an organization would be able to import and export a variety of items. The feral city environment will actually make it easier for an armed resistance group that does not already have connections with criminal organizations to make them. The linkage between such groups, once thought to be rather unlikely, is now so commonplace as to elicit no comment.
