
Why we get disoriented in malls

From Wikipedia’s “Gruen transfer” (28 September 2009):

In shopping mall design, the Gruen transfer refers to the moment when consumers respond to “scripted disorientation” cues in the environment. It is named for Austrian architect Victor Gruen (who disavowed such manipulative techniques) …

The Gruen transfer refers to the moment when a consumer enters a shopping mall and, surrounded by an intentionally confusing layout, loses track of their original intentions. Spatial awareness of their surroundings plays a key role, as does the surrounding sound and music. The effect of the transfer is marked by a slower walking pace and glazed eyes.


COBOL is much more widely used than you might think

From Darryl Taft’s “Enterprise Applications: 20 Things You Might Not Know About COBOL (as the Language Turns 50)” (eWeek: September 2009). http://www.eweek.com/c/a/Enterprise-Applications/20-Things-You-Might-Not-Know-About-COBOL-As-the-Language-Turns-50-103943/?kc=EWKNLBOE09252009FEA1. Accessed 25 September 2009.

Five billion lines of new COBOL are developed every year.

More than 80 percent of all daily business transactions are processed in COBOL.

More than 70 percent of all worldwide business data is stored on a mainframe.

More than 70 percent of mission-critical applications are in COBOL.

More than 310 billion lines of software are in use today and more than 200 billion lines are COBOL (65 percent of the total software).

There are 200 times more COBOL transactions per day than Google searches worldwide.

An estimated 2 million people are currently working in COBOL in one form or another.


Grab what others type through an electrical socket


From Tim Greene’s “Black Hat set to expose new attacks” (Network World: 27 July 2009):

Black Hat USA 2009, considered a premier venue for publicizing new exploits with an eye toward neutralizing them, is expected to draw thousands to hear presentations from academics, vendors and private crackers.

For instance, one talk will demonstrate that if attackers can plug into an electrical socket near a computer or draw a bead on it with a laser they can steal whatever is being typed in. How to execute this attack will be demonstrated by Andrea Barisani and Daniele Bianco, a pair of researchers for network security consultancy Inverse Path.

Attackers grab keyboard signals that are generated by hitting keys. Because the data wire within the keyboard cable is unshielded, the signals leak into the ground wire in the cable, and from there into the ground wire of the electrical system feeding the computer. Bit streams generated by the keyboards that indicate what keys have been struck create voltage fluctuations in the grounds, they say.

Attackers extend the ground of a nearby power socket and attach to it two probes separated by a resistor. The voltage difference and the fluctuations in that difference – the keyboard signals – are captured from both ends of the resistor and converted to letters.
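The conversion step can be made concrete. A PS/2 keyboard clocks each keystroke out as an 11-bit frame: a start bit (0), eight data bits least-significant-bit first, an odd parity bit, and a stop bit (1). Once the ground-line fluctuations have been thresholded into a bit stream, recovering keystrokes is mostly framing and table lookup. A minimal sketch, with a tiny illustrative scancode subset rather than Barisani and Bianco's actual tooling:

```python
# Sketch: decode PS/2 keyboard frames from a recovered bit stream.
# Frame layout: start (0), 8 data bits LSB-first, odd parity, stop (1).
SCANCODES = {0x1C: 'a', 0x32: 'b', 0x21: 'c', 0x23: 'd', 0x24: 'e'}  # tiny subset

def decode_frames(bits):
    keys, i = [], 0
    while i + 11 <= len(bits):
        frame = bits[i:i + 11]
        start, data, parity, stop = frame[0], frame[1:9], frame[9], frame[10]
        if start != 0 or stop != 1 or (sum(data) + parity) % 2 != 1:
            i += 1            # bad framing or parity: slide one bit and retry
            continue
        code = sum(bit << n for n, bit in enumerate(data))   # LSB first
        keys.append(SCANCODES.get(code, '?'))
        i += 11               # consume the whole frame
    return keys

# 'a' has scancode 0x1C; its frame, data bits LSB-first, odd parity bit 0:
assert decode_frames([0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1]) == ['a']
```

In practice the hard part is the analog side (cleanly thresholding a noisy, attenuated signal picked up through the building's ground), not the decoding shown here.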

This method would not work if the computer were unplugged from the wall, as with a laptop running on its battery. In that case a second attack can prove effective, Bianco and Barisani’s paper says.

Attackers point a cheap laser at a shiny part of a laptop or even an object on the table with the laptop. A receiver is aligned to capture the reflected light beam and the modulations that are caused by the vibrations resulting from striking the keys.

Analyzing the sequences of individual keys that are struck and the spacing between words, the attacker can figure out what message has been typed. Knowing what language is being typed is a big help, they say.
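As a toy illustration of how language knowledge helps, suppose the attacker can tell word boundaries and which positions within a word repeat the same key, but not yet which letters they are. Candidate words can then be filtered cryptogram-style by their repeat pattern (the pattern encoding and five-word dictionary here are illustrative, not the researchers' method):

```python
def key_pattern(word):
    """Canonical repeat pattern: 'hello' -> (0, 1, 2, 2, 3)."""
    seen = {}
    return tuple(seen.setdefault(ch, len(seen)) for ch in word)

# A real attack would use a full wordlist plus inter-key timing;
# five words stand in for the dictionary here.
DICTIONARY = ["hello", "world", "there", "offer", "attack"]

def candidates(observed):
    """Words consistent with an observed length-and-repeat pattern."""
    return [w for w in DICTIONARY if key_pattern(w) == observed]

assert candidates((0, 1, 2, 2, 3)) == ["hello"]   # repeated key in slots 3-4
assert candidates((0, 1, 1, 2, 3)) == ["offer"]
```

Each such constraint prunes the candidate set; combined across a sentence, a few constraints usually pin down the text.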


Warnings about invalid security certs are ignored by users


From Robert McMillan’s “Security certificate warnings don’t work, researchers say” (IDG News Service: 27 July 2009):

In a laboratory experiment, researchers found that between 55 percent and 100 percent of participants ignored certificate security warnings, depending on which browser they were using (different browsers use different language to warn their users).

The researchers first conducted an online survey of more than 400 Web surfers, to learn what they thought about certificate warnings. They then brought 100 people into a lab and studied how they surf the Web.

They found that people often had a mixed-up understanding of certificate warnings. For example, many thought they could ignore the messages when visiting a site they trust, but that they should be more wary at less-trustworthy sites.

In the Firefox 3 browser, Mozilla tried to use simpler language and better warnings for bad certificates. And the browser makes it harder to ignore a bad certificate warning. In the Carnegie Mellon lab, Firefox 3 users were the least likely to click through after being shown a warning.

The researchers experimented with several redesigned security warnings they’d written themselves, which appeared to be even more effective.…

Still, Sunshine believes that better warnings will help only so much. Instead of warnings, browsers should use systems that can analyze the error messages. “If those systems decide this is likely to be an attack, they should just block the user altogether,” he said.


Some reasons why America hasn’t been attacked since 9/11


From Timothy Noah’s “Why No More 9/11s?: An interactive inquiry about why America hasn’t been attacked again” (Slate: 5 March 2009):

… I spent the Obama transition asking various terrorism experts why the dire predictions of a 9/11 sequel proved untrue and reviewing the literature on this question. The answers boiled down to eight prevailing theories whose implications range from fairly reassuring to deeply worrying.

I. The Terrorists-Are-Dumb Theory

“Acts of terrorism almost never appear to accomplish anything politically significant,” prominent game theorist Thomas C. Schelling observed nearly two decades ago. Max Abrahms, a pre-doctoral fellow at Stanford’s Center for International Security and Cooperation, reaffirmed that conclusion in a 2006 paper for International Security titled, “Why Terrorism Does Not Work.” Abrahms researched 28 groups designated “foreign terrorist organizations” by the U.S. State Department since 2001, identifying among them a total of 42 objectives. The groups achieved those objectives only 7 percent of the time, Abrahms concluded, and the key variable for success was whether they targeted civilians. Groups that attacked civilian targets more often than military ones “systematically failed to achieve their policy objectives.”

In a 2008 follow-up essay, “What Terrorists Really Want,” Abrahms explained that terrorist groups are typically incapable of maintaining a consistent set of strategic goals, much less achieving them. Then why do they become terrorists? To “develop strong affective ties with fellow terrorists.” It’s fraternal bonds they want, not territory, nor influence, nor even, in most cases, to affirm religious beliefs. If a terrorist group’s demands tend to sound improvised, that’s because they are improvised; what really matters to its members—even its leaders—is that they are a band of brothers. Marc Sageman, a forensic psychiatrist and former Central Intelligence Agency case officer in Afghanistan, collected the biographies of 400 terrorists who’d targeted the United States. He found that fully 88 percent became terrorists not because they wanted to change the world but because they had “friendship/family bonds to the jihad.” Among the 400, Sageman found only four who had “any hint of a [psychological] disorder,” a lower incidence than in the general population. Think the Elks, only more lethal. Cut off from al-Qaida’s top leadership, they are plenty dangerous, but not nearly as task-oriented as we imagine them to be.

II. The Near-Enemy Theory

Jihadis speak of the “near enemy” (apostate regimes in and around the Middle East) and the “far enemy” (the United States and the West generally). The man credited with coining these terms, Mohammed Abd al-Salam Faraj, did so largely to emphasize that it was much more important to attack the near enemy, a principle he upheld by organizing the 1981 assassination of Egyptian President Anwar Sadat. (The Egyptian government affirmed the same principle in executing Faraj.) In 1993, a militant Egyptian group called al-Gama’a al-Islamiyya (“the Islamic Group”), which had extensive ties to al-Qaida, broke with the “near enemy” strategy and bombed the World Trade Center. In 1996, al-Qaida followed suit and formally turned its attention to the far enemy. But according to Fawaz A. Gerges, an international affairs professor at Sarah Lawrence and author of The Far Enemy: Why Jihad Went Global, other jihadist groups around the world never really bought into this shift in priorities. Even al-Gama’a al-Islamiyya had by late 1999 declared a cease-fire, a move that outraged its incarcerated spiritual leader, Omar Abdel-Rahman (“the blind sheikh”) and caused the group to splinter. With the 9/11 attacks, Bin Laden hoped to rally jihadis outside al-Qaida’s orbit to join the battle against the far enemy. Instead, he scared them off.

III. The Melting-Pot Theory

In the absence of other evidence, we must conclude that inside the United States, homegrown, al-Qaida-inspired terrorist conspiracy-mongering seldom advances very far.

That record stands in stark contrast to that of the United Kingdom, which since 9/11 has incubated several very serious terrorism plots inspired or directed by al-Qaida. … Even when it isn’t linked directly to terrorism, Muslim radicalism seems more prevalent—and certainly more visible—inside the United Kingdom, and in Western Europe generally, than it is inside the United States.

Why the difference? Economics may be one reason. American Muslims are better-educated and wealthier than the average American. In Europe, they are poorer and less well-educated than the rest of the population—in Germany, only about 10 percent of the Turkish population attends college. The United States has assimilated Muslims into its society more successfully than Western Europe—and over a longer period. Arabs began migrating to the United States in great numbers during the second half of the 19th century. Western Europe’s Arab migration didn’t start until after World War II, when many arrived as guest workers. In Germany and France, a great many Muslims live in housing projects segregated from the rest of the population. In the United States, Muslims are dispersed more widely. An exception would be Detroit, which has a large Muslim community but not an impoverished one.

The relative dearth of Islamist radicalism in the United States is at least as much a function of American demographics as it is of American exceptionalism. Muslims simply loom smaller in the U.S. population than they do in the populations of many Western European countries. Muslims account for roughly 3 percent of the population in the United Kingdom, 4 percent in Germany, and 9 percent in France. In the United States, they’re closer to 1 percent and are spread over a much larger geographic area. As both immigrants and descendants of immigrants, Muslims are far outnumbered in the United States by Latinos. It’s quite different in Western Europe. Muslims represent the largest single immigrant group in France, Germany, Belgium, the Netherlands (where they constitute a majority of all immigrants), and the United Kingdom (where they constitute a plurality of all immigrants).

Somewhere between one-quarter and one-half of U.S. Muslims are African-American. Historically, American-born black Muslims have felt little kinship with Arab and foreign-born Muslims, and while al-Qaida has sought to recruit black Muslims, “there’s no sign” they’ve met with any success, according to Laurence. … Among foreign-born Muslims in the United States, nearly one-quarter are Shiite—many of them refugees from the 1979 Iranian revolution—and therefore harbor little sympathy for al-Qaida’s Sunni following. Europe’s Muslim population, by contrast, is overwhelmingly Sunni, hailing typically in France from Algeria and Morocco; in Germany from Turkey; and in the United Kingdom from Pakistan and the subcontinent.

All right, then. American Muslims are disinclined to commit acts of terror inside the United States. Why don’t American non-Muslims pick up the slack?

Actually, they do. In April 1995 Timothy McVeigh and Terry Nichols bombed a federal building in Oklahoma City, killing 168 people and injuring 500 more. In April 1996, Ted Kaczynski, the “Unabomber,” was arrested for killing three people and wounding 22 others. In July 1996, a former Army explosives expert named Eric Rudolph set off a bomb at the Olympics in Atlanta, killing one person and injuring 11; later, he set off bombs at two abortion clinics and a nightclub frequented by gay men and women, killing a security guard and injuring 12 others. In September and October 2001, somebody sent anthrax spores to media outlets and government offices, killing five people. The FBI believes it was an Army scientist named Bruce Ivins who killed himself as the investigation closed in on him. These are just the incidents everybody’s heard of. The point is that domestic terrorism inside the United States is fairly routine. The FBI counted 24 terror incidents inside the United States between 2002 and 2005; all but one were committed by American citizens.

IV. The Burden-Of-Success Theory

In fact, the likelihood of nuclear terrorism isn’t that great. Mueller points out that Russian “suitcase bombs,” which figure prominently in discussions about “loose nukes,” were all built before 1991 and ceased being operable after three years. Enriched uranium is extremely difficult to acquire; over the past decade, Mueller argues, there were only 10 known thefts. The material stolen weighed a combined 16 pounds, which was nowhere near the amount needed to build a bomb. Once the uranium is acquired, building the weapon is simple in theory (anti-nuclear activist Howard Morland published a famous 1979 article about this in the Progressive) but quite difficult in practice, which is why entire countries have had to work decades to acquire the bomb, only sometimes meeting with success. (Plutonium, another fissile material, is sufficiently dangerous and difficult to transport that nonproliferation experts seldom discuss it.)

V. The Flypaper Theory

The 9/11 attacks led to a U.S. invasion of Afghanistan, whose Taliban regime was sheltering al-Qaida. That made sense. Then it led to a U.S. invasion of Iraq. That made no sense. The Bush administration claimed that Iraq’s Saddam Hussein had close ties to al-Qaida. This was based on:

a) allegations made by an American Enterprise Institute scholar named Laurie Mylroie, later discredited;

b) an al-Qaida captive’s confession under threat of torture to Egyptian authorities, later retracted;

c) a false report from Czech intelligence about a Prague meeting between the lead 9/11 hijacker, Mohamed Atta, and an Iraqi intelligence agent;

d) Defense Secretary Donald Rumsfeld’s zany complaint at a Sept. 12, 2001, White House meeting that “there aren’t any good targets in Afghanistan, and there are lots of good targets in Iraq”;

and

e) certain Oedipal preoccupations of President George W. Bush.

VI. The He-Kept-Us-Safe Theory

A White House fact sheet specifies six terror plots “prevented in the United States” on Bush’s watch:

  • an attempt to bomb fuel tanks at JFK airport,
  • a plot to blow up airliners bound for the East Coast,
  • a plan to destroy the tallest skyscraper in Los Angeles,
  • a plot by six al-Qaida-inspired individuals to kill soldiers at Fort Dix Army Base in New Jersey,
  • a plan to attack a Chicago-area shopping mall using grenades,
  • a plot to attack the Sears Tower in Chicago.

The Bush administration deserves at least some credit in each of these instances, but a few qualifications are in order. The most serious terror plot listed was the scheme to blow up airliners headed for the East Coast. That conspiracy, halted in its advanced stages, is why you aren’t allowed to carry liquids and gels onto a plane. As noted in “The Melting-Pot Theory,” it originated in the United Kingdom, which took the lead in the investigation. (The undercover agent who infiltrated the terror group was British.) We also learned in “The Melting-Pot Theory” that the plan to bring down the Sears Tower was termed by the Federal Bureau of Investigation’s deputy director “more aspirational than operational” and that the prosecution ended in a mistrial.

The JFK plot was unrelated to al-Qaida and so technically infeasible that the New York Times, the airport’s hometown newspaper, buried the story on Page A37. The attack on the Library Tower in Los Angeles was planned in October 2001 by 9/11’s architect, Khalid Sheikh Mohammed, who recruited volunteers from South Asia to fly a commercial jetliner into the building. But Michael Scheuer, a veteran al-Qaida expert who was working at the Central Intelligence Agency in 2002, when the arrests were made, told the Voice of America that he never heard about them, and a U.S. government official told the Los Angeles Times that the plot never approached the operational stage. Moreover, as the story of United Flight 93 demonstrated, the tactic of flying passenger planes into buildings—which depended on passengers not conceiving of that possibility—didn’t remain viable even through the morning of 9/11 (“Let’s roll”).

The Fort Dix plot was inspired by, but not directed by, al-Qaida. The five Muslim conspirators from New Jersey, convicted on conspiracy charges in December, watched jihadi videos. They were then foolish enough not only to make one of their own but to bring the tape to Circuit City for transfer to DVD. A teenage clerk tipped off the FBI, which infiltrated the group, sold them automatic weapons, and busted them. The attempted grenade attack on the CherryVale Mall in suburban Chicago was similarly inspired but not directed by al-Qaida. In this instance, the conspirators numbered only two, one of whom was an FBI informant. The other guy was arrested when an undercover FBI agent accepted his offer to trade two stereo speakers for four grenades and a gun. He is now serving a life sentence.

VIII. The Time-Space Theory

The RAND Corp. is headquartered in a blindingly white temple of reason a few blocks from the Pacific Ocean in Santa Monica, Calif. It was here—or rather, next door, in the boxy international-style offices it inhabited for half a century before moving four years ago into a new $100 million structure—that America’s Cold War nuclear strategy of “mutual assured destruction” was dreamed up. Also, the Internet. Created by the Air Force in 1948, the nonprofit RAND would “invent a whole new language in [its] quest for rationality,” Slate’s Fred Kaplan wrote in his 1983 book The Wizards of Armageddon.

RAND is the cradle of rational-choice theory, a rigorously utilitarian mode of thought with applications to virtually every field of social science. Under rational-choice theory, belief systems, historical circumstances, cultural influences, and other nonrational filigree must be removed from consideration in calculating the dynamics of human behavior. There exists only the rational and orderly pursuit of self-interest. It is the religion that governs RAND. …

Lakdawalla and RAND economist Claude Berrebi are co-authors of “How Does Terrorism Risk Vary Across Space and Time?” a 2007 paper.

One goal inherent in the 9/11 attacks was to do harm to the United States. In “The Terrorists-Are-Dumb Theory” and “The Melting-Pot Theory,” we reviewed the considerable harm that the furious U.S. response to 9/11 caused al-Qaida. But that response harmed the United States, too. Nearly 5,000 U.S. troops have died in Iraq and Afghanistan, and more than 15,000 have come home wounded. More than 90,000 Iraqi civilians have been killed and perhaps as many as 10,000 Afghan civilians; in Afghanistan, where fighting has intensified, more than 2,000 civilians died just in the past year. “In Muslim nations, the wars in Afghanistan and particularly Iraq have driven negative ratings [of the United States] nearly off the charts,” the Pew Global Attitudes Project reported in December. Gallup polls conducted between 2006 and 2008 found approval ratings for the U.S. government at 15 percent in the Middle East, 23 percent in Europe, and 34 percent in Asia. To be sure, civilian casualties have harmed al-Qaida’s standing, too, as I noted in “The Terrorists-Are-Dumb Theory.” But to whatever extent al-Qaida hoped to reduce the United States’ standing in the world, and especially in the Middle East: Mission accomplished.

Rational-choice theory is most at home with economics, and here the costs are more straightforward. In March 2008, the Nobel Prize-winning economist Joseph Stiglitz and Linda Bilmes of Harvard’s Kennedy School of Government put the Iraq war’s cost at $3 trillion. In October 2008, the Congressional Research Service calculated, more conservatively, an additional $107 billion for the Afghanistan war and another $28 billion for enhanced homeland security since 9/11. According to CRS, for every soldier the United States deploys in Iraq or Afghanistan, the taxpayer spends $390,000. Let me put that another way. Sending a single soldier to Iraq or Afghanistan costs the United States nearly as much as the estimated $500,000 it cost al-Qaida to conduct the entire 9/11 operation. Not a bad return on Bin Laden’s investment, Berrebi says. President Bush left office with a budget deficit of nearly $500 billion, and that’s before most of the deficit spending that most economists think will be required to avoid another Great Depression even begins.


A beheading in Saudi Arabia


From Adam St. Patrick’s “Chop Chop Square: Inside Saudi Arabia’s brutal justice system” (The Walrus: May 2009):

This is Saudi Arabia, one of the last places on earth where capital punishment is a public spectacle. Decapitation awaits murderers, but the death penalty also applies to many other crimes, such as armed robbery, rape, adultery, drug use and trafficking, and renouncing Islam. There’s a woman on death row now for witchcraft, and the charge is based partly on a man’s accusation that her spell made him impotent. Saudi Arabia executed some 1,750 convicts between 1985 and 2008, yet reliable information about the practice is scarce. In Riyadh, beheadings happen at 9 a.m. any given day of the week, and there is no advance notice. There is also no written penal code, so questions of illegality depend on the on-the-spot interpretations of police and judges.

… The Saudi interpretation of the Koran discourages all forms of evidence other than confessions and eyewitness accounts in capital trials, on the theory that doing otherwise would leave too much discretion to the judge. But at any time until the sword strikes, a victim’s family can pardon the condemned — usually for a cash settlement of at least two million riyals ($690,000 or so) from the convict or his family.

Many who live to recount their experience in the Saudi justice system report that police promised freedom in exchange for a confession — or tortured them to get one.

In Riyadh, beheadings take place in a downtown public square equipped with a drain the size of a pizza box in its centre. Expatriates call it Chop Chop Square. … The job is a coveted one, often passed from father to son. In a Lebanese TV clip now on YouTube, a Saudi executioner shows off his swords and describes his approach: “If the heart is compassionate, the hand fails.”


RFID dust


From David Becker’s “Hitachi Develops RFID Powder” (Wired: 15 February 2007):

[Hitachi] recently showed a prototype of an RFID chip measuring 0.05 millimeters square and 5 microns thick, about the size of a grain of sand. They expect to have ‘em on the market in two or three years.

The chips are packed with 128 bits of static memory, enough to hold a 38-digit ID number.

The size makes the new chips ideal for embedding in paper, where they could verify the legitimacy of currency or event tickets. Implantation under the skin would be trivial…
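The capacity figure checks out: 2^128 is roughly 3.4 × 10^38, so 128 bits of memory can hold any 38-digit decimal ID, which a quick calculation confirms:

```python
# 2**128 is about 3.4e38, so any 38-digit decimal ID fits in 128 bits.
assert 10**38 - 1 < 2**128          # the largest 38-digit number fits
assert 10**39 - 1 > 2**128          # 39 digits would not always fit
print(len(str(2**128 - 1)))         # the maximum 128-bit value has 39 digits
```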


RFID security problems


2005

From Brian Krebs’ “Leaving Las Vegas: So Long DefCon and Blackhat” (The Washington Post: 1 August 2005):

DefCon 13 also was notable for being the location where two new world records were set — both involved shooting certain electronic signals unprecedented distances. Los Angeles-based Flexilis set the world record for transmitting data to and from a “passive” radio frequency identification (RFID) card — covering a distance of more than 69 feet. (Passive RFID — the kind being integrated into foreign passports, for example — differs from active RFID in that it has no power source of its own and can normally be read only from a much shorter distance.)

The second record set this year at DefCon was pulled off by some teens from Cincinnati, who broke the world record they set last year by building a device capable of maintaining an unamplified, 11-megabit 802.11b wireless Internet connection over a distance of 125 miles (the network actually spanned from Utah into Nevada).

From Andrew Brandt’s “Black Hat, Lynn Settle with Cisco, ISS” (PC World: 29 July 2005):

Security researcher Kevin Mahaffey makes a final adjustment to a series of radio antennas; Mahaffey used the directional antennas in a demonstration during his presentation, “Long Range RFID and its Security Implications.” Mahaffey and two of his colleagues demonstrated how he could increase the “read range” of radio frequency identification (RFID) tags from the typical four to six inches to approximately 50 feet. Mahaffey said the tags could be read at a longer distance, but he wanted to perform the demonstration in the room where he gave the presentation, and that was the greatest distance within the room that he could demonstrate. RFID tags such as the one Mahaffey tested will begin to appear in U.S. passports later this year or next year.

2006

From Joris Evers and Declan McCullagh’s “Researchers: E-passports pose security risk” (CNET: 5 August 2006):

At a pair of security conferences here, researchers demonstrated that passports equipped with radio frequency identification (RFID) tags can be cloned with a laptop equipped with a $200 RFID reader and a similarly inexpensive smart card writer. In addition, they suggested that RFID tags embedded in travel documents could identify U.S. passports from a distance, possibly letting terrorists use them as a trigger for explosives.

At the Black Hat conference, Lukas Grunwald, a researcher with DN-Systems in Hildesheim, Germany, demonstrated that he could copy data stored in an RFID tag from his passport and write the data to a smart card equipped with an RFID chip.

From Kim Zetter’s “Hackers Clone E-Passports” (Wired: 3 August 2006):

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker — Walluf, Germany-based ACG Identification Technologies — but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports — called Golden Reader Tool and made by secunet Security Networks — and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader — which can also act as a writer — and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

Although he can clone the tag, Grunwald says it’s not possible, as far as he can tell, to change data on the chip, such as the name or birth date, without being detected. That’s because the passport uses cryptographic hashes to authenticate the data.
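This integrity check works along the lines of ICAO "passive authentication": each data group on the chip is hashed, and those hashes sit in a Security Object signed by the issuing country, so changing a name alters its hash and breaks the signature. A simplified sketch, in which a bare dict and SHA-256 stand in for the signed CMS structure and whatever digest a given issuer actually uses:

```python
import hashlib

def hash_data_groups(data_groups):
    """Digest each data group, as for the passport's Security Object."""
    return {dg: hashlib.sha256(content).hexdigest()
            for dg, content in sorted(data_groups.items())}

# Hypothetical chip contents: DG1 holds the MRZ text, DG2 the face image.
issued = {"DG1": b"P<UTOERIKSSON<<ANNA<MARIA<<<", "DG2": b"<face image bytes>"}
signed_hashes = hash_data_groups(issued)       # signed by the issuing state

# A verbatim clone still matches the signed hashes...
assert hash_data_groups(dict(issued)) == signed_hashes
# ...but editing the name changes DG1's hash, which the signature exposes.
tampered = dict(issued, DG1=b"P<UTOMALLORY<<EVE<<<<<<<<<<<")
assert hash_data_groups(tampered)["DG1"] != signed_hashes["DG1"]
```

This is exactly Grunwald's point: copying is easy because the signed hashes copy along with the data, but editing is detectable because a forger cannot re-sign the altered hashes.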

Grunwald’s technique requires a counterfeiter to have physical possession of the original passport for a time. A forger could not surreptitiously clone a passport in a traveler’s pocket or purse because of a built-in privacy feature called Basic Access Control that requires officials to unlock a passport’s RFID chip before reading it. The chip can only be unlocked with a unique key derived from the machine-readable data printed on the passport’s page.

To produce a clone, Grunwald has to program his copycat chip to answer to the key printed on the new passport. Alternatively, he can program the clone to dispense with Basic Access Control, which is an optional feature in the specification.
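The "unique key derived from the machine-readable data" comes from ICAO Doc 9303: the document number, date of birth, and date of expiry, each followed by its check digit, are concatenated and hashed with SHA-1, and the first 16 bytes of the digest seed the Basic Access Control session keys. A sketch of the seed derivation, using specimen-style MRZ values rather than any real document:

```python
import hashlib

def check_digit(field):
    """ICAO 9303 check digit: weights 7,3,1 repeating; '<'=0, A-Z=10-35."""
    def value(ch):
        if ch == '<':
            return 0
        return int(ch) if ch.isdigit() else ord(ch) - ord('A') + 10
    return str(sum(value(c) * (7, 3, 1)[i % 3]
                   for i, c in enumerate(field)) % 10)

def bac_key_seed(doc_number, birth_date, expiry_date):
    """First 16 bytes of SHA-1 over the MRZ info = BAC key seed."""
    mrz_info = (doc_number + check_digit(doc_number)
                + birth_date + check_digit(birth_date)
                + expiry_date + check_digit(expiry_date))
    return hashlib.sha1(mrz_info.encode('ascii')).digest()[:16]

# Specimen-style values: document number, birth date (YYMMDD), expiry date.
seed = bac_key_seed("L898902C<", "690806", "940623")
assert len(seed) == 16
```

Because all three inputs are printed on the passport's data page, BAC protects against remote skimming of a closed passport, not against anyone who has seen the page itself, which is why physical possession suffices for Grunwald's clone.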

As planned, U.S. e-passports will contain a web of metal fiber embedded in the front cover of the documents to shield them from unauthorized readers. Though Basic Access Control would keep the chip from yielding useful information to attackers, it would still announce its presence to anyone with the right equipment. The government added the shielding after privacy activists expressed worries that a terrorist could simply point a reader at a crowd and identify foreign travelers.

In theory, with metal fibers in the front cover, nobody can sniff out the presence of an e-passport that’s closed. But [Kevin Mahaffey and John Hering of Flexilis] demonstrated in their video how even if a passport opens only half an inch — such as it might if placed in a purse or backpack — it can reveal itself to a reader at least two feet away.

In addition to cloning passport chips, Grunwald has been able to clone RFID ticket cards used by students at universities to buy cafeteria meals and add money to the balance on the cards.

He and his partners were also able to crash RFID-enabled alarm systems designed to sound when an intruder breaks a window or door to gain entry. Such systems require workers to pass an RFID card over a reader to turn the system on and off. Grunwald found that by manipulating data on the RFID chip he could crash the system, opening the way for a thief to break into the building through a window or door.

And they were able to clone and manipulate RFID tags used in hotel room key cards and corporate access cards and create a master key card to open every room in a hotel, office or other facility. He was able, for example, to clone Mifare, the most commonly used key-access system, designed by Philips Electronics. To create a master key he simply needed two or three key cards for different rooms to determine the structure of the cards. Of the 10 different types of RFID systems he examined that were being used in hotels, none used encryption.

Many of the card systems that did use encryption failed to change the default key that manufacturers program into the access card system before shipping, or they used sample keys that the manufacturer includes in instructions sent with the cards. Grunwald and his partners created a dictionary database of all the sample keys they found in such literature (much of which they found accidentally published on purchasers’ websites) to conduct what’s known as a dictionary attack. When attacking a new access card system, their RFDump program would search the list until it found the key that unlocked a card’s encryption.

“I was really surprised we were able to open about 75 percent of all the cards we collected,” he says.


From Thomas Ricker’s “Video: Hacker war drives San Francisco cloning RFID passports” (Engadget: 2 February 2009):

Using a $250 Motorola RFID reader and antenna connected to his laptop, Chris recently drove around San Francisco reading RFID tags from passports, driver licenses, and other identity documents. In just 20 minutes, he found and cloned the passports of two very unaware US citizens.

RFID security problems Read More »

Various confidence scams, tricks, & frauds

From “List of confidence tricks” (Wikipedia: 3 July 2009):

Get-rich-quick schemes

Get-rich-quick schemes are extremely varied. For example, fake franchises, real estate “sure things”, get-rich-quick books, wealth-building seminars, self-help gurus, sure-fire inventions, useless products, chain letters, fortune tellers, quack doctors, miracle pharmaceuticals, Nigerian money scams, charms and talismans are all used to separate the mark from his money. Variations include the pyramid scheme, Ponzi scheme and Matrix sale.

Count Victor Lustig sold the “money-printing machine” which could copy $100 bills. The client, sensing huge profits, would buy the machines for a high price (usually over $30,000). Over the next twelve hours, the machine would produce just two more $100 bills, but after that it produced only blank paper, as its supply of hidden $100 bills would have become exhausted. This type of scheme is also called the “money box” scheme.

The wire game, as depicted in the movie The Sting, trades on the promise of insider knowledge to beat a gamble, stock trade or other monetary action. In the wire game, a “mob” composed of dozens of grifters simulates a “wire store”, i.e., a place where results from horse races are received by telegram and posted on a large board, while also being read aloud by an announcer. The griftee is given secret foreknowledge of the race results minutes before the race is broadcast, and is therefore able to place a sure bet at the wire store. In reality, of course, the con artists who set up the wire store are the providers of the inside information, and the mark eventually is led to place a large bet, thinking it to be a sure win. At this point, some mistake is made, which actually makes the bet a loss. …

Salting or to salt the mine are terms for a scam in which gems or gold ore are planted in a mine or on the landscape, duping the greedy mark into purchasing shares in a worthless or non-existent mining company.[2] During the Gold Rush, scammers would load shotguns with gold dust and shoot into the sides of the mine to give the appearance of a rich ore, thus “salting the mine”. …

The Spanish Prisoner scam – and its modern variant, the advance fee fraud or Nigerian scam – take advantage of the victim’s greed. The basic premise involves enlisting the mark to aid in retrieving some stolen money from its hiding place. The victim sometimes believes he can cheat the con artists out of their money, but anyone trying this has already fallen for the essential con by believing that the money is there to steal (see also Black money scam). …

Many conmen employ extra tricks to keep the victim from going to the police. A common ploy of investment scammers is to encourage a mark to use money concealed from tax authorities. The mark cannot go to the authorities without revealing that he or she has committed tax fraud. Many swindles involve a minor element of crime or some other misdeed. The mark is made to think that he or she will gain money by helping fraudsters get huge sums out of a country (the classic Nigerian scam); hence marks cannot go to the police without revealing that they planned to commit a crime themselves.

Gold brick scams

Gold brick scams involve selling a tangible item for more than it is worth; named after selling the victim an allegedly golden ingot which turns out to be gold-coated lead.

Pig-in-a-poke originated in the late Middle Ages. The con entails a sale of a (suckling) “pig” in a “poke” (bag). The bag ostensibly contains a live healthy little pig, but actually contains a cat (not particularly prized as a source of meat, and at any rate, quite unlikely to grow to be a large hog). If one buys a “pig in a poke” without looking in the bag (a colloquial expression in the English language, meaning “to be a sucker”), the person has bought something of less value than was assumed, and has learned firsthand the lesson caveat emptor.

The Thai gem scam involves layers of con men and helpers who tell a tourist in Bangkok of an opportunity to earn money by buying duty-free jewelry and having it shipped back to the tourist’s home country. The mark is driven around the city in a tuk-tuk operated by one of the con men, who ensures that the mark meets one helper after another, until the mark is persuaded to buy the jewelry from a store also operated by the swindlers. The gems are real but significantly overpriced. This scam has been operating for 20 years in Bangkok, and is said to be protected by Thai police and politicians. A similar scam usually runs in parallel for custom-made suits.

Extortion or false-injury tricks

The badger game extortion is often perpetrated on married men. The mark is deliberately coerced into a compromising position, a supposed affair for example, then threatened with public exposure of his acts unless blackmail money is paid.

The Melon Drop is a scam in which the scammer will intentionally bump into the mark and drop a package containing (already broken) glass. He will blame the damage on the clumsiness of the mark, and demand money in compensation. This con arose when artists discovered that the Japanese paid large sums of money for watermelons. The scammer would go to a supermarket to buy a cheap watermelon, then bump into a Japanese tourist and set a high price.

Gambling tricks

Three-card Monte, ‘Find The Queen’, the “Three-card Trick”, or “Follow The Lady”, is (except for the props) essentially the same as the probably centuries-older shell game or thimblerig. The trickster shows three playing cards to the audience, one of which is a queen (the “lady”), then places the cards face-down, shuffles them around and invites the audience to bet on which one is the queen. At first the audience is skeptical, so the shill places a bet and the scammer allows him to win. In one variation of the game, the shill will (apparently surreptitiously) peek at the lady, ensuring that the mark also sees the card. This is sometimes enough to entice the audience to place bets, but the trickster uses sleight of hand to ensure that they always lose, unless the conman decides to let them win, hoping to lure them into betting much more. The mark loses whenever the dealer chooses to make him lose. This con appears in the Eric Garcia novel Matchstick Men and is featured in the movie Edmond.

A variation on this scam exists in Barcelona, Spain, but with the addition of a pickpocket. The dealer and shill behave in an overtly obvious manner, attracting a larger audience. When the pickpocket succeeds in stealing from a member of the audience, he signals the dealer. The dealer then shouts the word “agua”, and the three split up. The audience is left believing that “agua” is a code word indicating the police are coming, and that the performance was a failed scam.

In the Football Picks Scam the scammer sends out a tip sheet stating a game will go one way to 100 potential victims and the other way to another 100. The next week, the 100 or so who received the correct answer are divided into two groups and fed another pick. This is repeated until a small population has (apparently) received a series of supernaturally perfect picks, then the final pick is offered for sale. Despite being well-known (it was even described completely on an episode of The Simpsons and used by Derren Brown in “The System”), this scam is run almost continuously in different forms by different operators. The sports picks can also be replaced with securities, or any other random process, in an alternative form. This scam has also been called the inverted pyramid scheme, because of the steadily decreasing population of victims at each stage.
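The arithmetic behind the inverted pyramid is easy to check with a short simulation (a sketch; the pool sizes are the ones from the description above):

```python
# Each week the scammer sends opposite picks to the two halves of the pool,
# so half the recipients always see a "correct" prediction by construction.

def survivors(initial_pool, weeks):
    pool = initial_pool
    for _ in range(weeks):
        pool //= 2  # only the half that got the winning pick stays on the list
    return pool

print(survivors(200, 5))  # from 200 marks, 6 are left who saw five perfect picks
```

The scammer never predicts anything; the halving guarantees someone experiences a perfect streak.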

Visitors to Las Vegas or other gambling towns often encounter the Barred Winner scam, a form of advance fee fraud performed in person. The artist will approach his mark outside a casino with a stack or bag of high-value casino chips and say that he just won big, but the casino accused him of cheating and threw him out without letting him redeem the chips. The artist asks the mark to go in and cash the chips for him. The artist will often offer a percentage of the winnings to the mark for his trouble. But, when the mark agrees, the artist feigns suspicion and asks the mark to put up something of value “for insurance”. The mark agrees, hands over jewelry, a credit card or their wallet, then goes in to cash the chips. When the mark arrives at the cashier, they are informed the chips are fake. The artist, by this time, is long gone with the mark’s valuables.

False reward tricks

The glim-dropper requires several accomplices, one of whom must be a one-eyed man. One grifter goes into a store and pretends he has lost his glass eye. Everyone looks around, but the eye cannot be found. He declares that he will pay a thousand-dollar reward for the return of his eye, leaving contact information. The next day, an accomplice enters the store and pretends to find the eye. The storekeeper (the intended griftee), thinking of the reward, offers to take it and return it to its owner. The finder insists he will return it himself, and demands the owner’s address. Thinking he will lose all chance of the reward, the storekeeper offers a hundred dollars for the eye. The finder bargains him up to $250, and departs.…

The fiddle game uses the pigeon drop technique. A pair of con men work together, one going into an expensive restaurant in shabby clothes, eating, and claiming to have left his wallet at home, which is nearby. As collateral, the con man leaves his only worldly possession, the violin that provides his livelihood. After he leaves, the second con man swoops in, offers an outrageously large amount (for example $50,000) for such a rare instrument, then looks at his watch and runs off to an appointment, leaving his card for the mark to call him when the fiddle-owner returns. The mark’s greed comes into play when the “poor man” comes back, having gotten the money to pay for his meal and redeem his violin. The mark, thinking he has an offer on the table, then buys the violin from the fiddle player (who “reluctantly” sells it eventually for, say, $5,000). The result is the two conmen are $5,000 richer (less the cost of the violin), and the mark is left with a cheap instrument.

Other confidence tricks and techniques

The Landlord Scam advertises an apartment for rent at an attractive price. The con artist, usually someone who is house-sitting or has a short-term sublet at the unit, takes a deposit and first/last month’s rent from every person who views the suite. When move-in day arrives, the con artist is of course gone, and the apartment belongs to none of the angry people carrying boxes.

Change raising is a common short con and involves an offer to change an amount of money with someone, while at the same time taking change or bills back and forth to confuse the person as to how much money is actually being changed. The most common form, “the Short Count”, has been featured prominently in several movies about grifting, notably Nueve Reinas, The Grifters and Paper Moon. A con artist shopping at, say, a gas station, is given 80 cents in change because he lacks two dimes to complete the sale (say the sale cost is $19.20 and the con man has a 20 dollar bill). He goes out to his car and returns a short time later, with 20 cents. He returns them, saying that he found the rest of the change to make a dollar, and asking for a bill so he will not have to carry coins. The confused store clerk agrees, exchanging a dollar for the 20 cents the conman returned. In essence, the mark makes change twice.
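Tracking the till in cents makes the short count's damage explicit. This is a sketch of the single-pass version described above; more elaborate variants net more.

```python
# Ledger for the "Short Count" variant described above, in cents so the
# arithmetic is exact. The sale itself is fair; the loss comes from the
# final, confused swap of a dollar bill for 20 cents.

till = 0
till += 2000   # con man pays for a $19.20 sale with a $20 bill
till -= 80     # clerk hands back 80 cents change (fair so far)
till += 20     # con man "returns" 20 cents, claiming it completes a dollar
till -= 100    # confused clerk hands over a dollar bill for it
shortfall = 1920 - till  # the till should hold $19.20 for the goods sold
print(shortfall)  # 80 -> the clerk is out 80 cents
```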

Beijing tea scam is a famous scam in and around Beijing. The artists (usually female and working in pairs) will approach tourists and try to make friends. After chatting, they will suggest a trip to see a tea ceremony, claiming that they have never been to one before. The tourist is never shown a menu, but assumes that this is how things are done in China. After the ceremony, the bill is presented to the tourist, charging upwards of $100 per head. The artists will then hand over their bills, and the tourists are obliged to follow suit.

Various confidence scams, tricks, & frauds Read More »

Cell phone viruses

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

Earlier this year, smartphone users in China started to get messages promising a “sexy view” if they clicked on a link. The link led to a download. That download was a spam generator which, once installed, sent identical “sexy view” messages to everyone in the owner’s contacts list.

That was the first virus known to travel by text message. It was chiefly an annoyance, but there is great potential harm from mobile viruses, especially as technologies such as Bluetooth provide new ways for viruses to spread. But there has never yet been a cellphone threat as serious as Conficker is to PCs.

There are two reasons for that, says Albert-László Barabási of Northeastern University in Boston. He and his colleagues used billing data to model the spread of a mobile virus. They found that Bluetooth is an inefficient way of transmitting a virus as it can only jump between users who are within 30 metres of each other. A better option would be for the virus to disguise itself as a picture message. But that could still only infect handsets running the same operating system. As the mobile market is fragmented, says Barabási, no one virus can gain a foothold.

Cell phone viruses Read More »

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.
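A hypothetical sketch of such a domain-generation scheme — emphatically not Conficker's actual algorithm — shows why defenders could only react: both sides derive the same 250 names from the date, but only the authors know which one they will register.

```python
# Illustrative domain-generation algorithm (DGA): worm and author both
# derive the day's candidate URLs from the date. This is a sketch of the
# idea, not Conficker's real code; the hash and name length are arbitrary.

import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day, count=250):
    domains = []
    seed = day.isoformat().encode()
    for i in range(count):
        digest = hashlib.md5(seed + str(i).encode()).hexdigest()
        name = "".join(c for c in digest if c.isalpha())[:8] or "x"
        domains.append(name + TLDS[i % len(TLDS)])
    return domains

print(daily_domains(date(2009, 1, 1))[:2])  # deterministic: same names every run
```

Because the list is deterministic, anyone who reverse-engineers the generator (as Porras's team later did) can precompute every future rendezvous point.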

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

How security experts defended against Conficker Read More »

Stolen credit card data is cheaper than ever in the Underground

From Brian Krebs’ “Glut of Stolen Banking Data Trims Profits for Thieves” (The Washington Post: 15 April 2009):

A massive glut in the number of credit and debit cards stolen in data breaches at financial institutions last year has flooded criminal underground markets that trade in this material, driving prices for the illicit goods to the lowest levels seen in years, experts have found.

For a glimpse of just how many financial records were lost to hackers last year, consider the stats released this week by Verizon Business. The company said it responded to at least 90 confirmed data breaches last year involving roughly 285 million consumer records, a number that exceeded the combined total number of breached records from cases the company investigated from 2004 to 2007. Breaches at banks and financial institutions were responsible for 93 percent of all such records compromised last year, Verizon found.

As a result, the supply of stolen identities and credit and debit cards for sale in the underground markets is outpacing demand for the product, said Bryan Sartin, director of investigative response at Verizon Business.

Verizon found that the profit margins associated with selling stolen credit card data have dropped from between $10 and $16 per record in mid-2007 to less than $0.50 per record today.

According to a study released last week by Symantec Corp., cards can sell for as little as 6 cents each when they are purchased in bulk.

Lawrence Baldwin, a security consultant in Alpharetta, Ga., has been working with several financial institutions to help infiltrate illegal card-checking services. Baldwin estimates that at least 25,000 credit and debit cards are checked each day at three separate illegal card-checking Web sites he is monitoring. That translates to about 800,000 cards per month or nearly 10 million cards each year.
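Baldwin's estimate scales up straightforwardly from the daily figure (the article rounds a little):

```python
# Scaling up the quoted observation: 25,000 card checks per day across
# the three monitored card-checking sites.

per_day = 25_000
per_month = per_day * 30
per_year = per_day * 365
print(per_month)  # 750000, which the article rounds to "about 800,000"
print(per_year)   # 9125000 -> "nearly 10 million cards each year"
```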

Baldwin said the checker sites take advantage of authentication weaknesses in the card processing system that allow merchants to conduct so-called “pre-authorization requests,” which merchants use to place a temporary charge on the account to make sure that the cardholder has sufficient funds to pay for the promised goods or services.

Pre-authorization requests are quite common. When a waiter at a restaurant swipes a customer’s card and brings the receipt to the table so the customer can add a tip, for example, that initial charge is essentially a pre-authorization.

With these card-checking services, however, in most cases the charge initiated by the pre-authorization check is never consummated. As a result, unless a consumer is monitoring their accounts online in real-time, they may never notice a pre-authorization initiated by a card-checking site against their card number, because that query won’t show up as a charge on the customer’s monthly statement.

The crooks have designed their card-checking sites so that each check is submitted into the card processing network using a legitimate, hijacked merchant account number combined with a completely unrelated merchant name, Baldwin discovered.

One of the many innocent companies caught up in one of these card-checking services is Wild Birds Unlimited, a franchise pet store outside of Buffalo, N.Y. Baldwin said a fraudulent card-checking service is running pre-authorization requests using Wild Bird’s store name and phone number in combination with another merchant’s ID number.

Danielle Pecoraro, the store’s manager, said the bogus charges started in January 2008. Since then, she said, her store has received an average of three to four phone calls each day from people who had never shopped there, wondering why small, $1-$10 charges from her store were showing up on their monthly statements. Some of the charges were for as little as 24 cents, and a few were for as much as $1,900.

Stolen credit card data is cheaper than ever in the Underground Read More »

80% of all spam from botnets

From Jacqui Cheng’s “Report: botnets sent over 80% of all June spam” (Ars Technica: 29 June 2009):

A new report (PDF) from Symantec’s MessageLabs says that more than 80 percent of all spam sent today comes from botnets, despite several recent shut-downs.

According to MessageLabs’ June report, spam accounted for 90.4 percent of all e-mail sent in the month of June—this was roughly unchanged since May. Botnets, however, sent about 83.2 percent of that spam, with the largest spam-wielding botnet being Cutwail. Cutwail is described as “one of the largest and most active botnets” and has doubled its size and output per bot since March of this year. As a result, it is now responsible for 45 percent of all spam, with others like Mega-D, Xarvester, Donbot, Grum, and Rustock making up much of the difference.

80% of all spam from botnets Read More »

The light bulb con job

From Bruce Schneier’s “The Psychology of Con Men” (Crypto-Gram: 15 November 2008):

Great story: “My all-time favourite [short con] only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.”

http://www.abc.net.au/rn/lawreport/stories/2008/2376933.htm

The light bulb con job Read More »

Storm made $7000 each day from spam

From Bruce Schneier’s “The Economics of Spam” (Crypto-Gram: 15 November 2008):

Researchers infiltrated the Storm worm and monitored its doings.

“After 26 days, and almost 350 million e-mail messages, only 28 sales resulted — a conversion rate of well under 0.00001%. Of these, all but one were for male-enhancement products and the average purchase price was close to $100. Taken together, these conversions would have resulted in revenues of $2,731.88 — a bit over $100 a day for the measurement period or $140 per day for periods when the campaign was active. However, our study interposed on only a small fraction of the overall Storm network — we estimate roughly 1.5 percent based on the fraction of worker bots we proxy. Thus, the total daily revenue attributable to Storm’s pharmacy campaign is likely closer to $7000 (or $9500 during periods of campaign activity). By the same logic, we estimate that Storm self-propagation campaigns can produce between 3500 and 8500 new bots per day.

“Under the assumption that our measurements are representative over time (an admittedly dangerous assumption when dealing with such small samples), we can extrapolate that, were it sent continuously at the same rate, Storm-generated pharmaceutical spam would produce roughly 3.5 million dollars of revenue in a year. This number could be even higher if spam-advertised pharmacies experience repeat business. A bit less than “millions of dollars every day,” but certainly a healthy enterprise.”
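The extrapolation in the quoted passage can be reproduced directly from its own numbers:

```python
# Re-deriving the researchers' estimate from the figures quoted above.

observed_revenue = 2731.88   # dollars earned during the measurement period
days = 26
observed_fraction = 0.015    # they proxied roughly 1.5% of Storm's worker bots

per_day_observed = observed_revenue / days
estimated_total_per_day = per_day_observed / observed_fraction
print(round(per_day_observed))         # ~105 dollars/day actually observed
print(round(estimated_total_per_day))  # ~7005 -> "likely closer to $7000"
```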

Storm made $7000 each day from spam Read More »

Quanta Crypto: cool but useless

From Bruce Schneier’s “Quantum Cryptography” (Crypto-Gram: 15 November 2008):

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper — period.

While I like the science of quantum cryptography — my undergraduate degree was in physics — I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols.

Quanta Crypto: cool but useless Read More »

What it takes to get people to comply with security policies

From Bruce Schneier’s “Second SHB Workshop Liveblogging (5)” (Schneier on Security: 11 June 2009):

Angela Sasse, University College London …, has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply — this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

What it takes to get people to comply with security policies Read More »

Small charges on your credit card – why?


From Brian Krebs’ “An Odyssey of Fraud” (The Washington Post: 17 June 2009):

Andy Kordopatis is the proprietor of Odyssey Bar, a modest watering hole in Pocatello, Idaho, a few blocks away from Idaho State University. Most of his customers pay for their drinks with cash, but about three times a day he receives a phone call from someone he’s never served — in most cases someone who’s never even been to Idaho — asking why their credit or debit card has been charged a small amount by his establishment.

Kordopatis says he can usually tell what’s coming next when the caller immediately asks to speak with the manager or owner.

“That’s when I start telling them that I know why they’re calling, and about the Russian hackers who are using my business,” Kordopatis said.

The Odyssey Bar is but one of dozens of small establishments throughout the United States seemingly picked at random by organized cyber criminals to serve as unwitting pawns in a high-stakes game of chess against the U.S. financial system. This daily pattern of phone calls and complaints has been going on for more than a year now. Kordopatis said he has talked to the company that processes his bar’s credit card payments about fixing the problem, but says they can’t do anything because he hasn’t actually lost any money from the scam.

The Odyssey Bar’s merchant account is being abused by online services that cyber thieves built to help other crooks check the balances and limits on stolen credit and debit card account numbers.
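This helps explain why the thieves need real merchant accounts at all. A stolen card number's basic validity can be checked offline with the Luhn checksum, but the balance and limit on an account can only be learned by pushing a real transaction through a live payment processor. A sketch of that offline check (my illustration, not from the article):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111111111111111"))  # True: well-known Visa test number
print(luhn_valid("4111111111111112"))  # False: last digit altered
```

Since anyone can run this check without touching a bank, it filters out typos but reveals nothing about whether an account is open or funded. That missing information is what the small charges through businesses like the Odyssey Bar are designed to extract.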

Small charges on your credit card – why? Read More »

Outline for an Unpublished Linux Textbook

Back in 2004 or so, I was asked to write an outline for a college textbook that would be used in courses on Linux. I happily complied, producing the outline you can see on my website. The editor on the project loved the outline & showed it to several professors to get their reactions, which were uniformly positive, with one prof reporting back that (& I’m paraphrasing here) “It was like this author read my mind, as this is exactly the book I’d like to use in my course!” Sadly, the book was never written, because the editor’s boss didn’t like the fact that I didn’t have a PhD in Computer Science. I thought that was a silly reason then, & I think it’s a silly reason now.

However, their loss is your gain. Here’s the outline for the book. Yes, it’s sadly outdated. Yes, it focuses quite a bit on SUSE, but that was what the publisher wanted. Yes, Linux has come a LONG way since I wrote this outline. But I still think it’s a damn good job, and you may find it interesting for historical reasons. So, enjoy!

Outline for an Unpublished Linux Textbook Read More »

How to deal with the fact that users can’t learn much about security

From Bruce Schneier’s “Second SHB Workshop Liveblogging (4)” (Schneier on Security: 11 June 2009):

Diana Smetters, Palo Alto Research Center …, started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs.

How to deal with the fact that users can’t learn much about security Read More »