Security

Grab what others type through an electrical socket


From Tim Greene’s “Black Hat set to expose new attacks” (Network World: 27 July 2009):

Black Hat USA 2009, considered a premier venue for publicizing new exploits with an eye toward neutralizing them, is expected to draw thousands to hear presentations from academics, vendors and private crackers.

For instance, one talk will demonstrate that if attackers can plug into an electrical socket near a computer or draw a bead on it with a laser they can steal whatever is being typed in. How to execute this attack will be demonstrated by Andrea Barisani and Daniele Bianco, a pair of researchers for network security consultancy Inverse Path.

Attackers grab keyboard signals that are generated by hitting keys. Because the data wire within the keyboard cable is unshielded, the signals leak into the ground wire in the cable, and from there into the ground wire of the electrical system feeding the computer. Bit streams generated by the keyboards that indicate what keys have been struck create voltage fluctuations in the grounds, they say.

Attackers extend the ground of a nearby power socket and attach to it two probes separated by a resistor. The voltage difference and the fluctuations in that difference – the keyboard signals – are captured from both ends of the resistor and converted to letters.
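Once the trace is captured, turning voltage fluctuations back into keystrokes is a signal-decoding job. A minimal Python sketch of that last step, assuming the ground-line samples have already been filtered and sliced to one sample per bit period; the 11-bit PS/2 framing is real, but the thresholds and the (deliberately partial) scan-code map are illustrative:

```python
# Sketch: recover PS/2 scan codes from a cleaned-up voltage trace.
# Assumes the captured ground-line fluctuations have already been
# filtered and sliced into one sample per bit period (illustrative).

SCANCODE_TO_CHAR = {0x1C: 'a', 0x32: 'b', 0x21: 'c', 0x23: 'd', 0x24: 'e'}  # partial map

def bits_from_samples(samples, threshold=0.5):
    """Threshold raw voltage samples into a bit stream."""
    return [1 if v > threshold else 0 for v in samples]

def decode_frames(bits):
    """Decode 11-bit PS/2 frames: start(0), 8 data bits LSB-first, odd parity, stop(1)."""
    chars = []
    i = 0
    while i + 11 <= len(bits):
        frame = bits[i:i + 11]
        if frame[0] != 0 or frame[10] != 1:
            i += 1                       # not aligned on a frame; slide forward
            continue
        data = frame[1:9]
        code = sum(b << k for k, b in enumerate(data))   # LSB first
        if (sum(data) + frame[9]) % 2 == 1:              # odd parity holds
            chars.append(SCANCODE_TO_CHAR.get(code, '?'))
        i += 11
    return ''.join(chars)
```

In practice the hard part is the analog front end: separating the keyboard's leakage from everything else sharing the ground.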

This method would not work if the computer were unplugged from the wall, such as a laptop running on its battery. A second attack can prove effective in this case, Bianco and Barisani’s paper says.

Attackers point a cheap laser at a shiny part of a laptop or even an object on the table with the laptop. A receiver is aligned to capture the reflected light beam and the modulations that are caused by the vibrations resulting from striking the keys.

Analyzing the sequences of individual keys that are struck and the spacing between words, the attacker can figure out what message has been typed. Knowing what language is being typed is a big help, they say.
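That last inference step — word lengths plus knowledge of the language — can be approximated with a plain dictionary filter. A toy sketch (the word list is illustrative):

```python
# Toy sketch: given only the number of keystrokes per word (recovered
# from the spacing between typed words), narrow down candidate words
# using knowledge of the target language. The word list is illustrative.

DICTIONARY = ["the", "cat", "sat", "on", "mat", "password", "security"]

def candidates(keystroke_counts):
    """For each observed word length, list the dictionary words that fit."""
    by_length = {}
    for word in DICTIONARY:
        by_length.setdefault(len(word), []).append(word)
    return [by_length.get(n, []) for n in keystroke_counts]

# An observed pattern of 3-3-2-3 keystrokes already rules out most words:
print(candidates([3, 3, 2, 3]))
```

With a real dictionary and inter-key timing added in, each position shrinks to a short candidate list rather than the whole language.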


Warnings about invalid security certs are ignored by users


From Robert McMillan’s “Security certificate warnings don’t work, researchers say” (IDG News Service: 27 July 2009):

In a laboratory experiment, researchers found that between 55 percent and 100 percent of participants ignored certificate security warnings, depending on which browser they were using (different browsers use different language to warn their users).

The researchers first conducted an online survey of more than 400 Web surfers, to learn what they thought about certificate warnings. They then brought 100 people into a lab and studied how they surf the Web.

They found that people often had a mixed-up understanding of certificate warnings. For example, many thought they could ignore the messages when visiting a site they trust, but that they should be more wary at less-trustworthy sites.

In the Firefox 3 browser, Mozilla tried to use simpler language and better warnings for bad certificates. And the browser makes it harder to ignore a bad certificate warning. In the Carnegie Mellon lab, Firefox 3 users were the least likely to click through after being shown a warning.

The researchers experimented with several redesigned security warnings they’d written themselves, which appeared to be even more effective.…

Still, Sunshine believes that better warnings will help only so much. Instead of warnings, browsers should use systems that can analyze the error messages. “If those systems decide this is likely to be an attack, they should just block the user altogether,” he said.
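Sunshine’s suggestion amounts to triage: classify the certificate error and block outright when it looks like an attack. A hypothetical sketch — the error categories and rules here are mine, not any browser’s:

```python
# Sketch of the triage described above: classify a certificate error and
# block outright when it looks like an attack, instead of asking the user.
# Categories and rules are illustrative, not any real browser's policy.

LIKELY_ATTACK = {"hostname_mismatch", "untrusted_root", "revoked"}
LIKELY_MISCONFIG = {"expired_recently", "self_signed_intranet"}

def decide(error_kind, site_seen_before_with_valid_cert):
    """Return 'block' for likely attacks, 'warn' for likely misconfigurations."""
    if error_kind in LIKELY_ATTACK or site_seen_before_with_valid_cert:
        return "block"   # high attack likelihood: no user prompt at all
    if error_kind in LIKELY_MISCONFIG:
        return "warn"    # plausible admin error: fall back to a warning
    return "warn"        # unknown error: default to warning, not blocking
```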


RFID dust

RFID dust from Hitachi

From David Becker’s “Hitachi Develops RFID Powder” (Wired: 15 February 2007):

[Hitachi] recently showed a prototype of an RFID chip measuring 0.05 millimeters square and 5 microns thick, about the size of a grain of sand. They expect to have ‘em on the market in two or three years.

The chips are packed with 128 bits of static memory, enough to hold a 38-digit ID number.
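The arithmetic checks out: 2¹²⁸ ≈ 3.4 × 10³⁸, so 128 bits comfortably encode any 38-digit decimal ID.

```python
# 128 bits hold values up to 2**128 - 1 ≈ 3.4e38, so every 38-digit
# decimal number (at most 10**38 - 1) fits with room to spare.
assert 10**38 - 1 < 2**128
print(len(str(2**128 - 1)))  # the maximum value itself has 39 digits
```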

The size makes the new chips ideal for embedding in paper, where they could verify the legitimacy of currency or event tickets. Implantation under the skin would be trivial…


RFID security problems


2005

From Brian Krebs’ “Leaving Las Vegas: So Long DefCon and Blackhat” (The Washington Post: 1 August 2005):

DefCon 13 also was notable for being the location where two new world records were set — both involved shooting certain electronic signals unprecedented distances. Los Angeles-based Flexilis set the world record for transmitting data to and from a “passive” radio frequency identification (RFID) card — covering a distance of more than 69 feet. (Active RFID — the kind being integrated into foreign passports, for example — differs from passive RFID in that it emits its own magnetic signal and can only be detected from a much shorter distance.)

The second record set this year at DefCon was pulled off by some teens from Cincinnati, who broke the world record they set last year by building a device capable of maintaining an unamplified, 11-megabit 802.11b wireless Internet connection over a distance of 125 miles (the network actually spanned from Utah into Nevada).

From Andrew Brandt’s “Black Hat, Lynn Settle with Cisco, ISS” (PC World: 29 July 2005):

Security researcher Kevin Mahaffey makes a final adjustment to a series of radio antennas; Mahaffey used the directional antennas in a demonstration during his presentation, “Long Range RFID and its Security Implications.” Mahaffey and two of his colleagues demonstrated how he could increase the “read range” of radio frequency identification (RFID) tags from the typical four to six inches to approximately 50 feet. Mahaffey said the tags could be read at a longer distance, but he wanted to perform the demonstration in the room where he gave the presentation, and that was the greatest distance within the room that he could demonstrate. RFID tags such as the one Mahaffey tested will begin to appear in U.S. passports later this year or next year.

2006

From Joris Evers and Declan McCullagh’s “Researchers: E-passports pose security risk” (CNET: 5 August 2006):

At a pair of security conferences here, researchers demonstrated that passports equipped with radio frequency identification (RFID) tags can be cloned with a laptop equipped with a $200 RFID reader and a similarly inexpensive smart card writer. In addition, they suggested that RFID tags embedded in travel documents could identify U.S. passports from a distance, possibly letting terrorists use them as a trigger for explosives.

At the Black Hat conference, Lukas Grunwald, a researcher with DN-Systems in Hildesheim, Germany, demonstrated that he could copy data stored in an RFID tag from his passport and write the data to a smart card equipped with an RFID chip.

From Kim Zetter’s “Hackers Clone E-Passports” (Wired: 3 August 2006):

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker — Walluf, Germany-based ACG Identification Technologies — but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports — called Golden Reader Tool and made by secunet Security Networks — and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader — which can also act as a writer — and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

Although he can clone the tag, Grunwald says it’s not possible, as far as he can tell, to change data on the chip, such as the name or birth date, without being detected. That’s because the passport uses cryptographic hashes to authenticate the data.

Grunwald’s technique requires a counterfeiter to have physical possession of the original passport for a time. A forger could not surreptitiously clone a passport in a traveler’s pocket or purse because of a built-in privacy feature called Basic Access Control that requires officials to unlock a passport’s RFID chip before reading it. The chip can only be unlocked with a unique key derived from the machine-readable data printed on the passport’s page.

To produce a clone, Grunwald has to program his copycat chip to answer to the key printed on the new passport. Alternatively, he can program the clone to dispense with Basic Access Control, which is an optional feature in the specification.
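Basic Access Control’s key derivation is specified in ICAO Doc 9303: the access key is hashed from the document number, birth date and expiry date printed in the machine-readable zone, each followed by its check digit. A sketch using made-up sample document data:

```python
import hashlib

# Sketch of ICAO 9303 Basic Access Control key derivation: the chip's
# access key comes from fields printed in the machine-readable zone,
# which is why a reader must see the data page before it can unlock the chip.

WEIGHTS = (7, 3, 1)

def check_digit(field):
    """ICAO check digit: digits as-is, A-Z = 10..35, '<' filler = 0."""
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            val = int(ch)
        elif ch == '<':
            val = 0
        else:
            val = ord(ch) - ord('A') + 10
        total += val * WEIGHTS[i % 3]
    return str(total % 10)

def bac_key_seed(doc_number, birth_date, expiry_date):
    """Key seed = first 16 bytes of SHA-1 over MRZ fields plus check digits."""
    mrz_info = (doc_number + check_digit(doc_number)
                + birth_date + check_digit(birth_date)
                + expiry_date + check_digit(expiry_date))
    return hashlib.sha1(mrz_info.encode('ascii')).digest()[:16]

# Sample (non-real) document data:
seed = bac_key_seed("L898902C<", "690806", "940623")
```

This is why Grunwald’s clone must either answer to the key printed on the new booklet, or simply skip BAC, which the spec makes optional.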

As planned, U.S. e-passports will contain a web of metal fiber embedded in the front cover of the documents to shield them from unauthorized readers. Though Basic Access Control would keep the chip from yielding useful information to attackers, it would still announce its presence to anyone with the right equipment. The government added the shielding after privacy activists expressed worries that a terrorist could simply point a reader at a crowd and identify foreign travelers.

In theory, with metal fibers in the front cover, nobody can sniff out the presence of an e-passport that’s closed. But [Kevin Mahaffey and John Hering of Flexilis] demonstrated in their video how even if a passport opens only half an inch — such as it might if placed in a purse or backpack — it can reveal itself to a reader at least two feet away.

In addition to cloning passport chips, Grunwald has been able to clone RFID ticket cards used by students at universities to buy cafeteria meals and add money to the balance on the cards.

He and his partners were also able to crash RFID-enabled alarm systems designed to sound when an intruder breaks a window or door to gain entry. Such systems require workers to pass an RFID card over a reader to turn the system on and off. Grunwald found that by manipulating data on the RFID chip he could crash the system, opening the way for a thief to break into the building through a window or door.

And they were able to clone and manipulate RFID tags used in hotel room key cards and corporate access cards and create a master key card to open every room in a hotel, office or other facility. He was able, for example, to clone Mifare, the most commonly used key-access system, designed by Philips Electronics. To create a master key he simply needed two or three key cards for different rooms to determine the structure of the cards. Of the 10 different types of RFID systems he examined that were being used in hotels, none used encryption.

Many of the card systems that did use encryption failed to change the default key that manufacturers program into the access card system before shipping, or they used sample keys that the manufacturer includes in instructions sent with the cards. Grunwald and his partners created a dictionary database of all the sample keys they found in such literature (much of which they found accidentally published on purchasers’ websites) to conduct what’s known as a dictionary attack. When attacking a new access card system, their RFDump program would search the list until it found the key that unlocked a card’s encryption.

“I was really surprised we were able to open about 75 percent of all the cards we collected,” he says.
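The dictionary attack Grunwald describes is simple to sketch. The keys below are well-known Mifare Classic defaults; `try_authenticate` is a stand-in for the actual reader call:

```python
# Sketch of the default-key dictionary attack described above: try every
# manufacturer sample key collected from product literature against a card.
# try_authenticate stands in for the real RFID reader's auth call.

SAMPLE_KEYS = [
    bytes.fromhex("FFFFFFFFFFFF"),   # widely shipped factory default
    bytes.fromhex("A0A1A2A3A4A5"),   # another common default key
    bytes.fromhex("D3F7D3F7D3F7"),
]

def crack(card, try_authenticate, key_list=SAMPLE_KEYS):
    """Return the first key the card accepts, or None if the whole list fails."""
    for key in key_list:
        if try_authenticate(card, key):
            return key
    return None
```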

2009

From Thomas Ricker’s “Video: Hacker war drives San Francisco cloning RFID passports” (Engadget: 2 February 2009):

Using a $250 Motorola RFID reader and antenna connected to his laptop, Chris recently drove around San Francisco reading RFID tags from passports, driver licenses, and other identity documents. In just 20 minutes, he found and cloned the passports of two very unaware US citizens.


Cell phone viruses

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

Earlier this year, smartphone users in China started to get messages promising a “sexy view” if they clicked on a link. The link led to a download. That download was a spam generator which, once installed, sent identical “sexy view” messages to everyone in the owner’s contacts list.

That was the first virus known to travel by text message. It was chiefly an annoyance, but there is great potential harm from mobile viruses, especially as technologies such as Bluetooth provide new ways for viruses to spread. But there has never yet been a cellphone threat as serious as Conficker is to PCs.

There are two reasons for that, says Albert-László Barabási of Northeastern University in Boston. He and his colleagues used billing data to model the spread of a mobile virus. They found that Bluetooth is an inefficient way of transmitting a virus as it can only jump between users who are within 30 metres of each other. A better option would be for the virus to disguise itself as a picture message. But that could still only infect handsets running the same operating system. As the mobile market is fragmented, says Barabási, no one virus can gain a foothold.


How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.
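The rendezvous trick is easy to illustrate. A toy domain-generation sketch — the hashing details are illustrative, not Conficker’s actual algorithm; the point is that worm and author derive identical lists from the date alone:

```python
import hashlib
from datetime import date

# Toy domain-generation algorithm in the style described above: both the
# worm and its author compute the same 250 domains from the current date,
# so the author can register any one of them and leave instructions there.

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day: date, count: int = 250):
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Keep only letters for a plausible-looking hostname label:
        name = "".join(c for c in digest if c.isalpha())[:10] or "q" + digest[:9]
        domains.append(name + TLDS[i % len(TLDS)])
    return domains

# Defender and attacker compute the identical list for any given day:
today = daily_domains(date(2009, 1, 15))
assert today == daily_domains(date(2009, 1, 15))
```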

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.
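The asymmetry is stark: each bot samples only 500 of the day’s 50,000 candidates, but defenders must deal with all 50,000. A quick sketch:

```python
import random

# Each bot polls a random 500 of the day's 50,000 candidate domains.
# Defenders get no such discount: any single unblocked domain out of the
# 50,000 can serve the update, so all of them must be handled, across
# 116 different naming authorities, every day.

DAILY_POOL = 50_000
PER_BOT = 500

def bot_poll_list(pool_size=DAILY_POOL, k=PER_BOT, rng=random):
    """One bot's random sample of the day's candidate domains."""
    return rng.sample(range(pool_size), k)

# Two bots check mostly different domains (expected overlap is only
# 500 * 500 / 50,000 = 5), yet both stay reachable if even one domain
# in the pool escapes the blocklist.
a, b = set(bot_poll_list()), set(bot_poll_list())
print(len(a & b))
```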

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.


Stolen credit card data is cheaper than ever in the Underground

From Brian Krebs’ “Glut of Stolen Banking Data Trims Profits for Thieves” (The Washington Post: 15 April 2009):

A massive glut in the number of credit and debit cards stolen in data breaches at financial institutions last year has flooded criminal underground markets that trade in this material, driving prices for the illicit goods to the lowest levels seen in years, experts have found.

For a glimpse of just how many financial records were lost to hackers last year, consider the stats released this week by Verizon Business. The company said it responded to at least 90 confirmed data breaches last year involving roughly 285 million consumer records, a number that exceeded the combined total number of breached records from cases the company investigated from 2004 to 2007. Breaches at banks and financial institutions were responsible for 93 percent of all such records compromised last year, Verizon found.

As a result, the supply of stolen identities and credit and debit cards for sale in the underground markets is outpacing demand for the product, said Bryan Sartin, director of investigative response at Verizon Business.

Verizon found that prices for stolen credit card data have dropped from between $10 and $16 per record in mid-2007 to less than $0.50 per record today.

According to a study released last week by Symantec Corp., each card can sell for as little as 6 cents when purchased in bulk.

Lawrence Baldwin, a security consultant in Alpharetta, Ga., has been working with several financial institutions to help infiltrate illegal card-checking services. Baldwin estimates that at least 25,000 credit and debit cards are checked each day at three separate illegal card-checking Web sites he is monitoring. That translates to about 800,000 cards per month or nearly 10 million cards each year.

Baldwin said the checker sites take advantage of authentication weaknesses in the card processing system that allow merchants to conduct so-called “pre-authorization requests,” which merchants use to place a temporary charge on the account to make sure that the cardholder has sufficient funds to pay for the promised goods or services.

Pre-authorization requests are quite common. When a waiter at a restaurant swipes a customer’s card and brings the receipt to the table so the customer can add a tip, for example, that initial charge is essentially a pre-authorization.

With these card-checking services, however, in most cases the charge initiated by the pre-authorization check is never consummated. As a result, unless a consumer is monitoring their accounts online in real-time, they may never notice a pre-authorization initiated by a card-checking site against their card number, because that query won’t show up as a charge on the customer’s monthly statement.
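From the issuer’s side, the pattern is detectable in principle: tiny pre-authorizations that are never captured, from merchants the cardholder has never used. A hypothetical sketch (the record fields are mine):

```python
# Sketch of how an issuer might surface the card-checking pattern
# described above: small pre-authorizations that are never consummated,
# from merchants the cardholder has never visited. Fields are illustrative.

def suspicious_preauths(transactions, known_merchants, max_amount=10.00):
    """Flag small, never-captured pre-auths from unfamiliar merchants."""
    flagged = []
    for t in transactions:
        if (t["type"] == "preauth"
                and not t["captured"]
                and t["amount"] <= max_amount
                and t["merchant"] not in known_merchants):
            flagged.append(t)
    return flagged

txns = [
    {"type": "preauth", "captured": False, "amount": 1.00, "merchant": "Wild Birds Unlimited"},
    {"type": "preauth", "captured": True,  "amount": 45.20, "merchant": "Diner"},
]
print(len(suspicious_preauths(txns, known_merchants={"Diner"})))  # 1
```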

The crooks have designed their card-checking sites so that each check is submitted into the card processing network using a legitimate, hijacked merchant account number combined with a completely unrelated merchant name, Baldwin discovered.

One of the many innocent companies caught up in one of these card-checking services is Wild Birds Unlimited, a franchise pet store outside of Buffalo, N.Y. Baldwin said a fraudulent card-checking service is running pre-authorization requests using Wild Bird’s store name and phone number in combination with another merchant’s ID number.

Danielle Pecoraro, the store’s manager, said the bogus charges started in January 2008. Since then, she said, her store has received an average of three to four phone calls each day from people who had never shopped there, wondering why small, $1-$10 charges from her store were showing up on their monthly statements. Some of the charges were for as little as 24 cents, and a few were for as much as $1,900.


80% of all spam from botnets

From Jacqui Cheng’s “Report: botnets sent over 80% of all June spam” (Ars Technica: 29 June 2009):

A new report (PDF) from Symantec’s MessageLabs says that more than 80 percent of all spam sent today comes from botnets, despite several recent shut-downs.

According to MessageLabs’ June report, spam accounted for 90.4 percent of all e-mail sent in the month of June—this was roughly unchanged since May. Botnets, however, sent about 83.2 percent of that spam, with the largest spam-wielding botnet being Cutwail. Cutwail is described as “one of the largest and most active botnets” and has doubled its size and output per bot since March of this year. As a result, it is now responsible for 45 percent of all spam, with others like Mega-D, Xarvester, Donbot, Grum, and Rustock making up much of the difference.


Storm made $7000 each day from spam

From Bruce Schneier’s “The Economics of Spam” (Crypto-Gram: 15 November 2008):

Researchers infiltrated the Storm worm and monitored its doings.

“After 26 days, and almost 350 million e-mail messages, only 28 sales resulted — a conversion rate of well under 0.00001%. Of these, all but one were for male-enhancement products and the average purchase price was close to $100. Taken together, these conversions would have resulted in revenues of $2,731.88 — a bit over $100 a day for the measurement period or $140 per day for periods when the campaign was active. However, our study interposed on only a small fraction of the overall Storm network — we estimate roughly 1.5 percent based on the fraction of worker bots we proxy. Thus, the total daily revenue attributable to Storm’s pharmacy campaign is likely closer to $7000 (or $9500 during periods of campaign activity). By the same logic, we estimate that Storm self-propagation campaigns can produce between 3500 and 8500 new bots per day.

“Under the assumption that our measurements are representative over time (an admittedly dangerous assumption when dealing with such small samples), we can extrapolate that, were it sent continuously at the same rate, Storm-generated pharmaceutical spam would produce roughly 3.5 million dollars of revenue in a year. This number could be even higher if spam-advertised pharmacies experience repeat business. A bit less than “millions of dollars every day,” but certainly a healthy enterprise.”
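The extrapolation in the quote, spelled out:

```python
# The researchers proxied roughly 1.5% of Storm's worker bots, so
# observed revenue is scaled up by 1/0.015 to estimate the whole botnet.

observed_revenue = 2731.88        # 28 sales over the 26-day window
observed_days = 26
proxied_fraction = 0.015

per_day_observed = observed_revenue / observed_days     # ≈ $105/day seen
per_day_total = per_day_observed / proxied_fraction     # ≈ $7,000/day overall
per_day_active = 140 / proxied_fraction                 # ≈ $9,300 on active days
per_year = per_day_active * 365                         # ≈ $3.4 million/year

print(round(per_day_total), round(per_year / 1e6, 1))
```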


Quantum crypto: cool but useless

From Bruce Schneier’s “Quantum Cryptography” (Crypto-Gram: 15 November 2008):

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper — period.
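The principle can be simulated classically. A toy BB84-style sketch: an eavesdropper who measures and re-sends photons introduces roughly a 25 percent error rate in the bits Alice and Bob later compare, while an untapped channel shows none. A sketch of the idea, not a real quantum system:

```python
import random

# Toy BB84-style simulation: measuring a photon in the wrong basis
# randomizes its bit, so an eavesdropper who measures and re-sends
# raises the error rate on the sifted bits Alice and Bob compare.

def error_rate(rounds=4000, eavesdropper=False, rng=random):
    kept = errors = 0
    for _ in range(rounds):
        bit = rng.randint(0, 1)            # Alice's raw bit
        a_basis = rng.randint(0, 1)        # Alice's encoding basis
        photon_bit, photon_basis = bit, a_basis
        if eavesdropper:
            e_basis = rng.randint(0, 1)
            if e_basis != photon_basis:    # wrong basis randomizes the bit
                photon_bit = rng.randint(0, 1)
            photon_basis = e_basis         # Eve re-sends in her own basis
        b_basis = rng.randint(0, 1)        # Bob picks a basis at random
        if b_basis == photon_basis:
            measured = photon_bit
        else:
            measured = rng.randint(0, 1)   # wrong basis: random outcome
        if b_basis == a_basis:             # sifting: keep matching bases only
            kept += 1
            errors += (measured != bit)
    return errors / kept

print(error_rate())                   # 0.0 without an eavesdropper
print(error_rate(eavesdropper=True))  # ~0.25: the disturbance gives Eve away
```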

While I like the science of quantum cryptography — my undergraduate degree was in physics — I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols.


What it takes to get people to comply with security policies

From Bruce Schneier’s “Second SHB Workshop Liveblogging (5)” (Schneier on Security: 11 June 2009):

Angela Sasse, University College London …, has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply — this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”


Small charges on your credit card – why?


From Brian Krebs’ “An Odyssey of Fraud” (The Washington Post: 17 June 2009):

Andy Kordopatis is the proprietor of Odyssey Bar, a modest watering hole in Pocatello, Idaho, a few blocks away from Idaho State University. Most of his customers pay for their drinks with cash, but about three times a day he receives a phone call from someone he’s never served — in most cases someone who’s never even been to Idaho — asking why their credit or debit card has been charged a small amount by his establishment.

Kordopatis says he can usually tell what’s coming next when the caller immediately asks to speak with the manager or owner.

“That’s when I start telling them that I know why they’re calling, and about the Russian hackers who are using my business,” Kordopatis said.

The Odyssey Bar is but one of dozens of small establishments throughout the United States seemingly picked at random by organized cyber criminals to serve as unwitting pawns in a high-stakes game of chess against the U.S. financial system. This daily pattern of phone calls and complaints has been going on for more than a year now. Kordopatis said he has talked to the company that processes his bar’s credit card payments about fixing the problem, but says they can’t do anything because he hasn’t actually lost any money from the scam.

The Odyssey Bar’s merchant account is being abused by online services that cyber thieves built to help other crooks check the balances and limits on stolen credit and debit card account numbers.


How to deal with the fact that users can’t learn much about security

From Bruce Schneier’s “Second SHB Workshop Liveblogging (4)” (Schneier on Security: 11 June 2009):

Diana Smetters, Palo Alto Research Center …, started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs.

How to deal with the fact that users can’t learn much about security Read More »

Could Green Dam lead to the largest botnet in history?

Green_Damn_site_blocked.jpg

From Rob Cottingham’s “From blocking to botnet: Censorship isn’t the only problem with China’s new Internet blocking software” (Social Signal: 10 June 2009):

Any blocking software needs to update itself from time to time: at the very least to freshen its database of forbidden content, and more than likely to fix bugs, add features and improve performance. (Most anti-virus software does this.)

If all the software does is to refresh the list of banned sites, that limits the potential for abuse. But if the software is loading new executable code onto the computer, suddenly there’s the potential for something a lot bigger.

Say you’re a high-ranking official in the Chinese military. And let’s say you have some responsibility for the state’s capacity to wage so-called cyber warfare: digital assaults on an enemy’s technological infrastructure.

It strikes you: there’s a single backdoor into more than 40 million Chinese computers, capable of installing… well, nearly anything you want.

What if you used that backdoor, not just to update blocking software, but to create something else?

Say, the biggest botnet in history?

Still, a botnet 40 million strong (plus the installed base already in place in Chinese schools and other institutions) at the beck and call of the military is potentially a formidable weapon. Even if the Chinese government has no intention today of using Green Dam for anything other than blocking pornography, the temptation to repurpose it for military purposes may prove to be overwhelming.

Could Green Dam lead to the largest botnet in history? Read More »

Green Dam is easily exploitable

Green_Damn_site_blocked.jpg

From Scott Wolchok, Randy Yao, and J. Alex Halderman’s “Analysis of the Green Dam Censorware System” (The University of Michigan: 11 June 2009):

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC.

According to press reports, China will soon require all PCs sold in the country to include Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material.

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

Green Dam is easily exploitable Read More »

Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.

Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants. …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.

Al Qaeda’s use of social networking sites Read More »

The watchclock knows where your night watchman is

Detex Watchclock Station
Creative Commons License photo credit: 917press

From Christopher Fahey’s “Who Watches the Watchman?” (GraphPaper: 2 May 2009):

The Detex Newman watchclock was first introduced in 1927 and is still in wide use today.

… What could you possibly do in 1900 to be absolutely sure a night watchman was making his full patrol?

An elegant solution, designed and patented in 1901 by the German engineer A.A. Newman, is called the “watchclock”. It’s an ingenious mechanical device, slung over the shoulder like a canteen and powered by a simple wind-up spring mechanism. It precisely tracks and records a night watchman’s position in both space and time for the duration of every evening. It also generates a detailed, permanent, and verifiable record of each night’s patrol.

What’s so interesting to me about the watchclock is that it’s an early example of interaction design used to explicitly control user behavior. The “user” of the watchclock device is obliged to behave in a strictly delimited fashion.

The key, literally, to the watchclock system is that the watchman is required to “clock in” at a series of perhaps a dozen or more checkpoints throughout the premises. Positioned at each checkpoint is a unique, coded key nestled in a little steel box and secured by a small chain. Each keybox is permanently and discreetly installed in strategically-placed nooks and crannies throughout the building, for example in a broom closet or behind a stairway.

The watchman makes his patrol. He visits every checkpoint and clicks each unique key into the watchclock. Within the device, the clockwork marks the exact time and key-location code to a paper disk or strip. If the watchman visits all checkpoints in order, he will have completed his required patrol route.

The watchman’s supervisor can subsequently unlock the device itself (the watchman himself cannot open the watchclock) and review the paper records to confirm whether the watchman was doing his job.
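The paper record amounts to a simple verification protocol. Here is a minimal sketch of the check a supervisor performs when reading the strip — the checkpoint codes, times, and function names are invented for illustration, not taken from the Newman device:

```python
from datetime import datetime

# Required checkpoint keys, in the prescribed patrol order.
ROUTE = ["K1", "K2", "K3", "K4"]

def patrol_complete(records, route=ROUTE):
    """records: list of (timestamp, key_code) punches stamped by the clockwork.
    Returns True if every checkpoint key appears, in patrol order."""
    # The clockwork stamps punches chronologically; sort by time to be safe.
    punched = [code for _, code in sorted(records)]
    # Subsequence check: each route key must appear after the previous one.
    it = iter(punched)
    return all(key in it for key in route)

log = [
    (datetime(1927, 5, 1, 22, 0), "K1"),
    (datetime(1927, 5, 1, 22, 20), "K2"),
    (datetime(1927, 5, 1, 22, 45), "K3"),
    (datetime(1927, 5, 1, 23, 5), "K4"),
]
print(patrol_complete(log))      # True: full patrol, checkpoints in order
print(patrol_complete(log[:2]))  # False: patrol cut short
```

The consumed-iterator trick mirrors what the paper strip enforces physically: a later checkpoint punch cannot be counted as an earlier one.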

The watchclock knows where your night watchman is Read More »

A better alternative to text CAPTCHAs

From Rich Gossweiler, Maryam Kamvar, & Shumeet Baluja’s “What’s Up CAPTCHA?: A CAPTCHA Based On Image Orientation” (Google: 20-24 April 2009):

There are several classes of images which can be successfully oriented by computers: images containing objects such as faces, cars, or pedestrians, or regions such as sky and grass, give automated systems clear anchors for orientation.

Many images, however, are difficult for computers to orient. For example, indoor scenes have variations in lighting sources, and abstract and close-up images provide the greatest challenge to both computers and people, often because no clear anchor points or lighting sources exist.

The average performance on outdoor photographs, architecture photographs and typical tourist type photographs was significantly higher than the performance on abstract photographs, close-ups and backgrounds. When an analysis of the features used to make the discriminations was done, it was found that the edge features play a significant role.

It is important not to simply select random images for this task. There are many cues which can quickly reveal the upright orientation of an image to automated systems; these images must be filtered out. For example, if typical vacation or snapshot photos are used, automated rotation accuracies can be in the 90% range. The existence of any of the cues in the presented images will severely limit the effectiveness of the approach. Three common cues are listed below:

1. Text: Usually the predominant orientation of text in an image reveals the upright orientation of an image.

2. Faces and People: Most photographs are taken with the face(s) / people upright in the image.

3. Blue skies, green grass, and beige sand: These are all revealing clues, and are present in many travel/tourist photographs found on the web. Extending this beyond color, in general, the sky often has few texture/edges in comparison to the ground. Additional cues found important in human tests include "grass", "trees", "cars", "water" and "clouds".

Second, due to sometimes warped objects, lack of shading and lighting cues, and often unrealistic colors, cartoons also make ideal candidates. … Finally, although we did not alter the content of the image, it may be possible to simply alter the color-mapping, overall lighting curves, and hue/saturation levels to reveal images that appear unnatural but remain recognizable to people.

To normalize the shape and size of the images, we scaled each image to a 180×180 pixel square and we then applied a circular mask to remove the image corners.

We have created a system that has sufficiently high human-success rates and sufficiently low computer-success rates. When using three images, the rotational CAPTCHA system results in an 84% human success metric, and a .009% bot-success metric (assuming random guessing). These metrics are based on two variables: the number of images we require a user to rotate and the size of the acceptable error window (the degrees from upright which we still consider to be upright). Predictably, as the number of images shown becomes greater, the probability of correctly solving them decreases. However, as the error window increases, the probability of correctly solving them increases. The system which results in an 84% human success rate and .009% bot success rate asks the user to rotate three images, each into a 16° window around upright (8° on either side of upright).

A CAPTCHA system which displayed ≥ 3 images with a ≤ 16-degree error window would achieve a guess success rate of less than 1 in 10,000, a standard acceptable computer success rate for CAPTCHAs.
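The arithmetic behind that bot-success figure is easy to reproduce. A quick sketch, assuming (as the paper does) a bot that picks each rotation uniformly at random, so each image has a 16-degree acceptance window out of 360 and all images must land inside it independently:

```python
def random_guess_success(n_images, window_degrees=16, circle=360):
    """Probability a uniform random guesser rotates all n images
    into the acceptance window."""
    per_image = window_degrees / circle   # 16/360 ≈ 0.0444 per image
    return per_image ** n_images          # independent guesses multiply

p = random_guess_success(3)
print(f"{p:.6f}")       # 0.000088 — i.e. about .009%, matching the paper
print(p < 1 / 10_000)   # True: clears the 1-in-10,000 bar
```

This also shows why the two tuning knobs trade off as described: adding images shrinks the bot's odds geometrically, while widening the error window raises the per-image factor.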

In our experiments, users moved a slider to rotate the image to its upright position. On small display devices such as a mobile phone, they could directly manipulate the image using a touch screen, as seen in Figure 12, or can rotate it via button presses.

A better alternative to text CAPTCHAs Read More »

A story of failed biometrics at a gym

Fingerprints
Creative Commons License photo credit: kevindooley

From Jake Vinson’s “Cracking your Fingers” (The Daily WTF: 28 April 2009):

A few days later, Ross stood proudly in the reception area, hands on his hips. A high-tech fingerprint scanner sat at the reception area near the turnstile and register, as the same scanner would be used for each, though the register system wasn’t quite ready for rollout yet. Another scanner sat on the opposite side of the turnstile, for gym members to sign out. … The receptionist looked almost as pleased as Ross that morning as well, excited that this meant they were working toward a system that required fewer manual member ID lookups.

After signing a few people up, the new system was going swimmingly. Some users declined to use the new system, instead walking to the far side of the counter to use the old touchscreen system. Then Johnny tried to leave after his workout.

… He scanned his finger on his way out, but the turnstile wouldn’t budge.

“Uh, just a second,” the receptionist furiously typed and clicked, while Johnny removed one of his earbuds and stared. “I’ll just have to manually override it…” but it was useless. There was no manual override option. Somehow, it was never considered that the scanner would malfunction. After several seconds of searching and having Johnny try to scan his finger again, the receptionist instructed him just to jump over the turnstile.

It was later discovered that the system required a “sign in” and a “sign out,” and if a member was recognized as someone else when attempting to sign out, the system rejected the input, and the turnstile remained locked in position. This was not good.

The scene repeated itself several times that day. Worse, the fingerprint scanner at the exit was getting kind of disgusting. Dozens of sweaty fingerprints required the scanner to be cleaned hourly, and even after it was freshly cleaned, it sometimes still couldn’t read fingerprints right. The latticed patterns on the barbell grips would leave indented patterns temporarily on the members’ fingers, there could be small cuts or folds on fingertips just from carrying weights or scrapes on the concrete coming out of the pool, fingers were wrinkly after a long swim, or sometimes the system just misidentified the person for no apparent reason.

Fingerprint Scanning

In much the same way that it’s not a good idea to store passwords in plaintext, it’s not a good idea to store raw fingerprint data. Instead, it should be hashed, so that the same input will consistently give the same output, but said output can’t be used to determine what the input was. In biometrics, there are many complex algorithms that can analyze a fingerprint via several points on the finger. This system was set up to record seven points.

After a few hours of rollout, though, it became clear that the real world doesn’t conform to how it should’ve worked in theory. There were simply too many variables, too many activities in the gym that could cause fingerprints to become altered. As such, the installers did what they thought was the reasonable thing to do – reduce the precision from seven points down to something substantially lower.
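The misidentifications that followed are essentially the birthday problem: shrink the space of distinct fingerprint templates far enough and two members will almost surely map to the same one. A rough illustration — the member count and template-space sizes below are hypothetical, not figures from the story:

```python
import math

def collision_probability(n_members, template_space):
    """Birthday-problem approximation: chance that at least two of
    n_members share one of template_space equally likely templates."""
    return 1 - math.exp(-n_members * (n_members - 1) / (2 * template_space))

# Seven match points imply a large template space; dropping points
# shrinks the space multiplicatively, and collisions follow quickly.
print(round(collision_probability(2_000, 10**9), 4))  # ≈ 0.002: rare
print(round(collision_probability(2_000, 10**5), 4))  # ≈ 1.0: near-certain
```

The scary part, which the gym discovered the hard way, is that the probability climbs with the square of the membership, so a precision level that seemed fine in a small pilot fails as enrollment grows.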

The updated system was in place for a few days, and it seemed to be working better; no more people being held up trying to leave.

Discovery

… [The monitor] showed Ray as coming in several times that week, often twice on the same day, just hours apart. For each day listed, Ray had actually come only at the later of the two times.

Reducing the precision of the fingerprint scanning resulted in the system identifying two people as one person. Reviewing the log, they saw that some regulars weren’t showing up in the system, and many members had two or three people being identified by the scanner as them.

A story of failed biometrics at a gym Read More »

Interviewed for an article about mis-uses of Twitter

The Saint Louis Beacon published an article on 27 April 2009 titled “Tweets from the jury box aren’t amusing”, about legal “cases across the country where jurors have used cell phones, BlackBerrys and other devices to comment – sometimes minute by minute or second by second on Twitter, for instance – on what they are doing, hearing and seeing during jury duty.” In it, I was quoted as follows:

The small mobile devices so prevalent today can encourage a talkative habit that some people may find hard to break.

“People get so used to doing it all the time,” said Scott Granneman, an adjunct professor of communications and journalism at Washington University. “They don’t stop to think that being on a jury means they should stop. It’s an etiquette thing.”

Moreover, he added, the habit can become so ingrained in some people – even those on juries who are warned repeatedly they should not Tweet or text or talk about the case they are on – that they make excuses.

“It’s habitual,” Granneman said. “They say ‘It’s just my friends and family reading this.’ They don’t know the whole world is following this.

“Anybody can go to any Twitter page. There may be only eight people following you, but anybody can go anyone’s page and anybody can reTweet – forward someone’s page: ‘Oh my God, the defense attorney is so stupid.’ That can go on and on and on.”

Interviewed for an article about mis-uses of Twitter Read More »