
Security decisions are often made for non-security reasons

From Bruce Schneier’s Crypto-Gram of 15 July 2004:

There was a single guard watching the X-ray machine’s monitor, and a line of people putting their bags onto the machine. The people themselves weren’t searched at all. Even worse, no guard was watching the people. So when I walked with everyone else in line and just didn’t put my bag onto the machine, no one noticed.

It was all good fun, and I very much enjoyed describing this to FinCorp’s VP of Corporate Security. He explained to me that he got a $5 million rate reduction from his insurance company by installing that X-ray machine and having some dogs sniff around the building a couple of times a week.

I thought the building’s security was a waste of money. It was actually a source of corporate profit.

The point of this story is one that I’ve made in ‘Beyond Fear’ and many other places: security decisions are often made for non-security reasons.


Quanta Crypto: cool but useless

From Bruce Schneier’s “Quantum Cryptography” (Crypto-Gram: 15 November 2008):

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper — period.
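The detection statistics behind that claim can be illustrated with a classical toy simulation of an intercept-and-resend eavesdropper against a BB84-style protocol. This models only the bookkeeping — random bases, sifting, error rates — not any actual physics; function names and parameters are invented for the sketch.

```python
import random

def bb84_error_rate(n_bits, eavesdrop, seed=0):
    """Toy BB84 sketch: error rate in the sifted key, with or without
    an intercept-and-resend eavesdropper."""
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)          # Alice's raw bit
        a_basis = rng.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
        sent = (bit, a_basis)
        if eavesdrop:
            # Eve measures in a random basis and resends what she saw.
            e_basis = rng.randint(0, 1)
            e_bit = bit if e_basis == a_basis else rng.randint(0, 1)
            sent = (e_bit, e_basis)
        b_basis = rng.randint(0, 1)
        # Matching bases read the bit faithfully; mismatched ones are random.
        measured = sent[0] if b_basis == sent[1] else rng.randint(0, 1)
        if b_basis == a_basis:           # keep only matching-basis rounds
            sifted += 1
            errors += (measured != bit)
    return errors / sifted

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0
print(bb84_error_rate(20000, eavesdrop=True))   # close to 0.25
```

With no eavesdropper the sifted key matches perfectly; intercept-and-resend pushes the error rate to about 25%, which is the disturbance that gives Eve away.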

While I like the science of quantum cryptography — my undergraduate degree was in physics — I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols.


How to deal with the fact that users can’t learn much about security

From Bruce Schneier’s “Second SHB Workshop Liveblogging (4)” (Schneier on Security: 11 June 2009):

Diana Smetters, Palo Alto Research Center …, started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs.


The future of security

From Bruce Schneier’s “Security in Ten Years” (Crypto-Gram: 15 December 2007):

Bruce Schneier: … The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.

Marcus Ranum: … Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.

Bruce Schneier: … I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.


Problems with airport security

From Jeffrey Goldberg’s “The Things He Carried” (The Atlantic: November 2008):

Because the TSA’s security regimen seems to be mainly thing-based—most of its 44,500 airport officers are assigned to truffle through carry-on bags for things like guns, bombs, three-ounce tubes of anthrax, Crest toothpaste, nail clippers, Snapple, and so on—I focused my efforts on bringing bad things through security in many different airports, primarily my home airport, Washington’s Reagan National, the one situated approximately 17 feet from the Pentagon, but also in Los Angeles, New York, Miami, Chicago, and at the Wilkes-Barre/Scranton International Airport (which is where I came closest to arousing at least a modest level of suspicion, receiving a symbolic pat-down—all frisks that avoid the sensitive regions are by definition symbolic—and one question about the presence of a Leatherman Multi-Tool in my pocket; said Leatherman was confiscated and is now, I hope, living with the loving family of a TSA employee). And because I have a fair amount of experience reporting on terrorists, and because terrorist groups produce large quantities of branded knickknacks, I’ve amassed an inspiring collection of al-Qaeda T-shirts, Islamic Jihad flags, Hezbollah videotapes, and inflatable Yasir Arafat dolls (really). All these things I’ve carried with me through airports across the country. I’ve also carried, at various times: pocketknives, matches from hotels in Beirut and Peshawar, dust masks, lengths of rope, cigarette lighters, nail clippers, eight-ounce tubes of toothpaste (in my front pocket), bottles of Fiji Water (which is foreign), and, of course, box cutters. I was selected for secondary screening four times—out of dozens of passages through security checkpoints—during this extended experiment. At one screening, I was relieved of a pair of nail clippers; during another, a can of shaving cream.

During one secondary inspection, at O’Hare International Airport in Chicago, I was wearing under my shirt a spectacular, only-in-America device called a “Beerbelly,” a neoprene sling that holds a polyurethane bladder and drinking tube. The Beerbelly, designed originally to sneak alcohol—up to 80 ounces—into football games, can quite obviously be used to sneak up to 80 ounces of liquid through airport security. (The company that manufactures the Beerbelly also makes something called a “Winerack,” a bra that holds up to 25 ounces of booze and is recommended, according to the company’s Web site, for PTA meetings.) My Beerbelly, which fit comfortably over my beer belly, contained two cans’ worth of Bud Light at the time of the inspection. It went undetected. The eight-ounce bottle of water in my carry-on bag, however, was seized by the federal government.

Schneier and I walked to the security checkpoint. “Counterterrorism in the airport is a show designed to make people feel better,” he said. “Only two things have made flying safer: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.” This assumes, of course, that al-Qaeda will target airplanes for hijacking, or target aviation at all. “We defend against what the terrorists did last week,” Schneier said. He believes that the country would be just as safe as it is today if airport security were rolled back to pre-9/11 levels. “Spend the rest of your money on intelligence, investigations, and emergency response.”

We took our shoes off and placed our laptops in bins. Schneier took from his bag a 12-ounce container labeled “saline solution.”

“It’s allowed,” he said. Medical supplies, such as saline solution for contact-lens cleaning, don’t fall under the TSA’s three-ounce rule.

“What’s allowed?” I asked. “Saline solution, or bottles labeled saline solution?”

“Bottles labeled saline solution. They won’t check what’s in it, trust me.”

They did not check. As we gathered our belongings, Schneier held up the bottle and said to the nearest security officer, “This is okay, right?” “Yep,” the officer said. “Just have to put it in the tray.”

“Maybe if you lit it on fire, he’d pay attention,” I said, risking arrest for making a joke at airport security. (Later, Schneier would carry two bottles labeled saline solution—24 ounces in total—through security. An officer asked him why he needed two bottles. “Two eyes,” he said. He was allowed to keep the bottles.)

We were in the clear. But what did we prove?

“We proved that the ID triangle is hopeless,” Schneier said.

The ID triangle: before a passenger boards a commercial flight, he interacts with his airline or the government three times—when he purchases his ticket; when he passes through airport security; and finally at the gate, when he presents his boarding pass to an airline agent. It is at the first point of contact, when the ticket is purchased, that a passenger’s name is checked against the government’s no-fly list. It is not checked again, and for this reason, Schneier argued, the process is merely another form of security theater.

“The goal is to make sure that this ID triangle represents one person,” he explained. “Here’s how you get around it. Let’s assume you’re a terrorist and you believe your name is on the watch list.” It’s easy for a terrorist to check whether the government has cottoned on to his existence, Schneier said; he simply has to submit his name online to the new, privately run CLEAR program, which is meant to fast-pass approved travelers through security. If the terrorist is rejected, then he knows he’s on the watch list.

To slip through the only check against the no-fly list, the terrorist uses a stolen credit card to buy a ticket under a fake name. “Then you print a fake boarding pass with your real name on it and go to the airport. You give your real ID, and the fake boarding pass with your real name on it, to security. They’re checking the documents against each other. They’re not checking your name against the no-fly list—that was done on the airline’s computers. Once you’re through security, you rip up the fake boarding pass, and use the real boarding pass that has the name from the stolen credit card. Then you board the plane, because they’re not checking your name against your ID at boarding.”

What if you don’t know how to steal a credit card?

“Then you’re a stupid terrorist and the government will catch you,” he said.

What if you don’t know how to download a PDF of an actual boarding pass and alter it on a home computer?

“Then you’re a stupid terrorist and the government will catch you.”

I couldn’t believe that what Schneier was saying was true—in the national debate over the no-fly list, it is seldom, if ever, mentioned that the no-fly list doesn’t work. “It’s true,” he said. “The gap blows the whole system out of the water.”
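The gap Schneier describes can be sketched as a toy model of the three checks. Every name, record, and function below is invented for illustration — it is not any real airline or TSA system:

```python
NO_FLY_LIST = {"Evil Attacker"}

def buy_ticket(name):
    """The ONLY no-fly check happens here, at purchase time."""
    assert name not in NO_FLY_LIST, "purchase refused"
    return {"name": name}              # the real boarding pass

def security_checkpoint(id_name, boarding_pass_name):
    # Security compares the two documents to each other...
    # ...but never consults the no-fly list.
    return id_name == boarding_pass_name

def boarding_gate(boarding_pass):
    # The gate scans the pass but never checks it against an ID.
    return True

# The attack: buy under a stolen-card alias, print a fake pass
# bearing the attacker's real name for the checkpoint.
real_pass = buy_ticket("John Doe")       # alias passes the no-fly check
fake_pass_name = "Evil Attacker"         # matches the attacker's real ID
assert security_checkpoint("Evil Attacker", fake_pass_name)
assert boarding_gate(real_pass)          # boards on the alias's real pass
```

Because the name-vs-list check and the name-vs-ID check happen at different points with different documents, no single step ever ties the watch-listed name to the flown ticket.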


Bruce Schneier on wholesale, constant surveillance

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.


Those who know how to fix know how to destroy as well

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

This is true in many aspects of our society. Here’s what I said in my book, Secrets and Lies (page 389): “As technology becomes more complicated, society’s experts become more specialized. And in almost every area, those with the expertise to build society’s infrastructure also have the expertise to destroy it. Ask any doctor how to poison someone untraceably, and he can tell you. Ask someone who works in aircraft maintenance how to drop a 747 out of the sky without getting caught, and he’ll know. Now ask any Internet security professional how to take down the Internet, permanently. I’ve heard about half a dozen different ways, and I know I haven’t exhausted the possibilities.”


Bruce Schneier on security & crime economics

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Basically, you’re asking if crime pays. Most of the time, it doesn’t, and the problem is the different risk characteristics. If I make a computer security mistake — in a book, for a consulting client, at BT — it’s a mistake. It might be expensive, but I learn from it and move on. As a criminal, a mistake likely means jail time — time I can’t spend earning my criminal living. For this reason, it’s hard to improve as a criminal. And this is why there are more criminal masterminds in the movies than in real life.

Crime has been part of our society since our species invented society, and it’s not going away anytime soon. The real question is, “Why is there so much crime and hacking on the Internet, and why isn’t anyone doing anything about it?”

The answer is in the economics of Internet vulnerabilities and attacks: the organizations that are in the position to mitigate the risks aren’t responsible for the risks. This is an externality, and if you want to fix the problem you need to address it. In this essay (more here), I recommend liabilities; companies need to be liable for the effects of their software flaws. A related problem is that the Internet security market is a “lemons market” (discussed here), but there are strategies for dealing with that, too.


Bruce Schneier on identity theft

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Identity theft is a problem for two reasons. One, personal identifying information is incredibly easy to get; and two, personal identifying information is incredibly easy to use. Most of our security measures have tried to solve the first problem. Instead, we need to solve the second problem. As long as it’s easy to impersonate someone if you have his data, this sort of fraud will continue to be a major problem.

The basic answer is to stop relying on authenticating the person, and instead authenticate the transaction. Credit cards are a good example of this. Credit card companies spend almost no effort authenticating the person — hardly anyone checks your signature, and you can use your card over the phone, where they can’t even check if you’re holding the card — and spend all their effort authenticating the transaction.
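Authenticating the transaction rather than the person can be sketched as a MAC computed over the transaction's own details, so that stolen identity data alone cannot forge an authorization. This is an illustrative scheme, not any real payment protocol; the key name and fields are assumptions:

```python
import hmac
import hashlib

# Hypothetical per-card secret, provisioned once (e.g., in a chip).
CARD_KEY = b"per-card secret provisioned in the chip"

def authorize(amount_cents, merchant, counter):
    """Tag binds the amount, merchant, and a transaction counter."""
    msg = f"{amount_cents}|{merchant}|{counter}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()

def verify(amount_cents, merchant, counter, tag):
    expected = authorize(amount_cents, merchant, counter)
    return hmac.compare_digest(expected, tag)

tag = authorize(4999, "books-r-us", counter=17)
assert verify(4999, "books-r-us", 17, tag)        # the real transaction
assert not verify(99999, "books-r-us", 17, tag)   # altered amount fails
```

Knowing the cardholder's name, address, and number does an attacker no good here; only the card's key can produce a valid tag, and each tag authorizes exactly one transaction.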


9 reasons the Storm botnet is different

From Bruce Schneier’s “Gathering ‘Storm’ Superworm Poses Grave Threat to PC Nets” (Wired: 4 October 2007):

Storm represents the future of malware. Let’s look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. …

3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. …

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. …

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect. …

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” …

6. Storm’s payload — the code it uses to spread — morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.

7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites — anything to entice users to click on a phony link. …

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. …

9. Last month, Storm began attacking anti-spam sites focused on identifying it — spamhaus.org, 419eater and so on — and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
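The fast-flux technique in point 5 can be sketched in a few lines: the C2 domain resolves to a constantly rotating subset of infected hosts, with a TTL short enough that no cached answer survives long. The domain and addresses below are invented (the RFC 5737 documentation range) purely for illustration:

```python
import random

BOT_POOL = [f"203.0.113.{i}" for i in range(1, 101)]  # compromised hosts
TTL = 180  # seconds; answers expire almost immediately

def resolve(domain, now):
    """Return the A records served during the current TTL window."""
    window = now // TTL
    # Deterministic within one window, rotated in the next.
    rng = random.Random(f"{domain}:{window}")
    return rng.sample(BOT_POOL, 5)

early = resolve("flux-c2.example", now=0)
later = resolve("flux-c2.example", now=600)
assert resolve("flux-c2.example", now=60) == early  # stable inside a window
assert early != later                               # rotated minutes later
```

Blocking any one IP accomplishes nothing; by the time a defender acts on a resolution, the domain already points somewhere else.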


Synchronization attacks at fast food drive-through windows

From Bruce Schneier’s “Getting Free Food at a Fast-Food Drive-In” (Crypto-Gram: 15 September 2007):

It’s easy. Find a fast-food restaurant with two drive-through windows: one where you order and pay, and the other where you receive your food. This won’t work at the more common U.S. configuration: a microphone where you order, and a single window where you both pay and receive your food. A video demonstrates the attack at a McDonald’s in — I assume — France.

Wait until there is someone behind you and someone in front of you. Don’t order anything at the first window. Tell the clerk that you forgot your money and didn’t order anything. Then drive to the second window, and take the food that the person behind you ordered.

It’s a clever exploit. Basically, it’s a synchronization attack. By exploiting the limited information flow between the two windows, you can insert yourself into the pay-receive queue.
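The desynchronization can be modeled in a few lines. In this toy, window 1 records who ordered, while window 2 simply serves the next ready order to the next car; nothing ties a car at the pickup window back to a specific order:

```python
from collections import deque

def drive_through(cars):
    """cars: list of (name, orders_food) in arrival order.
    Returns (order_for, given_to) pairs as window 2 hands food out."""
    orders = deque(name for name, ordered in cars if ordered)  # window 1
    pickup = deque(name for name, _ in cars)                   # window 2
    served = []
    while orders:
        next_order = orders.popleft()   # whose food comes up next
        at_window = pickup.popleft()    # whoever is physically next in line
        served.append((next_order, at_window))
    return served

served = drive_through([("victim-ahead", True),
                        ("attacker", False),     # orders nothing at window 1
                        ("victim-behind", True)])
# The attacker slides into the gap and is handed victim-behind's order:
assert served == [("victim-ahead", "victim-ahead"),
                  ("victim-behind", "attacker")]
```

The fix, of course, is to restore the missing information flow — for example, reading back the order or the license plate at the pickup window.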


Serial-numbered confetti

From Bruce Schneier’s “News” (Crypto-Gram: 15 September 2007):

Taser — yep, that’s the company’s name as well as the product’s name — is now selling a personal-use version of their product. It’s called the Taser C2, and it has an interesting embedded identification technology. Whenever the weapon is fired, it also sprays some serial-number bar-coded confetti, so a firing can be traced to a weapon and — presumably — the owner.
http://www.taser.com/products/consumers/Pages/C2.aspx


Trusted insiders and how to protect against them

From Bruce Schneier’s “Basketball Referees and Single Points of Failure” (Crypto-Gram: 15 September 2007):

What sorts of systems — IT, financial, NBA games, or whatever — are most at risk of being manipulated? The ones where the smallest change can have the greatest impact, and the ones where trusted insiders can make that change.

It’s not just that basketball referees are single points of failure, it’s that they’re both trusted insiders and single points of catastrophic failure.

All systems have trusted insiders. All systems have catastrophic points of failure. The key is recognizing them, and building monitoring and audit systems to secure them.
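One common building block for such audit systems is a tamper-evident, hash-chained log: each entry's digest covers the previous entry, so an insider who edits history after the fact breaks the chain. A minimal sketch (a real design would also need signing keys, timestamps, and off-site checkpoints of the latest digest):

```python
import hashlib

GENESIS = "0" * 64  # digest "before" the first entry

def append(log, message):
    """Append message, chaining its hash to the previous entry's digest."""
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    """Recompute the chain; any edited entry invalidates everything after it."""
    prev = GENESIS
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
for event in ["ref assigned to game 102", "foul called on #31", "payout $500"]:
    append(log, event)
assert verify(log)

log[1] = ("foul called on #13", log[1][1])  # the insider edits history...
assert not verify(log)                       # ...and the chain exposes it
```

This doesn't prevent a trusted insider from acting badly; it makes the single point of failure auditable after the fact, which changes the insider's incentives.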


A collective action problem: why the cops can’t talk to firemen

From Bruce Schneier’s “First Responders” (Crypto-Gram: 15 September 2007):

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25% of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80% of cities, municipal authorities couldn’t communicate with the FBI, FEMA, and other federal agencies.

The source of the problem is a basic economic one, called the “collective action problem.” A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.


A wireless router with 2 networks: 1 secure, 1 open

From Bruce Schneier’s “My Open Wireless Network” (Crypto-Gram: 15 January 2008):

A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either “Bill” or “Linus” mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network. In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It’s a really clever idea.
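The same two-network idea can be sketched on a generic Linux access point running hostapd, assuming a wireless driver that supports multiple BSSes. This is an illustrative configuration, not Fon's actual firmware; the SSIDs and passphrase are placeholders:

```ini
# /etc/hostapd/hostapd.conf — one radio, two networks (sketch)
interface=wlan0
driver=nl80211
hw_mode=g
channel=6

# BSS 1: the secure network, for you
ssid=home-secure
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=correct horse battery staple

# BSS 2: a second BSS with no wpa= settings, i.e. an open guest network
bss=wlan0_guest
ssid=home-open
```

In a real deployment you would also isolate the open BSS on its own VLAN or subnet and rate-limit it, so guests can reach the Internet but not your machines.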


Anonymity and Netflix

From Bruce Schneier’s “Anonymity and the Netflix Dataset” (Crypto-Gram: 15 January 2008):

The point of the research was to demonstrate how little information is required to de-anonymize information in the Netflix dataset.

What the University of Texas researchers demonstrate is that this process isn’t hard, and doesn’t require a lot of data. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This would certainly hold true for our book reading habits, our internet shopping habits, our telephone habits and our web searching habits.

Other research reaches the same conclusion. Using public anonymous data from the 1990 census, Latanya Sweeney found that 87 percent of the population in the United States, 216 million of 248 million, could likely be uniquely identified by their five-digit ZIP code, combined with their gender and date of birth. About half of the U.S. population is likely identifiable by gender, date of birth and the city, town or municipality in which the person resides. Expanding the geographic scope to an entire county reduces that to a still-significant 18 percent. “In general,” the researchers wrote, “few characteristics are needed to uniquely identify a person.”

Stanford University researchers reported similar results using 2000 census data. It turns out that date of birth, which (unlike birthday month and day alone) sorts people into thousands of different buckets, is incredibly valuable in disambiguating people.
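The arithmetic behind these results can be sketched on synthetic data: draw a population, bucket people by (ZIP, gender, date of birth), and count how many land in a bucket of size one. The population model below is invented and uniform, which overstates uniqueness compared with real census data, but it shows why so few attributes suffice:

```python
import random
from collections import Counter

rng = random.Random(7)

def person():
    zipcode = rng.randrange(100)     # 100 ZIP codes, ~300 people in each
    gender = rng.randrange(2)
    dob = rng.randrange(365 * 79)    # a birth date in an ~79-year span
    return (zipcode, gender, dob)

population = [person() for _ in range(30_000)]
counts = Counter(population)
unique = sum(1 for p in population if counts[p] == 1)
print(f"{unique / len(population):.0%} uniquely identified")
```

Even though each ZIP holds hundreds of people, the date of birth alone splits them across tens of thousands of values, so almost every (ZIP, gender, DOB) triple is a bucket of one.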


A cheap, easy way to obfuscate license plates

From Victor Bogado da Silva Lins’ letter in Bruce Schneier’s Crypto-Gram (15 May 2004):

You mentioned in your last Crypto-Gram newsletter a cover that makes a license plate impossible to read from certain angles. Brazilians have thought of another low-tech solution to the same “problem”: they simply tie ribbons to the plate or to the car itself; when the car is speeding, the ribbons fly up in front of the plate, making it difficult to read.


People being rescued run from their rescuers

From Les Jones’s email in Bruce Schneier’s “Crypto-Gram” (15 August 2005):

Avoiding rescuers is a common reaction in people who have been lost in the woods. See Dwight McCarter’s book, “Lost,” an account of search and rescue operations in the Great Smoky Mountains National Park. In one chapter McCarter tells the story of two backpackers in the park who got separated while traveling off-trail in the vicinity of Thunderhead. The less-experienced hiker quickly got lost.

After a day or two wandering around he was going through his pack and found a backpacking how-to book that explained what to do in case you got lost in the woods. Following the advice, he went to a clearing and built a signal fire. A rescue helicopter saw the smoke and hovered overhead above the tree tops as he waved his arms to attract their attention. The helicopter dropped a sleeping bag and food, with a note saying they couldn’t land in the clearing, but that they would send in a rescue party on foot.

The lost hiker sat down, tended his fire, and waited for rescue. When the rescuers appeared at the edge of the clearing, he panicked, jumped up, and ran in the other direction. They had to chase him down to rescue him. This despite the fact that he wanted to be rescued, had taken active steps to attract rescuers, and knew that rescuers were coming to him. Odd but true.


World distance reading WiFi and RFID

From Bruce Schneier’s “Crypto-Gram” (15 August 2005):

At DefCon earlier this month, a group was able to set up an unamplified 802.11 network at a distance of 124.9 miles.

http://www.enterpriseitplanet.com/networking/news/…

http://pasadena.net/shootout05/

Even more important, the world record for communicating with a passive RFID device was set at 69 feet. Remember that the next time someone tells you that it’s impossible to read RFID identity cards at a distance.

http://www.makezine.com/blog/archive/2005/07/…

Whenever you hear a manufacturer talk about a distance limitation for any wireless technology — wireless LANs, RFID, Bluetooth, anything — assume he’s wrong. If he’s not wrong today, he will be in a couple of years. Assume that someone who spends some money and effort building more sensitive technology can do much better, and that it will take less money and effort over the years. Technology always gets better; it never gets worse. If something is difficult and expensive now, it will get easier and cheaper in the future.
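The 124.9-mile result is consistent with simple link-budget arithmetic. A sketch using the free-space path loss formula, FSPL(dB) = 20·log10(4πdf/c) — the antenna and transmit-power figures in the comment are assumptions for illustration, not numbers from the record attempt:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 124.9 * 1609.344           # miles -> meters (~201 km)
loss = fspl_db(d, 2.4e9)       # ~146 dB at 2.4 GHz
print(f"{loss:.1f} dB")
# Hypothetical budget: a 15 dBm card with ~30 dBi dishes at both ends
# leaves roughly 15 + 30 + 30 - 146 ≈ -71 dBm at the receiver --
# weak, but well within what an 802.11 radio can decode.
```

The point is that "range" is not a property of the protocol at all; it is a property of whatever antennas and receivers the attacker is willing to buy.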


The NSA’s cryptographic backdoor

From Bruce Schneier’s “The Strange Story of Dual_EC_DRBG” (Crypto-Gram: 15 November 2007):

This year, the U.S. government released a new official standard for random number generators, which will likely be followed by software and hardware developers around the world. Called NIST Special Publication 800-90, the 130-page document contains four different approved techniques, called DRBGs, or “Deterministic Random Bit Generators.” All four are based on existing cryptographic primitives. One is based on hash functions, one on HMAC, one on block ciphers, and one on elliptic curves. It’s smart cryptographic design to use only a few well-trusted cryptographic primitives, so building a random number generator out of existing parts is a good thing.

But one of those generators — the one based on elliptic curves — is not like the others. Called Dual_EC_DRBG, not only is it a mouthful to say, it’s also three orders of magnitude slower than its peers. It’s in the standard only because it’s been championed by the NSA, which first proposed it years ago in a related standardization project at the American National Standards Institute.

Problems with Dual_EC_DRBG were first described in early 2006. The math is complicated, but the general point is that the random numbers it produces have a small bias. The problem isn’t large enough to make the algorithm unusable — and Appendix E of the NIST standard describes an optional workaround to avoid the issue — but it’s cause for concern. Cryptographers are a conservative bunch; we don’t like to use algorithms that have even a whiff of a problem.

But today there’s an even bigger stink brewing around Dual_EC_DRBG. In an informal presentation at the CRYPTO 2007 conference this past August, Dan Shumow and Niels Ferguson showed that the algorithm contains a weakness that can only be described as a backdoor.

What Shumow and Ferguson showed is that the constants in the standard — the elliptic-curve points that define the generator — have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
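The shape of that trapdoor can be shown with a toy discrete-log analogue, invented here for illustration — the real construction uses elliptic-curve points P and Q, but the structure is the same: two public constants, one secret relationship between them.

```python
# Toy analogue of the Dual_EC_DRBG trapdoor (NOT the real curve math).
p = 2**31 - 1                 # a small prime; real parameters are huge
g2 = 5                        # public constant
d = 123456789                 # the designer's SECRET
g1 = pow(g2, d, p)            # public constant with g1 = g2^d mod p

def step(state):
    """One step of the toy DRBG: emit an output, advance the state."""
    output = pow(g2, state, p)
    new_state = pow(g1, state, p)
    return output, new_state

# An honest user runs the generator from a secret seed.
state = 987654321
out1, state = step(state)
out2, state = step(state)

# The attacker sees only out1. Knowing d, they recover the NEXT state:
#   out1^d = (g2^s)^d = (g2^d)^s = g1^s, which is exactly the new state.
recovered_state = pow(out1, d, p)
predicted_out2, _ = step(recovered_state)
assert predicted_out2 == out2   # every future output is now predictable
```

To everyone without d, the outputs look like hard discrete-log instances; to whoever chose the constants, one output is enough to replay the generator forever. That asymmetry is why the choice of the standard's constants matters so much.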

My recommendation, if you’re in need of a random number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG. Or Fortuna or Yarrow, for that matter.
