Take over a computer network with an iPod or USB stick

From Bruce Schneier’s “Hacking Computers Over USB” (Crypto-Gram: 15 June 2006):

From CSO Magazine:

“Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the iPod or USB’s internal storage, and hide them as ‘deleted’ files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second vector is the result of a design flaw that’s likely to be with us for many years to come.” …
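
To make the AutoRun vector concrete: on Windows versions of that era, a file named autorun.inf in the root of the device was read automatically on insertion. The sketch below is illustrative only; the program and label names are hypothetical.

    [autorun]
    ; Run this program automatically when the device is inserted
    ; (on systems where AutoRun is enabled for this device class).
    open=backup.exe
    ; Text shown in the AutoPlay dialog; an attacker can make the
    ; malicious choice look like the normal "view files" action.
    action=Open folder to view files
    icon=folder.ico

Disabling AutoRun closes this path, which is why the article calls it the vector you “can and should plug”; the DMA flaw has no comparable off switch.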

Recently I’ve been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called “iPod Sneakiness” (unfortunately, not online). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up — and then steal his passwords and critical files.

And here’s a story about someone who used this trick in a penetration test:

“We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user’s computer, and then email the findings back to us.

“The next hurdle we had was getting the USB drives in the hands of the credit union’s internal users. I made my way to the credit union at about 6 a.m. to make sure no employees saw us. I then proceeded to scatter the drives in the parking lot, smoking areas, and other areas employees frequented.

“Once I seeded the USB drives, I decided to grab some coffee and watch the employees show up for work. Surveillance of the facility was worth the time involved. It was really amusing to watch the reaction of the employees who found a USB drive. You know they plugged them into their computers the minute they got to their desks.

“I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. I would have loved to be on the inside of the building watching as people started plugging the USB drives in, scouring through the planted image files, then unknowingly running our piece of software.”

The real solution to identity theft: bank liability

From Bruce Schneier’s “Mitigating Identity Theft” (Crypto-Gram: 15 April 2005):

The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. …

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit to the criminal in the victim’s name. …

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. …

The second issue is the ease with which a criminal can use personal data to commit fraud. …

Proposed fixes tend to concentrate on the first issue — making personal data harder to steal — whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

… That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

… The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions.

Two-factor authentication: the good & the bad

From Bruce Schneier’s “More on Two-Factor Authentication” (Crypto-Gram: 15 April 2005):

Passwords just don’t work anymore. As computers have gotten faster, password guessing has gotten easier. Ever-more-complicated passwords are required to evade password-guessing software. At the same time, there’s an upper limit to how complex a password users can be expected to remember. About five years ago, these two lines crossed: It is no longer reasonable to expect users to have passwords that can’t be guessed. For anything that requires reasonable security, the era of passwords is over.
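
A back-of-the-envelope calculation shows where those two lines cross; the guessing rate below is an assumption for illustration, not a measured figure:

    # Assumed offline guessing rate; real rates vary by orders of
    # magnitude with hardware and with how passwords are hashed.
    GUESSES_PER_SECOND = 1e9

    for length in (6, 8, 10, 12):
        keyspace = 62 ** length                  # a-z, A-Z, 0-9
        seconds = keyspace / GUESSES_PER_SECOND
        print(f"{length} chars: {seconds:,.0f} seconds to exhaust the keyspace")

A 6-character password falls in under a minute at that rate, while 12 characters holds out for millennia; the problem is that little that is human-memorable sits comfortably past the crossing point, and dictionary attacks do far better than this brute-force bound anyway.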

Two-factor authentication solves this problem. It works against passive attacks: eavesdropping and password guessing. It protects against users choosing weak passwords, telling their passwords to their colleagues or writing their passwords on pieces of paper taped to their monitors. For an organization trying to improve access control for its employees, two-factor authentication is a great idea. Microsoft is integrating two-factor authentication into its operating system, another great idea.
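
As a concrete example of a “something you have” factor, here is a minimal sketch of a time-based one-time password check, in the style later standardized as TOTP (RFC 6238); the standard postdates this essay, but period hardware tokens worked on the same idea of deriving a short-lived code from a shared secret and the clock:

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret, now=None, step=30, digits=6):
        """Derive a short-lived numeric code from a shared secret and the clock."""
        counter = int((time.time() if now is None else now) // step)
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The server and the user's token share `secret`; typing the displayed
    # code proves possession of the token on top of knowing the password.
    print(totp(b"shared-secret-seed"))

A captured code expires within the time step, which is what defeats the passive attacks listed above.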

What two-factor authentication won’t do is prevent identity theft and fraud. It’ll prevent certain tactics of identity theft and fraud, but criminals simply will switch tactics. We’re already seeing fraud tactics that completely ignore two-factor authentication. As banks roll out two-factor authentication, criminals simply will switch to these new tactics.

One way to think about this is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems are not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

The HOLLYWOOD sign as multi-user access-control system

From Bruce Schneier’s “Hollywood Sign Security” (Crypto-Gram: 15 January 2005):

In Los Angeles, the “HOLLYWOOD” sign is protected by a fence and a locked gate. Because several different agencies need access to the sign for various purposes, the chain locking the gate is formed by several locks linked together. Each of the agencies has the key to its own lock, and not the key to any of the others. Of course, anyone who can open one of the locks can open the gate.

This is a nice example of a multiple-user access-control system. It’s simple, and it works. You can also make it as complicated as you want, with different locks in parallel and in series.
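
The composition rule is simple enough to state in code; here is a minimal sketch (the agency names are invented for illustration):

    # Parallel locks (the HOLLYWOOD chain): opening ANY one lock opens the gate.
    # Series locks: ALL of them must be opened.

    def padlock(key_id):
        return lambda keys: key_id in keys

    def parallel(*locks):
        return lambda keys: any(lock(keys) for lock in locks)

    def series(*locks):
        return lambda keys: all(lock(keys) for lock in locks)

    # One padlock per agency, linked in parallel in the chain.
    gate = parallel(padlock("city"), padlock("parks"), padlock("radio"))
    print(gate({"parks"}))                      # True: any one key suffices
    print(gate({"burglar"}))                    # False

    # As complicated as you want: (city OR parks) AND security-contractor.
    strict = series(parallel(padlock("city"), padlock("parks")),
                    padlock("contractor"))
    print(strict({"parks", "contractor"}))      # True

Parallel composition trades security for availability: any one compromised key opens the gate, but revoking an agency is as easy as cutting its lock out of the chain.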

When people feel secure, they’re easier targets

From Bruce Schneier’s “Burglars and ‘Feeling Secure’” (Crypto-Gram: 15 January 2005):

This quote is from “Confessions of a Master Jewel Thief,” by Bill Mason (Villard, 2003): “Nothing works more in a thief’s favor than people feeling secure. That’s why places that are heavily alarmed and guarded can sometimes be the easiest targets. The single most important factor in security — more than locks, alarms, sensors, or armed guards — is attitude. A building protected by nothing more than a cheap combination lock but inhabited by people who are alert and risk-aware is much safer than one with the world’s most sophisticated alarm system whose tenants assume they’re living in an impregnable fortress.”

The author, a burglar, found that luxury condos were an excellent target. Although they had much more security technology than other buildings, they were vulnerable because no one believed a thief could get through the lobby.

Why the color-coded threat alert system fails

From Bruce Schneier’s “Color-Coded Terrorist Threat Levels” (Crypto-Gram Newsletter: 15 January 2004):

The color-coded threat alerts issued by the Department of Homeland Security are useless today, but may become useful in the future. The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings.

If the color-alert system became something better defined, so that people know exactly what caused the levels to change, what the change means, and what actions they need to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. Terrorist attacks are rare, and if the color-threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down.

Living under Orange reinforced this: it didn’t mean anything. Tom Ridge’s admonition that Americans “be alert, but go about their business” is nonsensical advice. I saw little that could be considered a good security trade-off, and a lot of draconian security measures and security theater.

Checking papers does no good if the papers are forged

From Bruce Schneier’s “News” (Crypto-Gram Newsletter: 15 April 2006):

Undercover investigators were able to smuggle radioactive materials into the U.S. The material set off alarms at border checkpoints, but the smugglers had forged import licenses from the Nuclear Regulatory Commission, based on an image of the real document they found on the Internet. Unfortunately, the border agents had no way to confirm the validity of import licenses. I’ve written about this problem before, and it’s one I think will get worse in the future. Verification systems are often the weakest link of authentication. Improving authentication tokens won’t improve security unless the verification systems improve as well.

5 reasons people exaggerate risks

From Bruce Schneier’s “Movie Plot Threat Contest: Status Report” (Crypto-Gram Newsletter: 15 May 2006):

In my book, Beyond Fear, I discussed five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.

1. People exaggerate spectacular but rare risks and downplay common risks.

2. People have trouble estimating risks for anything not exactly like their normal situation.

3. Personified risks are perceived to be greater than anonymous risks.

4. People underestimate risks they willingly take and overestimate risks in situations they can’t control.

5. People overestimate risks that are being talked about and remain an object of public scrutiny.

Why no terrorist attacks since 9/11?

From Bruce Schneier’s “Movie Plot Threat Contest: Status Report” (Crypto-Gram Newsletter: 15 May 2006):

… you have to wonder why there have been no terrorist attacks in the U.S. since 9/11. I don’t believe the “flypaper theory” that the terrorists are all in Iraq instead of in the U.S. And despite all the ineffectual security we’ve put in place since 9/11, I’m sure we have had some successes in intelligence and investigation — and have made it harder for terrorists to operate both in the U.S. and abroad.

But mostly, I think terrorist attacks are much harder than most of us think. It’s harder to find willing recruits than we think. It’s harder to coordinate plans. It’s harder to execute those plans. Terrorism is rare, and for all we’ve heard about 9/11 changing the world, it’s still rare.

Why disclosure laws are good

From Bruce Schneier’s “Identity-Theft Disclosure Laws” (Crypto-Gram Newsletter: 15 May 2006):

Disclosure laws force companies to make these security breaches public. This is a good idea for three reasons. One, it is good security practice to notify potential identity theft victims that their personal information has been lost or stolen. Two, statistics on actual data thefts are valuable for research purposes. And three, the potential cost of the notification and the associated bad publicity naturally leads companies to spend more money on protecting personal information — or to refrain from collecting it in the first place.

Why airport security fails constantly

From Bruce Schneier’s “Airport Passenger Screening” (Crypto-Gram Newsletter: 15 April 2006):

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns, and 60 percent of (fake) bombs. And recently, testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. …

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces. …

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for — for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

4 ways to eavesdrop on telephone calls

From Bruce Schneier’s “VOIP Encryption” (Crypto-Gram Newsletter: 15 April 2006):

There are basically four ways to eavesdrop on a telephone call.

One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it’s the easiest. While it doesn’t work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.

Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line’s path — even outside the home. This used to be the way the police eavesdropped on your phone line. These days it’s probably most often used by criminals. This method doesn’t work for cell phones, either.

Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most comfortable way to eavesdrop on a particular person.

Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It’s hard to eavesdrop on one particular person this way, but it’s easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They’ve even been known to use submarines to tap undersea phone cables.

A new way to steal from ATMs: blow ’em up

From Bruce Schneier’s “News” (Crypto-Gram Newsletter: 15 March 2006):

In the Netherlands, criminals are stealing money from ATMs by blowing them up. First, they drill a hole in an ATM and fill it with some sort of gas. Then, they ignite the gas — from a safe distance — and clean up the money that flies all over the place after the ATM explodes. Sounds crazy, but apparently there has been an increase in this type of attack recently. The banks’ countermeasure is to install air vents so that gas can’t build up inside the ATMs.

Microsoft’s BitLocker could be used for DRM

From Bruce Schneier’s “Microsoft’s BitLocker” (Crypto-Gram Newsletter: 15 May 2006):

BitLocker is not a DRM system. However, it is straightforward to turn it into a DRM system. Simply give programs the ability to require that files be stored only on BitLocker-enabled drives, and then only be transferable to other BitLocker-enabled drives. How easy this would be to implement, and how hard it would be to subvert, depends on the details of the system.
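
A sketch of what the enforcement half could look like: refuse to write a file unless the destination volume reports BitLocker protection. Windows exposes this status through the Win32_EncryptableVolume WMI class; the code below assumes the third-party Python wmi package and administrator rights, and sketches the policy check, not Microsoft’s design.

    import wmi  # third-party package; Windows only, needs admin rights

    ENCRYPTION_NS = r"root\CIMV2\Security\MicrosoftVolumeEncryption"

    def bitlocker_protected(drive_letter):
        """True if the volume reports BitLocker protection as on."""
        for vol in wmi.WMI(namespace=ENCRYPTION_NS).Win32_EncryptableVolume(
                DriveLetter=drive_letter):
            status, = vol.GetProtectionStatus()   # 0=off, 1=on, 2=unknown
            return status == 1
        return False

    def save_restricted(path, data):
        # The DRM-ish policy: the file may only land on a protected volume.
        if not bitlocker_protected(path[:2].upper()):
            raise PermissionError("policy: BitLocker-enabled drives only")
        with open(path, "wb") as f:
            f.write(data)

As the excerpt notes, subversion is the hard part: nothing here stops a program that can read the file from copying it to an unprotected drive, so real enforcement would have to live below the application layer.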

THE answer to “if you’re not doing anything wrong, why resist surveillance?”

From Bruce Schneier’s “The Eternal Value of Privacy” (Wired News: 18 May 2006):

The most common retort against privacy advocates — by those in favor of ID checks, cameras, databases, data mining and other wholesale surveillance measures — is this line: “If you aren’t doing anything wrong, what do you have to hide?”

Some clever answers: “If I’m not doing anything wrong, then you have no cause to watch me.” “Because the government gets to define what’s wrong, and they keep changing the definition.” “Because you might do something wrong with my information.” My problem with quips like these — as right as they are — is that they accept the premise that privacy is about hiding a wrong. It’s not. Privacy is an inherent human right, and a requirement for maintaining the human condition with dignity and respect.

Two proverbs say it best: Quis custodiet custodes ipsos? (“Who watches the watchers?”) and “Absolute power corrupts absolutely.”

Cardinal Richelieu understood the value of surveillance when he famously said, “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” Watch someone long enough, and you’ll find something to arrest — or just blackmail — with. Privacy is important because without it, surveillance information will be abused: to peep, to sell to marketers and to spy on political enemies — whoever they happen to be at the time.

Change the AMD K8 CPU without authentication checks

From Bruce Schneier’s Crypto-Gram Newsletter (15 August 2004):

Here’s an interesting hardware security vulnerability. Turns out that it’s possible to update the AMD K8 processor (Athlon64 or Opteron) microcode. And, get this, there’s no authentication check. So it’s possible that an attacker who has access to a machine can backdoor the CPU.

[See http://www.realworldtech.com/forums/index.cfm?action=detail&id=35446&threadid=35446&roomid=11]
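
For the curious: the patch level and the (unauthenticated) patch loader are both plain model-specific registers; the constants below are the ones the Linux kernel uses. This sketch only reads the current patch level (root and the msr kernel module are assumed) and deliberately stops short of writing the loader register:

    import os
    import struct

    MSR_AMD64_PATCH_LEVEL = 0x0000008B    # current microcode revision
    MSR_AMD64_PATCH_LOADER = 0xC0010020   # WRMSR of a patch address loads it

    def rdmsr(msr, cpu=0):
        """Read one MSR via Linux's msr driver (root; `modprobe msr` first)."""
        fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
        try:
            os.lseek(fd, msr, os.SEEK_SET)
            return struct.unpack("<Q", os.read(fd, 8))[0]
        finally:
            os.close(fd)

    print(f"microcode patch level: {rdmsr(MSR_AMD64_PATCH_LEVEL):#010x}")
    # On K8, a write of a patch's address to MSR_AMD64_PATCH_LOADER applied
    # it with no signature check -- which is the vulnerability above.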

Bruce Schneier on steganography

From Bruce Schneier’s “Steganography: Truths and Fictions“:

Steganography is the science of hiding messages in messages. … In the computer world, it has come to mean hiding secret messages in graphics, pictures, movies, or sounds. …

The point of steganography is to hide the existence of the message, to hide the fact that the parties are communicating anything other than innocuous photographs. This only works when it can be used within existing communications patterns. I’ve never sent or received a GIF in my life. If someone suddenly sends me one, it won’t take a rocket scientist to realize that there’s a steganographic message hidden somewhere in it. If Alice and Bob already regularly exchange files that are suitable to hide steganographic messages, then an eavesdropper won’t know which messages — if any — contain the messages. If Alice and Bob change their communications patterns to hide the messages, it won’t work. An eavesdropper will figure it out.

… Don’t use the sample image that came with the program when you downloaded it; your eavesdropper will quickly recognize that one. Don’t use the same image over and over again; your eavesdropper will look for the differences between them that indicate the hidden message. Don’t use an image that you’ve downloaded from the net; your eavesdropper can easily compare the image you’re sending with the reference image you downloaded.
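
The canonical technique in the graphics case is least-significant-bit embedding: the low bit of each sample is overwritten with message bits, changing the image imperceptibly. Here is a minimal sketch over a raw byte buffer (e.g., decoded RGB data); real tools add encryption and spreading, but the core idea is this:

    def embed(carrier, message):
        """Hide `message` in the low bit of each carrier byte (MSB first)."""
        bits = [(b >> i) & 1 for b in message for i in range(7, -1, -1)]
        if len(bits) > len(carrier):
            raise ValueError("carrier too small for message")
        out = bytearray(carrier)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit
        return out

    def extract(carrier, length):
        """Recover `length` bytes from the carrier's low bits."""
        out = bytearray()
        for i in range(length):
            byte = 0
            for b in carrier[i * 8:(i + 1) * 8]:
                byte = (byte << 1) | (b & 1)
            out.append(byte)
        return bytes(out)

    pixels = bytearray(range(256)) * 4       # stand-in for raw image bytes
    stego = embed(pixels, b"attack at dawn")
    assert extract(stego, 14) == b"attack at dawn"

This is exactly why the reference-image advice matters: a byte-for-byte comparison of the stego image against the original exposes the flipped low bits immediately.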

Israeli car theft scam

From Bruce Schneier’s “Automobile Identity Theft“:

This scam was uncovered in Israel:

1. Thief rents a car.

2. An identical car, legitimately owned, is found and its “identity” stolen.

3. The stolen identity is applied to the rented car and is then offered for sale in a newspaper ad.

4. Innocent buyer purchases the car from the thief as a regular private party sale.

5. After a few days the thief steals the car back from the buyer and returns it to the rental shop.

In the end, the “new” owners claimed compensation for the theft, and most of the damage was absorbed by the insurers.

Open source breathalyzers

From Bruce Schneier’s “DUI Cases Thrown Out Due to Closed-Source Breathalyzer“:

According to the article: “Hundreds of cases involving breath-alcohol tests have been thrown out by Seminole County judges in the past five months because the test’s manufacturer will not disclose how the machines work.”

This is the right decision. Throughout history, the government has had to make a choice: prosecute, or keep its investigative methods secret. It couldn’t have both. If it wanted to keep its methods secret, it had to give up on prosecution.

People have the right to confront their accuser. People have a right to examine the evidence against them, and to contest the validity of that evidence.
