encryption

RFID security problems

[Photo: Old British passport cover. Photo credit: sleepymyf, Creative Commons License.]

2005

From Brian Krebs’ “Leaving Las Vegas: So Long DefCon and Blackhat” (The Washington Post: 1 August 2005):

DefCon 13 also was notable for being the location where two new world records were set — both involved shooting certain electronic signals unprecedented distances. Los Angeles-based Flexilis set the world record for transmitting data to and from a “passive” radio frequency identification (RFID) card — covering a distance of more than 69 feet. (Active RFID — the kind being integrated into foreign passports, for example — differs from passive RFID in that it emits its own magnetic signal and can only be detected from a much shorter distance.)

The second record set this year at DefCon was pulled off by some teens from Cincinnati, who broke the world record they set last year by building a device capable of maintaining an unamplified, 11-megabit 802.11b wireless Internet connection over a distance of 125 miles (the network actually spanned from Utah into Nevada).

From Andrew Brandt’s “Black Hat, Lynn Settle with Cisco, ISS” (PC World: 29 July 2005):

Security researcher Kevin Mahaffey makes a final adjustment to a series of radio antennas; Mahaffey used the directional antennas in a demonstration during his presentation, “Long Range RFID and its Security Implications.” Mahaffey and two of his colleagues demonstrated how he could increase the “read range” of radio frequency identification (RFID) tags from the typical four to six inches to approximately 50 feet. Mahaffey said the tags could be read at a longer distance, but he wanted to perform the demonstration in the room where he gave the presentation, and that was the greatest distance within the room that he could demonstrate. RFID tags such as the one Mahaffey tested will begin to appear in U.S. passports later this year or next year.

2006

From Joris Evers and Declan McCullagh’s “Researchers: E-passports pose security risk” (CNET: 5 August 2006):

At a pair of security conferences here, researchers demonstrated that passports equipped with radio frequency identification (RFID) tags can be cloned with a laptop equipped with a $200 RFID reader and a similarly inexpensive smart card writer. In addition, they suggested that RFID tags embedded in travel documents could identify U.S. passports from a distance, possibly letting terrorists use them as a trigger for explosives.

At the Black Hat conference, Lukas Grunwald, a researcher with DN-Systems in Hildesheim, Germany, demonstrated that he could copy data stored in an RFID tag from his passport and write the data to a smart card equipped with an RFID chip.

From Kim Zetter’s “Hackers Clone E-Passports” (Wired: 3 August 2006):

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker — Walluf, Germany-based ACG Identification Technologies — but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports — called Golden Reader Tool and made by secunet Security Networks — and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader — which can also act as a writer — and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.

Although he can clone the tag, Grunwald says it’s not possible, as far as he can tell, to change data on the chip, such as the name or birth date, without being detected. That’s because the passport uses cryptographic hashes to authenticate the data.

Grunwald’s technique requires a counterfeiter to have physical possession of the original passport for a time. A forger could not surreptitiously clone a passport in a traveler’s pocket or purse because of a built-in privacy feature called Basic Access Control that requires officials to unlock a passport’s RFID chip before reading it. The chip can only be unlocked with a unique key derived from the machine-readable data printed on the passport’s page.
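
As an aside, the key derivation Grunwald is talking about is public: ICAO Doc 9303 specifies that the Basic Access Control keys are computed from the document number, date of birth, and date of expiry printed in the machine-readable zone. A rough Python sketch of that derivation (leaving out the DES parity-bit adjustment and the challenge-response exchange that follows it):

```python
# Rough sketch of ICAO 9303 Basic Access Control key derivation: the "unlock"
# key for the chip comes straight from data printed in the machine-readable
# zone (MRZ). DES parity-bit adjustment and the subsequent challenge-response
# protocol are omitted.
import hashlib

def mrz_check_digit(field: str) -> str:
    """ICAO check digit: weights 7, 3, 1 over digits, A-Z (=10..35) and '<' (=0)."""
    values = {c: i for i, c in enumerate("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")}
    values["<"] = 0
    weights = [7, 3, 1]
    total = sum(values[c] * weights[i % 3] for i, c in enumerate(field))
    return str(total % 10)

def bac_keys(doc_number: str, birth_date: str, expiry_date: str):
    """Derive the BAC encryption and MAC keys from MRZ data (dates as YYMMDD)."""
    doc_number = doc_number.ljust(9, "<")
    mrz_info = (doc_number + mrz_check_digit(doc_number) +
                birth_date + mrz_check_digit(birth_date) +
                expiry_date + mrz_check_digit(expiry_date))
    k_seed = hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]
    k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac

# Example with sample MRZ data (not a real passport):
print(bac_keys("L898902C3", "740812", "120415"))
```

That is the whole point of the design: anyone who can read the printed data page can compute the key, so Basic Access Control guards against skimming at a distance, not against someone who is holding the passport.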

To produce a clone, Grunwald has to program his copycat chip to answer to the key printed on the new passport. Alternatively, he can program the clone to dispense with Basic Access Control, which is an optional feature in the specification.

As planned, U.S. e-passports will contain a web of metal fiber embedded in the front cover of the documents to shield them from unauthorized readers. Though Basic Access Control would keep the chip from yielding useful information to attackers, it would still announce its presence to anyone with the right equipment. The government added the shielding after privacy activists expressed worries that a terrorist could simply point a reader at a crowd and identify foreign travelers.

In theory, with metal fibers in the front cover, nobody can sniff out the presence of an e-passport that’s closed. But [Kevin Mahaffey and John Hering of Flexilis] demonstrated in their video how even if a passport opens only half an inch — such as it might if placed in a purse or backpack — it can reveal itself to a reader at least two feet away.

In addition to cloning passport chips, Grunwald has been able to clone RFID ticket cards used by students at universities to buy cafeteria meals and add money to the balance on the cards.

He and his partners were also able to crash RFID-enabled alarm systems designed to sound when an intruder breaks a window or door to gain entry. Such systems require workers to pass an RFID card over a reader to turn the system on and off. Grunwald found that by manipulating data on the RFID chip he could crash the system, opening the way for a thief to break into the building through a window or door.

And they were able to clone and manipulate RFID tags used in hotel room key cards and corporate access cards and create a master key card to open every room in a hotel, office or other facility. He was able, for example, to clone Mifare, the most commonly used key-access system, designed by Philips Electronics. To create a master key he simply needed two or three key cards for different rooms to determine the structure of the cards. Of the 10 different types of RFID systems he examined that were being used in hotels, none used encryption.

Many of the card systems that did use encryption failed to change the default key that manufacturers program into the access card system before shipping, or they used sample keys that the manufacturer includes in instructions sent with the cards. Grunwald and his partners created a dictionary database of all the sample keys they found in such literature (much of which they found accidentally published on purchasers’ websites) to conduct what’s known as a dictionary attack. When attacking a new access card system, their RFDump program would search the list until it found the key that unlocked a card’s encryption.

“I was really surprised we were able to open about 75 percent of all the cards we collected,” he says.
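
The dictionary attack itself is about as simple as an attack gets: walk the list of published sample and default keys until one of them authenticates. A hypothetical sketch; the reader object and its authenticate method are invented stand-ins, not RFDump’s actual interface:

```python
# Hypothetical sketch of the default-key dictionary attack; the reader API is
# invented for illustration and is not RFDump's actual interface.
SAMPLE_KEYS = [
    bytes.fromhex("FFFFFFFFFFFF"),   # widely published Mifare Classic factory default
    bytes.fromhex("A0A1A2A3A4A5"),   # another commonly documented sample key
    # ... plus every key harvested from vendor manuals and purchasers' websites
]

def try_default_keys(reader, sector: int):
    """Return the first dictionary key that authenticates for the given sector."""
    for key in SAMPLE_KEYS:
        if reader.authenticate(sector, key):   # hypothetical reader call
            return key
    return None

# Stand-in "card reader" so the sketch runs without hardware: this card was
# deployed with the factory default key still in place.
class FakeReader:
    def authenticate(self, sector, key):
        return key == bytes.fromhex("FFFFFFFFFFFF")

print(try_default_keys(FakeReader(), sector=0).hex())   # ffffffffffff
```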

2009

From Thomas Ricker’s “Video: Hacker war drives San Francisco cloning RFID passports” (Engadget: 2 February 2009):

Using a $250 Motorola RFID reader and antenna connected to his laptop, Chris recently drove around San Francisco reading RFID tags from passports, driver licenses, and other identity documents. In just 20 minutes, he found and cloned the passports of two very unaware US citizens.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
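
What the worm was doing is what we would now call a domain generation algorithm (DGA). The sketch below is not Conficker’s actual code, just a toy version of the idea: seed a generator with the date, emit 250 pseudorandom names, and anyone who knows the scheme, worm author or defender alike, can compute the same list for any day.

```python
# Toy domain generation algorithm in the spirit of the scheme described above.
# This is NOT Conficker's actual algorithm, just an illustration of the idea:
# anyone who knows the scheme can precompute the day's candidate domains.
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def domains_for(day: date, count: int = 250):
    names = []
    for i in range(count):
        digest = hashlib.sha256(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Map hex digits to letters to get a meaningless-looking string.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        names.append(name + TLDS[i % len(TLDS)])
    return names

print(domains_for(date(2009, 1, 15))[:5])
```

That symmetry is what made the defense described below possible: once the algorithm was extracted from the worm, the day’s names could be computed ahead of time and registered or blocked first.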

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

Quantum crypto: cool but useless

From Bruce Schneier’s “Quantum Cryptography” (Crypto-Gram: 15 November 2008):

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper — period.
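
You can get a feel for the detection mechanism with a purely classical simulation of a BB84-style exchange: sender and receiver each pick random measurement bases, and an eavesdropper who intercepts and resends in the wrong basis randomizes the bit, so roughly a quarter of the bits the legitimate parties later compare will disagree. A toy sketch that simulates the statistics only, not quantum hardware:

```python
# Classical simulation of the BB84 intuition: an intercept-and-resend
# eavesdropper introduces ~25% errors in the sifted key, which the
# legitimate parties detect by comparing a sample of their bits.
import random

def run(n_qubits: int = 10000, eavesdrop: bool = False) -> float:
    errors = sifted = 0
    for _ in range(n_qubits):
        bit, a_basis = random.randint(0, 1), random.randint(0, 1)
        if eavesdrop:
            e_basis = random.randint(0, 1)
            # A wrong-basis measurement destroys the original bit value.
            bit_sent = bit if e_basis == a_basis else random.randint(0, 1)
        else:
            bit_sent = bit
        b_basis = random.randint(0, 1)
        if b_basis == a_basis:            # only matching-basis bits are kept
            sifted += 1
            measured = bit_sent if (not eavesdrop or e_basis == b_basis) else random.randint(0, 1)
            if measured != bit:
                errors += 1
    return errors / sifted

print("error rate, no eavesdropper:", run(eavesdrop=False))   # ~0.0
print("error rate, eavesdropper:   ", run(eavesdrop=True))    # ~0.25
```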

While I like the science of quantum cryptography — my undergraduate degree was in physics — I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols.

Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.

Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants.” …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.

The various participants in phishing schemes

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Christopher Abad provides insight into the phishing economy in an article published online by FirstMonday.org (http://www.firstmonday.org/issues/issue10_9/abad/). The article, “The economy of phishing: A survey of the operations of the phishing market,” reveals the final phase of the phishing life cycle, called cashing. The cashers are usually not the botherders or the phishers; the phishers are simply providers of credential goods to the cashers. Cashers buy the credential goods from the phishers, paying either a commission on the funds extracted or a price based on the quality and completeness of the credentials, the financial institution they are from, and the victim’s balance in the account. A high-balance, verified, full-credential account can be purchased for up to $100. Full credentials means that you have the credit card number, bank and routing numbers, the expiration date, the security verification code (cvv2) on the back of the card, the ATM PIN, and the current balance. Credit card numbers for a financial institution selected by the supplier can be bought for 50 cents per account. The casher’s commission on this transaction may run as much as 70 percent. When the deal calls for commissions to be paid in cash, the vehicle of choice is Western Union.

The continuation of phishing attacks depends largely on the ability of the cashers to convert the information into cash. The preferred method is to use the credential information to create duplicate ATM cards and use the cards to withdraw cash from ATM terminals. Not surprisingly, the demand for these cards leans heavily in favor of banks that provide inadequate protection of the ATM cards. Institutions like Bank of America are almost nonexistent in the phisher marketplace due to the strong encryption (triple DES) used to protect information on their ATM cards.

The life cycle of a botnet client

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

What makes a botnet a botnet? In particular, how do you distinguish a botnet client from just another hacker break-in? First, the clients in a botnet must be able to take actions on the client without the hacker having to log into the client’s operating system (Windows, UNIX, or Mac OS). Second, many clients must be able to act in a coordinated fashion to accomplish a common goal with little or no intervention from the hacker. If a collection of computers meets these criteria, it is a botnet.

The life of a botnet client, or botclient, begins when it has been exploited. A prospective botclient can be exploited via malicious code that a user is tricked into running; attacks against unpatched vulnerabilities; backdoors left by Trojan worms or remote access Trojans; and password guessing and brute force access attempts. In this section we’ll discuss each of these methods of exploiting botnets.

Rallying and Securing the Botnet Client

Although the order in the life cycle may vary, at some point early in the life of a new botnet client it must call home, a process called “rallying.” When rallying, the botnet client initiates contact with the botnet Command and Control (C&C) Server. Currently, most botnets use IRC for Command and Control.

Rallying is the term given for the first time a botnet client logs in to a C&C server. The login may use some form of encryption or authentication to limit the ability of others to eavesdrop on the communications. Some botnets are beginning to encrypt the communicated data.

At this point the new botnet client may request updates. The updates could be updated exploit software, an updated list of C&C server names, IP addresses, and/or channel names. This will assure that the botnet client can be managed and can be recovered should the current C&C server be taken offline.

The next order of business is to secure the new client from removal. The client can request the location of the latest anti-antivirus (Anti-A/V) tool from the C&C server. The newly controlled botclient would download this software and execute it to remove the A/V tool, hide from it, or render it ineffective.

Shutting off the A/V tool may raise suspicions if the user is observant. Some botclients will run a DLL that neuters the A/V tool. With an Anti-A/V DLL in place, the A/V tool may appear to be working normally except that it never detects or reports the files related to the botnet client. It may also change the Hosts file and LMHosts file so that attempts to contact an A/V vendor for updates will not succeed. Using this method, attempts to contact an A/V vendor can be redirected to a site containing malicious code or can yield a “website or server not found” error.

One tool, hidden32.exe, is used to hide applications that have a GUI interface from the user. Its use is simple; the botherder creates a batch file that executes hidden32 with the name of the executable to be hidden as its parameter. Another stealthy tool, HideUserv2, adds an invisible user to the administrator group.

Waiting for Orders and Retrieving the Payload

Once secured, the botnet client will listen to the C&C communications channel.

The botnet client will then request the associated payload. The payload is the term I give the software representing the intended function of this botnet client.

AACS, next-gen encryption for DVDs

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

AACS relies on the well-established AES (with 128-bit keys) to safeguard the disc data. Just like DVD players, HD DVD and Blu-ray drives will come with a set of Device Keys handed out to the manufacturers by AACS LA. Unlike the CSS encryption used in DVDs, though, AACS has a built-in method for revoking sets of keys that are cracked and made public. AACS-encrypted discs will feature a Media Key Block that all players need to access in order to get the key needed to decrypt the video files on the disc. The MKB can be updated by AACS LA to prevent certain sets of Device Keys from functioning with future titles – a feature that AACS dubs “revocation.” …
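
A toy model of the revocation idea, with Fernet from Python’s cryptography package standing in for AES; the real Media Key Block uses a tree-structured broadcast-encryption scheme rather than a flat table, so this is the concept only, not the spec:

```python
# Toy model of AACS-style key revocation (concept only, not the real MKB
# format): the disc carries the media key wrapped under every device key that
# is still trusted; a revoked player simply finds no entry it can use.
# Fernet (from the 'cryptography' package) stands in for AES-128.
from cryptography.fernet import Fernet, InvalidToken

device_keys = {f"player-{i}": Fernet.generate_key() for i in range(5)}
media_key = Fernet.generate_key()          # the key that unlocks this title

def make_mkb(revoked: set):
    """Media Key Block: the media key wrapped under each non-revoked device key."""
    return {dev: Fernet(k).encrypt(media_key)
            for dev, k in device_keys.items() if dev not in revoked}

def player_gets_media_key(mkb, dev):
    try:
        return Fernet(device_keys[dev]).decrypt(mkb[dev])
    except (KeyError, InvalidToken):
        return None                        # revoked: the disc won't decrypt

mkb = make_mkb(revoked={"player-3"})       # say player-3's keys leaked
print(player_gets_media_key(mkb, "player-0") == media_key)   # True
print(player_gets_media_key(mkb, "player-3"))                # None
```

New discs can then be pressed with an updated MKB that simply omits entries for the compromised device keys, which is the “revocation” described above.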

AACS also supports a new feature called the Image Constraint Token. When set, the ICT will force video output to be degraded over analog connections. ICT has so far gone unused, though this could change at any time. …

While AACS is used by both HD disc formats, the Blu-ray Disc Association (BDA) has added some features of its own to make the format “more secure” than HD DVD. The additions are BD+ and ROM Mark; though both are designed to thwart pirates, they work quite differently.

While the generic AACS spec includes key revocation, BD+ actually allows the BDA to update the entire encryption system once players have already shipped. Should encryption be cracked, new discs will include information that will alter the players’ decryption code. …

The other new technology, ROM Mark, affects the manufacturing of Blu-ray discs. All Blu-ray mastering equipment must be licensed by the BDA, and they will ensure that all of it carries ROM Mark technology. Whenever a legitimate disc is created, it is given a “unique and undetectable identifier.” It’s not undetectable to the player, though, and players can refuse to play discs without a ROM Mark. The BDA has the optimistic hope that this will keep industrial-scale piracy at bay. We’ll see.

How DVD encryption (CSS) works … or doesn’t

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

DVD players are factory-built with a set of keys. When a DVD is inserted, the player runs through every key it knows until one unlocks the disc. Once this disc key is known, the player uses it to retrieve a title key from the disc. This title key actually allows the player to unscramble the disc’s contents.
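
A sketch of that key hierarchy, with Fernet standing in for CSS’s real (and far weaker) 40-bit cipher, and with the disc-key verification hash that a real disc carries replaced by simple trial decryption:

```python
# Sketch of the CSS-style key hierarchy, not the real cipher: the disc key is
# stored wrapped under each licensed player key; a player runs through the
# keys it holds until one works, then uses the disc key to unwrap the title key.
# Fernet stands in for CSS's actual 40-bit stream cipher.
from cryptography.fernet import Fernet, InvalidToken

player_keys = [Fernet.generate_key() for _ in range(409)]  # roughly the licensed key count
disc_key = Fernet.generate_key()
title_key = Fernet.generate_key()

# What ships on the disc: the disc key wrapped under every player key,
# plus the title key wrapped under the disc key.
disc_key_blocks = [Fernet(pk).encrypt(disc_key) for pk in player_keys]
wrapped_title_key = Fernet(disc_key).encrypt(title_key)

def unlock(my_keys):
    """Try every key the player knows against every block until one unlocks."""
    for pk in my_keys:
        for block in disc_key_blocks:
            try:
                dk = Fernet(pk).decrypt(block)
                return Fernet(dk).decrypt(wrapped_title_key)
            except InvalidToken:
                continue
    return None

print(unlock([player_keys[42]]) == title_key)  # True for any licensed player key
```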

The decryption process might have been formidable when first drawn up, but it had begun to look weak even by 1999. Frank Stevenson, who published a good breakdown of the technology, estimated at that time that a 450MHz Pentium III could crack the code in only 18 seconds – and that’s without even having a player key in the first place. In other words, a simple brute force attack could crack the code at runtime, assuming that users were patient enough to wait up to 18 seconds. With today’s technology, of course, the same crack would be trivial.

Once the code was cracked, the genie was out of the bottle. CSS descramblers proliferated …

Because the CSS system could not be updated once in the field, the entire system was all but broken. Attempts to patch the system (such as Macrovision’s “RipGuard”) met with limited success, and DVDs today remain easy to copy using a multitude of freely available tools.

Where we are technically with DRM

From Nate Anderson’s “Hacking Digital Rights Management” (Ars Technica: 18 July 2006):

The attacks on FairPlay have been enlightening because of what they illustrate about the current state of DRM. They show, for instance, that modern DRM schemes are difficult to bypass, ignore, or strip out with a few lines of code. In contrast to older “patches” of computer software (which would generally just bypass a program’s authorization routine), the encryption on modern media files is pervasive. All of the software mentioned has still required Apple’s decoding technology to unscramble the song files; there is no simple hack that can strip the files clean without help, and the ciphers are complex enough to make brute-force cracks difficult.

Apple’s response has also been a reminder that cracking an encryption scheme once will no longer be enough in the networked era. Each time that its DRM has been bypassed, Apple has been able to push out updates to its customers that render the hacks useless (or at least make them more difficult to achieve).

Some surprising data isn’t encrypted in ATM transfers

From “Triple DES Upgrades May Introduce New ATM Vulnerabilities” (Payment News: 13 April 2006):

In a press release today, Redspin, an independent auditing firm based in Carpinteria, CA, suggests that the recent mandated upgrades of ATMs to support triple DES encryption of PINs has introduced new vulnerabilities into the ATM network environment – because of other changes that were typically made concurrently with the triple DES upgrades.

<begin press release>Redspin, Inc. has released a white paper detailing the problem. Essentially, unencrypted ATM transaction data is floating around bank networks, and bank managers are completely unaware of it. The only data from an ATM transaction that is encrypted is the PIN number.

“We were in the middle of an audit, looking at network traffic, when there it was, plain as day. We were surprised. The bank manager was surprised. Pretty much everyone we talk to is surprised. The card number, the expiration date, the account balances and withdrawal amounts, they all go across the networks in cleartext, which is exactly what it sounds like — text that anyone can read,” explained Abraham.

Ironically, the problem came about because of a mandated security improvement in ATMs. The original standard for ATM data encryption (DES) was becoming too easy to crack, so the standard was upgraded to Triple DES. Like any home improvement project, many ATM upgrades have snowballed to include a variety of other enhancements, including the use of transmission control protocol/Internet protocol (TCP/IP) — moving ATMs off their own dedicated lines, and on to the banks’ networks. …

A hacker tapping into a bank’s network would have complete access to every single ATM transaction going through the bank’s ATMs.<end press release>

What RFID passports really mean

From John Twelve Hawks’s “How We Live Now” (2005):

The passports contain a radio frequency identification chip (RFID) so that all our personal information can be instantly read by a machine at the airport. However, the State Department has refused to encrypt the information embedded in the chip, because it requires more complicated technology that is difficult to coordinate with other countries. This means that our personal information could be read by a machine called a “skimmer” that can be placed in a doorway or a bus stop, perhaps as far as 30 feet away.

The U.S. government isn’t concerned by this, but the contents of Paris Hilton’s cell phone, which uses the same kind of RFID chip, were skimmed and made public last year. It may not seem like a problem when a semi-celebrity’s phone numbers and emails are stolen, but it is quite possible that an American tourist walking down a street in a foreign country will be “skimmed” by a machine that reads the passport in his or her pocket. A terrorist group will be able to decide if the name on the passport indicates a possible target before the tourist reaches the end of the street.

The new RFID passports are a clear indication that protection is not as important to the authorities as the need to acquire easily accessible personal information.

4 ways to eavesdrop on telephone calls

From Bruce Schneier’s “VOIP Encryption” (Crypto-Gram Newsletter: 15 April 2006):

There are basically four ways to eavesdrop on a telephone call.

One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it’s the easiest. While it doesn’t work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.

Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line’s path — even outside the home. This used to be the way the police eavesdropped on your phone line. These days it’s probably most often used by criminals. This method doesn’t work for cell phones, either.

Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most comfortable way to eavesdrop on a particular person.

Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It’s hard to eavesdrop on one particular person this way, but it’s easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They’ve even been known to use submarines to tap undersea phone cables.

Quick ‘n dirty explanation of onion routing

From Ann Harrison’s “Onion Routing Averts Prying Eyes” (Wired News: 5 August 2004):

Computer programmers are modifying a communications system, originally developed by the U.S. Naval Research Lab, to help Internet users surf the Web anonymously and shield their online activities from corporate or government eyes.

The system is based on a concept called onion routing. It works like this: Messages, or packets of information, are sent through a distributed network of randomly selected servers, or nodes, each of which knows only its predecessor and successor. Messages flowing through this network are unwrapped by a symmetric encryption key at each server that peels off one layer and reveals instructions for the next downstream node. …
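
A minimal sketch of that layered encryption, with Fernet as the symmetric cipher; real Tor negotiates a key with each relay rather than pre-sharing them, and carries traffic in fixed-size cells, so treat this as the concept only:

```python
# Minimal sketch of onion routing's layered encryption. Fernet stands in for
# the symmetric cipher; real Tor negotiates a key with each relay rather than
# pre-sharing them, and wraps traffic in proper routing cells.
import json
from cryptography.fernet import Fernet

route = ["node-A", "node-B", "node-C"]                 # picked at random in practice
node_keys = {n: Fernet.generate_key() for n in route}  # one symmetric key per node

def build_onion(message: str) -> bytes:
    """Wrap the message so each node, peeling one layer, learns only the next hop."""
    onion, next_hop = message.encode(), None            # innermost layer: deliver
    for node in reversed(route):                        # wrap for C, then B, then A
        layer = json.dumps({"next": next_hop, "payload": onion.decode()})
        onion, next_hop = Fernet(node_keys[node]).encrypt(layer.encode()), node
    return onion

def peel(onion: bytes) -> str:
    """Each node strips exactly one layer with its own key."""
    for node in route:
        layer = json.loads(Fernet(node_keys[node]).decrypt(onion))
        print(f"{node} forwards to {layer['next'] or 'the destination'}")
        onion = layer["payload"].encode()
    return onion.decode()

print(peel(build_onion("hello")))
```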

The Navy is financing the development of a second-generation onion-routing system called Tor, which addresses many of the flaws in the original design and makes it easier to use. The Tor client behaves like a SOCKS proxy (a common protocol for developing secure communication services), allowing applications like Mozilla, SSH and FTP clients to talk directly to Tor and route data streams through a network of onion routers, without long delays.
