security

Could Green Dam lead to the largest botnet in history?

From Rob Cottingham’s “From blocking to botnet: Censorship isn’t the only problem with China’s new Internet blocking software” (Social Signal: 10 June 2009):

Any blocking software needs to update itself from time to time: at the very least to freshen its database of forbidden content, and more than likely to fix bugs, add features and improve performance. (Most anti-virus software does this.)

If all the software does is to refresh the list of banned sites, that limits the potential for abuse. But if the software is loading new executable code onto the computer, suddenly there’s the potential for something a lot bigger.
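The distinction drawn here — a data-only refresh versus an update channel that can push executable code — is exactly what authenticated updates are meant to control. As an illustration only (the key, function names, and blocklist contents below are invented; real updaters use public-key signatures such as RSA or Ed25519 rather than a shared-secret HMAC), a minimal sketch of refusing any update that doesn't verify:

```python
import hashlib
import hmac

# Hypothetical vendor signing key -- stands in for a real public-key scheme.
VENDOR_KEY = b"vendor-signing-key"

def sign_update(payload: bytes) -> str:
    """What the vendor's build server would attach to each update."""
    return hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()

def apply_update(payload: bytes, signature: str) -> bool:
    """Refuse to install anything whose signature doesn't verify."""
    expected = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject: tampered or unauthorized update
    # ... apply the blocklist refresh here ...
    return True

update = b"blocklist-v2\nexample.invalid\n"
sig = sign_update(update)
print(apply_update(update, sig))            # legitimate update passes
print(apply_update(update + b"evil", sig))  # tampered update is rejected
```

Even with signing in place, of course, the passage's point stands: whoever holds the signing key can still push whatever they like.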

Say you’re a high-ranking official in the Chinese military. And let’s say you have some responsibility for the state’s capacity to wage so-called cyber warfare: digital assaults on an enemy’s technological infrastructure.

It strikes you: there’s a single backdoor into more than 40 million Chinese computers, capable of installing… well, nearly anything you want.

What if you used that backdoor, not just to update blocking software, but to create something else?

Say, the biggest botnet in history?

Still, a botnet 40 million strong (plus the installed base already in place in Chinese schools and other institutions) at the beck and call of the military is potentially a formidable weapon. Even if the Chinese government has no intention today of using Green Dam for anything other than blocking pornography, the temptation to repurpose it for military purposes may prove to be overwhelming.

Green Dam is easily exploitable

From Scott Wolchok, Randy Yao, and J. Alex Halderman’s “Analysis of the Green Dam Censorware System” (The University of Michigan: 11 June 2009):

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC.

According to press reports, China will soon require all PCs sold in the country to include Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material.

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

The Uncanny Valley, art forgery, & love

From Errol Morris’ “Bamboozling Ourselves (Part 2)” (The New York Times: 28 May 2009):

[Errol Morris:] The Uncanny Valley is a concept developed by the Japanese robot scientist Masahiro Mori. It concerns the design of humanoid robots. Mori’s theory is relatively simple. We tend to reject robots that look too much like people. Slight discrepancies and incongruities between what we look like and what they look like disturb us. The closer a robot resembles a human, the more critical we become, the more sensitive to slight discrepancies, variations, imperfections. However, if we go far enough away from the humanoid, then we much more readily accept the robot as being like us. This accounts for the success of so many movie robots — from R2-D2 to WALL-E. They act like humans but they don’t look like humans. There is a region of acceptability — the peaks around The Uncanny Valley, the zone of acceptability that includes completely human and sort of human but not too human. The existence of The Uncanny Valley also suggests that we are programmed by natural selection to scrutinize the behavior and appearance of others. Survival no doubt depends on such an innate ability.

EDWARD DOLNICK: [The art forger Van Meegeren] wants to avoid it. So his big challenge is he wants to paint a picture that other people are going to take as Vermeer, because Vermeer is a brand name, because Vermeer is going to bring him lots of money, if he can get away with it, but he can’t paint a Vermeer. He doesn’t have that skill. So how is he going to paint a picture that doesn’t look like a Vermeer, but that people are going to say, “Oh! It’s a Vermeer?” How’s he going to pull it off? It’s a tough challenge. Now here’s the point of The Uncanny Valley: as your imitation gets closer and closer to the real thing, people think, “Good, good, good!” — but then when it’s very close, when it’s within 1 percent or something, instead of focusing on the 99 percent that is done well, they focus on the 1 percent that you’re missing, and you’re in trouble. Big trouble.

Van Meegeren is trapped in the valley. If he tries for the close copy, an almost exact copy, he’s going to fall short. He’s going to look silly. So what he does instead is rely on the blanks in Vermeer’s career, because hardly anything is known about him; he’s like Shakespeare in that regard. He’ll take advantage of those blanks by inventing a whole new era in Vermeer’s career. No one knows what he was up to all this time. He’ll throw in some Vermeer touches, including a signature, so that people who look at it will be led to think, “Yes, this is a Vermeer.”

Van Meegeren was sometimes careful, other times astonishingly reckless. He could have passed certain tests. What was peculiar, and what was quite startling to me, is that it turned out that nobody ever did any scientific test on Van Meegeren, even the stuff that was available in his day, until after he confessed. And to this day, people hardly ever test pictures, even multi-million dollar ones. And I was so surprised by that that I kept asking, over and over again: why? Why would that be? Before you buy a house, you have someone go through it for termites and the rest. How could it be that when you’re going to lay out $10 million for a painting, you don’t test it beforehand? And the answer is that you don’t test it because, at the point of being about to buy it, you’re in love! You’ve found something. It’s going to be the high mark of your collection; it’s going to be the making of you as a collector. You finally found this great thing. It’s available, and you want it. You want it to be real. You don’t want to have someone let you down by telling you that the painting isn’t what you think it is. It’s like being newly in love. Everything is candlelight and wine. Nobody hires a private detective at that point. It’s only years down the road when things have gone wrong that you say, “What was I thinking? What’s going on here?” The collector and the forger are in cahoots. The forger wants the collector to snap it up, and the collector wants it to be real. You are on the same side. You think that it would be a game of chess or something, you against him. “Has he got the paint right?” “Has he got the canvas?” You’re going to make this checkmark and that checkmark to see if the painting measures up. But instead, both sides are rooting for this thing to be real. If it is real, then you’ve got a masterpiece. If it’s not real, then today is just like yesterday. You’re back where you started, still on the prowl.

Taxi driver party lines

From Annie Karni’s “Gabbing Taxi Drivers Talking on ‘Party Lines’” (The New York Sun: 11 January 2007):

It’s not just wives at home or relatives overseas that keep taxi drivers tied up on their cellular phones during work shifts. Many cabbies say that when they are chatting on duty, it’s often with their cab driver colleagues on group party lines. Taxi drivers say they use conference calls to discuss directions and find out about congested routes to avoid. They come to depend on one another as first responders, reacting faster even than police to calls from drivers in distress. Some drivers say they participate in group prayers on a party line.

It is during this morning routine, waiting for the first shuttle flights to arrive from Washington and Boston, that many friendships between cabbies are forged and cell phone numbers are exchanged, Mr. Sverdlov said. Once drivers have each other’s numbers, they can use push-to-talk technology to call large groups all at once.

Mr. Sverdlov said he conferences with up to 10 cabbies at a time to discuss “traffic, what’s going on, this and that, and where do cops stay.” He estimated that every month, he logs about 20,000 talking minutes on his cell phone.

While civilian drivers are allowed to use hands-free devices to talk on cell phones while behind the wheel, the Taxi & Limousine Commission imposed a total cell phone ban for taxi drivers on duty in 1999. In 2006, the Taxi & Limousine Commission issued 1,049 summonses for phone use while on duty, up by almost 69% from the 621 summonses it issued the previous year. Drivers caught chatting while driving are fined $200 and receive two-point penalties on their licenses.

Drivers originally from countries like Israel, China, and America, who are few and far between, say they rarely chat on the phone with other cab drivers because of the language barrier. For many South Asians and Russian drivers, however, conference calls that are prohibited by the Taxi & Limousine Commission are mainstays of cabby life.

Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.

Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants. …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.

The watchclock knows where your night watchman is

From Christopher Fahey’s “Who Watches the Watchman?” (GraphPaper: 2 May 2009):

The Detex Newman watchclock was first introduced in 1927 and is still in wide use today.

… What could you possibly do in 1900 to be absolutely sure a night watchman was making his full patrol?

An elegant solution, designed and patented in 1901 by the German engineer A.A. Newman, is called the “watchclock”. It’s an ingenious mechanical device, slung over the shoulder like a canteen and powered by a simple wind-up spring mechanism. It precisely tracks and records a night watchman’s position in both space and time for the duration of every evening. It also generates a detailed, permanent, and verifiable record of each night’s patrol.

What’s so interesting to me about the watchclock is that it’s an early example of interaction design used to explicitly control user behavior. The “user” of the watchclock device is obliged to behave in a strictly delimited fashion.

The key, literally, to the watchclock system is that the watchman is required to “clock in” at a series of perhaps a dozen or more checkpoints throughout the premises. Positioned at each checkpoint is a unique, coded key nestled in a little steel box and secured by a small chain. Each keybox is permanently and discreetly installed in strategically-placed nooks and crannies throughout the building, for example in a broom closet or behind a stairway.

The watchman makes his patrol. He visits every checkpoint and clicks each unique key into the watchclock. Within the device, the clockwork marks the exact time and key-location code on a paper disk or strip. If the watchman visits all checkpoints in order, he will have completed his required patrol route.

The watchman’s supervisor can subsequently unlock the device itself (the watchman himself cannot open the watchclock) and review the paper records to confirm whether the watchman was doing his job.
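The watchclock is, in effect, a tamper-evident append-only log of (checkpoint, time) pairs, and the supervisor's review is a simple verification pass over it. A sketch of that check in modern terms (checkpoint codes, times, and route are invented for illustration):

```python
from datetime import datetime, timedelta

# Required checkpoint order for one patrol (hypothetical key codes).
ROUTE = ["K1", "K2", "K3", "K4"]

def patrol_complete(record, route=ROUTE):
    """True if every checkpoint was clocked, in order, with time moving forward."""
    codes = [code for code, _ in record]
    times = [t for _, t in record]
    return codes == route and times == sorted(times)

t0 = datetime(1927, 1, 1, 22, 0)
good = [("K1", t0), ("K2", t0 + timedelta(minutes=10)),
        ("K3", t0 + timedelta(minutes=20)), ("K4", t0 + timedelta(minutes=30))]
skipped = [("K1", t0), ("K3", t0 + timedelta(minutes=20)),
           ("K4", t0 + timedelta(minutes=30))]

print(patrol_complete(good))     # full patrol, in order
print(patrol_complete(skipped))  # K2 was never visited
```

The mechanical design enforces what the code only checks after the fact: the watchman cannot forge an entry without physically standing at the keybox.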

A better alternative to text CAPTCHAs

From Rich Gossweiler, Maryam Kamvar, & Shumeet Baluja’s “What’s Up CAPTCHA?: A CAPTCHA Based On Image Orientation” (Google: 20-24 April 2009):

There are several classes of images that computers can successfully orient, such as those containing faces, cars, pedestrians, sky, or grass.

Many images, however, are difficult for computers to orient. For example, indoor scenes have variations in lighting sources, and abstract and close-up images provide the greatest challenge to both computers and people, often because no clear anchor points or lighting sources exist.

The average performance on outdoor photographs, architecture photographs and typical tourist type photographs was significantly higher than the performance on abstract photographs, close-ups and backgrounds. When an analysis of the features used to make the discriminations was done, it was found that the edge features play a significant role.

It is important not to simply select random images for this task. There are many cues which can quickly reveal the upright orientation of an image to automated systems; these images must be filtered out. For example, if typical vacation or snapshot photos are used, automated rotation accuracies can be in the 90% range. The existence of any of the cues in the presented images will severely limit the effectiveness of the approach. Three common cues are listed below:

1. Text: Usually the predominant orientation of text in an image reveals the upright orientation of an image.

2. Faces and People: Most photographs are taken with the face(s) / people upright in the image.

3. Blue skies, green grass, and beige sand: These are all revealing clues, and are present in many travel/tourist photographs found on the web. Extending this beyond color, in general, the sky often has few texture/edges in comparison to the ground. Additional cues found important in human tests include "grass", "trees", "cars", "water" and "clouds".

Second, due to sometimes warped objects, lack of shading and lighting cues, and often unrealistic colors, cartoons also make ideal candidates. … Finally, although we did not alter the content of the image, it may be possible to simply alter the color-mapping, overall lighting curves, and hue/saturation levels to reveal images that appear unnatural but remain recognizable to people.
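Filtering out cue-laden images, as described above, can start with very crude heuristics. A hedged sketch of screening for the "blue sky" cue (images here are plain nested lists of RGB tuples; the color test and the 0.5 threshold are invented parameters, not from the paper):

```python
def sky_fraction(pixels):
    """Fraction of top-third pixels that look sky-blue (blue-dominant, bright)."""
    top = pixels[: max(1, len(pixels) // 3)]
    flat = [p for row in top for p in row]
    blue = [1 for (r, g, b) in flat if b > 150 and b > r + 30 and b > g + 20]
    return sum(blue) / len(flat)

def usable_for_captcha(pixels, threshold=0.5):
    """Reject images whose top region is dominated by sky, since it reveals 'up'."""
    return sky_fraction(pixels) < threshold

sky_photo = [[(90, 140, 220)] * 8 for _ in range(6)]   # blue across the top rows
abstract = [[(120, 120, 120)] * 8 for _ in range(6)]   # uniform gray, no cue

print(usable_for_captcha(sky_photo))  # rejected: sky gives away orientation
print(usable_for_captcha(abstract))   # kept in the candidate pool
```

Real filtering would also need face/text/people detectors for the other two cues; this only illustrates the shape of the pipeline.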

To normalize the shape and size of the images, we scaled each image to a 180×180 pixel square and we then applied a circular mask to remove the image corners.
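The normalization step above can be sketched with pure-Python image data: a nearest-neighbor rescale to a fixed square, then zeroing pixels outside an inscribed circle so no corner geometry hints at orientation. (Grayscale values in nested lists stand in for real image data, and a 12×12 square replaces the paper's 180×180 to keep the example small.)

```python
SIZE = 12  # stand-in for the paper's 180x180 target square

def rescale(pixels, size=SIZE):
    """Nearest-neighbor rescale of a 2D grayscale grid to size x size."""
    h, w = len(pixels), len(pixels[0])
    return [[pixels[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

def circular_mask(pixels):
    """Zero out everything outside the circle inscribed in the square."""
    size = len(pixels)
    c = (size - 1) / 2        # center coordinate
    r2 = (size / 2) ** 2      # squared radius
    return [[v if (x - c) ** 2 + (y - c) ** 2 <= r2 else 0
             for x, v in enumerate(row)]
            for y, row in enumerate(pixels)]

img = [[255] * 30 for _ in range(20)]       # a 20x30 all-white "photo"
normalized = circular_mask(rescale(img))

print(len(normalized), len(normalized[0]))  # 12 12
print(normalized[0][0])                     # 0: corner removed by the mask
print(normalized[6][6])                     # 255: center survives
```

A production system would of course use a real imaging library and interpolation; the point is only that after this step every candidate image has identical shape and no corner artifacts.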

We have created a system that has sufficiently high human-success rates and sufficiently low computer-success rates. When using three images, the rotational CAPTCHA system results in an 84% human success metric, and a .009% bot-success metric (assuming random guessing). These metrics are based on two variables: the number of images we require a user to rotate and the size of the acceptable error window (the degrees from upright which we still consider to be upright). Predictably, as the number of images shown becomes greater, the probability of correctly solving them decreases. However, as the error window increases, the probability of correctly solving them increases. The system which results in an 84% human success rate and .009% bot success rate asks the user to rotate three images, each within 16° of upright (8 degrees on either side of upright).

A CAPTCHA system which displayed ≥ 3 images with a ≤ 16-degree error window would achieve a guess success rate of less than 1 in 10,000, a standard acceptable computer success rate for CAPTCHAs.
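The .009% bot figure is straightforward to reproduce: a random guess lands inside a 16-degree acceptance window (±8° around upright) with probability 16/360, and three independent images must all succeed.

```python
# Chance that a uniformly random rotation falls within the 16-degree window.
per_image = 16 / 360

# Three images must all be guessed correctly.
bot_success = per_image ** 3

print(round(per_image, 4))       # 0.0444
print(f"{bot_success:.3%}")      # 0.009%
print(bot_success < 1 / 10_000)  # True: under the 1-in-10,000 bar
```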

In our experiments, users moved a slider to rotate the image to its upright position. On small display devices such as a mobile phone, they could directly manipulate the image using a touch screen, as seen in Figure 12, or rotate it via button presses.

A story of failed biometrics at a gym

From Jake Vinson’s “Cracking your Fingers” (The Daily WTF: 28 April 2009):

A few days later, Ross stood proudly in the reception area, hands on his hips. A high-tech fingerprint scanner sat at the reception area near the turnstile and register, as the same scanner would be used for each, though the register system wasn’t quite ready for rollout yet. Another scanner sat on the opposite side of the turnstile, for gym members to sign out. … The receptionist looked almost as pleased as Ross that morning as well, excited that this meant they were working toward a system that necessitated fewer manual member ID lookups.

After signing a few people up, the new system was going swimmingly. Some users declined to use the new system, instead walking to the far side of the counter to use the old touchscreen system. Then Johnny tried to leave after his workout.

… He scanned his finger on his way out, but the turnstile wouldn’t budge.

“Uh, just a second,” the receptionist furiously typed and clicked, while Johnny removed one of his earbuds and stared. “I’ll just have to manually override it…” but it was useless. There was no manual override option. Somehow, it was never considered that the scanner would malfunction. After several seconds of searching and having Johnny try to scan his finger again, the receptionist instructed him just to jump over the turnstile.

It was later discovered that the system required a “sign in” and a “sign out,” and if a member was recognized as someone else when attempting to sign out, the system rejected the input, and the turnstile remained locked in position. This was not good.

The scene repeated itself several times that day. Worse, the fingerprint scanner at the exit was getting kind of disgusting. Dozens of sweaty fingerprints required the scanner to be cleaned hourly, and even after it was freshly cleaned, it sometimes still couldn’t read fingerprints right. The latticed patterns on the barbell grips would leave indented patterns temporarily on the members’ fingers, there could be small cuts or folds on fingertips just from carrying weights or scrapes on the concrete coming out of the pool, fingers were wrinkly after a long swim, or sometimes the system just misidentified the person for no apparent reason.

Fingerprint Scanning

In much the same way that it’s not a good idea to store passwords in plaintext, it’s not a good idea to store raw fingerprint data. Instead, it should be hashed, so that the same input will consistently give the same output, but said output can’t be used to determine what the input was. In biometry, there are many complex algorithms that can analyze a fingerprint via several points on the finger. This system was set up to record seven points.
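The password analogy can be made concrete: a hash gives the same output for the same input but can't be run backwards. (A caveat the story itself goes on to illustrate: real biometric templates are not plain hashes, since two scans of one finger are never bit-identical. The inputs below are invented placeholders, not real template data.)

```python
import hashlib

def digest(data: bytes) -> str:
    """Deterministic, one-way fingerprint of the input."""
    return hashlib.sha256(data).hexdigest()

a = digest(b"minutiae-template-7-points")  # hypothetical enrollment template
b = digest(b"minutiae-template-7-points")  # same input on a later scan
c = digest(b"minutiae-template-6-points")  # any change to the input

print(a == b)  # True: same input, same output
print(a == c)  # False: a different input gives an unrelated digest
```

This deterministic behavior is precisely why raw hashing fails for biometrics: the gym's scanner needs to match *similar* inputs, which forces it into fuzzy matching instead.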

After a few hours of rollout, though, it became clear that the real world doesn’t conform to how it should’ve worked in theory. There were simply too many variables, too many activities in the gym that could cause fingerprints to become altered. As such, the installers did what they thought was the reasonable thing to do – reduce the precision from seven points down to something substantially lower.

The updated system was in place for a few days, and it seemed to be working better; no more people being held up trying to leave.

Discovery

… [The monitor] showed Ray as coming in several times that week, often twice on the same day, just hours apart. For each day listed, Ray had only come the later of the two times.

Reducing the precision of the fingerprint scanning resulted in the system identifying two people as one person. Reviewing the log, they saw that some regulars weren’t showing up in the system, and many members had two or three people being identified by the scanner as them.
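The collision problem the gym ran into is just the birthday paradox applied to a shrunken template space. A toy model (member count, bit widths, and the uniform-random assumption are all invented for illustration; real minutiae matching is not a uniform draw):

```python
import random

def collisions(members: int, bits: int, seed: int = 42) -> int:
    """Members whose random template collides with someone else's,
    when each template is one of 2**bits possible values."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    templates = [rng.randrange(2 ** bits) for _ in range(members)]
    return members - len(set(templates))

high_precision = collisions(members=500, bits=28)  # "seven point" analogue
low_precision = collisions(members=500, bits=12)   # precision reduced

print(high_precision)  # collisions are rare in the large space
print(low_precision)   # many members now share a template
```

Shrinking the space from 2^28 to 2^12 raises the expected number of colliding pairs by a factor of 2^16 — which is the gym's "two or three people identified as the same member" in miniature.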

German twins commit the perfect crime

From “Twins Suspected in Spectacular Jewelry Heist Set Free” (Spiegel Online International: 19 March 2009):

Saved by their indistinguishable DNA, identical twins suspected in a massive jewelry heist have been set free. Neither could be exclusively linked to the DNA evidence.

German police say at least one of the identical twin brothers Hassan and Abbas O. may have perpetrated a recent multimillion euro jewelry heist in Berlin. But because of their indistinguishable DNA, neither can be individually linked to the crime. Both were set free on Wednesday.

In the early morning hours of February 25, three masked men broke into Germany’s famous luxury department store Kaufhaus Des Westens (KaDeWe). Video cameras show how they climbed into the store’s grand main hall, broke open cabinets and display cases and made off with an estimated €5 million worth of jewelry and watches.

When police found traces of DNA on a glove left at the scene of the crime, it seemed that the criminals responsible for Germany’s most spectacular heist in years would be caught. But the DNA led to not one but two suspects — 27-year-old identical, or monozygotic, twins with near-identical DNA.

German law stipulates that each criminal must be individually proven guilty. The problem in the case of the O. brothers is that their twin DNA is so similar that neither can be exclusively linked to the evidence using current methods of DNA analysis. So even though both have criminal records and may have committed the heist together, Hassan and Abbas O. have been set free.

Criminal goods & service sold on the black market

From Ellen Messmer’s “Symantec takes cybercrime snapshot with ‘Underground Economy’ report” (Network World: 24 November 2008):

The “Underground Economy” report [from Symantec] contains a snapshot of online criminal activity observed from July 2007 to June 2008 by a Symantec team monitoring activities in Internet Relay Chat (IRC) and Web-based forums where stolen goods are advertised. Symantec estimates the total value of the goods advertised on what it calls “underground servers” was about $276 million, with credit-card information accounting for 59% of the total.

If that purloined information were successfully exploited, it probably would bring the buyers about $5 billion, according to the report — just a drop in the bucket, points out David Cowings, senior manager of operations at Symantec Security Response.

“Ninety-eight percent of the underground-economy servers have life spans of less than 6 months,” Cowings says. “The smallest IRC server we saw had five channels and 40 users. The largest IRC server network had 28,000 channels and 90,000 users.”

In the one year covered by the report, Symantec’s team observed more than 69,000 distinct advertisers and 44 million total messages online selling illicit credit-card and financial data, but the 10 most active advertisers appeared to account for 11% of the total messages posted and $575,000 in sales.

According to the report, a bank-account credential was selling for $10 to $1,000, depending on the balance and location of the account. Sellers also hawked specific financial sites’ vulnerabilities for an average price of $740, though prices did go as high as $2,999.

In other spots, the average price for a keystroke logger — malware used to capture a victim’s information — was an affordable $23. Attack tools, such as botnets, sold for an average of $225. “For $10, you could host a phishing site on someone’s server or compromised Web site,” Cowings says.

Desktop computer games appeared to be the most-pirated software, accounting for 49% of all file instances that Symantec observed. The second-highest category was utility applications; third-highest was multimedia productivity applications, such as photograph or HTML editors.

Another huge botnet

From Kelly Jackson Higgins’ “Researchers Find Massive Botnet On Nearly 2 Million Infected Consumer, Business, Government PCs” (Dark Reading: 22 April 2009):

Researchers have discovered a major botnet operating out of the Ukraine that has infected 1.9 million machines, including large corporate and government PCs mainly in the U.S.

The botnet, which appears to be larger than the infamous Storm botnet was in its heyday, has infected machines from some 77 government-owned domains — 51 of which are U.S. government ones, according to Ophir Shalitin, marketing director of Finjan, which recently found the botnet. Shalitin says the botnet is controlled by six individuals and is hosted in Ukraine.

Aside from its massive size and scope, what is also striking about the botnet is what its malware can do to an infected machine. The malware lets an attacker read the victim’s email, communicate via HTTP in the botnet, inject code into other processes, visit Websites without the user knowing, and register as a background service on the infected machine, for instance.

Finjan says victims are infected when visiting legitimate Websites containing a Trojan that the company says is detected by only four of 39 anti-malware tools, according to a VirusTotal report run by Finjan researchers.

Around 45 percent of the bots are in the U.S., and the infected machines run Windows XP. Nearly 80 percent run Internet Explorer; 15 percent, Firefox; 3 percent, Opera; and 1 percent Safari. Finjan says the bots were found in banks and large corporations, as well as consumer machines.

Reasons Windows has a poor security architecture

From Daniel Eran Dilger’s “The Unavoidable Malware Myth: Why Apple Won’t Inherit Microsoft’s Malware Crown” (AppleInsider: 1 April 2008):

Thanks to its extensive use of battle-hardened Unix and open source software, Mac OS X also has always had security precautions in place that Windows lacked. It has also not shared the architectural weaknesses of Windows that have made that platform so easy to exploit and so difficult to clean up afterward, including:

  • the Windows Registry and the convoluted software installation mess related to it,
  • the Windows NT/2000/XP Interactive Services flaw opening up shatter attacks,
  • a wide open, legacy network architecture that left unnecessary, unsecured ports exposed by default,
  • poorly designed network sharing protocols that failed to account for adequate security measures,
  • poorly designed administrative messaging protocols that failed to account for adequate security,
  • poorly designed email clients that gave untrusted scripts access to spam one’s own contacts unwittingly,
  • an integrated web browser architecture that opened untrusted executables by design, and many others.

Vista & Mac OS X security features

From Prince McLean’s “Pwn2Own contest winner: Macs are safer than Windows” (AppleInsider: 26 March 2009):

Once it did arrive, Vista introduced sophisticated new measures to make it more difficult for malicious crackers to inject code.

One is support for the CPU’s NX bit, which allows a process to mark certain areas of memory as “Non-eXecutable” so the CPU will not run any code stored there. This is referred to as “executable space protection,” and helps to prevent malicious code from being surreptitiously loaded into a program’s data storage and subsequently executed to gain access to the same privileges as the program itself, an exploit known as a “buffer overflow attack.”

A second security practice of Vista is “address space layout randomization” or ASLR, which is used to load executables, and the system libraries, heap, and stack into a randomly assigned location within the address space, making it far more difficult for crackers to know where to find vulnerabilities they can attack, even if they know what the bugs are and how to exploit them.
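ASLR is easy to observe directly: ask two fresh processes where a shared library landed and compare. A small Unix-only probe (it assumes `ctypes.CDLL(None)` opens the running process's libc, which holds on Linux and macOS; with ASLR disabled the two addresses would match):

```python
import subprocess
import sys

# Each child process reports the address where libc's printf was loaded.
probe = (
    "import ctypes;"
    "libc = ctypes.CDLL(None);"
    "print(ctypes.cast(libc.printf, ctypes.c_void_p).value)"
)

addrs = [
    int(subprocess.check_output([sys.executable, "-c", probe]))
    for _ in range(2)
]

print([hex(a) for a in addrs])
print(addrs[0] != addrs[1])  # usually True when ASLR is active
```

This is exactly the knowledge an exploit writer is denied: without a fixed load address, precomputed jump targets into library code stop working.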

[Charlie Miller, the security expert who won both this and last year’s CanSecWest Pwn2Own security contests,] told Tom’s Hardware “the NX bit is very powerful. When used properly, it ensures that user-supplied code cannot be executed in the process during exploitation. Researchers (and hackers) have struggled with ways around this protection. ASLR is also very tough to defeat. This is the way the process randomizes the location of code in a process. Between these two hurdles, no one knows how to execute arbitrary code in Firefox or IE 8 in Vista right now. For the record, Leopard has neither of these features, at least implemented effectively. In the exploit I won Pwn2Own with, I knew right where my shellcode was located and I knew it would execute on the heap for me.”

While Apple did implement some support for NX and ASLR in Mac OS X, Leopard retains dyld (the dynamic loader responsible for loading all of the frameworks, dylibs, and bundles needed by a process) in the same known location, making it relatively trivial to bypass its ASLR. This is slated to change later this year in Snow Leopard.

With the much larger address space available to 64-bit binaries, Snow Leopard’s ASLR will make it possible to hide the location of loaded code like a needle in a haystack, thwarting the efforts of malicious attackers to maintain predictable targets for controlling the code and data loaded into memory. Without knowing what addresses to target, the “vast majority of these exploits will fail,” another security expert who has also won a high profile Mac cracking contest explained to AppleInsider.

$9 million stolen from 130 ATM machines in 49 cities in 30 minutes

From Catey Hill’s “Massive ATM heist! $9M stolen in only 30 minutes” (New York Daily News: 12 February 2009):

With information stolen from only 100 ATM cards, thieves made off with $9 million in cash, according to published reports. It only took 30 minutes.

“We’ve seen similar attempts to defraud a bank through ATM machines, but not anywhere near the scale we have here,” FBI Agent Ross Rice told Fox 5. “We’ve never seen one this well coordinated.”

The heist happened in November, but FBI officials released more information about the events only recently. …

How did they do it? The thieves hacked into the RBS WorldPay computer system and stole payroll card information from the company. A payroll card is used by many companies to pay the salaries of their employees. The cards work a lot like a debit card and can be used in any ATM.

Once the thieves had the card info, they employed a group of ‘cashers’ – people employed to go get the money out of the ATMs. The cashers went to ATMs around the world and withdrew money.

“Over 130 different ATM machines in 49 cities worldwide were accessed in a 30-minute period on November 8,” Agent Rice told Fox 5.

Social software: 5 properties & 3 dynamics

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

A great deal of sociality is about engaging with publics, but we take for granted certain structural aspects of those publics. Certain properties are core to social media in a combination that alters how people engage with one another. I want to discuss five properties of social media and three dynamics. These are the crux of what makes the phenomena we’re seeing so different from unmediated phenomena.

1. Persistence. What you say sticks around. This is great for asynchronicity, not so great when everything you’ve ever said has gone down on your permanent record. …

2. Replicability. You can copy and paste a conversation from one medium to another, adding to the persistent nature of it. This is great for being able to share information, but it is also at the crux of rumor-spreading. Worse: while you can replicate a conversation, it’s much easier to alter what’s been said than to confirm that it’s an accurate portrayal of the original conversation.

3. Searchability. My mother would’ve loved to scream search into the air and figure out where I’d run off with friends. She couldn’t; I’m quite thankful. But with social media, it’s quite easy to track someone down or to find someone as a result of searching for content. Search changes the landscape, making information available at our fingertips. This is great in some circumstances, but when trying to avoid those who hold power over you, it may be less than ideal.

4. Scalability. Social media scales things in new ways. Conversations that were intended for just a friend or two might spiral out of control and scale to the entire school or, if it is especially embarrassing, the whole world. …

5. (de)locatability. With the mobile, you are dislocated from any particular point in space, but at the same time, location-based technologies make location much more relevant. This paradox means that we are simultaneously more and less connected to physical space.

Those five properties are intertwined, but their implications have to do with the ways in which they alter social dynamics. Let’s look at three different dynamics that have been reconfigured as a result of social media.

1. Invisible Audiences. We are used to being able to assess the people around us when we’re speaking. We adjust what we’re saying to account for the audience. Social media introduces all sorts of invisible audiences. There are lurkers who are present at the moment but whom we cannot see, but there are also visitors who access our content at a later date or in a different environment than where we first produced them. As a result, we are having to present ourselves and communicate without fully understanding the potential or actual audience. The potential invisible audiences can be stifling. Of course, there’s plenty of room to put your head in the sand and pretend like those people don’t really exist.

2. Collapsed Contexts. Connected to this is the collapsing of contexts. In choosing what to say when, we account for both the audience and the context more generally. Some behaviors are appropriate in one context but not another, in front of one audience but not others. Social media brings all of these contexts crashing into one another and it’s often difficult to figure out what’s appropriate, let alone what can be understood.

3. Blurring of Public and Private. Finally, there’s the blurring of public and private. These distinctions are normally structured around audience and context with certain places or conversations being “public” or “private.” These distinctions are much harder to manage when you have to contend with the shifts in how the environment is organized.

All of this means that we’re forced to contend with a society in which things are being truly reconfigured. So what does this mean? As we are already starting to see, this creates all new questions about context and privacy, about our relationship to space and to the people around us.

What passwords do people use? phpBB examples

From Robert Graham’s “PHPBB Password Analysis” (Dark Reading: 6 February 2009):

A popular Website, phpbb.com, was recently hacked. The hacker published approximately 20,000 user passwords from the site. …

This incident is similar to one two years ago when MySpace was hacked, revealing about 30,000 passwords. …

The striking difference between the two incidents is that the phpbb passwords are simpler. MySpace requires that passwords “must be between 6 and 10 characters, and contain at least 1 number or punctuation character.” Most people satisfied this requirement by simply appending “1” to the ends of their passwords. The phpbb site has no such restrictions — the passwords are shorter and rarely contain anything more than a dictionary word.

It’s hard to judge exactly how many passwords are dictionary words. … I ran the phpbb passwords through various dictionary files and came up with a 65% match (for a simple English dictionary) and 94% (for “hacker” dictionaries). …
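A dictionary check of this kind is straightforward to sketch. The sample passwords and wordlist below are invented for illustration; a real analysis would load the leaked list and a large wordlist from files:

```python
def dictionary_match_rate(passwords, wordlist):
    """Fraction of passwords that appear verbatim (case-insensitively) in a wordlist."""
    words = set(w.lower() for w in wordlist)
    hits = sum(1 for p in passwords if p.lower() in words)
    return hits / len(passwords)

# Hypothetical data for demonstration:
leaked = ["123456", "dragon", "xk#9!q", "monkey", "password"]
english = ["password", "dragon", "monkey", "letmein"]

rate = dictionary_match_rate(leaked, english)
print(f"{rate:.0%} of passwords matched the wordlist")  # 3 of 5 -> "60%"
```

Graham’s 65% vs. 94% gap comes from the wordlist, not the method: “hacker” dictionaries add leetspeak variants and common mutations that a plain English dictionary misses.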

16% of passwords matched a person’s first name. This includes people choosing their own first names or those of their spouses or children. The most popular first names were Joshua, Thomas, Michael, and Charlie. But I wonder if there is something else going on. Joshua, for example, was also the password to the computer in “Wargames” …

14% of passwords were patterns on the keyboard, like “1234,” “qwerty,” or “asdf.” There are a lot of different patterns people choose, like “1qaz2wsx” or “1q2w3e.” I spent a while googling “159357,” trying to figure out how to categorize it, then realized it was a pattern on the numeric keypad. …

4% are variations of the word “password,” such as “passw0rd,” “password1,” or “passwd.” I googled “drowssap,” trying to figure out how to categorize it, until I realized it was “password” spelled backward.

5% of passwords are pop-culture references from TV, movies, and music. These tend to be youth culture (“hannah,” “pokemon,” “tigger”) and geeky (“klingon,” “starwars,” “matrix,” “legolas,” “ironman”). … Some notable pop-culture references are chosen not because they are popular, but because they sound like passwords, such as “ou812” (’80s Van Halen album), “blink182” (’90s pop), “rush2112” (’80s album), and “8675309” (’80s pop song).

4% of passwords appear to reference things nearby. The name “samsung” is a popular password, I think because it’s the brand name on the monitor that people are looking at … Similarly, there are a lot of names of home computers like “dell,” “packard,” “apple,” “pavilion,” “presario,” “compaq,” and so on. …

3% of passwords are “emo” words. Swear words, especially the F-word, are common, but so are various forms of love and hate (like “iloveyou” or “ihateyou”).

3% are “don’t care” words. … A lot of password choices reflect this attitude, either implicitly with “abc123” or “blahblah,” or explicitly with “whatever,” “whocares,” or “nothing.”

1.3% are passwords people saw in movies/TV. This is a small category, consisting only of “letmein,” “trustno1,” “joshua,” and “monkey,” but it accounts for a large percentage of passwords.

1% are sports related. …

Here are the top 20 passwords from the phpbb dataset. You’ll find nothing surprising here; all of them are on this Top 500 list.

3.03% “123456”
2.13% “password”
1.45% “phpbb”
0.91% “qwerty”
0.82% “12345”
0.59% “12345678”
0.58% “letmein”
0.53% “1234”
0.50% “test”
0.43% “123”
0.36% “trustno1”
0.33% “dragon”
0.31% “abc123”
0.31% “123456789”
0.31% “111111”
0.30% “hello”
0.30% “monkey”
0.28% “master”
0.22% “killer”
0.22% “123123”

Notice that whereas “myspace1” was one of the most popular passwords in the MySpace dataset, “phpbb” is one of the most popular passwords in the phpbb dataset.

The password length distribution is as follows:

1 character 0.34%
2 characters 0.54%
3 characters 2.92%
4 characters 12.29%
5 characters 13.29%
6 characters 35.16%
7 characters 14.60%
8 characters 15.50%
9 characters 3.81%
10 characters 1.14%
11 characters 0.22%

Note that phpbb has no requirements for password lengths …
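Both tables above — the top-20 frequencies and the length distribution — fall out of a couple of lines of tallying. A minimal sketch over a hypothetical password list:

```python
from collections import Counter

# Hypothetical leaked plaintext passwords, for illustration only.
passwords = ["123456", "password", "123456", "phpbb", "qwerty", "123456"]

# Top passwords by share of the dataset (cf. the top-20 table above).
top = Counter(passwords).most_common(3)
for pw, n in top:
    print(f"{n / len(passwords):.2%}  {pw!r}")

# Length distribution (cf. the length table above).
lengths = Counter(len(p) for p in passwords)
for length in sorted(lengths):
    print(f"{length} characters  {lengths[length] / len(passwords):.2%}")
```

`Counter.most_common` sorts by count, which is all the top-20 table is; the length histogram is the same tally keyed by `len(p)` instead of the password itself.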

Should states track cars with GPS?

From Glen Johnson’s “Massachusetts may consider a mileage charge” (AP: 17 February 2009):

A tentative plan to overhaul Massachusetts’ transportation system by using GPS chips to charge motorists a quarter-cent for every mile behind the wheel has angered some drivers.

But a “Vehicle Miles Traveled” program like the one the governor may unveil this week has already been tested — with positive results — in Oregon.

Governors in Idaho and Rhode Island, as well as the federal government, also are talking about such programs. And in North Carolina, a panel suggested in December the state start charging motorists a quarter-cent for every mile as a substitute for the gas tax.

“The Big Brother issue was identified during the first meeting of the task force that developed our program,” said Jim Whitty, who oversees innovation projects for the Oregon Department of Transportation. “Everything we did from that point forward, even though we used electronics, was to eliminate those concerns.”

A draft transportation overhaul plan prepared for Gov. Deval Patrick says implementing a Vehicle Miles Traveled system to replace the gas tax makes sense. “A user-based system, collected electronically, is a fair way to pay for our transportation needs in the future,” it says.

The idea behind the program is simple: As cars become more fuel efficient or powered by electricity, gas tax revenues decline. Yet the cost of building and maintaining roads and bridges is increasing. A state could cover that gap by charging drivers precisely for the mileage their vehicles put on public roads.

In Oregon, the state paid volunteers who let the transportation department install GPS receivers in 300 vehicles. The device did not transmit a signal — which would allow real-time tracking of a driver’s movements — but instead passively received satellite pings telling the receiver where it was in terms of latitude and longitude coordinates.

The state used those coordinates to determine when the vehicle was driving both within Oregon and outside the state. And it measured the respective distances through a connection with the vehicle’s odometer.

When a driver pulled into a predetermined service station, the pump linked electronically with the receiver, downloaded the number of miles driven in Oregon and then charged the driver a fee based on the distance. The gas tax they would have paid was reduced by the amount of the user fee. Drivers continued to be charged gas tax for miles driven outside Oregon.
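The pump-side settlement described above amounts to simple arithmetic: in-state miles pay the per-mile fee instead of gas tax, while out-of-state miles keep paying gas tax. A sketch using the quarter-cent rate from the article; the gas-tax rate, mileage, and fuel-economy figures are made-up examples:

```python
PER_MILE_FEE = 0.0025       # a quarter-cent per mile (from the article)
GAS_TAX_PER_GALLON = 0.24   # hypothetical state gas-tax rate, for illustration

def pump_charge(in_state_miles: float, out_state_miles: float, mpg: float) -> float:
    """Net road charge at the pump: per-mile fee for in-state miles,
    ordinary gas tax for miles driven out of state."""
    mileage_fee = in_state_miles * PER_MILE_FEE
    out_state_tax = (out_state_miles / mpg) * GAS_TAX_PER_GALLON
    return round(mileage_fee + out_state_tax, 2)

# 400 miles in-state, 100 out of state, in a 25-mpg car:
print(pump_charge(400, 100, mpg=25))
```

Note why the scheme matters as cars get more efficient: the mileage fee is independent of `mpg`, so revenue no longer falls as fuel economy rises, which is exactly the gap the article says the gas tax leaves.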

Under such systems, one of which is already used in London, drivers are charged more for entering a crowded area during rush hour than off-peak periods.

Crazy anti-terrorism plans that worked

From a Special Operations officer quoted in Tom Ricks’s Inbox (The Washington Post: 5 October 2008):

One of the most interesting operations was the laundry mat [sic]. Having lost many troops and civilians to bombings, the Brits decided they needed to determine who was making the bombs and where they were being manufactured. One bright fellow recommended they operate a laundry and when asked “what the hell he was talking about,” he explained the plan and it was incorporated — to much success.

The plan was simple: Build a laundry and staff it with locals and a few of their own. The laundry would then send out “color coded” special discount tickets, to the effect of “get two loads for the price of one,” etc. The color coding was matched to specific streets and thus when someone brought in their laundry, it was easy to determine the general location from which a city map was coded.

While the laundry was indeed being washed, pressed and dry cleaned, it had one additional cycle — every garment, sheet, glove, pair of pants, was first sent through an analyzer, located in the basement, that checked for bomb-making residue. The analyzer was disguised as just another piece of the laundry equipment; good OPSEC [operational security]. Within a few weeks, multiple positives had shown up, indicating the ingredients of bomb residue, and intelligence had determined which areas of the city were involved. To narrow their target list, [the laundry] simply sent out more specific coupons [numbered] to all houses in the area, and before long they had good addresses. After confirming addresses, authorities with the SAS teams swooped down on the multiple homes and arrested multiple personnel and confiscated numerous assembled bombs, weapons and ingredients. During the entire operation, no one was injured or killed.
ad_icon

By the way, the gentleman also told the story of how [the British] also bugged every new car going into Northern Ireland, and thus knew everything [Sinn Fein leader] Gerry Adams was discussing. They did this because Adams always conducted mobile meetings and always used new cars.

The Israelis have a term for this type of thinking, “Embracing the Meshugganah,” which, literally translated, means “embrace the craziness”: the crazier the plan, the less likely the adversary is to have thought about it, and thus to have implemented a counter-measure.

Why cons work on us

From Damien Carrick’s interview with Nicholas Johnson, “The psychology of conmen” (The Law Report: 30 September 2008):

Nicholas Johnson: I think what I love most about con artists and the world of scammers is that they’re criminals who manage to get their victims to hand over their possessions freely. Most thieves and robbers and the like, tend to use force, or deception, in order for them to take things, whereas a con artist manages to get their victim to freely give up their stuff.

The main thing that really makes people susceptible to con artists is the idea that we’re going to get something for nothing. So it really buys into our greed; it buys into sometimes our lust, and at the same time, sometimes even our sense that we’re going to do something good, so we’re going to get a great feeling from helping someone out, we’re going to make some money, we’re going to meet a beautiful girl—it really ties into our basest desires, and that’s what the con artist relies on.

Most con artists rely on this idea that the victim is in control. The victim is the one who is controlling the situation. So a great example of that is the classic Nigerian email scam, the person who writes to you and says, ‘I’ve got this money that I need to get out of the country, and I need your help.’ So you’re in control, you can help them, you can do a good deed, you can make some money, you’ve got this fantastic opportunity, and the con artist needs your help. It’s not the con artist doing you a favour. So really, you feel like you’re the one who’s controlling the situation when really it’s the con artist who knows the real deal.

I think for a lot of con artists they’re very proud of their work, and they like people to know exactly what they’ve gotten away with.

… for many of [the conmen], they really feel like even if they get caught, or even if they don’t get away with it, they feel like they’re giving their victim a good story, you know, something to dine out over, something to discuss down at the pub. They think that’s OK, you can scam somebody out of a couple of hundred bucks, because they’re getting a good story in return.

My all-time favourite one only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.

But there’s all sorts of those homemaker scams, people offering to seal your roof so they say, ‘We’ll put a fresh coat of tar on your roof’, or ‘We’ll re-seal your driveway’. In actual fact all they do is get old black sump oil and smooth it over the roof or smooth it over the driveway. You come home and it looks like wet tar, and so ‘Don’t step on it for 24 hours’, and of course 24 hours later they’re long gone with the money, and you’re left with a sticky, smelly driveway.