
A nanny’s man-in-the-middle attack

From Bruce Schneier’s Crypto-Gram of 15 April 2004:

Here’s a story of a woman who posts an ad requesting a nanny. When a potential nanny responds, she asks for references for a background check. Then she places another ad, using the reference material as a fake identity. She gets a job with the good references—they’re real, although for another person—and then robs the family who hires her. And then she repeats the process.

Look what’s going on here. She inserts herself in the middle of a communication between the real nanny and the real employer, pretending to be one to the other. The nanny sends her references to someone she assumes to be a potential employer, not realizing that it is a criminal. The employer receives the references and checks them, not realizing that they don’t actually belong to the person who is sending them.


How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.
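The telescope works because of one simple property: no legitimate service lives at a dark address, so every packet that arrives is unsolicited. A minimal sketch of how such logs can be summarized (the function name and the toy IPs are illustrative, not SDSC's actual tooling):

```python
from collections import Counter

# Millions of unused "dark" addresses all route to one collector, so any
# contact attempt that arrives is by definition unsolicited, and heavy
# senders are almost certainly scanning malware.

def summarize_telescope_log(events):
    """events: iterable of (source_ip, dark_dest_ip) contact attempts.

    Returns per-source contact counts, most active first. No filtering
    is needed: legitimate traffic never targets the dark addresses.
    """
    counts = Counter(src for src, _dest in events)
    return counts.most_common()

# Toy log: a host probing many dark addresses stands out immediately.
log = [("198.51.100.7", f"203.0.113.{i}") for i in range(5)]
log.append(("192.0.2.9", "203.0.113.1"))
print(summarize_telescope_log(log))  # [('198.51.100.7', 5), ('192.0.2.9', 1)]
```

This is also why the flood described below was so visible: a worm scanning randomly across the address space cannot avoid hitting the telescope's dark ranges.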

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.
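The rendezvous scheme above can be sketched in a few lines. This is not Conficker's actual generator (whose seed and constants were recovered only by reverse engineering); it only illustrates the principle that worm and author derive the same daily list independently, so the author need register just one of the day's names to leave instructions:

```python
import hashlib
from datetime import date

# Hypothetical domain-generation sketch: both the worm and its author
# hash the date plus an index to get the same pseudo-random labels,
# then append one of the five top-level domains the article mentions.

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day: date, count: int = 250):
    """Deterministically derive `count` rendezvous domains for `day`."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Keep only letters so the label looks like the "meaningless
        # strings of letters" described above.
        label = "".join(c for c in digest if c.isalpha())[:8]
        domains.append(label + TLDS[i % len(TLDS)])
    return domains

urls = daily_domains(date(2008, 11, 21))
print(len(urls))  # 250 -- the same list the worm's author can precompute
```

Defenders face the asymmetry this creates: the author registers one domain; anyone trying to block the channel must pre-register or monitor all 250, every day.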

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.


Interviewed for an article about misuses of Twitter

The Saint Louis Beacon published an article on 27 April 2009 titled “Tweets from the jury box aren’t amusing”, about legal “cases across the country where jurors have used cell phones, BlackBerrys and other devices to comment – sometimes minute by minute or second by second on Twitter, for instance – on what they are doing, hearing and seeing during jury duty.” In it, I was quoted as follows:

The small mobile devices so prevalent today can encourage a talkative habit that some people may find hard to break.

“People get so used to doing it all the time,” said Scott Granneman, an adjunct professor of communications and journalism at Washington University. “They don’t stop to think that being on a jury means they should stop. It’s an etiquette thing.”

Moreover, he added, the habit can become so ingrained in some people – even those on juries who are warned repeatedly that they should not Tweet or text or talk about the case they are on – that they make excuses.

“It’s habitual,” Granneman said. “They say ‘It’s just my friends and family reading this.’ They don’t know the whole world is following this.

“Anybody can go to any Twitter page. There may be only eight people following you, but anybody can go to anyone’s page and anybody can reTweet – forward someone’s page: ‘Oh my God, the defense attorney is so stupid.’ That can go on and on and on.”


Defining social media, social software, & Web 2.0

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Social media is the latest buzzword in a long line of buzzwords. It is often used to describe the collection of software that enables individuals and communities to gather, communicate, share, and in some cases collaborate or play. In tech circles, social media has replaced the earlier fave “social software.” Academics still tend to prefer terms like “computer-mediated communication” or “computer-supported cooperative work” to describe the practices that emerge from these tools and the old skool academics might even categorize these tools as “groupwork” tools. Social media is driven by another buzzword: “user-generated content” or content that is contributed by participants rather than editors.

… These tools are part of a broader notion of “Web2.0.” Yet-another-buzzword, Web2.0 means different things to different people.

For the technology crowd, Web2.0 was about a shift in development and deployment. Rather than producing a product, testing it, and shipping it to be consumed by an audience that was disconnected from the developer, Web2.0 was about the perpetual beta. This concept makes all of us giggle, but what this means is that, for technologists, Web2.0 was about constantly iterating the technology as people interacted with it and learning from what they were doing. To make this happen, we saw the rise of technologies that supported real-time interactions, user-generated content, remixing and mashups, APIs and open-source software that allowed mass collaboration in the development cycle. …

For the business crowd, Web2.0 can be understood as hope. Web2.0 emerged out of the ashes of the fallen tech bubble and bust. Scars ran deep throughout Silicon Valley and venture capitalists and entrepreneurs wanted to party like it was 1999. Web2.0 brought energy to this forlorn crowd. At first they were skeptical, but slowly they bought in. As a result, we’ve seen a resurgence of startups, venture capitalists, and conferences. At this point, Web2.0 is sometimes referred to as Bubble2.0, but there’s something to say about “hope” even when the VCs start co-opting that term because they want four more years.

For users, Web2.0 was all about reorganizing web-based practices around Friends. For many users, direct communication tools like email and IM were used to communicate with one’s closest and dearest while online communities were tools for connecting with strangers around shared interests. Web2.0 reworked all of that by allowing users to connect in new ways. While many of the tools may have been designed to help people find others, what Web2.0 showed was that people really wanted a way to connect with those that they already knew in new ways. Even tools like MySpace and Facebook which are typically labeled social networkING sites were never really about networking for most users. They were about socializing inside of pre-existing networks.


Facebook & the Dunbar number

From The Economist‘s “Primates on Facebook” (26 February 2009):

Robin Dunbar, an anthropologist who now works at Oxford University, concluded that the cognitive power of the brain limits the size of the social network that an individual of any given species can develop. Extrapolating from the brain sizes and social networks of apes, Dr Dunbar suggested that the size of the human brain allows stable networks of about 148. Rounded to 150, this has become famous as “the Dunbar number”.

Many institutions, from neolithic villages to the maniples of the Roman army, seem to be organised around the Dunbar number. Because everybody knows everybody else, such groups can run with a minimum of bureaucracy. But that does not prove Dr Dunbar’s hypothesis is correct, and other anthropologists, such as Russell Bernard and Peter Killworth, have come up with estimates of almost double the Dunbar number for the upper limit of human groups. Moreover, sociologists also distinguish between a person’s wider network, as described by the Dunbar number or something similar, and his social “core”. Peter Marsden, of Harvard University, found that Americans, even if they socialise a lot, tend to have only a handful of individuals with whom they “can discuss important matters”. A subsequent study found, to widespread concern, that this number is on a downward trend.

The rise of online social networks, with their troves of data, might shed some light on these matters. So The Economist asked Cameron Marlow, the “in-house sociologist” at Facebook, to crunch some numbers. Dr Marlow found that the average number of “friends” in a Facebook network is 120, consistent with Dr Dunbar’s hypothesis, and that women tend to have somewhat more than men. But the range is large, and some people have networks numbering more than 500, so the hypothesis cannot yet be regarded as proven.

What also struck Dr Marlow, however, was that the number of people on an individual’s friend list with whom he (or she) frequently interacts is remarkably small and stable. The more “active” or intimate the interaction, the smaller and more stable the group.

Thus an average man—one with 120 friends—generally responds to the postings of only seven of those friends by leaving comments on the posting individual’s photos, status messages or “wall”. An average woman is slightly more sociable, responding to ten. When it comes to two-way communication such as e-mails or chats, the average man interacts with only four people and the average woman with six. Among those Facebook users with 500 friends, these numbers are somewhat higher, but not hugely so. Men leave comments for 17 friends, women for 26. Men communicate with ten, women with 16.

What mainly goes up, therefore, is not the core network but the number of casual contacts that people track more passively. …

Put differently, people who are members of online social networks are not so much “networking” as they are “broadcasting their lives to an outer tier of acquaintances who aren’t necessarily inside the Dunbar circle,” says Lee Rainie, the director of the Pew Internet & American Life Project, a polling organisation.


Socioeconomic analysis of MySpace & Facebook

From danah boyd’s “Viewing American class divisions through Facebook and MySpace” (danah boyd: 24 June 2007):

When MySpace launched in 2003, it was primarily used by 20/30-somethings (just like Friendster before it). The bands began populating the site by early 2004 and throughout 2004, the average age slowly declined. It wasn’t until late 2004 that teens really started appearing en masse on MySpace and 2005 was the year that MySpace became the “in thing” for teens.

Facebook launched in 2004 as a Harvard-only site. It slowly expanded to welcome people with .edu accounts from a variety of different universities. In mid-2005, Facebook opened its doors to high school students, but it wasn’t that easy to get an account because you needed to be invited. As a result, those who were in college tended to invite those high school students that they liked. Facebook was strongly framed as the “cool” thing that college students did.

In addition to the college framing, the press coverage of MySpace as dangerous and sketchy alienated “good” kids. Facebook seemed to provide an ideal alternative. Parents weren’t nearly as terrified of Facebook because it seemed “safe” thanks to the network-driven structure.

She argues that class divisions in the United States have more to do with lifestyle and social stratification than with income. In other words, all of my anti-capitalist college friends who work in cafes and read Engels are not working class just because they make $14K a year and have no benefits. Class divisions in the United States have more to do with social networks (the real ones, not FB/MS), social capital, cultural capital, and attitudes than income. Not surprisingly, other demographics typically discussed in class terms are also a part of this lifestyle division. Social networks are strongly connected to geography, race, and religion; these are also huge factors in lifestyle divisions and thus “class.”

The goodie two shoes, jocks, athletes, or other “good” kids are now going to Facebook. These kids tend to come from families who emphasize education and going to college. They are part of what we’d call hegemonic society. They are primarily white, but not exclusively. They are in honors classes, looking forward to the prom, and live in a world dictated by after school activities.

MySpace is still home for Latino/Hispanic teens, immigrant teens, “burnouts,” “alternative kids,” “art fags,” punks, emos, goths, gangstas, queer kids, and other kids who didn’t play into the dominant high school popularity paradigm. These are kids whose parents didn’t go to college, who are expected to get a job when they finish high school. These are the teens who plan to go into the military immediately after school. Teens who are really into music or in a band are also on MySpace. MySpace has most of the kids who are socially ostracized at school because they are geeks, freaks, or queers.

In order to demarcate these two groups, let’s call the first group of teens “hegemonic teens” and the second group “subaltern teens.”

Most teens who exclusively use Facebook are familiar with and have an opinion about MySpace. These teens are very aware of MySpace and they often have a negative opinion about it. They see it as gaudy, immature, and “so middle school.” They prefer the “clean” look of Facebook, noting that it is more mature and that MySpace is “so lame.” What hegemonic teens call gaudy can also be labeled as “glitzy” or “bling” or “fly” (or what my generation would call “phat”) by subaltern teens. Terms like “bling” come out of hip-hop culture where showy, sparkly, brash visual displays are acceptable and valued. The look and feel of MySpace resonates far better with subaltern communities than it does with the upwardly mobile hegemonic teens. … That “clean” or “modern” look of Facebook is akin to West Elm or Pottery Barn or any poshy Scandinavian design house (that I admit I’m drawn to) while the more flashy look of MySpace resembles the Las Vegas imagery that attracts millions every year. I suspect that lifestyles have aesthetic values and that these are being reproduced on MySpace and Facebook.

I should note here that aesthetics do divide MySpace users. The look and feel that is acceptable amongst average Latino users is quite different from what you see the subculturally-identified outcasts using. Amongst the emo teens, there’s a push for simple black/white/grey backgrounds and simplistic layouts. While I’m using the term “subaltern teens” to lump together non-hegemonic teens, the lifestyle divisions amongst the subalterns are quite visible on MySpace through the aesthetic choices of the backgrounds. The aesthetics issue is also one of the forces that drives some longer-term users away from MySpace.

Teens from poorer backgrounds who are on MySpace are less likely to know people who go to universities. They are more likely to know people who are older than them, but most of their older friends, cousins, and co-workers are on MySpace. It’s the cool working class thing and it’s the dominant SNS at community colleges. These teens are more likely to be interested in activities like shows and clubs and they find out about them through MySpace. The subaltern teens who are better identified as “outsiders” in a hegemonic community tend to be very aware of Facebook. Their choice to use MySpace instead of Facebook is a rejection of the hegemonic values (and a lack of desire to hang out with the preps and jocks even online).

Class divisions in military use

A month ago, the military banned MySpace but not Facebook. This was a very interesting move because the division in the military reflects the division in high schools. Soldiers are on MySpace; officers are on Facebook. Facebook is extremely popular in the military, but it’s not the SNS of choice for 18-year old soldiers, a group that is primarily from poorer, less educated communities. They are using MySpace. The officers, many of whom have already received college training, are using Facebook. The military ban appears to replicate the class divisions that exist throughout the military. …

MySpace is the primary way that young soldiers communicate with their peers. When I first started tracking soldiers’ MySpace profiles, I had to take a long deep breath. Many of them were extremely pro-war, pro-guns, anti-Arab, anti-Muslim, pro-killing, and xenophobic as hell. Over the last year, I’ve watched more and more profiles emerge from soldiers who aren’t quite sure what they are doing in Iraq. I don’t have the data to confirm whether or not a significant shift has occurred but it was one of those observations that just made me think. And then the ban happened. I can’t help but wonder if part of the goal is to cut off communication between current soldiers and the group that the military hopes to recruit.

Thoughts and meta thoughts

People often ask me if I’m worried about teens today. The answer is yes, but it’s not because of social network sites. With the hegemonic teens, I’m very worried about the stress that they’re under, the lack of mobility and healthy opportunities for play and socialization, and the hyper-scheduling and surveillance. I’m worried about their unrealistic expectations for becoming rich and famous, their lack of work ethic after being pampered for so long, and the lack of opportunities that many of them have to even be economically stable let alone better off than their parents. I’m worried about how locking teens indoors coupled with a fast food/junk food advertising machine has resulted in a decrease in health levels across the board which will just get messy as they are increasingly unable to afford health insurance. When it comes to ostracized teens, I’m worried about the reasons why society has ostracized them and how they will react to ongoing criticism from hegemonic peers. I cringe every time I hear of another Columbine, another Virginia Tech, another site of horror when an outcast teen lashes back at the hegemonic values of society.

I worry about the lack of opportunities available to poor teens from uneducated backgrounds. I’m worried about how Wal-Mart Nation has destroyed many of the opportunities for meaningful working class labor as these youth enter the workforce. I’m worried about what a prolonged war will mean for them. I’m worried about how they’ve been told that to succeed, they must be a famous musician or sports player. I’m worried about how gangs provide the only meaningful sense of community that many of these teens will ever know.

Given the state of what I see in all sorts of neighborhoods, I’m amazed at how well teens are coping and I think that technology has a lot to do with that. Teens are using social network sites to build community and connect with their peers. They are creating publics for socialization. And through it, they are showcasing all of the good, bad, and ugly of today’s teen life.

In the 70s, Paul Willis analyzed British working class youth and he wrote a book called Learning to Labor: How Working Class Kids Get Working Class Jobs. He argued that working class teens will reject hegemonic values because it’s the only way to continue to be a part of the community that they live in. In other words, if you don’t know that you will succeed if you make a run at jumping class, don’t bother – you’ll lose all of your friends and community in the process. His analysis has such strong resonance in American society today. I just wish I knew how to fix it.


The end of Storm?

From “Storm Worm botnet cracked wide open” (Heise Security: 9 January 2009):

A team of researchers from Bonn University and RWTH Aachen University have analysed the notorious Storm Worm botnet, and concluded it certainly isn’t as invulnerable as it once seemed. Quite the reverse, for in theory it can be rapidly eliminated using software developed and at least partially disclosed by Georg Wicherski, Tillmann Werner, Felix Leder and Mark Schlösser. However, it seems that in practice the elimination process would fall foul of the law.

Over the last two years, Storm Worm has demonstrated how easily organised internet criminals have been able to spread this infection. During that period, the Storm Worm botnet has accumulated more than a million infected computers, known as drones or zombies, obeying the commands of a control server and using peer-to-peer techniques to locate new servers. Even following a big clean-up with Microsoft’s Malicious Software Removal Tool, around 100,000 drones probably still remain. That means the Storm Worm botnet is responsible for a considerable share of the Spam tsunami and for many distributed denial-of-service attacks. It’s astonishing that no one has succeeded in dismantling the network, but these researchers say it isn’t due to technical finesse on the part of the Storm Worm’s developers.

Existing knowledge of the techniques used by the Storm Worm has mainly been obtained by observing the behaviour of infected systems, but the researchers took a different approach to disarm it. They disassembled large parts of the machine code of the drone client program and analysed it, taking a particularly close look at the functions for communications between drones and with the server.

Using this background knowledge, they were able to develop their own client, which links itself into the peer-to-peer structure of a Storm Worm network in such a way that queries from other drones, looking for new command servers, can be reliably routed to it. That enables it to divert drones to a new server. The second step was to analyse the protocol for passing commands. The researchers were astonished to find that the server doesn’t have to authenticate itself to clients, so using their knowledge they were able to direct drones to a simple server. The latter could then issue commands to the test Storm worm drones in the laboratory so that, for example, they downloaded a specific program from a server, perhaps a special cleaning program, and ran it. The students then went on to write such a program.
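The missing server authentication is the crux of what the researchers exploited. A toy model of the difference (hypothetical function names and a made-up key; this is not Storm's real wire protocol) between a drone that obeys anyone and one that demands proof of the operator's key:

```python
import hashlib
import hmac

SECRET = b"operator-key"  # what a *fixed* protocol would rely on

def drone_execute_unauthenticated(command: str) -> str:
    # Storm-style behaviour: whoever answers the P2P lookup is obeyed,
    # which is why a researcher-run server could issue commands.
    return f"executed: {command}"

def drone_execute_authenticated(command: str, tag: str) -> str:
    # The defence Storm lacked: require an HMAC that only the holder of
    # the operator's key can produce, and compare in constant time.
    expected = hmac.new(SECRET, command.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return "rejected: bad signature"
    return f"executed: {command}"

# A sinkhole server can command the first drone...
print(drone_execute_unauthenticated("download cleaner.exe"))
# ...but not the second, without the operator's key.
print(drone_execute_authenticated("download cleaner.exe", "bogus"))
```

In Storm's case the absence of any such check meant the researchers' laboratory server could make test drones download and run an arbitrary program, including a cleaning tool.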

The team has not yet taken the final step of putting the whole thing into action with a genuine Storm Worm botnet in the wild. From a legal point of view, that could involve many problems. Any unauthorised access to third-party computers could be regarded as tampering with data, which is punishable under § 303a of the German Penal Code. That section threatens up to two years’ imprisonment for unlawfully deleting, suppressing, making unusable or changing third-party data. This legal process would, however, only come into effect if there were a criminal complaint from an injured party, or if there were special public interest in the prosecution of the crime.

Besides risks of coming up against the criminal law, there is also a danger of civil claims for damages by the owners of infected PCs, because the operation might cause collateral damage. There are almost certain to be configurations in which the cleaning goes wrong, perhaps disabling computers so they won’t run any more. Botnet operators could also be expected to strike back, causing further damage.


Three top botnets

From Kelly Jackson Higgins’ “The World’s Biggest Botnets” (Dark Reading: 9 November 2007):

You know about the Storm Trojan, which is spread by the world’s largest botnet. But what you may not know is there’s now a new peer-to-peer based botnet emerging that could blow Storm away.

“We’re investigating a new peer-to-peer botnet that may wind up rivaling Storm in size and sophistication,” says Tripp Cox, vice president of engineering for startup Damballa, which tracks botnet command and control infrastructures. “We can’t say much more about it, but we can tell it’s distinct from Storm.”

Researchers estimate that there are thousands of botnets in operation today, but only a handful stand out by their sheer size and pervasiveness. Although size gives a botnet muscle and breadth, it can also make it too conspicuous, which is why botnets like Storm fluctuate in size and are constantly finding new ways to cover their tracks to avoid detection. Researchers have different head counts for different botnets, with Storm by far the largest (for now, anyway).

Damballa says its top three botnets are Storm, with 230,000 active members per 24 hour period; Rbot, an IRC-based botnet with 40,000 active members per 24 hour period; and Bobax, an HTTP-based botnet with 24,000 active members per 24 hour period.

1. Storm

Size: 230,000 active members per 24 hour period

Type: peer-to-peer

Purpose: Spam, DDoS

Malware: Trojan.Peacomm (aka Nuwar)

Few researchers can agree on Storm’s actual size — while Damballa says it’s over 200,000 bots, Trend Micro says it’s more like 40,000 to 100,000 today. But all researchers say that Storm is a whole new brand of botnet. First, it uses encrypted decentralized, peer-to-peer communication, unlike the traditional centralized IRC model. That makes it tough to kill because you can’t necessarily shut down its command and control machines. And intercepting Storm’s traffic requires cracking the encrypted data.

Storm also uses fast-flux, a round-robin method where infected bot machines (typically home computers) serve as proxies or hosts for malicious Websites. These are constantly rotated, changing their DNS records to prevent their discovery by researchers, ISPs, or law enforcement. And researchers say it’s tough to tell how the command and control communication structure is set up behind the P2P botnet. “Nobody knows how the mother ships are generating their C&C,” Trend Micro’s Ferguson says.
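The rotation trick itself is simple to picture. A hedged sketch of round-robin A-record rotation (documentation-range IPs and hypothetical names, not any real botnet's code): the domain's answers are drawn from a large pool of infected proxies and served with a short TTL, so each fresh lookup hands out a different slice of the pool.

```python
import itertools

# Pool of infected home machines acting as "flux agents" (placeholder IPs).
PROXY_POOL = [f"192.0.2.{i}" for i in range(1, 31)]

def flux_records(step: int, per_response: int = 5, ttl: int = 180):
    """Return the A records served during rotation window `step`.

    Round-robin over the pool: consecutive TTL windows expose different
    proxies, so blocking the IPs seen in any one window achieves little.
    """
    start = (step * per_response) % len(PROXY_POOL)
    ring = itertools.islice(itertools.cycle(PROXY_POOL), start, start + per_response)
    return {"ttl": ttl, "a_records": list(ring)}

# Two consecutive TTL windows hand out disjoint sets of proxies.
print(flux_records(0)["a_records"])  # 192.0.2.1 .. 192.0.2.5
print(flux_records(1)["a_records"])  # 192.0.2.6 .. 192.0.2.10
```

Real fast-flux networks add randomization, health-checks on the agents, and sometimes "double flux" (rotating the authoritative name servers as well), but the takedown problem is already visible in this minimal form.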

Storm uses a complex combination of malware called Peacomm that includes a worm, rootkit, spam relay, and Trojan.

But researchers don’t know — or can’t say — who exactly is behind Storm, except that it’s likely a fairly small, tightly knit group with a clear business plan. “All roads lead back to Russia,” Trend Micro’s Ferguson says.

“Storm is the only thing now that keeps me awake at night and busy,” he says. “It’s professionalized crimeware… They have young, talented programmers apparently. And they write tools to do administrative [tracking], as well as writing cryptographic routines… and another will handle social engineering, and another will write the Trojan downloader, and another is writing the rootkit.”

2. Rbot

Size: 40,000 active members per 24-hour period

Type: IRC

Purpose: DDOS, spam, malicious operations

Malware: Windows worm

Rbot is basically an old-school IRC botnet that uses the Rbot malware kit. It isn’t likely to ever reach Storm’s size because IRC botnets simply can’t scale that far. “An IRC server has to be a beefy machine to support anything anywhere close to the size of Peacomm/Storm,” Damballa’s Cox says.

It can disable antivirus software, too. Rbot’s underlying malware uses a backdoor to gain control of the infected machine, installing keyloggers and viruses, and even stealing files from the machine, in addition to the usual spam and DDOS attacks.

3. Bobax

Size: 24,000 active members per 24-hour period

Type: HTTP

Purpose: Spam

Malware: Mass-mailing worm

Bobax is specifically for spamming, Cox says, and uses the stealthier HTTP for sending instructions to its bots on who and what to spam. …

According to Symantec, Bobax bores open a back door and downloads files onto the infected machine, and lowers its security settings. It spreads via a buffer overflow vulnerability in Windows, and inserts the spam code into the IE browser so that each time the browser runs, the virus is activated. And Bobax also does some reconnaissance to ensure that its spam runs are efficient: It can do bandwidth and network analysis to determine just how much spam it can send, according to Damballa. “Thus [they] are able to tailor their spamming so as not to tax the network, which helps them avoid detection,” according to company research.

Even more frightening, though, is that some Bobax variants can block access to antivirus and security vendor Websites, a new trend in Website exploitation.

Three top botnets Read More »

The future of security

From Bruce Schneier’s “Security in Ten Years” (Crypto-Gram: 15 December 2007):

Bruce Schneier: … The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.

Marcus Ranum: … Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.

Bruce Schneier: … I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

The future of security Read More »

My new book – Google Apps Deciphered – is out!

I’m really proud to announce that my 5th book is now out & available for purchase: Google Apps Deciphered: Compute in the Cloud to Streamline Your Desktop. My other books include:

(I’ve also contributed to two others: Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux and Microsoft Vista for IT Security Professionals.)

Google Apps Deciphered is a guide to setting up Google Apps, migrating to it, customizing it, and using it to improve productivity, communications, and collaboration. I walk you through each leading component of Google Apps individually, and then show my readers exactly how to make them work together for you on the Web or by integrating them with your favorite desktop apps. I provide practical insights on Google Apps programs for email, calendaring, contacts, wikis, word processing, spreadsheets, presentations, video, and even Google’s new web browser Chrome. My aim was to collect and present tips and tricks I’ve gained by using and setting up Google Apps for clients, family, and friends.

Here’s the table of contents:

  • 1: Choosing an Edition of Google Apps
  • 2: Setting Up Google Apps
  • 3: Migrating Email to Google Apps
  • 4: Migrating Contacts to Google Apps
  • 5: Migrating Calendars to Google Apps
  • 6: Managing Google Apps Services
  • 7: Setting Up Gmail
  • 8: Things to Know About Using Gmail
  • 9: Integrating Gmail with Other Software and Services
  • 10: Integrating Google Contacts with Other Software and Services
  • 11: Setting Up Google Calendar
  • 12: Things to Know About Using Google Calendar
  • 13: Integrating Google Calendar with Other Software and Services
  • 14: Things to Know About Using Google Docs
  • 15: Integrating Google Docs with Other Software and Services
  • 16: Setting Up Google Sites
  • 17: Things to Know About Using Google Sites
  • 18: Things to Know About Using Google Talk
  • 19: Things to Know About Using Start Page
  • 20: Things to Know About Using Message Security and Recovery
  • 21: Things to Know About Using Google Video
  • Appendix A: Backing Up Google Apps
  • Appendix B: Dealing with Multiple Accounts
  • Appendix C: Google Chrome: A Browser Built for Cloud Computing

If you want to know more about Google Apps and how to use it, then I know you’ll enjoy and learn from Google Apps Deciphered. You can read about and buy the book at Amazon (http://www.amazon.com/Google-Apps-Deciphered-Compute-Streamline/dp/0137004702) for $26.39. If you have any questions or comments, don’t hesitate to contact me at scott at granneman dot com.

My new book – Google Apps Deciphered – is out! Read More »

Bruce Schneier on wholesale, constant surveillance

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.

Bruce Schneier on wholesale, constant surveillance Read More »

The NSA and threats to privacy

From James Bamford’s “Big Brother Is Listening” (The Atlantic: April 2006):

This legislation, the 1978 Foreign Intelligence Surveillance Act, established the FISA court—made up of eleven judges handpicked by the chief justice of the United States—as a secret part of the federal judiciary. The court’s job is to decide whether to grant warrants requested by the NSA or the FBI to monitor communications of American citizens and legal residents. The law allows the government up to three days after it starts eavesdropping to ask for a warrant; every violation of FISA carries a penalty of up to five years in prison. From May 18, 1979, when the court opened for business, until the end of 2004, it granted 18,742 NSA and FBI applications; it turned down only four outright.

Such facts worry Jonathan Turley, a George Washington University law professor who worked for the NSA as an intern while in law school in the 1980s. The FISA “courtroom,” hidden away on the top floor of the Justice Department building (because even its location is supposed to be secret), is actually a heavily protected, windowless, bug-proof installation known as a Sensitive Compartmented Information Facility, or SCIF.

It is true that the court has been getting tougher. From 1979 through 2000, it modified only two out of 13,087 warrant requests. But from the start of the Bush administration, in 2001, the number of modifications increased to 179 out of 5,645 requests. Most of those—173—involved what the court terms “substantive modifications.”
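The shift is easy to quantify from the figures above (a back-of-the-envelope calculation, not from the article):

```python
# Figures quoted above: 2 modifications out of 13,087 warrant requests
# (1979 through 2000) versus 179 out of 5,645 (from 2001 on).
early_rate = 2 / 13_087
later_rate = 179 / 5_645

print(f"1979-2000: {early_rate:.4%} of requests modified")
print(f"2001 on:   {later_rate:.4%} of requests modified")
print(f"increase:  roughly {later_rate / early_rate:.0f}x")
```

The modification rate jumped by a factor of roughly two hundred, which is what “getting tougher” looks like in the numbers.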

Contrary to popular perception, the NSA does not engage in “wiretapping”; it collects signals intelligence, or “sigint.” In contrast to the image we have from movies and television of an FBI agent placing a listening device on a target’s phone line, the NSA intercepts entire streams of electronic communications containing millions of telephone calls and e-mails. It runs the intercepts through very powerful computers that screen them for particular names, telephone numbers, Internet addresses, and trigger words or phrases. Any communications containing flagged information are forwarded by the computer for further analysis.
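In spirit, that screening step reduces to a bulk filter over intercepted text. The sketch below is only an illustration of the concept; the watch-list terms and messages are invented, and the real systems are classified and vastly more sophisticated:

```python
# Hypothetical watch list: flagged names, numbers, and trigger phrases.
WATCH_LIST = {"microturbo", "c-802", "+98-21-555-0100"}  # invented examples

def flag_for_analysis(messages):
    """Return only the intercepts containing a watch-listed term;
    everything else is discarded without human review."""
    flagged = []
    for msg in messages:
        text = msg.lower()
        if any(term in text for term in WATCH_LIST):
            flagged.append(msg)
    return flagged

intercepts = [
    "Invoice attached for the turbine order",
    "Letter of credit issued to Microturbo re: engine",
]
print(flag_for_analysis(intercepts))  # only the second message is forwarded
```

The point of the design is volume: computers discard the overwhelming majority of the stream so that analysts see only the matches.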

Names and information on the watch lists are shared with the FBI, the CIA, the Department of Homeland Security, and foreign intelligence services. Once a person’s name is in the files, even if nothing incriminating ever turns up, it will likely remain there forever. There is no way to request removal, because there is no way to confirm that a name is on the list.

In December of 1997, in a small factory outside the southern French city of Toulouse, a salesman got caught in the NSA’s electronic web. Agents working for the NSA’s British partner, the Government Communications Headquarters, learned of a letter of credit, valued at more than $1.1 million, issued by Iran’s defense ministry to the French company Microturbo. According to NSA documents, both the NSA and the GCHQ concluded that Iran was attempting to secretly buy from Microturbo an engine for the embargoed C-802 anti-ship missile. Faxes zapping back and forth between Toulouse and Tehran were intercepted by the GCHQ, which sent them on not just to the NSA but also to the Canadian and Australian sigint agencies, as well as to Britain’s MI6. The NSA then sent the reports on the salesman making the Iranian deal to a number of CIA stations around the world, including those in Paris and Bonn, and to the U.S. Commerce Department and the Customs Service. Probably several hundred people in at least four countries were reading the company’s communications.

Such events are central to the current debate involving the potential harm caused by the NSA’s warrantless domestic eavesdropping operation. Even though the salesman did nothing wrong, his name made its way into the computers and onto the watch lists of intelligence, customs, and other secret and law-enforcement organizations around the world. Maybe nothing will come of it. Maybe the next time he tries to enter the United States or Britain he will be denied, without explanation. Maybe he will be arrested. As the domestic eavesdropping program continues to grow, such uncertainties may plague innocent Americans whose names are being run through the supercomputers even though the NSA has not met the established legal standard for a search warrant. It is only when such citizens are turned down while applying for a job with the federal government—or refused when seeking a Small Business Administration loan, or turned back by British customs agents when flying to London on vacation, or even placed on a “no-fly” list—that they will realize that something is very wrong. But they will never learn why.

General Michael Hayden, director of the NSA from 1999 to 2005 and now principal deputy director of national intelligence, noted in 2002 that during the 1990s, e-communications “surpassed traditional communications. That is the same decade when mobile cell phones increased from 16 million to 741 million—an increase of nearly 50 times. That is the same decade when Internet users went from about 4 million to 361 million—an increase of over 90 times. Half as many land lines were laid in the last six years of the 1990s as in the whole previous history of the world. In that same decade of the 1990s, international telephone traffic went from 38 billion minutes to over 100 billion. This year, the world’s population will spend over 180 billion minutes on the phone in international calls alone.”

Intercepting communications carried by satellite is fairly simple for the NSA. The key conduits are the thirty Intelsat satellites that ring the Earth, 22,300 miles above the equator. Many communications from Europe, Africa, and the Middle East to the eastern half of the United States, for example, are first uplinked to an Intelsat satellite and then downlinked to AT&T’s ground station in Etam, West Virginia. From there, phone calls, e-mails, and other communications travel on to various parts of the country. To listen in on that rich stream of information, the NSA built a listening post fifty miles away, near Sugar Grove, West Virginia. Consisting of a group of very large parabolic dishes, hidden in a heavily forested valley and surrounded by tall hills, the post can easily intercept the millions of calls and messages flowing every hour into the Etam station. On the West Coast, high on the edge of a bluff overlooking the Okanogan River, near Brewster, Washington, is the major commercial downlink for communications to and from Asia and the Pacific. Consisting of forty parabolic dishes, it is reportedly the largest satellite antenna farm in the Western Hemisphere. A hundred miles to the south, collecting every whisper, is the NSA’s western listening post, hidden away on a 324,000-acre Army base in Yakima, Washington. The NSA posts collect the international traffic beamed down from the Intelsat satellites over the Atlantic and Pacific. But each also has a number of dishes that appear to be directed at domestic telecommunications satellites.

Until recently, most international telecommunications flowing into and out of the United States traveled by satellite. But faster, more reliable undersea fiber-optic cables have taken the lead, and the NSA has adapted. The agency taps into the cables that don’t reach our shores by using specially designed submarines, such as the USS Jimmy Carter, to attach a complex “bug” to the cable itself. This is difficult, however, and undersea taps are short-lived because the batteries last only a limited time. The fiber-optic transmission cables that enter the United States from Europe and Asia can be tapped more easily at the landing stations where they come ashore. With the acquiescence of the telecommunications companies, it is possible for the NSA to attach monitoring equipment inside the landing station and then run a buried encrypted fiber-optic “backhaul” line to NSA headquarters at Fort Meade, Maryland, where the river of data can be analyzed by supercomputers in near real time.

Tapping into the fiber-optic network that carries the nation’s Internet communications is even easier, as much of the information transits through just a few “switches” (similar to the satellite downlinks). Among the busiest are MAE East (Metropolitan Area Ethernet), in Vienna, Virginia, and MAE West, in San Jose, California, both owned by Verizon. By accessing the switch, the NSA can see who’s e-mailing with whom over the Internet cables and can copy entire messages. Last September, the Federal Communications Commission further opened the door for the agency. The 1994 Communications Assistance for Law Enforcement Act required telephone companies to rewire their networks to provide the government with secret access. The FCC has now extended the act to cover “any type of broadband Internet access service” and the new Internet phone services—and ordered company officials never to discuss any aspect of the program.

The National Security Agency was born in absolute secrecy. Unlike the CIA, which was created publicly by a congressional act, the NSA was brought to life by a top-secret memorandum signed by President Truman in 1952, consolidating the country’s various military sigint operations into a single agency. Even its name was secret, and only a few members of Congress were informed of its existence—and they received no information about some of its most important activities. Such secrecy has lent itself to abuse.

During the Vietnam War, for instance, the agency was heavily involved in spying on the domestic opposition to the government. Many of the Americans on the watch lists of that era were there solely for having protested against the war. … Even so much as writing about the NSA could land a person a place on a watch list.

For instance, during World War I, the government read and censored thousands of telegrams—the e-mail of the day—sent hourly by telegraph companies. Though the end of the war brought with it a reversion to the Radio Act of 1912, which guaranteed the secrecy of communications, the State and War Departments nevertheless joined together in May of 1919 to create America’s first civilian eavesdropping and code-breaking agency, nicknamed the Black Chamber. By arrangement, messengers visited the telegraph companies each morning and took bundles of hard-copy telegrams to the agency’s offices across town. These copies were returned before the close of business that day.

A similar tale followed the end of World War II. In August of 1945, President Truman ordered an end to censorship. That left the Signal Security Agency (the military successor to the Black Chamber, which was shut down in 1929) without its raw intelligence—the telegrams provided by the telegraph companies. The director of the SSA sought access to cable traffic through a secret arrangement with the heads of the three major telegraph companies. The companies agreed to turn all telegrams over to the SSA, under a plan code-named Operation Shamrock. It ran until the government’s domestic spying programs were publicly revealed, in the mid-1970s.

Frank Church, the Idaho Democrat who led the first probe into the National Security Agency, warned in 1975 that the agency’s capabilities

“could be turned around on the American people, and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it is done, is within the reach of the government to know. Such is the capacity of this technology.”

The NSA and threats to privacy Read More »

The life cycle of a botnet client

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

What makes a botnet a botnet? In particular, how do you distinguish a botnet client from just another hacker break-in? First, the clients in a botnet must be able to take actions on the client without the hacker having to log into the client’s operating system (Windows, UNIX, or Mac OS). Second, many clients must be able to act in a coordinated fashion to accomplish a common goal with little or no intervention from the hacker. If a collection of computers meets these criteria, it is a botnet.

The life of a botnet client, or botclient, begins when it has been exploited. A prospective botclient can be exploited via malicious code that a user is tricked into running; attacks against unpatched vulnerabilities; backdoors left by Trojan worms or remote access Trojans; and password guessing and brute force access attempts. In this section we’ll discuss each of these methods of exploitation.

Rallying and Securing the Botnet Client

Although the order in the life cycle may vary, at some point early in the life of a new botnet client it must call home, a process called “rallying.” When rallying, the botnet client initiates contact with the botnet Command and Control (C&C) Server. Currently, most botnets use IRC for Command and Control.

Rallying is the term given to the first time a botnet client logs in to a C&C server. The login may use some form of encryption or authentication to limit the ability of others to eavesdrop on the communications. Some botnets are beginning to encrypt the communicated data.

At this point the new botnet client may request updates. The updates could be updated exploit software, an updated list of C&C server names, IP addresses, and/or channel names. This will assure that the botnet client can be managed and can be recovered should the current C&C server be taken offline.

The next order of business is to secure the new client from removal. The client can request the location of the latest anti-antivirus (Anti-A/V) tool from the C&C server. The newly controlled botclient would download this software and execute it to remove the A/V tool, hide from it, or render it ineffective.

Shutting off the A/V tool may raise suspicions if the user is observant. Some botclients will run a DLL that neuters the A/V tool. With an Anti-A/V DLL in place, the A/V tool may appear to be working normally except that it never detects or reports the files related to the botnet client. It may also change the Hosts file and LMHosts file so that attempts to contact an A/V vendor for updates will not succeed. Using this method, attempts to contact an A/V vendor can be redirected to a site containing malicious code or can yield a “website or server not found” error.
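The Hosts-file trick also suggests its own countermeasure: check whether known A/V vendor domains have been pinned to bogus addresses. This is a minimal defensive sketch, not a real product; the vendor list and sample file contents are illustrative assumptions:

```python
# Domains a defender would expect NOT to appear in a clean Hosts file.
AV_DOMAINS = {"update.symantec.com", "download.mcafee.com"}  # illustrative

def tampered_entries(hosts_text):
    """Return (ip, hostname) pairs that hijack an A/V vendor domain."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, hostnames = parts[0], parts[1:]
        for host in hostnames:
            if host.lower() in AV_DOMAINS:
                hits.append((ip, host))
    return hits

sample = "127.0.0.1 localhost\n203.0.113.9 update.symantec.com\n"
print(tampered_entries(sample))  # [('203.0.113.9', 'update.symantec.com')]
```

A legitimate Hosts file has no reason to mention an A/V vendor at all, so any hit is a strong signal of the neutering described above.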

One tool, hidden32.exe, is used to hide applications that have a GUI from the user. Its use is simple: the botherder creates a batch file that executes hidden32 with the name of the executable to be hidden as its parameter. Another stealthy tool, HideUserv2, adds an invisible user to the administrator group.

Waiting for Orders and Retrieving the Payload

Once secured, the botnet client will listen to the C&C communications channel.

The botnet client will then request the associated payload. The payload is the term I give to the software representing the intended function of this botnet client.

The life cycle of a botnet client Read More »

How the Greek cell phone network was compromised

From Vassilis Prevelakis and Diomidis Spinellis’ “The Athens Affair” (IEEE Spectrum: July 2007):

On 9 March 2005, a 38-year-old Greek electrical engineer named Costas Tsalikidis was found hanged in his Athens loft apartment, an apparent suicide. It would prove to be merely the first public news of a scandal that would roil Greece for months.

The next day, the prime minister of Greece was told that his cellphone was being bugged, as were those of the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.

The victims were customers of Athens-based Vodafone-Panafon, generally known as Vodafone Greece, the country’s largest cellular service provider; Tsalikidis was in charge of network planning at the company.

We now know that the illegally implanted software, which was eventually found in a total of four of Vodafone’s Greek switches, created parallel streams of digitized voice for the tapped phone calls. One stream was the ordinary one, between the two calling parties. The other stream, an exact copy, was directed to other cellphones, allowing the tappers to listen in on the conversations on the cellphones, and probably also to record them. The software also routed location and other information about those phone calls to these shadow handsets via automated text messages.

The day after Tsalikidis’s body was discovered, Vodafone Greece CEO Koronias met with the director of the Greek prime minister’s political office, Yiannis Angelou, and the minister of public order, Giorgos Voulgarakis. Koronias told them that rogue software used the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones and handed over a list of bugged numbers. Besides the prime minister and his wife, phones belonging to the ministers of national defense, foreign affairs, and justice, the mayor of Athens, and the Greek European Union commissioner were all compromised. Others belonged to members of civil rights organizations, peace activists, and antiglobalization groups; senior staff at the ministries of National Defense, Public Order, Merchant Marine, and Foreign Affairs; the New Democracy ruling party; the Hellenic Navy general staff; and a Greek-American employee at the United States Embassy in Athens.

First, consider how a phone call, yours or a prime minister’s, gets completed. Long before you dial a number on your handset, your cellphone has been communicating with nearby cellular base stations. One of those stations, usually the nearest, has agreed to be the intermediary between your phone and the network as a whole. Your telephone handset converts your words into a stream of digital data that is sent to a transceiver at the base station.

The base station’s activities are governed by a base station controller, a special-purpose computer within the station that allocates radio channels and helps coordinate handovers between the transceivers under its control.

This controller in turn communicates with a mobile switching center that takes phone calls and connects them to call recipients within the same switching center, other switching centers within the company, or special exchanges that act as gateways to foreign networks, routing calls to other telephone networks (mobile or landline). The mobile switching centers are particularly important to the Athens affair because they hosted the rogue phone-tapping software, and it is there that the eavesdropping originated. They were the logical choice, because they are at the heart of the network; the intruders needed to take over only a few of them in order to carry out their attack.

Both the base station controllers and the switching centers are built around a large computer, known as a switch, capable of creating a dedicated communications path between a phone within its network and, in principle, any other phone in the world. Switches are holdovers from the 1970s, an era when powerful computers filled rooms and were built around proprietary hardware and software. Though these computers are smaller nowadays, the system’s basic architecture remains largely unchanged.

Like most phone companies, Vodafone Greece uses the same kind of computer for both its mobile switching centers and its base station controllers—Ericsson’s AXE line of switches. A central processor coordinates the switch’s operations and directs the switch to set up a speech or data path from one phone to another and then routes a call through it. Logs of network activity and billing records are stored on disk by a separate unit, called a management processor.

The key to understanding the hack at the heart of the Athens affair is knowing how the Ericsson AXE allows lawful intercepts—what are popularly called “wiretaps.” Though the details differ from country to country, in Greece, as in most places, the process starts when a law enforcement official goes to a court and obtains a warrant, which is then presented to the phone company whose customer is to be tapped.

Nowadays, all wiretaps are carried out at the central office. In AXE exchanges a remote-control equipment subsystem, or RES, carries out the phone tap by monitoring the speech and data streams of switched calls. It is a software subsystem typically used for setting up wiretaps, which only law officers are supposed to have access to. When the wiretapped phone makes a call, the RES copies the conversation into a second data stream and diverts that copy to a phone line used by law enforcement officials.

Ericsson optionally provides an interception management system (IMS), through which lawful call intercepts are set up and managed. When a court order is presented to the phone company, its operators initiate an intercept by filling out a dialog box in the IMS software. The optional IMS in the operator interface and the RES in the exchange each contain a list of wiretaps: wiretap requests in the case of the IMS, actual taps in the RES. Only IMS-initiated wiretaps should be active in the RES, so a wiretap in the RES without a request for a tap in the IMS is a pretty good indicator that an unauthorized tap has occurred. An audit procedure can be used to find any discrepancies between them.
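Conceptually, that audit is just a set difference: any tap active in the exchange (RES) without a matching request in the management system (IMS) is a red flag. A sketch in Python, with invented placeholder numbers:

```python
def unauthorized_taps(res_active, ims_requested):
    """Taps running in the switch that no lawful request accounts for."""
    return sorted(set(res_active) - set(ims_requested))

# Invented placeholder numbers, not real subscribers.
ims = {"+30-69-0000-0001", "+30-69-0000-0002"}                     # warranted
res = {"+30-69-0000-0001", "+30-69-0000-0002", "+30-69-0000-0099"}  # active

print(unauthorized_taps(res, ims))  # ['+30-69-0000-0099']
```

The audit only works, of course, if the RES list it reads is honest, which is precisely what the Athens intruders attacked.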

It took guile and some serious programming chops to manipulate the lawful call-intercept functions in Vodafone’s mobile switching centers. The intruders’ task was particularly complicated because they needed to install and operate the wiretapping software on the exchanges without being detected by Vodafone or Ericsson system administrators. From time to time the intruders needed access to the rogue software to update the lists of monitored numbers and shadow phones. These activities had to be kept off all logs, while the software itself had to be invisible to the system administrators conducting routine maintenance activities. The intruders achieved all these objectives.

The challenge faced by the intruders was to use the RES’s capabilities to duplicate and divert the bits of a call stream without using the dialog-box interface to the IMS, which would create auditable logs of their activities. The intruders pulled this off by installing a series of patches to 29 separate blocks of code, according to Ericsson officials who testified before the Greek parliamentary committee that investigated the wiretaps. This rogue software modified the central processor’s software to directly initiate a wiretap, using the RES’s capabilities. Best of all, for them, the taps were not visible to the operators, because the IMS and its user interface weren’t used.

The full version of the software would have recorded the phone numbers being tapped in an official registry within the exchange. And, as we noted, an audit could then find a discrepancy between the numbers monitored by the exchange and the warrants active in the IMS. But the rogue software bypassed the IMS. Instead, it cleverly stored the bugged numbers in two data areas that were part of the rogue software’s own memory space, which was within the switch’s memory but isolated and not made known to the rest of the switch.

That by itself put the rogue software a long way toward escaping detection. But the perpetrators hid their own tracks in a number of other ways as well. There were a variety of circumstances by which Vodafone technicians could have discovered the alterations to the AXE’s software blocks. For example, they could have taken a listing of all the blocks, which would show all the active processes running within the AXE—similar to the task manager output in Microsoft Windows or the process status (ps) output in Unix. They then would have seen that some processes were active, though they shouldn’t have been. But the rogue software apparently modified the commands that list the active blocks in a way that omitted certain blocks—the ones that related to intercepts—from any such listing.
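The listing trick is a classic rootkit move: tamper with the reporting command rather than hide the processes themselves. A toy illustration with invented block names (the real AXE commands and block names are not public):

```python
HIDDEN_BLOCKS = {"RES_TAP", "SHADOW_FWD"}  # hypothetical rogue block names

def list_blocks(all_blocks):
    """Honest listing command: reports every active block."""
    return list(all_blocks)

def tampered_list_blocks(all_blocks):
    """Tampered listing command: silently omits the rogue blocks, the way
    the Athens software filtered intercept-related blocks from the
    operators' view."""
    return [b for b in all_blocks if b not in HIDDEN_BLOCKS]

active = ["CALL_ROUTING", "BILLING", "RES_TAP", "SHADOW_FWD"]
print(list_blocks(active))           # shows all four blocks
print(tampered_list_blocks(active))  # ['CALL_ROUTING', 'BILLING']
```

An operator running the tampered command sees a perfectly normal system; only comparing the listing against an independent, trusted view of memory would reveal the gap.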

In addition, the rogue software might have been discovered during a software upgrade or even when Vodafone technicians installed a minor patch. It is standard practice in the telecommunications industry for technicians to verify the existing block contents before performing an upgrade or patch. We don’t know why the rogue software was not detected in this way, but we suspect that the software also modified the operation of the command used to print the checksums—codes that create a kind of signature against which the integrity of the existing blocks can be validated. One way or another, the blocks appeared unaltered to the operators.
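The checksum workflow the technicians relied on can be sketched in a few lines. This uses SHA-256 purely for illustration; the AXE used its own checksum scheme, and the point is the trust problem, not the algorithm:

```python
import hashlib

def block_checksum(code: bytes) -> str:
    """Integrity signature for a software block (SHA-256 for this sketch)."""
    return hashlib.sha256(code).hexdigest()

# Technician workflow: record known-good checksums, then compare the
# current block contents against them before any upgrade or patch.
known_good = {"call_handler": block_checksum(b"original code")}

def verify(name: str, current_code: bytes) -> bool:
    return block_checksum(current_code) == known_good[name]

print(verify("call_handler", b"original code"))  # True
print(verify("call_handler", b"patched code"))   # False: tampering visible
```

The catch, as the article suspects happened in Athens: if the command that computes and prints these values is itself patched to report the recorded known-good values, the comparison always passes and the check is worthless. Integrity checks are only as trustworthy as the tool performing them.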

Finally, the software included a back door to allow the perpetrators to control it in the future. This, too, was cleverly constructed to avoid detection. A report by the Hellenic Authority for the Information and Communication Security and Privacy (the Greek abbreviation is ADAE) indicates that the rogue software modified the exchange’s command parser—a routine that accepts commands from a person with system administrator status—so that innocuous commands followed by six spaces would deactivate the exchange’s transaction log and the alarm associated with its deactivation, and allow the execution of commands associated with the lawful interception subsystem. In effect, it was a signal to allow operations associated with the wiretaps but leave no trace of them. It also added a new user name and password to the system, which could be used to obtain access to the exchange.
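The parser backdoor described in the ADAE report can be modeled as a toy command loop: an otherwise innocuous command carrying six trailing spaces flips hidden state before normal processing runs. Everything here is a simplified sketch, not the actual AXE parser:

```python
BACKDOOR_SUFFIX = " " * 6  # six trailing spaces, per the ADAE report

def parse_command(raw: str, state: dict) -> str:
    """Toy command parser with a hypothetical backdoor: commands ending in
    six spaces silently disable the transaction log (and its alarm) and
    unlock intercept-related operations before the command is handled."""
    if raw.endswith(BACKDOOR_SUFFIX):
        state["logging"] = False       # transaction log deactivated
        state["alarm"] = False         # deactivation alarm suppressed
        state["intercept_ops"] = True  # intercept commands now permitted
    cmd = raw.strip()
    if state.get("logging", True):
        state.setdefault("log", []).append(cmd)
    return f"executed: {cmd}"

state = {}
parse_command("show status" + BACKDOOR_SUFFIX, state)
print(state["logging"], state["intercept_ops"])  # False True
print(state.get("log"))                          # None: nothing was logged
```

Because the trigger is trailing whitespace on a legitimate command, the backdoor is invisible in casual review of command history, and the very command that opens it leaves no log entry.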

…Security experts have also discovered other rootkits for general-purpose operating systems, such as Linux, Windows, and Solaris, but to our knowledge this is the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch.

So the investigators painstakingly reconstructed an approximation of the original PLEX source files that the intruders developed. It turned out to be the equivalent of about 6500 lines of code, a surprisingly substantial piece of software.

How the Greek cell phone network was compromised Read More »

9 reasons the Storm botnet is different

From Bruce Schneier’s “Gathering ‘Storm’ Superworm Poses Grave Threat to PC Nets” (Wired: 4 October 2007):

Storm represents the future of malware. Let’s look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. …

3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. …

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. …

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect. …

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” …

6. Storm’s payload — the code it uses to spread — morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.

7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites — anything to entice users to click on a phony link. …

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. …

9. Last month, Storm began attacking anti-spam sites focused on identifying it — spamhaus.org, 419eater and so on — and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
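Point 5's fast-flux technique works by answering DNS queries for a C2 domain with short-TTL records that rotate through many compromised hosts. One common detection heuristic is therefore simple: repeatedly resolve a name and count how many distinct addresses it maps to. A sketch over hypothetical logged lookups (thresholds and domains are illustrative, and real detectors also weigh TTLs and AS diversity):

```python
from collections import defaultdict

def flux_score(observations):
    """Rough fast-flux heuristic: distinct IPs seen per domain across
    repeated lookups. A legitimate host stays on a few addresses; a
    fast-flux name cycles through many."""
    ips = defaultdict(set)
    for domain, ip in observations:
        ips[domain].add(ip)
    return {domain: len(addrs) for domain, addrs in ips.items()}

# Hypothetical lookups logged over an hour of monitoring.
seen = [("static.example.com", "198.51.100.7")] * 10 + [
    ("flux.example.net", f"203.0.113.{i}") for i in range(10)
]
print(flux_score(seen))  # {'static.example.com': 1, 'flux.example.net': 10}
```

The cat-and-mouse dynamic Schneier describes applies here too: once defenders score on address churn, botnets respond by slowing rotation or fluxing the nameservers themselves ("double flux").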



The Chinese Internet threat

From Shane Harris’ “China’s Cyber-Militia” (National Journal: 31 May 2008):

Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.

One prominent expert told National Journal he believes that China’s People’s Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. “They said that, with confidence, it had been traced back to the PLA.” These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.

Bennett, whose former trade association includes some of the nation’s largest computer-security companies and who has testified before Congress on the vulnerability of information networks, also said that a blackout in February, which affected 3 million customers in South Florida, was precipitated by a cyber-hacker. That outage cut off electricity along Florida’s east coast, from Daytona Beach to Monroe County, and affected eight power-generating stations.

A second information-security expert independently corroborated Bennett’s account of the Florida blackout. According to this individual, who cited sources with direct knowledge of the investigation, a Chinese PLA hacker attempting to map Florida Power & Light’s computer infrastructure apparently made a mistake.

The industry source, who conducts security research for government and corporate clients, said that hackers in China have devoted considerable time and resources to mapping the technology infrastructure of other U.S. companies. That assertion has been backed up by the current vice chairman of the Joint Chiefs of Staff, who said last year that Chinese sources are probing U.S. government and commercial networks.

“The Chinese operate both through government agencies, as we do, but they also operate through sponsoring other organizations that are engaging in this kind of international hacking, whether or not under specific direction. It’s a kind of cyber-militia.… It’s coming in volumes that are just staggering.”

In addition to disruptive attacks on networks, officials are worried about the Chinese using long-established computer-hacking techniques to steal sensitive information from government agencies and U.S. corporations.

Brenner, the U.S. counterintelligence chief, said he knows of “a large American company” whose strategic information was obtained by its Chinese counterparts in advance of a business negotiation. As Brenner recounted the story, “The delegation gets to China and realizes, ‘These guys on the other side of the table know every bottom line on every significant negotiating point.’ They had to have got this by hacking into [the company’s] systems.”

During a trip to Beijing in December 2007, spyware programs designed to clandestinely remove information from personal computers and other electronic equipment were discovered on devices used by Commerce Secretary Carlos Gutierrez and possibly other members of a U.S. trade delegation, according to a computer-security expert with firsthand knowledge of the spyware used. Gutierrez was in China with the Joint Commission on Commerce and Trade, a high-level delegation that includes the U.S. trade representative and that meets with Chinese officials to discuss such matters as intellectual-property rights, market access, and consumer product safety. According to the computer-security expert, the spyware programs were designed to open communications channels to an outside system, and to download the contents of the infected devices at regular intervals. The source said that the computer codes were identical to those found in the laptop computers and other devices of several senior executives of U.S. corporations who also had their electronics “slurped” while on business in China.

The Chinese make little distinction between hackers who work for the government and those who undertake cyber-adventures on its behalf. “There’s a huge pool of Chinese individuals, students, academics, unemployed, whatever it may be, who are, at minimum, not discouraged from trying this out,” said Rodger Baker, a senior China analyst for Stratfor, a private intelligence firm. So-called patriotic-hacker groups have launched attacks from inside China, usually aimed at people they think have offended the country or pose a threat to its strategic interests. At a minimum the Chinese government has done little to shut down these groups, which are typically composed of technologically skilled and highly nationalistic young men.

The military is not waiting for China, or any other nation or hacker group, to strike a lethal cyber-blow. In March, Air Force Gen. Kevin Chilton, the chief of U.S. Strategic Command, said that the Pentagon has its own cyberwar plans. “Our challenge is to define, shape, develop, deliver, and sustain a cyber-force second to none,” Chilton told the Senate Armed Services Committee. He asked appropriators for an “increased emphasis” on the Defense Department’s cyber-capabilities to help train personnel to “conduct network warfare.”

The Air Force is in the process of setting up a Cyberspace Command, headed by a two-star general and comprising about 160 individuals assigned to a handful of bases. As Wired noted in a recent profile, Cyberspace Command “is dedicated to the proposition that the next war will be fought in the electromagnetic spectrum and that computers are military weapons.” The Air Force has launched a TV ad campaign to drum up support for the new command, and to call attention to cyberwar. “You used to need an army to wage a war,” a narrator in the TV spot declares. “Now all you need is an Internet connection.”


Lots of good info about the FBI’s far-reaching wiretapping of US phone systems

From Ryan Singel’s “Point, Click … Eavesdrop: How the FBI Wiretap Net Operates” (Wired News: 29 August 2007):

The FBI has quietly built a sophisticated, point-and-click surveillance system that performs instant wiretaps on almost any communications device, according to nearly a thousand pages of restricted documents newly released under the Freedom of Information Act.

The surveillance system, called DCSNet, for Digital Collection System Network, connects FBI wiretapping rooms to switches controlled by traditional land-line operators, internet-telephony providers and cellular companies. It is far more intricately woven into the nation’s telecom infrastructure than observers suspected.

It’s a “comprehensive wiretap system that intercepts wire-line phones, cellular phones, SMS and push-to-talk systems,” says Steven Bellovin, a Columbia University computer science professor and longtime surveillance expert.

DCSNet is a suite of software that collects, sifts and stores phone numbers, phone calls and text messages. The system directly connects FBI wiretapping outposts around the country to a far-reaching private communications network.

The $10 million DCS-3000 client, also known as Red Hook, handles pen-registers and trap-and-traces, a type of surveillance that collects signaling information — primarily the numbers dialed from a telephone — but no communications content. (Pen registers record outgoing calls; trap-and-traces record incoming calls.)

DCS-6000, known as Digital Storm, captures and collects the content of phone calls and text messages for full wiretap orders.

A third, classified system, called DCS-5000, is used for wiretaps targeting spies or terrorists.

What DCSNet Can Do

Together, the surveillance systems let FBI agents play back recordings even as they are being captured (like TiVo), create master wiretap files, send digital recordings to translators, track the rough location of targets in real time using cell-tower information, and even stream intercepts outward to mobile surveillance vans.

FBI wiretapping rooms in field offices and undercover locations around the country are connected through a private, encrypted backbone that is separated from the internet. Sprint runs it on the government’s behalf.

The network allows an FBI agent in New York, for example, to remotely set up a wiretap on a cell phone based in Sacramento, California, and immediately learn the phone’s location, then begin receiving conversations, text messages and voicemail pass codes in New York. With a few keystrokes, the agent can route the recordings to language specialists for translation.

The numbers dialed are automatically sent to FBI analysts trained to interpret phone-call patterns, and are transferred nightly, by external storage devices, to the bureau’s Telephone Application Database, where they’re subjected to a type of data mining called link analysis.
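Link analysis of the kind described here treats call records as edges in a graph and looks for structure, such as a number that several otherwise unconnected targets all dial. A toy sketch with invented numbers (real systems operate on vastly larger graphs with time, duration, and location attached):

```python
from collections import Counter
from itertools import combinations

def link_analysis(call_records):
    """Toy link analysis over (caller, callee) pairs: tally direct links,
    and flag 'hub' numbers contacted by more than one monitored caller."""
    links = Counter(call_records)
    contacts = {}
    for caller, callee in call_records:
        contacts.setdefault(caller, set()).add(callee)
    hubs = Counter()
    for a, b in combinations(contacts, 2):
        for shared in contacts[a] & contacts[b]:
            hubs[shared] += 1  # one point per pair of callers sharing it
    return links, hubs

records = [("555-0100", "555-0199"), ("555-0101", "555-0199"),
           ("555-0100", "555-0150")]
_, hubs = link_analysis(records)
print(hubs.most_common(1))  # [('555-0199', 1)]
```

Even this trivial version shows why pen-register data is valuable without any call content: the shared contact pops out of pure connection metadata.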

The numerical scope of DCSNet surveillance is still guarded. But we do know that as telecoms have become more wiretap-friendly, the number of criminal wiretaps alone has climbed from 1,150 in 1996 to 1,839 in 2006. That’s a 60 percent jump. And in 2005, 92 percent of those criminal wiretaps targeted cell phones, according to a report published last year.

These figures include both state and federal wiretaps, and do not include antiterrorism wiretaps, which dramatically expanded after 9/11. They also don’t count the DCS-3000’s collection of incoming and outgoing phone numbers dialed. Far more common than full-blown wiretaps, this level of surveillance requires only that investigators certify that the phone numbers are relevant to an investigation.

In the 1990s, the Justice Department began complaining to Congress that digital technology, cellular phones and features like call forwarding would make it difficult for investigators to continue to conduct wiretaps. Congress responded by passing the Communications Assistance for Law Enforcement Act, or CALEA, in 1994, mandating backdoors in U.S. telephone switches.

CALEA requires telecommunications companies to install only telephone-switching equipment that meets detailed wiretapping standards. Prior to CALEA, the FBI would get a court order for a wiretap and present it to a phone company, which would then create a physical tap of the phone system.

With new CALEA-compliant digital switches, the FBI now logs directly into the telecom’s network. Once a court order has been sent to a carrier and the carrier turns on the wiretap, the communications data on a surveillance target streams into the FBI’s computers in real time.

The released documents suggest that the FBI’s wiretapping engineers are struggling with peer-to-peer telephony provider Skype, which offers no central location to wiretap, and with innovations like caller-ID spoofing and phone-number portability.

Despite its ease of use, the new technology is proving more expensive than a traditional wiretap. Telecoms charge the government an average of $2,200 for a 30-day CALEA wiretap, while a traditional intercept costs only $250, according to the Justice Department inspector general. A federal wiretap order in 2006 cost taxpayers $67,000 on average, according to the most recent U.S. Court wiretap report.

What’s more, under CALEA, the government had to pay to make pre-1995 phone switches wiretap-friendly. The FBI has spent almost $500 million on that effort, but many traditional wire-line switches still aren’t compliant.

Processing all the phone calls sucked in by DCSNet is also costly. At the backend of the data collection, the conversations and phone numbers are transferred to the FBI’s Electronic Surveillance Data Management System, an Oracle SQL database that’s seen a 62 percent growth in wiretap volume over the last three years — and more than 3,000 percent growth in digital files like e-mail. Through 2007, the FBI has spent $39 million on the system, which indexes and analyzes data for agents, translators and intelligence analysts.


A collective action problem: why the cops can’t talk to firemen

From Bruce Schneier’s “First Responders” (Crypto-Gram: 15 September 2007):

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25% of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80% of cities, municipal authorities couldn’t communicate with the FBI, FEMA, and other federal agencies.

The source of the problem is a basic economic one, called the “collective action problem.” A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.
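The incentive gap can be made concrete with a toy payoff model (all numbers are illustrative, not from the mayors' report): suppose interoperable radios cost an agency a fixed amount, and the benefit scales with how many other agencies have already upgraded.

```python
# Toy payoff model of the collective action problem described above.
# Numbers are purely illustrative.
UPGRADE_COST = 5      # what one agency pays for interoperable gear
INTEROP_BENEFIT = 3   # value gained per OTHER upgraded agency reachable

def payoff(upgraded_me: bool, others_upgraded: int) -> int:
    if not upgraded_me:
        return 0
    return INTEROP_BENEFIT * others_upgraded - UPGRADE_COST

# Going first is a pure loss, so no rational agency moves...
print(payoff(True, 0))   # -5
# ...even though upgrading pays off handsomely once others have.
print(payoff(True, 3))   # 4
```

Each agency individually prefers to wait, so without an outside mechanism (a mandate or a subsidy) nobody moves and the collectively better outcome never happens. That is the structure behind cops who cannot talk to firemen.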


My new book – Podcasting with Audacity – is out!

Audacity is universally recognized as the number one software program for creating podcasts. Hundreds of thousands of amateurs and professionals alike have created podcasts using Audacity.

Podcasting with Audacity: Creating a Podcast With Free Audio Software is designed to get you podcasting as quickly as possible. The first few chapters show you how to install Audacity, plug in your microphone, record your first podcast, and get it online as quickly as possible. The following chapters cover podcasting-specific topics, such as adding background music or conducting interviews. Finally, the remaining chapters focus on how Audacity works, with lots of tips and tricks to make complicated editing even easier.

Read an excerpt: "Edit Your Podcast" is available on the Web or as a 950 KB PDF download. An unedited version of the book is available as a wiki under a Creative Commons license at the Audacity website.
