
Unix: An Oral History

From “Unix: An Oral History”:

Multics

Gordon M. Brown

[Multics] was designed to include fault-free continuous operation capabilities, convenient remote terminal access and selective information sharing. One of the most important features of Multics was to follow the trend towards integrated multi-tasking and permit multiple programming environments and different human interfaces under one operating system.

Moreover, two key concepts had been picked up on in the development of Multics that would later serve to define Unix. These were that the less important features of the system introduced more complexity and, conversely, that the most important property of algorithms was simplicity. Ritchie explained this to Mahoney, articulating that:

The relationship of Multics to [the development of Unix] is actually interesting and fairly complicated. There were a lot of cultural things that were sort of taken over wholesale. And these include important things, [such as] the hierarchical file system and tree-structure file system – which incidentally did not get into the first version of Unix on the PDP-7. This is an example of why the whole thing is complicated. But any rate, things like the hierarchical file system, choices of simple things like the characters you use to edit lines as you’re typing, erasing characters were the same as those we had. I guess the most important fundamental thing is just the notion that the basic style of interaction with the machine, the fact that there was the notion of a command line, the notion was an explicit shell program. In fact the name shell came from Multics. A lot of extremely important things were completely internalized, and of course this is the way it is. A lot of that came through from Multics.

The Beginning

Michael Errecart and Cameron Jones

Files to Share

The Unix file system was based almost entirely on the file system for the failed Multics project. The idea was for file sharing to take place with explicitly separated file systems for each user, so that there would be no locking of file tables.

A major part of the answer to this question is that the file system had to be open. The needs of the group dictated that every user had access to every other user’s files, so the Unix system had to be extremely open. This openness is even seen in the password storage, which is not hidden at all but is encrypted. Any user can see all the encrypted passwords, but can only test one solution per second, which makes it extremely time consuming to try to break into the system.
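The scheme described, world-readable but encrypted password entries where every guess costs real time, can be sketched as follows. This is an illustration in modern Python, with PBKDF2 standing in for the historical crypt(3) algorithm; the deliberate slowness of the hash is what makes brute-forcing the visible entries so time consuming:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Slow, salted hash; PBKDF2 here plays the role of the historical crypt(3)."""
    if salt is None:
        salt = os.urandom(8)
    # Many iterations make each guess expensive, mirroring crypt's deliberate slowness.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # both can sit in a world-readable file

def check_password(guess, salt, stored):
    _, digest = hash_password(guess, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("ken")
print(check_password("ken", salt, stored), check_password("dmr", salt, stored))
```

Anyone can read `salt` and `stored`, but confirming a guess means paying the full cost of the hash each time.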

The idea of standard input and output for devices eventually found its way into Unix as pipes. Pipes enabled users and programmers to send one function’s output into another function by simply placing a vertical line, a ‘|’ between the two functions. Piping is one of the most distinct features of Unix …
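As a sketch of what the ‘|’ actually does, here is the same producer-to-consumer plumbing expressed with Python’s subprocess module, assuming the standard Unix printf and grep commands are on the PATH:

```python
import subprocess

# Rough Python equivalent of the shell pipeline:  printf 'unix\npipes\n' | grep pipe
p1 = subprocess.Popen(["printf", "unix\npipes\n"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "pipe"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
out, _ = p2.communicate()
print(out.decode(), end="")  # only the matching line survives the pipe
```

The kernel connects p1’s standard output directly to p2’s standard input, which is exactly what the shell does when it sees the vertical bar.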

Language from B to C

… Thompson was intent on having Unix be portable, and the creation of a portable language was intrinsic to this. …

Finding a Machine

Darice Wong & Jake Knerr

… Thompson devoted a month apiece to the shell, editor, assembler, and other software tools. …

Use of Unix started in the patent office of Bell Labs, but by 1972 there were a number of non-research organizations at Bell Labs that were beginning to use Unix for software development. Morgan recalls the importance of text processing in the establishment of Unix. …

Building Unix

Jason Aughenbaugh, Jonathan Jessup, & Nicholas Spicher

The Origin of Pipes

The first edition of Thompson and Ritchie’s The Unix Programmer’s Manual was dated November 3, 1971; however, the idea of pipes is not mentioned until the Version 3 Unix manual, published in February 1973. …

Software Tools

grep was, in fact, one of the first programs that could be classified as a software tool. Thompson designed it at the request of McIlroy, as McIlroy explains:

One afternoon I asked Ken Thompson if he could lift the regular expression recognizer out of the editor and make a one-pass program to do it. He said yes. The next morning I found a note in my mail announcing a program named grep. It worked like a charm. When asked what that funny name meant, Ken said it was obvious. It stood for the editor command that it simulated, g/re/p (global regular expression print)….From that special-purpose beginning, grep soon became a household word. (Something I had to stop myself from writing in the first paragraph above shows how firmly naturalized the idea now is: ‘I used ed to grep out words from the dictionary.’) More than any other single program, grep focused the viewpoint that Kernighan and Plauger christened and formalized in Software Tools: make programs that do one thing and do it well, with as few preconceptions about input syntax as possible.
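McIlroy’s description, a one-pass program that applies a regular expression to every line, can be sketched in a few lines of Python (the real grep was of course written in C; the sample text here is just illustrative):

```python
import re

def grep(pattern, lines):
    """One-pass g/re/p: yield each line that the regular expression matches."""
    regex = re.compile(pattern)
    for line in lines:
        if regex.search(line):
            yield line

text = ["ed was the editor", "grep came later", "pipes too"]
print(list(grep(r"e[dp]", text)))
```

One pass, one job, and no assumptions about where the lines come from: exactly the shape of a software tool.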

eqn and grep are illustrative of the Unix toolbox philosophy that McIlroy phrases as, “Write programs that do one thing and do it well. Write programs to work together. Write programs that handle text streams, because that is a universal interface.” This philosophy was enshrined in Kernighan and Plauger’s 1976 book, Software Tools, and reiterated in the “Foreword” to the issue of The Bell Systems Technical Journal that also introduced pipes.

Ethos

Robert Murray-Rust & Malika Seth

McIlroy says,

This is the Unix philosophy. Write programs that do one thing and do it well. Write programs to work together. Write programs that handle text streams, because that is a universal interface.

The dissemination of Unix, with a focus on what went on within Bell Labs

Steve Chen

In 1973, the first Unix applications were installed on a system involved in updating directory information and intercepting calls to numbers that had been changed. This was the first time Unix had been used in supporting an actual, ongoing operating business. Soon, Unix was being used to automate the operations systems at Bell Laboratories. It automated monitoring, carried out measurement, and helped to route calls and ensure call quality.

There were numerous reasons for the friendliness that academia, especially the academic Computer Science community, showed towards Unix. John Stoneback relates a few of these:

Unix came into many CS departments largely because it was the only powerful interactive system that could run on the sort of hardware (PDP-11s) that universities could afford in the mid ’70s. In addition, Unix itself was also very inexpensive. Since source code was provided, it was a system that could be shaped to the requirements of a particular installation. It was written in a language considerably more attractive than assembly, and it was small enough to be studied and understood by individuals. (John Stoneback, “The Collegiate Community,” Unix Review, October 1985, p. 27.)

The key features and characteristics of Unix that held it above other operating systems at the time were its software tools, its portability, its flexibility, and the fact that it was simple, compact, and efficient. The development of Unix in Bell Labs was carried on under a set of principles that the researchers had developed to guide their work. These principles included:

(i) Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

(ii) Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.

(iii) Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.

(iv) Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

(M.D. McIlroy, E.N. Pinson, and B.A. Tague, “Unix Time-Sharing System: Foreword,” The Bell System Technical Journal, July-August 1978, vol. 57, no. 6, part 2, p. 1902)
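The four principles above lend themselves to a small sketch: tiny filters that each do one thing, composed over a stream of lines, with Python generators standing in for processes and pipes (the sample data is invented):

```python
import re

# Each "tool" does one thing well; text streams (iterables of lines) are the interface.
def grep(pattern, lines):
    regex = re.compile(pattern)
    return (line for line in lines if regex.search(line))

def head(lines, n=10):
    for i, line in enumerate(lines):
        if i >= n:
            break
        yield line

def wc_l(lines):
    return sum(1 for _ in lines)

doc = ["unix\n", "plan9\n", "multics\n", "its\n"]
print(wc_l(head(grep("i", doc), 2)))  # compose filters like a shell pipeline
```

Because every filter expects lines in and lines out, any of them can be swapped or recombined without touching the others, which is the whole point of rule (ii).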


A vote for CrossOver

Let me recommend CodeWeavers’ CrossOver, a commercial implementation of WINE that works on Linux & Mac OS X. It’s reasonably priced, & it makes setting up & configuring both WINE and the programs that run inside WINE much easier. Plus, the company is made up of good people, & they’re very upfront on their site about what works with WINE, what mostly works, what kinda works, & what doesn’t work at all.

http://www.codeweavers.com/


A great example of poor comments in your code

From Steven Levy’s Hackers: Heroes of the Computer Revolution (Penguin Books: 2001): 43:

[Peter Samson, one of the first MIT hackers], though, was particularly obscure in refusing to add comments to his source code explaining what he was doing at a given time. One well-distributed program Samson wrote went on for hundreds of assembly language instructions, with only one comment beside an instruction which contained the number 1750. The comment was RIPJSB, and people racked their brains about its meaning until someone figured out that 1750 was the year Bach died, and that Samson had written an abbreviation for Rest In Peace Johann Sebastian Bach.


Big security problems with the current way Firefox handles extensions

From Help Net Security’s “Zero-day vulnerabilities in Firefox extensions discovered” (20 November 2009):

At the SecurityByte & OWASP AppSec Conference in India, Roberto Suggi Liverani and Nick Freeman, security consultants with security-assessment.com, offered insight into the substantial danger posed by Firefox extensions.

Mozilla doesn’t have a security model for extensions and Firefox fully trusts the code of the extensions. There are no security boundaries between extensions and, to make things even worse, an extension can silently modify another extension.

Any Mozilla application with the extension system is vulnerable to the same type of issues. Extension vulnerabilities are platform independent, and can result in full system compromise.


How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.
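The daily-URL trick is a domain generation algorithm: both the worm and its creators derive the same list from the date, so nothing about tomorrow’s rendezvous points needs to be transmitted. A toy sketch of the idea follows; this is illustrative only, not Conficker’s actual algorithm:

```python
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day, count=250):
    """Date-seeded pseudo-random domain list (illustrative, not Conficker's real DGA)."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Turn the hash into 8 lowercase letters, then pick one of the five suffixes.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:8])
        domains.append(name + TLDS[int(digest[8], 16) % len(TLDS)])
    return domains

print(daily_domains(date(2009, 1, 1))[:3])
```

Defenders who reverse-engineer the generator can precompute every future list, which is exactly what Porras’s team went on to do; until then, each day brought 250 fresh, unregistered names.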

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy, and so, as the code of the new worm soon revealed, from 1 April it would be able to scan for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.


Could Green Dam lead to the largest botnet in history?


From Rob Cottingham’s “From blocking to botnet: Censorship isn’t the only problem with China’s new Internet blocking software” (Social Signal: 10 June 2009):

Any blocking software needs to update itself from time to time: at the very least to freshen its database of forbidden content, and more than likely to fix bugs, add features and improve performance. (Most anti-virus software does this.)

If all the software does is to refresh the list of banned sites, that limits the potential for abuse. But if the software is loading new executable code onto the computer, suddenly there’s the potential for something a lot bigger.

Say you’re a high-ranking official in the Chinese military. And let’s say you have some responsibility for the state’s capacity to wage so-called cyber warfare: digital assaults on an enemy’s technological infrastructure.

It strikes you: there’s a single backdoor into more than 40 million Chinese computers, capable of installing… well, nearly anything you want.

What if you used that backdoor, not just to update blocking software, but to create something else?

Say, the biggest botnet in history?

Still, a botnet 40 million strong (plus the installed base already in place in Chinese schools and other institutions) at the beck and call of the military is potentially a formidable weapon. Even if the Chinese government has no intention today of using Green Dam for anything other than blocking pornography, the temptation to repurpose it for military purposes may prove to be overwhelming.


Green Dam is easily exploitable


From Scott Wolchok, Randy Yao, and J. Alex Halderman’s “Analysis of the Green Dam Censorware System” (The University of Michigan: 11 June 2009):

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC.

According to press reports, China will soon require all PCs sold in the country to include Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material.

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.


Another huge botnet

From Kelly Jackson Higgins’ “Researchers Find Massive Botnet On Nearly 2 Million Infected Consumer, Business, Government PCs” (Dark Reading: 22 April 2009):

Researchers have discovered a major botnet operating out of the Ukraine that has infected 1.9 million machines, including large corporate and government PCs mainly in the U.S.

The botnet, which appears to be larger than the infamous Storm botnet was in its heyday, has infected machines from some 77 government-owned domains — 51 of which are U.S. government ones, according to Ophir Shalitin, marketing director of Finjan, which recently found the botnet. Shalitin says the botnet is controlled by six individuals and is hosted in Ukraine.

Aside from its massive size and scope, what is also striking about the botnet is what its malware can do to an infected machine. The malware lets an attacker read the victim’s email, communicate via HTTP in the botnet, inject code into other processes, visit Websites without the user knowing, and register as a background service on the infected machine, for instance.

Finjan says victims are infected when visiting legitimate Websites containing a Trojan that the company says is detected by only four of 39 anti-malware tools, according to a VirusTotal report run by Finjan researchers.

Around 45 percent of the bots are in the U.S., and the machines are Windows XP. Nearly 80 percent run Internet Explorer; 15 percent, Firefox; 3 percent, Opera; and 1 percent Safari. Finjan says the bots were found in banks and large corporations, as well as consumer machines.


Vista & Mac OS X security features

From Prince McLean’s “Pwn2Own contest winner: Macs are safer than Windows” (AppleInsider: 26 March 2009):

Once it did arrive, Vista introduced sophisticated new measures to make it more difficult for malicious crackers to inject code.

One is support for the CPU’s NX bit, which allows a process to mark certain areas of memory as “Non-eXecutable” so the CPU will not run any code stored there. This is referred to as “executable space protection,” and helps to prevent malicious code from being surreptitiously loaded into a program’s data storage and subsequently executed to gain access to the same privileges as the program itself, an exploit known as a “buffer overflow attack.”

A second security practice of Vista is “address space layout randomization” or ASLR, which is used to load executables, and the system libraries, heap, and stack into a randomly assigned location within the address space, making it far more difficult for crackers to know where to find vulnerabilities they can attack, even if they know what the bugs are and how to exploit them.

[Charlie Miller, the security expert who won both this and last year’s CanSecWest Pwn2Own security contests,] told Tom’s Hardware “the NX bit is very powerful. When used properly, it ensures that user-supplied code cannot be executed in the process during exploitation. Researchers (and hackers) have struggled with ways around this protection. ASLR is also very tough to defeat. This is the way the process randomizes the location of code in a process. Between these two hurdles, no one knows how to execute arbitrary code in Firefox or IE 8 in Vista right now. For the record, Leopard has neither of these features, at least implemented effectively. In the exploit I won Pwn2Own with, I knew right where my shellcode was located and I knew it would execute on the heap for me.”

While Apple did implement some support for NX and ASLR in Mac OS X, Leopard retains dyld, (the dynamic loader responsible for loading all of the frameworks, dylibs, and bundles needed by a process) in the same known location, making it relatively trivial to bypass its ASLR. This is slated to change later this year in Snow Leopard.

With the much larger address space available to 64-bit binaries, Snow Leopard’s ASLR will make it possible to hide the location of loaded code like a needle in a haystack, thwarting the efforts of malicious attackers to maintain predictable targets for controlling the code and data loaded into memory. Without knowing what addresses to target, the “vast majority of these exploits will fail,” another security expert who has also won a high profile Mac cracking contest explained to AppleInsider.
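A quick way to observe address space layout randomization, on systems where it is enabled, is to compare memory addresses across separate runs of the same program. In CPython, id() of an object is its memory address, so two fresh interpreter processes will usually report different heap locations:

```python
import subprocess
import sys

# The heap address of a fresh object (CPython's id() is its memory address)
# in two separate interpreter runs.
cmd = [sys.executable, "-c", "print(id(object()))"]
addr1 = int(subprocess.check_output(cmd))
addr2 = int(subprocess.check_output(cmd))
print(hex(addr1), hex(addr2))  # with ASLR enabled these usually differ
```

An attacker in the pre-ASLR situation Miller describes faced the opposite: the same address every run, so shellcode could be planted at a known location.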


The end of Storm?

From “Storm Worm botnet cracked wide open” (Heise Security: 9 January 2009):

A team of researchers from Bonn University and RWTH Aachen University have analysed the notorious Storm Worm botnet, and concluded it certainly isn’t as invulnerable as it once seemed. Quite the reverse, for in theory it can be rapidly eliminated using software developed and at least partially disclosed by Georg Wicherski, Tillmann Werner, Felix Leder and Mark Schlösser. However it seems in practice the elimination process would fall foul of the law.

Over the last two years, Storm Worm has demonstrated how easily organised internet criminals have been able to spread this infection. During that period, the Storm Worm botnet has accumulated more than a million infected computers, known as drones or zombies, obeying the commands of a control server and using peer-to-peer techniques to locate new servers. Even following a big clean-up with Microsoft’s Malicious Software Removal Tool, around 100,000 drones probably still remain. That means the Storm Worm botnet is responsible for a considerable share of the Spam tsunami and for many distributed denial-of-service attacks. It’s astonishing that no one has succeeded in dismantling the network, but these researchers say it isn’t due to technical finesse on the part of the Storm Worm’s developers.

Existing knowledge of the techniques used by the Storm Worm has mainly been obtained by observing the behaviour of infected systems, but the researchers took a different approach to disarm it. They reverse translated large parts of the machine code of the drone client program and analysed it, taking a particularly close look at the functions for communications between drones and with the server.

Using this background knowledge, they were able to develop their own client, which links itself into the peer-to-peer structure of a Storm Worm network in such a way that queries from other drones, looking for new command servers, can be reliably routed to it. That enables it to divert drones to a new server. The second step was to analyse the protocol for passing commands. The researchers were astonished to find that the server doesn’t have to authenticate itself to clients, so using their knowledge they were able to direct drones to a simple server. The latter could then issue commands to the test Storm worm drones in the laboratory so that, for example, they downloaded a specific program from a server, perhaps a special cleaning program, and ran it. The students then went on to write such a program.
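The takeover worked because Storm’s drones accepted commands from anyone. A hedged sketch of the kind of keyed authentication (HMAC) whose absence the researchers exploited; the secret and command strings here are hypothetical:

```python
import hashlib
import hmac

SECRET = b"key known only to the operators"  # hypothetical shared secret

def sign_command(command):
    """What an authenticating server would attach to each command."""
    return hmac.new(SECRET, command, hashlib.sha256).digest()

def drone_accepts(command, tag):
    """A drone that verified commands would reject an impostor server."""
    return hmac.compare_digest(tag, sign_command(command))

cmd = b"download http://example.com/update"
print(drone_accepts(cmd, sign_command(cmd)))   # genuine server
print(drone_accepts(cmd, b"\x00" * 32))        # researchers' stand-in server
```

Because Storm performed no such check, the researchers’ simple server could issue any command, including one to download and run a cleaning program.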

The team has not yet taken the final step of putting the whole thing into action with a genuine Storm Worm botnet in the wild. From a legal point of view, that could involve many problems. Any unauthorised access to third-party computers could be regarded as tampering with data, which is punishable under § 303a of the German Penal Code. That section threatens up to two years’ imprisonment for unlawfully deleting, suppressing, making unusable or changing third-party data. However, this legal process would only come into effect if there were a criminal complaint from an injured party, or if there were special public interest in the prosecution of the crime.

Besides risks of coming up against the criminal law, there is also a danger of civil claims for damages by the owners of infected PCs, because the operation might cause collateral damage. There are almost certain to be configurations in which the cleaning goes wrong, perhaps disabling computers so they won’t run any more. Botnet operators could also be expected to strike back, causing further damage.


Three top botnets

From Kelly Jackson Higgins’ “The World’s Biggest Botnets” (Dark Reading: 9 November 2007):

You know about the Storm Trojan, which is spread by the world’s largest botnet. But what you may not know is there’s now a new peer-to-peer based botnet emerging that could blow Storm away.

“We’re investigating a new peer-to-peer botnet that may wind up rivaling Storm in size and sophistication,” says Tripp Cox, vice president of engineering for startup Damballa, which tracks botnet command and control infrastructures. “We can’t say much more about it, but we can tell it’s distinct from Storm.”

Researchers estimate that there are thousands of botnets in operation today, but only a handful stand out by their sheer size and pervasiveness. Although size gives a botnet muscle and breadth, it can also make it too conspicuous, which is why botnets like Storm fluctuate in size and are constantly finding new ways to cover their tracks to avoid detection. Researchers have different head counts for different botnets, with Storm by far the largest (for now, anyway).

Damballa says its top three botnets are Storm, with 230,000 active members per 24 hour period; Rbot, an IRC-based botnet with 40,000 active members per 24 hour period; and Bobax, an HTTP-based botnet with 24,000 active members per 24 hour period, according to the company.

1. Storm

Size: 230,000 active members per 24 hour period

Type: peer-to-peer

Purpose: Spam, DDOS

Malware: Trojan.Peacomm (aka Nuwar)

Few researchers can agree on Storm’s actual size — while Damballa says it’s over 200,000 bots, Trend Micro says it’s more like 40,000 to 100,000 today. But all researchers say that Storm is a whole new brand of botnet. First, it uses encrypted, decentralized peer-to-peer communication, unlike the traditional centralized IRC model. That makes it tough to kill because you can’t necessarily shut down its command and control machines. And intercepting Storm’s traffic requires cracking the encrypted data.

Storm also uses fast-flux, a round-robin method where infected bot machines (typically home computers) serve as proxies or hosts for malicious Websites. These are constantly rotated, changing their DNS records to prevent their discovery by researchers, ISPs, or law enforcement. And researchers say it’s tough to tell how the command and control communication structure is set up behind the P2P botnet. “Nobody knows how the mother ships are generating their C&C,” Trend Micro’s Ferguson says.
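The round-robin rotation behind fast-flux is easy to picture: the controlling name server hands out a different bot’s address in each short-lived DNS answer, so no single host stays exposed for long. A toy sketch, with documentation-range placeholder addresses and an invented domain name:

```python
import itertools

# Hypothetical pool of infected "proxy" hosts fronting the real content.
proxy_ips = ["203.0.113.5", "198.51.100.7", "192.0.2.9"]
rotation = itertools.cycle(proxy_ips)

def next_dns_answer(ttl=180):
    """Serve a short-TTL A record that points somewhere new each time."""
    return {"name": "malicious.example", "ip": next(rotation), "ttl": ttl}

answers = [next_dns_answer() for _ in range(4)]
print([a["ip"] for a in answers])
```

The short TTL forces resolvers to ask again within minutes, at which point they get the next proxy in the cycle; takedown teams end up chasing a moving target.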

Storm uses a complex combination of malware called Peacomm that includes a worm, rootkit, spam relay, and Trojan.

But researchers don’t know — or can’t say — who exactly is behind Storm, except that it’s likely a fairly small, tightly knit group with a clear business plan. “All roads lead back to Russia,” Trend Micro’s Ferguson says.

“Storm is only thing now that keeps me awake at night and busy,” he says. “It’s professionalized crimeware… They have young, talented programmers apparently. And they write tools to do administrative [tracking], as well as writing cryptographic routines… and another will handle social engineering, and another will write the Trojan downloader, and another is writing the rootkit.”

2. Rbot

Size: 40,000 active members per 24 hour period

Type: IRC

Purpose: DDOS, spam, malicious operations

Malware: Windows worm

Rbot is basically an old-school IRC botnet that uses the Rbot malware kit. It isn’t likely to ever reach Storm’s size because IRC botnets simply can’t scale that far. “An IRC server has to be a beefy machine to support anything anywhere close to the size of Peacomm/Storm,” Damballa’s Cox says.

Rbot’s underlying malware uses a backdoor to gain control of the infected machine, installing keyloggers and viruses and even stealing files from the machine, in addition to launching the usual spam and DDOS attacks. It can disable antivirus software, too.

3. Bobax

Size: 24,000 active members per 24 hour period

Type: HTTP

Purpose: Spam

Malware: Mass-mailing worm

Bobax is specifically for spamming, Cox says, and uses the stealthier HTTP for sending instructions to its bots on who and what to spam. …

According to Symantec, Bobax bores open a back door and downloads files onto the infected machine, and lowers its security settings. It spreads via a buffer overflow vulnerability in Windows, and inserts the spam code into the IE browser so that each time the browser runs, the virus is activated. And Bobax also does some reconnaissance to ensure that its spam runs are efficient: It can do bandwidth and network analysis to determine just how much spam it can send, according to Damballa. “Thus [they] are able to tailor their spamming so as not to tax the network, which helps them avoid detection,” according to company research.

Even more frightening, though, is that some Bobax variants can block access to antivirus and security vendor Websites, a new trend in Website exploitation.

An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers – now numbering about two dozen, although no one outside Google knows the exact number or their locations. They come online and, under the direction of the Google File System, automatically start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled for Google to hold servers on their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo, hardware and software are more loosely coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
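The replica-fallback behavior described above can be modeled in a few lines. This is a toy sketch, not the Google File System itself: the chunk ID, rack names, and log structure are all invented, and the real system's bookkeeping is far richer.

```python
# Each file is written to several storage devices; a log records where the
# copies live. If the first copy is unreachable, the read falls through to
# the next replica and the job continues.
replica_log = {
    "chunk-0042": ["rack1/disk3", "rack7/disk1", "rack9/disk5"],
}

def read_chunk(chunk_id, available):
    """Return the first reachable replica for chunk_id, or None if all are down."""
    for location in replica_log.get(chunk_id, []):
        if location in available:
            return location
    return None

# rack1 has failed; the read transparently uses rack7's copy instead.
alive = {"rack7/disk1", "rack9/disk5"}
print(read_chunk("chunk-0042", alive))  # rack7/disk1
```

With three to six replicas per file, a whole rack can disappear and every chunk still has at least one live copy somewhere else.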

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed short cuts to programming. An example is Google’s creating a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure prone components are:

  • Fans.
  • IDE drives, which fail at a rate of about one per 1,000 drives per day.
  • Power supplies, which fail at a lower rate.
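The quoted drive-failure rate implies concrete numbers at Google's scale. The fleet size below is an assumption extrapolated from the "10,000 or more servers per data center" figure mentioned earlier, not a number from the source.

```python
# Back-of-the-envelope arithmetic for the quoted IDE failure rate of
# one failure per 1,000 drives per day, at an assumed fleet size.
drives = 100_000                   # hypothetical fleet: 100k drives
rate_per_drive_per_day = 1 / 1_000

daily_failures = drives * rate_per_drive_per_day
print(daily_failures)              # 100.0 drive failures every day
print(daily_failures * 365)        # roughly 36,500 per year
```

At a hundred failures a day, manual triage is impossible; the smart-software approach of routing around dead hardware is the only workable response.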

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.

Richard Stallman on the 4 freedoms

From Richard Stallman’s “Transcript of Richard Stallman at the 4th international GPLv3 conference; 23rd August 2006” (FSF Europe: 23 August 2006):

Specifically, this refers to four essential freedoms, which are the definition of Free Software.

Freedom zero is the freedom to run the program, as you wish, for any purpose.

Freedom one is the freedom to study the source code and then change it so that it does what you wish.

Freedom two is the freedom to help your neighbour, which is the freedom to distribute, including publication, copies of the program to others when you wish.

Freedom three is the freedom to help build your community, which is the freedom to distribute, including publication, your modified versions, when you wish.

These four freedoms make it possible for users to live an upright, ethical life as a member of a community and enable us individually and collectively to have control over what our software does and thus to have control over our computing.

Offline copy protection in games

From Adam Swiderski’s “A History of Copy Protection” (Edge: 9 June 2008):

Fortunately, the games industry is creative, and thus it was that the offline copy protection was born and flourished. One of its most prevalent forms was an in-game quiz that would require gamers to refer to the manual for specific information – you’d be asked, for example, to enter the third word in the fourth paragraph on page 14. Some titles took a punishing approach to this little Q & A: SSI’s Star Command required a documentation check prior to each in-game save, while Master of Orion would respond to a failed manual check by gradually becoming so difficult that it was impossible to win. Perhaps the most notorious example of this method is Sierra’s King’s Quest III, in which lengthy passages of potion recipes and other information had to be reproduced from the manual. One typo, and you were greeted with a “Game Over” screen.
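The manual-lookup check described above is simple enough to sketch. This is a generic illustration, not any shipping game's code; the "manual" text and page number are invented.

```python
# Minimal sketch of a documentation check: the game asks for word W of
# paragraph P on a given page and compares the player's answer against
# the manual. The manual contents here are invented for illustration.
MANUAL = {
    14: [  # page 14, paragraphs stored as lists of words
        "Insert disk one and press any key".split(),
        "Pirates will be made to walk the plank".split(),
    ],
}

def manual_check(page, paragraph, word, answer):
    expected = MANUAL[page][paragraph - 1][word - 1]
    return answer.strip().lower() == expected.lower()

print(manual_check(14, 2, 3, "be"))    # True
print(manual_check(14, 2, 3, "arrr"))  # False -> "Game Over"
```

The scheme's whole security rested on the manual being hard to copy, which is why photocopiers (and later, scanners) killed it off.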

Other developers eschewed straight manual checks for in-box tools and items that were more integrated into the games with which they shipped, especially once photocopiers became more accessible and allowed would-be pirates to quickly and easily duplicate documentation. LucasArts made a name for itself in this field, utilizing such gems as the Monkey Island series’ multi-level code wheels. Other games, like Maniac Mansion and Indiana Jones and the Last Crusade, shipped with the kind of color-masked text one would find in old-school decoder rings; the documents could not be reproduced by the photocopiers of the day and would require the application of a transparent red plastic filter in order to get at their contents.

The life cycle of a botnet client

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

What makes a botnet a botnet? In particular, how do you distinguish a botnet client from just another hacker break-in? First, the clients in a botnet must be able to take actions on the client without the hacker having to log into the client’s operating system (Windows, UNIX, or Mac OS). Second, many clients must be able to act in a coordinated fashion to accomplish a common goal with little or no intervention from the hacker. If a collection of computers meets these criteria, it is a botnet.

The life of a botnet client, or botclient, begins when it has been exploited. A prospective botclient can be exploited via malicious code that a user is tricked into running; attacks against unpatched vulnerabilities; backdoors left by Trojan worms or remote access Trojans; and password guessing and brute force access attempts. In this section we’ll discuss each of these methods of exploiting botclients.

Rallying and Securing the Botnet Client

Although the order in the life cycle may vary, at some point early in the life of a new botnet client it must call home, a process called “rallying.” When rallying, the botnet client initiates contact with the botnet Command and Control (C&C) Server. Currently, most botnets use IRC for Command and Control.

Rallying is the term given for the first time a botnet client logs in to a C&C server. The login may use some form of encryption or authentication to limit the ability of others to eavesdrop on the communications. Some botnets are beginning to encrypt the communicated data.

At this point the new botnet client may request updates. The updates could be updated exploit software, an updated list of C&C server names, IP addresses, and/or channel names. This will assure that the botnet client can be managed and can be recovered should the current C&C server be taken offline.

The next order of business is to secure the new client from removal. The client can request the location of the latest anti-antivirus (Anti-A/V) tool from the C&C server. The newly controlled botclient would download this software and execute it to remove the A/V tool, hide from it, or render it ineffective.

Shutting off the A/V tool may raise suspicions if the user is observant. Some botclients will run a dll that neuters the A/V tool. With an Anti-A/V dll in place the A/V tool may appear to be working normally except that it never detects or reports the files related to the botnet client. It may also change the Hosts file and LMHosts file so that attempts to contact an A/V vendor for updates will not succeed. Using this method, attempts to contact an A/V vendor can be redirected to a site containing malicious code or can yield a “website or server not found” error.
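The Hosts-file redirection described above can be illustrated with a few entries of the kind a botclient might append (all domain names and addresses here are invented): lookups for the vendor's update servers resolve to localhost or to an attacker-controlled host, so update checks either silently fail or fetch malicious code.

```
# Entries appended to the Hosts file (e.g. C:\Windows\System32\drivers\etc\hosts)
# Domain names and addresses are illustrative, not real vendor hosts.
127.0.0.1      update.antivirus-vendor.example
127.0.0.1      liveupdate.antivirus-vendor.example
198.51.100.7   downloads.antivirus-vendor.example   # attacker-controlled host
```

Because the Hosts file is consulted before DNS, these static entries override the vendor's real addresses for every program on the machine.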

One tool, hidden32.exe, is used to hide applications that have a GUI from the user. Its use is simple; the botherder creates a batch file that executes hidden32 with the name of the executable to be hidden as its parameter. Another stealthy tool, HideUserv2, adds an invisible user to the administrator group.

Waiting for Orders and Retrieving the Payload

Once secured, the botnet client will listen to the C&C communications channel.

The botnet client will then request the associated payload. The payload is the term I give the software representing the intended function of this botnet client.

How the Greek cell phone network was compromised

From Vassilis Prevelakis and Diomidis Spinellis’ “The Athens Affair” (IEEE Spectrum: July 2007):

On 9 March 2005, a 38-year-old Greek electrical engineer named Costas Tsalikidis was found hanged in his Athens loft apartment, an apparent suicide. It would prove to be merely the first public news of a scandal that would roil Greece for months.

The next day, the prime minister of Greece was told that his cellphone was being bugged, as were those of the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.

The victims were customers of Athens-based Vodafone-Panafon, generally known as Vodafone Greece, the country’s largest cellular service provider; Tsalikidis was in charge of network planning at the company.

We now know that the illegally implanted software, which was eventually found in a total of four of Vodafone’s Greek switches, created parallel streams of digitized voice for the tapped phone calls. One stream was the ordinary one, between the two calling parties. The other stream, an exact copy, was directed to other cellphones, allowing the tappers to listen in on the conversations on the cellphones, and probably also to record them. The software also routed location and other information about those phone calls to these shadow handsets via automated text messages.

The day after Tsalikidis’s body was discovered, CEO Koronias met with the director of the Greek prime minister’s political office, Yiannis Angelou, and the minister of public order, Giorgos Voulgarakis. Koronias told them that rogue software used the lawful wiretapping mechanisms of Vodafone’s digital switches to tap about 100 phones and handed over a list of bugged numbers. Besides the prime minister and his wife, phones belonging to the ministers of national defense, foreign affairs, and justice, the mayor of Athens, and the Greek European Union commissioner were all compromised. Others belonged to members of civil rights organizations, peace activists, and antiglobalization groups; senior staff at the ministries of National Defense, Public Order, Merchant Marine, and Foreign Affairs; the New Democracy ruling party; the Hellenic Navy general staff; and a Greek-American employee at the United States Embassy in Athens.

First, consider how a phone call, yours or a prime minister’s, gets completed. Long before you dial a number on your handset, your cellphone has been communicating with nearby cellular base stations. One of those stations, usually the nearest, has agreed to be the intermediary between your phone and the network as a whole. Your telephone handset converts your words into a stream of digital data that is sent to a transceiver at the base station.

The base station’s activities are governed by a base station controller, a special-purpose computer within the station that allocates radio channels and helps coordinate handovers between the transceivers under its control.

This controller in turn communicates with a mobile switching center that takes phone calls and connects them to call recipients within the same switching center, other switching centers within the company, or special exchanges that act as gateways to foreign networks, routing calls to other telephone networks (mobile or landline). The mobile switching centers are particularly important to the Athens affair because they hosted the rogue phone-tapping software, and it is there that the eavesdropping originated. They were the logical choice, because they are at the heart of the network; the intruders needed to take over only a few of them in order to carry out their attack.

Both the base station controllers and the switching centers are built around a large computer, known as a switch, capable of creating a dedicated communications path between a phone within its network and, in principle, any other phone in the world. Switches are holdovers from the 1970s, an era when powerful computers filled rooms and were built around proprietary hardware and software. Though these computers are smaller nowadays, the system’s basic architecture remains largely unchanged.

Like most phone companies, Vodafone Greece uses the same kind of computer for both its mobile switching centers and its base station controllers—Ericsson’s AXE line of switches. A central processor coordinates the switch’s operations and directs the switch to set up a speech or data path from one phone to another and then routes a call through it. Logs of network activity and billing records are stored on disk by a separate unit, called a management processor.

The key to understanding the hack at the heart of the Athens affair is knowing how the Ericsson AXE allows lawful intercepts—what are popularly called “wiretaps.” Though the details differ from country to country, in Greece, as in most places, the process starts when a law enforcement official goes to a court and obtains a warrant, which is then presented to the phone company whose customer is to be tapped.

Nowadays, all wiretaps are carried out at the central office. In AXE exchanges a remote-control equipment subsystem, or RES, carries out the phone tap by monitoring the speech and data streams of switched calls. It is a software subsystem typically used for setting up wiretaps, which only law officers are supposed to have access to. When the wiretapped phone makes a call, the RES copies the conversation into a second data stream and diverts that copy to a phone line used by law enforcement officials.

Ericsson optionally provides an interception management system (IMS), through which lawful call intercepts are set up and managed. When a court order is presented to the phone company, its operators initiate an intercept by filling out a dialog box in the IMS software. The optional IMS in the operator interface and the RES in the exchange each contain a list of wiretaps: wiretap requests in the case of the IMS, actual taps in the RES. Only IMS-initiated wiretaps should be active in the RES, so a wiretap in the RES without a request for a tap in the IMS is a pretty good indicator that an unauthorized tap has occurred. An audit procedure can be used to find any discrepancies between them.
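The audit procedure described above amounts to a set difference: any tap active in the RES with no corresponding request in the IMS is suspect. This sketch uses invented phone numbers; the real audit operates on the exchange's internal records, not Python sets.

```python
# Wiretap *requests* recorded by the IMS (initiated via court order).
ims_requests = {"+30-210-555-0101", "+30-210-555-0102"}

# Taps actually *active* in the exchange's RES subsystem.
res_active_taps = {"+30-210-555-0101", "+30-210-555-0102", "+30-694-555-0199"}

# Any tap without a matching request is a strong sign of an unauthorized
# intercept -- exactly the discrepancy the Athens intruders had to avoid.
unauthorized = res_active_taps - ims_requests
print(sorted(unauthorized))  # ['+30-694-555-0199']
```

The intruders sidestepped this check entirely by never touching the IMS: their taps lived only in the rogue software's private memory, invisible to both lists.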

It took guile and some serious programming chops to manipulate the lawful call-intercept functions in Vodafone’s mobile switching centers. The intruders’ task was particularly complicated because they needed to install and operate the wiretapping software on the exchanges without being detected by Vodafone or Ericsson system administrators. From time to time the intruders needed access to the rogue software to update the lists of monitored numbers and shadow phones. These activities had to be kept off all logs, while the software itself had to be invisible to the system administrators conducting routine maintenance activities. The intruders achieved all these objectives.

The challenge faced by the intruders was to use the RES’s capabilities to duplicate and divert the bits of a call stream without using the dialog-box interface to the IMS, which would create auditable logs of their activities. The intruders pulled this off by installing a series of patches to 29 separate blocks of code, according to Ericsson officials who testified before the Greek parliamentary committee that investigated the wiretaps. This rogue software modified the central processor’s software to directly initiate a wiretap, using the RES’s capabilities. Best of all, for them, the taps were not visible to the operators, because the IMS and its user interface weren’t used.

The full version of the software would have recorded the phone numbers being tapped in an official registry within the exchange. And, as we noted, an audit could then find a discrepancy between the numbers monitored by the exchange and the warrants active in the IMS. But the rogue software bypassed the IMS. Instead, it cleverly stored the bugged numbers in two data areas that were part of the rogue software’s own memory space, which was within the switch’s memory but isolated and not made known to the rest of the switch.

That by itself put the rogue software a long way toward escaping detection. But the perpetrators hid their own tracks in a number of other ways as well. There were a variety of circumstances by which Vodafone technicians could have discovered the alterations to the AXE’s software blocks. For example, they could have taken a listing of all the blocks, which would show all the active processes running within the AXE—similar to the task manager output in Microsoft Windows or the process status (ps) output in Unix. They then would have seen that some processes were active, though they shouldn’t have been. But the rogue software apparently modified the commands that list the active blocks in a way that omitted certain blocks—the ones that related to intercepts—from any such listing.

In addition, the rogue software might have been discovered during a software upgrade or even when Vodafone technicians installed a minor patch. It is standard practice in the telecommunications industry for technicians to verify the existing block contents before performing an upgrade or patch. We don’t know why the rogue software was not detected in this way, but we suspect that the software also modified the operation of the command used to print the checksums—codes that create a kind of signature against which the integrity of the existing blocks can be validated. One way or another, the blocks appeared unaltered to the operators.
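Block-checksum verification of the kind described above can be sketched with a cryptographic digest. This is an illustration only: AXE's actual checksum scheme is not public, and the block names and contents here are invented (SHA-256 stands in for whatever Ericsson used).

```python
import hashlib

# Known-good digests recorded when the blocks were originally installed.
baseline = {"block_A": hashlib.sha256(b"original code A").hexdigest()}

def verify(block_name, current_bytes):
    """Compare a block's current digest against the recorded baseline."""
    return hashlib.sha256(current_bytes).hexdigest() == baseline[block_name]

print(verify("block_A", b"original code A"))  # True: block is intact
print(verify("block_A", b"patched code A"))   # False: tampering visible
```

Note that the check is only as trustworthy as the tool that prints it: by modifying the checksum-printing command itself, the rogue software could show operators the expected values regardless of what the blocks actually contained.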

Finally, the software included a back door to allow the perpetrators to control it in the future. This, too, was cleverly constructed to avoid detection. A report by the Hellenic Authority for the Information and Communication Security and Privacy (the Greek abbreviation is ADAE) indicates that the rogue software modified the exchange’s command parser—a routine that accepts commands from a person with system administrator status—so that innocuous commands followed by six spaces would deactivate the exchange’s transaction log and the alarm associated with its deactivation, and allow the execution of commands associated with the lawful interception subsystem. In effect, it was a signal to allow operations associated with the wiretaps but leave no trace of them. It also added a new user name and password to the system, which could be used to obtain access to the exchange.
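The trigger ADAE describes, an innocuous command followed by six spaces, can be sketched as a small hook in the command parser. This is an illustrative reconstruction only; the real parser and its commands are not public:

```python
# Hypothetical sketch of a parser back door keyed on trailing spaces.
# An otherwise-valid command followed by exactly six trailing spaces
# flips the parser into a privileged mode that suppresses logging.

def parse(raw: str) -> dict:
    trailing = len(raw) - len(raw.rstrip(" "))
    covert = trailing == 6  # the trigger: exactly six trailing spaces
    return {
        "command": raw.rstrip(" "),
        "log": not covert,       # covert commands leave no transaction log
        "privileged": covert,    # ...and unlock the interception subsystem
    }

if __name__ == "__main__":
    print(parse("activate tap" + " " * 6))
```

To anyone reading a command log or watching over an operator's shoulder, the triggering input looks identical to a normal command.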

…Security experts have also discovered other rootkits for general-purpose operating systems, such as Linux, Windows, and Solaris, but to our knowledge this is the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch.

So the investigators painstakingly reconstructed an approximation of the original PLEX source files that the intruders developed. It turned out to be the equivalent of about 6500 lines of code, a surprisingly substantial piece of software.


9 reasons the Storm botnet is different

From Bruce Schneier’s “Gathering ‘Storm’ Superworm Poses Grave Threat to PC Nets” (Wired: 4 October 2007):

Storm represents the future of malware. Let’s look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. …

3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. …

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. …

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect. …

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” …
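Fast flux hides a C2 domain behind a large pool of compromised hosts: each DNS lookup answers with a few addresses drawn from the pool, with a very short TTL so the answer rotates constantly. A minimal sketch of the resolver side, with made-up addresses (real flux networks also rotate the nameserver records, so-called double flux):

```python
import random

# Hypothetical pool of infected "flux agent" hosts fronting the real C2.
POOL = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]

def flux_answer(domain: str, n: int = 3, ttl: int = 60) -> dict:
    # Each query gets a fresh, short-lived subset of the pool, so
    # blocking any one address barely dents the botnet's reachability.
    return {"domain": domain, "ttl": ttl, "a_records": random.sample(POOL, n)}

if __name__ == "__main__":
    print(flux_answer("c2.example.com"))
    print(flux_answer("c2.example.com"))  # almost certainly different hosts
```

Two consecutive lookups rarely return the same set, which is what makes takedown by IP blocklisting so ineffective.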

6. Storm’s payload — the code it uses to spread — morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.
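The reason a morphing payload defeats signature matching can be shown with the simplest possible re-encoding scheme. This XOR wrapper is purely illustrative (Storm's actual packing was far more sophisticated), but it captures the principle: identical underlying code, different bytes on every morph:

```python
import os

# Illustrative only: the same payload, XOR-wrapped with a fresh random
# key each time, yields different bytes on every "morph", so a static
# byte signature of one sample won't match the next.

PAYLOAD = b"same underlying bot code"

def morph(payload: bytes) -> tuple[bytes, int]:
    key = os.urandom(1)[0] or 1  # new nonzero single-byte key each time
    return bytes(b ^ key for b in payload), key

def unmorph(blob: bytes, key: int) -> bytes:
    # XOR is its own inverse, so the stub recovers the original code.
    return bytes(b ^ key for b in blob)
```

AV vendors answer this with emulation and behavioral detection rather than byte signatures, which is exactly why the cat-and-mouse game is slower and costlier for defenders.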

7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites — anything to entice users to click on a phony link. …

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. …

9. Last month, Storm began attacking anti-spam sites focused on identifying it — spamhaus.org, 419eater and so on — and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.


The Chinese Internet threat

From Shane Harris’ “China’s Cyber-Militia” (National Journal: 31 May 2008):

Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.

One prominent expert told National Journal he believes that China’s People’s Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. “They said that, with confidence, it had been traced back to the PLA.” These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.

Bennett, whose former trade association includes some of the nation’s largest computer-security companies and who has testified before Congress on the vulnerability of information networks, also said that a blackout in February, which affected 3 million customers in South Florida, was precipitated by a cyber-hacker. That outage cut off electricity along Florida’s east coast, from Daytona Beach to Monroe County, and affected eight power-generating stations.

A second information-security expert independently corroborated Bennett’s account of the Florida blackout. According to this individual, who cited sources with direct knowledge of the investigation, a Chinese PLA hacker attempting to map Florida Power & Light’s computer infrastructure apparently made a mistake.

The industry source, who conducts security research for government and corporate clients, said that hackers in China have devoted considerable time and resources to mapping the technology infrastructure of other U.S. companies. That assertion has been backed up by the current vice chairman of the Joint Chiefs of Staff, who said last year that Chinese sources are probing U.S. government and commercial networks.

“The Chinese operate both through government agencies, as we do, but they also operate through sponsoring other organizations that are engaging in this kind of international hacking, whether or not under specific direction. It’s a kind of cyber-militia.… It’s coming in volumes that are just staggering.”

In addition to disruptive attacks on networks, officials are worried about the Chinese using long-established computer-hacking techniques to steal sensitive information from government agencies and U.S. corporations.

Brenner, the U.S. counterintelligence chief, said he knows of “a large American company” whose strategic information was obtained by its Chinese counterparts in advance of a business negotiation. As Brenner recounted the story, “The delegation gets to China and realizes, ‘These guys on the other side of the table know every bottom line on every significant negotiating point.’ They had to have got this by hacking into [the company’s] systems.”

During a trip to Beijing in December 2007, spyware programs designed to clandestinely remove information from personal computers and other electronic equipment were discovered on devices used by Commerce Secretary Carlos Gutierrez and possibly other members of a U.S. trade delegation, according to a computer-security expert with firsthand knowledge of the spyware used. Gutierrez was in China with the Joint Commission on Commerce and Trade, a high-level delegation that includes the U.S. trade representative and that meets with Chinese officials to discuss such matters as intellectual-property rights, market access, and consumer product safety. According to the computer-security expert, the spyware programs were designed to open communications channels to an outside system, and to download the contents of the infected devices at regular intervals. The source said that the computer codes were identical to those found in the laptop computers and other devices of several senior executives of U.S. corporations who also had their electronics “slurped” while on business in China.

The Chinese make little distinction between hackers who work for the government and those who undertake cyber-adventures on its behalf. “There’s a huge pool of Chinese individuals, students, academics, unemployed, whatever it may be, who are, at minimum, not discouraged from trying this out,” said Rodger Baker, a senior China analyst for Stratfor, a private intelligence firm. So-called patriotic-hacker groups have launched attacks from inside China, usually aimed at people they think have offended the country or pose a threat to its strategic interests. At a minimum the Chinese government has done little to shut down these groups, which are typically composed of technologically skilled and highly nationalistic young men.

The military is not waiting for China, or any other nation or hacker group, to strike a lethal cyber-blow. In March, Air Force Gen. Kevin Chilton, the chief of U.S. Strategic Command, said that the Pentagon has its own cyberwar plans. “Our challenge is to define, shape, develop, deliver, and sustain a cyber-force second to none,” Chilton told the Senate Armed Services Committee. He asked appropriators for an “increased emphasis” on the Defense Department’s cyber-capabilities to help train personnel to “conduct network warfare.”

The Air Force is in the process of setting up a Cyberspace Command, headed by a two-star general and comprising about 160 individuals assigned to a handful of bases. As Wired noted in a recent profile, Cyberspace Command “is dedicated to the proposition that the next war will be fought in the electromagnetic spectrum and that computers are military weapons.” The Air Force has launched a TV ad campaign to drum up support for the new command, and to call attention to cyberwar. “You used to need an army to wage a war,” a narrator in the TV spot declares. “Now all you need is an Internet connection.”


The latest on electronic voting machines

From James Turner’s interview with Dr. Barbara Simons, past President of the Association for Computing Machinery & recent appointee to the Advisory Board of the Federal Election Assistance Commission, at “A 2008 e-Voting Wrapup with Dr. Barbara Simons” (O’Reilly Media: 7 November 2008):

[Note from Scott: headers added by me]

Optical Scan: Good & Bad

And most of the voting in Minnesota was done on precinct-based optical scan machines: a paper ballot which is then fed into the optical scanner at the precinct. And the good thing about that is it gives the voter immediate feedback if there is any problem, such as over-voting, voting for two candidates in a race that allows one.
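The immediate feedback Simons describes comes down to a simple per-contest count check in the precinct scanner: reject the ballot if any contest carries more marks than it allows. A hedged sketch of that logic (generic names; real scanners implement this in firmware):

```python
# Sketch of the precinct-scanner over-vote check: a ballot is kicked
# back to the voter if any contest has more marks than its limit,
# so the voter can correct it before the ballot is cast.

def overvoted_contests(ballot: dict, limits: dict) -> list:
    # ballot maps contest name -> list of marked choices;
    # limits maps contest name -> maximum allowed marks (default 1).
    return [contest for contest, marks in ballot.items()
            if len(marks) > limits.get(contest, 1)]

if __name__ == "__main__":
    ballot = {"President": ["A", "B"], "Senate": ["C"]}
    print(overvoted_contests(ballot, {"President": 1, "Senate": 1}))
```

An empty result means the ballot is accepted; a non-empty one means the scanner returns it to the voter, which is exactly the feedback a central-count system cannot give.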

Well, there are several problems. First of all, as you say, because these things have computers in them they can be mis-programmed, there can be software bugs, and you could conceivably have malicious code; you could have the machines give you a different count from the right one. There was a situation back in the 2004 race where Gephardt, in one of the primaries, received a large number of votes after he had withdrawn from the race. And this was done using optical scan paper ballots; I don't know if it was this particular brand or not. And when they were recounted it was discovered that in fact that was the wrong result; that he had gotten fewer votes. Now, I never saw an explanation for what happened, but my guess is that whoever programmed these machines had mistakenly assigned the slot that was for Kerry to Gephardt and the slot that was for Gephardt to Kerry; that's my guess. Now I don't know if that's true, but if that did happen I think there's very little reason to believe it was malicious, because there was really nothing to be gained by doing that. So I think it was just an honest error, but of course errors can occur.

DRE Studies

Ohio conducted a major study of electronic voting machines called the EVEREST study, which was commissioned by the current Secretary of State, Jennifer Brunner, and this study uncovered huge problems with most of these voting systems, these touch screen voting systems. They were found to be insecure, unreliable, and difficult to use. A similar study had been conducted in California not too much earlier, called the Top to Bottom Review, and the Ohio study confirmed all of the problems that had been uncovered in California and found additional problems; so based on that there was a push to get rid of a lot of these machines.

States Using DREs

Maryland and Georgia are entirely touch screen states, and so is New Jersey. In Maryland they're supposed to replace them with optical scan paper ballots by 2010, but there's some concern that there may not be the funding to do that. In fact, Maryland and Georgia both use Diebold (which is now called Premier) paperless touch screen voting machines. Georgia started using them in 2002, and that's the race in which Max Cleland, the Democratic Senator and Vietnam War vet, was defeated, and I know that there are some people who questioned the outcome of that race because the polls had shown him winning. And because those machines are paperless, there was no way to check the outcome. Another thing that was of concern in Georgia in 2002 was that there were last-minute software patches being added to the machines just before the election, and the software patches hadn't really been inspected by any kind of independent agency.

More on Optical Scans

Well, certainly scanned ballots give you a good paper trail, the kind of paper trail you want. And it's not really a paper trail; it's paper ballots, because they are the ballots. What you want is for it to be easy to audit and recount an election. And I think that's something that people really hadn't taken into consideration early on, when a lot of these machines were first designed and purchased.

Disabilities

One of the things that was investigated in California when they did the Top to Bottom Review was just how easy it is for people with disabilities to use these touch screen machines. Nobody had ever done that before, and the test results came back very negative. If you look at the California results, they're very negative on these touch screen machines. In many cases people in wheelchairs had a very difficult time operating them correctly; people who were blind sometimes had trouble understanding what was being said, or things were said too loudly or too softly, or they would get confused about the instructions, or some of the ways they had for manually inputting their votes were confusing.

There are these things called ballot-generating devices, which are not what we generally refer to as touch screen machines, although they can be touch screen. The most widely used one is called the AutoMARK. The way the AutoMARK works is you take a paper ballot, one of these optical scan ballots, and you insert it into the AutoMARK, and then it operates much the same way that these other, potentially paperless, touch screen machines work. It has a headset so that a blind voter can use it, and it's possible for somebody in a wheelchair to vote, although in fact you don't have to use this if you're in a wheelchair; you can clearly vote optical scan. Somebody who has severe mobility impairments can vote on these machines using a sip-puff device, where a sip is a zero or a one and a puff is the opposite, or a yes or a no. The AutoMARK was designed with voters with disabilities in mind from early on, and it fared much better in the California tests. What it does is, at the end, when the voter with disabilities is finished, he or she will say, okay, cast my ballot. At that point the AutoMARK simply marks the optical scan ballot; it just marks it. And then you have an optical scan ballot that can be read by an optical scanner. There should be no problems with it, because it's been generated by a machine. And you have a paper ballot that can be recounted.

Problems with DREs vs Optical Scans

There are a couple of things to keep in mind when thinking about replacing these systems. The first is that with these direct recording electronic systems, or touch screen systems as they're called, the states and localities that buy them have to have maintenance contracts with the vendors, because they're very complicated systems to maintain, and of course the software is a secret. So some of these contracts are quite costly, and these are ongoing expenses with these machines. In addition, because they have software in them, they have to be securely stored and securely delivered, and those requirements create enormous problems, especially when you have to worry about delivering large numbers of machines to places prior to the election. Frequently these machines end up sitting in people's garages or in churches for periods of time when they're relatively insecure.

And you need far fewer scanners; the security issues with scanners are not as great, because you can do an audit and a recount. So altogether it just seems to me that it makes sense to move to paper-based optical scan systems with precinct scanners, so that the voter gets feedback on the ballot: if the voter votes twice for President, the ballot is kicked out and the voter can vote a new ballot.

And as I say, there is the AutoMARK for voters with disabilities to use; there's also another system called Populex, but that's not as widely used as the AutoMARK. There could be new systems coming forward.

1/2 of DREs Broken in Pennsylvania on Election Day

Editor’s Note: Dr. Simons wrote me later to say: “Many Pennsylvania polling places opened on election day with half or more of their voting machines broken — so they used emergency paper ballots until they could fix their machines.”


Serial-numbered confetti

From Bruce Schneier’s “News” (Crypto-Gram: 15 September 2007):

Taser — yep, that’s the company’s name as well as the product’s name — is now selling a personal-use version of their product. It’s called the Taser C2, and it has an interesting embedded identification technology. Whenever the weapon is fired, it also sprays some serial-number bar-coded confetti, so a firing can be traced to a weapon and — presumably — the owner.
http://www.taser.com/products/consumers/Pages/C2.aspx
