security

Bruce Schneier on wholesale, constant surveillance

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.

Those who know how to fix know how to destroy as well

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

This is true in many aspects of our society. Here’s what I said in my book, Secrets and Lies (page 389): “As technology becomes more complicated, society’s experts become more specialized. And in almost every area, those with the expertise to build society’s infrastructure also have the expertise to destroy it. Ask any doctor how to poison someone untraceably, and he can tell you. Ask someone who works in aircraft maintenance how to drop a 747 out of the sky without getting caught, and he’ll know. Now ask any Internet security professional how to take down the Internet, permanently. I’ve heard about half a dozen different ways, and I know I haven’t exhausted the possibilities.”

Bruce Schneier on security & crime economics

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Basically, you’re asking if crime pays. Most of the time, it doesn’t, and the problem is the different risk characteristics. If I make a computer security mistake — in a book, for a consulting client, at BT — it’s a mistake. It might be expensive, but I learn from it and move on. As a criminal, a mistake likely means jail time — time I can’t spend earning my criminal living. For this reason, it’s hard to improve as a criminal. And this is why there are more criminal masterminds in the movies than in real life.

Crime has been part of our society since our species invented society, and it’s not going away anytime soon. The real question is, “Why is there so much crime and hacking on the Internet, and why isn’t anyone doing anything about it?”

The answer is in the economics of Internet vulnerabilities and attacks: the organizations that are in the position to mitigate the risks aren’t responsible for the risks. This is an externality, and if you want to fix the problem you need to address it. In this essay (more here), I recommend liabilities; companies need to be liable for the effects of their software flaws. A related problem is that the Internet security market is a lemons market (discussed here), but there are strategies for dealing with that, too.

Bruce Schneier on identity theft

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

Identity theft is a problem for two reasons. One, personal identifying information is incredibly easy to get; and two, personal identifying information is incredibly easy to use. Most of our security measures have tried to solve the first problem. Instead, we need to solve the second problem. As long as it’s easy to impersonate someone if you have his data, this sort of fraud will continue to be a major problem.

The basic answer is to stop relying on authenticating the person, and instead authenticate the transaction. Credit cards are a good example of this. Credit card companies spend almost no effort authenticating the person — hardly anyone checks your signature, and you can use your card over the phone, where they can’t even check if you’re holding the card — and spend all their effort authenticating the transaction.
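
Schneier’s distinction can be made concrete with a toy sketch of transaction-level authentication: instead of re-verifying who the cardholder is, each purchase is scored against that cardholder’s own history. The features, weights, and thresholds below are invented for illustration and bear no resemblance to a real card network’s fraud models.

```python
# Toy sketch of transaction-level authentication: score each transaction
# against the cardholder's own history instead of re-verifying identity.
# All features, weights, and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    merchant_category: str

@dataclass
class CardProfile:
    typical_amount: float        # rolling average of past purchases
    home_country: str
    usual_categories: set

def risk_score(txn: Transaction, profile: CardProfile) -> float:
    score = 0.0
    if txn.amount > 5 * profile.typical_amount:
        score += 0.5                         # unusually large purchase
    if txn.country != profile.home_country:
        score += 0.3                         # purchase far from home
    if txn.merchant_category not in profile.usual_categories:
        score += 0.2                         # unfamiliar kind of merchant
    return score

def authorize(txn: Transaction, profile: CardProfile) -> str:
    # The person is never "authenticated"; the transaction is judged on its own.
    return "review" if risk_score(txn, profile) >= 0.7 else "approve"

if __name__ == "__main__":
    profile = CardProfile(typical_amount=40.0, home_country="US",
                          usual_categories={"grocery", "fuel"})
    print(authorize(Transaction(2500.0, "RO", "electronics"), profile))  # review
    print(authorize(Transaction(35.0, "US", "grocery"), profile))        # approve
```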

How the Storm botnet defeats anti-virus programs

From Lisa Vaas’ “Storm Worm Botnet Lobotomizing Anti-Virus Programs” (eWeek: 24 October 2007):

According to an Oct. 22 posting by Sophos analyst Richard Cohen, the Storm botnet – Sophos calls it Dorf, and it’s also known as Ecard malware – is dropping files that call a routine that gets Windows to tell it every time a new process is started. The malware checks the process file name against an internal list and kills the ones that match – sometimes. But Storm has taken a new twist: It now would rather leave processes running and just patch entry points of loading processes that might pose a threat to it. Then, when processes such as anti-virus programs run, they simply return a value of 0.

The strategy means that users won’t be alarmed by their anti-virus software not running. Even more ominously, the technique is designed to fool NAC (network access control) systems, which bar insecure clients from registering on a network by checking to see whether a client is running anti-virus software and whether it’s patched.

It’s the latest evidence of why Storm is “the scariest and most substantial threat” security researchers have ever seen, he said. Storm is patient, it’s resilient, it’s adaptive in that it can defeat anti-virus products in multiple ways (programmatically, it changes its signature every 30 minutes), it’s invisible because it comes with a rootkit built in and hides at the kernel level, and it’s clever enough to change every few weeks.

Hence the hush-hush nature of research around Storm. Corman said he can tell us that it’s now accurately pegged at 6 million, but he can’t tell us who came up with the figure, or how. Besides retribution, Storm’s ability to morph means that those who know how to watch it are jealously guarding their techniques. “None of the researchers wanted me to say anything about it,” Corman said. “They’re afraid of retaliation. They fear that if we disclose their unique means of finding information on Storm,” the botnet herder will change tactics yet again and the window into Storm will slam shut.

Denver International Airport, home to alien reptilians enslaving children in deep dungeons

From Jared Jacang Maher’s “DIA Conspiracies Take Off” (Denver Westword News: 30 August 2007):

Chris from Indianapolis has heard that the tunnels below DIA [Denver International Airport] were constructed as a kind of Noah’s Ark so that five million people could escape the coming earth change; shaken and earnest, he asks how someone might go about getting on the list.

Today, dozens of websites are devoted to the “Denver Airport Conspiracy,” and theorists have even nicknamed the place “Area 52.” Wikipedia presents DIA as a primary example of New World Order symbolism, above the entry about the eyeball/pyramid insignia on the one-dollar bill. And over the past two years, DIA has been the subject of books, articles, documentaries, radio interviews and countless YouTube and forum board postings, all attempting to unlock its mysteries. While the most extreme claim maintains that a massive underground facility exists below the airport where an alien race of reptilian humanoids feeds on missing children while awaiting the date of government-sponsored rapture, all of the assorted theories share a common thread: The key to decoding the truth about DIA and the sinister forces that control our reality is contained within the two Tanguma murals, “In Peace and Harmony With Nature” and “The Children of the World Dream of Peace.”

And not all these theorists are Unabomber-like crackpots uploading their hallucinations from basement lairs. Former BBC media personality David Icke, for example, has written twenty books in his quest to prove that the world is controlled by an elite group of reptilian aliens known as the Babylonian Brotherhood, whose ranks include George W. Bush, Queen Elizabeth II, the Jews and Kris Kristofferson. In various writings, lectures and interviews, he has long argued that DIA is one of many home bases for the otherworldly creatures, a fact revealed in the lizard/alien-faced military figure shown in Tanguma’s murals.

“Denver is scheduled to be the Western headquarters of the US New World Order during martial law take over,” Icke wrote in his 1999 book, The Biggest Secret. “Other contacts who have been underground at the Denver Airport claim that there are large numbers of human slaves, many of them children, working there under the control of the reptilians.”

On the other end of the conspiracy spectrum is anti-vaccination activist Dr. Len Horowitz, who believes that global viruses such as AIDS, Ebola, West Nile, tuberculosis and SARS are actually population-control plots engineered by the government. The former dentist from Florida does not speak about 2012 or reptiles — in fact, he sees Icke’s Jewish alien lizards as a Masonic plot to divert observers from the true earthly enemies: remnants of the Third Reich. He even used the mural’s sword-wielding military figure as the front cover of his 2001 book, Death in the Air.

“The Nazi alien symbolizes the Nazi-fascist links between contemporary population controllers and the military-medical-petrochemical-pharmaceutical cartel largely accountable for Hitler’s rise to power,” Horowitz explained in a 2003 interview with BookWire.

Although conspiracy theories vary widely, they all share three commonalities. “One is the belief that nothing happens by accident,” [Syracuse University professor Michael Barkun, author of the 2006 book A Culture of Conspiracy] points out. “Another is that everything is connected. And a third is that nothing is as it seems.” [Emphasis added]

[Alex] Christopher is a 65-year-old grandmother living in Alabama.

Christopher, on the other hand, was open to hearing anything. A man called her and said he had found an elevator at DIA that led to a corridor that led all the way down into a military base that also contained alien-operated concentration camps. She detailed this theory in her next book, Pandora’s Box II…

And the scale of DIA reflected this desire: It was to be the largest, most modern airport in the world. But almost as soon as ground was broken in 1989, problems cropped up. The massive public-works project was encumbered by design changes, difficult airline negotiations, allegations of cronyism in the contracting process, rumors of mismanagement and real troubles with the $700 million (and eventually abandoned) automated baggage system. Peña’s successor, Wellington Webb, was forced to push back the 1993 opening date three times. By the time DIA finally opened in February 1995, the original $1.5 billion cost had grown to $5.2 billion. Three months after that opening, the Congressional Subcommittee on Aviation held a special hearing on DIA in which one member said the Denver airport represented the “worst in government inefficiency, political behind-the-scenes deal-making, and financial mismanagement.” …

And what looked like a gamble in 1995 seems to have paid off for Denver. Today, DIA is considered one of the world’s most efficient, spacious and technologically advanced airports. It is the fifth-busiest in the nation and tenth-busiest in the world, serving some 50 million passengers in 2006.

CopyBot copies all sorts of items in Second Life

From Glyn Moody’s “The duplicitous inhabitants of Second Life” (The Guardian: 23 November 2006):

What would happen to business and society if you could easily make a copy of anything – not just MP3s and DVDs, but clothes, chairs and even houses? That may not be a problem most of us will have to confront for a while yet, but the 1.5m residents of the virtual world Second Life are already grappling with this issue.

A new program called CopyBot allows Second Life users to duplicate repeatedly certain elements of any object in the vicinity – and sometimes all of it. That’s awkward in a world where such virtual goods can be sold for real money. When CopyBot first appeared, some retailers in Second Life shut up shop, convinced that their virtual goods were about to be endlessly copied and rendered worthless. Others protested, and suggested that in the absence of scarcity, Second Life’s economy would collapse.

Instead of sending a flow of pictures of the virtual world to the user as a series of pixels – something that would be impractical to calculate – the information would be transmitted as a list of basic shapes that were re-created on the user’s PC. For example, a virtual house might be a cuboid with rectangles representing windows and doors, cylinders for the chimney stacks etc.

This meant the local world could be sent in great detail very compactly, but also that the software on the user’s machine had all the information for making a copy of any nearby object. It’s like the web: in order to display a page, the browser receives not an image of the page, but all the underlying HTML code to generate that page, which also means that the HTML of any web page can be copied perfectly. Thus CopyBot – written by a group called libsecondlife as part of an open-source project to create Second Life applications – or something like it was bound to appear one day.
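
The analogy to HTML is worth making concrete: once a client has received the full geometric description of an object, emitting a perfect copy of it is trivial. The primitive format in this sketch is invented for illustration; it is not Second Life’s actual wire protocol.

```python
# Minimal sketch of why "send shapes, not pixels" makes copying easy.
# The primitive format here is invented; it is not Second Life's real protocol.

import json

# What a server might stream to the client: a list of basic shapes ("prims")
# describing a house, rather than rendered pixels of the house.
house = [
    {"shape": "box",      "size": [10, 8, 6],     "texture": "brick"},
    {"shape": "box",      "size": [1, 2, 0.1],    "texture": "wood"},   # door
    {"shape": "cylinder", "size": [0.5, 0.5, 3],  "texture": "stone"},  # chimney
]

def render(prims):
    """Stand-in for the client-side renderer: draw each primitive."""
    for p in prims:
        print(f"drawing {p['shape']} of size {p['size']} with {p['texture']}")

def copy_object(prims):
    """Anything the client can render, it can also serialize and re-upload."""
    return json.loads(json.dumps(prims))   # a perfect, lossless duplicate

render(house)
duplicate = copy_object(house)
assert duplicate == house   # the copy is indistinguishable from the original
```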

Liberating the economy has led to a boom in creativity, just as Rosedale hoped. It is in constant expansion as people buy virtual land, and every day more than $500,000 (£263,000) is spent buying virtual objects. But the downside is that unwanted copying is potentially a threat to the substantial businesses selling virtual goods that have been built up, and a concern for the real-life companies such as IBM, Adidas and Nissan which are beginning to enter Second Life.

Just as it is probably not feasible to stop “grey goo” – the Second Life equivalent of spam, which takes the form of self-replicating objects malicious “griefers” use to gum up the main servers – so it is probably technically impossible to stop copying. Fortunately, not all aspects of an object can be duplicated. To create complex items – such as a virtual car that can be driven – you use a special programming language to code their realistic behaviour. CopyBot cannot duplicate these programs because they are never passed to the user, but run on Linden Lab’s computers.

As for the elements that you can copy, such as shape and texture, Rosedale explains: “What we’re going to do is add a lot of attribution. You’ll be able to easily see when an object or texture was first created,” – and hence if something is a later copy.

Online criminals pay the most for bank account details

From The Economist‘s “The price of online robbery” (24 November 2008):

Bank details are the most popular single item for sale by online fraudsters, according to a new report by Symantec, an internet-security firm. They are also the priciest, perhaps because the average account for which details are offered has a balance of nearly $40,000. Sales of details of credit cards make up some 30% of all goods and services on offer on “underground” servers, and nearly 60% of their value. Cards without security codes (or CVV2 numbers) are harder to exploit, and are usually cheaper.

An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers now number about two dozen, although no one outside Google knows the exact number or their locations. They come online and, under the direction of the Google File System, automatically start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google racks are assembled for Google to hold servers on their front and back sides. This effectively allows a standard rack, normally holding 40 pizza box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo hardware and software are more loosely-coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQLServer can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
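
A rough sketch of the replica-fallback behavior described above follows. The class names, the in-memory replica log, and the three-copy default are simplifications invented for illustration, not the Google File System’s real interfaces.

```python
# Rough sketch of replica fallback as described above: each file is stored on
# several machines, and a read that finds one copy unavailable consults the
# replica log and continues with another copy. Names are illustrative only.

import random

class ReplicaLog:
    """Maps each file to the storage nodes holding a copy of it."""
    def __init__(self):
        self.locations = {}

    def record(self, filename, nodes):
        self.locations[filename] = list(nodes)

    def replicas(self, filename):
        return self.locations.get(filename, [])

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.blobs = {}

    def write(self, filename, data):
        self.blobs[filename] = data

    def read(self, filename):
        if not self.alive:
            raise IOError(f"{self.name} is down")
        return self.blobs[filename]

def write_replicated(filename, data, nodes, log, copies=3):
    chosen = random.sample(nodes, copies)     # three copies in this sketch
    for node in chosen:
        node.write(filename, data)
    log.record(filename, chosen)

def read_with_fallback(filename, log):
    for node in log.replicas(filename):
        try:
            return node.read(filename)        # first healthy replica wins
        except IOError:
            continue                          # that copy is unavailable; try next
    raise IOError("all replicas unavailable")

nodes = [StorageNode(f"node{i}") for i in range(6)]
log = ReplicaLog()
write_replicated("index-shard-42", b"...", nodes, log)
log.replicas("index-shard-42")[0].alive = False   # simulate a failed machine
print(read_with_fallback("index-shard-42", log))  # still returns the data
```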

Speed and Then More Speed

Google uses commodity pizza box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed shortcuts to programming. An example is Google’s creation of a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
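
The “canned routines” described here are in the spirit of Google’s published MapReduce approach: the programmer writes ordinary per-record functions and a single library call handles the parallel plumbing. In the sketch below, Python’s standard multiprocessing pool stands in for that machinery; it is not Google’s actual library.

```python
# Sketch of the "canned routine" idea: the programmer writes an ordinary
# per-item function, and a library call fans the work out across processors.
# multiprocessing.Pool stands in here for Google's internal machinery.

from multiprocessing import Pool

def count_terms(document: str) -> dict:
    """Ordinary single-document logic; nothing here knows about parallelism."""
    counts = {}
    for word in document.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def merge(partials):
    """Combine the per-document results into one index-style total."""
    total = {}
    for partial in partials:
        for word, n in partial.items():
            total[word] = total.get(word, 0) + n
    return total

if __name__ == "__main__":
    documents = ["the quick brown fox", "the lazy dog", "the fox jumps"]
    with Pool() as pool:                       # the "canned" parallel plumbing
        partial_counts = pool.map(count_terms, documents)
    print(merge(partial_counts))               # e.g. {'the': 3, 'fox': 2, ...}
```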

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives, which fail at the rate of one per 1,000 drives per day (see the arithmetic sketch after this list).
  • Power supplies which fail at a lower rate.
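
To see why that drive-failure rate forces the kind of automatic recovery described earlier, here is the back-of-the-envelope arithmetic; the fleet size is an assumed, illustrative number, not a figure from the book.

```python
# Back-of-the-envelope arithmetic for the drive-failure rate quoted above.
# The fleet size is an assumed, illustrative number.

failure_rate = 1 / 1000          # one failure per 1,000 drives per day
fleet_size = 100_000             # assumed number of drives across data centers

failures_per_day = fleet_size * failure_rate
print(failures_per_day)          # 100.0 -> roughly a hundred dead drives a day,
                                 # which is why recovery must need no human help
```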

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.

Debt collection business opens up huge security holes

From Mark Gibbs’ “Debt collectors mining your secrets” (Network World: 19 June 2008):

[Bud Hibbs, a consumer advocate] told me any debt collection company has access to an incredible amount of personal data from hundreds of possible sources and the motivation to mine it.

What intrigued me after talking with Hibbs was how the debt collection business works. It turns out pretty much anyone can set up a collections operation by buying a package of bad debts for around $40,000, hiring collectors who will work on commission, and applying for the appropriate city and state licenses. Once a company is set up it can buy access to Axciom and Experian and other databases and start hunting down defaulters.

So, here we have an entire industry dedicated to buying, selling and mining your personal data that has been derived from who knows where. Even better, because the large credit reporting companies use a lot of outsourcing for data entry, much of this data has probably been processed in India or Pakistan where, of course, the data security and integrity are guaranteed.

Hibbs points out that, with no prohibitions on sending data abroad and with the likes of, say, the Russian mafia being interested in the personal information, the probability of identity theft from these foreign data centers is enormous.

More problems with voting, election 2008

From Ian Urbina’s “High Turnout May Add to Problems at Polling Places” (The New York Times: 3 November 2008):

Two-thirds of voters will mark their choice with a pencil on a paper ballot that is counted by an optical scanning machine, a method considered far more reliable and verifiable than touch screens. But paper ballots bring their own potential problems, voting experts say.

The scanners can break down, leading to delays and confusion for poll workers and voters. And the paper ballots of about a third of all voters will be counted not at the polling place but later at a central county location. That means that if a voter has made an error — not filling in an oval properly, for example, a mistake often made by the kind of novice voters who will be flocking to the polls — it will not be caught until it is too late. As a result, those ballots will be disqualified.

About a fourth of voters will still use electronic machines that offer no paper record to verify that their choice was accurately recorded, even though these machines are vulnerable to hacking and crashes that drop votes. The machines will be used by most voters in Indiana, Kentucky, Pennsylvania, Tennessee, Texas and Virginia. Eight other states, including Georgia, Maryland, New Jersey and South Carolina, will use touch-screen machines with no paper trails.

Florida has switched to its third ballot system in the past three election cycles, and glitches associated with the transition have caused confusion at early voting sites, election officials said. The state went back to using scanned paper ballots this year after touch-screen machines in Sarasota County failed to record any choice for 18,000 voters in a fiercely contested House race in 2006.

Voters in Colorado, Tennessee, Texas and West Virginia have reported using touch-screen machines that at least initially registered their choice for the wrong candidate or party.

Most states have passed laws requiring paper records of every vote cast, which experts consider an important safeguard. But most of them do not have strong audit laws to ensure that machine totals are vigilantly checked against the paper records.

In Ohio, Secretary of State Jennifer Brunner sued the maker of the touch-screen equipment used in half of her state’s 88 counties after an investigation showed that the machines “dropped” votes in recent elections when memory cards were uploaded to computer servers.

A report released last month by several voting rights groups found that eight of the states using touch-screen machines, including Colorado and Virginia, had no guidance or requirement to stock emergency paper ballots at the polls if the machines broke down.

Matthew, the blind phone phreaker

From Kevin Poulsen’s “Teenage Hacker Is Blind, Brash and in the Crosshairs of the FBI” (Wired: 29 February 2008):

At 4 in the morning of May 1, 2005, deputies from the El Paso County Sheriff’s Office converged on the suburban Colorado Springs home of Richard Gasper, a TSA screener at the local Colorado Springs Municipal Airport. They were expecting to find a desperate, suicidal gunman holding Gasper and his daughter hostage.

“I will shoot,” the gravelly voice had warned, in a phone call to police minutes earlier. “I’m not afraid. I will shoot, and then I will kill myself, because I don’t care.”

But instead of a gunman, it was Gasper himself who stepped into the glare of police floodlights. Deputies ordered Gasper’s hands up and held him for 90 minutes while searching the house. They found no armed intruder, no hostages bound in duct tape. Just Gasper’s 18-year-old daughter and his baffled parents.

A federal Joint Terrorism Task Force would later conclude that Gasper had been the victim of a new type of nasty hoax, called “swatting,” that was spreading across the United States. Pranksters were phoning police with fake murders and hostage crises, spoofing their caller IDs so the calls appear to be coming from inside the target’s home. The result: police SWAT teams rolling to the scene, sometimes bursting into homes, guns drawn.

Now the FBI thinks it has identified the culprit in the Colorado swatting as a 17-year-old East Boston phone phreak known as “Li’l Hacker.” Because he’s underage, Wired.com is not reporting Li’l Hacker’s last name. His first name is Matthew, and he poses a unique challenge to the federal justice system, because he is blind from birth.

Interviews by Wired.com with Matt and his associates, and a review of court documents, FBI reports and audio recordings, paint a picture of a young man with an uncanny talent for quick telephone con jobs. Able to commit vast amounts of information to memory instantly, Matt has mastered the intricacies of telephone switching systems, while developing an innate understanding of human psychology and organization culture — knowledge that he uses to manipulate his patsies and torment his foes.

Matt says he ordered phone company switch manuals off the internet and paid to have them translated into Braille. He became a regular caller to internal telephone company lines, where he’d masquerade as an employee to perform tricks like tracing telephone calls, getting free phone features, obtaining confidential customer information and disconnecting his rivals’ phones.

It was, relatively speaking, mild stuff. The teen, though, soon fell in with a bad crowd. The party lines were dominated by a gang of half-a-dozen miscreants who informally called themselves the “Wrecking Crew” and “The Cavalry.”

By then, Matt’s reputation had taken on a life of its own, and tales of some of his hacks — perhaps apocryphal — are now legends. According to Daniels, he hacked his school’s PBX so that every phone would ring at once. Another time, he took control of a hotel elevator, sending it up and down over and over again. One story has it that Matt phoned a telephone company frame room worker at home in the middle of the night, and persuaded him to get out of bed and return to work to disconnect someone’s phone.

How con artists use psychology to work

From Paul J. Zak’s “How to Run a Con” (Psychology Today: 13 November 2008):

When I was in high school, I took a job at an ARCO gas station on the outskirts of Santa Barbara, California. At the time, I drove a 1967 Mustang hotrod and thought I might pick up some tips and cheap parts by working around cars after school. You see a lot of interesting things working the night shift in a sketchy neighborhood. I constantly saw people making bad decisions: drunk drivers, gang members, unhappy cops, and con men. In fact, I was the victim of a classic con called “The Pigeon Drop.” If we humans have such big brains, how can we get conned?

Here’s what happened to me. One slow Sunday afternoon, a man comes out of the restroom with a pearl necklace in his hand. “Found it on the bathroom floor,” he says. He followed with “Geez, looks nice – I wonder who lost it?” Just then, the gas station’s phone rings and a man asks if anyone found a pearl necklace that he had purchased as a gift for his wife. He offers a $200 reward for the necklace’s return. I tell him that a customer found it. “OK,” he says, “I’ll be there in 30 minutes.” I give him the ARCO address and he gives me his phone number. The man who found the necklace hears all this but tells me he is running late for a job interview and cannot wait for the other man to arrive.

Huum, what to do? The man with the necklace said “Why don’t I give you the necklace and we split the reward?” The greed-o-meter goes off in my head, suppressing all rational thought. “Yeah, you give me the necklace to hold and I’ll give you $100” I suggest. He agrees. Since high school kids working at gas stations don’t have $100, I take money out of the cash drawer to complete the transaction.

You can guess the rest. The man with the lost necklace doesn’t come and never answers my many calls. After about an hour, I call the police. The “pearl” necklace was a two dollar fake and the number I was calling went to a pay phone nearby. I had to fess up to my boss and pay back the money with my next paycheck.

Why did this con work? Let’s do some neuroscience. While the primary motivator from my perspective was greed, the pigeon drop cleverly engages THOMAS (The Human Oxytocin Mediated Attachment System). … THOMAS is a powerful brain circuit that releases the neurochemical oxytocin when we are trusted and induces a desire to reciprocate the trust we have been shown–even with strangers.

The key to a con is not that you trust the conman, but that he shows he trusts you. Conmen ply their trade by appearing fragile or needing help, by seeming vulnerable. Because of THOMAS, the human brain makes us feel good when we help others–this is the basis for attachment to family and friends and cooperation with strangers. “I need your help” is a potent stimulus for action.

A botnet with a contingency plan

From Gregg Keizer’s “Massive botnet returns from the dead, starts spamming” (Computerworld: 26 November 2008):

A big spam-spewing botnet shut down two weeks ago has been resurrected, security researchers said today, and is again under the control of criminals.

The “Srizbi” botnet returned from the dead late Tuesday, said Fengmin Gong, chief security content officer at FireEye Inc., when the infected PCs were able to successfully reconnect with new command-and-control servers, which are now based in Estonia.

Srizbi was knocked out more than two weeks ago when McColo Corp., a hosting company that had been accused of harboring a wide range of criminal activities, was yanked off the Internet by its upstream service providers. With McColo down, PCs infected with Srizbi and other bot Trojan horses were unable to communicate with their command servers, which had been hosted by McColo. As a result, spam levels dropped precipitously.

But as other researchers noted last week, Srizbi had a fallback strategy. In the end, that strategy paid off for the criminals who control the botnet.

According to Gong, when Srizbi bots were unable to connect with the command-and-control servers hosted by McColo, they tried to connect with new servers via domains that were generated on the fly by an internal algorithm. FireEye reverse-engineered Srizbi, rooted out that algorithm and used it to predict, then preemptively register, several hundred of the possible routing domains.

The domain names, said Gong, were generated on a three-day cycle, and for a while, FireEye was able to keep up — and effectively block Srizbi’s handlers from regaining control.

“We have registered a couple hundred domains,” Gong said, “but we made the decision that we cannot afford to spend so much money to keep registering so many [domain] names.”

Once FireEye stopped preempting Srizbi’s makers, the latter swooped in and registered the five domains in the next cycle. Those domains, in turn, pointed Srizbi bots to the new command-and-control servers, which then immediately updated the infected machines to a new version of the malware.
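
The dynamic described above, in which both the botmaster and anyone who has reverse-engineered the bot can compute the same rolling set of rendezvous domains, is easy to sketch. The hashing scheme, domain length, and TLD below are invented for illustration; this is not Srizbi’s actual algorithm.

```python
# Illustrative sketch of a date-seeded domain generation algorithm (DGA).
# Both the bot and anyone who has reverse-engineered it can compute the same
# domains, which is what let FireEye pre-register them. The hash scheme and
# TLD below are invented; this is not Srizbi's actual algorithm.

import hashlib
from datetime import date, timedelta

def candidate_domains(day: date, count: int = 5) -> list:
    # Bucket dates into three-day cycles so the domains change every 3 days.
    cycle = day.toordinal() // 3
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{cycle}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# The botmaster registers these names just before the cycle begins...
print(candidate_domains(date.today()))
# ...while a defender who knows the algorithm can compute, and try to
# register, the names for future cycles ahead of time.
print(candidate_domains(date.today() + timedelta(days=3)))
```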

The NSA and threats to privacy

From James Bamford’s “Big Brother Is Listening” (The Atlantic: April 2006):

This legislation, the 1978 Foreign Intelligence Surveillance Act, established the FISA court—made up of eleven judges handpicked by the chief justice of the United States—as a secret part of the federal judiciary. The court’s job is to decide whether to grant warrants requested by the NSA or the FBI to monitor communications of American citizens and legal residents. The law allows the government up to three days after it starts eavesdropping to ask for a warrant; every violation of FISA carries a penalty of up to five years in prison. Between May 18, 1979, when the court opened for business, until the end of 2004, it granted 18,742 NSA and FBI applications; it turned down only four outright.

Such facts worry Jonathan Turley, a George Washington University law professor who worked for the NSA as an intern while in law school in the 1980s. The FISA “courtroom,” hidden away on the top floor of the Justice Department building (because even its location is supposed to be secret), is actually a heavily protected, windowless, bug-proof installation known as a Sensitive Compartmented Information Facility, or SCIF.

It is true that the court has been getting tougher. From 1979 through 2000, it modified only two out of 13,087 warrant requests. But from the start of the Bush administration, in 2001, the number of modifications increased to 179 out of 5,645 requests. Most of those—173—involved what the court terms “substantive modifications.”

Contrary to popular perception, the NSA does not engage in “wiretapping”; it collects signals intelligence, or “sigint.” In contrast to the image we have from movies and television of an FBI agent placing a listening device on a target’s phone line, the NSA intercepts entire streams of electronic communications containing millions of telephone calls and e-mails. It runs the intercepts through very powerful computers that screen them for particular names, telephone numbers, Internet addresses, and trigger words or phrases. Any communications containing flagged information are forwarded by the computer for further analysis.
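
Mechanically, the screening Bamford describes amounts to matching a stream of intercepted messages against watch lists and forwarding whatever matches for analysis. The sketch below is a deliberately simplified illustration of that kind of filter, with invented watch-list entries; it does not describe any real system.

```python
# Deliberately simplified sketch of watch-list screening over a message stream:
# anything containing a flagged name, number, address, or phrase is forwarded.
# This illustrates the idea only; it does not describe any real system.

WATCH_LIST = {"microturbo", "c-802", "+98-21-5550000"}   # invented entries

def flagged(message: str) -> set:
    text = message.lower()
    return {term for term in WATCH_LIST if term in text}

def screen(stream):
    for msg in stream:
        hits = flagged(msg)
        if hits:
            yield msg, hits          # forwarded for further analysis

intercepts = [
    "Order confirmed with Microturbo for engine parts",
    "Lunch at noon on Friday?",
]
for msg, hits in screen(intercepts):
    print(f"FLAGGED ({', '.join(hits)}): {msg}")
```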

Names and information on the watch lists are shared with the FBI, the CIA, the Department of Homeland Security, and foreign intelligence services. Once a person’s name is in the files, even if nothing incriminating ever turns up, it will likely remain there forever. There is no way to request removal, because there is no way to confirm that a name is on the list.

In December of 1997, in a small factory outside the southern French city of Toulouse, a salesman got caught in the NSA’s electronic web. Agents working for the NSA’s British partner, the Government Communications Headquarters, learned of a letter of credit, valued at more than $1.1 million, issued by Iran’s defense ministry to the French company Microturbo. According to NSA documents, both the NSA and the GCHQ concluded that Iran was attempting to secretly buy from Microturbo an engine for the embargoed C-802 anti-ship missile. Faxes zapping back and forth between Toulouse and Tehran were intercepted by the GCHQ, which sent them on not just to the NSA but also to the Canadian and Australian sigint agencies, as well as to Britain’s MI6. The NSA then sent the reports on the salesman making the Iranian deal to a number of CIA stations around the world, including those in Paris and Bonn, and to the U.S. Commerce Department and the Customs Service. Probably several hundred people in at least four countries were reading the company’s communications.

Such events are central to the current debate involving the potential harm caused by the NSA’s warrantless domestic eavesdropping operation. Even though the salesman did nothing wrong, his name made its way into the computers and onto the watch lists of intelligence, customs, and other secret and law-enforcement organizations around the world. Maybe nothing will come of it. Maybe the next time he tries to enter the United States or Britain he will be denied, without explanation. Maybe he will be arrested. As the domestic eavesdropping program continues to grow, such uncertainties may plague innocent Americans whose names are being run through the supercomputers even though the NSA has not met the established legal standard for a search warrant. It is only when such citizens are turned down while applying for a job with the federal government—or refused when seeking a Small Business Administration loan, or turned back by British customs agents when flying to London on vacation, or even placed on a “no-fly” list—that they will realize that something is very wrong. But they will never learn why.

General Michael Hayden, director of the NSA from 1999 to 2005 and now principal deputy director of national intelligence, noted in 2002 that during the 1990s, e-communications “surpassed traditional communications. That is the same decade when mobile cell phones increased from 16 million to 741 million—an increase of nearly 50 times. That is the same decade when Internet users went from about 4 million to 361 million—an increase of over 90 times. Half as many land lines were laid in the last six years of the 1990s as in the whole previous history of the world. In that same decade of the 1990s, international telephone traffic went from 38 billion minutes to over 100 billion. This year, the world’s population will spend over 180 billion minutes on the phone in international calls alone.”

Intercepting communications carried by satellite is fairly simple for the NSA. The key conduits are the thirty Intelsat satellites that ring the Earth, 22,300 miles above the equator. Many communications from Europe, Africa, and the Middle East to the eastern half of the United States, for example, are first uplinked to an Intelsat satellite and then downlinked to AT&T’s ground station in Etam, West Virginia. From there, phone calls, e-mails, and other communications travel on to various parts of the country. To listen in on that rich stream of information, the NSA built a listening post fifty miles away, near Sugar Grove, West Virginia. Consisting of a group of very large parabolic dishes, hidden in a heavily forested valley and surrounded by tall hills, the post can easily intercept the millions of calls and messages flowing every hour into the Etam station. On the West Coast, high on the edge of a bluff overlooking the Okanogan River, near Brewster, Washington, is the major commercial downlink for communications to and from Asia and the Pacific. Consisting of forty parabolic dishes, it is reportedly the largest satellite antenna farm in the Western Hemisphere. A hundred miles to the south, collecting every whisper, is the NSA’s western listening post, hidden away on a 324,000-acre Army base in Yakima, Washington. The NSA posts collect the international traffic beamed down from the Intelsat satellites over the Atlantic and Pacific. But each also has a number of dishes that appear to be directed at domestic telecommunications satellites.

Until recently, most international telecommunications flowing into and out of the United States traveled by satellite. But faster, more reliable undersea fiber-optic cables have taken the lead, and the NSA has adapted. The agency taps into the cables that don’t reach our shores by using specially designed submarines, such as the USS Jimmy Carter, to attach a complex “bug” to the cable itself. This is difficult, however, and undersea taps are short-lived because the batteries last only a limited time. The fiber-optic transmission cables that enter the United States from Europe and Asia can be tapped more easily at the landing stations where they come ashore. With the acquiescence of the telecommunications companies, it is possible for the NSA to attach monitoring equipment inside the landing station and then run a buried encrypted fiber-optic “backhaul” line to NSA headquarters at Fort Meade, Maryland, where the river of data can be analyzed by supercomputers in near real time.

Tapping into the fiber-optic network that carries the nation’s Internet communications is even easier, as much of the information transits through just a few “switches” (similar to the satellite downlinks). Among the busiest are MAE East (Metropolitan Area Ethernet), in Vienna, Virginia, and MAE West, in San Jose, California, both owned by Verizon. By accessing the switch, the NSA can see who’s e-mailing with whom over the Internet cables and can copy entire messages. Last September, the Federal Communications Commission further opened the door for the agency. The 1994 Communications Assistance for Law Enforcement Act required telephone companies to rewire their networks to provide the government with secret access. The FCC has now extended the act to cover “any type of broadband Internet access service” and the new Internet phone services—and ordered company officials never to discuss any aspect of the program.

The National Security Agency was born in absolute secrecy. Unlike the CIA, which was created publicly by a congressional act, the NSA was brought to life by a top-secret memorandum signed by President Truman in 1952, consolidating the country’s various military sigint operations into a single agency. Even its name was secret, and only a few members of Congress were informed of its existence—and they received no information about some of its most important activities. Such secrecy has lent itself to abuse.

During the Vietnam War, for instance, the agency was heavily involved in spying on the domestic opposition to the government. Many of the Americans on the watch lists of that era were there solely for having protested against the war. … Even so much as writing about the NSA could land a person a place on a watch list.

For instance, during World War I, the government read and censored thousands of telegrams—the e-mail of the day—sent hourly by telegraph companies. Though the end of the war brought with it a reversion to the Radio Act of 1912, which guaranteed the secrecy of communications, the State and War Departments nevertheless joined together in May of 1919 to create America’s first civilian eavesdropping and code-breaking agency, nicknamed the Black Chamber. By arrangement, messengers visited the telegraph companies each morning and took bundles of hard-copy telegrams to the agency’s offices across town. These copies were returned before the close of business that day.

A similar tale followed the end of World War II. In August of 1945, President Truman ordered an end to censorship. That left the Signal Security Agency (the military successor to the Black Chamber, which was shut down in 1929) without its raw intelligence—the telegrams provided by the telegraph companies. The director of the SSA sought access to cable traffic through a secret arrangement with the heads of the three major telegraph companies. The companies agreed to turn all telegrams over to the SSA, under a plan code-named Operation Shamrock. It ran until the government’s domestic spying programs were publicly revealed, in the mid-1970s.

Frank Church, the Idaho Democrat who led the first probe into the National Security Agency, warned in 1975 that the agency’s capabilities

“could be turned around on the American people, and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it is done, is within the reach of the government to know. Such is the capacity of this technology.”

The purpose of the Storm botnet? To send spam

From Tim Wilson’s “Researchers Link Storm Botnet to Illegal Pharmaceutical Sales” (DarkReading: 11 June 2008):

“Our previous research revealed an extremely sophisticated supply chain behind the illegal pharmacy products shipped after orders were placed on botnet-spammed Canadian pharmacy Websites. But the relationship between the technology-focused botnet masters and the global supply chain organizations was murky until now,” said Patrick Peterson, vice president of technology at IronPort and a Cisco fellow.

“Our research has revealed a smoking gun that shows that Storm and other botnet spam generates commissionable orders, which are then fulfilled by the supply chains, generating revenue in excess of $150 million per year.”

In fact, the “Canadian Pharmacy” Website, which many Storm emails promote, is estimated to have sales of $150 million per year by itself, the report says. The site offers a customer service phone number that goes into voice mail and buyers usually do receive the drugs — but the shipments include counterfeit pharmaceuticals from China and India, rather than brand-name drugs from Canada, IronPort says.

IronPort’s research revealed that more than 80 percent of Storm botnet spam advertises online pharmacy brands. This spam is sent by millions of consumers’ PCs, which have been infected by the Storm worm via a multitude of sophisticated social engineering tricks and Web-based exploits. Further investigation revealed that spam templates, “spamvertized” URLs, Website designs, credit card processing, product fulfillment, and customer support were being provided by a Russian criminal organization that operates in conjunction with Storm, IronPort says.

However, IronPort-sponsored pharmacological testing revealed that two-thirds of the shipments contained the active ingredient but not at the correct dosage, while the rest were placebos.

The purpose of the Storm botnet? To send spam Read More »

Offline copy protection in games

From Adam Swiderski’s “A History of Copy Protection” (Edge: 9 June 2008):

Fortunately, the games industry is creative, and thus it was that the offline copy protection was born and flourished. One of its most prevalent forms was an in-game quiz that would require gamers to refer to the manual for specific information – you’d be asked, for example, to enter the third word in the fourth paragraph on page 14. Some titles took a punishing approach to this little Q & A: SSI’s Star Command required a documentation check prior to each in-game save, while Master of Orion would respond to a failed manual check by gradually becoming so difficult that it was impossible to win. Perhaps the most notorious example of this method is Sierra’s King’s Quest III, in which lengthy passages of potion recipes and other information had to be reproduced from the manual. One typo, and you were greeted with a “Game Over” screen.

Other developers eschewed straight manual checks for in-box tools and items that were more integrated into the games with which they shipped, especially once photocopiers became more accessible and allowed would-be pirates to quickly and easily duplicate documentation. LucasArts made a name for itself in this field, utilizing such gems as the Monkey Island series’ multi-level code wheels. Other games, like Maniac Mansion and Indiana Jones and the Last Crusade, shipped with the kind of color-masked text one would find in old-school decoder rings; the documents could not be reproduced by the photocopiers of the day and would require the application of a transparent red plastic filter in order to get at their contents.
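
As a minimal sketch of the manual-lookup quiz described above, the snippet below models the check in Python. The manual contents, page number, and prompt wording are invented for illustration; no shipped title worked exactly this way.

```python
# Illustrative "look it up in the manual" copy-protection check.
# The manual is modeled as {page_number: [paragraphs]}; a real game shipped
# this text only in the printed documentation, not on disk.
import random

MANUAL = {
    14: [
        "Thank you for purchasing this game.",
        "Insert disk one and type RUN to begin.",
        "Saving is only possible at an inn or a campsite.",
        "Beware the swamp troll guarding the eastern bridge.",
    ],
}

def doc_check():
    page = random.choice(list(MANUAL))
    para_idx = random.randrange(len(MANUAL[page]))
    words = MANUAL[page][para_idx].split()
    word_idx = random.randrange(len(words))
    prompt = (f"Enter word {word_idx + 1} of paragraph {para_idx + 1} "
              f"on page {page}: ")
    answer = input(prompt).strip().strip(".,").lower()
    return answer == words[word_idx].strip(".,").lower()

if __name__ == "__main__":
    if doc_check():
        print("Check passed. Loading game...")
    else:
        print("Game Over.")  # the unforgiving King's Quest III approach
```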

Offline copy protection in games Read More »

How movies are moved around on botnets

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Figure 2.11 illustrates the use of botnets for selling stolen intellectual property, in this case movies, TV shows, or video. The diagram is based on information from the Pyramid of Internet Piracy created by the Motion Picture Association of America (MPAA) and an actual case. To start the process, a supplier rips a movie or software from an existing DVD or uses a camcorder to record a first-run movie in the theaters. These are either burnt to DVDs to be sold on the black market or they are sold or provided to a Release Group. The Release Group is likely to be an organized crime group, excuse me, business associates who wish to invest in the entertainment industry. I am speculating that the Release Group engages (hires) a botnet operator that can meet their delivery and performance specifications. The botherder then commands the botnet clients to retrieve the media from the supplier and store it in a participating botnet client. These botnet clients may be qualified according to the system processor speed and the nature of the Internet connection. The huge Internet pipe, fast connection, and lax security at most universities make them a prime target for this form of botnet application. The MPAA calls these clusters of high-speed locations “Topsites.”

. . .

According to the MPAA, 44 percent of all movie piracy is attributed to college students. Therefore, it makes sense that the Release Groups would try to use university botnet clients as Topsites. The next groups in the chain are called Facilitators. They operate Web sites and search engines and act as Internet directories. These may be Web sites for which you pay a monthly fee or a fee per download. Finally, individuals download the films for their own use or list them for download via peer-to-peer sharing applications like Gnutella or BitTorrent.
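
The qualification of botnet clients as Topsites lends itself to a small illustration. The sketch below is not from the book or the MPAA; it simply ranks hypothetical infected hosts by the criteria the excerpt mentions (processor speed, connection bandwidth, and whether the host sits on a university network).

```python
# Illustrative ranking of infected hosts as candidate "Topsites" for storing
# pirated media. Hosts, thresholds, and field names are invented for the sketch.
from dataclasses import dataclass

@dataclass
class Bot:
    host_id: str
    cpu_ghz: float
    bandwidth_mbps: float
    on_university_network: bool

def topsite_candidates(bots, min_cpu=2.0, min_bandwidth=100.0):
    """Return bots ordered by suitability for hosting large media files."""
    qualified = [b for b in bots
                 if b.cpu_ghz >= min_cpu and b.bandwidth_mbps >= min_bandwidth]
    # Prefer university hosts: big pipes and, per the excerpt, lax security.
    return sorted(qualified,
                  key=lambda b: (b.on_university_network, b.bandwidth_mbps),
                  reverse=True)

bots = [
    Bot("dorm-pc-17", 3.0, 900.0, True),
    Bot("home-dsl-04", 2.4, 8.0, False),
    Bot("lab-box-02", 2.8, 450.0, True),
]
for b in topsite_candidates(bots):
    print(b.host_id)
# dorm-pc-17, lab-box-02 (home-dsl-04 fails the bandwidth cut)
```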

How movies are moved around on botnets Read More »

Money involved in adware & clicks4hire schemes

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Dollar-Revenue and GimmyCash are two companies that have paid for installation of their Adware programs. Each has a pay rate formula based on the country of installation. Dollar-Revenue pays 30 cents for installing their adware in a U.S. Web site, 20 cents for a Canadian Web site, 10 cents for a U.K. Web site, 1 cent for a Chinese Web site, and 2 cents for all other Web sites. GimmyCash.com pays 40 cents for U.S. and Canadian Web site installs, 20 cents for 16 European countries, and 2 cents for everywhere else. In addition, GimmyCash pays 5 percent of the earnings of any webmaster you refer to GimmyCash.
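
The pay-rate schedules above can be written out directly. This sketch uses the per-install figures quoted in the excerpt; the country codes, the membership of the “16 European countries,” and the treatment of referral earnings are illustrative assumptions.

```python
# Per-install adware payouts by country, as described in the excerpt (in USD).
DOLLAR_REVENUE_RATES = {"US": 0.30, "CA": 0.20, "UK": 0.10, "CN": 0.01}
DOLLAR_REVENUE_DEFAULT = 0.02

GIMMYCASH_RATES = {"US": 0.40, "CA": 0.40}
GIMMYCASH_EU_RATE = 0.20          # applies to 16 European countries
GIMMYCASH_DEFAULT = 0.02
GIMMYCASH_REFERRAL_CUT = 0.05     # 5% of a referred webmaster's earnings

# Hypothetical membership of the 16 European countries (not specified in the excerpt).
GIMMYCASH_EU = {"DE", "FR", "IT", "ES", "NL", "BE", "AT", "SE",
                "DK", "FI", "NO", "PT", "IE", "CH", "PL", "GR"}

def dollar_revenue_payout(installs_by_country):
    """Total payout for a dict like {"US": 1000, "CN": 5000}."""
    return sum(count * DOLLAR_REVENUE_RATES.get(cc, DOLLAR_REVENUE_DEFAULT)
               for cc, count in installs_by_country.items())

def gimmycash_payout(installs_by_country, referred_earnings=0.0):
    """Total payout, plus the 5% cut of any referred webmaster's earnings."""
    total = 0.0
    for cc, count in installs_by_country.items():
        if cc in GIMMYCASH_RATES:
            rate = GIMMYCASH_RATES[cc]
        elif cc in GIMMYCASH_EU:
            rate = GIMMYCASH_EU_RATE
        else:
            rate = GIMMYCASH_DEFAULT
        total += count * rate
    return total + GIMMYCASH_REFERRAL_CUT * referred_earnings

print(dollar_revenue_payout({"US": 10_000, "CN": 50_000}))  # 3500.0
print(gimmycash_payout({"US": 10_000, "DE": 5_000}))        # 5000.0
```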

Money involved in adware & clicks4hire schemes Read More »

The various participants in phishing schemes

From Chapter 2: Botnets Overview of Craig A. Schiller’s Botnets: The Killer Web App (Syngress: 2007):

Christopher Abad provides insight into the phishing economy in an article published online by FirstMonday.org (http://www.firstmonday.org/issues/issue10_9/abad/). The article, “The economy of phishing: A survey of the operations of the phishing market,” reveals the final phase of the phishing life cycle, called cashing. The cashers are usually not the botherders or the phishers; the phishers are simply providers of credential goods to the cashers. Cashers buy the credential goods from the phishers, either taking a commission on the funds they extract or paying a price based on the quality and completeness of the credentials, which financial institution they come from, and the balance in the victim’s account. A high-balance, verified, full-credential account can be purchased for up to $100. Full credentials means that you have the credit card number, bank and routing numbers, the expiration date, the security verification code (CVV2) on the back of the card, the ATM PIN, and the current balance. Credit card numbers for a financial institution selected by the supplier can be bought for 50 cents per account. The casher’s commission on this transaction may run as high as 70 percent. When the deal calls for commissions to be paid in cash, the vehicle of choice is Western Union.

The continuation of phishing attacks depends largely on the ability of the cashers to convert the information into cash. The preferred method is to use the credential information to create duplicate ATM cards and use the cards to withdraw cash from ATM terminals. Not surprisingly, the demand for these cards leans heavily in favor of banks that provide inadequate protection of their ATM cards. Institutions like Bank of America are almost nonexistent in the phisher marketplace due to the strong encryption (triple DES) used to protect information on their ATM cards.
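
As a back-of-the-envelope illustration of the cashing economics described in the excerpt, the sketch below plugs in the quoted figures (up to $100 for a full-credential account, 50 cents for a bare card number, a commission of up to 70 percent). The two settlement modes and the example amount are assumptions made for illustration.

```python
# Rough model of the phishing "cashing" phase, using prices from the excerpt.
FULL_CREDENTIAL_PRICE_MAX = 100.00   # verified, high-balance, full credentials
CARD_NUMBER_PRICE = 0.50             # bare card number, institution chosen by supplier
CASHER_COMMISSION_MAX = 0.70         # casher's cut of funds extracted, up to 70%

def casher_net_commission(funds_extracted, commission=CASHER_COMMISSION_MAX):
    """Mode 1 (assumed): the casher works on commission, keeping a cut of what is drained."""
    return funds_extracted * commission

def casher_net_purchase(funds_extracted, credential_cost=FULL_CREDENTIAL_PRICE_MAX):
    """Mode 2 (assumed): the casher buys the credentials outright and keeps all funds drained."""
    return funds_extracted - credential_cost

# Example: $2,000 withdrawn from a compromised account via duplicated ATM cards.
print(casher_net_commission(2000))  # 1400.0
print(casher_net_purchase(2000))    # 1900.0
```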

The various participants in phishing schemes Read More »