
Even worse spam is coming

From Spam Daily News’s “Spam zombies from outer space”:

Spammers could soon use zombie computers in a totally new way. Infected computers could run programs that spy into a person’s email, mine it for information, and generate realistic-looking replies.

John Aycock, an assistant professor of computer science at the University of Calgary, and his student Nathan Friess conducted new research that shows it is possible to create a new type of spam that would likely bypass even the best spam filters and trick experienced computer users who would normally delete suspicious email messages.

There are two key reasons why spam is suspicious to anti-spam filters and human targets alike. First, it often comes from an unrecognized source. Second, it doesn’t look right.

The evolution of spam zombies will change this. These new zombies will mine corpora of email they find on infected machines, using this data to automatically forge and send improved, convincing spam to others.

The next generation of spam could be sent from your friends’ and colleagues’ email addresses – and even mimic patterns that mark their messages as their own (such as common abbreviations, misspellings, capitalization, and personal signatures) – making you more likely to click on a Web link or open an attachment.

What features can be easily extracted from an email corpus? There are four categories:

1. Email addresses. The victim’s email address and any other email aliases they have can be extracted, as can the email addresses of people with whom the victim corresponds.

2. Information related to the victim’s email program and its configuration. For example, the User-Agent, the message encoding as text and/or HTML, automatically-appended signature file, the quoting style used for replies and forwarded messages, etc.

3. Vocabulary. The normal vocabulary used by the victim and the people with whom they correspond.

4. Email style.

  • Line length, as some people never break lines;
  • Capitalization, or lack thereof;
  • Manually-added signatures, often the victim’s name;
  • Abbreviations, e.g., “u” for “you”;
  • Misspellings and typos;
  • Inappropriate synonyms, e.g., “there” instead of “their”;
  • Replying above or below quoted text in replies.
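All four categories can be mined from a local mail store with standard tooling. Here is a minimal sketch in Python; the mbox path and the particular style markers are illustrative assumptions, not the researchers’ actual feature set:

```python
import mailbox
import re
from collections import Counter

def mine_features(mbox_path):
    """Extract the four feature categories from a local mbox file."""
    addresses = set()          # 1. email addresses (victim + correspondents)
    user_agents = Counter()    # 2. mail-client configuration
    vocabulary = Counter()     # 3. normal vocabulary
    style = Counter()          # 4. style markers (abbreviations, casing, ...)

    for msg in mailbox.mbox(mbox_path):
        for header in ("From", "To", "Cc"):
            for addr in re.findall(r"[\w.+-]+@[\w.-]+", msg.get(header, "")):
                addresses.add(addr.lower())
        if msg.get("User-Agent"):
            user_agents[msg["User-Agent"]] += 1
        body = msg.get_payload(decode=False)
        if isinstance(body, str):
            words = re.findall(r"[a-zA-Z']+", body)
            vocabulary.update(w.lower() for w in words)
            # two example style markers; a real harvester would track many more
            style["abbrev_u"] += sum(1 for w in words if w.lower() == "u")
            style["all_lower_lines"] += sum(
                1 for line in body.splitlines() if line and line == line.lower())
    return addresses, user_agents, vocabulary, style
```

A spam zombie would then feed counts like these into message templates so that forged mail matches the victim’s habits.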


The Flash Worm, AKA the Warhol Worm

From Noam Eppel’s “Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security”:

In 2001, the infamous Code Red Worm was infecting a remarkable 2,000 new hosts each minute. Nick Weaver at UC Berkeley proposed the possibility of a “Flash Worm” which could spread across the Internet and infect all vulnerable servers in less than 15 minutes. A well engineered flash worm could spread worldwide in a matter of seconds.


A technical look at the Morris Worm of 1988

From Donn Seeley’s “The Internet Worm of 1988: A Tour of the Worm”:

November 3, 1988 is already coming to be known as Black Thursday. System administrators around the country came to work on that day and discovered that their networks of computers were laboring under a huge load. If they were able to log in and generate a system status listing, they saw what appeared to be dozens or hundreds of “shell” (command interpreter) processes. If they tried to kill the processes, they found that new processes appeared faster than they could kill them. Rebooting the computer seemed to have no effect: within minutes after starting up again, the machine was overloaded by these mysterious processes.

… The worm had taken advantage of lapses in security on systems that were running 4.2 or 4.3 BSD UNIX or derivatives like SunOS. These lapses allowed it to connect to machines across a network, bypass their login authentication, copy itself and then proceed to attack still more machines. The massive system load was generated by multitudes of worms trying to propagate the epidemic. …

The worm consists of a 99-line bootstrap program written in the C language, plus a large relocatable object file that comes in VAX and Sun-3 flavors. …

The activities of the worm break down into the categories of attack and defense. Attack consists of locating hosts (and accounts) to penetrate, then exploiting security holes on remote systems to pass across a copy of the worm and run it. The worm obtains host addresses by examining the system tables /etc/hosts.equiv and /.rhosts, user files like .forward and .rhosts, dynamic routing information produced by the netstat program, and finally randomly generated host addresses on local networks. It ranks these by order of preference, trying a file like /etc/hosts.equiv first because it contains names of local machines that are likely to permit unauthenticated connections. Penetration of a remote system can be accomplished in any of three ways. The worm can take advantage of a bug in the finger server that allows it to download code in place of a finger request and trick the server into executing it. The worm can use a “trap door” in the sendmail SMTP mail service, exercising a bug in the debugging code that allows it to execute a command interpreter and download code across a mail connection. If the worm can penetrate a local account by guessing its password, it can use the rexec and rsh remote command interpreter services to attack hosts that share that account. In each case the worm arranges to get a remote command interpreter which it can use to copy over, compile and execute the 99-line bootstrap. The bootstrap sets up its own network connection with the local worm and copies over the other files it needs, and using these pieces a remote worm is built and the infection procedure starts over again. …

When studying a tricky program like this, it’s just as important to establish what the program does not do as what it does do. The worm does not delete a system’s files: it only removes files that it created in the process of bootstrapping. The program does not attempt to incapacitate a system by deleting important files, or indeed any files. It does not remove log files or otherwise interfere with normal operation other than by consuming system resources. The worm does not modify existing files: it is not a virus. The worm propagates by copying itself and compiling itself on each system; it does not modify other programs to do its work for it. Due to its method of infection, it can’t count on sufficient privileges to be able to modify programs. The worm does not install trojan horses: its method of attack is strictly active; it never waits for a user to trip over a trap. Part of the reason for this is that the worm can’t afford to waste time waiting for trojan horses; it must reproduce before it is discovered. Finally, the worm does not record or transmit decrypted passwords: except for its own static list of favorite passwords, the worm does not propagate cracked passwords on to new worms nor does it transmit them back to some home base. This is not to say that the accounts that the worm penetrated are secure merely because the worm did not tell anyone what their passwords were, of course; if the worm can guess an account’s password, certainly others can too. The worm does not try to capture superuser privileges: while it does try to break into accounts, it doesn’t depend on having particular privileges to propagate, and never makes special use of such privileges if it somehow gets them. The worm does not propagate over uucp or X.25 or DECNET or BITNET: it specifically requires TCP/IP. The worm does not infect System V systems unless they have been modified to use Berkeley network programs like sendmail, fingerd and rexec.


CCTV in the UK deters crime

From Technology Review’s “Big Brother Logs On”:

In many ways, the drama of pervasive surveillance is being played out first in Orwell’s native land, the United Kingdom, which operates more closed-circuit cameras per capita than any other country in the world. This very public surveillance began in 1986 on an industrial estate near the town of King’s Lynn, approximately 100 kilometers north of London. Prior to the installation of three video cameras, a total of 58 crimes had been reported on the estate. None was reported over the next two years. In 1995, buoyed by that success, the government made matching grants available to other cities and towns that wanted to install public surveillance cameras – and things took off from there. …

And not many argue about surveillance’s ability to deter crime. Recent British government reports cite closed-circuit TV as a major reason for declining crime rates. After these systems were put in place, the town of Berwick reported that burglaries fell by 69 percent; in Northampton overall crime decreased by 57 percent; and in Glasgow, Scotland, crime slumped by 68 percent. Public reaction in England has been mixed, but many embrace the technology. …


Bruce Schneier on steganography

From Bruce Schneier’s “Steganography: Truths and Fictions”:

Steganography is the science of hiding messages in messages. … In the computer world, it has come to mean hiding secret messages in graphics, pictures, movies, or sounds. …

The point of steganography is to hide the existence of the message, to hide the fact that the parties are communicating anything other than innocuous photographs. This only works when it can be used within existing communications patterns. I’ve never sent or received a GIF in my life. If someone suddenly sends me one, it won’t take a rocket scientist to realize that there’s a steganographic message hidden somewhere in it. If Alice and Bob already regularly exchange files that are suitable to hide steganographic messages, then an eavesdropper won’t know which messages — if any — contain the messages. If Alice and Bob change their communications patterns to hide the messages, it won’t work. An eavesdropper will figure it out.

… Don’t use the sample image that came with the program when you downloaded it; your eavesdropper will quickly recognize that one. Don’t use the same image over and over again; your eavesdropper will look for the differences between them that indicate the hidden message. Don’t use an image that you’ve downloaded from the net; your eavesdropper can easily compare the image you’re sending with the reference image you downloaded.
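The core trick Schneier is describing can be shown in a few lines. This is a toy least-significant-bit (LSB) scheme over a raw buffer of pixel bytes, with no image-format handling; real tools work on decoded image data, but the principle is the same: each secret bit rides in the least-significant bit of one cover byte, so the cover looks essentially unchanged.

```python
def embed(cover: bytearray, secret: bytes) -> bytearray:
    """Hide secret in the LSBs of cover; returns a new stego buffer."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite the LSB only
    return stego

def extract(stego: bytearray, n_bytes: int) -> bytes:
    """Read n_bytes of hidden data back out of the LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte |= (stego[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)
```

Because each cover byte changes by at most 1, the carrier image is visually identical, which is exactly why detection must rely on traffic analysis (the communication patterns Schneier emphasizes) or statistical tests rather than the eye.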


The Witty Worm was special

From CAIDA’s “The Spread of the Witty Worm”:

On Friday March 19, 2004 at approximately 8:45pm PST, an Internet worm began to spread, targeting a buffer overflow vulnerability in several Internet Security Systems (ISS) products, including ISS RealSecure Network, RealSecure Server Sensor, RealSecure Desktop, and BlackICE. The worm takes advantage of a security flaw in these firewall applications that was discovered earlier this month by eEye Digital Security. Once the Witty worm infects a computer, it deletes a randomly chosen section of the hard drive, over time rendering the machine unusable. The worm’s payload contained the phrase “(^.^) insert witty message here (^.^)” so it came to be known as the Witty worm.

While the Witty worm is only the latest in a string of self-propagating remote exploits, it distinguishes itself through several interesting features:

  • Witty was the first widely propagated Internet worm to carry a destructive payload.
  • Witty was started in an organized manner with an order of magnitude more ground-zero hosts than any previous worm.
  • Witty represents the shortest known interval between vulnerability disclosure and worm release — it began to spread the day after the ISS vulnerability was publicized.
  • Witty spread through a host population in which every compromised host was doing something proactive to secure their computers and networks.
  • Witty spread through a population almost an order of magnitude smaller than that of previous worms, demonstrating the viability of worms as an automated mechanism to rapidly compromise machines on the Internet, even in niches without a software monopoly. …

Once Witty infects a host, the host sends 20,000 packets by generating packets with a random destination IP address, a random size between 796 and 1307 bytes, and a random destination port. The worm payload of 637 bytes is padded with data from system memory to fill this random size and a packet is sent out from source port 4000. After sending 20,000 packets, Witty seeks to a random point on the hard disk and writes 65k of data from the beginning of iss-pam1.dll to the disk. After closing the disk, the worm repeats this process until the machine is rebooted or until the worm permanently crashes the machine.

Witty Worm Spread

With previous Internet worms, including Code-Red, Nimda, and SQL Slammer, a few hosts were seeded with the worm and proceeded to spread it to the rest of the vulnerable population. The spread was slow early on, then accelerated dramatically as the number of infected machines spewing worm packets to the rest of the Internet rose. Eventually, as the victim population became saturated, the spread of the worm slowed because there were few vulnerable machines left to compromise. Plotted on a graph, this worm growth appears as an S-shaped exponential growth curve called a sigmoid.
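That S shape falls out of a simple logistic model. The sketch below uses assumed parameters (roughly 12,000 vulnerable hosts, as CAIDA observed for Witty, plus an arbitrary seed count and contact rate); the point is the shape of the curve, not the specific numbers.

```python
def logistic_spread(vulnerable=12000, seeds=5, contact_rate=0.8, steps=40):
    """Discrete logistic model: new infections per step are proportional to
    the infected count times the fraction of the population still uninfected."""
    infected = [float(seeds)]
    for _ in range(steps):
        i = infected[-1]
        new = contact_rate * i * (1 - i / vulnerable)
        infected.append(min(float(vulnerable), i + new))
    return infected
```

Early on the curve is nearly exponential, since each step multiplies infections by about 1 + contact_rate; near saturation the (1 - i / vulnerable) term chokes off growth, producing the sigmoid described above.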

At 8:45:18pm PST on March 19, 2004, the network telescope received its first Witty worm packet. In contrast to previous worms, we observed 110 hosts infected in the first ten seconds, and 160 at the end of 30 seconds. The chances of a single instance of the worm infecting 110 machines so quickly are vanishingly small — worse than 10^-607. This rapid onset indicates that the worm used either a hitlist or previously compromised vulnerable hosts to start the worm. …

After the sharp rise in initial coordinated activity, the Witty worm followed a normal exponential growth curve for a pathogen spreading in a fixed population. Witty reached its peak after approximately 45 minutes, at which point the majority of vulnerable hosts had been infected. After that time, the churn caused by dynamic addressing causes the IP address count to inflate without any additional Witty infections. At the peak of the infection, Witty hosts flooded the Internet with more than 90Gbits/second of traffic (more than 11 million packets per second). …

The vulnerable host population pool for the Witty worm was quite different from that of previous virulent worms. Previous worms have lagged several weeks behind publication of details about the remote-exploit bug, and large portions of the victim populations appeared to not know what software was running on their machines, let alone take steps to make sure that software was up to date with security patches. In contrast, the Witty worm infected a population of hosts that were proactive about security — they were running firewall software. The Witty worm also started to spread the day after information about the exploit and the software upgrades to fix the bug were available. …

By infecting firewall devices, Witty proved particularly adept at thwarting security measures and successfully infecting hosts on internal networks. …

The Witty worm incorporates a number of dangerous characteristics. It is the first widely spreading Internet worm to actively damage infected machines. It was started from a large set of machines simultaneously, indicating the use of a hit list or a large number of compromised machines. Witty demonstrated that any minimally deployed piece of software with a remotely exploitable bug can be a vector for wide-scale compromise of host machines without any action on the part of a victim. The practical implications of this are staggering; with minimal skill, a malevolent individual could break into thousands of machines and use them for almost any purpose with little evidence of the perpetrator left on most of the compromised hosts.


Malware focused on theft above all

From AFP’s “70 percent of malicious software aimed at theft: survey”:

Seventy percent of malicious software being circulated is linked to various types of cybercrime, a study by security firm Panda Software showed. …

The survey confirms a shift from several years ago, when malicious software was often aimed at garnering attention or exposing security flaws.

“Malware has become a tool for generating financial returns,” the report said. …

About 40 percent of the problems detected by Panda were spyware, a type of malicious code designed for financial gain, primarily through collecting data on users’ Internet activities.

Another 17 percent were trojans, including “banker trojans” that steal confidential data related to bank services, and others that download malicious applications onto systems.

Eight percent of the problems detected were “dialers,” malicious code that dials up premium-rate numbers without users’ knowledge; “bots,” a scheme involving the sale or rental of networks of infected computers, accounted for four percent of the total.

The e-mail worm, which until recently was considered a major Internet threat, made up only four percent of the total.


OmniPerception = facial recognition + smart card

From Technology Review’s “Face Forward”:

To get around these problems, OmniPerception, a spinoff from the University of Surrey in England, has combined its facial-recognition technology with a smart-card system. This could make face recognition more robust and better suited to applications such as passport authentication and building access control, which, if they use biometrics at all, rely mainly on fingerprint verification, says David McIntosh, the company’s CEO. With OmniPerception’s technology, an image of a person’s face is verified against a “facial PIN” carried on the card, eliminating the need to search a central database and making the system less intimidating to privacy-conscious users. …

OmniPerception’s technology creates a PIN about 2,500 digits long from its analysis of the most distinctive features of a person’s face. The number is embedded in a smart card (such as those, say, that grant access to a building) and used to verify that the card belongs to the person presenting it. A user would place his or her card in or near a reader and face a camera, which would take a photo and feed it to the card. The card would then compare the PIN it carried to information it derived from the new photo and either accept or reject the person as the rightful owner of the card. The technology could also be used to ensure passport or driver’s license authenticity and to secure ATM or Internet banking transactions, says McIntosh.
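The verification step described here amounts to comparing a stored template against a freshly derived one under a tolerance, since two photos of the same face never produce identical features. A hypothetical sketch follows: the 2,500-digit template length comes from the article, but the bit-template representation, the Hamming-distance metric, and the threshold are all assumptions for illustration, not OmniPerception’s actual algorithm.

```python
TEMPLATE_BITS = 2500   # template length, per the article
THRESHOLD = 0.25       # assumed: fraction of bits allowed to differ

def hamming_fraction(stored, fresh):
    """Fraction of template positions where the two templates disagree."""
    assert len(stored) == len(fresh) == TEMPLATE_BITS
    return sum(a != b for a, b in zip(stored, fresh)) / TEMPLATE_BITS

def card_accepts(stored, fresh):
    """Match-on-card decision: a genuine user's fresh template differs only
    slightly from the stored one; an impostor's differs in about half the bits."""
    return hamming_fraction(stored, fresh) <= THRESHOLD
```

Because the comparison happens on the card against a template the user carries, no central database of faceprints is needed, which is the privacy advantage the article highlights.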


Face recognition software as an example of “function creep”

From Technology Review’s “Creepy Functions”:

Consider one example of function creep. The Electoral Commission of Uganda has retained Viisage Technology to implement a face recognition system capable of enrolling 10 million voters in 60 days. The goal is to reduce voter registration fraud. But Woodward notes that the system might also be put to work fingering political opponents of the regime. And Uganda probably isn’t the first country that springs to mind when someone says “due process” or “civil rights.”

From Technology Review’s “Big Brother Logs On”:

Take the fact that the faces of a large portion of the driving population are becoming digitized by motor vehicle agencies and placed into databases, says Steinhardt. It isn’t much of a stretch to extend the system to a Big Brother-like nationwide identification and tracking network. Or consider that the Electoral Commission of Uganda has retained Viisage Technology to implement a “turnkey face recognition system” capable of enrolling 10 million voter registrants within 60 days. By generating a database containing the faceprint of every one of the country’s registered voters, and combining it with algorithms able to scour all 10 million images within six seconds to find a match, the commission hopes to reduce voter registration fraud. But once such a database is compiled, notes John Woodward, a former CIA operations officer who managed spies in several Asian countries and who’s now an analyst with the Rand Corporation, it could be employed for tracking and apprehending known or suspected political foes. Woodward calls that “function creep.”


Smart World of Warcraft Trojan

From Information Week’s “Trojan Snags World Of Warcraft Passwords To Cash Out Accounts”:

A new password-stealing Trojan targeting players of the popular online game “World of Warcraft” hopes to make money off secondary sales of gamer goods, a security company warned Tuesday.

MicroWorld, an India-based anti-virus and security software maker with offices in the U.S., Germany, and Malaysia, said that the PWS.Win32.WOW.x Trojan horse was spreading fast, and attacking World of Warcraft players.

If the attacker managed to hijack a password, he could transfer in-game goods — personal items, including weapons — that the player had accumulated to his own account, then later sell them for real-world cash on “gray market” Web sites. Unlike some rival multiplayer online games, Warcraft’s publisher, Blizzard Entertainment, bans the practice of trading virtual items for real cash.


Bring down the cell network with SMS spam

From John Schwartz’s “Text Hackers Could Jam Cellphones, a Paper Says”:

Malicious hackers could take down cellular networks in large cities by inundating their popular text-messaging services with the equivalent of spam, said computer security researchers, who will announce the findings of their research today.

Such an attack is possible, the researchers say, because cellphone companies provide the text-messaging service to their networks in a way that could allow an attacker who jams the message system to disable the voice network as well.

And because the message services are accessible through the Internet, cellular networks are open to the denial-of-service attacks that occur regularly online, in which computers send so many messages or commands to a target that the rogue data blocks other machines from connecting.

By pushing 165 messages a second into the network, said Patrick D. McDaniel, a professor of computer science and engineering at Pennsylvania State University and the lead researcher on the paper, “you can congest all of Manhattan.”
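The 165-messages-per-second figure comes from capacity arithmetic of this general shape. All three constants below are illustrative assumptions chosen to reproduce the quoted number; the paper’s actual capacity model is more detailed.

```python
sectors = 55               # assumed: cell sectors covering the target area
sdcch_per_sector = 12      # assumed: dedicated control channels per sector
seconds_per_delivery = 4   # assumed: channel hold time to deliver one SMS

# Each sector can deliver sdcch_per_sector / seconds_per_delivery messages
# per second. Text messages share these control channels with call setup,
# which is why saturating them blocks voice calls as well.
capacity = sectors * sdcch_per_sector / seconds_per_delivery  # 165.0 msgs/sec
```

The key structural point survives any particular choice of numbers: the attacker only has to fill the narrow shared control channels, not the much larger voice capacity.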

Also see http://www.smsanalysis.org/.


Subway’s frequent-eater program killed because of fraud

From Bruce Schneier’s “Forging Low-Value Paper Certificates”:

Both Subway and Cold Stone Creamery have discontinued their frequent-purchaser programs because the paper documentation is too easy to forge. (The article says that forged Subway stamps are for sale on eBay.)

… Subway is implementing a system based on magnetic stripe cards instead.


Israeli car theft scam

From Bruce Schneier’s “Automobile Identity Theft”:

This scam was uncovered in Israel:

1. Thief rents a car.

2. An identical car, legitimately owned, is found and its “identity” stolen.

3. The stolen identity is applied to the rented car and is then offered for sale in a newspaper ad.

4. Innocent buyer purchases the car from the thief as a regular private party sale.

5. After a few days the thief steals the car back from the buyer and returns it to the rental shop.

What ended up happening is that the “new” owners claimed compensation for the theft and most of the damage was absorbed by the insurers.


Open source breathalyzers

From Bruce Schneier’s “DUI Cases Thrown Out Due to Closed-Source Breathalyzer”:

According to the article: “Hundreds of cases involving breath-alcohol tests have been thrown out by Seminole County judges in the past five months because the test’s manufacturer will not disclose how the machines work.”

This is the right decision. Throughout history, the government has had to make the choice: prosecute, or keep your investigative methods secret. They couldn’t have both. If they wanted to keep their methods secret, they had to give up on prosecution.

People have the right to confront their accuser. People have a right to examine the evidence against them, and to contest the validity of that evidence.


Bruce Schneier on phishing

From Bruce Schneier’s “Phishing”:

Phishing, for those of you who have been away from the Internet for the past few years, is when an attacker sends you an e-mail falsely claiming to be a legitimate business in order to trick you into giving away your account info — passwords, mostly. When this is done by hacking DNS, it’s called pharming. …

In general, two Internet trends affect all forms of identity theft. The widespread availability of personal information has made it easier for a thief to get his hands on it. At the same time, the rise of electronic authentication and online transactions — you don’t have to walk into a bank, or even use a bank card, in order to withdraw money now — has made that personal information much more valuable. …

The newest variant, called “spear phishing,” involves individually targeted and personalized e-mail messages that are even harder to detect. …

It’s not that financial institutions suffer no losses. Because of something called Regulation E, they already pay most of the direct costs of identity theft. But the costs in time, stress, and hassle are entirely borne by the victims. And in one in four cases, the victims have not been able to completely restore their good name.

In economics, this is known as an externality: It’s an effect of a business decision that is not borne by the person or organization making the decision. Financial institutions have no incentive to reduce those costs of identity theft because they don’t bear them. …

If there’s one general precept of security policy that is universally true, it is that security works best when the entity that is in the best position to mitigate the risk is responsible for that risk.


Beauregard fools Halleck & escapes

From Shelby Foote’s The Civil War: Fort Sumter to Perryville (384):

When [Pierre Gustave Toutant de Beauregard‘s men] stole out of the intrenchments [at Corinth] after nightfall, they left dummy guns in the embrasures and dummy cannoneers to serve them, fashioned by stuffing ragged uniforms with straw. A single band moved up and down the deserted works, pausing at scattered points to play retreat, tattoo, and taps. Campfires were left burning, with a supply of wood alongside each for the drummer boys who stayed behind to stoke them and beat reveille next morning. All night a train of empty cars rattled back and forth along the tracks through Corinth, stopping at frequent intervals to blow its whistle, the signal for a special detail of leather-lunged soldiers to cheer with all their might. The hope was that this would not only cover the incidental sounds of the withdrawal, but would also lead the Federals to believe that the town’s defenders were being heavily reinforced.

It worked to perfection. … Daylight showed “dense black smoke in clouds,” but no sign of the enemy Pope expected to find massed in his front. Picking his way forward he came upon dummy guns and dummy cannoneers, some with broad grins painted on. Otherwise the works were deserted. …

Seven full weeks of planning and strain, in command of the largest army ever assembled under one field general in the Western Hemisphere, had earned [Halleck] one badly smashed-up North Mississippi railroad intersection.


Users know how to create good passwords, but they don’t

From Usability News’ “Password Security: What Users Know and What They Actually Do”:

A total of 328 undergraduate and graduate level college students from Wichita State University volunteered to participate in the survey, and were regular users of the Internet with one or more password protected accounts. Ages of the participants ranged from 18 to 58 years (M = 25.34). Thirteen cases were deleted due to missing data, resulting in 315 participants in the final data analysis. …

When asked what practices should be used in the creation and usage of passwords, the majority of respondents, 50.8% (160), were able to identify most of the password practices that are recommended for creating secure passwords (Tufts University, 2005), although 62.9% (198) failed to identify a practice that would result in the most secure password: using numbers and special characters in place of letters.

Differences between the password practices users reported and the password practices they believe they should use included:

  • 73% (230) of respondents reported that they should change their passwords for accounts every three to six months, but 52.7% (166) responded that they “Never” change their password when not required.
  • 50.8% (160) of respondents reported that they should use special characters in their passwords, but only 4.8% (12) reported doing so.
  • 63.5% (200) of respondents reported that they should use seven or more characters in their passwords, but only 35.5% (112) indicated that they use this number of characters with any regularity.
  • 70.5% (222) of respondents indicated that personally meaningful words should not be used, but 49.8% (156) reported that they use this practice.
  • 68.3% (215) of respondents reported that personally meaningful numbers should not be used in passwords, but 54.9% (173) reported using this practice. …

The majority of participants in the current study most commonly reported password generation practices that are simplistic and hence very insecure. Particular practices reported include using lowercase letters, numbers or digits, personally meaningful words and numbers (e.g., dates). It is widely known that users typically use birthdates, anniversary dates, telephone numbers, license plate numbers, social security numbers, street addresses, apartment numbers, etc. Likewise, personally meaningful words are typically derived from predictable areas and interests in the person’s life and could be guessed through basic knowledge of his or her interests. …
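The gap between these practices is easy to quantify as search-space size: brute-force cost grows with both character-set size and length. A rough comparison of the reported versus recommended practices (assuming, optimistically, that characters are chosen uniformly at random; personally meaningful words and dates have far less entropy than this bound suggests):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Upper bound on password entropy, valid only when every character is
    drawn uniformly at random from the character set."""
    return length * math.log2(charset_size)

# practice most respondents reported: short, lowercase-only
reported = entropy_bits(26, 6)                     # about 28 bits
# recommended practice: 8+ characters with mixed case, digits, and specials
recommended = entropy_bits(26 + 26 + 10 + 32, 8)   # about 52 bits
```

Each additional bit doubles the attacker’s work, so the roughly 24-bit gap here means the recommended passwords are millions of times harder to exhaust, which is exactly what the identified-but-unused practices buy.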

It would seem to be a logical assumption that the practices and behaviors users engage in would be related to what they think they should do in order to create secure passwords. This does not seem to be the case as participants in the current study were able to identify many of the recommended practices, despite the fact that they did not use the practices themselves.


The Sumitomo Mitsui bank heist

From Richard Stiennon’s “Lessons Learned from Biggest Bank Heist in History”:

Last year’s news that thieves had managed to break in to Sumitomo Mitsui Bank’s branch in London and attempt to transfer almost $440 million to accounts in other countries should give CIOs cause for concern. …

First a recap. Last year it came to light that U.K. authorities had put the kibosh on what would have been the largest bank heist in history.

The story is still developing but this is what we know: Thieves masquerading as cleaning staff with the help of a security guard installed hardware keystroke loggers on computers within the London branch of Sumitomo Mitsui, a huge Japanese bank.

These computers evidently belonged to help desk personnel. The keystroke loggers captured everything typed into the computer including, of course, administrative passwords for remote access.

By installing software keystroke loggers on the PCs that belonged to the bank personnel responsible for wire transfers over the SWIFT (Society for Worldwide Interbank Financial Telecommunication) network, the thieves captured credentials that were then used to transfer 220 million pounds (call it half-a-billion dollars).

Luckily the police were involved by that time and were able to stymie the attack.

From Richard Stiennon’s “Super-Glue: Best practice for countering key stroke loggers”:

… it is reported that Sumitomo Bank’s best practice for avoiding a repeat attack is that they now super-glue the keyboard connections into the backs of their PCs.


Risk compensation & homeostasis

From Damn Interesting’s “The Balance of Risk”:

What’s happening is a process known as risk compensation. It’s a tendency in humans to increase risky behavior proportionately as safeguards are introduced, and it’s very common. So common, in fact, as to render predictions of how well any given piece of safety equipment will work almost useless.

… Why would we do such a strange thing? Dr. Gerald Wilde of Queen’s University in Ontario proposes a hypothesis he calls risk homeostasis. In a nutshell it proposes that human beings have a target level of risk with which they are most comfortable. When a given activity exceeds their comfort level, people will modify their behavior to reduce their risk until they are comfortable with their level of danger. So far, that’s not exactly a controversial observation. But risk homeostasis proposes another half to that continuum – according to Dr. Wilde, if a given person’s level of risk drops too far below their comfort level, they will again modify their behavior. This time though, they will increase their level of risk until they are once again in their target zone.
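Wilde’s hypothesis is, in effect, a negative-feedback loop around perceived risk. A toy simulation (all parameters invented for illustration, not drawn from Wilde’s work) shows behavior ratcheting back up after a safety measure halves perceived risk:

```python
def simulate(target=0.5, safety_factor=0.5, gain=0.3, steps=40):
    """Toy risk-homeostasis loop: a safety measure scales perceived risk by
    safety_factor, and behavior adjusts toward the target level each step."""
    behavior = target   # at equilibrium before the measure (perceived == target)
    history = []
    for _ in range(steps):
        perceived = behavior * safety_factor       # the measure halves perceived risk
        behavior += gain * (target - perceived)    # compensate, in either direction
        history.append(perceived)
    return history
```

With these numbers, perceived risk starts at half the target and climbs back to the target as the simulated person takes more chances, which is the compensation pattern the article describes; an invisible safety measure corresponds to leaving perceived risk (not actual risk) unchanged, so no compensation occurs.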

… Fortunately for us, risk homeostasis does not seem to apply in all cases. Safety innovations that are invisible tend not to provoke changes in behavior – for example changing windshields to safety glass does not alter most peoples’ driving behavior. The difference in the windshield is effectively invisible to the driver, and so doesn’t affect the driving.

… An additional complication for the already beleaguered safety engineers is that risk homeostasis is dependent not upon actual danger, but rather the perception of risk. Much of the gender and age differences in risk-taking behavior appear to stem less from differing desires for risk, and more from the individual’s different evaluation of risk. Young people, and particularly young men, tend to evaluate their level of risk as much lower than older people would, even in identical situations. This implies that promoting safer behavior depends more upon altering the perceptions of the target population, rather than improving the safety of the environment – a much trickier proposition.
