
The future of news as shown by the 2008 election

From Steven Berlin Johnson’s “Old Growth Media And The Future Of News” (StevenBerlinJohnson.com: 14 March 2009):

The first Presidential election that I followed in an obsessive way was the 1992 election that Clinton won. I was as compulsive a news junkie about that campaign as I was about the Mac in college: every day the Times would have a handful of stories about the campaign stops or debates or latest polls. Every night I would dutifully tune into Crossfire to hear what the punditocracy had to say about the day’s events. I read Newsweek and Time and the New Republic, and scoured the New Yorker for its occasional political pieces. When the debates aired, I’d watch religiously and stay up late soaking in the commentary from the assembled experts.

That was hardly a desert, to be sure. But compare it to the information channels that were available to me following the 2008 election. Everything I relied on in 1992 was still around of course – except for the late, lamented Crossfire – but it was now part of a vast new forest of news, data, opinion, satire – and perhaps most importantly, direct experience. Sites like Talking Points Memo and Politico did extensive direct reporting. Daily Kos provided in-depth surveys and field reports on state races that the Times would never have had the ink to cover. Individual bloggers like Andrew Sullivan responded to each twist in the news cycle; HuffPo culled the most provocative opinion pieces from the rest of the blogosphere. Nate Silver at fivethirtyeight.com did meta-analysis of polling that blew away anything William Schneider dreamed of doing on CNN in 1992. When the economy imploded in September, I followed economist bloggers like Brad DeLong to get their expert take on the candidates’ responses to the crisis. (Yochai Benkler talks about this phenomenon of academics engaging with the news cycle in a smart response here.) I watched the debates with a thousand virtual friends live-Twittering alongside me on the couch. All this was filtered and remixed through the extraordinary political satire of Jon Stewart and Stephen Colbert, which I watched via viral clips on the Web as much as I watched on TV.

What’s more: the ecosystem of political news also included information coming directly from the candidates. Think about the Philadelphia race speech, arguably one of the two or three most important events in the whole campaign. Eight million people watched it on YouTube alone. Now, what would have happened to that speech had it been delivered in 1992? Would any of the networks have aired it in its entirety? Certainly not. It would have been reduced to a minute-long soundbite on the evening news. CNN probably would have aired it live, which might have meant that 500,000 people caught it. Fox News and MSNBC? They didn’t exist yet. A few serious newspapers might have reprinted it in its entirety, which might have added another million to the audience. Online perhaps someone would have uploaded a transcript to Compuserve or The Well, but that’s about the most we could have hoped for.

There is no question in my mind that the political news ecosystem of 2008 was far superior to that of 1992: I had more information about the state of the race, the tactics of both campaigns, the issues they were wrestling with, the mind of the electorate in different regions of the country. And I had more immediate access to the candidates themselves: their speeches and unscripted exchanges; their body language and position papers.

The old line on this new diversity was that it was fundamentally parasitic: bloggers were interesting, sure, but if the traditional news organizations went away, the bloggers would have nothing to write about, since most of what they did was link to professionally reported stories. Let me be clear: traditional news organizations were an important part of the 2008 ecosystem, no doubt about it. … But no reasonable observer of the political news ecosystem could describe all the new species as parasites on the traditional media. Imagine how many barrels of ink were purchased to print newspaper commentary on Obama’s San Francisco gaffe about people “clinging to their guns and religion.” But the original reporting on that quote didn’t come from the Times or the Journal; it came from a “citizen reporter” named Mayhill Fowler, part of the Off The Bus project sponsored by Jay Rosen’s Newassignment.net and The Huffington Post.


Cell phone viruses

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

Earlier this year, smartphone users in China started to get messages promising a “sexy view” if they clicked on a link. The link led to a download. That download was a spam generator which, once installed, sent identical “sexy view” messages to everyone in the owner’s contacts list.

That was the first virus known to travel by text message. It was chiefly an annoyance, but there is great potential harm from mobile viruses, especially as technologies such as Bluetooth provide new ways for viruses to spread. But there has never yet been a cellphone threat as serious as Conficker is to PCs.

There are two reasons for that, says Albert-László Barabási of Northeastern University in Boston. He and his colleagues used billing data to model the spread of a mobile virus. They found that Bluetooth is an inefficient way of transmitting a virus as it can only jump between users who are within 30 metres of each other. A better option would be for the virus to disguise itself as a picture message. But that could still only infect handsets running the same operating system. As the mobile market is fragmented, says Barabási, no one virus can gain a foothold.
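The fragmentation point lends itself to a quick illustration. Here is a minimal toy simulation (not Barabási’s actual model, and with made-up market shares) of an MMS-style virus that only runs on the operating system it was written for: the outbreak stalls at roughly that system’s share of handsets.

```python
# Toy sketch (not Barabasi's model): an MMS-style virus can only infect
# handsets running the same OS as the infected sender, so in a fragmented
# market the outbreak is capped near the share of one operating system.
import random

random.seed(1)

MARKET_SHARE = {"os_a": 0.35, "os_b": 0.30, "os_c": 0.20, "os_d": 0.15}  # hypothetical shares
N = 10_000                      # handsets
CONTACTS_PER_PHONE = 20

oses = random.choices(list(MARKET_SHARE), weights=MARKET_SHARE.values(), k=N)
contacts = [random.sample(range(N), CONTACTS_PER_PHONE) for _ in range(N)]

infected = {0}                  # patient zero
frontier = [0]
while frontier:
    nxt = []
    for phone in frontier:
        for friend in contacts[phone]:
            # The payload only runs if sender and receiver share an OS.
            if friend not in infected and oses[friend] == oses[phone]:
                infected.add(friend)
                nxt.append(friend)
    frontier = nxt

print(f"patient zero runs {oses[0]}, market share {MARKET_SHARE[oses[0]]:.0%}")
print(f"final infection: {len(infected) / N:.1%} of all handsets")
```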


How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.
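The detection idea is simple enough to sketch. This toy example (purely illustrative, not the San Diego telescope’s actual tooling, with an invented log format) just aggregates connection attempts per hour and flags hours that tower over the typical hour, which is essentially how a flash flood like the one on 20–21 November shows up in the logs.

```python
# Toy network-telescope monitor: count connection attempts per hour and flag
# hours far above the median hour. Log format is invented for illustration.
from collections import Counter
from datetime import datetime

def hourly_counts(log_lines):
    """log_lines: iterable of 'ISO-timestamp source_ip dest_port' strings."""
    counts = Counter()
    for line in log_lines:
        ts, _src, _port = line.split()
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        counts[hour] += 1
    return counts

def flag_floods(counts, factor=10):
    """Flag hours whose volume exceeds `factor` times the median hour."""
    volumes = sorted(counts.values())
    median = volumes[len(volumes) // 2]
    return {hour: n for hour, n in counts.items() if n > factor * median}

sample_log = [
    "2008-11-20T10:12:01 198.51.100.7 445",
    "2008-11-20T10:40:44 203.0.113.9 445",
    "2008-11-20T18:05:10 192.0.2.44 445",
] + [f"2008-11-21T09:00:{i % 60:02d} 192.0.2.{i % 250} 445" for i in range(300)]

print(flag_floods(hourly_counts(sample_log)))   # the 9 am hour on 21 November stands out
```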

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
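As an aside, the scheme is easy to picture in code. The sketch below is a toy domain-generation algorithm, not Conficker’s real one: both the worm and its authors derive the same daily list of names from the date, so the list itself never has to be communicated. The flip side, which matters later in this story, is that anyone who recovers the algorithm can compute the same list in advance.

```python
# Illustrative domain-generation sketch (NOT Conficker's actual algorithm):
# the current date seeds a pseudo-random list of names, so every infected
# machine and the worm's authors produce the identical list independently.
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def domains_for(day: date, count: int = 250):
    names = []
    for i in range(count):
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Turn the hash into 8 lowercase letters, then append a rotating TLD.
        letters = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:8])
        names.append(letters + TLDS[i % len(TLDS)])
    return names

today = domains_for(date(2009, 1, 15))
print(today[:5])   # same five names on every infected host and at the authors' keyboard
```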

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.
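The numbers in that paragraph are worth spelling out; a few lines of arithmetic show why blocking stopped scaling while the worm’s job barely changed.

```python
# Back-of-the-envelope on the April 2009 change: defenders must now deal with
# the full candidate list every day, spread across many registries, while each
# bot only has to try a small random sample of it to find one live update.
candidates_per_day = 50_000
suffixes = 116
checked_per_bot = 500

print(candidates_per_day // suffixes)        # ~431 domains per registry, per day
print(candidates_per_day * 365)              # ~18 million domains to handle per year
print(checked_per_bot / candidates_per_day)  # each bot samples just 1% of the list daily
```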

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.


Stolen credit card data is cheaper than ever in the Underground

From Brian Krebs’ “Glut of Stolen Banking Data Trims Profits for Thieves” (The Washington Post: 15 April 2009):

A massive glut in the number of credit and debit cards stolen in data breaches at financial institutions last year has flooded criminal underground markets that trade in this material, driving prices for the illicit goods to the lowest levels seen in years, experts have found.

For a glimpse of just how many financial records were lost to hackers last year, consider the stats released this week by Verizon Business. The company said it responded to at least 90 confirmed data breaches last year involving roughly 285 million consumer records, a number that exceeded the combined total number of breached records from cases the company investigated from 2004 to 2007. Breaches at banks and financial institutions were responsible for 93 percent of all such records compromised last year, Verizon found.

As a result, the supply of stolen identities and credit and debit cards for sale in the underground markets is outpacing demand for the product, said Bryan Sartin, director of investigative response at Verizon Business.

Verizon found that the profit margins associated with selling stolen credit card data have dropped from between $10 and $16 per record in mid-2007 to less than $0.50 per record today.

According to a study released last week by Symantec Corp., each card can sell for as little as 6 cents when purchased in bulk.

Lawrence Baldwin, a security consultant in Alpharetta, Ga., has been working with several financial institutions to help infiltrate illegal card-checking services. Baldwin estimates that at least 25,000 credit and debit cards are checked each day at three separate illegal card-checking Web sites he is monitoring. That translates to about 800,000 cards per month or nearly 10 million cards each year.
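Baldwin’s extrapolation is easy to check:

```python
# Rough check of the quoted figures: 25,000 cards tested per day across the
# three monitored sites extrapolates to the monthly and yearly estimates.
cards_per_day = 25_000
print(cards_per_day * 31)    # 775,000 -- "about 800,000 cards per month"
print(cards_per_day * 365)   # 9,125,000 -- "nearly 10 million cards each year"
```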

Baldwin said the checker sites take advantage of authentication weaknesses in the card processing system that allow merchants to conduct so-called “pre-authorization requests,” which merchants use to place a temporary charge on the account to make sure that the cardholder has sufficient funds to pay for the promised goods or services.

Pre-authorization requests are quite common. When a waiter at a restaurant swipes a customer’s card and brings the receipt to the table so the customer can add a tip, for example, that initial charge is essentially a pre-authorization.

With these card-checking services, however, in most cases the charge initiated by the pre-authorization check is never consummated. As a result, unless a consumer is monitoring their accounts online in real-time, they may never notice a pre-authorization initiated by a card-checking site against their card number, because that query won’t show up as a charge on the customer’s monthly statement.

The crooks have designed their card-checking sites so that each check is submitted into the card processing network using a legitimate, hijacked merchant account number combined with a completely unrelated merchant name, Baldwin discovered.

One of the many innocent companies caught up in one of these card-checking services is Wild Birds Unlimited, a franchise pet store outside of Buffalo, N.Y. Baldwin said a fraudulent card-checking service is running pre-authorization requests using Wild Bird’s store name and phone number in combination with another merchant’s ID number.

Danielle Pecoraro, the store’s manager, said the bogus charges started in January 2008. Since then, she said, her store has received an average of three to four phone calls each day from people who had never shopped there, wondering why small, $1-$10 charges from her store were showing up on their monthly statements. Some of the charges were for as little as 24 cents, and a few were for as much as $1,900.


80% of all spam from botnets

From Jacqui Cheng’s “Report: botnets sent over 80% of all June spam” (Ars Technica: 29 June 2009):

A new report (PDF) from Symantec’s MessageLabs says that more than 80 percent of all spam sent today comes from botnets, despite several recent shut-downs.

According to MessageLabs’ June report, spam accounted for 90.4 percent of all e-mail sent in the month of June—this was roughly unchanged since May. Botnets, however, sent about 83.2 percent of that spam, with the largest spam-wielding botnet being Cutwail. Cutwail is described as “one of the largest and most active botnets” and has doubled its size and output per bot since March of this year. As a result, it is now responsible for 45 percent of all spam, with others like Mega-D, Xarvester, Donbot, Grum, and Rustock making up much of the difference.
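The two headline percentages compound, which is worth making explicit:

```python
# Botnets' share of *all* e-mail is the product of spam's share of e-mail
# and botnets' share of spam; likewise for Cutwail's 45 percent of spam.
spam_share_of_email = 0.904
botnet_share_of_spam = 0.832
cutwail_share_of_spam = 0.45

print(spam_share_of_email * botnet_share_of_spam)    # ~0.75 of all e-mail comes from botnets
print(spam_share_of_email * cutwail_share_of_spam)   # ~0.41 of all e-mail from Cutwail alone
```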


The light bulb con job

From Bruce Schneier’s “The Psychology of Con Men” (Crypto-Gram: 15 November 2008):

Great story: “My all-time favourite [short con] only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.”

http://www.abc.net.au/rn/lawreport/stories/2008/2376933.htm


Storm made $7000 each day from spam

From Bruce Schneier’s “The Economics of Spam” (Crypto-Gram: 15 November 2008):

Researchers infiltrated the Storm worm and monitored its doings.

“After 26 days, and almost 350 million e-mail messages, only 28 sales resulted — a conversion rate of well under 0.00001%. Of these, all but one were for male-enhancement products and the average purchase price was close to $100. Taken together, these conversions would have resulted in revenues of $2,731.88 — a bit over $100 a day for the measurement period or $140 per day for periods when the campaign was active. However, our study interposed on only a small fraction of the overall Storm network — we estimate roughly 1.5 percent based on the fraction of worker bots we proxy. Thus, the total daily revenue attributable to Storm’s pharmacy campaign is likely closer to $7000 (or $9500 during periods of campaign activity). By the same logic, we estimate that Storm self-propagation campaigns can produce between 3500 and 8500 new bots per day.

“Under the assumption that our measurements are representative over time (an admittedly dangerous assumption when dealing with such small samples), we can extrapolate that, were it sent continuously at the same rate, Storm-generated pharmaceutical spam would produce roughly 3.5 million dollars of revenue in a year. This number could be even higher if spam-advertised pharmacies experience repeat business. A bit less than “millions of dollars every day,” but certainly a healthy enterprise.”
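The quoted arithmetic checks out, give or take rounding:

```python
# Reproducing the extrapolation in the quoted study.
emails = 350_000_000
sales = 28
revenue = 2_731.88
days = 26
proxied_fraction = 0.015     # the researchers saw roughly 1.5% of Storm's worker bots

print(sales / emails)                      # 8e-08 -- "well under 0.00001%"
print(revenue / days)                      # ~$105 per day observed by the researchers
print(revenue / days / proxied_fraction)   # ~$7,000 per day for the whole botnet
print(9_500 * 365)                         # ~$3.5 million a year at the active-campaign rate
```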


Quanta Crypto: cool but useless

From Bruce Schneier’s “Quantum Cryptography” (Crypto-Gram: 15 November 2008):

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper — period.
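That guarantee is easiest to see in a toy simulation of BB84-style quantum key distribution (a sketch only, ignoring noise, photon loss and authentication): an eavesdropper who measures the qubits in transit corrupts roughly a quarter of the sifted key, which the legitimate users notice as soon as they compare a sample of their bits.

```python
# Toy BB84 simulation: Eve's measurements disturb the qubits and show up as
# errors in the sifted key. Illustrative only, not a production implementation.
import random

random.seed(0)
N = 2000

def measure(bit, prep_basis, meas_basis):
    """Measuring in the wrong basis yields a random result."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(eavesdrop):
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("xz") for _ in range(N)]
    bob_bases   = [random.choice("xz") for _ in range(N)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = random.choice("xz")
            bit = measure(bit, a_basis, eve_basis)   # Eve's measurement disturbs the state
            a_basis = eve_basis                      # the photon travels on in Eve's basis
        bob_bits.append(measure(bit, a_basis, b_basis))

    # Keep only positions where Alice's and Bob's bases matched ("sifting"),
    # then estimate the error rate as they would by comparing a public sample.
    sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return errors / len(sifted)

print(f"error rate without Eve: {run(False):.1%}")   # ~0%
print(f"error rate with Eve:    {run(True):.1%}")    # ~25% -- the eavesdropper is detected
```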

While I like the science of quantum cryptography — my undergraduate degree was in physics — I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols.


Famous “Laws” of Business & Technology

These come from a variety of sources; just Google the law to find out more about it.

Parkinson’s Law

“Work expands so as to fill the time available for its completion.”

Source: Cyril Northcote Parkinson in The Economist (1955)

The Peter Principle

“In a hierarchy every employee tends to rise to his level of incompetence.”

Source: Dr. Laurence J. Peter and Raymond Hull in The Peter Principle (1968)

The Dilbert Principle

“Leadership is nature’s way of removing morons from the productive flow.”

Source: Scott Adams’ Dilbert (February 5, 1995)

Hofstadter’s Law

“It always takes longer than you expect, even when you take into account Hofstadter’s Law.”

Source: Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid (1979)

Amara’s Law

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Source: Roy Amara

Brooks’ Law

“Adding manpower to a late software project makes it later.”

Source: Fred Brooks’ The Mythical Man-Month (1975)

Clarke’s 3 Laws

  1. First law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. Second law: The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Third law: Any sufficiently advanced technology is indistinguishable from magic.

Source: Arthur C. Clarke’s “Hazards of Prophecy: The Failure of Imagination” in Profiles of the Future (1962)

Conway’s Law

“Any piece of software reflects the organizational structure that produced it.”

Source: Melvin Conway (1968)

Gall’s Law

“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.”

Source: John Gall’s Systemantics: How Systems Really Work and How They Fail (1978)

Godwin’s Law

“As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.”

Source: Mike Godwin (1990)

Hanlon’s Razor

“Never attribute to malice that which can be adequately explained by stupidity.”

Herblock’s Law

“If it’s good, they’ll stop making it.”

Source: Herbert Lawrence Block

Kranzberg’s 6 Laws of Technology

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity – and so is the history of technology.

Source: Melvin Kranzberg’s “Kranzberg’s Laws” Technology and Culture, Vol. 27, No. 3 (1986): 544-560

Linus’s Law

“Given enough eyeballs, all bugs are shallow.”

Source: Eric S. Raymond’s The Cathedral and the Bazaar (1997), named in honor of Linus Torvalds

Schneier’s Law

“Any person can invent a security system so clever that she or he can’t think of how to break it.”

Source: Cory Doctorow’s “Microsoft Research DRM talk” (17 June 2004)

Sturgeon’s Revelation

“90 percent of everything is crap.”

Source: Theodore Sturgeon (1951)

Wirth’s Law

“Software is getting slower more rapidly than hardware becomes faster.”

Source: Niklaus Wirth (1995)

Zawinski’s Law

“Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”

Source: Jamie Zawinski

Granneman’s Law of Operating System Usage

“To mess up a Linux box, you need to work at it; to mess up your Windows box, you just need to work on it.”

Source: Scott Granneman’s “Linux vs. Windows Viruses” in SecurityFocus (10 February 2003)


Small charges on your credit card – why?

Photo: “Too Much Credit” (Creative Commons photo credit: Andres Rueda)

From Brian Krebs’ “An Odyssey of Fraud” (The Washington Post: 17 June 2009):

Andy Kordopatis is the proprietor of Odyssey Bar, a modest watering hole in Pocatello, Idaho, a few blocks away from Idaho State University. Most of his customers pay for their drinks with cash, but about three times a day he receives a phone call from someone he’s never served — in most cases someone who’s never even been to Idaho — asking why their credit or debit card has been charged a small amount by his establishment.

Kordopatis says he can usually tell what’s coming next when the caller immediately asks to speak with the manager or owner.

“That’s when I start telling them that I know why they’re calling, and about the Russian hackers who are using my business,” Kordopatis said.

The Odyssey Bar is but one of dozens of small establishments throughout the United States seemingly picked at random by organized cyber criminals to serve as unwitting pawns in a high-stakes game of chess against the U.S. financial system. This daily pattern of phone calls and complaints has been going on for more than a year now. Kordopatis said he has talked to the company that processes his bar’s credit card payments about fixing the problem, but says they can’t do anything because he hasn’t actually lost any money from the scam.

The Odyssey Bar’s merchant account is being abused by online services that cyber thieves built to help other crooks check the balances and limits on stolen credit and debit card account numbers.


Mine fires that burn for 400 years

Photo: “Centralia - Where there's smoke..” (Creative Commons photo credit: C. Young Photography)

From Joshua Foer’s “Giant Burning Holes of the World” (Boing Boing: 16 June 2009):

… these sorts of mine fires can stay lit for a very long time. One burned in the city of Zwickau, Germany from 1476 to 1860. Another coal fire in Germany, at a place called Brennender Berg (Burning Mountain), has been smoking continually since 1688!


Could Green Dam lead to the largest botnet in history?


From Rob Cottingham’s “From blocking to botnet: Censorship isn’t the only problem with China’s new Internet blocking software” (Social Signal: 10 June 2009):

Any blocking software needs to update itself from time to time: at the very least to freshen its database of forbidden content, and more than likely to fix bugs, add features and improve performance. (Most anti-virus software does this.)

If all the software does is to refresh the list of banned sites, that limits the potential for abuse. But if the software is loading new executable code onto the computer, suddenly there’s the potential for something a lot bigger.
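That distinction between data-only updates and code updates can be made concrete with a small hypothetical sketch (the function names and payload format are invented for illustration): a client that only ever parses the downloaded payload as a list of hostnames has a bounded worst case, whereas one that downloads and runs executable code hands whoever controls the update channel full control of the machine.

```python
# Hypothetical data-only updater: the downloaded payload is parsed strictly as
# a list of hostnames and is never executed, so a tampered update can at worst
# change what gets blocked. An updater that downloads and *runs* code instead
# gives the update server's operator arbitrary control over the computer.
def parse_blocklist(payload: str) -> set[str]:
    """Treat every line of the update as an opaque hostname string -- data only."""
    return {line.strip().lower() for line in payload.splitlines()
            if line.strip() and not line.startswith("#")}

def is_blocked(hostname: str, blocklist: set[str]) -> bool:
    return hostname.lower() in blocklist

# Example with an inline payload standing in for the downloaded file:
blocklist = parse_blocklist("# forbidden sites\nexample-bad-site.com\nanother.example.net\n")
print(is_blocked("Example-Bad-Site.com", blocklist))   # True
print(is_blocked("news.example.org", blocklist))       # False
```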

Say you’re a high-ranking official in the Chinese military. And let’s say you have some responsibility for the state’s capacity to wage so-called cyber warfare: digital assaults on an enemy’s technological infrastructure.

It strikes you: there’s a single backdoor into more than 40 million Chinese computers, capable of installing… well, nearly anything you want.

What if you used that backdoor, not just to update blocking software, but to create something else?

Say, the biggest botnet in history?

Still, a botnet 40 million strong (plus the installed base already in place in Chinese schools and other institutions) at the beck and call of the military is potentially a formidable weapon. Even if the Chinese government has no intention today of using Green Dam for anything other than blocking pornography, the temptation to repurpose it for military purposes may prove to be overwhelming.


Green Dam is easily exploitable


From Scott Wolchok, Randy Yao, and J. Alex Halderman’s “Analysis of the Green Dam Censorware System” (The University of Michigan: 11 June 2009):

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC.

According to press reports, China will soon require all PCs sold in the country to include Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material.

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.


The limitations of Windows 7 on netbooks

From Farhad Manjoo’s “I, for One, Welcome Our New Android Overlords” (Slate: 5 June 2009):

Microsoft promises that Windows 7 will be able to run on netbooks, but it has announced a risky strategy to squeeze profits from these machines. The company plans to cripple the cheapest versions of the new OS in order to encourage PC makers to pay for premium editions. If you buy a netbook that comes with the low-priced Windows 7 Starter Edition, you won’t be able to change your screen’s background or window colors, you won’t be able to play DVDs, you can’t connect it to another monitor, and you won’t see many of the user-interface advances found in other versions. If you’d like more flexibility, you’ll need to upgrade to a more expensive version of Windows—which will, of course, defeat the purpose of your cheap PC. (Microsoft had originally planned to limit Starter Edition even further—you wouldn’t be able to run more than three programs at a time. It removed that limitation after howls of protest.)


Meeting expectations, no matter how silly, in design

From Operator No. 9’s “That decorating touch” (Interactive Week: 24 April 2000), p. 100:

Intel AnyPoint Wireless:

Dan Sweeney, general manager of Intel’s Home Networking division, says that when the company showed consumer focus groups the AnyPoint Wireless home networking system …, people became very confused, because there wasn’t a visible antenna. The desktop version of the wireless adapter — about the size of a deck of cards — has an antenna hidden inside it. ‘They looked at it and said, “That’s not a radio!”‘ Sweeney says. So Intel’s industrial designers added a tiny little plastic tip on top of the unit that is supposed to resemble an antenna. It actually looks — and I’m sure this was not intended by the designers — kind of like the type of hat klansmen or maybe religious leaders — bishops? vicars? — wear. Then again, maybe I just need to get out more often.


The Uncanny Valley, art forgery, & love

Photo: “Apply new wax to old wood” (Creative Commons photo credit: hans s)

From Errol Morris’ “Bamboozling Ourselves (Part 2)” (The New York Times: 28 May 2009):

[Errol Morris:] The Uncanny Valley is a concept developed by the Japanese robot scientist Masahiro Mori. It concerns the design of humanoid robots. Mori’s theory is relatively simple. We tend to reject robots that look too much like people. Slight discrepancies and incongruities between what we look like and what they look like disturb us. The closer a robot resembles a human, the more critical we become, the more sensitive to slight discrepancies, variations, imperfections. However, if we go far enough away from the humanoid, then we much more readily accept the robot as being like us. This accounts for the success of so many movie robots — from R2-D2 to WALL-E. They act like humans but they don’t look like humans. There is a region of acceptability — the peaks around The Uncanny Valley, the zone of acceptability that includes completely human and sort of human but not too human. The existence of The Uncanny Valley also suggests that we are programmed by natural selection to scrutinize the behavior and appearance of others. Survival no doubt depends on such an innate ability.

EDWARD DOLNICK: [The art forger Van Meegeren] wants to avoid it. So his big challenge is he wants to paint a picture that other people are going to take as Vermeer, because Vermeer is a brand name, because Vermeer is going to bring him lots of money, if he can get away with it, but he can’t paint a Vermeer. He doesn’t have that skill. So how is he going to paint a picture that doesn’t look like a Vermeer, but that people are going to say, “Oh! It’s a Vermeer?” How’s he going to pull it off? It’s a tough challenge. Now here’s the point of The Uncanny Valley: as your imitation gets closer and closer to the real thing, people think, “Good, good, good!” — but then when it’s very close, when it’s within 1 percent or something, instead of focusing on the 99 percent that is done well, they focus on the 1 percent that you’re missing, and you’re in trouble. Big trouble.

Van Meegeren is trapped in the valley. If he tries for the close copy, an almost exact copy, he’s going to fall short. He’s going to look silly. So what he does instead is rely on the blanks in Vermeer’s career, because hardly anything is known about him; he’s like Shakespeare in that regard. He’ll take advantage of those blanks by inventing a whole new era in Vermeer’s career. No one knows what he was up to all this time. He’ll throw in some Vermeer touches, including a signature, so that people who look at it will be led to think, “Yes, this is a Vermeer.”

Van Meegeren was sometimes careful, other times astonishingly reckless. He could have passed certain tests. What was peculiar, and what was quite startling to me, is that it turned out that nobody ever did any scientific test on Van Meegeren, even the stuff that was available in his day, until after he confessed. And to this day, people hardly ever test pictures, even multi-million dollar ones. And I was so surprised by that that I kept asking, over and over again: why? Why would that be? Before you buy a house, you have someone go through it for termites and the rest. How could it be that when you’re going to lay out $10 million for a painting, you don’t test it beforehand? And the answer is that you don’t test it because, at the point of being about to buy it, you’re in love! You’ve found something. It’s going to be the high mark of your collection; it’s going to be the making of you as a collector. You finally found this great thing. It’s available, and you want it. You want it to be real. You don’t want to have someone let you down by telling you that the painting isn’t what you think it is. It’s like being newly in love. Everything is candlelight and wine. Nobody hires a private detective at that point. It’s only years down the road when things have gone wrong that you say, “What was I thinking? What’s going on here?” The collector and the forger are in cahoots. The forger wants the collector to snap it up, and the collector wants it to be real. You are on the same side. You think that it would be a game of chess or something, you against him. “Has he got the paint right?” “Has he got the canvas?” You’re going to make this checkmark and that checkmark to see if the painting measures up. But instead, both sides are rooting for this thing to be real. If it is real, then you’ve got a masterpiece. If it’s not real, then today is just like yesterday. You’re back where you started, still on the prowl.


Taxi driver party lines

Photo: “8th Ave .....Midtown Manhattan” (Creative Commons photo credit: 708718)

From Annie Karni’s “Gabbing Taxi Drivers Talking on ‘Party Lines’” (The New York Sun: 11 January 2007):

It’s not just wives at home or relatives overseas that keep taxi drivers tied up on their cellular phones during work shifts. Many cabbies say that when they are chatting on duty, it’s often with their cab driver colleagues on group party lines. Taxi drivers say they use conference calls to discuss directions and find out about congested routes to avoid. They come to depend on one another as first responders, reacting faster even than police to calls from drivers in distress. Some drivers say they participate in group prayers on a party line.

It is during this morning routine, waiting for the first shuttle flights to arrive from Washington and Boston, where many friendships between cabbies are forged and cell phone numbers are exchanged, Mr. Sverdlov said. Once drivers have each other’s numbers, they can use push-to-talk technology to call large groups all at once.

Mr. Sverdlov said he conferences with up to 10 cabbies at a time to discuss “traffic, what’s going on, this and that, and where do cops stay.” He estimated that every month, he logs about 20,000 talking minutes on his cell phone.

While civilian drivers are allowed to use hands-free devices to talk on cell phones while behind the wheel, the Taxi & Limousine Commission imposed a total cell phone ban for taxi drivers on duty in 1999. In 2006, the Taxi & Limousine Commission issued 1,049 summonses for phone use while on duty, up by almost 69% from the 621 summonses it issued the previous year. Drivers caught chatting while driving are fined $200 and receive two-point penalties on their licenses.

Drivers originally from countries like Israel, China, and America, who are few and far between, say they rarely chat on the phone with other cab drivers because of the language barrier. For many South Asians and Russian drivers, however, conference calls that are prohibited by the Taxi & Limousine Commission are mainstays of cabby life.


Avoid toxic people

From Milton Glaser’s “Ten Things I Have Learned” (Milton Glaser: 22 November 2001):

… the important thing that I can tell you is that there is a test to determine whether someone is toxic or nourishing in your relationship with them. Here is the test: You have spent some time with this person, either you have a drink or go for dinner or you go to a ball game. It doesn’t matter very much but at the end of that time you observe whether you are more energised or less energised. Whether you are tired or whether you are exhilarated. If you are more tired then you have been poisoned. If you have more energy you have been nourished. The test is almost infallible and I suggest that you use it for the rest of your life.


Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.

Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants. …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.


The watchclock knows where your night watchman is

Photo: “Detex Watchclock Station” (Creative Commons photo credit: 917press)

From Christopher Fahey’s “Who Watches the Watchman?” (GraphPaper: 2 May 2009):

The Detex Newman watchclock was first introduced in 1927 and is still in wide use today.

… What could you possibly do in 1900 to be absolutely sure a night watchman was making his full patrol?

An elegant solution, designed and patented in 1901 by the German engineer A.A. Newman, is called the “watchclock”. It’s an ingenious mechanical device, slung over the shoulder like a canteen and powered by a simple wind-up spring mechanism. It precisely tracks and records a night watchman’s position in both space and time for the duration of every evening. It also generates a detailed, permanent, and verifiable record of each night’s patrol.

What’s so interesting to me about the watchclock is that it’s an early example of interaction design used to explicitly control user behavior. The “user” of the watchclock device is obliged to behave in a strictly delimited fashion.

The key, literally, to the watchclock system is that the watchman is required to “clock in” at a series of perhaps a dozen or more checkpoints throughout the premises. Positioned at each checkpoint is a unique, coded key nestled in a little steel box and secured by a small chain. Each keybox is permanently and discreetly installed in strategically-placed nooks and crannies throughout the building, for example in a broom closet or behind a stairway.

The watchman makes his patrol. He visits every checkpoint and clicks each unique key into the watchclock. Within the device, the clockwork marks the exact time and key-location code to a paper disk or strip. If the watchman visits all checkpoints in order, he will have completed his required patrol route.

The watchman’s supervisor can subsequently unlock the device itself (the watchman himself cannot open the watchclock) and review the paper records to confirm if the watchman was or was not doing their job.
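The supervisor’s check translates directly into a few lines of code. A sketch, with invented key codes and an invented 30-minute spacing rule:

```python
# Sketch of the supervisor's review: given one night's punched (time, key-code)
# records, verify every checkpoint was hit, in the required order, and without
# suspiciously long gaps. Key codes and the 30-minute limit are illustrative.
from datetime import datetime, timedelta

ROUTE = ["K01", "K02", "K03", "K04"]          # required checkpoint keys, in order
MAX_GAP = timedelta(minutes=30)

def patrol_ok(punches):
    """punches: list of (timestamp_str, key_code) as recorded on the paper disk."""
    times = [datetime.fromisoformat(ts) for ts, _ in punches]
    keys = [key for _, key in punches]
    in_order = keys == ROUTE
    gaps_ok = all(b - a <= MAX_GAP for a, b in zip(times, times[1:]))
    return in_order and gaps_ok

night = [
    ("1927-06-01T22:00:00", "K01"),
    ("1927-06-01T22:20:00", "K02"),
    ("1927-06-01T22:45:00", "K03"),
    ("1927-06-01T23:05:00", "K04"),
]
print(patrol_ok(night))   # True: every checkpoint, in order, within 30 minutes of the last
```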
