
Cell phone viruses

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

Earlier this year, smartphone users in China started to get messages promising a “sexy view” if they clicked on a link. The link led to a download. That download was a spam generator which, once installed, sent identical “sexy view” messages to everyone in the owner’s contacts list.

That was the first virus known to travel by text message. It was chiefly an annoyance, but there is great potential harm from mobile viruses, especially as technologies such as Bluetooth provide new ways for viruses to spread. But there has never yet been a cellphone threat as serious as Conficker is to PCs.

There are two reasons for that, says Albert-László Barabási of Northeastern University in Boston. He and his colleagues used billing data to model the spread of a mobile virus. They found that Bluetooth is an inefficient way of transmitting a virus as it can only jump between users who are within 30 metres of each other. A better option would be for the virus to disguise itself as a picture message. But that could still only infect handsets running the same operating system. As the mobile market is fragmented, says Barabási, no one virus can gain a foothold.
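Barabási's fragmentation point can be illustrated with a toy simulation (the OS shares below are invented for the example, and this is a sketch of the idea, not his team's actual model): a virus written for one operating system can never reach more handsets than that OS's market share allows.

```python
import random

def mms_outbreak_ceiling(n_users: int, os_shares: dict[str, float], seed: int = 0) -> float:
    """Toy model of an MMS-style virus: it can only infect handsets running
    the same OS as the first infected phone, so its ceiling is that OS's
    share of the handset population."""
    rng = random.Random(seed)
    oses = list(os_shares)
    weights = list(os_shares.values())
    phones = rng.choices(oses, weights=weights, k=n_users)
    target_os = phones[0]  # the strain is compiled for one OS only
    infectable = sum(1 for os in phones if os == target_os)
    return infectable / n_users
```

With a fragmented market (say, five systems at 20 percent each), the ceiling lands near 0.2; with a monoculture it is 1.0 — which is exactly why Conficker, targeting a near-monoculture of Windows PCs, had no such limit.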


How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.
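The telescope's detection logic amounts to bucketing connection attempts by hour and flagging counts that climb well above the steady background noise. A minimal sketch (the 2× threshold is an arbitrary illustrative choice, not the center's actual method):

```python
from collections import Counter

def spike_hours(probe_hours: list[int], baseline: int = 3000) -> list[int]:
    """Bucket hostile-connection timestamps by hour-of-observation and
    return the hours whose counts rise far above the background noise
    generated by older malware still at large."""
    counts = Counter(probe_hours)
    return sorted(h for h, n in counts.items() if n > 2 * baseline)
```

Fed the numbers from the article — roughly 3,000 probes an hour all day, then 115,000 an hour the next morning — only the flood hours are flagged.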

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
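A domain-generation scheme of this kind can be sketched in a few lines of Python. To be clear, this is an illustrative toy, not Conficker's actual generator (which had to be recovered by reverse engineering); the hashing recipe here is invented for the example. The essential property is the real one: both the worm and its authors can derive the same 250 addresses from nothing but the date.

```python
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day: date, count: int = 250) -> list[str]:
    """Derive `count` deterministic, meaningless-looking domain names from
    a date. Anyone who knows the recipe can reproduce the day's list."""
    domains = []
    for i in range(count):
        # Hash the date plus a counter so the list is reproducible.
        digest = hashlib.sha256(f"{day.isoformat()}-{i}".encode()).hexdigest()
        # Map hex digits to letters to get a nonsense label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:8])
        domains.append(label + TLDS[int(digest[8], 16) % len(TLDS)])
    return domains
```

The authors need only register one of the day's names to leave instructions; defenders must block all 250, every day, forever.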

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.
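The "500 at random from 50,000" trick is easy to sketch (again as an illustrative toy, not the worm's real code): seed a random-number generator with the date, and every infected machine draws the same daily subset with no communication at all.

```python
import random
from datetime import date

def daily_url_sample(day: date, pool: int = 50_000, picks: int = 500) -> list[int]:
    """Choose the day's 500 candidate domains (here just indices into a
    fixed 50,000-entry list). Seeding the RNG with the date lets every
    infected machine, and the authors, derive the same subset."""
    rng = random.Random(day.toordinal())
    return sorted(rng.sample(range(pool), picks))
```

For defenders the asymmetry is brutal: the authors need one of the 500 to slip through, while blocking requires pre-registering all 50,000 names across 116 registries.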

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”
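Why P2P release is so hard to stop can be seen in a toy gossip simulation (the fanout and push model here are illustrative assumptions, not Conficker's actual protocol): once one node has the payload, it spreads exponentially with no central server to block.

```python
import random

def gossip_rounds(n_nodes: int, fanout: int = 3, seed: int = 1) -> int:
    """Count the rounds a P2P update needs to reach every node when each
    updated node pushes the payload to `fanout` random peers per round.
    There is no release point to take down -- only the mesh itself."""
    rng = random.Random(seed)
    updated = {0}  # one node seeds the new payload
    rounds = 0
    while len(updated) < n_nodes:
        for node in list(updated):
            updated.update(rng.sample(range(n_nodes), fanout))
        rounds += 1
    return rounds
```

Even a modest network saturates in a handful of rounds, which is why the security experts "had no means of stopping it spreading."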

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.


Stolen credit card data is cheaper than ever in the Underground

From Brian Krebs’ “Glut of Stolen Banking Data Trims Profits for Thieves” (The Washington Post: 15 April 2009):

A massive glut in the number of credit and debit cards stolen in data breaches at financial institutions last year has flooded criminal underground markets that trade in this material, driving prices for the illicit goods to the lowest levels seen in years, experts have found.

For a glimpse of just how many financial records were lost to hackers last year, consider the stats released this week by Verizon Business. The company said it responded to at least 90 confirmed data breaches last year involving roughly 285 million consumer records, a number that exceeded the combined total number of breached records from cases the company investigated from 2004 to 2007. Breaches at banks and financial institutions were responsible for 93 percent of all such records compromised last year, Verizon found.

As a result, the supply of stolen identities and credit and debit cards for sale in the underground markets is outpacing demand for the product, said Bryan Sartin, director of investigative response at Verizon Business.

Verizon found that the price fetched by stolen credit card data has dropped from between $10 and $16 per record in mid-2007 to less than 50 cents per record today.

According to a study released last week by Symantec Corp., each card can sell for as little as 6 cents when purchased in bulk.

Lawrence Baldwin, a security consultant in Alpharetta, Ga., has been working with several financial institutions to help infiltrate illegal card-checking services. Baldwin estimates that at least 25,000 credit and debit cards are checked each day at three separate illegal card-checking Web sites he is monitoring. That translates to about 800,000 cards per month or nearly 10 million cards each year.

Baldwin said the checker sites take advantage of authentication weaknesses in the card processing system that allow merchants to conduct so-called “pre-authorization requests,” which merchants use to place a temporary charge on the account to make sure that the cardholder has sufficient funds to pay for the promised goods or services.

Pre-authorization requests are quite common. When a waiter at a restaurant swipes a customer’s card and brings the receipt to the table so the customer can add a tip, for example, that initial charge is essentially a pre-authorization.

With these card-checking services, however, in most cases the charge initiated by the pre-authorization check is never consummated. As a result, unless a consumer is monitoring their accounts online in real-time, they may never notice a pre-authorization initiated by a card-checking site against their card number, because that query won’t show up as a charge on the customer’s monthly statement.

The crooks have designed their card-checking sites so that each check is submitted into the card processing network using a legitimate, hijacked merchant account number combined with a completely unrelated merchant name, Baldwin discovered.

One of the many innocent companies caught up in one of these card-checking services is Wild Birds Unlimited, a franchise pet store outside of Buffalo, N.Y. Baldwin said a fraudulent card-checking service is running pre-authorization requests using Wild Bird’s store name and phone number in combination with another merchant’s ID number.

Danielle Pecoraro, the store’s manager, said the bogus charges started in January 2008. Since then, she said, her store has received an average of three to four phone calls each day from people who had never shopped there, wondering why small, $1-$10 charges from her store were showing up on their monthly statements. Some of the charges were for as little as 24 cents, and a few were for as much as $1,900.


Green Dam is easily exploitable


From Scott Wolchok, Randy Yao, and J. Alex Halderman’s “Analysis of the Green Dam Censorware System” (The University of Michigan: 11 June 2009):

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC.

According to press reports, China will soon require all PCs sold in the country to include Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material.

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.


The limitations of Windows 7 on netbooks

From Farhad Manjoo’s “I, for One, Welcome Our New Android Overlords” (Slate: 5 June 2009):

Microsoft promises that Windows 7 will be able to run on netbooks, but it has announced a risky strategy to squeeze profits from these machines. The company plans to cripple the cheapest versions of the new OS in order to encourage PC makers to pay for premium editions. If you buy a netbook that comes with the low-priced Windows 7 Starter Edition, you won’t be able to change your screen’s background or window colors, you won’t be able to play DVDs, you can’t connect it to another monitor, and you won’t see many of the user-interface advances found in other versions. If you’d like more flexibility, you’ll need to upgrade to a more expensive version of Windows—which will, of course, defeat the purpose of your cheap PC. (Microsoft had originally planned to limit Starter Edition even further—you wouldn’t be able to run more than three programs at a time. It removed that limitation after howls of protest.)


The Uncanny Valley, art forgery, & love


From Errol Morris’ “Bamboozling Ourselves (Part 2)” (The New York Times: 28 May 2009):

[Errol Morris:] The Uncanny Valley is a concept developed by the Japanese robot scientist Masahiro Mori. It concerns the design of humanoid robots. Mori’s theory is relatively simple. We tend to reject robots that look too much like people. Slight discrepancies and incongruities between what we look like and what they look like disturb us. The closer a robot resembles a human, the more critical we become, the more sensitive to slight discrepancies, variations, imperfections. However, if we go far enough away from the humanoid, then we much more readily accept the robot as being like us. This accounts for the success of so many movie robots — from R2-D2 to WALL-E. They act like humans but they don’t look like humans. There is a region of acceptability — the peaks around The Uncanny Valley, the zone of acceptability that includes completely human and sort of human but not too human. The existence of The Uncanny Valley also suggests that we are programmed by natural selection to scrutinize the behavior and appearance of others. Survival no doubt depends on such an innate ability.

EDWARD DOLNICK: [The art forger Van Meegeren] wants to avoid it. So his big challenge is he wants to paint a picture that other people are going to take as Vermeer, because Vermeer is a brand name, because Vermeer is going to bring him lots of money, if he can get away with it, but he can’t paint a Vermeer. He doesn’t have that skill. So how is he going to paint a picture that doesn’t look like a Vermeer, but that people are going to say, “Oh! It’s a Vermeer?” How’s he going to pull it off? It’s a tough challenge. Now here’s the point of The Uncanny Valley: as your imitation gets closer and closer to the real thing, people think, “Good, good, good!” — but then when it’s very close, when it’s within 1 percent or something, instead of focusing on the 99 percent that is done well, they focus on the 1 percent that you’re missing, and you’re in trouble. Big trouble.

Van Meegeren is trapped in the valley. If he tries for the close copy, an almost exact copy, he’s going to fall short. He’s going to look silly. So what he does instead is rely on the blanks in Vermeer’s career, because hardly anything is known about him; he’s like Shakespeare in that regard. He’ll take advantage of those blanks by inventing a whole new era in Vermeer’s career. No one knows what he was up to all this time. He’ll throw in some Vermeer touches, including a signature, so that people who look at it will be led to think, “Yes, this is a Vermeer.”

Van Meegeren was sometimes careful, other times astonishingly reckless. He could have passed certain tests. What was peculiar, and what was quite startling to me, is that it turned out that nobody ever did any scientific test on Van Meegeren, even the stuff that was available in his day, until after he confessed. And to this day, people hardly ever test pictures, even multi-million dollar ones. And I was so surprised by that that I kept asking, over and over again: why? Why would that be? Before you buy a house, you have someone go through it for termites and the rest. How could it be that when you’re going to lay out $10 million for a painting, you don’t test it beforehand?

And the answer is that you don’t test it because, at the point of being about to buy it, you’re in love! You’ve found something. It’s going to be the high mark of your collection; it’s going to be the making of you as a collector. You finally found this great thing. It’s available, and you want it. You want it to be real. You don’t want to have someone let you down by telling you that the painting isn’t what you think it is. It’s like being newly in love. Everything is candlelight and wine. Nobody hires a private detective at that point. It’s only years down the road when things have gone wrong that you say, “What was I thinking? What’s going on here?”

The collector and the forger are in cahoots. The forger wants the collector to snap it up, and the collector wants it to be real. You are on the same side. You think that it would be a game of chess or something, you against him. “Has he got the paint right?” “Has he got the canvas?” You’re going to make this checkmark and that checkmark to see if the painting measures up. But instead, both sides are rooting for this thing to be real. If it is real, then you’ve got a masterpiece. If it’s not real, then today is just like yesterday. You’re back where you started, still on the prowl.


Taxi driver party lines


From Annie Karni’s “Gabbing Taxi Drivers Talking on ‘Party Lines’” (The New York Sun: 11 January 2007):

It’s not just wives at home or relatives overseas that keep taxi drivers tied up on their cellular phones during work shifts. Many cabbies say that when they are chatting on duty, it’s often with their cab driver colleagues on group party lines. Taxi drivers say they use conference calls to discuss directions and find out about congested routes to avoid. They come to depend on one another as first responders, reacting faster even than police to calls from drivers in distress. Some drivers say they participate in group prayers on a party line.

It is during this morning routine, waiting for the first shuttle flights to arrive from Washington and Boston, where many friendships between cabbies are forged and cell phone numbers are exchanged, Mr. Sverdlov said. Once drivers have each other’s numbers, they can use push-to-talk technology to call large groups all at once.

Mr. Sverdlov said he conferences with up to 10 cabbies at a time to discuss “traffic, what’s going on, this and that, and where do cops stay.” He estimated that every month, he logs about 20,000 talking minutes on his cell phone.

While civilian drivers are allowed to use hands-free devices to talk on cell phones while behind the wheel, the Taxi & Limousine Commission imposed a total cell phone ban for taxi drivers on duty in 1999. In 2006, the Taxi & Limousine Commission issued 1,049 summonses for phone use while on duty, up by almost 69% from the 621 summonses it issued the previous year. Drivers caught chatting while driving are fined $200 and receive two-point penalties on their licenses.

Drivers originally from countries like Israel, China, and America, who are few and far between, say they rarely chat on the phone with other cab drivers because of the language barrier. For many South Asians and Russian drivers, however, conference calls that are prohibited by the Taxi & Limousine Commission are mainstays of cabby life.


Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.  
Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants. …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.


A story of failed biometrics at a gym


From Jake Vinson’s “Cracking your Fingers” (The Daily WTF: 28 April 2009):

A few days later, Ross stood proudly in the reception area, hands on his hips. A high-tech fingerprint scanner sat at the reception area near the turnstile and register, as the same scanner would be used for each, though the register system wasn’t quite ready for rollout yet. Another scanner sat on the opposite side of the turnstile, for gym members to sign out. … The receptionist looked almost as pleased as Ross that morning as well, excited that this meant they were working toward a system that necessitated fewer manual member ID lookups.

After signing a few people up, the new system was going swimmingly. Some users declined to use the new system, instead walking to the far side of the counter to use the old touchscreen system. Then Johnny tried to leave after his workout.

… He scanned his finger on his way out, but the turnstile wouldn’t budge.

“Uh, just a second,” the receptionist furiously typed and clicked, while Johnny removed one of his earbuds and stared. “I’ll just have to manually override it…” but it was useless. There was no manual override option. Somehow, it was never considered that the scanner would malfunction. After several seconds of searching and having Johnny try to scan his finger again, the receptionist instructed him just to jump over the turnstile.

It was later discovered that the system required a “sign in” and a “sign out,” and if a member was recognized as someone else when attempting to sign out, the system rejected the input, and the turnstile remained locked in position. This was not good.

The scene repeated itself several times that day. Worse, the fingerprint scanner at the exit was getting kind of disgusting. Dozens of sweaty fingerprints required the scanner to be cleaned hourly, and even after it was freshly cleaned, it sometimes still couldn’t read fingerprints right. The latticed patterns on the barbell grips would leave indented patterns temporarily on the members’ fingers, there could be small cuts or folds on fingertips just from carrying weights or scrapes on the concrete coming out of the pool, fingers were wrinkly after a long swim, or sometimes the system just misidentified the person for no apparent reason.

Fingerprint Scanning

In much the same way that it’s not a good idea to store passwords in plaintext, it’s not a good idea to store raw fingerprint data. Instead, it should be hashed, so that the same input will consistently give the same output, but said output can’t be used to determine what the input was. In biometry, there are many complex algorithms that can analyze a fingerprint via several points on the finger. This system was set up to record seven points.
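The password analogy can be made concrete with a short sketch (illustrative only — real biometric matchers need fuzzy comparison, which plain hashing does not provide, a simplification the gym's vendor apparently shared): hash a quantized list of feature points, and identical scans hash identically while the hash reveals nothing about the finger.

```python
import hashlib

def template_hash(points: list[tuple[int, int]]) -> str:
    """Hash a quantized set of fingerprint feature points so the raw
    template never has to be stored. Sorting first makes the hash
    independent of the order in which points were extracted."""
    canonical = ",".join(f"{x}:{y}" for x, y in sorted(points))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

As with passwords, the database then holds only digests: a breach leaks no reusable fingerprint data.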

After a few hours of rollout, though, it became clear that the real world doesn’t conform to how it should’ve worked in theory. There were simply too many variables, too many activities in the gym that could cause fingerprints to become altered. As such, the installers did what they thought was the reasonable thing to do – reduce the precision from seven points down to something substantially lower.
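The cost of that shortcut is easy to demonstrate with a toy matcher (a sketch of the failure mode, not the vendor's actual algorithm): the fewer points that must agree, the more pairs of members become indistinguishable.

```python
import itertools

def indistinguishable_pairs(templates, required: int) -> int:
    """Count pairs of member templates the matcher can no longer tell
    apart once only `required` feature points need to agree."""
    return sum(
        1
        for a, b in itertools.combinations(templates, 2)
        if len(set(a) & set(b)) >= required
    )
```

With all seven points required, overlapping-but-distinct fingers stay distinct; drop the requirement low enough and they collapse into one identity — exactly the failure the gym discovered.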

The updated system was in place for a few days, and it seemed to be working better; no more people being held up trying to leave.

Discovery

… [The monitor] showed Ray as coming in several times that week, often twice on the same day, just hours apart. For each day listed, Ray had only come the later of the two times.

Reducing the precision of the fingerprint scanning resulted in the system identifying two people as one person. Reviewing the log, they saw that some regulars weren’t showing up in the system, and many members had two or three people being identified by the scanner as them.


Interviewed for an article about mis-uses of Twitter

The Saint Louis Beacon published an article on 27 April 2009 titled “Tweets from the jury box aren’t amusing”, about legal “cases across the country where jurors have used cell phones, BlackBerrys and other devices to comment – sometimes minute by minute or second by second on Twitter, for instance – on what they are doing, hearing and seeing during jury duty.” In it, I was quoted as follows:

The small mobile devices so prevalent today can encourage a talkative habit that some people may find hard to break.

“People get so used to doing it all the time,” said Scott Granneman, an adjunct professor of communications and journalism at Washington University. “They don’t stop to think that being on a jury means they should stop. It’s an etiquette thing.”

Moreover, he added, the habit can become so ingrained in some people – even those on juries who are warned repeatedly they should not Tweet or text or talk about the case they are on – that they make excuses.

“It’s habitual,” Granneman said. “They say ‘It’s just my friends and family reading this.’ They don’t know the whole world is following this.

“Anybody can go to any Twitter page. There may be only eight people following you, but anybody can go to anyone’s page and anybody can reTweet – forward someone’s page: ‘Oh my God, the defense attorney is so stupid.’ That can go on and on and on.”


Steve Jobs on mediocrity & market share

From Steven Levy’s “OK, Mac, Make a Wish: Apple’s ‘computer for the rest of us’ is, insanely, 20” (Newsweek: 2 February 2004):

If that’s so, then why is the Mac market share, even after Apple’s recent revival, sputtering at a measly 5 percent? Jobs has a theory about that, too. Once a company devises a great product, he says, it has a monopoly in that realm, and concentrates less on innovation than protecting its turf. “The Mac user interface was a 10-year monopoly,” says Jobs. “Who ended up running the company? Sales guys. At the critical juncture in the late ’80s, when they should have gone for market share, they went for profits. They made obscene profits for several years. And their products became mediocre. And then their monopoly ended with Windows 95. They behaved like a monopoly, and it came back to bite them, which always happens.”


Newspapers are doomed

From Jeff Sigmund’s “Newspaper Web Site Audience Increases More Than Ten Percent In First Quarter To 73.3 Million Visitors” (Newspaper Association of America: 23 April 2009):

Newspaper Web sites attracted more than 73.3 million monthly unique visitors on average (43.6 percent of all Internet users) in the first quarter of 2009, a record number that reflects a 10.5 percent increase over the same period a year ago, according to a custom analysis provided by Nielsen Online for the Newspaper Association of America.

In addition, newspaper Web site visitors generated an average of more than 3.5 billion page views per month throughout the quarter, an increase of 12.8 percent over the same period a year ago (3.1 billion page views).

Contrast that with the article on Craigslist in Wikipedia (1 May 2009):

The site serves over twenty billion page views per month, putting it in 28th place overall among web sites world wide, ninth place overall among web sites in the United States (per Alexa.com on March 27, 2009), to over fifty million unique monthly visitors in the United States alone (per Compete.com on April 7, 2009). As of March 17, 2009 it was ranked 7th on Alexa. With over forty million new classified advertisements each month, Craigslist is the leading classifieds service in any medium. The site receives over one million new job listings each month, making it one of the top job boards in the world.

Even at its best, the entire newspaper industry draws only about a sixth of the page views Craigslist serves each month.


Criminal goods & services sold on the black market

From Ellen Messmer’s “Symantec takes cybercrime snapshot with ‘Underground Economy’ report” (Network World: 24 November 2008):

The “Underground Economy” report [from Symantec] contains a snapshot of online criminal activity observed from July 2007 to June 2008 by a Symantec team monitoring activities in Internet Relay Chat (IRC) and Web-based forums where stolen goods are advertised. Symantec estimates the total value of the goods advertised on what it calls “underground servers” was about $276 million, with credit-card information accounting for 59% of the total.

If that purloined information were successfully exploited, it probably would bring the buyers about $5 billion, according to the report — just a drop in the bucket, points out David Cowings, senior manager of operations at Symantec Security Response.

“Ninety-eight percent of the underground-economy servers have life spans of less than 6 months,” Cowings says. “The smallest IRC server we saw had five channels and 40 users. The largest IRC server network had 28,000 channels and 90,000 users.”

In the one year covered by the report, Symantec’s team observed more than 69,000 distinct advertisers and 44 million total messages online selling illicit credit-card and financial data, but the 10 most active advertisers appeared to account for 11% of the total messages posted and $575,000 in sales.

According to the report, a bank-account credential was selling for $10 to $1,000, depending on the balance and location of the account. Sellers also hawked specific financial sites’ vulnerabilities for an average price of $740, though prices did go as high as $2,999.

In other spots, the average price for a keystroke logger — malware used to capture a victim’s information — was an affordable $23. Attack tools, such as botnets, sold for an average of $225. “For $10, you could host a phishing site on someone’s server or compromised Web site,” Cowings says.

Desktop computer games appeared to be the most-pirated software, accounting for 49% of all file instances that Symantec observed. The second-highest category was utility applications; third-highest was multimedia productivity applications, such as photograph or HTML editors.
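Worked out, the advertiser figures above show a striking concentration. A quick back-of-envelope sketch, assuming the $276 million advertised-value estimate covers the same twelve-month period as the message counts (the excerpt implies but does not state this):

```python
# Figures from the Symantec "Underground Economy" report excerpt above
total_messages = 44_000_000       # messages observed over the year
top10_message_share = 0.11        # top 10 advertisers' share of messages
top10_sales = 575_000             # sales attributed to the top 10, in dollars
total_advertised = 276_000_000    # estimated value of all advertised goods

top10_messages = round(total_messages * top10_message_share)
top10_value_share = top10_sales / total_advertised

print(top10_messages)                     # 4840000
print(round(top10_value_share * 100, 2))  # 0.21
```

So the ten busiest advertisers posted nearly 5 million messages yet accounted for only about 0.2% of the value advertised, suggesting a very long tail of smaller sellers.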

Criminal goods & services sold on the black market Read More »

Some facts about GPL 2 & GPL 3

From Liz Laffan’s “GPLv2 vs GPLv3: The two seminal open source licenses, their roots, consequences and repercussions” (VisionMobile: September 2007):

From a licensing perspective, the vast majority (typically 60-70%) of all open source projects are licensed under the GNU General Public License version 2 (GPLv2).

GPLv3 was published in July 2007, some 16 years following the creation of GPLv2. The purpose of this new license is to address some of the areas identified for improvement and clarification in GPLv2 – such as patent indemnity, internationalisation and remedies for inadvertent license infringement (rather than the previous immediate termination effect). The new GPLv3 license is nearly double the length of the GPLv2 …

GPLv3 differs from GPLv2 in several important ways. Firstly, it provides more clarity on patent licenses and attempts to clarify what is meant by both a distribution and derivative works. Secondly, it revokes the immediate-termination-of-license clause in favour of giving licensees the opportunity to ‘fix’ any violations within a given time period. In addition there are explicit ‘Additional Terms’ which permit users to choose from a fixed set of alternative terms which can modify the standard GPLv3 terms. These are all welcome, positive moves which should benefit all users of the GPLv3 license.

Nonetheless there are three contentious aspects of GPLv3 that have provoked much discussion in the FOSS community and could deter adoption of GPLv3 by more circumspect users and organisations.

Some facts about GPL 2 & GPL 3 Read More »

Open source & patents

From Liz Laffan’s “GPLv2 vs GPLv3: The two seminal open source licenses, their roots, consequences and repercussions” (VisionMobile: September 2007):

Cumulatively patents have been doubling practically every year since 1990. Patents are now probably the most contentious issue in software-related intellectual property rights.

However, we should also be aware that software written from scratch is as likely to infringe patents as FOSS-covered software – due mainly to the increasing proliferation of patents in all software technologies. Consequently the risk of patent infringement is largely comparable whether one chooses to write one’s own software or use software covered by the GPLv2; one will most likely have to self-indemnify against a potential patent infringement claim in both cases.

The F.U.D. (fear, uncertainty and doubt) that surrounds patents in FOSS has been further heightened by two announcements, both instigated by Microsoft. Firstly, in November 2006 Microsoft and Novell entered into a cross-licensing patent agreement where Microsoft gave Novell assurances that it would not sue the company or its customers if they were to be found infringing Microsoft patents in the Novell Linux distribution. Secondly, in May 2007 Microsoft restated (having alluded to the same in 2004) that FOSS violates 235 Microsoft patents. Unfortunately, the Redmond giant did not state which patents in particular were being infringed, nor has it initiated any actions against a user or distributor of Linux.

The FOSS community have reacted to these actions by co-opting the patent system and setting up the Patent Commons (http://www.patentcommons.org). This initiative, managed by the Linux Foundation, coordinates and manages a patent commons reference library, documenting information about patent-related pledges in support of Linux and FOSS that are provided by large software companies. Moreover, software giants such as IBM and Nokia have committed not to assert patents against the Linux kernel and other FOSS projects. In addition, the FSF have strengthened the patent clause of GPLv3…

Open source & patents Read More »

Google’s server farm revealed

From Nicholas Carr’s “Google lifts its skirts” (Rough Type: 2 April 2009):

I was particularly surprised to learn that Google rented all its data-center space until 2005, when it built its first center. That implies that The Dalles, Oregon, plant (shown in the photo above) was the company’s first official data smelter. Each of Google’s containers holds 1,160 servers, and the facility’s original server building had 45 containers, which means that it probably was running a total of around 52,000 servers. Since The Dalles plant has three server buildings, that means – and here I’m drawing a speculative conclusion – that it might be running around 150,000 servers altogether.

Here are some more details, from Rich Miller’s report:

The Google facility features a “container hanger” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the hot aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.

Google’s design focused on “power above, water below,” according to [Jimmy] Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the hot aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment …

[Urs] Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.
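Carr’s back-of-envelope estimates check out arithmetically. A minimal sketch; note that the 320 sq ft container footprint is an assumption (a standard 40 ft × 8 ft shipping container), not a figure given in either excerpt:

```python
containers_per_building = 45
servers_per_container = 1160

servers_per_building = containers_per_building * servers_per_container
total_servers = 3 * servers_per_building   # three buildings, per Carr's speculation

container_watts = 250 * 1000               # 250 kW per container
floor_area_sqft = 40 * 8                   # assumed standard shipping container
watts_per_sqft = container_watts / floor_area_sqft

print(servers_per_building)   # 52200  ("around 52,000")
print(total_servers)          # 156600 ("around 150,000")
print(watts_per_sqft)         # 781.25 ("more than 780 watts per square foot")
```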

Google’s server farm revealed Read More »

$9 million stolen from 130 ATM machines in 49 cities in 30 minutes

From Catey Hill’s “Massive ATM heist! $9M stolen in only 30 minutes” (New York Daily News: 12 February 2009):

With information stolen from only 100 ATM cards, thieves made off with $9 million in cash, according to published reports. It only took 30 minutes.

“We’ve seen similar attempts to defraud a bank through ATM machines, but not anywhere near the scale we have here,” FBI Agent Ross Rice told Fox 5. “We’ve never seen one this well coordinated.”

The heist happened in November, but FBI officials released more information about the events only recently. …

How did they do it? The thieves hacked into the RBS WorldPay computer system and stole payroll card information from the company. A payroll card is used by many companies to pay the salaries of their employees. The cards work a lot like a debit card and can be used in any ATM.

Once the thieves had the card info, they employed a group of ‘cashers’ – people employed to go get the money out of the ATMs. The cashers went to ATMs around the world and withdrew money.

“Over 130 different ATM machines in 49 cities worldwide were accessed in a 30-minute period on November 8,” Agent Rice told Fox 5.

$9 million stolen from 130 ATM machines in 49 cities in 30 minutes Read More »

Now that the Seattle Post-Intelligencer has switched to the Web …

From William Yardley and Richard Pérez-Peña’s “Seattle Paper Shifts Entirely to the Web” (The New York Times: 16 March 2009):

The P-I, as it is called, will resemble a local Huffington Post more than a traditional newspaper, with a news staff of about 20 people rather than the 165 it had, and a site with mostly commentary, advice and links to other news sites, along with some original reporting.

The new P-I site has recruited some current and former government officials, including a former mayor, a former police chief and the current head of Seattle schools, to write columns, and it will repackage some material from Hearst’s large stable of magazines. It will keep some of the paper’s popular columnists and bloggers and the large number of unpaid local bloggers whose work appears on the site.

Because the newspaper has had no business staff of its own, the new operation plans to hire more than 20 people in areas like ad sales.

Now that the Seattle Post-Intelligencer has switched to the Web … Read More »

Social software: 5 properties & 3 dynamics

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

A great deal of sociality is about engaging with publics, but we take for granted certain structural aspects of those publics. Certain properties are core to social media in a combination that alters how people engage with one another. I want to discuss five properties of social media and three dynamics. These are the crux of what makes the phenomena we’re seeing so different from unmediated phenomena.

1. Persistence. What you say sticks around. This is great for asynchronicity, not so great when everything you’ve ever said has gone down on your permanent record. …

2. Replicability. You can copy and paste a conversation from one medium to another, adding to the persistent nature of it. This is great for being able to share information, but it is also at the crux of rumor-spreading. Worse: while you can replicate a conversation, it’s much easier to alter what’s been said than to confirm that it’s an accurate portrayal of the original conversation.

3. Searchability. My mother would’ve loved to scream search into the air and figure out where I’d run off with friends. She couldn’t; I’m quite thankful. But with social media, it’s quite easy to track someone down or to find someone as a result of searching for content. Search changes the landscape, making information available at our fingertips. This is great in some circumstances, but when trying to avoid those who hold power over you, it may be less than ideal.

4. Scalability. Social media scales things in new ways. Conversations that were intended for just a friend or two might spiral out of control and scale to the entire school or, if it is especially embarrassing, the whole world. …

5. (de)locatability. With the mobile, you are dislocated from any particular point in space, but at the same time, location-based technologies make location much more relevant. This paradox means that we are simultaneously more and less connected to physical space.

Those five properties are intertwined, but their implications have to do with the ways in which they alter social dynamics. Let’s look at three different dynamics that have been reconfigured as a result of social media.

1. Invisible Audiences. We are used to being able to assess the people around us when we’re speaking. We adjust what we’re saying to account for the audience. Social media introduces all sorts of invisible audiences. There are lurkers who are present at the moment but whom we cannot see, but there are also visitors who access our content at a later date or in a different environment than where we first produced them. As a result, we are having to present ourselves and communicate without fully understanding the potential or actual audience. The potential invisible audiences can be stifling. Of course, there’s plenty of room to put your head in the sand and pretend like those people don’t really exist.

2. Collapsed Contexts. Connected to this is the collapsing of contexts. In choosing what to say when, we account for both the audience and the context more generally. Some behaviors are appropriate in one context but not another, in front of one audience but not others. Social media brings all of these contexts crashing into one another and it’s often difficult to figure out what’s appropriate, let alone what can be understood.

3. Blurring of Public and Private. Finally, there’s the blurring of public and private. These distinctions are normally structured around audience and context with certain places or conversations being “public” or “private.” These distinctions are much harder to manage when you have to contend with the shifts in how the environment is organized.

All of this means that we’re forced to contend with a society in which things are being truly reconfigured. So what does this mean? As we are already starting to see, this creates all new questions about context and privacy, about our relationship to space and to the people around us.

Social software: 5 properties & 3 dynamics Read More »