tech in changing society

The Uncanny Valley, art forgery, & love

From Errol Morris’ “Bamboozling Ourselves (Part 2)” (The New York Times: 28 May 2009):

[Errol Morris:] The Uncanny Valley is a concept developed by the Japanese robot scientist Masahiro Mori. It concerns the design of humanoid robots. Mori’s theory is relatively simple. We tend to reject robots that look too much like people. Slight discrepancies and incongruities between what we look like and what they look like disturb us. The closer a robot resembles a human, the more critical we become, the more sensitive to slight discrepancies, variations, imperfections. However, if we go far enough away from the humanoid, then we much more readily accept the robot as being like us. This accounts for the success of so many movie robots — from R2-D2 to WALL-E. They act like humans but they don’t look like humans. There is a region of acceptability — the peaks around The Uncanny Valley, the zone of acceptability that includes completely human and sort of human but not too human. The existence of The Uncanny Valley also suggests that we are programmed by natural selection to scrutinize the behavior and appearance of others. Survival no doubt depends on such an innate ability.

EDWARD DOLNICK: [The art forger Van Meegeren] wants to avoid it. So his big challenge is he wants to paint a picture that other people are going to take as Vermeer, because Vermeer is a brand name, because Vermeer is going to bring him lots of money, if he can get away with it, but he can’t paint a Vermeer. He doesn’t have that skill. So how is he going to paint a picture that doesn’t look like a Vermeer, but that people are going to say, “Oh! It’s a Vermeer?” How’s he going to pull it off? It’s a tough challenge. Now here’s the point of The Uncanny Valley: as your imitation gets closer and closer to the real thing, people think, “Good, good, good!” — but then when it’s very close, when it’s within 1 percent or something, instead of focusing on the 99 percent that is done well, they focus on the 1 percent that you’re missing, and you’re in trouble. Big trouble.

Van Meegeren is trapped in the valley. If he tries for the close copy, an almost exact copy, he’s going to fall short. He’s going to look silly. So what he does instead is rely on the blanks in Vermeer’s career, because hardly anything is known about him; he’s like Shakespeare in that regard. He’ll take advantage of those blanks by inventing a whole new era in Vermeer’s career. No one knows what he was up to all this time. He’ll throw in some Vermeer touches, including a signature, so that people who look at it will be led to think, “Yes, this is a Vermeer.”

Van Meegeren was sometimes careful, other times astonishingly reckless. He could have passed certain tests. What was peculiar, and what was quite startling to me, is that it turned out that nobody ever did any scientific test on Van Meegeren, even the stuff that was available in his day, until after he confessed. And to this day, people hardly ever test pictures, even multi-million dollar ones. And I was so surprised by that that I kept asking, over and over again: why? Why would that be? Before you buy a house, you have someone go through it for termites and the rest. How could it be that when you’re going to lay out $10 million for a painting, you don’t test it beforehand?

And the answer is that you don’t test it because, at the point of being about to buy it, you’re in love! You’ve found something. It’s going to be the high mark of your collection; it’s going to be the making of you as a collector. You finally found this great thing. It’s available, and you want it. You want it to be real. You don’t want to have someone let you down by telling you that the painting isn’t what you think it is. It’s like being newly in love. Everything is candlelight and wine. Nobody hires a private detective at that point. It’s only years down the road when things have gone wrong that you say, “What was I thinking? What’s going on here?”

The collector and the forger are in cahoots. The forger wants the collector to snap it up, and the collector wants it to be real. You are on the same side. You think that it would be a game of chess or something, you against him. “Has he got the paint right?” “Has he got the canvas?” You’re going to make this checkmark and that checkmark to see if the painting measures up. But instead, both sides are rooting for this thing to be real. If it is real, then you’ve got a masterpiece. If it’s not real, then today is just like yesterday. You’re back where you started, still on the prowl.

Taxi driver party lines

From Annie Karni’s “Gabbing Taxi Drivers Talking on ‘Party Lines’” (The New York Sun: 11 January 2007):

It’s not just wives at home or relatives overseas that keep taxi drivers tied up on their cellular phones during work shifts. Many cabbies say that when they are chatting on duty, it’s often with their cab driver colleagues on group party lines. Taxi drivers say they use conference calls to discuss directions and find out about congested routes to avoid. They come to depend on one another as first responders, reacting faster even than police to calls from drivers in distress. Some drivers say they participate in group prayers on a party line.

It is during this morning routine, waiting for the first shuttle flights to arrive from Washington and Boston, where many friendships between cabbies are forged and cell phone numbers are exchanged, Mr. Sverdlov said. Once drivers have each other’s numbers, they can use push-to-talk technology to call large groups all at once.

Mr. Sverdlov said he conferences with up to 10 cabbies at a time to discuss “traffic, what’s going on, this and that, and where do cops stay.” He estimated that every month, he logs about 20,000 talking minutes on his cell phone.

While civilian drivers are allowed to use hands-free devices to talk on cell phones while behind the wheel, the Taxi & Limousine Commission imposed a total cell phone ban for taxi drivers on duty in 1999. In 2006, the Taxi & Limousine Commission issued 1,049 summonses for phone use while on duty, up by almost 69% from the 621 summonses it issued the previous year. Drivers caught chatting while driving are fined $200 and receive two-point penalties on their licenses.

Drivers originally from countries like Israel, China, and America, who are few and far between, say they rarely chat on the phone with other cab drivers because of the language barrier. For many South Asians and Russian drivers, however, conference calls that are prohibited by the Taxi & Limousine Commission are mainstays of cabby life.

Al Qaeda’s use of social networking sites

From Brian Prince’s “How Terrorism Touches the ‘Cloud’ at RSA” (eWeek: 23 April 2009):

When it comes to the war on terrorism, not all battles, intelligence gathering and recruitment happen in the street. Some of it occurs in the more elusive world of the Internet, where supporters of terrorist networks build social networking sites to recruit and spread their message.

Enter Jeff Bardin of Treadstone 71, a former code breaker, Arabic translator and U.S. military officer who has been keeping track of vBulletin-powered sites run by supporters of al Qaeda. There are between 15 and 20 main sites, he said, which are used by terrorist groups for everything from recruitment to the distribution of violent videos of beheadings.

… “One social networking site has over 200,000 participants. …

The videos on the sites are produced online by a company called “As-Sahab Media” (As-Sahab means “the cloud” in English). Once shot, the videos make their way from hideouts to the rest of the world via a system of couriers. Some of them contain images of violence; others exhortations from terrorist leaders. Also on the sites are tools such as versions of “Mujahideen Secrets,” which is used for encryption.

“It’s a pretty solid tool; it’s not so much that the tool is so much different from the new PGP-type [tool], but the fact is they built it from scratch, which shows a very mature software development lifecycle,” he said.

A story of failed biometrics at a gym

From Jake Vinson’s “Cracking your Fingers” (The Daily WTF: 28 April 2009):

A few days later, Ross stood proudly in the reception area, hands on his hips. A high-tech fingerprint scanner sat near the turnstile and register, as the same scanner would be used for each, though the register system wasn’t quite ready for rollout yet. Another scanner sat on the opposite side of the turnstile, for gym members to sign out. … The receptionist looked almost as pleased as Ross that morning, excited that this meant they were working toward a system that necessitated fewer manual member ID lookups.

After signing a few people up, the new system was going swimmingly. Some users declined to use the new system, instead walking to the far side of the counter to use the old touchscreen system. Then Johnny tried to leave after his workout.

… He scanned his finger on his way out, but the turnstile wouldn’t budge.

“Uh, just a second,” the receptionist furiously typed and clicked, while Johnny removed one of his earbuds and stared. “I’ll just have to manually override it…” but it was useless. There was no manual override option. Somehow, it was never considered that the scanner would malfunction. After several seconds of searching and having Johnny try to scan his finger again, the receptionist instructed him just to jump over the turnstile.

It was later discovered that the system required a “sign in” and a “sign out,” and if a member was recognized as someone else when attempting to sign out, the system rejected the input, and the turnstile remained locked in position. This was not good.

The scene repeated itself several times that day. Worse, the fingerprint scanner at the exit was getting kind of disgusting. Dozens of sweaty fingerprints required the scanner to be cleaned hourly, and even after it was freshly cleaned, it sometimes still couldn’t read fingerprints right. The latticed patterns on the barbell grips would leave indented patterns temporarily on the members’ fingers, there could be small cuts or folds on fingertips just from carrying weights or scrapes on the concrete coming out of the pool, fingers were wrinkly after a long swim, or sometimes the system just misidentified the person for no apparent reason.

Fingerprint Scanning

In much the same way that it’s not a good idea to store passwords in plaintext, it’s not a good idea to store raw fingerprint data. Instead, it should be hashed, so that the same input will consistently give the same output, but said output can’t be used to determine what the input was. In biometry, there are many complex algorithms that can analyze a fingerprint via several points on the finger. This system was set up to record seven points.
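
Taking the article’s password analogy literally, here is a minimal sketch of the quantize-then-hash idea, with hypothetical function names and toy data. Real matchers store fuzzy “templates” rather than exact hashes, since two scans of the same finger are never bit-identical, but the principle the article describes looks roughly like this:

```python
import hashlib

def quantize(minutiae, grid=10):
    """Snap raw (x, y) minutiae coordinates to a coarse grid so that
    small scan-to-scan jitter still yields the same canonical value.
    (Hypothetical simplification of real feature extraction.)"""
    return tuple(sorted((round(x / grid), round(y / grid)) for x, y in minutiae))

def template_hash(minutiae):
    """Store only this digest, never the raw fingerprint data: the same
    input always gives the same output, but the output can't be used
    to reconstruct the input."""
    canonical = repr(quantize(minutiae)).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Two slightly different scans of the same seven points hash identically:
scan_a = [(103, 201), (57, 78), (140, 310), (22, 95), (88, 162), (250, 41), (170, 220)]
scan_b = [(104, 199), (58, 79), (141, 309), (21, 96), (87, 161), (251, 42), (169, 221)]
print(template_hash(scan_a) == template_hash(scan_b))  # True
```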

After a few hours of rollout, though, it became clear that the real world doesn’t conform to how it should’ve worked in theory. There were simply too many variables, too many activities in the gym that could cause fingerprints to become altered. As such, the installers did what they thought was the reasonable thing to do – reduce the precision from seven points down to something substantially lower.
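
Why that backfired is easy to see with a toy model (all numbers hypothetical): each match point can take only a limited number of coarse values, so cutting the number of points collapses the space of distinguishable templates, and with a few hundred members, collisions become routine.

```python
import random
from collections import Counter

def toy_template(points, values_per_point=50):
    # One coarse reading per match point (hypothetical scale).
    return tuple(random.randrange(values_per_point) for _ in range(points))

def collision_rate(members=500, points=7, trials=50):
    """Average fraction of members whose template is identical to at
    least one other member's template."""
    rates = []
    for _ in range(trials):
        counts = Counter(toy_template(points) for _ in range(members))
        collided = sum(n for n in counts.values() if n > 1)
        rates.append(collided / members)
    return sum(rates) / len(rates)

for p in (7, 4, 3, 2):
    print(f"{p} points: ~{collision_rate(points=p):.1%} of members share a template")
```

Under these toy numbers, dropping from seven points to two takes the collision rate from effectively zero to roughly one member in five, which mirrors the “two or three people being identified by the scanner as them” failure described below.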

The updated system was in place for a few days, and it seemed to be working better; no more people being held up trying to leave.

Discovery

… [The monitor] showed Ray as coming in several times that week, often twice on the same day, just hours apart. For each day listed, Ray had only come the later of the two times.

Reducing the precision of the fingerprint scanning resulted in the system identifying two people as one person. Reviewing the log, they saw that some regulars weren’t showing up in the system, and many members had two or three people being identified by the scanner as them.

Interviewed for an article about mis-uses of Twitter

The Saint Louis Beacon published an article on 27 April 2009 titled “Tweets from the jury box aren’t amusing”, about legal “cases across the country where jurors have used cell phones, BlackBerrys and other devices to comment – sometimes minute by minute or second by second on Twitter, for instance – on what they are doing, hearing and seeing during jury duty.” In it, I was quoted as follows:

The small mobile devices so prevalent today can encourage a talkative habit that some people may find hard to break.

“People get so used to doing it all the time,” said Scott Granneman, an adjunct professor of communications and journalism at Washington University. “They don’t stop to think that being on a jury means they should stop. It’s an etiquette thing.”

Moreover, he added, the habit can become so ingrained in some people – even those on juries who are warned repeatedly they should not Tweet or text or talk about the case they are on – that they make excuses.

“It’s habitual,” Granneman said. “They say ‘It’s just my friends and family reading this.’ They don’t know the whole world is following this.

“Anybody can go to any Twitter page. There may be only eight people following you, but anybody can go to anyone’s page and anybody can reTweet – forward someone’s post: ‘Oh my God, the defense attorney is so stupid.’ That can go on and on and on.”

Steve Jobs on mediocrity & market share

From Steven Levy’s “OK, Mac, Make a Wish: Apple’s ‘computer for the rest of us’ is, insanely, 20” (Newsweek: 2 February 2004):

If that’s so, then why is the Mac market share, even after Apple’s recent revival, sputtering at a measly 5 percent? Jobs has a theory about that, too. Once a company devises a great product, he says, it has a monopoly in that realm, and concentrates less on innovation than protecting its turf. “The Mac user interface was a 10-year monopoly,” says Jobs. “Who ended up running the company? Sales guys. At the critical juncture in the late ’80s, when they should have gone for market share, they went for profits. They made obscene profits for several years. And their products became mediocre. And then their monopoly ended with Windows 95. They behaved like a monopoly, and it came back to bite them, which always happens.”

Newspapers are doomed

From Jeff Sigmund’s “Newspaper Web Site Audience Increases More Than Ten Percent In First Quarter To 73.3 Million Visitors” (Newspaper Association of America: 23 April 2009):

Newspaper Web sites attracted more than 73.3 million monthly unique visitors on average (43.6 percent of all Internet users) in the first quarter of 2009, a record number that reflects a 10.5 percent increase over the same period a year ago, according to a custom analysis provided by Nielsen Online for the Newspaper Association of America.

In addition, newspaper Web site visitors generated an average of more than 3.5 billion page views per month throughout the quarter, an increase of 12.8 percent over the same period a year ago (3.1 billion page views).

Contrast that with the article on Craigslist in Wikipedia (1 May 2009):

The site serves over twenty billion page views per month, putting it in 28th place overall among web sites worldwide, ninth place overall among web sites in the United States (per Alexa.com on March 27, 2009), to over fifty million unique monthly visitors in the United States alone (per Compete.com on April 7, 2009). As of March 17, 2009 it was ranked 7th on Alexa. With over forty million new classified advertisements each month, Craigslist is the leading classifieds service in any medium. The site receives over one million new job listings each month, making it one of the top job boards in the world.

Even at its best, the entire newspaper industry’s Web presence draws only about a sixth of the page views that Craigslist alone serves each month (3.5 billion versus more than 20 billion).

Criminal goods & services sold on the black market

From Ellen Messmer’s “Symantec takes cybercrime snapshot with ‘Underground Economy’ report” (Network World: 24 November 2008):

The “Underground Economy” report [from Symantec] contains a snapshot of online criminal activity observed from July 2007 to June 2008 by a Symantec team monitoring activities in Internet Relay Chat (IRC) and Web-based forums where stolen goods are advertised. Symantec estimates the total value of the goods advertised on what it calls “underground servers” was about $276 million, with credit-card information accounting for 59% of the total.

If that purloined information were successfully exploited, it probably would bring the buyers about $5 billion, according to the report — just a drop in the bucket, points out David Cowings, senior manager of operations at Symantec Security Response.

“Ninety-eight percent of the underground-economy servers have life spans of less than 6 months,” Cowings says. “The smallest IRC server we saw had five channels and 40 users. The largest IRC server network had 28,000 channels and 90,000 users.”

In the one year covered by the report, Symantec’s team observed more than 69,000 distinct advertisers and 44 million total messages online selling illicit credit-card and financial data, but the 10 most active advertisers appeared to account for 11% of the total messages posted and $575,000 in sales.

According to the report, a bank-account credential was selling for $10 to $1,000, depending on the balance and location of the account. Sellers also hawked specific financial sites’ vulnerabilities for an average price of $740, though prices did go as high as $2,999.

In other spots, the average price for a keystroke logger — malware used to capture a victim’s information — was an affordable $23. Attack tools, such as botnets, sold for an average of $225. “For $10, you could host a phishing site on someone’s server or compromised Web site,” Cowings says.

Desktop computer games appeared to be the most-pirated software, accounting for 49% of all file instances that Symantec observed. The second-highest category was utility applications; third-highest was multimedia productivity applications, such as photograph or HTML editors.

Some facts about GPL 2 & GPL 3

From Liz Laffan’s “GPLv2 vs GPLv3: The two seminal open source licenses, their roots, consequences and repercussions” (VisionMobile: September 2007):

From a licensing perspective, the vast majority (typically 60-70%) of all open source projects are licensed under the GNU General Public License version 2 (GPLv2).

GPLv3 was published in July 2007, some 16 years following the creation of GPLv2. The purpose of this new license is to address some of the areas identified for improvement and clarification in GPLv2 – such as patent indemnity, internationalisation and remedies for inadvertent license infringement (rather than the previous immediate termination effect). The new GPLv3 license is nearly double the length of the GPLv2 …

GPLv3 differs from GPLv2 in several important ways. Firstly it provides more clarity on patent licenses and attempts to clarify what is meant by both a distribution and derivative works. Secondly it revokes the immediate termination of license clause in favour of licensee opportunities to ‘fix’ any violations within a given time-period. In addition there are explicit ‘Additional Terms’ which permit users to choose from a fixed set of alternative terms which can modify the standard GPLv3 terms. These are all welcome, positive moves which should benefit all users of the GPLv3 license.

Nonetheless there are three contentious aspects of GPLv3 that have provoked much discussion in the FOSS community and could deter adoption of GPLv3 by more circumspect users and organisations.

Open source & patents

From Liz Laffan’s “GPLv2 vs GPLv3: The two seminal open source licenses, their roots, consequences and repercussions” (VisionMobile: September 2007):

Cumulatively patents have been doubling practically every year since 1990. Patents are now probably the most contentious issue in software-related intellectual property rights.

However we should also be aware that software written from scratch is as likely to infringe patents as FOSS covered software – due mainly to the increasing proliferation of patents in all software technologies. Consequently the risk of patent infringement is largely comparable whether one chooses to write one’s own software or use software covered by the GPLv2; one will most likely have to self-indemnify against a potential patent infringement claim in both cases.

The F.U.D. (fear, uncertainty and doubt) that surrounds patents in FOSS has been further heightened by two announcements, both instigated by Microsoft. Firstly in November 2006 Microsoft and Novell entered into a cross-licensing patent agreement where Microsoft gave Novell assurances that it would not sue the company or its customers if they were to be found infringing Microsoft patents in the Novell Linux distribution. Secondly in May 2007 Microsoft restated (having alluded to the same in 2004) that FOSS violates 235 Microsoft patents. Unfortunately, the Redmond giant did not state which patents in particular were being infringed, nor has it initiated any actions against a user or distributor of Linux.

The FOSS community have reacted to these actions by co-opting the patent system and setting up the Patent Commons (http://www.patentcommons.org). This initiative, managed by the Linux Foundation, coordinates and manages a patent commons reference library, documenting information about patent-related pledges in support of Linux and FOSS that are provided by large software companies. Moreover, software giants such as IBM and Nokia have committed not to assert patents against the Linux kernel and other FOSS projects. In addition, the FSF have strengthened the patent clause of GPLv3…

Google’s server farm revealed

From Nicholas Carr’s “Google lifts its skirts” (Rough Type: 2 April 2009):

I was particularly surprised to learn that Google rented all its data-center space until 2005, when it built its first center. That implies that The Dalles, Oregon, plant (shown in the photo above) was the company’s first official data smelter. Each of Google’s containers holds 1,160 servers, and the facility’s original server building had 45 containers, which means that it probably was running a total of around 52,000 servers. Since The Dalles plant has three server buildings, that means – and here I’m drawing a speculative conclusion – that it might be running around 150,000 servers altogether.

Here are some more details, from Rich Miller’s report:

The Google facility features a “container hanger” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the hot aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.

Google’s design focused on “power above, water below,” according to [Jimmy] Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the hot aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment …

[Urs] Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.
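
A quick sanity check on those numbers (my arithmetic, assuming a standard 40-foot shipping container with a footprint of roughly 8 × 40 = 320 square feet):

```python
servers_per_container = 1160
containers_per_building = 45
buildings = 3

print(servers_per_container * containers_per_building)              # 52,200 servers per building
print(servers_per_container * containers_per_building * buildings)  # 156,600 across three buildings

container_watts = 250_000   # 250 kilowatts per container
container_sq_ft = 8 * 40    # footprint of a standard 40-foot container
print(container_watts / container_sq_ft)  # 781.25, matching "more than 780 watts per square foot"
```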

$9 million stolen from 130 ATM machines in 49 cities in 30 minutes

From Catey Hill’s “Massive ATM heist! $9M stolen in only 30 minutes” (New York Daily News: 12 February 2009):

With information stolen from only 100 ATM cards, thieves made off with $9 million in cash, according to published reports. It only took 30 minutes.

“We’ve seen similar attempts to defraud a bank through ATM machines, but not anywhere near the scale we have here,” FBI Agent Ross Rice told Fox 5. “We’ve never seen one this well coordinated.”

The heist happened in November, but FBI officials released more information about the events only recently. …

How did they do it? The thieves hacked into the RBS WorldPay computer system and stole payroll card information from the company. A payroll card is used by many companies to pay the salaries of their employees. The cards work a lot like a debit card and can be used in any ATM.

Once the thieves had the card info, they employed a group of ‘cashers’ – people hired to get the money out of the ATMs. The cashers went to ATMs around the world and withdrew money.

“Over 130 different ATM machines in 49 cities worldwide were accessed in a 30-minute period on November 8,” Agent Rice told Fox 5.

Now that the Seattle Post-Intelligencer has switched to the Web …

From William Yardley and Richard Pérez-Peña’s “Seattle Paper Shifts Entirely to the Web” (The New York Times: 16 March 2009):

The P-I, as it is called, will resemble a local Huffington Post more than a traditional newspaper, with a news staff of about 20 people rather than the 165 it had, and a site with mostly commentary, advice and links to other news sites, along with some original reporting.

The new P-I site has recruited some current and former government officials, including a former mayor, a former police chief and the current head of Seattle schools, to write columns, and it will repackage some material from Hearst’s large stable of magazines. It will keep some of the paper’s popular columnists and bloggers and the large number of unpaid local bloggers whose work appears on the site.

Because the newspaper has had no business staff of its own, the new operation plans to hire more than 20 people in areas like ad sales.

Social software: 5 properties & 3 dynamics

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

A great deal of sociality is about engaging with publics, but we take for granted certain structural aspects of those publics. Certain properties are core to social media in a combination that alters how people engage with one another. I want to discuss five properties of social media and three dynamics. These are the crux of what makes the phenomena we’re seeing so different from unmediated phenomena.

1. Persistence. What you say sticks around. This is great for asynchronicity, not so great when everything you’ve ever said has gone down on your permanent record. …

2. Replicability. You can copy and paste a conversation from one medium to another, adding to the persistent nature of it. This is great for being able to share information, but it is also at the crux of rumor-spreading. Worse: while you can replicate a conversation, it’s much easier to alter what’s been said than to confirm that it’s an accurate portrayal of the original conversation.

3. Searchability. My mother would’ve loved to scream search into the air and figure out where I’d run off with friends. She couldn’t; I’m quite thankful. But with social media, it’s quite easy to track someone down or to find someone as a result of searching for content. Search changes the landscape, making information available at our fingertips. This is great in some circumstances, but when trying to avoid those who hold power over you, it may be less than ideal.

4. Scalability. Social media scales things in new ways. Conversations that were intended for just a friend or two might spiral out of control and scale to the entire school or, if it is especially embarrassing, the whole world. …

5. (de)locatability. With the mobile, you are dislocated from any particular point in space, but at the same time, location-based technologies make location much more relevant. This paradox means that we are simultaneously more and less connected to physical space.

Those five properties are intertwined, but their implications have to do with the ways in which they alter social dynamics. Let’s look at three different dynamics that have been reconfigured as a result of social media.

1. Invisible Audiences. We are used to being able to assess the people around us when we’re speaking. We adjust what we’re saying to account for the audience. Social media introduces all sorts of invisible audiences. There are lurkers who are present at the moment but whom we cannot see, but there are also visitors who access our content at a later date or in a different environment than where we first produced them. As a result, we are having to present ourselves and communicate without fully understanding the potential or actual audience. The potential invisible audiences can be stifling. Of course, there’s plenty of room to put your head in the sand and pretend like those people don’t really exist.

2. Collapsed Contexts. Connected to this is the collapsing of contexts. In choosing what to say when, we account for both the audience and the context more generally. Some behaviors are appropriate in one context but not another, in front of one audience but not others. Social media brings all of these contexts crashing into one another and it’s often difficult to figure out what’s appropriate, let alone what can be understood.

3. Blurring of Public and Private. Finally, there’s the blurring of public and private. These distinctions are normally structured around audience and context with certain places or conversations being “public” or “private.” These distinctions are much harder to manage when you have to contend with the shifts in how the environment is organized.

All of this means that we’re forced to contend with a society in which things are being truly reconfigured. So what does this mean? As we are already starting to see, this creates all new questions about context and privacy, about our relationship to space and to the people around us.

Kids & adults use social networking sites differently

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

For American teenagers, social network sites became a social hangout space, not unlike the malls in which I grew up or the dance halls of yesteryear. This was a place to gather with friends from school and church when in-person encounters were not viable. Unlike many adults, teenagers were never really networking. They were socializing in pre-existing groups.

Social network sites became critically important to them because this was where they sat and gossiped, jockeyed for status, and functioned as digital flaneurs. They used these tools to see and be seen. …

Teen conversations may appear completely irrational, or pointless at best. “Yo, wazzup?” “Not much, how you?” may not seem like much to an outsider, but this is a form of social grooming. It’s a way of checking in, confirming friendships, and negotiating social waters.

Adults have approached Facebook in very different ways. Adults are not hanging out on Facebook. They are more likely to respond to status messages than start a conversation on someone’s wall (unless it’s their birthday of course). Adults aren’t really decorating their profiles or making sure that their About Me’s are up-to-date. Adults, far more than teens, are using Facebook for its intended purpose as a social utility. For example, it is a tool for communicating with the past.

Adults may giggle about having run-ins with mates from high school, but underneath it all, many of them are curious. This isn’t that different than the school reunion. … Teenagers craft quizzes for themselves and their friends. Adults are crafting them to show off to people from the past and connect the dots between different audiences as a way of coping with the awkwardness of collapsed contexts.

The importance of network effects to social software

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Many who build technology think that a technology’s feature set is the key to its adoption and popularity. With social media, this is often not the case. There are triggers that drive early adopters to a site, but the single most important factor in determining whether or not a person will adopt one of these sites is whether or not it is the place where their friends hang out. In each of these cases, network effects played a significant role in the spread and adoption of the site.

The uptake of social media is quite different than the uptake of non-social technologies. For the most part, you don’t need your friends to use Word to find the tool useful. You do need your friends to use email for it to be useful, but, thanks to properties of that medium, you don’t need them to be using Outlook or Hotmail to write to them. Many of the new genres of social media are walled gardens, requiring your friends to use that exact site to be valuable. This has its advantages for the companies who build it – that’s the whole attitude behind lock-in. But it also has its costs. Consider for example the fact that working class and upper class kids can’t talk to one another if they are on different SNSs.

Friendster didn’t understand network effects. In kicking off users who weren’t conforming to their standards, they pissed off more than those users; they pissed off those users’ friends who were left with little purpose to use the site. The popularity of Friendster unraveled as fast as it picked up, but the company never realized what hit them. All of their metrics were based on number of users. While only a few users deleted their accounts, the impact of those lost accounts was huge. The friends of those who departed slowly stopped using the site. At first, they went from logging in every hour to logging in every day, never affecting the metrics. But as nothing new came in and as the collective interest waned, their attention went elsewhere. Today, Friendster is succeeding because of its popularity in other countries, but in the US, it’s a graveyard of hipsters stuck in 2003.

MySpace/Facebook history & sociology

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Facebook had launched as a Harvard-only site before expanding to other elite institutions, before expanding to other 4-year colleges, before expanding to 2-year colleges. It captured the mindshare of college students everywhere. It wasn’t until 2005 that they opened the doors to some companies and high schools. And only in 2006 did they open to all.

Facebook was narrated as the “safe” alternative and, in the 2006-2007 school year, a split amongst American teens occurred. Those college-bound kids from wealthier or upwardly mobile backgrounds flocked to Facebook while teens from urban or less economically privileged backgrounds rejected the transition and opted to stay with MySpace while simultaneously rejecting the fears brought on by American media. Many kids were caught in the middle and opted to use both, but the division that occurred resembles the same “jocks and burnouts” narrative that shaped American schools in the 1980s.

Defining social media, social software, & Web 2.0

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Social media is the latest buzzword in a long line of buzzwords. It is often used to describe the collection of software that enables individuals and communities to gather, communicate, share, and in some cases collaborate or play. In tech circles, social media has replaced the earlier fave “social software.” Academics still tend to prefer terms like “computer-mediated communication” or “computer-supported cooperative work” to describe the practices that emerge from these tools and the old skool academics might even categorize these tools as “groupwork” tools. Social media is driven by another buzzword: “user-generated content” or content that is contributed by participants rather than editors.

… These tools are part of a broader notion of “Web2.0.” Yet-another-buzzword, Web2.0 means different things to different people.

For the technology crowd, Web2.0 was about a shift in development and deployment. Rather than producing a product, testing it, and shipping it to be consumed by an audience that was disconnected from the developer, Web2.0 was about the perpetual beta. This concept makes all of us giggle, but what this means is that, for technologists, Web2.0 was about constantly iterating the technology as people interacted with it and learning from what they were doing. To make this happen, we saw the rise of technologies that supported real-time interactions, user-generated content, remixing and mashups, APIs and open-source software that allowed mass collaboration in the development cycle. …

For the business crowd, Web2.0 can be understood as hope. Web2.0 emerged out of the ashes of the fallen tech bubble and bust. Scars ran deep throughout Silicon Valley and venture capitalists and entrepreneurs wanted to party like it was 1999. Web2.0 brought energy to this forlorn crowd. At first they were skeptical, but slowly they bought in. As a result, we’ve seen a resurgence of startups, venture capitalists, and conferences. At this point, Web2.0 is sometimes referred to as Bubble2.0, but there’s something to say about “hope” even when the VCs start co-opting that term because they want four more years.

For users, Web2.0 was all about reorganizing web-based practices around Friends. For many users, direct communication tools like email and IM were used to communicate with one’s closest and dearest while online communities were tools for connecting with strangers around shared interests. Web2.0 reworked all of that by allowing users to connect in new ways. While many of the tools may have been designed to help people find others, what Web2.0 showed was that people really wanted a way to connect with those that they already knew in new ways. Even tools like MySpace and Facebook which are typically labeled social networkING sites were never really about networking for most users. They were about socializing inside of pre-existing networks.
