Microsoft Exchange is expensive

From Joel Snyder’s “Exchange: Should I stay or should I go?” (Network World: 9 March 2009):

There are many ways to buy Exchange, depending on how many users you need, but the short answer is that none of them cost less than about $75 per user and can run up to $140 per user for the bundles that include Exchange and Windows Server and user licenses for both of those as well as Forefront, Microsoft’s antispam/antivirus service. …

If you really want to make a case for cost, you could also claim that Exchange requires a $90 Outlook license for each user, a Windows XP or Vista license for each user, and more expensive hardware than a similar open source platform might require.
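A quick tally of the quoted per-user figures puts a floor under the total. This is only a sketch combining the numbers given above; the Windows desktop license is left out because no price is quoted for it:

```python
# Per-user figures quoted in the article.
bundle_low, bundle_high = 75, 140   # Exchange bundle range, per user
outlook = 90                        # Outlook license, per user

# Total per-user software cost, excluding the desktop OS license
# (no price is quoted for Windows XP/Vista) and hardware.
total_low = bundle_low + outlook
total_high = bundle_high + outlook
print(f"Per-user total: ${total_low}-${total_high}, plus OS license and hardware")
```

So even before the desktop OS and the "more expensive hardware," the quoted figures imply $165–$230 per user.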

How to increase donations on non-profit websites

From Jakob Nielsen’s “Donation Usability: Increasing Online Giving to Non-Profits and Charities” (Alertbox: 30 March 2009):

We asked participants what information they want to see on non-profit websites before they decide whether to donate. Their answers fell into 4 broad categories, 2 of which were the most heavily requested:

  • The organization’s mission, goals, objectives, and work.
  • How it uses donations and contributions.

That is: What are you trying to achieve, and how will you spend my money?

Sadly, only 43% of the sites we studied answered the first question on their homepage. Further, only a ridiculously low 4% answered the second question on the homepage. Although organizations typically provided these answers somewhere within the site, users often had problems finding this crucial information.

In choosing between 2 charities, people referred to 5 categories of information. However, an organization’s mission, goals, objectives, and work was by far the most important. Indeed, it was 3.6 times as important as the runner-up issue, which was the organization’s presence in the user’s own community.

More on Google’s server farms

From Joel Hruska’s “The Beast unveiled: inside a Google server” (Ars Technica: 2 April 2009):

Each Google server is hooked to an independent 12V battery to keep the units running in the event of a power outage. Data centers themselves are built and housed in shipping containers (we’ve seen Sun pushing this trend as well), a practice that went into effect after the brownouts of 2005. Each container holds a total of 1,160 servers and can theoretically draw up to 250kW. Those numbers might seem a bit high for a data center optimized for energy efficiency—it breaks down to around 216W per system—but there are added cooling costs to be considered in any type of server deployment. These sorts of units were built for parking under trees (or at sea, per Google’s patent application).

By using individual batteries hooked to each server (instead of a UPS), the company is able to use the available energy much more efficiently (99.9 percent efficiency vs. 92-95 percent efficiency for a typical UPS), and the rack-mounted servers are 2U with 8 DIMM slots. Ironically, for a company talking about power efficiency, the server box in question is scarcely a power sipper. The GA-9IVDP is a custom-built motherboard—I couldn’t find any information about it on Gigabyte’s website—but online research and a scan of Gigabyte’s similarly named products imply that this is a Socket 604 dual-Xeon board running dual Nocona (Prescott) P4 processors.
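Both the per-system wattage and the efficiency gap can be checked from the figures quoted above. A quick sketch (92 percent is taken as the low end of the quoted UPS range):

```python
# Figures from the article: 1,160 servers per container, up to 250 kW draw.
servers_per_container = 1160
container_watts = 250_000
watts_per_server = container_watts / servers_per_container
print(f"{watts_per_server:.0f} W per server")   # matches the ~216 W in the text

# Conversion losses: per-server battery vs. a typical centralized UPS.
battery_loss = 1 - 0.999   # 99.9% efficient -> 0.1% lost
ups_loss = 1 - 0.92        # low end of the 92-95% range -> 8% lost
print(f"Roughly {ups_loss / battery_loss:.0f}x less energy lost per server")
```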

Google’s server farm revealed

From Nicholas Carr’s “Google lifts its skirts” (Rough Type: 2 April 2009):

I was particularly surprised to learn that Google rented all its data-center space until 2005, when it built its first center. That implies that The Dalles, Oregon, plant (shown in the photo above) was the company’s first official data smelter. Each of Google’s containers holds 1,160 servers, and the facility’s original server building had 45 containers, which means that it probably was running a total of around 52,000 servers. Since The Dalles plant has three server buildings, that means – and here I’m drawing a speculative conclusion – that it might be running around 150,000 servers altogether.

Here are some more details, from Rich Miller’s report:

The Google facility features a “container hanger” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the hot aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.
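Carr's server estimate and Miller's power-density figure both follow from the container spec. Assuming a standard 40-by-8-foot shipping-container footprint (my assumption; the quote states only the density), the arithmetic lines up:

```python
servers_per_container = 1160
containers_per_building = 45
per_building = servers_per_container * containers_per_building
print(per_building)        # 52,200 -- Carr's "around 52,000" per building
print(per_building * 3)    # 156,600 -- his speculative "around 150,000" total

# Power density over an assumed 40 ft x 8 ft container floor.
area_sqft = 40 * 8
density_w_per_sqft = 250_000 / area_sqft
print(density_w_per_sqft)  # 781.25 -- "more than 780 watts per square foot"
```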

Google’s design focused on “power above, water below,” according to [Jimmy] Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the hot aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment …

[Urs] Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.

Why David Foster Wallace used footnotes

From D. T. Max’s “Notes and Errata*: A DFW Companion Guide to ‘The Unfinished’” (The Rumpus: 31 March 2009):

He explained that endnotes “allow . . . me to make the primary-text an easier read while at once 1) allowing a discursive, authorial intrusive style w/o Finneganizing the story, 2) mimic the information-flood and data-triage I expect’d be an even bigger part of US life 15 years hence. 3) have a lot more technical/medical verisimilitude 4) allow/make the reader go literally physically ‘back and forth’ in a way that perhaps cutely mimics some of the story’s thematic concerns . . . 5) feel emotionally like I’m satisfying your request for compression of text without sacrificing enormous amounts of stuff.”

He was known for endlessly fracturing narratives and for stem-winding sentences adorned with footnotes that were themselves stem-winders. Such techniques originally had been his way of reclaiming language from banality, while at the same time representing all the caveats, micro-thoughts, meta-moments, and other flickers of his hyperactive mind.

Reasons Windows has a poor security architecture

From Daniel Eran Dilger’s “The Unavoidable Malware Myth: Why Apple Won’t Inherit Microsoft’s Malware Crown” (AppleInsider: 1 April 2008):

Thanks to its extensive use of battle-hardened Unix and open source software, Mac OS X also has always had security precautions in place that Windows lacked. It has also not shared the architectural weaknesses of Windows that have made that platform so easy to exploit and so difficult to clean up afterward, including:

  • the Windows Registry and the convoluted software installation mess related to it,
  • the Windows NT/2000/XP Interactive Services flaw opening up shatter attacks,
  • a wide open, legacy network architecture that left unnecessary, unsecured ports exposed by default,
  • poorly designed network sharing protocols that failed to account for adequate security measures,
  • poorly designed administrative messaging protocols that failed to account for adequate security,
  • poorly designed email clients that unwittingly gave untrusted scripts the ability to spam one’s own contacts,
  • an integrated web browser architecture that opened untrusted executables by design, and many others.

Vista & Mac OS X security features

From Prince McLean’s “Pwn2Own contest winner: Macs are safer than Windows” (AppleInsider: 26 March 2009):

Once it did arrive, Vista introduced sophisticated new measures to make it more difficult for malicious crackers to inject code.

One is support for the CPU’s NX bit, which allows a process to mark certain areas of memory as “Non-eXecutable” so the CPU will not run any code stored there. This is referred to as “executable space protection,” and helps to prevent malicious code from being surreptitiously loaded into a program’s data storage and subsequently executed to gain access to the same privileges as the program itself, an exploit known as a “buffer overflow attack.”

A second security practice of Vista is “address space layout randomization” or ASLR, which is used to load executables, and the system libraries, heap, and stack into a randomly assigned location within the address space, making it far more difficult for crackers to know where to find vulnerabilities they can attack, even if they know what the bugs are and how to exploit them.
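A toy model (mine, not from the article) of why randomization is so effective: an exploit that hard-codes its shellcode address succeeds only when that guess matches the randomized base, so each added bit of address entropy halves its chance.

```python
import random

def exploit_succeeds(guess: int, entropy_bits: int, rng: random.Random) -> bool:
    """The exploit works only if its hard-coded address matches the
    randomly chosen load address (a stand-in for ASLR)."""
    actual_base = rng.randrange(2 ** entropy_bits)
    return guess == actual_base

rng = random.Random(42)  # fixed seed for reproducibility
trials = 10_000

# No randomization: the base is always 0, so a fixed guess always works.
no_aslr = sum(exploit_succeeds(0, 0, rng) for _ in range(trials))

# 16 bits of entropy: expected success rate is 1 in 65,536 attempts.
with_aslr = sum(exploit_succeeds(0, 16, rng) for _ in range(trials))

print(f"no ASLR: {no_aslr}/{trials}, 16-bit ASLR: {with_aslr}/{trials}")
```

This is exactly the asymmetry Miller describes below: with a known layout his shellcode address was certain, while under effective ASLR a fixed target almost never lines up.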

[Charlie Miller, the security expert who won both this and last year’s CanSecWest Pwn2Own security contests,] told Tom’s Hardware “the NX bit is very powerful. When used properly, it ensures that user-supplied code cannot be executed in the process during exploitation. Researchers (and hackers) have struggled with ways around this protection. ASLR is also very tough to defeat. This is the way the process randomizes the location of code in a process. Between these two hurdles, no one knows how to execute arbitrary code in Firefox or IE 8 in Vista right now. For the record, Leopard has neither of these features, at least implemented effectively. In the exploit I won Pwn2Own with, I knew right where my shellcode was located and I knew it would execute on the heap for me.”

While Apple did implement some support for NX and ASLR in Mac OS X, Leopard retains dyld (the dynamic loader responsible for loading all of the frameworks, dylibs, and bundles needed by a process) in the same known location, making it relatively trivial to bypass its ASLR. This is slated to change later this year in Snow Leopard.

With the much larger address space available to 64-bit binaries, Snow Leopard’s ASLR will make it possible to hide the location of loaded code like a needle in a haystack, thwarting the efforts of malicious attackers to maintain predictable targets for controlling the code and data loaded into memory. Without knowing what addresses to target, the “vast majority of these exploits will fail,” another security expert who has also won a high profile Mac cracking contest explained to AppleInsider.

$9 million stolen from 130 ATM machines in 49 cities in 30 minutes

From Catey Hill’s “Massive ATM heist! $9M stolen in only 30 minutes” (New York Daily News: 12 February 2009):

With information stolen from only 100 ATM cards, thieves made off with $9 million in cash, according to published reports. It only took 30 minutes.

“We’ve seen similar attempts to defraud a bank through ATM machines, but not anywhere near the scale we have here,” FBI Agent Ross Rice told Fox 5. “We’ve never seen one this well coordinated.”

The heist happened in November, but FBI officials released more information about the events only recently. …

How did they do it? The thieves hacked into the RBS WorldPay computer system and stole payroll card information from the company. A payroll card is used by many companies to pay the salaries of their employees. The cards work a lot like a debit card and can be used in any ATM.

Once the thieves had the card info, they employed a group of ‘cashers’ – people employed to go get the money out of the ATMs. The cashers went to ATMs around the world and withdrew money.

“Over 130 different ATM machines in 49 cities worldwide were accessed in a 30-minute period on November 8,” Agent Rice told Fox 5.

Now that the Seattle Post-Intelligencer has switched to the Web …

From William Yardley and Richard Pérez-Peña’s “Seattle Paper Shifts Entirely to the Web” (The New York Times: 16 March 2009):

The P-I, as it is called, will resemble a local Huffington Post more than a traditional newspaper, with a news staff of about 20 people rather than the 165 it had, and a site with mostly commentary, advice and links to other news sites, along with some original reporting.

The new P-I site has recruited some current and former government officials, including a former mayor, a former police chief and the current head of Seattle schools, to write columns, and it will repackage some material from Hearst’s large stable of magazines. It will keep some of the paper’s popular columnists and bloggers and the large number of unpaid local bloggers whose work appears on the site.

Because the newspaper has had no business staff of its own, the new operation plans to hire more than 20 people in areas like ad sales.

Why we can easily remember jingles but not jokes

From Natalie Angier’s “In One Ear and Out the Other” (The New York Times: 16 March 2009):

In understanding human memory and its tics, Scott A. Small, a neurologist and memory researcher at Columbia, suggests the familiar analogy with computer memory.

We have our version of a buffer, he said, a short-term working memory of limited scope and fast turnover rate. We have our equivalent of a save button: the hippocampus, deep in the forebrain, is essential for translating short-term memories into a more permanent form.

Our frontal lobes perform the find function, retrieving saved files to embellish as needed. And though scientists used to believe that short- and long-term memories were stored in different parts of the brain, they have discovered that what really distinguishes the lasting from the transient is how strongly the memory is engraved in the brain, and the thickness and complexity of the connections linking large populations of brain cells. The deeper the memory, the more readily and robustly an ensemble of like-minded neurons will fire.
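Small's analogy (buffer, save button, find function) can be played out as a whimsical toy model; the class and its parameters below are invented purely for illustration:

```python
from collections import deque

class ToyMemory:
    """Whimsical sketch of Small's computing analogy for human memory."""
    def __init__(self, buffer_size: int = 4):
        self.working = deque(maxlen=buffer_size)  # limited scope, fast turnover
        self.long_term = {}                       # durable store

    def perceive(self, item: str) -> None:
        self.working.append(item)  # old items fall out as new ones arrive

    def save(self, item: str, strength: int = 1) -> None:
        # "Hippocampus": consolidates only what is still in the buffer;
        # higher strength stands in for a more deeply engraved memory.
        if item in self.working:
            self.long_term[item] = self.long_term.get(item, 0) + strength

    def find(self, item: str) -> bool:
        # "Frontal lobes": retrieval from the durable store.
        return item in self.long_term

mem = ToyMemory()
for thing in ["keys", "jingle", "joke", "email", "meeting"]:
    mem.perceive(thing)          # "keys" has already been pushed out
mem.save("jingle", strength=3)   # repetition engraves the memory deeply
mem.save("keys")                 # too late: no longer in working memory
print(mem.find("jingle"), mem.find("keys"))
```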

This process, of memory formation by neuronal entrainment, helps explain why some of life’s offerings weasel in easily and then refuse to be spiked. Music, for example. “The brain has a strong propensity to organize information and perception in patterns, and music plays into that inclination,” said Michael Thaut, a professor of music and neuroscience at Colorado State University. “From an acoustical perspective, music is an overstructured language, which the brain invented and which the brain loves to hear.”

A simple melody with a simple rhythm and repetition can be a tremendous mnemonic device. “It would be a virtually impossible task for young children to memorize a sequence of 26 separate letters if you just gave it to them as a string of information,” Dr. Thaut said. But when the alphabet is set to the tune of the ABC song with its four melodic phrases, preschoolers can learn it with ease.

And what are the most insidious jingles or sitcom themes but cunning variations on twinkle twinkle ABC?

Really great jokes, on the other hand, punch the lights out of do re mi. They work not by conforming to pattern recognition routines but by subverting them. “Jokes work because they deal with the unexpected, starting in one direction and then veering off into another,” said Robert Provine, a professor of psychology at the University of Maryland, Baltimore County, and the author of “Laughter: A Scientific Investigation.” “What makes a joke successful are the same properties that can make it difficult to remember.”

This may also explain why the jokes we tend to remember are often the most clichéd ones. A mother-in-law joke? Yes…

Social software: 5 properties & 3 dynamics

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

A great deal of sociality is about engaging with publics, but we take for granted certain structural aspects of those publics. Certain properties are core to social media in a combination that alters how people engage with one another. I want to discuss five properties of social media and three dynamics. These are the crux of what makes the phenomena we’re seeing so different from unmediated phenomena.

1. Persistence. What you say sticks around. This is great for asynchronicity, not so great when everything you’ve ever said has gone down on your permanent record. …

2. Replicability. You can copy and paste a conversation from one medium to another, adding to the persistent nature of it. This is great for being able to share information, but it is also at the crux of rumor-spreading. Worse: while you can replicate a conversation, it’s much easier to alter what’s been said than to confirm that it’s an accurate portrayal of the original conversation.

3. Searchability. My mother would’ve loved to scream search into the air and figure out where I’d run off with friends. She couldn’t; I’m quite thankful. But with social media, it’s quite easy to track someone down or to find someone as a result of searching for content. Search changes the landscape, making information available at our fingertips. This is great in some circumstances, but when trying to avoid those who hold power over you, it may be less than ideal.

4. Scalability. Social media scales things in new ways. Conversations that were intended for just a friend or two might spiral out of control and scale to the entire school or, if it is especially embarrassing, the whole world. …

5. (de)locatability. With the mobile, you are dislocated from any particular point in space, but at the same time, location-based technologies make location much more relevant. This paradox means that we are simultaneously more and less connected to physical space.

Those five properties are intertwined, but their implications have to do with the ways in which they alter social dynamics. Let’s look at three different dynamics that have been reconfigured as a result of social media.

1. Invisible Audiences. We are used to being able to assess the people around us when we’re speaking. We adjust what we’re saying to account for the audience. Social media introduces all sorts of invisible audiences. There are lurkers who are present at the moment but whom we cannot see, but there are also visitors who access our content at a later date or in a different environment than where we first produced them. As a result, we are having to present ourselves and communicate without fully understanding the potential or actual audience. The potential invisible audiences can be stifling. Of course, there’s plenty of room to put your head in the sand and pretend like those people don’t really exist.

2. Collapsed Contexts. Connected to this is the collapsing of contexts. In choosing what to say when, we account for both the audience and the context more generally. Some behaviors are appropriate in one context but not another, in front of one audience but not others. Social media brings all of these contexts crashing into one another and it’s often difficult to figure out what’s appropriate, let alone what can be understood.

3. Blurring of Public and Private. Finally, there’s the blurring of public and private. These distinctions are normally structured around audience and context with certain places or conversations being “public” or “private.” These distinctions are much harder to manage when you have to contend with the shifts in how the environment is organized.

All of this means that we’re forced to contend with a society in which things are being truly reconfigured. So what does this mean? As we are already starting to see, this creates all new questions about context and privacy, about our relationship to space and to the people around us.

Kids & adults use social networking sites differently

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

For American teenagers, social network sites became a social hangout space, not unlike the malls in which I grew up or the dance halls of yesteryear. This was a place to gather with friends from school and church when in-person encounters were not viable. Unlike many adults, teenagers were never really networking. They were socializing in pre-existing groups.

Social network sites became critically important to them because this was where they sat and gossiped, jockeyed for status, and functioned as digital flaneurs. They used these tools to see and be seen. …

Teen conversations may appear completely irrational, or pointless at best. “Yo, wazzup?” “Not much, how you?” may not seem like much to an outsider, but this is a form of social grooming. It’s a way of checking in, confirming friendships, and negotiating social waters.

Adults have approached Facebook in very different ways. Adults are not hanging out on Facebook. They are more likely to respond to status messages than start a conversation on someone’s wall (unless it’s their birthday, of course). Adults aren’t really decorating their profiles or making sure that their About Me’s are up-to-date. Adults, far more than teens, are using Facebook for its intended purpose as a social utility. For example, it is a tool for communicating with the past.

Adults may giggle about having run-ins with mates from high school, but underneath it all, many of them are curious. This isn’t that different from the school reunion. … Teenagers craft quizzes for themselves and their friends. Adults are crafting them to show off to people from the past and connect the dots between different audiences as a way of coping with the awkwardness of collapsed contexts.

The importance of network effects to social software

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Many who build technology think that a technology’s feature set is the key to its adoption and popularity. With social media, this is often not the case. There are triggers that drive early adopters to a site, but the single most important factor in determining whether or not a person will adopt one of these sites is whether or not it is the place where their friends hang out. In each of these cases, network effects played a significant role in the spread and adoption of the site.

The uptake of social media is quite different from the uptake of non-social technologies. For the most part, you don’t need your friends to use Word to find the tool useful. You do need your friends to use email for it to be useful, but, thanks to the properties of that medium, you don’t need them to be using Outlook or Hotmail to write to them. Many of the new genres of social media are walled gardens, requiring your friends to use that exact site for it to be valuable. This has its advantages for the companies who build them – that’s the whole attitude behind lock-in. But it also has its costs. Consider, for example, the fact that working-class and upper-class kids can’t talk to one another if they are on different SNSs.

Friendster didn’t understand network effects. In kicking off users who weren’t conforming to their standards, they pissed off more than those users; they pissed off those users’ friends who were left with little purpose to use the site. The popularity of Friendster unraveled as fast as it picked up, but the company never realized what hit them. All of their metrics were based on number of users. While only a few users deleted their accounts, the impact of those lost accounts was huge. The friends of those who departed slowly stopped using the site. At first, they went from logging in every hour to logging in every day, never affecting the metrics. But as nothing new came in and as the collective interest waned, their attention went elsewhere. Today, Friendster is succeeding because of its popularity in other countries, but in the US, it’s a graveyard of hipsters stuck in 2003.
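The Friendster unraveling boyd describes is a threshold cascade: each departure lowers the site's value for the departed user's friends, some of whom then leave in turn. A minimal sketch (the graph and threshold here are invented for illustration):

```python
# Toy cascade model of Friendster's unraveling: each user stays only while
# at least `threshold` of their friends are still active on the site.
friends = {
    "a": {"b", "c", "d"}, "b": {"a", "c", "d"}, "c": {"a", "b", "d"},
    "d": {"a", "b", "c", "e"}, "e": {"d", "f", "g"},
    "f": {"e", "g"}, "g": {"e", "f"},
}

def cascade(friends, kicked, threshold=2):
    active = set(friends) - set(kicked)
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for user in list(active):
            if len(friends[user] & active) < threshold:
                active.remove(user)   # the site lost its purpose for this user
                changed = True
    return active

# Kicking a single user ("e") costs the site three users in total:
# f and g each fall below the threshold and drift away too.
remaining = cascade(friends, kicked={"e"})
print(sorted(remaining))
```

This is why user-count metrics missed the damage: the deleted accounts were few, but each removal silently degraded the site for everyone connected to it.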

MySpace/Facebook history & sociology

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Facebook had launched as a Harvard-only site before expanding to other elite institutions, then to other 4-year colleges, then to 2-year colleges. It captured the mindshare of college students everywhere. It wasn’t until 2005 that they opened the doors to some companies and high schools. And only in 2006 did they open to all.

Facebook was narrated as the “safe” alternative and, in the 2006-2007 school year, a split amongst American teens occurred. Those college-bound kids from wealthier or upwardly mobile backgrounds flocked to Facebook while teens from urban or less economically privileged backgrounds rejected the transition and opted to stay with MySpace while simultaneously rejecting the fears brought on by American media. Many kids were caught in the middle and opted to use both, but the division that occurred resembles the same “jocks and burnouts” narrative that shaped American schools in the 1980s.

Defining social media, social software, & Web 2.0

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Social media is the latest buzzword in a long line of buzzwords. It is often used to describe the collection of software that enables individuals and communities to gather, communicate, share, and in some cases collaborate or play. In tech circles, social media has replaced the earlier fave “social software.” Academics still tend to prefer terms like “computer-mediated communication” or “computer-supported cooperative work” to describe the practices that emerge from these tools, and the old skool academics might even categorize these tools as “groupwork” tools. Social media is driven by another buzzword: “user-generated content,” or content that is contributed by participants rather than editors.

… These tools are part of a broader notion of “Web2.0.” Yet-another-buzzword, Web2.0 means different things to different people.

For the technology crowd, Web2.0 was about a shift in development and deployment. Rather than producing a product, testing it, and shipping it to be consumed by an audience that was disconnected from the developer, Web2.0 was about the perpetual beta. This concept makes all of us giggle, but what this means is that, for technologists, Web2.0 was about constantly iterating the technology as people interacted with it and learning from what they were doing. To make this happen, we saw the rise of technologies that supported real-time interactions, user-generated content, remixing and mashups, APIs and open-source software that allowed mass collaboration in the development cycle. …

For the business crowd, Web2.0 can be understood as hope. Web2.0 emerged out of the ashes of the fallen tech bubble and bust. Scars ran deep throughout Silicon Valley and venture capitalists and entrepreneurs wanted to party like it was 1999. Web2.0 brought energy to this forlorn crowd. At first they were skeptical, but slowly they bought in. As a result, we’ve seen a resurgence of startups, venture capitalists, and conferences. At this point, Web2.0 is sometimes referred to as Bubble2.0, but there’s something to say about “hope” even when the VCs start co-opting that term because they want four more years.

For users, Web2.0 was all about reorganizing web-based practices around Friends. For many users, direct communication tools like email and IM were used to communicate with one’s closest and dearest while online communities were tools for connecting with strangers around shared interests. Web2.0 reworked all of that by allowing users to connect in new ways. While many of the tools may have been designed to help people find others, what Web2.0 showed was that people really wanted a way to connect with those that they already knew in new ways. Even tools like MySpace and Facebook which are typically labeled social networkING sites were never really about networking for most users. They were about socializing inside of pre-existing networks.

DRM fails utterly

From John Siracusa’s “The once and future e-book: on reading in the digital age” (Ars Technica: 1 February 2009):

Nuances aside, the big picture remains the same: DRM for digital media distribution to consumers is a mathematically, technologically, and intellectually bankrupt exercise. It fails utterly to deliver its intended benefit: the prevention of piracy. Its disadvantages, however, are delivered in full force: limiting what consumers can legally do with content they have legitimately purchased, under threat of civil penalties or criminal prosecution.
