collaboration

Clay Shirky on the changes to publishing & media

From Parul Sehgal’s “Here Comes Clay Shirky” (Publishers Weekly: 21 June 2010):

PW: In April of this year, Wired’s Kevin Kelly turned a Shirky quote—“Institutions will try to preserve the problem to which they are the solution”—into “the Shirky Principle,” in deference to the simple, yet powerful observation. … Kelly explained, “The Shirky Principle declares that complex solutions, like a company, or an industry, can become so dedicated to the problem they are the solution to, that often they inadvertently perpetuate the problem.”

CS: It is possible to think that the Internet will be a net positive for society while admitting that there are significant downsides—after all, it’s not a revolution if nobody loses.

No one will ever wonder, is there anything amusing for me on the Internet? That is a solved problem. What we should really care about are [the Internet’s] cultural uses.

In Here Comes Everybody I told the story of the Abbot of Sponheim who in 1492 wrote a book saying that if this printing press thing is allowed to expand, what will the scribes do for a living? But it was more important that Europe be literate than for scribes to have a job.

In a world where a book had to be a physical object, charging money was a way to cause more copies to come into circulation. In the digital world, charging money for something is a way to produce fewer copies. There is no way to preserve the status quo and not abandon that value.

Some of it’s the brilliant Upton Sinclair observation: “It’s hard to make a man understand something if his livelihood depends on him not understanding it.” From the laying on of hands of [Italian printer] Aldus Manutius on down, publishing has always been this way. This is a medium where a change to glue-based paperback binding constituted a revolution.

PW: When do you think a similar realization will come to book publishing?

CS: I think someone will make the imprint that bypasses the traditional distribution networks. Right now the big bottleneck is the head buyer at Barnes & Noble. That’s the seawall holding back the flood in publishing. Someone’s going to say, “I can do a business book or a vampire book or a romance novel, whatever, that might sell 60% of the units it would sell if I had full distribution and a multimillion dollar marketing campaign—but I can do it for 1% of the cost.” It has already happened a couple of times with specialty books. The moment of tip happens when enough things get joined up to create their own feedback loop, and the feedback loop in publishing changes when someone at Barnes & Noble says: “We can’t afford not to stock this particular book or series from an independent publisher.” It could be on Lulu, or iUniverse, whatever. And, I feel pretty confident saying it’s going to happen in the next five years.


Criminal goods & services sold on the black market

From Ellen Messmer’s “Symantec takes cybercrime snapshot with ‘Underground Economy’ report” (Network World: 24 November 2008):

The “Underground Economy” report [from Symantec] contains a snapshot of online criminal activity observed from July 2007 to June 2008 by a Symantec team monitoring activities in Internet Relay Chat (IRC) and Web-based forums where stolen goods are advertised. Symantec estimates the total value of the goods advertised on what it calls “underground servers” was about $276 million, with credit-card information accounting for 59% of the total.

If that purloined information were successfully exploited, it probably would bring the buyers about $5 billion, according to the report — just a drop in the bucket, points out David Cowings, senior manager of operations at Symantec Security Response.

“Ninety-eight percent of the underground-economy servers have life spans of less than 6 months,” Cowings says. “The smallest IRC server we saw had five channels and 40 users. The largest IRC server network had 28,000 channels and 90,000 users.”

In the one year covered by the report, Symantec’s team observed more than 69,000 distinct advertisers and 44 million total messages online selling illicit credit-card and financial data, but the 10 most active advertisers appeared to account for 11% of the total messages posted and $575,000 in sales.

According to the report, a bank-account credential was selling for $10 to $1,000, depending on the balance and location of the account. Sellers also hawked specific financial sites’ vulnerabilities for an average price of $740, though prices did go as high as $2,999.

In other spots, the average price for a keystroke logger — malware used to capture a victim’s information — was an affordable $23. Attack tools, such as botnets, sold for an average of $225. “For $10, you could host a phishing site on someone’s server or compromised Web site,” Cowings says.

Desktop computer games appeared to be the most-pirated software, accounting for 49% of all file instances that Symantec observed. The second-highest category was utility applications; third-highest was multimedia productivity applications, such as photograph or HTML editors.


Defining social media, social software, & Web 2.0

From danah boyd’s “Social Media is Here to Stay… Now What?” at the Microsoft Research Tech Fest, Redmond, Washington (danah: 26 February 2009):

Social media is the latest buzzword in a long line of buzzwords. It is often used to describe the collection of software that enables individuals and communities to gather, communicate, share, and in some cases collaborate or play. In tech circles, social media has replaced the earlier fave “social software.” Academics still tend to prefer terms like “computer-mediated communication” or “computer-supported cooperative work” to describe the practices that emerge from these tools and the old skool academics might even categorize these tools as “groupwork” tools. Social media is driven by another buzzword: “user-generated content” or content that is contributed by participants rather than editors.

… These tools are part of a broader notion of “Web2.0.” Yet-another-buzzword, Web2.0 means different things to different people.

For the technology crowd, Web2.0 was about a shift in development and deployment. Rather than producing a product, testing it, and shipping it to be consumed by an audience that was disconnected from the developer, Web2.0 was about the perpetual beta. This concept makes all of us giggle, but what this means is that, for technologists, Web2.0 was about constantly iterating the technology as people interacted with it and learning from what they were doing. To make this happen, we saw the rise of technologies that supported real-time interactions, user-generated content, remixing and mashups, APIs and open-source software that allowed mass collaboration in the development cycle. …

For the business crowd, Web2.0 can be understood as hope. Web2.0 emerged out of the ashes of the fallen tech bubble and bust. Scars ran deep throughout Silicon Valley and venture capitalists and entrepreneurs wanted to party like it was 1999. Web2.0 brought energy to this forlorn crowd. At first they were skeptical, but slowly they bought in. As a result, we’ve seen a resurgence of startups, venture capitalists, and conferences. At this point, Web2.0 is sometimes referred to as Bubble2.0, but there’s something to say about “hope” even when the VCs start co-opting that term because they want four more years.

For users, Web2.0 was all about reorganizing web-based practices around Friends. For many users, direct communication tools like email and IM were used to communicate with one’s closest and dearest while online communities were tools for connecting with strangers around shared interests. Web2.0 reworked all of that by allowing users to connect in new ways. While many of the tools may have been designed to help people find others, what Web2.0 showed was that people really wanted a way to connect with those that they already knew in new ways. Even tools like MySpace and Facebook which are typically labeled social networkING sites were never really about networking for most users. They were about socializing inside of pre-existing networks.


Prices for various services and software in the underground

From Tom Espiner’s “Cracking open the cybercrime economy” (CNET News: 14 December 2007):

“Over the years, the criminal elements, the ones who are making money, making millions out of all this online crime, are just getting stronger and stronger. I don’t think we are really winning this war.”

As director of antivirus research for F-Secure, Mikko Hypponen might be expected to overplay the seriousness of the situation. But according to the Finnish company, during 2007 the number of samples of malicious code on its database doubled, having taken 20 years to reach the size it was at the beginning of this year.

“From Trojan creation sites out of Germany and the Eastern bloc, you can purchase kits and support for malware in yearly contracts,” said [David Marcus, security research manager at McAfee Avert Labs]. “They present themselves as a cottage industry which sells tools or creation kits. It’s hard to tell if it’s a conspiracy or a bunch of autonomous individuals who are good at covering their tracks.”

Joe Telafici, director of operations at McAfee’s Avert Labs, said Storm is continuing to evolve. “We’ve seen periodic activity from Storm indicating that it is still actively being maintained. They have actually ripped out core pieces of functionality to modify the obfuscation mechanisms that weren’t working any more. Most people keep changing the wrapper until it gets by (security software)–these guys changed the functionality.”

Peter Gutmann, a security researcher at the University of Auckland, says in a report that the distribution of malicious software via the affiliate model–in which someone pays others to infect users with spyware and Trojans–became more prevalent in 2007.

The affiliate model was pioneered by the iframedollars.biz site in 2005, which paid Webmasters 6 cents per infected site. Since then, this has been extended to a “vast number of adware affiliates,” according to Gutmann. For example, one adware supplier pays 30 cents for each install in the United States, 20 cents in Canada, 10 cents in the United Kingdom, and 1 or 2 cents elsewhere.

Hackers also piggyback malicious software on legitimate software. According to Gutmann, versions of coolwebsearch co-install a mail zombie and a keystroke logger, while some peer-to-peer and file-sharing applications come with bundled adware and spyware.

In March, the price quoted on malware sites for the Gozi Trojan, which steals data and sends it to hackers in an encrypted form, was between $1,000 and $2,000 for the basic version. Buyers could purchase add-on services at varying prices starting at $20.

In the 2007 black economy, everything can be outsourced, according to Gutmann. A scammer can buy hosts for a phishing site, buy spam services to lure victims, buy drops to send the money to, and pay a cashier to cash out the accounts. …

Antidetection vendors sell services to malicious-software and botnet vendors, who sell stolen credit card data to middlemen. Those middlemen then sell that information to fraudsters who deal in stolen credit card data and pay a premium for verifiably active accounts. “The money seems to be in the middlemen,” Gutmann says.

One example of this is the Gozi Trojan. According to reports, the malware was available this summer as a service from iFrameBiz and stat482.com, who bought the Trojan from the HangUp team, a group of Russian hackers. The Trojan server was managed by 76service.com, and hosted by the Russian Business Network, which security vendors allege offered “bullet-proof” hosting for phishing sites and other illicit operations.

According to Gutmann, there are many independent malicious-software developers selling their wares online. Private releases can be tailored to individual clients, while vendors offer support services, often bundling antidetection. For example, the private edition of Hav-rat version 1.2, a Trojan written by hacker Havalito, is advertised as being completely undetectable by antivirus companies. If it does get detected then it will be replaced with a new copy that again is supposedly undetectable.

Hackers can buy denial-of-service attacks for $100 per day, while spammers can buy CDs with harvested e-mail addresses. Spammers can also send mail via spam brokers, handled via online forums such as specialham.com and spamforum.biz. In this environment, $1 buys 1,000 to 5,000 credits, while $1,000 buys 10,000 compromised PCs. Credit is deducted when the spam is accepted by the target mail server. The brokers handle spam distribution via open proxies, relays and compromised PCs, while the sending is usually done from the client’s PC using broker-provided software and control information.

Carders, who mainly deal in stolen credit card details, openly publish prices, or engage in private negotiations to decide the price, with some sources giving bulk discounts for larger purchases. The rate for credit card details is approximately $1 for all the details down to the Card Verification Value (CVV); $10 for details with CVV linked to a Social Security number; and $50 for a full bank account.


Criminals working together to improve their tools

From Dan Goodin’s “Crimeware giants form botnet tag team” (The Register: 5 September 2008):

The Rock Phish gang – one of the net’s most notorious phishing outfits – has teamed up with another criminal heavyweight called Asprox in overhauling its network with state-of-the-art technology, according to researchers from RSA.

Over the past five months, Rock Phishers have painstakingly refurbished their infrastructure, introducing several sophisticated crimeware packages that get silently installed on the PCs of its victims. One of those programs makes infected machines part of a fast-flux botnet that adds reliability and resiliency to the Rock Phish network.

Based in Europe, the Rock Phish group is a criminal collective that has been targeting banks and other financial institutions since 2004. According to RSA, they are responsible for half of the worldwide phishing attacks and have siphoned tens of millions of dollars from individuals’ bank accounts. The group got its name from a now discontinued quirk in which the phishers used directory paths that contained the word “rock.”

The first sign the group was expanding operations came in April, when it introduced a trojan known alternately as Zeus or WSNPOEM, which steals sensitive financial information in transit from a victim’s machine to a bank. Shortly afterward, the gang added more crimeware, including a custom-made botnet client that was spread, among other means, using the Neosploit infection kit.

Soon, additional signs appeared pointing to a partnership between Rock Phishers and Asprox. Most notably, the command and control server for the custom Rock Phish crimeware had exactly the same directory structure as many of the Asprox servers, leading RSA researchers to believe Rock Phish and Asprox attacks were using at least one common server. …

RSA researchers also noticed that a decrease in phishing attacks hosted on Rock Phishers’ old servers coincided with never-before-seen phishing attacks used on the Asprox botnet.

In this case, Rock Phishers seem to be betting that the spoofed pages used in their phishing attacks will remain up longer using fast-flux technology from Asprox.

“It just shows that these guys know each other and are willing to provide services to each other,” said Joe Stewart, a researcher at SecureWorks who has spent years tracking Asprox and groups that use fast-flux botnets. “This goes on in the underground all the time.”


My new book – Google Apps Deciphered – is out!

I’m really proud to announce that my 5th book is now out & available for purchase: Google Apps Deciphered: Compute in the Cloud to Streamline Your Desktop. My other books include:

(I’ve also contributed to two others: Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux and Microsoft Vista for IT Security Professionals.)

Google Apps Deciphered is a guide to setting up Google Apps, migrating to it, customizing it, and using it to improve productivity, communications, and collaboration. I walk you through each leading component of Google Apps individually, and then show you exactly how to make them work together on the Web or by integrating them with your favorite desktop apps. I provide practical insights on Google Apps programs for email, calendaring, contacts, wikis, word processing, spreadsheets, presentations, video, and even Google’s new web browser Chrome. My aim was to collect and present the tips and tricks I’ve gained by using and setting up Google Apps for clients, family, and friends.

Here’s the table of contents:

  • 1: Choosing an Edition of Google Apps
  • 2: Setting Up Google Apps
  • 3: Migrating Email to Google Apps
  • 4: Migrating Contacts to Google Apps
  • 5: Migrating Calendars to Google Apps
  • 6: Managing Google Apps Services
  • 7: Setting Up Gmail
  • 8: Things to Know About Using Gmail
  • 9: Integrating Gmail with Other Software and Services
  • 10: Integrating Google Contacts with Other Software and Services
  • 11: Setting Up Google Calendar
  • 12: Things to Know About Using Google Calendar
  • 13: Integrating Google Calendar with Other Software and Services
  • 14: Things to Know About Using Google Docs
  • 15: Integrating Google Docs with Other Software and Services
  • 16: Setting Up Google Sites
  • 17: Things to Know About Using Google Sites
  • 18: Things to Know About Using Google Talk
  • 19: Things to Know About Using Start Page
  • 20: Things to Know About Using Message Security and Recovery
  • 21: Things to Know About Using Google Video
  • Appendix A: Backing Up Google Apps
  • Appendix B: Dealing with Multiple Accounts
  • Appendix C: Google Chrome: A Browser Built for Cloud Computing

If you want to know more about Google Apps and how to use it, then I know you’ll enjoy and learn from Google Apps Deciphered. You can read about and buy the book at Amazon (http://www.amazon.com/Google-Apps-Deciphered-Compute-Streamline/dp/0137004702) for $26.39. If you have any questions or comments, don’t hesitate to contact me at scott at granneman dot com.


1% create, 10% comment, 89% just use

From Charles Arthur’s “What is the 1% rule?” (Guardian Unlimited: 20 July 2006):

It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will “interact” with it (commenting or offering improvements) and the other 89 will just view it.

It’s a meme that emerges strongly in statistics from YouTube, which in just 18 months has gone from zero to 60% of all online video viewing.

The numbers are revealing: each day there are 100 million downloads and 65,000 uploads – which, as Antony Mayfield (at http://open.typepad.com/open) points out, is 1,538 downloads per upload – and 20m unique users per month.

That puts the “creator to consumer” ratio at just 0.5%, but it’s early days yet …

50% of all Wikipedia article edits are done by 0.7% of users, and more than 70% of all articles have been written by just 1.8% of all users, according to the Church of the Customer blog (http://customerevangelists.typepad.com/blog/).

Earlier metrics garnered from community sites suggested that about 80% of content was produced by 20% of the users, but the growing number of data points is creating a clearer picture of how Web 2.0 groups need to think. For instance, a site that demands too much interaction and content generation from users will see nine out of 10 people just pass by.

Bradley Horowitz of Yahoo points out that much the same applies at Yahoo: in Yahoo Groups, the discussion lists, “1% of the user population might start a group; 10% of the user population might participate actively, and actually author content, whether starting a thread or responding to a thread-in-progress; 100% of the user population benefits from the activities of the above groups,” he noted on his blog (www.elatable.com/blog/?p=5) in February.


Thoughts on tagging/folksonomy

From Ulises Ali Mejias’ “A del.icio.us study: Bookmark, Classify and Share: A mini-ethnography of social practices in a distributed classification community”:

This principle of distribution is at work in socio-technical systems that allow users to collaboratively organize a shared set of resources by assigning classifiers, or tags, to each item. The practice is coming to be known as free tagging, open tagging, ethnoclassification, folksonomy, or faceted hierarchy (henceforth referred to in this study as distributed classification) …

One important feature of systems such as these is that they do not impose a rigid taxonomy. Instead, they allow users to assign whatever classifiers they choose. Although this might sound counter-productive to the ultimate goal of organizing content, in practice it seems to work rather well, although it does present some drawbacks. For example, most people will probably classify pictures of cats by using the tag ‘cats.’ But what happens when some individuals use ‘cat’ or ‘feline’ or ‘meowmeow’ …

It seems that while most people might not be motivated to contribute to a pre-established system of classification that may not meet their needs, or to devise new and complex taxonomies of their own, they are quite happy to use distributed systems of classification that are quick and able to accommodate their personal (and ever changing) systems of classification. …

But distributed classification does not accrue benefits only to the individual. It is a very social endeavor in which the community as a whole can benefit. Jon Udell describes some of the individual and social possibilities of this method of classification:

These systems offer lots of ways to visualize and refine the tag space. It’s easy to know whether a tag you’ve used is unique or, conversely, popular. It’s easy to rename a tag across a set of items. It’s easy to perform queries that combine tags. Armed with such powerful tools, people can collectively enrich shared data. (Udell 2004) …

Set this [an imposed taxonomy] against the idea of allowing a user to add tags to any given document in the corpus. Like Del.icio.us, there needn’t be a pre-defined hierarchy or lexicon of terms to use; one can simply lean on the power of ethnoclassification to build that lexicon dynamically. As such, it will dynamically evolve as usages change and shift, even as needs change and shift. (Williams, 2004)

The primary benefit of free tagging is that we know the classification makes sense to users… For a content creator who is uploading information into such a system, being able to freely list subjects, instead of choosing from a pre-approved “pick list,” makes tagging content much easier. This, in turn, makes it more likely that users will take time to classify their contributions. (Merholz, 2004)

Folksonomies work best when a number of users all describe the same piece of information. For instance, on del.icio.us, many people have bookmarked wikipedia (http://del.icio.us/url/bca8b85b54a7e6c01a1bcfaf15be1df5), each with a different set of words to describe it. Among the various tags used, del.icio.us shows that reference, wiki, and encyclopedia are the most popular. (Wikipedia entry for folksonomy, retrieved December 15, 2004 from http://en.wikipedia.org/wiki/Folksonomy)
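Just to make the mechanics concrete, here is a rough Python sketch of distributed classification (my own illustration, not code from the study; the users, URLs, and tags are invented). It shows how tags from many users pile up on the same resource, how the most popular ones surface, and how a query can combine tags:

```python
from collections import Counter, defaultdict

# Hypothetical bookmarks: several users independently tag the same URL.
bookmarks = [
    ("alice", "http://en.wikipedia.org/", ["reference", "wiki", "encyclopedia"]),
    ("bob",   "http://en.wikipedia.org/", ["wiki", "research", "encyclopedia"]),
    ("carol", "http://en.wikipedia.org/", ["reference", "wiki", "free"]),
    ("dave",  "http://del.icio.us/",      ["bookmarks", "tagging", "web2.0"]),
]

# Aggregate every tag that any user has applied to each URL.
tags_by_url = defaultdict(Counter)
for user, url, tags in bookmarks:
    tags_by_url[url].update(tags)

# The folksonomy view: the most popular tags for a given resource.
print(tags_by_url["http://en.wikipedia.org/"].most_common(3))
# [('wiki', 3), ('reference', 2), ('encyclopedia', 2)]

# A query that combines tags: URLs carrying *all* of the requested tags.
def urls_with_tags(wanted):
    return [url for url, counts in tags_by_url.items()
            if all(tag in counts for tag in wanted)]

print(urls_with_tags(["wiki", "reference"]))  # ['http://en.wikipedia.org/']
```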

Of course, this approach is not without its potential problems:

With no one controlling the vocabulary, users develop multiple terms for identical concepts. For example, if you want to find all references to New York City on Del.icio.us, you’ll have to look through “nyc,” “newyork,” and “newyorkcity.” You may also encounter the inverse problem — users employing the same term for disparate concepts. (Merholz, 2004) …

But as Clay Shirky remarks, this solution might diminish some of the benefits that we can derive from folksonomies:

Synonym control is not as wonderful as is often supposed, because synonyms often aren’t. Even closely related terms like movies, films, flicks, and cinema cannot be trivially collapsed into a single word without loss of meaning, and of social context … (Shirky, 2004) …

The choice of tags [in the entire del.icio.us system] follows something resembling the Zipf or power law curve often seen in web-related traffic. Just six tags (python, delicious/del.icio.us, programming, hacks, tools, and web) account for 80% of all the tags chosen, and a long tail of 58 other tags make up the remaining 20%, with most occurring just once or twice … In the del.icio.us community, the rich get richer and the poor stay poor via http://del.icio.us/popular. Links noted by enough users within a short space of time get listed here, and many del.icio.us users use it to keep up with the zeitgeist. (Biddulph, 2004) …


A game completely controlled by the players

From Ron Dulin’s “A Tale in the Desert”:

A Tale in the Desert is set in ancient Egypt. Very ancient Egypt: The only society to be found is that which has been created by the existing players. Your mentor will show you how to gather materials and show you the basics of learning and construction. These are the primary goals in the game–you learn from academies and universities, and then you use what you’ve learned to build things, such as structures and tools. As your character learns new skills, you can advance. …

Higher-level tests are much more complex and require you to enlist lower-level characters to help you complete them. Players are directly involved in almost all aspects of the game, from the introduction of new technologies to the game’s rules to the landscape itself. With a few exceptions, almost every structure you see in the game was built by a player or group of players. New technologies are introduced through research at universities, which is aided by players’ donations to these institutions. Most interestingly, though, the game rules themselves can be changed through the legal system. If you don’t like a certain aspect of the game, within reason, you can introduce a petition to have it changed. If you get enough signatures on your petition, it will be subject to a general vote. If it passes, it becomes a new law. This system is also used for permanently banning players who have, for some reason or another, made other players’ in-game lives difficult. …

The designers themselves have stated that A Tale in the Desert is about creating a society, and watching the experiment in action is almost as enjoyable as taking part.


10 early choices that helped make the Internet successful

From Dan Gillmor’s “10 choices that were critical to the Net’s success”:

1) Make it all work on top of existing networks.

2) Use packets, not circuits.

3) Create a ‘routing’ function.

4) Split the Transmission Control Protocol (TCP) and Internet Protocol (IP) …

5) The National Science Foundation (NSF) funds the University of California-Berkeley to put TCP/IP into the Unix operating system originally developed by AT&T.

6) CSNET, an early network used by universities, connects with the ARPANET … The connection was for e-mail only, but it led to much more university research on networks and a more general understanding among students, faculty and staff of the value of internetworking.

7) The NSF requires users of the NSFNET to use TCP/IP, not competing protocols.

8) International telecommunications standards bodies reject TCP/IP, then create a separate standard called OSI.

9) The NSF creates an “Acceptable Use Policy” restricting NSFNET use to noncommercial activities.

10) Once things start to build, government stays mostly out of the way.


How terrorists use the Web

From Technology Review’s “Terror’s Server”:

According to [Gabriel] Weimann [professor of communications at University of Haifa], the number of [terror-related] websites has leapt from only 12 in 1997 to around 4,300 today. …

These sites serve as a means to recruit members, solicit funds, and promote and spread ideology. …

The September 11 hijackers used conventional tools like chat rooms and e-mail to communicate and used the Web to gather basic information on targets, says Philip Zelikow, a historian at the University of Virginia and the former executive director of the 9/11 Commission. …

Finally, terrorists are learning that they can distribute images of atrocities with the help of the Web. … “The Internet allows a small group to publicize such horrific and gruesome acts in seconds, for very little or no cost, worldwide, to huge audiences, in the most powerful way,” says Weimann. …


A very brief history of programming

From Brian Hayes’ “The Post-OOP Paradigm”:

The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when “the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” Half a century later, we’re still debugging.

The very first programs were written in pure binary notation: Both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine’s memory. Before you could call a subroutine, you had to calculate its address.

The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming.

Assembly language was a crucial early advance, but still the programmer had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x² + y² might require dozens of assembly-language instructions. Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x² + y² would be written simply as X**2+Y**2. Expressions of this kind are translated into binary form by a program called a compiler.
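To give a feel for what that translation involves, here is a toy sketch in Python (my own illustration; no real Fortran compiler works this way) that turns the expression x² + y² into a short sequence of assembly-like load/mul/add instructions of the kind a programmer once wrote by hand:

```python
from itertools import count

# Tiny expression tree for x*x + y*y (i.e., x**2 + y**2).
expr = ("add", ("mul", "x", "x"), ("mul", "y", "y"))

def compile_expr(node, out, regs):
    """Emit load/mul/add pseudo-instructions; return the register holding the result."""
    if isinstance(node, str):                 # a variable: load it from memory
        reg = f"r{next(regs)}"
        out.append(f"load  {reg}, {node}")
        return reg
    op, left, right = node                    # an operation: compile both operands first
    a = compile_expr(left, out, regs)
    b = compile_expr(right, out, regs)
    out.append(f"{op:<5} {a}, {b}")           # result is left in register `a`
    return a

instructions, regs = [], count()
result = compile_expr(expr, instructions, regs)
print("\n".join(instructions))
print(f"; value of x**2 + y**2 ends up in {result}")
```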

… By the 1960s large software projects were notorious for being late, overbudget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a “tar pit” and remarked, “Everyone seems to have been surprised by the stickiness of the problem.”

One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra’s brief letter to the editor titled “Go to statement considered harmful.” Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the goto command, which allows jumps into or out of the middle of a routine). Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs.
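Here is a minimal Python sketch of those three idioms (my own illustration, not code from Hayes’ article): sequencing, alternation, and iteration are enough to express an ordinary routine, with no goto and a single entrance and exit for each construct:

```python
def describe(numbers):
    """Summarize a list of numbers using only the three structured-programming idioms."""
    total = 0
    count = 0
    for n in numbers:          # iteration: repeat until the sequence is exhausted
        if n >= 0:             # alternation: either do A or do B
            total += n
        else:
            total -= n
        count += 1             # sequencing: do A, then B, then C
    if count == 0:
        return "empty"
    return f"{count} numbers, absolute sum {total}"

print(describe([3, -1, 4, -1, 5]))   # 5 numbers, absolute sum 14
```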

Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.

Object-oriented programming addresses these issues by packing both data and procedures—both nouns and verbs—into a single object. An object named triangle would have inside it some data structure representing a three-sided shape, but it would also include the procedures (called methods in this context) for acting on the data. To rotate a triangle, you send a message to the triangle object, telling it to rotate itself. Sending and receiving messages is the only way objects communicate with one another; outsiders are not allowed direct access to the data. Because only the object’s own methods know about the internal data structures, it’s easier to keep them in sync.

You define the class triangle just once; individual triangles are created as instances of the class. A mechanism called inheritance takes this idea a step further. You might define a more-general class polygon, which would have triangle as a subclass, along with other subclasses such as quadrilateral, pentagon and hexagon. Some methods would be common to all polygons; one example is the calculation of perimeter, which can be done by adding the lengths of the sides, no matter how many sides there are. If you define the method calculate-perimeter in the class polygon, all the subclasses inherit this code.
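Here is a small Python sketch of that idea (my own illustration; the article doesn’t specify a language): the perimeter calculation is defined once on the general polygon class, the triangle subclass inherits it, and the vertex data stays hidden behind the object’s methods:

```python
import math

class Polygon:
    """A closed shape defined by its vertices; the data is kept inside the object."""
    def __init__(self, vertices):
        self._vertices = list(vertices)          # encapsulated: outsiders use methods

    def perimeter(self):
        """Sum the side lengths; works for any number of sides, so subclasses inherit it."""
        pts = self._vertices
        return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

class Triangle(Polygon):
    """A polygon constrained to exactly three vertices."""
    def __init__(self, a, b, c):
        super().__init__([a, b, c])

    def rotate(self, angle):
        """Rotate the triangle about the origin; only the object touches its own data."""
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        self._vertices = [(x * cos_a - y * sin_a, x * sin_a + y * cos_a)
                          for x, y in self._vertices]

t = Triangle((0, 0), (3, 0), (0, 4))
print(t.perimeter())         # 12.0, inherited from Polygon
t.rotate(math.pi / 2)
print(round(t.perimeter()))  # 12; rotating preserves the perimeter
```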


A living story, tattooed on flesh

From The New York Times Magazine’s “Skin Literature”:

Most artists spend their careers trying to create something that will live forever. But the writer Shelley Jackson is creating a work of literature that is intentionally and indisputably mortal. Jackson is publishing her latest short story by recruiting 2,095 people, each of whom will have one word of the story tattooed on his or her body. The story, titled ‘Skin,’ will appear only on the collective limbs, torsos and backsides of its participants. And decades from now, when the last of Jackson’s ‘words’ dies, so, too, will her tale.

As of November, Jackson, the Brooklyn-based author of a short-story collection called ‘The Melancholy of Anatomy,’ had enrolled about 1,800 volunteers, some from such distant countries as Argentina, Jordan, Thailand and Finland. Participants, who contact Jackson through her Web site, cannot choose which word they receive. And their tattoos must be inked in the font that Jackson has specified. But they do have some freedom to bend and stretch the narrative. They can select the place on their bodies they want to become part of the Jackson opus. In return, Jackson asks her ‘words’ to sign a 12-page release absolving her of liability and promising not to share the story with others. (Participants are the only people who will get to see the full text of the story.) They must also send her two photographs — one of the word on their skin, the other a portrait of themselves without the word visible — which she may later publish or exhibit.

… Mothers and daughters are requesting consecutive words. So are couples, perhaps hoping to form the syntactic equivalent of a civil union. For others, the motives are social: Jackson is encouraging her far-flung words to get to know each other via e-mail, telephone, even in person. (Imagine the possibilities. A sentence getting together for dinner. A paragraph having a party.) …

… when a participant meets his or her demise, Jackson vows, she will try to attend that person’s funeral. But the 41-year-old author understands that some of her 2,095 collaborators, many of whom are in their 20’s, might outlive her. If she dies first, she says, she hopes several of them will come to her funeral and make her the first writer ever to be mourned by her words.


New communication, new art forms

From Jim Hanas’ “The Story Doesn’t Care: An Interview with Sean Stewart”:

I think that every means of communication carries within itself the potential for a form of art. Once the printing press was built, novels were going to happen. It took the novel a little while to figure out exactly what it was going to be, but once the press was there, something was going to occur. Once motion picture cameras were around, the movies—in some format or another—were going to happen.

I modestly or immodestly think that [developers of alternate reality games] got some things fundamentally right about the way the web and the internet want to tell stories in a way that not everyone had gotten quite when we lucked into it. What people do on the web is they look for things and they gossip. We found a way of storytelling that has a lot to do with looking for things and gossiping about them. …

Suspension of disbelief is a much more fragile creation in the kinds of campaigns we’re doing right now than it is in novels, where everyone has taken the last two hundred years to agree on a set of rules about how you understand what’s happening in a book. That hasn’t happened here. Right now, this is at an unbelievably fluid and dynamic stage—a whole bunch of things that have been figured out in other art forms, we’re working them out on the fly.


The new American community: affinity vs. proximity

From “Study: Want Community? Go Online” [emphasis added]:

Nearly 40 percent of Americans say they participate in online communities, with sites around hobbies, shared personal interests, and health-related issues among the most popular. That’s according to a survey conducted by ACNielsen and commissioned by eBay.

The survey was conducted in late September. Of 1,007 respondents, 87 percent say they are part of a community. Of those, 66 percent say they participate in shared personal interest sites. Next come hobby sites (62 percent), health community sites (55 percent), public issues sites (49 percent), and commerce sites (47 percent). Others participate in social or business networking sites (42 percent), sports sites (42 percent), alumni sites (39 percent), or dating sites (23 percent).

“We are finding that affinity is quickly replacing proximity as the key driver in forming communities,” said Bruce Paul, vice president of ACNielsen. …

“I think that a lot of people initially connect [on online communities] because they share information, which for a site like eBay is beneficial because they learn and grow from each other,” said Rachel Makool, director of community relations for eBay. “Then, of course, relationships form, and they grow from there.”

Researchers note that among offline communities, only membership in religious congregations (59 percent), social groups (54 percent), and neighborhood groups (52 percent) are more common than participation in online communities (39 percent). Professional groups (37 percent), activity groups (32 percent), school volunteer groups (30 percent), and health/country clubs (31 percent) came in behind online communities.

The study also shows that though 30 percent of online community members interact on a daily basis, only 7 percent of offline community members interacted that often. It also reveals that 47 percent of offline communities have an online component, such as e-mailing or chatting online.


Blogging at IBM

From “3,600+ blogs: A glance into IBM’s internal blogging”:

Through the central blog dashboard at the intranet W3, IBMers now can find more than 3,600 blogs written by their co-workers. As of June 13 there were 3,612 internal blogs with 30,429 posts. Internal blogging is still at a stage of testing and trying at IBM but the number of blogs is growing rapidly …

US, Canada and Australia are very active countries, but even in small European countries there are quite a few internal bloggers: 147 in Sweden and 170 in the Netherlands, to mention two examples. …

… the most common topics.

News or events that affect the business
“When IBM sold the personal computing division rumours were flying around before it actually happened and people were blogging about that, giving their opinions about what was going to happen and how it would affect IBM.”

Metablogging
“It’s a new technology of special interest to people who blog.”

Administrative things
“The little changes going on in the company — the water-cooler talk.”

Product announcements
“Not necessarily of general interest but of interest to the specific community working with the product.”

Hints and tips
“…for example about what bloggers have found interesting on the intranet.”


Cave or community

From Sandeep Krishnamurthy’s Cave or Community?: An Empirical Examination of 100 Mature Open Source Projects:

I systematically look at the actual number of developers involved in the production of one hundred mature OSS products. What I found is more consistent with the lone developer (or cave) model of production than with a community model (with a few glaring exceptions, of course). …

… My contention is only that communities do things other than produce the actual product, e.g., provide feature suggestions, try products out as lead users, answer questions, etc. …

To be more specific, the top 100 most active projects (based on Sourceforge’s activity percentile) in the mature class were chosen for this study. …

Finding 1: The vast majority of mature OSS programs are developed by a small number of individuals. …

Moreover, as shown in Table 2, only 29% of all projects had more than 5 developers while 51% of projects had 1 project administrator. Only 19 out of 100 projects had more than 10 developers. On the other extreme, 22% of projects had only one developer associated with them. …

Finding 2: Very few OSS products generate a lot of discussion. Most products do not generate too much discussion. …

Finding 3: Products with more developers tend to be viewed and downloaded more often. …

Finding 4: The number of developers working on an OSS program was unrelated to the release date.

It could be argued that older projects may have more developers associated with them. However, we found no relationship between the release date and the number of developers associated with a program. …

Even though the discussion here may seem like an example of extreme free-riding, the reader needs to know that all free-riding is not necessarily “bad”. For instance, consider public radio stations in the United States. Even the most successful stations have about a 10% contribution rate or a 90% free-ridership rate. But, they are still able to meet their goals! Similarly, the literature on lurking in e-mail lists has suggested that if everyone in a community contributes it may actually be counter-productive.

Similarly, a recent survey of participants in open-source projects conducted by the Boston Consulting Group and MIT provides more insight. The top five motivations of open-source participants were:

1. To take part in an intellectually stimulating project.
2. To improve their skill.
3. To take the opportunity to work with open-source code.
4. Non-work functionality.
5. Work-related functionality.
