internet

Differences between Macintosh & Unix programmers

From Eric Steven Raymond’s “Problems in the Environment of Unix” (The Art of Unix Programming: 19 September 2003):

Macintosh programmers are all about the user experience. They’re architects and decorators. They design from the outside in, asking first “What kind of interaction do we want to support?” and then building the application logic behind it to meet the demands of the user-interface design. This leads to programs that are very pretty and infrastructure that is weak and rickety. In one notorious example, as late as Release 9 the MacOS memory manager sometimes required the user to manually deallocate memory by manually chucking out exited but still-resident programs. Unix people are viscerally revolted by this kind of mal-design; they don’t understand how Macintosh people could live with it.

By contrast, Unix people are all about infrastructure. We are plumbers and stonemasons. We design from the inside out, building mighty engines to solve abstractly defined problems (like “How do we get reliable packet-stream delivery from point A to point B over unreliable hardware and links?”). We then wrap thin and often profoundly ugly interfaces around the engines. The commands date(1), find(1), and ed(1) are notorious examples, but there are hundreds of others. Macintosh people are viscerally revolted by this kind of mal-design; they don’t understand how Unix people can live with it. …

In many ways this kind of parochialism has served us well. We are the keepers of the Internet and the World Wide Web. Our software and our traditions dominate serious computing, the applications where 24/7 reliability and minimal downtime is a must. We really are extremely good at building solid infrastructure; not perfect by any means, but there is no other software technical culture that has anywhere close to our track record, and it is one to be proud of. …

To non-technical end users, the software we build tends to be either bewildering and incomprehensible, or clumsy and condescending, or both at the same time. Even when we try to do the user-friendliness thing as earnestly as possible, we’re woefully inconsistent at it. Many of the attitudes and reflexes we’ve inherited from old-school Unix are just wrong for the job. Even when we want to listen to and help Aunt Tillie, we don’t know how — we project our categories and our concerns onto her and give her ‘solutions’ that she finds as daunting as her problems.

Differences between Macintosh & Unix programmers Read More »

Just how big is YouTube?

From Reuters’s “YouTube serves up 100 mln videos a day” (16 July 2006):

YouTube, the leader in Internet video search, said on Sunday viewers are now watching more than 100 million videos per day on its site, marking the surge in demand for its “snack-sized” video fare.

Since springing from out of nowhere late last year, YouTube has come to hold the leading position in online video with 29 percent of the U.S. multimedia entertainment market, according to the latest weekly data from Web measurement site Hitwise.

YouTube videos account for 60 percent of all videos watched online, the company said. …

In June, 2.5 billion videos were watched on YouTube, which is based in San Mateo, California and has just over 30 employees. More than 65,000 videos are now uploaded daily to YouTube, up from around 50,000 in May, the company said.

YouTube boasts nearly 20 million unique users per month, according to Nielsen//NetRatings, another Internet audience measurement firm.

Just how big is YouTube? Read More »

1st 2 questions AOL tech support asks

From Spare me the details (The Economist: 28 October 2004):

LISA HOOK, an executive at AOL, one of the biggest providers of traditional (“dial-up”) internet access, has learned amazing things by listening in on the calls to AOL’s help desk. Usually, the problem is that users cannot get online. The help desk’s first question is: “Do you have a computer?” Surprisingly often the answer is no, and the customer was trying to shove the installation CD into the stereo or TV set. The help desk’s next question is: “Do you have a second telephone line?” Again, surprisingly often the answer is no, which means that the customer cannot get on to the internet because he is on the line to the help desk. And so it goes on. …

1st 2 questions AOL tech support asks Read More »

Remote fingerprinting of devices connected to the Net

Anonymous Internet access is now a thing of the past. A doctoral student at the University of California has conclusively fingerprinted computer hardware remotely, allowing it to be tracked wherever it is on the Internet.

In a paper on his research, primary author and Ph.D. student Tadayoshi Kohno said: “There are now a number of powerful techniques for remote operating system fingerprinting, that is, remotely determining the operating systems of devices on the Internet. We push this idea further and introduce the notion of remote physical device fingerprinting … without the fingerprinted device’s known cooperation.”

The potential applications for Kohno’s technique are impressive. For example, “tracking, with some probability, a physical device as it connects to the Internet from different access points, counting the number of devices behind a NAT even when the devices use constant or random IP identifications, remotely probing a block of addresses to determine if the addresses correspond to virtual hosts (for example, as part of a virtual honeynet), and unanonymising anonymised network traces.” …

Another application for Kohno’s technique is to “obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device.”

The technique works by “exploiting small, microscopic deviations in device hardware: clock skews.” In practice, Kohno’s paper says, his techniques “exploit the fact that most modern TCP stacks implement the TCP timestamps option from RFC 1323 whereby, for performance purposes, each party in a TCP flow includes information about its perception of time in each outgoing packet. A fingerprinter can use the information contained within the TCP headers to estimate a device’s clock skew and thereby fingerprint a physical device.”

Kohno goes on to say: “Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall.”

And the paper stresses that “For all our methods, we stress that the fingerprinter does not require any modification to or cooperation from the fingerprintee.” Kohno and his team tested their techniques on many operating systems, including Windows XP and 2000, Mac OS X Panther, Red Hat and Debian Linux, FreeBSD, OpenBSD and even Windows for Pocket PCs 2002.
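The underlying estimation problem is simple even if the paper’s statistics are not: each captured packet pairs the observer’s receive time with the sender’s TCP timestamp, and the rate at which those two clocks drift apart is the skew. The sketch below is only an illustration of that idea, not Kohno’s implementation; the 100 Hz tick rate, the least-squares fit, and the sample trace are all assumptions made for the example (the paper estimates the tick rate and uses more robust fitting).

```python
# Illustrative sketch only (not Kohno's code): estimate a sender's clock skew
# from (local receive time, sender TCP timestamp) pairs by fitting a line to
# the drift of the timestamp-derived clock against the local clock.

def estimate_skew_ppm(samples, hz=100):
    """samples: list of (recv_time_seconds, tcp_timestamp) from one host.
    hz: the sender's assumed timestamp tick rate (RFC 1323 leaves it to the OS)."""
    t0, ts0 = samples[0]
    xs = [t - t0 for t, _ in samples]                              # elapsed local time
    ys = [(ts - ts0) / hz - x for (_, ts), x in zip(samples, xs)]  # drift vs. local clock
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 1e6                                             # skew in parts per million

# Hypothetical packet trace from a host whose clock runs about 50 ppm fast.
trace = [(t, int(t * (1 + 50e-6) * 100)) for t in range(0, 3600, 30)]
print(round(estimate_skew_ppm(trace)))                             # prints roughly 50
```

In practice the (time, timestamp) pairs would come from captured TCP headers; that capture step is omitted here.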

Remote fingerprinting of devices connected to the Net Read More »

IE unsafe 98% of the time

From Noam Eppel’s “Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security”:

The security company Scanit recently conducted a survey which tracked three web browsers (MSIE, Firefox, Opera) in 2004 and counted which days they were “known unsafe.” Their definition of “known unsafe”: a remotely exploitable security vulnerability had been publicly announced and no patch was yet available. Microsoft Internet Explorer, which is the most popular browser in use today and installed by default on most Windows-based computers, was 98% unsafe. Astonishingly, there were only 7 days in 2004 without an unpatched publicly disclosed security hole. Read that last sentence again if you have to.
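The two figures are consistent with each other: assuming the survey covered the full 2004 calendar year (366 days, since 2004 was a leap year), 7 safe days leaves roughly 98 percent of the year “known unsafe”.

```python
# Quick arithmetic check of the 98% claim, assuming the full 2004 calendar year.
days_in_2004 = 366            # 2004 was a leap year
safe_days = 7                 # days with no unpatched, publicly disclosed hole
print(f"{(days_in_2004 - safe_days) / days_in_2004:.1%}")   # 98.1%
```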

IE unsafe 98% of the time Read More »

The Flash Worm, AKA the Warhol Worm

From Noam Eppel’s “Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security”:

In 2001, the infamous Code Red Worm was infecting a remarkable 2,000 new hosts each minute. Nick Weaver at UC Berkeley proposed the possibility of a “Flash Worm” which could spread across the Internet and infect all vulnerable servers in less than 15 minutes. A well engineered flash worm could spread worldwide in a matter of seconds.

The Flash Worm, AKA the Warhol Worm Read More »

A technical look at the Morris Worm of 1988

From Donn Seeley’s “The Internet Worm of 1988: A Tour of the Worm”:

November 3, 1988 is already coming to be known as Black Thursday. System administrators around the country came to work on that day and discovered that their networks of computers were laboring under a huge load. If they were able to log in and generate a system status listing, they saw what appeared to be dozens or hundreds of “shell” (command interpreter) processes. If they tried to kill the processes, they found that new processes appeared faster than they could kill them. Rebooting the computer seemed to have no effect; within minutes after starting up again, the machine was overloaded by these mysterious processes.

… The worm had taken advantage of lapses in security on systems that were running 4.2 or 4.3 BSD UNIX or derivatives like SunOS. These lapses allowed it to connect to machines across a network, bypass their login authentication, copy itself and then proceed to attack still more machines. The massive system load was generated by multitudes of worms trying to propagate the epidemic. …

The worm consists of a 99-line bootstrap program written in the C language, plus a large relocatable object file that comes in VAX and Sun-3 flavors. …

The activities of the worm break down into the categories of attack and defense. Attack consists of locating hosts (and accounts) to penetrate, then exploiting security holes on remote systems to pass across a copy of the worm and run it. The worm obtains host addresses by examining the system tables /etc/hosts.equiv and /.rhosts, user files like .forward and .rhosts, dynamic routing information produced by the netstat program, and finally randomly generated host addresses on local networks. It ranks these by order of preference, trying a file like /etc/hosts.equiv first because it contains names of local machines that are likely to permit unauthenticated connections. Penetration of a remote system can be accomplished in any of three ways. The worm can take advantage of a bug in the finger server that allows it to download code in place of a finger request and trick the server into executing it. The worm can use a “trap door” in the sendmail SMTP mail service, exercising a bug in the debugging code that allows it to execute a command interpreter and download code across a mail connection. If the worm can penetrate a local account by guessing its password, it can use the rexec and rsh remote command interpreter services to attack hosts that share that account. In each case the worm arranges to get a remote command interpreter which it can use to copy over, compile and execute the 99-line bootstrap. The bootstrap sets up its own network connection with the local worm and copies over the other files it needs, and using these pieces a remote worm is built and the infection procedure starts over again. …
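As a purely illustrative aid to the description above, here is a benign sketch of just the target-gathering step: collecting candidate host names from the trust files the worm consulted and ranking the system-wide files first. It contains no exploit or network code; the file names come from the quote, while the function names and the hypothetical home directories are made up for the example.

```python
# Benign, illustrative sketch of the worm's target-gathering step only:
# read candidate host names from trust files and prefer the system-wide ones,
# which are most likely to allow unauthenticated connections. No attack code.
import os

SYSTEM_TRUST_FILES = ["/etc/hosts.equiv", "/.rhosts"]   # tried first
PER_USER_FILES = [".rhosts", ".forward"]                # per-user hints

def read_names(path):
    """Return the first token of each non-comment line, or [] if unreadable."""
    try:
        with open(path) as f:
            return [line.split()[0] for line in f
                    if line.strip() and not line.lstrip().startswith("#")]
    except OSError:
        return []

def candidate_hosts(home_dirs):
    ranked = []
    for path in SYSTEM_TRUST_FILES:
        ranked += read_names(path)
    for home in home_dirs:
        for name in PER_USER_FILES:
            ranked += read_names(os.path.join(home, name))
    seen, ordered = set(), []
    for host in ranked:                                  # de-duplicate, keep rank order
        if host not in seen:
            seen.add(host)
            ordered.append(host)
    return ordered

print(candidate_hosts(["/home/alice", "/home/bob"]))     # hypothetical home directories
```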

When studying a tricky program like this, it’s just as important to establish what the program does not do as what it does do. The worm does not delete a system’s files: it only removes files that it created in the process of bootstrapping. The program does not attempt to incapacitate a system by deleting important files, or indeed any files. It does not remove log files or otherwise interfere with normal operation other than by consuming system resources. The worm does not modify existing files: it is not a virus. The worm propagates by copying itself and compiling itself on each system; it does not modify other programs to do its work for it. Due to its method of infection, it can’t count on sufficient privileges to be able to modify programs. The worm does not install trojan horses: its method of attack is strictly active, it never waits for a user to trip over a trap. Part of the reason for this is that the worm can’t afford to waste time waiting for trojan horses; it must reproduce before it is discovered. Finally, the worm does not record or transmit decrypted passwords: except for its own static list of favorite passwords, the worm does not propagate cracked passwords on to new worms nor does it transmit them back to some home base. This is not to say that the accounts that the worm penetrated are secure merely because the worm did not tell anyone what their passwords were, of course; if the worm can guess an account’s password, certainly others can too. The worm does not try to capture superuser privileges: while it does try to break into accounts, it doesn’t depend on having particular privileges to propagate, and never makes special use of such privileges if it somehow gets them. The worm does not propagate over uucp or X.25 or DECNET or BITNET: it specifically requires TCP/IP. The worm does not infect System V systems unless they have been modified to use Berkeley network programs like sendmail, fingerd and rexec.

A technical look at the Morris Worm of 1988 Read More »

Clay Shirky on why the Semantic Web will fail

From Clay Shirky’s “The Semantic Web, Syllogism, and Worldview”:

What is the Semantic Web good for?

The simple answer is this: The Semantic Web is a machine for creating syllogisms. A syllogism is a form of logic, first described by Aristotle, where “…certain things being stated, something other than what is stated follows of necessity from their being so.” [Organon]

The canonical syllogism is:

Humans are mortal
Greeks are human
Therefore, Greeks are mortal

with the third statement derived from the previous two.

The Semantic Web is made up of assertions, e.g. “The creator of shirky.com is Clay Shirky.” Given the two statements

– Clay Shirky is the creator of shirky.com
– The creator of shirky.com lives in Brooklyn

you can conclude that I live in Brooklyn, something you couldn’t know from either statement on its own. From there, other expressions that include Clay Shirky, shirky.com, or Brooklyn can be further coupled.
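To make the mechanics concrete, here is a toy version of that chaining step, written as a few lines of Python rather than as RDF: two asserted triples yield a third fact stated in neither. It is only an illustration; real Semantic Web tooling uses RDF vocabularies and reasoners, not string matching.

```python
# Toy illustration of chaining two assertions to derive a third.
assertions = {
    ("Clay Shirky", "is creator of", "shirky.com"),
    ("creator of shirky.com", "lives in", "Brooklyn"),
}

def derive_residences(facts):
    derived = set()
    for person, relation, site in facts:
        if relation == "is creator of":
            for subject, relation2, place in facts:
                # match a statement made about "the creator of <site>"
                if subject == f"creator of {site}" and relation2 == "lives in":
                    derived.add((person, "lives in", place))
    return derived

print(derive_residences(assertions))
# {('Clay Shirky', 'lives in', 'Brooklyn')}
```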

The Semantic Web specifies ways of exposing these kinds of assertions on the Web, so that third parties can combine them to discover things that are true but not specified directly. This is the promise of the Semantic Web — it will improve all the areas of your life where you currently use syllogisms.

Which is to say, almost nowhere. …

Despite their appealing simplicity, syllogisms don’t work well in the real world, because most of the data we use is not amenable to such effortless recombination. As a result, the Semantic Web will not be very useful either. …

In the real world, we are usually operating with partial, inconclusive or context-sensitive information. When we have to make a decision based on this information, we guess, extrapolate, intuit, we do what we did last time, we do what we think our friends would do or what Jesus or Joan Jett would have done, we do all of those things and more, but we almost never use actual deductive logic. …

Syllogisms sound stilted in part because they traffic in absurd absolutes. …

There is a list of technologies that are actually political philosophy masquerading as code, a list that includes Xanadu, Freenet, and now the Semantic Web. The Semantic Web’s philosophical argument — the world should make more sense than it does — is hard to argue with. The Semantic Web, with its neat ontologies and its syllogistic logic, is a nice vision. However, like many visions that project future benefits but ignore present costs, it requires too much coordination and too much energy to effect in the real world, where deductive logic is less effective and shared worldview is harder to create than we often want to admit.

Clay Shirky on why the Semantic Web will fail Read More »

The structure & meaning of the URL as key to the Web’s success

From Clay Shirky’s “The Semantic Web, Syllogism, and Worldview”:

The systems that have succeeded at scale have made simple implementation the core virtue, up the stack from Ethernet over Token Ring to the web over gopher and WAIS. The most widely adopted digital descriptor in history, the URL, regards semantics as a side conversation between consenting adults, and makes no requirements in this regard whatsoever: sports.yahoo.com/nfl/ is a valid URL, but so is 12.0.0.1/ftrjjk.ppq. The fact that a URL itself doesn’t have to mean anything is essential — the Web succeeded in part because it does not try to make any assertions about the meaning of the documents it contained, only about their location.
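To make the point concrete: standard URL machinery extracts structure (scheme, host, path) but attaches no meaning to any of it, so the readable path and the gibberish one are handled identically. A small sketch using Python’s standard library, with schemes added since the quote omits them:

```python
# Both URLs parse the same way: structure, yes; semantics, none.
from urllib.parse import urlsplit

for url in ("http://sports.yahoo.com/nfl/", "http://12.0.0.1/ftrjjk.ppq"):
    parts = urlsplit(url)
    print(parts.netloc, parts.path)
# sports.yahoo.com /nfl/
# 12.0.0.1 /ftrjjk.ppq
```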

The structure & meaning of the URL as key to the Web’s success Read More »

What’s a socio-technical system?

From Ulises Ali Mejias’ “A del.icio.us study: Bookmark, Classify and Share: A mini-ethnography of social practices in a distributed classification community”:

A socio-technical system is composed of hardware, software, physical surroundings, people, procedures, laws and regulations, and data and data structures.

What’s a socio-technical system? Read More »

The difficulty of recovering from identity theft

From TechWeb News’s “One In Four Identity-Theft Victims Never Fully Recover”:

Making things right after a stolen identity can take months and cost thousands, a survey of identity theft victims released Tuesday said. Worse, in more than one in four cases, victims haven’t been able to completely restore their good name.

The survey, conducted by Nationwide Mutual Insurance Co., found that 28 percent of identity thieves’ marks aren’t able to reconstruct their identities even after more than a year of work. On average, victims spent 81 hours trying to resolve their case.

According to the poll, the average amount of total charges made using a victim’s identity was $3,968. Fortunately, most were not held responsible for the fraudulent charges; 16 percent, however, reported that they had to pay for some or all of the bogus purchases.

Other results posted by the survey were just as dispiriting. More than half of the victims discovered the theft on their own by noticing unusual charges on credit cards or depleted bank accounts, but that took time: on average, five and a half months passed between when the theft occurred and when it was spotted.

Only 17 percent were notified by a creditor or financial institution of suspicious activity, a figure that’s certain to fuel federal lawmakers pondering legislation that would require public disclosure of large data breaches.

The difficulty of recovering from identity theft Read More »

10 early choices that helped make the Internet successful

From Dan Gillmor’s “10 choices that were critical to the Net’s success”:

1) Make it all work on top of existing networks.

2) Use packets, not circuits.

3) Create a ‘routing’ function.

4) Split the Transmission Control Protocol (TCP) and Internet Protocol (IP) …

5) The National Science Foundation (NSF) funds the University of California-Berkeley to put TCP/IP into the Unix operating system originally developed by AT&T.

6) CSNET, an early network used by universities, connects with the ARPANET … The connection was for e-mail only, but it led to much more university research on networks and a more general understanding among students, faculty and staff of the value of internetworking.

7) The NSF requires users of the NSFNET to use TCP/IP, not competing protocols.

8) International telecommunications standards bodies reject TCP/IP, then create a separate standard called OSI.

9) The NSF creates an “Acceptable Use Policy” restricting NSFNET use to noncommercial activities.

10) Once things start to build, government stays mostly out of the way.

10 early choices that helped make the Internet successful Read More »

Flat local calling rates in US helped grow the Net

From Andrew Odlyzko’s “Pricing and Architecture of the Internet: Historical Perspectives from Telecommunications and Transportation”:

Moreover, flat rates for local calling played a key role in the rise of the Internet, by promoting much faster spread of this technology in the U.S. than in other countries. (This, as well as the FCC decisions about keeping Internet calls free from access charges, should surely be added to the list of “the 10 key choices that were critical to the Net’s success,” that were compiled by Scott Bradner [28].)

Flat local calling rates in US helped grow the Net Read More »

Monopolies & Internet innovation

From Andrew Odlyzko’s “Pricing and Architecture of the Internet: Historical Perspectives from Telecommunications and Transportation”:

The power to price discriminate, especially for a monopolist, is like the power of taxation, something that can be used to destroy. There are many governments that are interested in controlling Internet traffic for political or other reasons, and are interfering (with various degrees of success) with the end-to-end principle. However, in most democratic societies, the pressure to change the architecture of the Internet is coming primarily from economic concerns, trying to extract more revenues from users. This does not necessarily threaten political liberty, but it does impede innovation. If some new protocol or service is invented, gains from its use could be appropriated by the carriers if they could impose special charges for it.

The power of price discrimination was well understood in ancient times, even if the economic concept was not defined. As the many historical vignettes presented before show, differential pricing was frequently allowed, but only to a controlled degree. The main concern in the early days was about general fairness and about service providers leveraging their control of a key facility into control over other businesses. Personal discrimination was particularly hated, and preference was given to general rules applying to broad classes (such as student or senior citizen discounts today). Very often bounds on charges were imposed to limit price discrimination. …

Openness, non-discrimination, and the end-to-end principle have contributed greatly to the success of the Internet, by allowing innovation to flourish. Service providers have traditionally been very poor in introducing services that mattered and even in forecasting where their profits would come from. Sometimes this was because of ignorance, as in the failure of WAP and success of SMS, both of which came as great surprises to the wireless industry, even though this should have been the easiest thing to predict [55]. Sometimes it was because the industry tried to control usage excessively. For example, services such as Minitel have turned out to be disappointments for their proponents largely because of the built-in limitations. We can also recall the attempts by the local telephone monopolies in the mid- to late-1990s to impose special fees on Internet access calls. Various studies were trotted out about the harm that long Internet calls were causing to the network. In retrospect, though, Internet access was a key source of the increased revenues and profits at the local telcos in the late 1990s. Since the main value of the phone was its accessibility at any time, long Internet calls led to installation of second lines that were highly profitable for service providers. (The average length of time that a phone line was in use remained remarkably constant during that period [49].)

Much of the progress in telecommunications over the last couple of decades was due to innovations by users. The “killer apps” on the Internet, email, Web, browser, search engines, and Napster, were all invented by end users, not by carriers. (Even email was specifically not designed into the ARPANET, the progenitor of the Internet, and its dominance came as a surprise [55].)

Monopolies & Internet innovation Read More »

Arguments against the Web’s ungovernability

From Technology Review’s “Taming the Web”:

Nonetheless, the claim that the Internet is ungovernable by its nature is more of a hope than a fact. It rests on three widely accepted beliefs, each of which has become dogma to webheads. First, the Net is said to be too international to oversee: there will always be some place where people can set up a server and distribute whatever they want. Second, the Net is too interconnected to fence in: if a single person has something, he or she can instantly make it available to millions of others. Third, the Net is too full of hackers: any effort at control will invariably be circumvented by the world’s army of amateur tinkerers, who will then spread the workaround everywhere.

Unfortunately, current evidence suggests that two of the three arguments for the Net’s uncontrollability are simply wrong; the third, though likely to be correct, is likely to be irrelevant. In consequence, the world may well be on the path to a more orderly electronic future, one in which the Internet can and will be controlled. If so, the important question is not whether the Net can be regulated and monitored, but how and by whom. …

As Swaptor shows, the Net can be accessed from anywhere in theory, but as a practical matter, most out-of-the-way places don’t have the requisite equipment. And even if people do actually locate their services in a remote land, they can be easily discovered. …

Rather than being composed of an uncontrollable, shapeless mass of individual rebels, Gnutella-type networks have identifiable, centralized targets that can easily be challenged, shut down or sued. Obvious targets are the large backbone machines, which, according to peer-to-peer developers, can be identified by sending out multiple searches and requests. By tracking the answers and the number of hops they take between computers, it is possible not only to identify the Internet addresses of important sites but also to pinpoint their locations within the network.

Once central machines have been identified, companies and governments have a potent legal weapon against them: their Internet service providers. …

In other words, those who claim that the Net cannot be controlled because the world’s hackers will inevitably break any protection scheme are not taking into account that the Internet runs on hardware – and that this hardware is, in large part, the product of marketing decisions, not technological givens.

Arguments against the Web’s ungovernability Read More »

Security will retard innovation

From Technology Review’s “Terror’s Server”:

Zittrain [Jonathan Zittrain, codirector of the Berkman Center for Internet and Society at Harvard Law School] concurs with Neumann [Peter Neumann, a computer scientist at SRI International, a nonprofit research institute in Menlo Park, CA] but also predicts an impending overreaction. Terrorism or no terrorism, he sees a convergence of security, legal, and business trends that will force the Internet to change, and not necessarily for the better. “Collectively speaking, there are going to be technological changes to how the Internet functions — driven either by the law or by collective action. If you look at what they are doing about spam, it has this shape to it,” Zittrain says. And while technological change might improve online security, he says, “it will make the Internet less flexible. If it’s no longer possible for two guys in a garage to write and distribute killer-app code without clearing it first with entrenched interests, we stand to lose the very processes that gave us the Web browser, instant messaging, Linux, and e-mail.”

Security will retard innovation Read More »

How terrorists use the Web

From Technology Review’s “Terror’s Server”:

According to [Gabriel] Weimann [professor of communications at University of Haifa], the number of [terror-related] websites has leapt from only 12 in 1997 to around 4,300 today. …

These sites serve as a means to recruit members, solicit funds, and promote and spread ideology. …

The September 11 hijackers used conventional tools like chat rooms and e-mail to communicate and used the Web to gather basic information on targets, says Philip Zelikow, a historian at the University of Virginia and the former executive director of the 9/11 Commission. …

Finally, terrorists are learning that they can distribute images of atrocities with the help of the Web. … “The Internet allows a small group to publicize such horrific and gruesome acts in seconds, for very little or no cost, worldwide, to huge audiences, in the most powerful way,” says Weimann. …

How terrorists use the Web Read More »

What is a socio-technical system?

From “Why a Socio-Technical System?”:

You have divined by now that a socio-technical system is a mixture of people and technology. It is, in fact, a much more complex mixture. Below, we outline many of the items that may be found in an STS. In the notes, we will make the case that many of the individual items of a socio-technical system are difficult to distinguish from each other because of their close inter-relationships.

Socio-technical systems include:

Hardware. Mainframes, workstations, peripherals, connecting networks. This is the classic meaning of technology. It is hard to imagine a socio-technical system without some hardware component (though we welcome suggestions). In our above examples, the hardware is the microcomputers and their connecting wires, hubs, routers, etc.

Software. Operating systems, utilities, application programs, specialized code. It is getting increasingly hard to tell the difference between software and hardware, but we expect that software is likely to be an integral part of any socio-technical system. Software (and by implication, hardware too) often incorporates social rules and organizational procedures as part of its design (e.g. optimize these parameters, ask for these data, store the data in these formats, etc.). Thus, software can serve as a stand-in for some of the factors listed below, and the incorporation of social rules into the technology can make these rules harder to see and harder to change. In the examples above, much of the software is likely to change from the emergency room to the elementary school. The software that does not change (e.g. the operating system) may have been designed more with one socio-technical system in mind (e.g. Unix was designed with an academic socio-technical system in mind). The re-use of this software in a different socio-technical system may cause problems of mismatch.

Physical surroundings. Buildings also influence and embody social rules, and their design can affect the ways that a technology is used. The manager’s office that is protected by a secretary’s office is one example; the large office suite with no walls is another. The physical environment of the military supplier and the elementary school are likely to be quite different, and some security issues may be handled by this physical environment rather than by the technology. Moving a technology that assumes one physical environment into a different environment may cause mismatch problems.

People. Individuals, groups, roles (support, training, management, line personnel, engineer, etc.), agencies. Note that we list here not just people (e.g. Mr. Jones) but roles (Mr. Jones, head of quality assurance), groups (Management staff in Quality Assurance) and agencies (The Department of Defense). In addition to his role as head of quality assurance, Mr. Jones may also have other roles (e.g. a teacher, a professional electrical engineer, etc.). The person in charge of the microcomputers in our example above may have very different roles in the different socio-technical systems, and these different roles will bring with them different responsibilities and ethical issues. Software and hardware designed assuming the kind of support one would find in a university environment may not match well with an elementary school or emergency room environment.

Procedures. Both official and actual, management models, reporting relationships, documentation requirements, data flow, rules & norms. Procedures describe the way things are done in an organization (or at least the official line regarding how they ought to be done). Both the official rules and their actual implementation are important in understanding a socio-technical system. In addition, there are norms about how things are done that allow organizations to work. These norms may not be specified (indeed, it might be counter-productive to specify them). But those who understand them know how to, for instance, make complaints, get a questionable part passed, and find answers to technical questions. Procedures are prime candidates to be encoded in software design.

Laws and regulations. These also are procedures like those above, but they carry special societal sanctions if the violators are caught. They might be laws regarding the protection of privacy, or regulations about the testing of chips in military use. These societal laws and regulations might be in conflict with internal procedures and rules. For instance, some companies have implicit expectations that employees will share (and probably copy) commercial software. Obviously these illegal expectations cannot be made explicit, but they can be made known.

Data and data structures. What data are collected, how they are archived, to whom they are made available, and the formats in which they are stored are all decisions that go into the design of a socio-technical system. Data archiving in an emergency room will be quite different from that in an insurance company, and will be subject to different ethical issues too.

What is a socio-technical system? Read More »

Man, I lived a lot of this

Ode to the 90s
Found on FuckedCompany.com
I part-time telecommuted
as a Webmaster
for a dot com
in Y2K consulting.
They said it was
temp-to-perm.
it didn't pay
but there were options.
I swung by the office to make trades.
(Not that there's anything
wrong with that.)
cause we had a T1 Line
and there was a bull market
with a strong,
virile President.
and you never knew
when it could
crash.
I was a millionaire at 27
for thirty seconds.
I dug grunge.
then eighties.
Tony Bennett.
then Chumbawumba.
how bizzare.
how bizzare.
smoked Cohibas.
(Not that there's anything
wrong with that.)
but I didn't inhale.
Alrighty, then...
I learned HTML
and swing dancing.
moved to Seattle
but I was back on the redeye.
why did I eat
those krispy kremes?
it all seemed like a good idea
at the time.
I had a Pentium III
yeah
baby
yeah
with 9 gigs and a DVD.
It can do anything
even play movies.
I fell in love
in a chatroom
with a .BMP
I got the .JPEG
I wasn't so sure.....
I got emails,
but I couldn't Reply
my server was down
and our IT can't handle the MIS.
And my email didn't allow enclosures...
her ICQ was in my PDA
but I upgraded and
the memory's gone.

[Boing Boing Blog]

Man, I lived a lot of this Read More »

Black Friday, now Cyber Monday

From "Ready, Aim, Shop" in The New York Times:

Though it sounds like slick marketing, Cyber Monday, it turns out, is a legitimate trend. According to Shop.org, a trade group, 77 percent of online retailers reported a substantial sales increase on the Monday after Thanksgiving last year. “Not good for employers,” observed Ed Bussey, senior vice president of marketing at the online lingerie retailer Figleaves.com.

Figleaves.com said sales on Cyber Monday last year were twice those of Black Friday. And that number is likely to jump this year when it offers the online equivalent of a doorbuster – 20 percent off all items.

Black Friday, now Cyber Monday Read More »