Paul Graham on software patents

From Paul Graham’s “Are Software Patents Evil?”:

The situation with patents is similar. Business is a kind of ritualized warfare. Indeed, it evolved from actual warfare: most early traders switched on the fly from merchant to pirate depending on how strong you seemed. In business there are certain rules describing how companies may and may not compete with one another, and someone deciding that they’re going to play by their own rules is missing the point. Saying “I’m not going to apply for patents just because everyone else does” is not like saying “I’m not going to lie just because everyone else does.” It’s more like saying “I’m not going to use TCP/IP just because everyone else does.” Oh yes you are.

A closer comparison might be someone seeing a hockey game for the first time, realizing with shock that the players were deliberately bumping into one another, and deciding that one would on no account be so rude when playing hockey oneself.

Hockey allows checking. It’s part of the game. If your team refuses to do it, you simply lose. So it is in business. Under the present rules, patents are part of the game. …

When you read of big companies filing patent suits against smaller ones, it’s usually a big company on the way down, grasping at straws. For example, Unisys’s attempts to enforce their patent on LZW compression. When you see a big company threatening patent suits, sell. When a company starts fighting over IP, it’s a sign they’ve lost the real battle, for users.

A company that sues competitors for patent infringement is like a defender who has been beaten so thoroughly that he turns to plead with the referee. You don’t do that if you can still reach the ball, even if you genuinely believe you’ve been fouled. So a company threatening patent suits is a company in trouble. …

In other words, no one will sue you for patent infringement till you have money, and once you have money, people will sue you whether they have grounds to or not. So I advise fatalism. Don’t waste your time worrying about patent infringement. You’re probably violating a patent every time you tie your shoelaces. At the start, at least, just worry about making something great and getting lots of users. If you grow to the point where anyone considers you worth attacking, you’re doing well.

We do advise the companies we fund to apply for patents, but not so they can sue competitors. Successful startups either get bought or grow into big companies. If a startup wants to grow into a big company, they should apply for patents to build up the patent portfolio they’ll need to maintain an armed truce with other big companies. If they want to get bought, they should apply for patents because patents are part of the mating dance with acquirers. …

Patent trolls are companies consisting mainly of lawyers whose whole business is to accumulate patents and threaten to sue companies who actually make things. Patent trolls, it seems safe to say, are evil. I feel a bit stupid saying that, because when you’re saying something that Richard Stallman and Bill Gates would both agree with, you must be perilously close to tautologies.

Douglas Adams on information overload

From Douglas Adams’s “Is there an Artificial God?”:

Let me back up for a minute and talk about the way we communicate. Traditionally, we have a bunch of different ways in which we communicate with each other. One way is one-to-one; we talk to each other, have a conversation. Another is one-to-many, which I’m doing at the moment, or someone could stand up and sing a song, or announce we’ve got to go to war. Then we have many-to-one communication; we have a pretty patchy, clunky, not-really-working version we call democracy, but in a more primitive state I would stand up and say, ‘OK, we’re going to go to war’ and some may shout back ‘No we’re not!’ – and then we have many-to-many communication in the argument that breaks out afterwards!

In this century (and the previous century) we modelled one-to-one communications in the telephone, which I assume we are all familiar with. We have one-to-many communication—boy do we have an awful lot of that; broadcasting, publishing, journalism, etc.—we get information poured at us from all over the place and it’s completely indiscriminate as to where it might land. It’s curious, but we don’t have to go very far back in our history until we find that all the information that reached us was relevant to us and therefore anything that happened, any news, whether it was about something that’s actually happened to us, in the next house, or in the next village, within the boundary or within our horizon, it happened in our world and if we reacted to it the world reacted back. It was all relevant to us, so for example, if somebody had a terrible accident we could crowd round and really help. Nowadays, because of the plethora of one-to-many communication we have, if a plane crashes in India we may get terribly anxious about it but our anxiety doesn’t have any impact. We’re not very well able to distinguish between a terrible emergency that’s happened to somebody a world away and something that’s happened to someone round the corner. We can’t really distinguish between them any more, which is why we get terribly upset by something that has happened to somebody in a soap opera that comes out of Hollywood and maybe less concerned when it’s happened to our sister. We’ve all become twisted and disconnected and it’s not surprising that we feel very stressed and alienated in the world because the world impacts on us but we don’t impact the world. Then there’s many-to-one; we have that, but not very well yet and there’s not much of it about. Essentially, our democratic systems are a model of that and though they’re not very good, they will improve dramatically.

But the fourth, the many-to-many, we didn’t have at all before the coming of the Internet, which, of course, runs on fibre-optics. It’s communication between us …

Why businesses want RFID for inventory

From Technology Review’s “Tracking Privacy”:

Technology Review: How would RFID work to track products?

Sandra Hughes [Global privacy executive, Procter and Gamble]: It’s a technology that involves a silicon chip and an antenna, which together we call a tag. The tags emit radio signals to devices that we call readers. One of the things that is important to know about is EPC. Some people use RFID and EPC interchangeably, but they are different. EPC stands for electronic product code; it’s really like an electronic bar code.

TR: So manufacturers and distributors would use EPCs encoded in RFID tags to mark and track products? Why’s that any better than using regular bar codes?

Hughes: Bar codes require a line of sight, so somebody with a bar code reader has to get right up on the bar code and scan it. When you’re thinking about the supply chain, somebody in the warehouse is having to look at every single case. With RFID, a reader should be able to pick up just by one swipe all of the cases on the pallet, even the ones stacked up in the middle that can’t be seen. So it’s much, much faster and more efficient and accurate.

TR: Why is that speed important?

Hughes: We want our product to be on the shelf for consumers when they want it. A recent study of retailers showed that the top 2,000 items in stores had a 12 percent out-of-stock rate on Saturday afternoons, the busiest shopping day. I think the industry average for inventory levels is 65 days, which means products sitting around, taking up space for that time, and that costs about $3 billion annually. Often a retail clerk can’t quickly find products in the crowded back room of a store to make sure that the shelves are filled for the consumer, or doesn’t know that a shelf is sitting empty because she hasn’t walked by lately. With RFID, the shelf can signal to the back room that it is empty, and the clerk can quickly find the product.
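
To make the contrast concrete, here is a toy sketch (in TypeScript, with invented names and example EPC codes) of why one RFID sweep beats case-by-case barcode scanning: the barcode reader only sees cases in its line of sight, while the RFID reader picks up every tag on the pallet.

```typescript
// Hypothetical toy model: "cases" on a pallet, each carrying an EPC tag.
interface TaggedCase {
  epc: string;       // electronic product code stored on the RFID tag
  visible: boolean;  // whether a barcode scanner has line of sight to it
}

// Barcode model: only cases in the line of sight can be scanned, one at a time.
function barcodeScan(pallet: TaggedCase[]): string[] {
  return pallet.filter(c => c.visible).map(c => c.epc);
}

// RFID model: one pass of the reader picks up every tag in range,
// including cases buried in the middle of the pallet.
function rfidRead(pallet: TaggedCase[]): string[] {
  return pallet.map(c => c.epc);
}

const pallet: TaggedCase[] = [
  { epc: "urn:epc:id:sgtin:0614141.107346.2017", visible: true },
  { epc: "urn:epc:id:sgtin:0614141.107346.2018", visible: false }, // stacked inside
  { epc: "urn:epc:id:sgtin:0614141.107346.2019", visible: true },
];

console.log(barcodeScan(pallet).length); // 2 -- misses the hidden case
console.log(rfidRead(pallet).length);    // 3 -- the whole pallet in one swipe
```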

Arguments against the Web’s ungovernability

From Technology Review’s “Taming the Web”:

Nonetheless, the claim that the Internet is ungovernable by its nature is more of a hope than a fact. It rests on three widely accepted beliefs, each of which has become dogma to webheads. First, the Net is said to be too international to oversee: there will always be some place where people can set up a server and distribute whatever they want. Second, the Net is too interconnected to fence in: if a single person has something, he or she can instantly make it available to millions of others. Third, the Net is too full of hackers: any effort at control will invariably be circumvented by the world’s army of amateur tinkerers, who will then spread the workaround everywhere.

Unfortunately, current evidence suggests that two of the three arguments for the Net’s uncontrollability are simply wrong; the third, though likely to be correct, is likely to be irrelevant. In consequence, the world may well be on the path to a more orderly electronic future, one in which the Internet can and will be controlled. If so, the important question is not whether the Net can be regulated and monitored, but how and by whom. …

As Swaptor shows, the Net can be accessed from anywhere in theory, but as a practical matter, most out-of-the-way places don’t have the requisite equipment. And even if people do actually locate their services in a remote land, they can be easily discovered. …

Rather than being composed of an uncontrollable, shapeless mass of individual rebels, Gnutella-type networks have identifiable, centralized targets that can easily be challenged, shut down or sued. Obvious targets are the large backbone machines, which, according to peer-to-peer developers, can be identified by sending out multiple searches and requests. By tracking the answers and the number of hops they take between computers, it is possible not only to identify the Internet addresses of important sites but also to pinpoint their locations within the network.
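
A rough sketch of that probing idea, assuming a simplified overlay network and invented names: probe from a few points, reconstruct the paths answers would take back, and tally which peers keep showing up as relays. The peers with the highest counts are the backbone machines the article describes.

```typescript
// graph: peer -> directly connected peers (an overlay like Gnutella's).
type Graph = Map<string, string[]>;

// BFS from a probe point, remembering each peer's predecessor so we can
// reconstruct the path an answer would take back to the prober.
function bfsPredecessors(graph: Graph, start: string): Map<string, string> {
  const pred = new Map<string, string>();
  const queue = [start];
  const seen = new Set([start]);
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const next of graph.get(node) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        pred.set(next, node);
        queue.push(next);
      }
    }
  }
  return pred;
}

// Tally how often each peer relays an answer: peers that sit on many
// reply paths are the large, identifiable backbone machines.
function relayCounts(graph: Graph, probes: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const probe of probes) {
    const pred = bfsPredecessors(graph, probe);
    for (const responder of graph.keys()) {
      let hop = pred.get(responder);
      while (hop !== undefined && hop !== probe) {
        counts.set(hop, (counts.get(hop) ?? 0) + 1);
        hop = pred.get(hop);
      }
    }
  }
  return counts;
}
```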

Once central machines have been identified, companies and governments have a potent legal weapon against them: their Internet service providers. …

In other words, those who claim that the Net cannot be controlled because the world’s hackers will inevitably break any protection scheme are not taking into account that the Internet runs on hardware – and that this hardware is, in large part, the product of marketing decisions, not technological givens.

Security will retard innovation

From Technology Review’s “Terror’s Server”:

Zittrain [Jonathan Zittrain, codirector of the Berkman Center for Internet and Society at Harvard Law School] concurs with Neumann [Peter Neumann, a computer scientist at SRI International, a nonprofit research institute in Menlo Park, CA] but also predicts an impending overreaction. Terrorism or no terrorism, he sees a convergence of security, legal, and business trends that will force the Internet to change, and not necessarily for the better. “Collectively speaking, there are going to be technological changes to how the Internet functions — driven either by the law or by collective action. If you look at what they are doing about spam, it has this shape to it,” Zittrain says. And while technological change might improve online security, he says, “it will make the Internet less flexible. If it’s no longer possible for two guys in a garage to write and distribute killer-app code without clearing it first with entrenched interests, we stand to lose the very processes that gave us the Web browser, instant messaging, Linux, and e-mail.”

Terrorist social networks

From Technology Review’s “Terror’s Server”:

For example, research suggests that people with nefarious intent tend to exhibit distinct patterns in their use of e-mails or online forums like chat rooms. Whereas most people establish a wide variety of contacts over time, those engaged in plotting a crime tend to keep in touch only with a very tight circle of people, says William Wallace, an operations researcher at Rensselaer Polytechnic Institute.

This phenomenon is quite predictable. “Very few groups of people communicate repeatedly only among themselves,” says Wallace. “It’s very rare; they don’t trust people outside the group to communicate. When 80 percent of communications is within a regular group, this is where we think we will find the groups who are planning activities that are malicious.” Of course, not all such groups will prove to be malicious; the odd high-school reunion will crop up. But Wallace’s group is developing an algorithm that will narrow down the field of so-called social networks to those that warrant the scrutiny of intelligence officials. The algorithm is scheduled for completion and delivery to intelligence agencies this summer. …
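
A minimal sketch of that 80 percent heuristic, with invented types and names; Wallace’s actual algorithm is not published, so this only illustrates the idea of flagging groups whose traffic stays mostly internal.

```typescript
interface Message { from: string; to: string; }

// Fraction of a group's outgoing traffic that stays inside the group.
function internalRatio(group: Set<string>, messages: Message[]): number {
  const sent = messages.filter(m => group.has(m.from));
  if (sent.length === 0) return 0;
  const internal = sent.filter(m => group.has(m.to));
  return internal.length / sent.length;
}

// Narrow the field to groups that warrant closer scrutiny.
function flagTightGroups(
  groups: Set<string>[],
  messages: Message[],
  threshold = 0.8, // "when 80 percent of communications is within a regular group"
): Set<string>[] {
  return groups.filter(g => internalRatio(g, messages) >= threshold);
}
```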

How terrorists use the Web

From Technology Review’s “Terror’s Server”:

According to [Gabriel] Weimann [professor of communications at University of Haifa], the number of [terror-related] websites has leapt from only 12 in 1997 to around 4,300 today. …

These sites serve as a means to recruit members, solicit funds, and promote and spread ideology. …

The September 11 hijackers used conventional tools like chat rooms and e-mail to communicate and used the Web to gather basic information on targets, says Philip Zelikow, a historian at the University of Virginia and the former executive director of the 9/11 Commission. …

Finally, terrorists are learning that they can distribute images of atrocities with the help of the Web. … “The Internet allows a small group to publicize such horrific and gruesome acts in seconds, for very little or no cost, worldwide, to huge audiences, in the most powerful way,” says Weimann. …

When to use XML

From W3C’s “Architecture of the World Wide Web, Volume One”:

XML defines textual data formats that are naturally suited to describing data objects which are hierarchical and processed in a chosen sequence. It is widely, but not universally, applicable for data formats; an audio or video format, for example, is unlikely to be well suited to expression in XML. Design constraints that would suggest the use of XML include:

1. Requirement for a hierarchical structure.
2. Need for a wide range of tools on a variety of platforms.
3. Need for data that can outlive the applications that currently process it.
4. Ability to support internationalization in a self-describing way that makes confusion over coding options unlikely.
5. Early detection of encoding errors with no requirement to “work around” such errors.
6. A high proportion of human-readable textual content.
7. Potential composition of the data format with other XML-encoded formats.
8. Desire for data easily parsed by both humans and machines.
9. Desire for vocabularies that can be invented in a distributed manner and combined flexibly.
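
A small illustration of several of these constraints: hierarchy (1), human-readable text (6), and data easily parsed by both humans and machines (8). The sketch assumes a browser or any other runtime that provides the standard DOMParser.

```typescript
// A tiny XML document: nested, self-describing, and readable as plain text.
const xml = `
<book lang="en">
  <title>Architecture of the World Wide Web</title>
  <editors>
    <editor>Ian Jacobs</editor>
    <editor>Norman Walsh</editor>
  </editors>
</book>`;

// DOMParser is a web standard API for turning XML text into a tree.
const doc = new DOMParser().parseFromString(xml, "application/xml");
const title = doc.querySelector("title")?.textContent;
console.log(title); // "Architecture of the World Wide Web"
```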

The growth in data & the problem of storage

From Technology Review’s “The Fading Memory of the State”:

Tom Hawk, general manager for enterprise storage at IBM, says that in the next three years, humanity will generate more data–from websites to digital photos and video–than it generated in the previous 1,000 years. … In 1996, companies spent 11 percent of their IT budgets on storage, but that figure will likely double to 22 percent in 2007, according to International Technology Group of Los Altos, CA.

… the Pentagon generates tens of millions of images from personnel files each year; the Clinton White House generated 38 million e-mail messages (and the current Bush White House is expected to generate triple that number); and the 2000 census returns were converted into more than 600 million TIFF-format image files, some 40 terabytes of data. A single patent application can contain a million pages, plus complex files like 3-D models of proteins or CAD drawings of aircraft parts. All told, NARA expects to receive 347 petabytes … of electronic records by 2022.

Currently, the Archives holds only a trivial number of electronic records. Stored on steel racks in NARA’s [National Archives and Records Administration] 11-year-old facility in College Park, the digital collection adds up to just five terabytes. Most of it consists of magnetic tapes of varying ages, many of them holding a mere 200 megabytes apiece–about the size of 10 high-resolution digital photographs.

Cameraphones are different cameras & different phones

From David Pescovitz’s “The Big Picture”:

Mobile researcher John Poisson, CEO of the Fours Initiative, focuses on how cameraphones could revolutionize photography and communication — if people would only start using them more.

As the leader of Sony Corporation’s mobile media research and design groups in Tokyo, John Poisson spent two years focused on how people use cameraphones, and why they don’t use them more often.

TheFeature: What have you learned over the course of your research?

Poisson: People think of the cameraphone as a more convenient tool for digital photography, an extension of the digital camera. That’s missing the mark. The mobile phone is a communications device. The minute you attach a camera to that, and give people the ability to share the content that they’re creating in real time, the dynamic changes significantly.

TheFeature: Aren’t providers already developing applications to take advantage of that shift?

Poisson: Well, we have things like the ability to moblog, to publish pictures to a blog, which is not necessarily the most relevant model to consumers. Those tools are developed by people who understand blogging and apply it in their daily lives. But it ignores the trend that we and Mimi Ito and others are seeing as part of the evolution of photography. If you look at the way people have (historically) used cameras, it started off with portraiture and photographs of record — formalized photographs with a capital “P.” Then as the technology evolved, we had this notion of something called a snapshot, which is much more informal. People could take a higher number of pictures with not so much concern over composition. It was more about capturing an experience than photographing something. The limit of that path was the Polaroid. It was about taking the picture and sharing it instantly. What we have today is the ability to create a kind of distributed digital manifestation of that process.

Culture, values, & designing technology systems

From danah boyd’s “G/localization: When Global Information and Local Interaction Collide”:

Culture is the set of values, norms and artifacts that influence people’s lives and worldview. Culture is embedded in material objects and in conceptual frameworks about how the world works. …

People are a part of multiple cultures – the most obvious of which are constructed by religion and nationality, but there are all sorts of cultures that form from identities and communities of practice. … Identification and participation in that culture means sharing a certain set of cultural values and ideas about how the world should work. …

Cultural norms evolve over time, influenced by people, their practices, and their environment. Culture is written into law and laws influence the evolution of culture. Cultures develop their own symbols as a way of conveying information. Often, these symbols make sense to those within a culture but are not parsable to those outside. Part of becoming indoctrinated into a culture is learning the symbols of that culture. …

… there are numerous cultural forces affecting your life at all times. How you see the world and how you design or build technology is greatly influenced by the various cultural concepts you hold onto. …

… algorithms are simply the computer manifestation of a coder’s cultural norms.

Google on the Google File System (& Linux)

From Sanjay Ghemawat, Howard Gobioff, & Shun-Tak Leung’s “The Google File System”:

We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. …

The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. …

We have seen problems caused by application bugs, operating system bugs, human errors, and the failures of disks, memory, connectors, networking, and power supplies. Therefore, constant monitoring, error detection, fault tolerance, and automatic recovery must be integral to the system.

Second, files are huge by traditional standards. Multi-GB files are common. Each file typically contains many application objects such as web documents. When we are regularly working with fast growing data sets of many TBs comprising billions of objects, it is unwieldy to manage billions of approximately KB-sized files even when the file system could support it. As a result, design assumptions and parameters such as I/O operation and block sizes have to be revisited.

Third, most files are mutated by appending new data rather than overwriting existing data. Random writes within a file are practically non-existent. Once written, the files are only read, and often only sequentially. …
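
This is not GFS code, but a tiny sketch of the workload those assumptions describe: many small records packed into a few huge, append-only chunk files rather than billions of KB-sized files. It assumes Node.js; the chunk size and all names are invented.

```typescript
import { appendFileSync, statSync, existsSync } from "fs";

const CHUNK_LIMIT = 64 * 1024 * 1024; // a deliberately large block size

let chunkIndex = 0;

function currentChunk(): string {
  return `log.chunk.${chunkIndex}`;
}

// Records are only ever appended; existing bytes are never overwritten.
function appendRecord(record: string): void {
  const path = currentChunk();
  if (existsSync(path) && statSync(path).size >= CHUNK_LIMIT) {
    chunkIndex += 1; // roll to a new huge file rather than rewrite in place
  }
  appendFileSync(currentChunk(), record + "\n");
}

appendRecord(JSON.stringify({ url: "http://example.com/", fetched: Date.now() }));
```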

Multiple GFS clusters are currently deployed for different purposes. The largest ones have over 1000 storage nodes, over 300 TB of disk storage, and are heavily accessed by hundreds of clients on distinct machines on a continuous basis. …

Despite occasional problems, the availability of Linux code has helped us time and again to explore and understand system behavior. When appropriate, we improve the kernel and share the changes with the open source community.

The original description of Ajax

From Jesse James Garrett’s “Ajax: A New Approach to Web Applications”:

Ajax isn’t a technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

  • standards-based presentation using XHTML and CSS;
  • dynamic display and interaction using the Document Object Model;
  • data interchange and manipulation using XML and XSLT;
  • asynchronous data retrieval using XMLHttpRequest;
  • and JavaScript binding everything together.

The classic web application model works like this: Most user actions in the interface trigger an HTTP request back to a web server. The server does some processing — retrieving data, crunching numbers, talking to various legacy systems — and then returns an HTML page to the client. It’s a model adapted from the Web’s original use as a hypertext medium, but as fans of The Elements of User Experience know, what makes the Web good for hypertext doesn’t necessarily make it good for software applications. …

An Ajax application eliminates the start-stop-start-stop nature of interaction on the Web by introducing an intermediary — an Ajax engine — between the user and the server. It seems like adding a layer to the application would make it less responsive, but the opposite is true.

Instead of loading a webpage, at the start of the session, the browser loads an Ajax engine — written in JavaScript and usually tucked away in a hidden frame. This engine is responsible for both rendering the interface the user sees and communicating with the server on the user’s behalf. The Ajax engine allows the user’s interaction with the application to happen asynchronously — independent of communication with the server. So the user is never staring at a blank browser window and an hourglass icon, waiting around for the server to do something. …

Every user action that normally would generate an HTTP request takes the form of a JavaScript call to the Ajax engine instead. Any response to a user action that doesn’t require a trip back to the server — such as simple data validation, editing data in memory, and even some navigation — the engine handles on its own. If the engine needs something from the server in order to respond — if it’s submitting data for processing, loading additional interface code, or retrieving new data — the engine makes those requests asynchronously, usually using XML, without stalling a user’s interaction with the application.
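
A minimal sketch of such an engine, with an invented endpoint and handler names: it answers from memory when it can and falls back to an asynchronous XMLHttpRequest, the API Garrett names above, when it must.

```typescript
type Handler = (response: string) => void;

const cache = new Map<string, string>(); // data the engine already holds in memory

// Every user action becomes a JavaScript call like this one instead of a
// full-page HTTP request.
function handleUserAction(query: string, render: Handler): void {
  // If no server round trip is needed, the engine answers on its own.
  const cached = cache.get(query);
  if (cached !== undefined) {
    render(cached);
    return;
  }
  // Otherwise the engine talks to the server asynchronously; the user
  // keeps interacting while the request is in flight.
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/search?q=" + encodeURIComponent(query), true);
  xhr.onload = () => {
    cache.set(query, xhr.responseText);
    render(xhr.responseText);
  };
  xhr.send();
}
```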

A very brief history of programming

From Brian Hayes’ “The Post-OOP Paradigm”:

The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when “the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” Half a century later, we’re still debugging.

The very first programs were written in pure binary notation: Both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine’s memory. Before you could call a subroutine, you had to calculate its address.

The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming.

Assembly language was a crucial early advance, but still the programmer had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x²+y² might require dozens of assembly-language instructions. Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x²+y² would be written simply as X**2+Y**2. Expressions of this kind are translated into binary form by a program called a compiler.

… By the 1960s large software projects were notorious for being late, overbudget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a “tar pit” and remarked, “Everyone seems to have been surprised by the stickiness of the problem.”

One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra’s brief letter to the editor titled “Go to statement considered harmful.” Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the goto command, which allows jumps into or out of the middle of a routine). Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs.

Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.

Object-oriented programming addresses these issues by packing both data and procedures—both nouns and verbs—into a single object. An object named triangle would have inside it some data structure representing a three-sided shape, but it would also include the procedures (called methods in this context) for acting on the data. To rotate a triangle, you send a message to the triangle object, telling it to rotate itself. Sending and receiving messages is the only way objects communicate with one another; outsiders are not allowed direct access to the data. Because only the object’s own methods know about the internal data structures, it’s easier to keep them in sync.

You define the class triangle just once; individual triangles are created as instances of the class. A mechanism called inheritance takes this idea a step further. You might define a more-general class polygon, which would have triangle as a subclass, along with other subclasses such as quadrilateral, pentagon and hexagon. Some methods would be common to all polygons; one example is the calculation of perimeter, which can be done by adding the lengths of the sides, no matter how many sides there are. If you define the method calculate-perimeter in the class polygon, all the subclasses inherit this code.
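
Hayes is describing the classic textbook example; here is a short sketch of it, written in TypeScript rather than any particular historical OO language.

```typescript
// Perimeter is defined once on the general class; every subclass inherits it.
class Polygon {
  constructor(protected sides: number[]) {}

  // Works for any polygon: add up the side lengths, however many there are.
  calculatePerimeter(): number {
    return this.sides.reduce((sum, side) => sum + side, 0);
  }
}

class Triangle extends Polygon {
  constructor(a: number, b: number, c: number, private angle = 0) {
    super([a, b, c]);
  }

  // "Sending a message" to the object: the triangle rotates itself;
  // outsiders never touch its internal data directly.
  rotate(degrees: number): void {
    this.angle = (this.angle + degrees) % 360;
  }
}

const t = new Triangle(3, 4, 5);
t.rotate(90);
console.log(t.calculatePerimeter()); // 12 -- inherited from Polygon
```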

Free markets need visibility to work

From Slashdot’s “Pay-per-email and the ‘Market Myth’”:

But I think there’s a bigger problem underlying all of this. It’s not about specific problems with GoodMail’s or AOL’s or Hotmail’s system. The problem is that many advocates of these systems say that any flaws will get sorted out automatically by “the market” — and in this case I think that is simply wrong. And in fact the people on Thursday’s panel can’t really believe it either, because one thing we all agreed on was that Bonded Sender sucks. But has the marketplace punished Hotmail for using it? Have people left in droves because non-Bonded-Sender e-mail gets blocked? No, because if they never see it getting blocked they don’t know what happens. Free markets only solve problems that are actually visible to the user.

Intel: anyone can challenge anyone

From FORTUNE’s “Lessons in Leadership: The Education of Andy Grove”:

[Intel CEO Andy] Grove had never been one to rely on others’ interpretations of reality. … At Intel he fostered a culture in which “knowledge power” would trump “position power.” Anyone could challenge anyone else’s idea, so long as it was about the idea and not the person–and so long as you were ready for the demand “Prove it.” That required data. Without data, an idea was only a story–a representation of reality and thus subject to distortion.

Intel’s ups and downs

From FORTUNE’s “Lessons in Leadership: The Education of Andy Grove”:

By 1983, when Grove distilled much of his thinking in his book High Output Management (still a worthwhile read), he was president of a fast-growing $1.1-billion-a-year corporation, a leading maker of memory chips, whose CEO was Gordon Moore. … What Moore’s Law did not and could not predict was that Japanese firms, too, might master this process and turn memory chips into a commodity. …

Intel kept denying the cliff ahead until its profits went over the edge, plummeting from $198 million in 1984 to less than $2 million in 1985. It was in the middle of this crisis, when many managers would have obsessed about specifics, that Grove stepped outside himself. He and Moore had been agonizing over their dilemma for weeks, he recounts in Only the Paranoid Survive, when something happened: “I looked out the window at the Ferris wheel of the Great America amusement park revolving in the distance when I turned back to Gordon, and I asked, ‘If we got kicked out and the board brought in a new CEO, what do you think he would do?’ Gordon answered without hesitation, ‘He would get us out of memories.’ I stared at him, numb, then said, ‘Why shouldn’t you and I walk out the door, come back, and do it ourselves?'”

… once IBM chose Intel’s microprocessor to be the chip at the heart of its PCs, demand began to explode. Even so, the shift from memory chips was brutally hard–in 1986, Intel fired some 8,000 people and lost more than $180 million on $1.3 billion in sales–the only loss the company has ever posted since its early days as a startup.

An interesting way to look at DRM

From “The Big DRM Mistake?”:

Fundamentally, DRM is about persistent access control – it is a term for a set of technologies that allow for data to be protected beyond the file system of the original machine. Thus, for example, the read/write/execute access control on most *nix file systems will be applicable not only to the original machine but to all machines.

Stated in these terms, I agree with the aims of DRM. However, it is the ways in which large media and software businesses have mis-applied DRM that have ruined the associations most users have with the technology.
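
Read that way, a sketch of persistent access control is simply rights that travel with the data rather than living in one machine's file system. All names below are invented for illustration; a real system would also encrypt the payload.

```typescript
interface ProtectedData {
  payload: string; // in a real system this would be encrypted
  rights: Map<string, Set<"read" | "write" | "execute">>; // rights bundled with the data
}

// The check depends only on the bundled rights, not on which machine's
// file system the data happens to live on.
function read(data: ProtectedData, user: string): string {
  if (!data.rights.get(user)?.has("read")) {
    throw new Error(`${user} has no read right, on this machine or any other`);
  }
  return data.payload;
}
```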
