Microsoft’s programmers, evaluated by an engineer

From John Wharton’s “The Origins of DOS” (Microprocessor Report: 3 October 1994):

In August of 1981, soon after Microsoft had acquired full rights to 86-DOS, Bill Gates visited Santa Clara in an effort to persuade Intel to abandon a joint development project with DRI and endorse MS-DOS instead. It was I – the Intel applications engineer then responsible for iRMX-86 and other 16-bit operating systems – who was assigned the task of performing a technical evaluation of the 86-DOS software. It was I who first informed Gates that the software he just bought was not, in fact, fully compatible with CP/M 2.2. At the time I had the distinct impression that, until then, he’d thought the entire OS had been cloned.

The strong impression I drew 13 years ago was that Microsoft programmers were untrained, undisciplined, and content merely to replicate other people’s ideas, and that they did not seem to appreciate the importance of defining operating systems and user interfaces with an eye to the future.

A wireless router with 2 networks: 1 secure, 1 open

From Bruce Schneier’s “My Open Wireless Network” (Crypto-Gram: 15 January 2008):

A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either “Bill” or “Linus” mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network. In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It’s a really clever idea.
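
Fon’s actual firmware configuration isn’t shown in the essay, but the underlying mechanism, two SSIDs on one radio with one encrypted for the owner and one open for guests, is something a stock hostapd setup can express directly. A generic sketch, with invented network names and passphrase:

    # hostapd.conf -- generic sketch of one router broadcasting two networks
    # (not Fon's actual firmware; SSIDs and passphrase are invented)
    interface=wlan0
    driver=nl80211
    hw_mode=g
    channel=6

    # Network 1: the owner's encrypted network
    ssid=MyPrivateNetwork
    wpa=2
    wpa_key_mgmt=WPA-PSK
    rsn_pairwise=CCMP
    wpa_passphrase=correct-horse-battery-staple

    # Network 2: a second virtual access point on the same radio;
    # no wpa= line here, so it is open to everyone
    bss=wlan0_1
    ssid=FON_FreeInternet

The billing and reciprocity logic of “Bill” and “Linus” modes would live above this layer, in a captive portal, not in the radio configuration itself.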

Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users get any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.

His employer’s misconfigured laptop gets him charged with a crime

From Robert McMillan’s “A misconfigured laptop, a wrecked life” (NetworkWorld: 18 June 2008):

When the Commonwealth of Massachusetts issued Michael Fiola a Dell Latitude in November 2006, it set off a chain of events that would cost him his job, his friends and about a year of his life, as he fought criminal charges that he had downloaded child pornography onto the laptop. Last week, prosecutors dropped their year-old case after a state investigation of his computer determined there was insufficient evidence to prove he had downloaded the files.

An initial state investigation had come to the opposite conclusion, and authorities took a second look at Fiola’s case only after he hired a forensic investigator to look at his laptop. What she found was scary, given the gravity of the charges against him: The Microsoft SMS (Systems Management Server) software used to keep his laptop up to date was not functional. Neither was its antivirus protection. And the laptop was crawling with malicious programs that were most likely responsible for the files on his PC.

Fiola had been an investigator with the state’s Department of Industrial Accidents, examining businesses to see whether they had worker’s compensation plans. Over the past two days, however, he’s become a spokesman for people who have had their lives ruined by malicious software.

[Fiola narrates his story:] We had a laptop basically to do our reports instantaneously. If I went to a business and found that they were out of compliance, I would log on and type in a report so it could get back to the home office in Boston immediately. We also used it to research businesses. …

My boss called me into his office at 9 a.m. The director of the Department of Industrial Accidents, my immediate supervisor, and the personnel director were there. They handed me a letter and said, “You are being fired for a violation of the computer usage policy. You have pornography on your computer. You’re fired. Clean out your desk. Let’s go.” …

It was horrible. No paycheck. I lost all my benefits. I lost my insurance. My wife is very, very understanding. She took the bull by the horns and found an attorney. I was just paralyzed, I couldn’t do anything. I can’t describe the feeling to you. I wouldn’t wish this on my worst enemy. It’s just devastating.

If you get in a car accident and you kill somebody, people talk to you afterwards. All our friends abandoned us. The only family that stood by us was my dad, her parents, my stepdaughter and one other good friend of ours. And that was it. Nobody called. We spent many weekends at home just crying. I’m 53 years old and I don’t think I’ve cried as much in my whole life as I did in the past 18 months. …

Why you should not run Windows as Admin

From Aaron Margosis’ “Why you shouldn’t run as admin…” (17 June 2004):

But if you’re running as admin [on Windows], an exploit can:

  • install kernel-mode rootkits and/or keyloggers (which can be close to impossible to detect)
  • install and start services
  • install ActiveX controls, including IE and shell add-ins (common with spyware and adware)
  • access data belonging to other users
  • cause code to run whenever anybody else logs on (including capturing passwords entered into the Ctrl-Alt-Del logon dialog)
  • replace OS and other program files with trojan horses
  • access LSA Secrets, including other sensitive account information, possibly including account info for domain accounts
  • disable/uninstall anti-virus
  • cover its tracks in the event log
  • render your machine unbootable
  • if your account is an administrator on other computers on the network, the malware gains admin control over those computers as well
  • and lots more
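
The common thread in that list is that malware doesn’t need a privilege-escalation exploit at all; it simply inherits the rights of the logged-on user. As a minimal illustration (my own sketch, not from Margosis’ post), a program can warn at startup when it finds itself running with administrative rights, using the Win32 IsUserAnAdmin call:

    # admin_check.py -- minimal sketch: warn when running as admin on Windows
    import ctypes
    import sys

    def is_admin() -> bool:
        """Return True if this process has administrator rights."""
        try:
            # shell32.IsUserAnAdmin() returns nonzero for admin processes
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:
            return False  # ctypes.windll only exists on Windows

    if __name__ == "__main__":
        if is_admin():
            print("WARNING: running as admin; anything this process runs "
                  "inherits those rights.", file=sys.stderr)
        else:
            print("Running as a limited user.")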

I for one welcome our new OS overlords: Google Chrome

As some of you may have heard, Google has announced its own web browser, Chrome. It’s releasing the Windows version today, with Mac & Linux versions to follow.

To educate people about the new browser & its goals, they released a 38-page comic book drawn by the brilliant Scott McCloud. It’s a really good read, but it gets a bit technical at times. However, someone did a “Reader’s Digest” version, which you can read here:

http://technologizer.com/2008/09/01/google-chrome-comic-the-readers-digest-version

I highly encourage you to read it. This browser is doing some very interesting, smart things. And it’s open source, so other browsers can use its code & ideas.

If you want to read the full comic, you can do so here:

http://www.google.com/googlebooks/chrome/

BTW … I don’t think Chrome has the potential to become the next big browser; I think instead it has the potential to become the next big operating system. See http://www.techcrunch.com/2008/09/01/meet-chrome-googles-windows-killer/ for more on that.

Obama, Clinton, Microsoft Excel, and OpenOffice.org

I recently posted this to my local Linux Users Group mailing list:

Thought y’all would find this interesting – from http://machinist.salon.com/blog/2008/05/26/fundraising_excel/index.html:

“A milestone of sorts was reached earlier this year, when Obama, the Illinois senator whose revolutionary online fundraising has overwhelmed Clinton, filed an electronic fundraising report so large it could not be processed by popular basic spreadsheet applications like Microsoft Excel 2003 and Lotus 1-2-3.

Those programs can’t download data files with more than 65,536 rows or 256 columns.

Obama’s January fundraising report, detailing the $23 million he raised and $41 million he spent in the last three months of 2007, far exceeded 65,536 rows listing contributions, refunds, expenditures, debts, reimbursements and other details. It was the first report to confound basic database programs since 2001, when the Federal Election Commission began directly posting candidates’ fundraising reports online in an effort to make political money more accessible and transparent to voters.

By March, the reports filed by Clinton, a New York senator who attributes Obama’s victories in several states to her own lack of money, also could no longer be downloaded into spreadsheets using basic applications.

If you want to comb through Obama or Clinton’s cash, you either need to divide and import their reports section-by-section (a time-consuming and mind-numbing process) or purchase a more powerful database application, such as Microsoft Access or Microsoft Excel 2007, both of which retail for $229.”

Interestingly, OpenOffice.org 2.0 has the same limitation. OpenOffice.org 3 will expand the number of columns to 1024, according to http://www.oooninja.com/2008/03/openofficeorg-30-new-features.html. No idea about how many rows. Anyone know?

OK … looked it up … it appears that the row limit is STILL in place, so you can’t use OOo to open Obama’s or Hillary’s spreadsheets. Of course, you could use MySQL …
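
If you did need to comb through one of those reports without buying Access or Excel 2007, the divide-and-import workaround is at least easy to automate. A quick sketch (the input filename is hypothetical) that splits a large CSV into pieces that fit under the 65,536-row ceiling of Excel 2003 and OOo 2.0:

    # split_report.py -- sketch: split a big CSV into Excel-2003-sized chunks
    import csv

    MAX_ROWS = 65_535  # data rows per chunk, leaving row 1 for the header

    def write_chunk(path, part, header, rows):
        with open(f"{path}.part{part}.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            writer.writerow(header)
            writer.writerows(rows)

    def split_csv(path):
        with open(path, newline="") as src:
            reader = csv.reader(src)
            header = next(reader)
            chunk, part = [], 1
            for row in reader:
                chunk.append(row)
                if len(chunk) == MAX_ROWS:
                    write_chunk(path, part, header, chunk)
                    chunk, part = [], part + 1
            if chunk:
                write_chunk(path, part, header, chunk)

    split_csv("obama_q4_2007_report.csv")  # hypothetical filename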

Oh yeah … and here’s one more note, from the same Salon article quoted above:

“In a revealing insight into the significant fundraising disparity between the two Democrats and presumptive Republican presidential nominee, Arizona Sen. John McCain, it is still possible to download his reports with plain-old Excel.”

Ouch!

Out now: Microsoft Vista for IT Security Professionals

Microsoft Vista for IT Security Professionals is designed for professional system administrators who need to securely deploy Microsoft Vista in their networks. Readers will not only learn about the new security features of Vista, but they will also learn how to safely integrate Vista with their existing wired and wireless network infrastructure and deploy it safely alongside their existing applications and databases. The book begins with a discussion of Microsoft’s Trustworthy Computing Initiative and Vista’s development cycle, which was like none other in Microsoft’s history. Expert authors will separate the hype from the reality of Vista’s preparedness to withstand the 24 x 7 attacks it will face from malicious attackers as the world’s #1 desktop operating system. The book has a companion CD which contains hundreds of working scripts and utilities to help administrators secure their environments.

This book is written for intermediate to advanced system administrators managing Microsoft networks who are deploying Microsoft’s new flagship desktop operating system: Vista. It is appropriate for system administrators managing small networks of fewer than 10 machines up to enterprise-class networks with tens of thousands of systems. It is also appropriate for readers preparing for the Microsoft exam MCDST 70-620.

I contributed two appendices to this book:

  • Appendix A: Microsoft Vista: The International Community
  • Appendix B: Changes to the Vista EULA

Appendix A, “Microsoft Vista: The International Community”, was about Microsoft’s legal troubles in Europe and Asia, and the changes the company had to make to Vista to accommodate those governments. Appendix B, “Changes to the Vista EULA”, explained that the EULA in Vista is even worse than that found in XP, which was worse than any previous EULA. In other words, Vista has a problematic EULA that users need to know about before they buy the OS.

Read excerpts: Front Matter (350 KB PDF) and Chapter 1: Microsoft Vista: An Overview (760 KB PDF). You can also flip through the entire book, although the total number of pages you can view is capped (at a fairly high number, around 50).

Microsoft executive sets self up for hubristic fall

From Scott M. Fulton, III’s “Allchin Suggests Vista Won’t Need Antivirus” (BetaNews: 9 November 2006):

During a telephone conference with reporters yesterday, outgoing Microsoft co-president Jim Allchin, while touting the new security features of Windows Vista, which was released to manufacturing yesterday, told a reporter that the system’s new lockdown features are so capable and thorough that he was comfortable with his own seven-year-old son using Vista without antivirus software installed.

Russian bot herders behind massive increase in spam

From Ryan Naraine’s “‘Pump-and-Dump’ Spam Surge Linked to Russian Bot Herders” (eWeek: 16 November 2006):

The recent surge in e-mail spam hawking penny stocks and penis enlargement pills is the handiwork of Russian hackers running a botnet powered by tens of thousands of hijacked computers.

Internet security researchers and law enforcement authorities have traced the operation to a well-organized hacking gang controlling a 70,000-strong peer-to-peer botnet seeded with the SpamThru Trojan. …

For starters, the Trojan comes with its own anti-virus scanner – a pirated copy of Kaspersky’s security software – that removes competing malware files from the hijacked machine. Once a Windows machine is infected, it becomes a peer in a peer-to-peer botnet controlled by a central server. If the control server is disabled by botnet hunters, the spammer simply has to control a single peer to retain control of all the bots and send instructions on the location of a new control server.

The bots are segmented into different server ports, determined by the variant of the Trojan installed, and further segmented into peer groups of no more than 512 bots. This allows the hackers to keep the overhead involved in exchanging information about other peers to a minimum, Stewart explained.
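
The arithmetic behind that design is simple. This is my own illustration, not SpamThru’s code: splitting 70,000 bots into groups of at most 512 means each bot has to track at most 511 peers instead of the whole network:

    # peer_groups.py -- illustration of the <=512-peer segmentation described above
    MAX_GROUP = 512

    def segment(bot_ids):
        """Partition bot IDs into peer groups of at most 512."""
        return [bot_ids[i:i + MAX_GROUP]
                for i in range(0, len(bot_ids), MAX_GROUP)]

    groups = segment(list(range(70_000)))
    print(len(groups))     # 137 groups for a 70,000-bot network
    print(len(groups[0]))  # 512 peers in each full group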

… the attackers are meticulous about keeping statistics on bot infections around the world.

For example, the SpamThru controller keeps statistics on the country of origin of all bots in the botnet. In all, computers in 166 countries are part of the botnet, with the United States accounting for more than half of the infections.

The botnet stats tracker even logs the version of Windows the infected client is running, down to the service pack level. One chart commandeered by Stewart showed that Windows XP SP2 … machines dominate the makeup of the botnet, a clear sign that the latest version of Microsoft’s operating system is falling prey to attacks.

Another sign of the complexity of the operation, Stewart found, was a database hacking component that signaled the ability of the spammers to target its pump-and-dump scams to victims most likely to be associated with stock trading.

Stewart said about 20 small investment and financial news sites have been breached for the express purpose of downloading user databases with e-mail addresses matched to names and other site registration data. On the bot herder’s control server, Stewart found a MySQL database dump of e-mail addresses associated with an online shop. …

The SpamThru spammer also controls lists of millions of e-mail addresses harvested from the hard drives of computers already in the botnet. …

“It’s a very enterprising operation and it’s interesting that they’re only doing pump-and-dump and penis enlargement spam. That’s probably because those are the most lucrative,” he added.

Even the spam messages come with a unique component. The messages are both text- and image-based and a lot of effort has been put into evading spam filters. For example, each SpamThru client works as its own spam engine, downloading a template containing the spam and random phrases to use as hash-busters, random “from” names, and a list of several hundred e-mail addresses to send to.

Stewart discovered that the image files in the templates are modified with every e-mail message sent, allowing the spammer to change the width and height. The image-based spam also includes random pixels at the bottom, specifically to defeat anti-spam technologies that reject mail based on a static image.

All SpamThru bots – the botnet controls about 73,000 infected clients – are also capable of using a list of proxy servers maintained by the controller to evade blacklisting of the bot IP addresses by anti-spam services. Stewart said this allows the Trojan to act as a “massive distributed engine for sending spam,” without the cost of maintaining static servers.

With a botnet of this size, the group is theoretically capable of sending a billion spam e-mails in a single day.

My reply to those “You sent a virus to me!” emails

On Saturday 17 April 2004, I received the following email from someone I didn’t know:

> Hello,
>
> I am not sure who you are but our security detected a Netsky virus in an
> email that you sent. Whether a personal message or a spam, please make
> attention to the fact that you are spreading viruses and have your systems
> checked. Also, when a virus is detected the message does not get through so
> we have no idea who you are or the nature of your message.

My reply

I really wouldn’t bother sending these messages out, or you will find yourself with a full-time job.

Virtually every modern virus spoofs the sender’s email address. In other words, the virus scans the infected computer for email addresses, and then picks one for the TO field and one for the FROM field. Someone who has both of our email addresses on their computer is infected, and the virus chose your email address for TO and my email address for FROM. That is the extent of it. Unfortunately, we have no way of knowing who really is infected, so emailing the person who appears to have sent the email is a complete waste of your time.
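
To see just how little the FROM header means, here is a minimal sketch using Python’s standard email library (the addresses are invented); nothing anywhere checks that the “sender” had anything to do with the message:

    # spoofed_from.py -- sketch: the From header is just text the sender chose
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "innocent.bystander@example.com"  # harvested from an infected PC
    msg["To"] = "complainer@example.com"            # another harvested address
    msg["Subject"] = "hi"
    msg.set_content("The 'sender' above never touched this message.")

    print(msg)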

Finally, I could not be infected, as I do not use Windows. I use Linux, which is impervious to the glut of viruses and worms that infect Microsoft’s poorly-coded operating system.

Microsoft: only way to deal with malware is to wipe the computer

From Ryan Naraine’s “Microsoft Says Recovery from Malware Becoming Impossible” (eWeek: 4 April 2006):

In a rare discussion about the severity of the Windows malware scourge, a Microsoft security official said businesses should consider investing in an automated process to wipe hard drives and reinstall operating systems as a practical way to recover from malware infestation.

“When you are dealing with rootkits and some advanced spyware programs, the only solution is to rebuild from scratch. In some cases, there really is no way to recover without nuking the systems from orbit,” Mike Danseglio, program manager in the Security Solutions group at Microsoft, said in a presentation at the InfoSec World conference here.

Offensive rootkits, which are used to hide malware programs and maintain an undetectable presence on an infected machine, have become the weapon of choice for virus and spyware writers and, because they often use kernel hooks to avoid detection, Danseglio said IT administrators may never know if all traces of a rootkit have been successfully removed.

He cited a recent instance where an unnamed branch of the U.S. government struggled with malware infestations on more than 2,000 client machines. “In that case, it was so severe that trying to recover was meaningless. They did not have an automated process to wipe and rebuild the systems, so it became a burden. They had to design a process real fast,” Danseglio added.

… “We’ve seen the self-healing malware that actually detects that you’re trying to get rid of it. You remove it, and the next time you look in that directory, it’s sitting there. It can simply reinstall itself,” he said.

“Detection is difficult, and remediation is often impossible,” Danseglio declared. “If it doesn’t crash your system or cause your system to freeze, how do you know it’s there? The answer is you just don’t know. Lots of times, you never see the infection occur in real time, and you don’t see the malware lingering or running in the background.”

… Danseglio said the success of social engineering attacks is a sign that the weakest link in malware defense is “human stupidity.”

“Social engineering is a very, very effective technique. We have statistics that show significant infection rates for the social engineering malware. Phishing is a major problem because there really is no patch for human stupidity,” he said.

Why big co’s are bad at creating new products

From Paul Graham’s “Hiring is Obsolete” (May 2005):

Buying startups also solves another problem afflicting big companies: they can’t do product development. Big companies are good at extracting the value from existing products, but bad at creating new ones.

Why? It’s worth studying this phenomenon in detail, because this is the raison d’etre of startups.

To start with, most big companies have some kind of turf to protect, and this tends to warp their development decisions. For example, Web-based applications are hot now, but within Microsoft there must be a lot of ambivalence about them, because the very idea of Web-based software threatens the desktop. So any Web-based application that Microsoft ends up with, will probably, like Hotmail, be something developed outside the company.

Another reason big companies are bad at developing new products is that the kind of people who do that tend not to have much power in big companies (unless they happen to be the CEO). Disruptive technologies are developed by disruptive people. And they either don’t work for the big company, or have been outmaneuvered by yes-men and have comparatively little influence.

Big companies also lose because they usually only build one of each thing. When you only have one Web browser, you can’t do anything really risky with it. If ten different startups design ten different Web browsers and you take the best, you’ll probably get something better.

The more general version of this problem is that there are too many new ideas for companies to explore them all. There might be 500 startups right now who think they’re making something Microsoft might buy. Even Microsoft probably couldn’t manage 500 development projects in-house.

Big companies also don’t pay people the right way. People developing a new product at a big company get paid roughly the same whether it succeeds or fails. People at a startup expect to get rich if the product succeeds, and get nothing if it fails. So naturally the people at the startup work a lot harder.

The mere bigness of big companies is an obstacle. In startups, developers are often forced to talk directly to users, whether they want to or not, because there is no one else to do sales and support. It’s painful doing sales, but you learn much more from trying to sell people something than reading what they said in focus groups.

And then of course, big companies are bad at product development because they’re bad at everything. Everything happens slower in big companies than small ones, and product development is something that has to happen fast, because you have to go through a lot of iterations to get something good.

The real vs. stated purpose of PowerPoint

From Paul Graham’s “Hiring is Obsolete” (May 2005):

For example, the stated purpose of Powerpoint is to present ideas. Its real role is to overcome people’s fear of public speaking. It allows you to give an impressive-looking talk about nothing, and it causes the audience to sit in a dark room looking at slides, instead of a bright one looking at you.

Cultural differences between Unix and Windows

From Joel Spolsky’s “Biculturalism” (Joel on Software: 14 December 2003):

What are the cultural differences between Unix and Windows programmers? There are many details and subtleties, but for the most part it comes down to one thing: Unix culture values code which is useful to other programmers, while Windows culture values code which is useful to non-programmers.

This is, of course, a major simplification, but really, that’s the big difference: are we programming for programmers or end users? Everything else is commentary. …

Let’s look at a small example. The Unix programming culture holds in high esteem programs which can be called from the command line, which take arguments that control every aspect of their behavior, and the output of which can be captured as regularly-formatted, machine readable plain text. Such programs are valued because they can easily be incorporated into other programs or larger software systems by programmers. To take one miniscule example, there is a core value in the Unix culture, which Raymond calls “Silence is Golden,” that a program that has done exactly what you told it to do successfully should provide no output whatsoever. It doesn’t matter if you’ve just typed a 300 character command line to create a file system, or built and installed a complicated piece of software, or sent a manned rocket to the moon. If it succeeds, the accepted thing to do is simply output nothing. The user will infer from the next command prompt that everything must be OK.

This is an important value in Unix culture because you’re programming for other programmers. As Raymond puts it, “Programs that babble don’t tend to play well with other programs.” By contrast, in the Windows culture, you’re programming for Aunt Marge, and Aunt Marge might be justified in observing that a program that produces no output because it succeeded cannot be distinguished from a program that produced no output because it failed badly or a program that produced no output because it misinterpreted your request.
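
In code, the convention amounts to this: say nothing on success, report failure on stderr, and signal the outcome through the exit status so other programs can react to it. A hypothetical sketch (the input filename and processing step are invented):

    # quiet_tool.py -- sketch of the "Silence is Golden" convention
    import sys

    def process(f):
        for _ in f:  # hypothetical processing step
            pass

    def main() -> int:
        try:
            with open("input.txt") as f:  # hypothetical input file
                process(f)
        except OSError as err:
            print(f"quiet_tool: {err}", file=sys.stderr)
            return 1
        return 0  # success: no output at all

    if __name__ == "__main__":
        sys.exit(main())

A caller then chains on the exit status (quiet_tool && next_step) rather than parsing chatter on stdout.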

Similarly, the Unix culture appreciates programs that stay textual. They don’t like GUIs much, except as lipstick painted cleanly on top of textual programs, and they don’t like binary file formats. This is because a textual interface is easier to program against than, say, a GUI interface, which is almost impossible to program against unless some other provisions are made, like a built-in scripting language. Here again, we see that the Unix culture values creating code that is useful to other programmers, something which is rarely a goal in Windows programming.

Which is not to say that all Unix programs are designed solely for programmers. Far from it. But the culture values things that are useful to programmers, and this explains a thing or two about a thing or two. …

The Unix cultural value of visible source code makes it an easier environment to develop for. Any Windows developer will tell you about the time they spent four days tracking down a bug because, say, they thought that the memory size returned by LocalSize would be the same as the memory size they originally requested with LocalAlloc, or some similar bug they could have fixed in ten minutes if they could see the source code of the library. …
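
That particular gotcha is real and easy to reproduce: the documentation for LocalSize notes that a block may be larger than the size requested from LocalAlloc. A Windows-only sketch, using Python’s ctypes purely for illustration:

    # localsize_demo.py -- Windows-only sketch of the LocalAlloc/LocalSize gotcha
    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32
    kernel32.LocalAlloc.restype = wintypes.HLOCAL
    kernel32.LocalAlloc.argtypes = [wintypes.UINT, ctypes.c_size_t]
    kernel32.LocalSize.restype = ctypes.c_size_t
    kernel32.LocalSize.argtypes = [wintypes.HLOCAL]
    kernel32.LocalFree.argtypes = [wintypes.HLOCAL]

    LMEM_FIXED = 0x0000

    block = kernel32.LocalAlloc(LMEM_FIXED, 10)  # ask for 10 bytes
    print(kernel32.LocalSize(block))             # may print more than 10
    kernel32.LocalFree(block)

Without source access, a developer who assumed the two numbers match could only learn otherwise from the documentation, or from those four days in the debugger.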

When Unix was created and when it formed its cultural values, there were no end users. Computers were expensive, CPU time was expensive, and learning about computers meant learning how to program. It’s no wonder that the culture which emerged valued things which are useful to other programmers. By contrast, Windows was created with one goal only: to sell as many copies as conceivable at a profit. …

For example, Unix has a value of separating policy from mechanism which, historically, came from the designers of X. This directly led to a schism in user interfaces; nobody has ever quite been able to agree on all the details of how the desktop UI should work, and they think this is OK, because their culture values this diversity, but for Aunt Marge it is very much not OK to have to use a different UI to cut and paste in one program than she uses in another.

Microsoft’s BitLocker could be used for DRM

From Bruce Schneier’s “Microsoft’s BitLocker” (Crypto-Gram Newsletter: 15 May 2006):

BitLocker is not a DRM system. However, it is straightforward to turn it into a DRM system. Simply give programs the ability to require that files be stored only on BitLocker-enabled drives, and then only be transferable to other BitLocker-enabled drives. How easy this would be to implement, and how hard it would be to subvert, depends on the details of the system.
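
As far as I know, no such application-facing hook ever shipped, so the following is speculation in code form only: a sketch of what “refuse to store files anywhere but a BitLocker-enabled drive” might look like from an application’s side. It shells out to the Get-BitLockerVolume PowerShell cmdlet, which postdates this 2006 essay and requires elevation; all names here are illustrative:

    # drm_gate.py -- speculative sketch of the policy Schneier describes;
    # no real Windows API offers this check to applications as a DRM hook
    import subprocess

    def bitlocker_on(drive="C:"):
        out = subprocess.run(
            ["powershell", "-NoProfile", "-Command",
             f"(Get-BitLockerVolume -MountPoint '{drive}').ProtectionStatus"],
            capture_output=True, text=True)
        return out.stdout.strip() == "On"

    def save_protected(path, data):
        if not bitlocker_on(path[:2]):
            raise PermissionError("policy: file may only live on an encrypted volume")
        with open(path, "wb") as f:
            f.write(data)

As Schneier notes, how hard such a scheme would be to subvert depends entirely on enforcement details a sketch like this glosses over.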
