Microsoft

Talk about Markdown to SLUUG this Wednesday

I’ll be giving a talk to the St. Louis UNIX Users Group next Wednesday night about Markdown, a tool I absolutely love.

You’re invited to come. Please do – I think you’ll definitely learn a lot.

Date: Wednesday, Nov. 9, 2011
Time: 6:30 – 9 pm
Where: 11885 Lackland Rd., St Louis, MO 63146
Map: http://g.co/maps/6gg9g
Directions: http://www.sluug.org/resources/meeting_info/map_graybar.shtml

Here’s the description:

John Gruber, the inventor of Markdown, describes it this way: “Markdown is a text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML). Thus, ‘Markdown’ is two things: (1) a plain text formatting syntax; and (2) a software tool, written in Perl, that converts the plain text formatting to HTML. … The overriding design goal for Markdown’s formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions.”

This talk by Scott Granneman & Bill Odom will cover the basics of Markdown’s syntax, key variants of Markdown, tools for composing Markdown (including vim, of course!), and ways you can easily transform a plain text file written in Markdown into HTML, JSON, TXT, LaTeX, man, MediaWiki, Textile, DocBook XML, ODT, EPUB, Slidy and S5 HTML and JavaScript slide shows, RTF, or even Word!
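
To make Gruber’s “plain text in, HTML out” description concrete, here is a minimal sketch that assumes the third-party Python markdown package (pip install markdown); it is only one of many converters, and the talk itself is not tied to any particular implementation.

# A minimal sketch of Markdown's text-to-HTML conversion, using the
# third-party Python "markdown" package (pip install markdown).
import markdown

source = """# Meeting notes

Markdown lets you write *readable* plain text with:

- headings
- emphasis
- lists

and [links](http://daringfireball.net/projects/markdown/).
"""

html = markdown.markdown(source)  # returns an HTML fragment as a string
print(html)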

If you have any questions, please contact me. Hope to see you there!

David Pogue’s insights about tech over time

From David Pogue’s “The Lessons of 10 Years of Talking Tech” (The New York Times: 25 November 2010):

As tech decades go, this one has been a jaw-dropper. Since my first column in 2000, the tech world has not so much blossomed as exploded. Think of all the commonplace tech that didn’t even exist 10 years ago: HDTV, Blu-ray, GPS, Wi-Fi, Gmail, YouTube, iPod, iPhone, Kindle, Xbox, Wii, Facebook, Twitter, Android, online music stores, streaming movies and on and on.

With the turkey cooking, this seems like a good moment to review, to reminisce — and to distill some insight from the first decade in the new tech millennium.

Things don’t replace things; they just splinter. I can’t tell you how exhausting it is to keep hearing pundits say that some product is the “iPhone killer” or the “Kindle killer.” Listen, dudes: the history of consumer tech is branching, not replacing.

Things don’t replace things; they just add on. Sooner or later, everything goes on-demand. The last 10 years have brought a sweeping switch from tape and paper storage to digital downloads. Music, TV shows, movies, photos and now books and newspapers. We want instant access. We want it easy.

Some people’s gadgets determine their self-esteem. … Today’s gadgets are intensely personal. Your phone or camera or music player makes a statement, reflects your style and character. No wonder some people interpret criticisms of a product as a criticism of their choices. By extension, it’s a critique of them.

Everybody reads with a lens. … feelings run just as strongly in the tech realm. You can’t use the word “Apple,” “Microsoft” or “Google” in a sentence these days without stirring up emotion.

It’s not that hard to tell the winners from the losers. … There was the Microsoft Spot Watch (2003). This was a wireless wristwatch that could display your appointments and messages — but cost $10 a month, had to be recharged nightly and wouldn’t work outside your home city unless you filled out a Web form in advance.

Some concepts’ time may never come. The same “breakthrough” ideas keep surfacing — and bombing, year after year. For the love of Mike, people, nobody wants videophones!

Teenagers do not want “communicators” that do nothing but send text messages, either (AT&T Ogo, Sony Mylo, Motorola V200). People do not want to surf the Internet on their TV screens (WebTV, AOLTV, Google TV). And give it up on the stripped-down kitchen “Internet appliances” (3Com Audrey, Netpliance i-Opener, Virgin Webplayer). Nobody has ever bought one, and nobody ever will.

Forget about forever — nothing lasts a year. Of the thousands of products I’ve reviewed in 10 years, only a handful are still on the market. Oh, you can find some gadgets whose descendants are still around: iPod, BlackBerry, Internet Explorer and so on.

But it’s mind-frying to contemplate the millions of dollars and person-years that were spent on products and services that now fill the Great Tech Graveyard: Olympus M-Robe. PocketPC. Smart Display. MicroMV. MSN Explorer. Aibo. All those PlaysForSure music players, all those Palm organizers, all those GPS units you had to load up with maps from your computer.

Everybody knows that’s the way tech goes. The trick is to accept your gadget’s obsolescence at the time you buy it…

Nobody can keep up. Everywhere I go, I meet people who express the same reaction to consumer tech today: there’s too much stuff coming too fast. It’s impossible to keep up with trends, to know what to buy, to avoid feeling left behind. They’re right. There’s never been a period of greater technological change. You couldn’t keep up with all of it if you tried.

Microsoft’s real customers

From James Fallows’s “Inside the Leviathan: A short and stimulating brush with Microsoft’s corporate culture” (The Atlantic: February 2000):

Financial analysts have long recognized that Microsoft’s profit really comes from two sources. One is operating systems (Windows, in all its varieties), and the other is the Office suite of programs. Everything else — Flight Simulator, Slate, MSNBC, mice and keyboards — is financially meaningless. What these two big categories have in common is that individuals are not the significant customers. Operating systems are sold mainly to computer companies such as Dell and Compaq, which pass them pre-loaded to individual consumers. And the main paying customers for Office are big corporations (or what the high-tech world calls LORGs, for “large-size organizations”), which may buy thousands of “seats” for their employees at hundreds of dollars apiece. Product planning, therefore, is focused with admirable clarity on those whose decisions really matter to Microsoft — the information-technology manager at Chevron or the U.S. Department of Agriculture, for example — rather than some writer with an idea about how to make his colleagues happier with a program.

Lovely – Microsoft will let companies create ad-filled desktop themes

From Jeff Bertolucci’s “Windows 7 Ads: Microsoft Tarts Up the Desktop” (PC World: 13 November 2009):

Microsoft has announced plans to peddle Windows 7 desktop space to advertisers, who’ll create Windows UI themes – customized backgrounds, audio clips, and other elements – that highlight their brand, Computerworld reports. In fact, some advertiser themes are already available in the Windows 7 Personalization Gallery, including desktop pitches for soft drinks (Coca-Cola, Pepsi), autos (Ducati, Ferrari, Infiniti), and big-budget Hollywood blockbusters (Avatar).

The advertiser themes are different, however, in that they won’t be foisted on unsuspecting users. Rather, you’ll have to download and install the ad pitch yourself. As a result, I doubt many Windows 7 users will gripe about ad themes. Hey, if you’re a Preparation H fan, why not devote the desktop to your favorite ointment?

Australian police: don’t bank online with Windows

From Munir Kotadia’s “NSW Police: Don’t use Windows for internet banking” (ITnews: 9 October 2009):

Consumers wanting to safely connect to their internet banking service should use Linux or the Apple iPhone, according to a detective inspector from the NSW Police, who was giving evidence on behalf of the NSW Government at the public hearing into Cybercrime today in Sydney.

Detective Inspector Bruce van der Graaf from the Computer Crime Investigation Unit told the hearing that he uses two rules to protect himself from cybercriminals when banking online.

The first rule, he said, was to never click on hyperlinks to the banking site and the second was to avoid Microsoft Windows.

“If you are using the internet for a commercial transaction, use a Linux boot-up disk – such as Ubuntu or some of the other flavours. Puppylinux is a nice small distribution that boots up fairly quickly.”

Van der Graaf also mentioned the iPhone, which he called “quite safe” for internet banking.

“Another option is the Apple iPhone. It is only capable of running one process at a time so there is really no danger from infection,” he said.

How security experts defended against Conficker

From Jim Giles’ “The inside story of the Conficker worm” (New Scientist: 12 June 2009):

23 October 2008 … The dry, technical language of Microsoft’s October update did not indicate anything particularly untoward. A security flaw in a port that Windows-based PCs use to send and receive network signals, it said, might be used to create a “wormable exploit”. Worms are pieces of software that spread unseen between machines, mainly – but not exclusively – via the internet (see “Cell spam”). Once they have installed themselves, they do the bidding of whoever created them.

If every Windows user had downloaded the security patch Microsoft supplied, all would have been well. Not all home users regularly do so, however, and large companies often take weeks to install a patch. That provides windows of opportunity for criminals.

The new worm soon ran into a listening device, a “network telescope”, housed by the San Diego Supercomputing Center at the University of California. The telescope is a collection of millions of dummy internet addresses, all of which route to a single computer. It is a useful monitor of the online underground: because there is no reason for legitimate users to reach out to these addresses, mostly only suspicious software is likely to get in touch.

The telescope’s logs show the worm spreading in a flash flood. For most of 20 November, about 3000 infected computers attempted to infiltrate the telescope’s vulnerable ports every hour – only slightly above the background noise generated by older malicious code still at large. At 6 pm, the number began to rise. By 9 am the following day, it was 115,000 an hour. Conficker was already out of control.

That same day, the worm also appeared in “honeypots” – collections of computers connected to the internet and deliberately unprotected to attract criminal software for analysis. It was soon clear that this was an extremely sophisticated worm. After installing itself, for example, it placed its own patch over the vulnerable port so that other malicious code could not use it to sneak in. As Brandon Enright, a network security analyst at the University of California, San Diego, puts it, smart burglars close the window they enter by.

Conficker also had an ingenious way of communicating with its creators. Every day, the worm came up with 250 meaningless strings of letters and attached a top-level domain name – a .com, .net, .org, .info or .biz – to the end of each to create a series of internet addresses, or URLs. Then the worm contacted these URLs. The worm’s creators knew what each day’s URLs would be, so they could register any one of them as a website at any time and leave new instructions for the worm there.
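
In effect, the worm and its authors shared a daily domain-generation algorithm. The sketch below shows the general shape of such a scheme; it is purely illustrative and is not Conficker’s actual generator, whose string construction and seeding were different.

# Illustrative sketch of a date-seeded domain-generation algorithm (DGA).
# This is NOT Conficker's real algorithm; it only shows the general idea
# that the worm and its authors can derive the same daily list of domains.
import datetime
import hashlib
import random

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day, count=250):
    # Seed a PRNG with the date so every copy of the code agrees on the list.
    seed = int(hashlib.sha256(day.isoformat().encode()).hexdigest(), 16)
    rng = random.Random(seed)
    domains = []
    for _ in range(count):
        name = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                       for _ in range(rng.randint(8, 12)))
        domains.append(name + rng.choice(TLDS))
    return domains

print(daily_domains(datetime.date(2009, 1, 1))[:5])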

It was a smart trick. The worm hunters would only ever spot the illicit address when the infected computers were making contact and the update was being downloaded – too late to do anything. For the next day’s set of instructions, the creators would have a different list of 250 to work with. The security community had no way of keeping up.

No way, that is, until Phil Porras got involved. He and his computer security team at SRI International in Menlo Park, California, began to tease apart the Conficker code. It was slow going: the worm was hidden within two shells of encryption that defeated the tools that Porras usually applied. By about a week before Christmas, however, his team and others – including the Russian security firm Kaspersky Labs, based in Moscow – had exposed the worm’s inner workings, and had found a list of all the URLs it would contact.

[Rick Wesson of Support Intelligence] has years of experience with the organisations that handle domain registration, and within days of getting Porras’s list he had set up a system to remove the tainted URLs, using his own money to buy them up.

It seemed like a major win, but the hackers were quick to bounce back: on 29 December, they started again from scratch by releasing an upgraded version of the worm that exploited the same security loophole.

This new worm had an impressive array of new tricks. Some were simple. As well as propagating via the internet, the worm hopped on to USB drives plugged into an infected computer. When those drives were later connected to a different machine, it hopped off again. The worm also blocked access to some security websites: when an infected user tried to go online and download the Microsoft patch against it, they got a “site not found” message.

Other innovations revealed the sophistication of Conficker’s creators. If the encryption used for the previous strain was tough, that of the new version seemed virtually bullet-proof. It was based on code little known outside academia that had been released just three months earlier by researchers at the Massachusetts Institute of Technology.

Indeed, worse was to come. On 15 March, Conficker presented the security experts with a new problem. It reached out to a URL called rmpezrx.org. It was on the list that Porras had produced, but – those involved decline to say why – it had not been blocked. One site was all that the hackers needed. A new version was waiting there to be downloaded by all the already infected computers, complete with another new box of tricks.

Now the cat-and-mouse game became clear. Conficker’s authors had discerned Porras and Wesson’s strategy and so from 1 April, the code of the new worm soon revealed, it would be able to start scanning for updates on 500 URLs selected at random from a list of 50,000 that were encoded in it. The range of suffixes would increase to 116 and include many country codes, such as .kz for Kazakhstan and .ie for Ireland. Each country-level suffix belongs to a different national authority, each of which sets its own registration procedures. Blocking the previous set of domains had been exhausting. It would soon become nigh-on impossible – even if the new version of the worm could be fully decrypted.

Luckily, Porras quickly repeated his feat and extracted the crucial list of URLs. Immediately, Wesson and others contacted the Internet Corporation for Assigned Names and Numbers (ICANN), an umbrella body that coordinates country suffixes.

From the second version onwards, Conficker had come with a much more efficient option: peer-to-peer (P2P) communication. This technology, widely used to trade pirated copies of software and films, allows software to reach out and exchange signals with copies of itself.

Six days after the 1 April deadline, Conficker’s authors let loose a new version of the worm via P2P. With no central release point to target, security experts had no means of stopping it spreading through the worm’s network. The URL scam seems to have been little more than a wonderful way to waste the anti-hackers’ time and resources. “They said: you’ll have to look at 50,000 domains. But they never intended to use them,” says Joe Stewart of SecureWorks in Atlanta, Georgia. “They used peer-to-peer instead. They misdirected us.”

The latest worm release had a few tweaks, such as blocking the action of software designed to scan for its presence. But piggybacking on it was something more significant: the worm’s first moneymaking schemes. These were a spam program called Waledac and a fake antivirus package named Spyware Protect 2009.

The same goes for fake software: when the accounts of a Russian company behind an antivirus scam became public last year, it appeared that one criminal had earned more than $145,000 from it in just 10 days.

The limitations of Windows 7 on netbooks

From Farhad Manjoo’s “I, for One, Welcome Our New Android Overlords” (Slate: 5 June 2008):

Microsoft promises that Windows 7 will be able to run on netbooks, but it has announced a risky strategy to squeeze profits from these machines. The company plans to cripple the cheapest versions of the new OS in order to encourage PC makers to pay for premium editions. If you buy a netbook that comes with the low-priced Windows 7 Starter Edition, you won’t be able to change your screen’s background or window colors, you won’t be able to play DVDs, you can’t connect it to another monitor, and you won’t see many of the user-interface advances found in other versions. If you’d like more flexibility, you’ll need to upgrade to a more expensive version of Windows—which will, of course, defeat the purpose of your cheap PC. (Microsoft had originally planned to limit Starter Edition even further—you wouldn’t be able to run more than three programs at a time. It removed that limitation after howls of protest.)

Open source & patents

From Liz Laffan’s “GPLv2 vs GPLv3: The two seminal open source licenses, their roots, consequences and repercussions” (VisionMobile: September 2007):

Cumulatively patents have been doubling practically every year since 1990. Patents are now probably the most contentious issue in software-related intellectual property rights.

However we should also be aware that software written from scratch is as likely to infringe patents as FOSS covered software – due mainly to the increasing proliferation of patents in all software technologies. Consequently the risk of patent infringement is largely comparable whether one chooses to write one’s own software or use software covered by the GPLv2; one will most likely have to self-indemnify against a potential patent infringement claim in both cases.

The F.U.D. (fear, uncertainty and doubt) that surrounds patents in FOSS has been further heightened by two announcements, both instigated by Microsoft. Firstly, in November 2006 Microsoft and Novell entered into a cross-licensing patent agreement where Microsoft gave Novell assurances that it would not sue the company or its customers if they were to be found infringing Microsoft patents in the Novell Linux distribution. Secondly, in May 2007 Microsoft restated (having alluded to the same in 2004) that FOSS violates 235 Microsoft patents. Unfortunately, the Redmond giant did not state which patents in particular were being infringed, nor has it initiated any actions against a user or distributor of Linux.

The FOSS community have reacted to these actions by co-opting the patent system and setting up the Patent Commons (http://www.patentcommons.org). This initiative, managed by the Linux Foundation, coordinates and manages a patent commons reference library, documenting information about patent-related pledges in support of Linux and FOSS that are provided by large software companies. Moreover, software giants such as IBM and Nokia have committed not to assert patents against the Linux kernel and other FOSS projects. In addition, the FSF have strengthened the patent clause of GPLv3…

Microsoft Exchange is expensive

From Joel Snyder’s “Exchange: Should I stay or should I go?” (Network World: 9 March 2009):

There are many ways to buy Exchange, depending on how many users you need, but the short answer is that none of them cost less than about $75 per user and can run up to $140 per user for the bundles that include Exchange and Windows Server and user licenses for both of those as well as Forefront, Microsoft’s antispam/antivirus service. …

If you really want to make a case for cost, you could also claim that Exchange requires a $90 Outlook license for each user, a Windows XP or Vista license for each user, and more expensive hardware than a similar open source platform might require.
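
As a rough illustration of how those per-seat figures add up, here is a back-of-the-envelope sketch that uses only the numbers quoted above; the 500-user organization is an arbitrary example, and the calculation ignores Windows client licenses, hardware, and administration.

# Back-of-the-envelope licensing estimate built only from the figures
# quoted in the article; real pricing varies by agreement and edition.
users = 500
exchange_per_user_low, exchange_per_user_high = 75, 140
outlook_per_user = 90

low = users * (exchange_per_user_low + outlook_per_user)
high = users * (exchange_per_user_high + outlook_per_user)
print(f"{users} users: ${low:,} to ${high:,} before hardware and admin costs")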

Reasons Windows has a poor security architecture

From Daniel Eran Dilger’s “The Unavoidable Malware Myth: Why Apple Won’t Inherit Microsoft’s Malware Crown” (AppleInsider: 1 April 2008):

Thanks to its extensive use of battle-hardened Unix and open source software, Mac OS X also has always had security precautions in place that Windows lacked. It has also not shared the architectural weaknesses of Windows that have made that platform so easy to exploit and so difficult to clean up afterward, including:

  • the Windows Registry and the convoluted software installation mess related to it,
  • the Windows NT/2000/XP Interactive Services flaw opening up shatter attacks,
  • a wide open, legacy network architecture that left unnecessary, unsecured ports exposed by default,
  • poorly designed network sharing protocols that failed to account for adequate security measures,
  • poorly designed administrative messaging protocols that failed to account for adequate security,
  • poorly designed email clients that gave untrusted scripts access to spam one’s own contacts unwittingly,
  • an integrated web browser architecture that opened untrusted executables by design, and many others.

Conficker creating a new gargantuan botnet

From Asavin Wattanajantra’s “Windows worm could create the ‘world’s biggest botnet’” (IT PRO: 19 January 2009):

The Downadup or “Conficker” worm grew to over nine million infections over the weekend – up from 2.4 million in a four-day period, according to F-Secure.

The worm has password-cracking capabilities, which are often successful because company passwords sometimes match a predefined password list that the worm carries.

Corporate networks around the world have already been infected by the network worm, which is particularly hard to eradicate as it is able to evolve – making use of a long list of websites – by downloading another version of itself.

Rik Ferguson, solution architect at Trend Micro, told IT PRO that the worm was very difficult for security companies to block, as they had to make sure that they blocked every single one of the hundreds of domains that it could download from.

Ferguson said that the worm was creating a staggering amount of infections, even if just the most conservative infection estimates are taken into account. He said: “What’s particularly interesting about this worm is that it is the first hybrid with old school worm infection capabilities and command and control infrastructure.”

The end of Storm

From Brian Krebs’ “Atrivo Shutdown Hastened Demise of Storm Worm” (The Washington Post: 17 October 2008):

The infamous Storm worm, which powered a network of thousands of compromised PCs once responsible for sending more than 20 percent of all spam, appears to have died off. Security experts say Storm’s death knell was sounded by the recent shutdown of Atrivo, a California-based ISP that was home to a number of cyber-crime operations, including at least three of the master servers used to control the Storm network.

Three out of four of [Storm’s] control servers were located at Atrivo, a.k.a. Intercage, said Joe Stewart, a senior security researcher with Atlanta-based SecureWorks who helped unlock the secrets of the complex Storm network. The fourth server, he said, operated out of Hosting.ua, an Internet provider based in the Ukraine.

Stewart said the final spam run blasted out by Storm was on Sept. 18. Three days later, Atrivo was forced off the Internet after its sole remaining upstream provider — Pacific Internet Exchange (PIE) — decided to stop routing for the troubled ISP. In the weeks leading up to that disconnection, four other upstream providers severed connectivity to Atrivo, following detailed reports from Security Fix and Host Exploit that pointed to a massive amount of spam, malicious software and a host of other cyber criminal operations emanating from it.

Stewart said spam sent by the Storm network had been steadily decreasing throughout 2008, aided in large part by the inclusion of the malware in Microsoft’s malicious software removal tool, which has scrubbed Storm from hundreds of thousands of PCs since last fall. Stewart said it’s impossible to tell whether the Storm worm was disrupted by the Atrivo shutdown or if the worm’s authors pulled the plug themselves and decided to move on. But at least 30,000 systems remain infected with the Storm malware.

The end of Storm?

From “Storm Worm botnet cracked wide open” (Heise Security: 9 January 2009):

A team of researchers from Bonn University and RWTH Aachen University have analysed the notorious Storm Worm botnet, and concluded it certainly isn’t as invulnerable as it once seemed. Quite the reverse, for in theory it can be rapidly eliminated using software developed and at least partially disclosed by Georg Wicherski, Tillmann Werner, Felix Leder and Mark Schlösser. However it seems in practice the elimination process would fall foul of the law.

Over the last two years, Storm Worm has demonstrated how easily organised internet criminals have been able to spread this infection. During that period, the Storm Worm botnet has accumulated more than a million infected computers, known as drones or zombies, obeying the commands of a control server and using peer-to-peer techniques to locate new servers. Even following a big clean-up with Microsoft’s Malicious Software Removal Tool, around 100,000 drones probably still remain. That means the Storm Worm botnet is responsible for a considerable share of the Spam tsunami and for many distributed denial-of-service attacks. It’s astonishing that no one has succeeded in dismantling the network, but these researchers say it isn’t due to technical finesse on the part of the Storm Worm’s developers.

Existing knowledge of the techniques used by the Storm Worm has mainly been obtained by observing the behaviour of infected systems, but the researchers took a different approach to disarm it. They reverse translated large parts of the machine code of the drone client program and analysed it, taking a particularly close look at the functions for communications between drones and with the server.

Using this background knowledge, they were able to develop their own client, which links itself into the peer-to-peer structure of a Storm Worm network in such a way that queries from other drones, looking for new command servers, can be reliably routed to it. That enables it to divert drones to a new server. The second step was to analyse the protocol for passing commands. The researchers were astonished to find that the server doesn’t have to authenticate itself to clients, so using their knowledge they were able to direct drones to a simple server. The latter could then issue commands to the test Storm worm drones in the laboratory so that, for example, they downloaded a specific program from a server, perhaps a special cleaning program, and ran it. The students then went on to write such a program.
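
The key weakness is easy to illustrate: because drones never verify who is issuing commands, anyone who can answer their peer-to-peer lookups can direct them. The toy sketch below shows only that idea; it is not Storm’s real protocol, and the command names and URL are invented for illustration.

# Toy illustration of an unauthenticated command channel (NOT Storm's
# actual protocol). The "drone" acts on whatever instruction it is handed,
# because nothing proves the instruction came from its original operator.
def drone_handle(command: dict) -> str:
    # No signature or key check here, which is exactly the flaw the
    # researchers exploited: any server reachable via the P2P lookup
    # can issue commands, including a benign cleanup program.
    if command.get("action") == "download_and_run":
        return f"would fetch and execute {command['url']}"
    return "ignored"

# A researcher-controlled server could therefore push a cleaner:
print(drone_handle({"action": "download_and_run",
                    "url": "http://example.org/cleaner.exe"}))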

The team has not yet taken the final step of putting the whole thing into action with a genuine Storm Worm botnet in the wild. From a legal point of view, that could involve many problems. Any unauthorised access to third-party computers could be regarded as tampering with data, which is punishable under § 303a of the German Penal Code. That paragraph threatens up to two years’ imprisonment for unlawfully deleting, suppressing, making unusable or changing third-party data, although this legal process would only come into effect if there was a criminal complaint from an injured party, or if there was special public interest in the prosecution of the crime.

Besides risks of coming up against the criminal law, there is also a danger of civil claims for damages by the owners of infected PCs, because the operation might cause collateral damage. There are almost certain to be configurations in which the cleaning goes wrong, perhaps disabling computers so they won’t run any more. Botnet operators could also be expected to strike back, causing further damage.

Srizbi, Bobax, & Storm – the rankings

From Gregg Keizer’s “RSA – Top botnets control 1M hijacked computers” (Computerworld: 4 October 2008):

Joe Stewart, director of malware research at SecureWorks, presented his survey at the RSA Conference, which opened Monday in San Francisco. The survey ranked the top 11 botnets that send spam; by extrapolating their size, Stewart estimated the bots on his list control just over a million machines and are capable of flooding the Internet with more than 100 billion spam messages every day.

The botnet at the top of the chart is Srizbi. According to Stewart, this botnet — which also goes by the names “Cbeplay” and “Exchanger” — has an estimated 315,000 bots and can blast out 60 billion messages a day.

While it may not have gotten the publicity that Storm has during the last year, it’s built around a much more substantial collection of hijacked computers, said Stewart. In comparison, Storm’s botnet counts just 85,000 machines, only 35,000 of which are set up to send spam. Storm, in fact, is No. 5 on Stewart’s list.

“Storm is pretty insignificant at this point,” said Stewart. “It got all this attention, so Microsoft added it to its malicious software detection tool [in September 2007], and that’s removed hundreds of thousands of compromised PCs from the botnet.”

The second-largest botnet is “Bobax,” which boasts an estimated 185,000 hacked systems in its collection. Able to spam approximately nine billion messages a day, Bobax has been around for some time, but recently has been in the news again, albeit under one of its several aliases.
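
For a sense of scale, the per-bot throughput implied by those estimates works out roughly as follows; this is my arithmetic from the article’s totals, not a figure Stewart reported.

# Rough per-bot message rates implied by Stewart's figures.
srizbi_per_bot = 60_000_000_000 / 315_000   # ~190,000 messages per bot per day
bobax_per_bot = 9_000_000_000 / 185_000     # ~49,000 messages per bot per day
print(round(srizbi_per_bot), round(bobax_per_bot))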

Number of bots drops 20% on Christmas

From Robert Lemos’ “Bot-infected PCs get a refresh” (SecurityFocus: 28 December 2006):

On Christmas day, the number of bots tracked by the Shadowserver group dropped nearly 20 percent.

The dramatic decrease in weekly totals–from more than 500,000 infected systems to less than 400,000 computers–puzzled researchers. The Internet Storm Center, a threat monitoring group managed by the SANS Institute, confirmed a drop of about 10 percent.

One of the Internet Storm Center’s network monitoring volunteers posited that the decrease was due to the large number of computers given as gifts this Christmas. The systems running Microsoft Windows XP will be using Service Pack 2, which also means the firewall will be on by default, adding an additional hurdle for bot herders looking to reclaim their drones.

“Many of the infected machines are turned off, the new shiny ones have not been infected, and the Internet is momentarily a safer place,” Marcus Sachs, director of the ISC, stated in a diary entry. “But like you said, give it a few weeks and we’ll be right back to where we started from.”

ODF compared & contrasted with OOXML

From Sam Hiser’s “Achieving Openness: A Closer Look at ODF and OOXML” (ONLamp.com: 14 June 2007):

An open, XML-based standard for displaying and storing data files (text documents, spreadsheets, and presentations) offers a new and promising approach to data storage and document exchange among office applications. A comparison of the two XML-based formats–OpenDocument Format (“ODF”) and Office Open XML (“OOXML”)–across widely accepted “openness” criteria has revealed substantial differences, including the following:

  • ODF is developed and maintained in an open, multi-vendor, multi-stakeholder process that protects against control by a single organization. OOXML is less open in its development and maintenance, despite being submitted to a formal standards body, because control of the standard ultimately rests with one organization.
  • ODF is the only openly available standard, published fully in a document that is freely available and easy to comprehend. This openness is reflected in the number of competing applications in which ODF is already implemented. Unlike ODF, OOXML’s complexity, extraordinary length, technical omissions, and single-vendor dependencies combine to make alternative implementation unattractive as well as legally and practically impossible.
  • ODF is the only format unencumbered by intellectual property rights (IPR) restrictions on its use in other software, as certified by the Software Freedom Law Center. Conversely, many elements designed into the OOXML formats but left undefined in the OOXML specification require behaviors upon document files that only Microsoft Office applications can provide. This makes data inaccessible and breaks work group productivity whenever alternative software is used.
  • ODF offers interoperability with ODF-compliant applications on most of the common operating system platforms. OOXML is designed to operate fully within the Microsoft environment only. Though it will work elegantly across the many products in the Microsoft catalog, OOXML ignores accepted standards and best practices regarding its use of XML.

Overall, a comparison of both formats reveals significant differences in their levels of openness. While ODF is revealed as sufficiently open across all four key criteria, OOXML shows relative weakness in each criterion and exhibits fundamental flaws that undermine its candidacy as a global standard.
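
One practical consequence of ODF’s openness is that a document can be inspected with nothing more than a ZIP reader and an XML parser. A minimal sketch follows; “example.odt” is just a placeholder for any OpenDocument text file in the current directory.

# ODF files are ordinary ZIP archives containing well-defined XML parts.
# "example.odt" is a placeholder for any OpenDocument text file.
import zipfile
import xml.etree.ElementTree as ET

with zipfile.ZipFile("example.odt") as odt:
    content = odt.read("content.xml")   # the document body, stored as XML
    root = ET.fromstring(content)
    print(root.tag)                     # the office:document-content element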

My new book – Google Apps Deciphered – is out!

I’m really proud to announce that my 5th book is now out & available for purchase: Google Apps Deciphered: Compute in the Cloud to Streamline Your Desktop. My other books include:

(I’ve also contributed to two others: Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux and Microsoft Vista for IT Security Professionals.)

Google Apps Deciphered is a guide to setting up Google Apps, migrating to it, customizing it, and using it to improve productivity, communications, and collaboration. I walk you through each leading component of Google Apps individually, and then show you exactly how to make them work together, either on the Web or by integrating them with your favorite desktop apps. I provide practical insights on Google Apps programs for email, calendaring, contacts, wikis, word processing, spreadsheets, presentations, video, and even Google’s new web browser Chrome. My aim was to collect and present the tips and tricks I’ve gained by using and setting up Google Apps for clients, family, and friends.

Here’s the table of contents:

  • 1: Choosing an Edition of Google Apps
  • 2: Setting Up Google Apps
  • 3: Migrating Email to Google Apps
  • 4: Migrating Contacts to Google Apps
  • 5: Migrating Calendars to Google Apps
  • 6: Managing Google Apps Services
  • 7: Setting Up Gmail
  • 8: Things to Know About Using Gmail
  • 9: Integrating Gmail with Other Software and Services
  • 10: Integrating Google Contacts with Other Software and Services
  • 11: Setting Up Google Calendar
  • 12: Things to Know About Using Google Calendar
  • 13: Integrating Google Calendar with Other Software and Services
  • 14: Things to Know About Using Google Docs
  • 15: Integrating Google Docs with Other Software and Services
  • 16: Setting Up Google Sites
  • 17: Things to Know About Using Google Sites
  • 18: Things to Know About Using Google Talk
  • 19: Things to Know About Using Start Page
  • 20: Things to Know About Using Message Security and Recovery
  • 21: Things to Know About Using Google Video
  • Appendix A: Backing Up Google Apps
  • Appendix B: Dealing with Multiple Accounts
  • Appendix C: Google Chrome: A Browser Built for Cloud Computing

If you want to know more about Google Apps and how to use it, then I know you’ll enjoy and learn from Google Apps Deciphered. You can read about and buy the book at Amazon (http://www.amazon.com/Google-Apps-Deciphered-Compute-Streamline/dp/0137004702) for $26.39. If you have any questions or comments, don’t hesitate to contact me at scott at granneman dot com.

Many layers of cloud computing, or just one?

From Nicholas Carr’s “Further musings on the network effect and the cloud” (Rough Type: 27 October 2008):

I think O’Reilly did a nice job of identifying the different layers of the cloud computing business – infrastructure, development platform, applications – and I think he’s right that they’ll have different economic and competitive characteristics. One thing we don’t know yet, though, is whether those layers will in the long run exist as separate industry sectors or whether they’ll collapse into a single supply model. In other words, will the infrastructure suppliers also come to dominate the supply of apps? Google and Microsoft are obviously trying to play across all three layers, while Amazon so far seems content to focus on the infrastructure business and Salesforce is expanding from the apps layer to the development platform layer. The degree to which the layers remain, or don’t remain, discrete business sectors will play a huge role in determining the ultimate shape, economics, and degree of consolidation in cloud computing.

Let me end on a speculative note: There’s one layer in the cloud that O’Reilly failed to mention, and that layer is actually on top of the application layer. It’s what I’ll call the device layer – encompassing all the various appliances people will use to tap the cloud – and it may ultimately come to be the most interesting layer. A hundred years ago, when Tesla, Westinghouse, Insull, and others were building the cloud of that time – the electric grid – companies viewed the effort in terms of the inputs to their business: in particular, the power they needed to run the machines that produced the goods they sold. But the real revolutionary aspect of the electric grid was not the way it changed business inputs – though that was indeed dramatic – but the way it changed business outputs. After the grid was built, we saw an avalanche of new products outfitted with electric cords, many of which were inconceivable before the grid’s arrival. The real fortunes were made by those companies that thought most creatively about the devices that consumers would plug into the grid. Today, we’re already seeing hints of the device layer – of the cloud as output rather than input. Look at the way, for instance, that the little old iPod has shaped the digital music cloud.

How the Storm botnet defeats anti-virus programs

From Lisa Vaas’ “Storm Worm Botnet Lobotomizing Anti-Virus Programs” (eWeek: 24 October 2007):

According to an Oct. 22 posting by Sophos analyst Richard Cohen, the Storm botnet – Sophos calls it Dorf, and it’s also known as Ecard malware – is dropping files that call a routine that gets Windows to tell it every time a new process is started. The malware checks the process file name against an internal list and kills the ones that match – sometimes. But Storm has taken a new twist: It now would rather leave processes running and just patch entry points of loading processes that might pose a threat to it. Then, when processes such as anti-virus programs run, they simply return a value of 0.

The strategy means that users won’t be alarmed by their anti-virus software not running. Even more ominously, the technique is designed to fool NAC (network access control) systems, which bar insecure clients from registering on a network by checking to see whether a client is running anti-virus software and whether it’s patched.

It’s the latest evidence of why Storm is “the scariest and most substantial threat” security researchers have ever seen, he said. Storm is patient, it’s resilient, it’s adaptive in that it can defeat anti-virus products in multiple ways (programmatically, it changes its signature every 30 minutes), it’s invisible because it comes with a rootkit built in and hides at the kernel level, and it’s clever enough to change every few weeks.

Hence the hush-hush nature of research around Storm. Corman said he can tell us that it’s now accurately pegged at 6 million, but he can’t tell us who came up with the figure, or how. Besides retribution, Storm’s ability to morph means that those who know how to watch it are jealously guarding their techniques. “None of the researchers wanted me to say anything about it,” Corman said. “They’re afraid of retaliation. They fear that if we disclose their unique means of finding information on Storm,” the botnet herder will change tactics yet again and the window into Storm will slam shut.

An analysis of Google’s technology, 2005

From Stephen E. Arnold’s The Google Legacy: How Google’s Internet Search is Transforming Application Software (Infonortics: September 2005):

The figure Google’s Fusion: Hardware and Software Engineering shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub one-second response times that Google is able to maintain despite its surging growth in usage, applications and data processing.

Google is hardware plus software

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

How Google Is Different from MSN and Yahoo

Google’s technology is simultaneously just like other online companies’ technology, and very different. A data center is usually a facility owned and operated by a third party where customers place their servers. The staff of the data center manage the power, air conditioning and routine maintenance. The customer specifies the computers and components. When a data center must expand, the staff of the facility may handle virtually all routine chores and may work with the customer’s engineers for certain more specialized tasks.

Before looking at some significant engineering differences between Google and two of its major competitors, review this list of characteristics for a Google data center.

1. Google data centers – now numbering about two dozen, although no one outside Google knows the exact number or their locations. They come online and automatically, under the direction of the Google File System, start getting work from other data centers. These facilities, sometimes filled with 10,000 or more Google computers, find one another and configure themselves with minimal human intervention.

2. The hardware in a Google data center can be bought at a local computer store. Google uses the same types of memory, disc drives, fans and power supplies as those in a standard desktop PC.

3. Each Google server comes in a standard case called a pizza box with one important change: the plugs and ports are at the front of the box to make access faster and easier.

4. Google’s racks are assembled to hold servers on both their front and back sides. This effectively allows a standard rack, which normally holds 40 pizza-box servers, to hold 80.

5. A Google data center can go from a stack of parts to online operation in as little as 72 hours, unlike more typical data centers that can require a week or even a month to get additional resources online.

6. Each server, rack and data center works in a way that is similar to what is called “plug and play.” Like a mouse plugged into the USB port on a laptop, Google’s network of data centers knows when more resources have been connected. These resources, for the most part, go into operation without human intervention.

Several of these factors are dependent on software. This overlap between the hardware and software competencies at Google, as previously noted, illustrates the symbiotic relationship between these two different engineering approaches. At Google, from its inception, Google software and Google hardware have been tightly coupled. Google is not a software company nor is it a hardware company. Google is, like IBM, a company that owes its existence to both hardware and software. Unlike IBM, Google has a business model that is advertiser supported. Technically, Google is conceptually closer to IBM (at one time a hardware and software company) than it is to Microsoft (primarily a software company) or Yahoo! (an integrator of multiple softwares).

Software and hardware engineering cannot be easily segregated at Google. At MSN and Yahoo hardware and software are more loosely-coupled. Two examples will illustrate these differences.

Microsoft – with some minor excursions into the Xbox game machine and peripherals – develops operating systems and traditional applications. Microsoft has multiple operating systems, and its engineers are hard at work on the company’s next-generation of operating systems.

Several observations are warranted:

1. Unlike Google, Microsoft does not focus on performance as an end in itself. As a result, Microsoft gets performance the way most computer users do. Microsoft buys or upgrades machines. Microsoft does not fiddle with its operating systems and their subfunctions to get that extra time slice or two out of the hardware.

2. Unlike Google, Microsoft has to support many operating systems and invest time and energy in making certain that important legacy applications such as Microsoft Office or SQL Server can run on these new operating systems. Microsoft has a boat anchor tied to its engineers’ ankles. The boat anchor is the need to ensure that legacy code works in Microsoft’s latest and greatest operating systems.

3. Unlike Google, Microsoft has no significant track record in designing and building hardware for distributed, massively parallelised computing. The mice and keyboards were a success. Microsoft has continued to lose money on the Xbox, and the sudden demise of Microsoft’s entry into the home network hardware market provides more evidence that Microsoft does not have a hardware competency equal to Google’s.

Yahoo! operates differently from both Google and Microsoft. Yahoo! is in mid-2005 a direct competitor to Google for advertising dollars. Yahoo! has grown through acquisitions. In search, for example, Yahoo acquired 3721.com to handle Chinese language search and retrieval. Yahoo bought Inktomi to provide Web search. Yahoo bought Stata Labs in order to provide users with search and retrieval of their Yahoo! mail. Yahoo! also owns AllTheWeb.com, a Web search site created by FAST Search & Transfer. Yahoo! owns the Overture search technology used by advertisers to locate key words to bid on. Yahoo! owns Alta Vista, the Web search system developed by Digital Equipment Corp. Yahoo! licenses InQuira search for customer support functions. Yahoo has a jumble of search technology; Google has one search technology.

Historically Yahoo has acquired technology companies and allowed each company to operate its technology in a silo. Integration of these different technologies is a time-consuming, expensive activity for Yahoo. Each of these software applications requires servers and systems particular to each technology. The result is that Yahoo has a mosaic of operating systems, hardware and systems. Yahoo!’s problem is different from Microsoft’s legacy boat-anchor problem. Yahoo! faces a Balkan-states problem.

There are many voices, many needs, and many opposing interests. Yahoo! must invest in management resources to keep the peace. Yahoo! does not have a core competency in hardware engineering for performance and consistency. Yahoo! may well have considerable competency in supporting a crazy-quilt of hardware and operating systems, however. Yahoo! is not a software engineering company. Its engineers make functions from disparate systems available via a portal.

The figure below provides an overview of the mid-2005 technical orientation of Google, Microsoft and Yahoo.

2005 focuses of Google, MSN, and Yahoo

The Technology Precepts

… five precepts thread through Google’s technical papers and presentations. The following snapshots are extreme simplifications of complex, yet extremely fundamental, aspects of the Googleplex.

Cheap Hardware and Smart Software

Google approaches the problem of reducing the costs of hardware, set up, burn-in and maintenance pragmatically. A large number of cheap devices using off-the-shelf commodity controllers, cables and memory reduces costs. But cheap hardware fails.

In order to minimize the “cost” of failure, Google conceived of smart software that would perform whatever tasks were needed when hardware devices fail. A single device or an entire rack of devices could crash, and the overall system would not fail. More important, when such a crash occurs, no full-time systems engineering team has to perform technical triage at 3 a.m.

The focus on low-cost, commodity hardware and smart software is part of the Google culture.

Logical Architecture

Google’s technical papers do not describe the architecture of the Googleplex as self-similar. Google’s technical papers provide tantalizing glimpses of an approach to online systems that makes a single server share features and functions of a cluster of servers, a complete data center, and a group of Google’s data centers.

The collection of servers running Google applications on the Google version of Linux is a supercomputer. The Googleplex can perform mundane computing chores like taking a user’s query and matching it to documents Google has indexed. Furthermore, the Googleplex can perform side calculations needed to embed ads in the results pages shown to users, execute parallelized, high-speed data transfers like computers running state-of-the-art storage devices, and handle necessary housekeeping chores for usage tracking and billing.

When Google needs to add processing capacity or additional storage, Google’s engineers plug in the needed resources. Due to self-similarity, the Googleplex can recognize, configure and use the new resource. Google has an almost unlimited flexibility with regard to scaling and accessing the capabilities of the Googleplex.

In Google’s self-similar architecture, the loss of an individual device is irrelevant. In fact, a rack or a data center can fail without data loss or taking the Googleplex down. The Google operating system ensures that each file is written three to six times to different storage devices. When a copy of that file is not available, the Googleplex consults a log for the location of the copies of the needed file. The application then uses that replica of the needed file and continues with the job’s processing.
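
A hedged sketch of that replica-fallback behaviour follows; it is purely conceptual, not Google File System code, and the server names and chunk id are invented for the demonstration.

# Conceptual sketch of reading a chunk that has been replicated three to
# six times: the read succeeds as long as any one replica is reachable.
def read_chunk(chunk_id, replica_map, alive_servers, fetch):
    """Return the chunk's bytes from the first reachable replica."""
    for server in replica_map[chunk_id]:
        if server in alive_servers:
            return fetch(server, chunk_id)
    raise IOError(f"all replicas of {chunk_id} are unavailable")

# Tiny in-memory demo: srv-a is down, so the read falls through to srv-b.
replica_map = {"chunk-42": ["srv-a", "srv-b", "srv-c"]}
store = {("srv-b", "chunk-42"): b"payload"}
print(read_chunk("chunk-42", replica_map,
                 alive_servers={"srv-b", "srv-c"},
                 fetch=lambda s, c: store.get((s, c), b"")))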

Speed and Then More Speed

Google uses commodity pizza-box servers organized in a cluster. A cluster is a group of computers that are joined together to create a more robust system. Instead of using exotic servers with eight or more processors, Google generally uses servers that have two processors similar to those found in a typical home computer.

Through proprietary changes to Linux and other engineering innovations, Google is able to achieve supercomputer performance from components that are cheap and widely available.

… engineers familiar with Google believe that read rates may in some clusters approach 2,000 megabytes a second. When commodity hardware gets better, Google runs faster without paying a premium for that performance gain.

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed shortcuts to programming. An example is Google’s creation of a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google’s programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.
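
The gain is easiest to see with a sketch. Here Python’s standard multiprocessing pool merely stands in for Google’s internal library of canned routines: the programmer writes an ordinary function, and the library handles spreading the work across processors.

# Sketch of the "canned function" idea, using the standard library's
# multiprocessing.Pool as a stand-in for Google's internal routines.
from multiprocessing import Pool

def count_terms(document: str) -> int:
    # An ordinary single-machine function; no parallel code in sight.
    return len(document.split())

if __name__ == "__main__":
    documents = ["a short doc", "another somewhat longer document", "etc."]
    with Pool() as pool:              # the library distributes the work
        counts = pool.map(count_terms, documents)
    print(counts)                     # [3, 4, 1]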

Eliminate or Reduce Certain System Expenses

Some lucky investors jumped on the Google bandwagon early. Nevertheless, Google was frugal, partly by necessity and partly by design. The focus on frugality influenced many hardware and software engineering decisions at the company.

Drawbacks of the Googleplex

The Laws of Physics: Heat and Power 101

In reality, no one knows. Google has a rapidly expanding number of data centers. The data center near Atlanta, Georgia, is one of the newest deployed. This state-of-the-art facility reflects what Google engineers have learned about heat and power issues in its other data centers. Within the last 12 months, Google has shifted from concentrating its servers at about a dozen data centers, each with 10,000 or more servers, to about 60 data centers, each with fewer machines. The change is a response to the heat and power issues associated with larger concentrations of Google servers.

The most failure-prone components are:

  • Fans.
  • IDE drives, which fail at the rate of one per 1,000 drives per day (a quick estimate follows this list).
  • Power supplies, which fail at a lower rate.
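
That drive-failure rate is easier to appreciate with a quick estimate; the arithmetic is mine, and the 10,000-drive facility size is an assumed example, not a figure from the book.

# Quick estimate implied by the quoted failure rate (illustrative only).
drives = 10_000
failures_per_drive_per_day = 1 / 1_000
print(drives * failures_per_drive_per_day)   # about 10 failed drives per day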

Leveraging the Googleplex

Google’s technology is one major challenge to Microsoft and Yahoo. So to conclude this cursory and vastly simplified look at Google technology, consider these items:

1. Google is fast anywhere in the world.

2. Google learns. When the heat and power problems at dense data centers surfaced, Google introduced cooling and power conservation innovations to its two dozen data centers.

3. Programmers want to work at Google. “Google has cachet,” said one recent University of Washington graduate.

4. Google’s operating and scaling costs are lower than most other firms offering similar businesses.

5. Google squeezes more work out of programmers and engineers by design.

6. Google does not break down, or at least it has not gone offline since 2000.

7. Google’s Googleplex can deliver desktop-server applications now.

8. Google’s applications install and update without burdening the user with gory details and messy crashes.

9. Google’s patents provide basic technology insight pertinent to Google’s core functionality.
