What Google’s book settlement means

From Robert Darnton’s “Google & the Future of Books” (The New York Review of Books: 12 February 2009):

As the Enlightenment faded in the early nineteenth century, professionalization set in. You can follow the process by comparing the Encyclopédie of Diderot, which organized knowledge into an organic whole dominated by the faculty of reason, with its successor from the end of the eighteenth century, the Encyclopédie méthodique, which divided knowledge into fields that we can recognize today: chemistry, physics, history, mathematics, and the rest. In the nineteenth century, those fields turned into professions, certified by Ph.D.s and guarded by professional associations. They metamorphosed into departments of universities, and by the twentieth century they had left their mark on campuses…

Along the way, professional journals sprouted throughout the fields, subfields, and sub-subfields. The learned societies produced them, and the libraries bought them. This system worked well for about a hundred years. Then commercial publishers discovered that they could make a fortune by selling subscriptions to the journals. Once a university library subscribed, the students and professors came to expect an uninterrupted flow of issues. The price could be ratcheted up without causing cancellations, because the libraries paid for the subscriptions and the professors did not. Best of all, the professors provided free or nearly free labor. They wrote the articles, refereed submissions, and served on editorial boards, partly to spread knowledge in the Enlightenment fashion, but mainly to advance their own careers.

The result stands out on the acquisitions budget of every research library: the Journal of Comparative Neurology now costs $25,910 for a year’s subscription; Tetrahedron costs $17,969 (or $39,739, if bundled with related publications as a Tetrahedron package); the average price of a chemistry journal is $3,490; and the ripple effects have damaged intellectual life throughout the world of learning. Owing to the skyrocketing cost of serials, libraries that used to spend 50 percent of their acquisitions budget on monographs now spend 25 percent or less. University presses, which depend on sales to libraries, cannot cover their costs by publishing monographs. And young scholars who depend on publishing to advance their careers are now in danger of perishing.

The eighteenth-century Republic of Letters had been transformed into a professional Republic of Learning, and it is now open to amateurs—amateurs in the best sense of the word, lovers of learning among the general citizenry. Openness is operating everywhere, thanks to “open access” repositories of digitized articles available free of charge, the Open Content Alliance, the Open Knowledge Commons, OpenCourseWare, the Internet Archive, and openly amateur enterprises like Wikipedia. The democratization of knowledge now seems to be at our fingertips. We can make the Enlightenment ideal come to life in reality.

What provoked these jeremianic-utopian reflections? Google. Four years ago, Google began digitizing books from research libraries, providing full-text searching and making books in the public domain available on the Internet at no cost to the viewer. For example, it is now possible for anyone, anywhere to view and download a digital copy of the 1871 first edition of Middlemarch that is in the collection of the Bodleian Library at Oxford. Everyone profited, including Google, which collected revenue from some discreet advertising attached to the service, Google Book Search. Google also digitized an ever-increasing number of library books that were protected by copyright in order to provide search services that displayed small snippets of the text. In September and October 2005, a group of authors and publishers brought a class action suit against Google, alleging violation of copyright. Last October 28, after lengthy negotiations, the opposing parties announced agreement on a settlement, which is subject to approval by the US District Court for the Southern District of New York.[2]

The settlement creates an enterprise known as the Book Rights Registry to represent the interests of the copyright holders. Google will sell access to a gigantic data bank composed primarily of copyrighted, out-of-print books digitized from the research libraries. Colleges, universities, and other organizations will be able to subscribe by paying for an “institutional license” providing access to the data bank. A “public access license” will make this material available to public libraries, where Google will provide free viewing of the digitized books on one computer terminal. And individuals also will be able to access and print out digitized versions of the books by purchasing a “consumer license” from Google, which will cooperate with the registry for the distribution of all the revenue to copyright holders. Google will retain 37 percent, and the registry will distribute 63 percent among the rightsholders.

Meanwhile, Google will continue to make books in the public domain available for users to read, download, and print, free of charge. Of the seven million books that Google reportedly had digitized by November 2008, one million are works in the public domain; one million are in copyright and in print; and five million are in copyright but out of print. It is this last category that will furnish the bulk of the books to be made available through the institutional license.

Many of the in-copyright and in-print books will not be available in the data bank unless the copyright owners opt to include them. They will continue to be sold in the normal fashion as printed books and also could be marketed to individual customers as digitized copies, accessible through the consumer license for downloading and reading, perhaps eventually on e-book readers such as Amazon’s Kindle.

After reading the settlement and letting its terms sink in—no easy task, as it runs to 134 pages and 15 appendices of legalese—one is likely to be dumbfounded: here is a proposal that could result in the world’s largest library. It would, to be sure, be a digital library, but it could dwarf the Library of Congress and all the national libraries of Europe. Moreover, in pursuing the terms of the settlement with the authors and publishers, Google could also become the world’s largest book business—not a chain of stores but an electronic supply service that could out-Amazon Amazon.

An enterprise on such a scale is bound to elicit reactions of the two kinds that I have been discussing: on the one hand, utopian enthusiasm; on the other, jeremiads about the danger of concentrating power to control access to information.

Google is not a guild, and it did not set out to create a monopoly. On the contrary, it has pursued a laudable goal: promoting access to information. But the class action character of the settlement makes Google invulnerable to competition. Most book authors and publishers who own US copyrights are automatically covered by the settlement. They can opt out of it; but whatever they do, no new digitizing enterprise can get off the ground without winning their assent one by one, a practical impossibility, or without becoming mired down in another class action suit. If approved by the court—a process that could take as much as two years—the settlement will give Google control over the digitizing of virtually all books covered by copyright in the United States.

Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

Google’s record suggests that it will not abuse its double-barreled fiscal-legal power. But what will happen if its current leaders sell the company or retire? The public will discover the answer from the prices that the future Google charges, especially the price of the institutional subscription licenses. The settlement leaves Google free to negotiate deals with each of its clients, although it announces two guiding principles: “(1) the realization of revenue at market rates for each Book and license on behalf of the Rightsholders and (2) the realization of broad access to the Books by the public, including institutions of higher education.”

What will happen if Google favors profitability over access? Nothing, if I read the terms of the settlement correctly. Only the registry, acting for the copyright holders, has the power to force a change in the subscription prices charged by Google, and there is no reason to expect the registry to object if the prices are too high. Google may choose to be generous in its pricing, and I have reason to hope it may do so; but it could also employ a strategy comparable to the one that proved to be so effective in pushing up the price of scholarly journals: first, entice subscribers with low initial rates, and then, once they are hooked, ratchet up the rates as high as the traffic will bear.


My new book – Google Apps Deciphered – is out!

I’m really proud to announce that my 5th book is now out & available for purchase: Google Apps Deciphered: Compute in the Cloud to Streamline Your Desktop. My other books include Don’t Click on the Blue E!: Switching to Firefox, Hacking Knoppix, and Linux Phrasebook.

(I’ve also contributed to two others: Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux and Microsoft Vista for IT Security Professionals.)

Google Apps Deciphered is a guide to setting up Google Apps, migrating to it, customizing it, and using it to improve productivity, communications, and collaboration. I walk you through each leading component of Google Apps individually, and then show exactly how to make them work together, whether on the Web or integrated with your favorite desktop apps. I provide practical insights on Google Apps programs for email, calendaring, contacts, wikis, word processing, spreadsheets, presentations, video, and even Google’s new web browser, Chrome. My aim was to collect and present the tips and tricks I’ve gained by using and setting up Google Apps for clients, family, and friends.

Here’s the table of contents:

  • 1: Choosing an Edition of Google Apps
  • 2: Setting Up Google Apps
  • 3: Migrating Email to Google Apps
  • 4: Migrating Contacts to Google Apps
  • 5: Migrating Calendars to Google Apps
  • 6: Managing Google Apps Services
  • 7: Setting Up Gmail
  • 8: Things to Know About Using Gmail
  • 9: Integrating Gmail with Other Software and Services
  • 10: Integrating Google Contacts with Other Software and Services
  • 11: Setting Up Google Calendar
  • 12: Things to Know About Using Google Calendar
  • 13: Integrating Google Calendar with Other Software and Services
  • 14: Things to Know About Using Google Docs
  • 15: Integrating Google Docs with Other Software and Services
  • 16: Setting Up Google Sites
  • 17: Things to Know About Using Google Sites
  • 18: Things to Know About Using Google Talk
  • 19: Things to Know About Using Start Page
  • 20: Things to Know About Using Message Security and Recovery
  • 21: Things to Know About Using Google Video
  • Appendix A: Backing Up Google Apps
  • Appendix B: Dealing with Multiple Accounts
  • Appendix C: Google Chrome: A Browser Built for Cloud Computing

If you want to know more about Google Apps and how to use it, then I know you’ll enjoy and learn from Google Apps Deciphered. You can read about and buy the book at Amazon (http://www.amazon.com/Google-Apps-Deciphered-Compute-Streamline/dp/0137004702) for $26.39. If you have any questions or comments, don’t hesitate to contact me at scott at granneman dot com.


Many layers of cloud computing, or just one?

From Nicholas Carr’s “Further musings on the network effect and the cloud” (Rough Type: 27 October 2008):

I think O’Reilly did a nice job of identifying the different layers of the cloud computing business – infrastructure, development platform, applications – and I think he’s right that they’ll have different economic and competitive characteristics. One thing we don’t know yet, though, is whether those layers will in the long run exist as separate industry sectors or whether they’ll collapse into a single supply model. In other words, will the infrastructure suppliers also come to dominate the supply of apps? Google and Microsoft are obviously trying to play across all three layers, while Amazon so far seems content to focus on the infrastructure business and Salesforce is expanding from the apps layer to the development platform layer. The degree to which the layers remain, or don’t remain, discrete business sectors will play a huge role in determining the ultimate shape, economics, and degree of consolidation in cloud computing.

Let me end on a speculative note: There’s one layer in the cloud that O’Reilly failed to mention, and that layer is actually on top of the application layer. It’s what I’ll call the device layer – encompassing all the various appliances people will use to tap the cloud – and it may ultimately come to be the most interesting layer. A hundred years ago, when Tesla, Westinghouse, Insull, and others were building the cloud of that time – the electric grid – companies viewed the effort in terms of the inputs to their business: in particular, the power they needed to run the machines that produced the goods they sold. But the real revolutionary aspect of the electric grid was not the way it changed business inputs – though that was indeed dramatic – but the way it changed business outputs. After the grid was built, we saw an avalanche of new products outfitted with electric cords, many of which were inconceivable before the grid’s arrival. The real fortunes were made by those companies that thought most creatively about the devices that consumers would plug into the grid. Today, we’re already seeing hints of the device layer – of the cloud as output rather than input. Look at the way, for instance, that the little old iPod has shaped the digital music cloud.


Bruce Schneier on wholesale, constant surveillance

From Stephen J. Dubner’s interview with Bruce Schneier in “Bruce Schneier Blazes Through Your Questions” (The New York Times: 4 December 2007):

There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.


Tim O’Reilly defines cloud computing

From Tim O’Reilly’s “Web 2.0 and Cloud Computing” (O’Reilly Radar: 26 October 2008):

Since “cloud” seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

1. Utility computing. Amazon’s success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing.

This is the layer at which I don’t presently see any strong network effect benefits (yet). Other than a rise in Amazon’s commitment to the business, neither early adopter Smugmug nor any of its users get any benefit from the fact that thousands of other application developers have their work now hosted on AWS. If anything, they may be competing for the same resources.

That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep — very deep — well of complex APIs that bound developers ever-tighter to their platform.

So far, most of the tools and higher-level APIs for AWS are being developed by third parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce’s force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

The key question at this level remains: are there advantages to developers in one of these platforms from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don’t see that with AppEngine. What’s more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that’s because developers are afraid of Google, or because the APIs that Google has provided don’t give enough control and ownership for serious applications. I’d love your thoughts on this subject.

3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term “cloud” more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as “cloud applications” in a way that they don’t think of Google search or Google maps.

This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of “my” data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you’re a step back in the direction of commodity computing.

Ideally, the user’s data becomes more valuable because it is in the same space as other users’ data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe’s bookstore, why a listing on the first results page of Google’s search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that’s what force.com has already done, and thus represents the shape of things. It’s a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.


Amazon’s infrastructure and the cloud

From Spencer Reiss’ “Cloud Computing. Available at Amazon.com Today” (Wired: 21 April 2008):

Almost a third of [Amazon]’s total number of sales last year were made by people selling their stuff through the Amazon machine. The company calls them seller-customers, and there are 1.3 million of them.

Log in to Amazon’s gateway today and more than 100 separate services leap into action, crunching data, comparing alternatives, and constructing a totally customized page (all in about 200 milliseconds).
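
To make that fan-out concrete, here is a minimal shell sketch of the same pattern: send requests to several backend services in parallel, wait for all of them, and assemble the results into a single response. The host and service names are invented for illustration – this is not Amazon’s actual architecture or code:

    # Query several (hypothetical) backend services in parallel
    for svc in recommendations pricing reviews; do
        curl -s "http://backend.example.com/$svc?item=123" > "/tmp/$svc.json" &
    done
    wait    # block until every background request has finished
    # Stitch the pieces together into one customized "page"
    cat /tmp/recommendations.json /tmp/pricing.json /tmp/reviews.json > /tmp/page.json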

Developers get a cheap, instant, essentially limitless computing cloud.


My new book – Linux Phrasebook – is out!

I’m really proud to announce that my 3rd book is now out & available for purchase: Linux Phrasebook. My first book – Don’t Click on the Blue E!: Switching to Firefox – was for general readers (really!) who wanted to learn how to move to and use the fantastic Firefox web browser. I included a lot of great information for more technical users as well, but the focus was your average Joe. My second book – Hacking Knoppix – was for the more advanced user who wanted to take advantage of Knoppix, a version of Linux that runs entirely off of a CD. You don’t need to be super-technical to use and enjoy Hacking Knoppix, but the more technical you are, the more you’ll enjoy the book. Linux Phrasebook is all about the Linux command line, and it’s perfect for both Linux newbies and experienced users. In fact, when I was asked to write the book, I responded, “Write it? I can’t wait to buy it!”

The idea behind Linux Phrasebook is to give practical examples of Linux commands and their myriad options, with examples for everything. Too often a Linux user will look up a command in order to discover how it works, and while the command and its many options will be detailed, something vitally important will be left out: examples. That’s where Linux Phrasebook comes in. I cover a huge number of different commands and their options, and for every single one, I give an example of usage and results that makes it clear how to use it.
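
To give a flavor of that approach, here’s the sort of pairing the book aims for – command, options, and output together. This example is mine, not an excerpt, and the output shown is representative rather than literal:

    # ls -lh: long listing with human-readable file sizes
    $ ls -lh /var/log
    -rw-r----- 1 syslog adm 1.2M Dec  1 09:14 syslog
    -rw-r----- 1 syslog adm 340K Nov 30 23:59 auth.log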

Here’s the table of contents; in parentheses I’ve included some (just some) of the commands I cover in each chapter:

  1. Things to Know About Your Command Line
  2. The Basics (ls, cd, mkdir, cp, mv, rm)
  3. Learning About Commands (man, info, whereis, apropos)
  4. Building Blocks (;, &&, |, >, >>)
  5. Viewing Files (cat, less, head, tail)
  6. Printing and Managing Print Jobs (lpr, lpq, lprm)
  7. Ownerships and Permissions (chgrp, chown, chmod)
  8. Archiving and Compression (zip, gzip, bzip2, tar)
  9. Finding Stuff: Easy (grep, locate)
  10. The find command (find)
  11. Your Shell (history, alias, set)
  12. Monitoring System Resources (ps, lsof, free, df, du)
  13. Installing Software (rpm, dpkg, apt-get, yum)
  14. Connectivity (ping, traceroute, route, ifconfig, iwconfig)
  15. Working on the Network (ssh, sftp, scp, rsync, wget)
  16. Windows Networking (nmblookup, smbclient, smbmount)

I’m really proud of the whole book, but the chapter on the super-powerful and useful find command is a standout, along with the material on ssh and its descendants sftp and scp. But really, the whole book is great, and I will definitely be keeping a copy on my desk as a reference. If you want to know more about the Linux command line and how to use it, then I know you’ll enjoy and learn from Linux Phrasebook.
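
By way of illustration – these are my own quick examples, not excerpts from the book – find and scp are exactly the kinds of commands that reward this treatment:

    # Find regular files under the current directory modified in the last day
    $ find . -type f -mtime -1

    # Find *.tmp files under /tmp and delete them, prompting before each removal
    $ find /tmp -name '*.tmp' -ok rm {} \;

    # Copy a file to a remote machine securely over ssh
    $ scp report.pdf user@server.example.com:/home/user/docs/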

You can read about and buy the book at Amazon (http://www.amazon.com/gp/product/0672328380/) for $10.19. If you have any questions or comments, don’t hesitate to contact me at scott at granneman dot com.


Ubuntu Hacks available now

The Ubuntu distribution simplifies Linux by providing a sensible collection of applications, an easy-to-use package manager, and lots of fine-tuning, which make it possibly the best Linux for desktops and laptops. Readers of both Linux Journal and TUX Magazine confirmed this by voting Ubuntu as the best Linux distribution in each publication’s 2005 Readers’ Choice Awards. None of that simplification, however, makes Ubuntu any less fun if you’re a hacker or a power user.

Like all books in the Hacks series, Ubuntu Hacks includes 100 quick tips and tricks for all users of all technical levels. Beginners will appreciate the installation advice and tips on getting the most out of the free applications packaged with Ubuntu, while intermediate and advanced readers will learn the ins-and-outs of power management, wireless roaming, 3D video acceleration, server configuration, and much more.

I contributed 10 of the 100 hacks in this book, including information on the following topics:

  • Encrypt Your Email and Important Files
  • Surf the Web Anonymously
  • Keep Windows Malware off Your System
  • Mount Removable Devices with Persistent Names
  • Mount Remote Directories Securely and Easily (see the sketch after this list)
  • Make Videos of Your Tech-Support Questions
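
As a taste of the “Mount Remote Directories Securely and Easily” hack: one common approach uses sshfs to mount a remote directory over ssh. This is a minimal sketch, assuming the sshfs package is installed; the host and paths are placeholders, not the book’s exact steps:

    # Mount a remote directory locally over ssh via FUSE
    $ mkdir -p ~/remote-docs
    $ sshfs user@server.example.com:/home/user/docs ~/remote-docs

    # ...work with the files as if they were local...

    # Unmount when finished
    $ fusermount -u ~/remote-docs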

I’ve been using K/Ubuntu for over a year (heck, it’s only two years old!), and it’s the best distro I’ve ever used. I was really excited to contribute my 10 hacks to Ubuntu Hacks, as this is definitely a book any advanced Linux user would love.

Buy Ubuntu Hacks from Amazon!


My new book – Hacking Knoppix – available now

Knoppix is one of the great innovations in open source software in the last few years. Everyone who sees it wants to use it, since it is that rarest of software tools: the true Swiss Army Knife, capable of use by unsophisticated, experienced, and wizardly users, able to perform any of several hundred (if not a thousand) tasks in an efficient and powerful way. Best of all, it’s super easy to employ, ultra-portable, and platform- and hardware-agnostic.

Knoppix camps on your system without canceling out your regular installation or messing with your files. And it’s really fun to play with. Hacking Knoppix provides all kinds of ways to customize Knoppix for your particular needs, plus the scoop on various Knoppix distros. Learn to build a Knoppix first-aid kit for repairing cranky Windows and rescuing precious data, or create your own Live CD. In short, Hacking Knoppix will transform your ordinary powerless Knoppix-curious individual into a fearsome Knoppix ninja, able to right wrongs, recover data, and vanquish the forces of ignorance and Windows usage once and for all.

Our approach in Hacking Knoppix is smart, detailed, and fun. We know our stuff, and we want our readers to understand and enjoy all the outrageously cool things that Knoppix makes possible. If a topic is kind of hard to understand, we’ll explain it so that less experienced readers get it and more experienced readers still learn something new; if a point needs in-depth explanation, we’ll give it in an interesting fashion; and if it needs a splash of humor to relieve the tedium, we’ll slip in something humorous, like a banana peel in front of Bill Gates.

  • Knoppix is an innovative Linux distribution that does not require installation, making it ideal to use for a rescue system, demonstration purposes, or many other applications
  • Shows hack-hungry fans how to fully customize Knoppix and Knoppix-based distributions
  • Readers will learn to create two different Knoppix-based live CDs, one for children and one for Windows recovery
  • Teaches readers to use Knoppix to work from a strange computer, rescue a Windows computer that won’t boot, repair and recover data from other machines, and more
  • Includes Knoppix Light 4.0 on a ready-to-use, bootable live CD

Read sample excerpts, including Unraveling the Knoppix Toolkit Maze (1.7 MB PDF), the complete Table of Contents (135 kb PDF) & the Index (254 kb PDF).

Buy Hacking Knoppix from Amazon!


My first book – Don’t Click on the Blue E! – is out!

For all those surfers who have slowly grown disenchanted with Microsoft’s Internet Explorer web browser, Don’t Click on the Blue E! from O’Reilly is here to help. It offers non-technical users a convenient roadmap for switching to a better web browser – Firefox.

The only book that covers the switch to Firefox, Don’t Click on the Blue E! is a must for anyone who wants to browse faster, more securely, and more efficiently. It takes readers through the process step-by-step, so it’s easy to understand. Schools, non-profits, businesses, and individuals can all benefit from this how-to guide.

Firefox includes most of the features that browser users are familiar with, along with several new features other browsers don’t have, such as a bookmarks toolbar and window tabs that allow users to quickly switch among several web sites. There is also the likelihood of better security with Firefox.

All indications say that Firefox is more than just a passing fad. With USA Today and Forbes Magazine hailing it as superior to Internet Explorer, Firefox is clearly the web browser of the future. In fact, 22% of the market already employs Firefox for its browsing.

Don’t Click on the Blue E! has been written exclusively for this growing audience. With its straightforward approach, it helps people harness this emerging technology so they can enjoy a superior – and safer – browsing experience.

Read two sample excerpts: Counteracting Web Annoyances (651 kb PDF) & Safety and Security (252 kb PDF).

Translated into Japanese!

Buy Don’t Click on the Blue E! from Amazon!
