Web Browsers

I for one welcome our new OS overlords: Google Chrome

As some of you may have heard, Google has announced its own web browser, Chrome. The Windows version is being released today, with Mac & Linux versions to follow.

To educate people about the new browser & its goals, Google released a 38-page comic book drawn by the brilliant Scott McCloud. It’s a really good read, but it gets a bit technical at times. However, someone did a “Reader’s Digest” version, which you can read here:

http://technologizer.com/2008/09/01/google-chrome-comic-the-readers-digest-version

I highly encourage you to read it. This browser is doing some very interesting, smart things. And it’s open source, so other browsers can use its code & ideas.

If you want to read the full comic, you can do so here:

http://www.google.com/googlebooks/chrome/

BTW … I don’t think Chrome has the potential to become the next big browser; I think instead it has the potential to become the next big operating system. See http://www.techcrunch.com/2008/09/01/meet-chrome-googles-windows-killer/ for more on that.

To combat phishing, change browser design philosophy

From Federico Biancuzzi’s “Phishing with Rachna Dhamija” (SecurityFocus: 19 June 2006):

We discovered that existing security cues are ineffective, for three reasons:

1. The indicators are ignored (23% of participants in our study did not look at the address bar, status bar, or any SSL indicators).

2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn’t realize that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.

3. The security indicators are trivial to spoof. Many users can’t distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate. This attack fooled more than 80% of participants. …

Currently, I’m working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study that shows that simply showing a user’s history information (for example, “you’ve been to this website many times” or “you’ve never submitted this form before”) can significantly increase a user’s ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I’ve been investigating is techniques to help users recover from errors and to identify when errors are real or when they are simulated. Many attacks rely on users not being able to make this distinction.

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

Rachna Dhamija: I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customized security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user’s focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password. …

With security skins, we were trying to solve not user authentication, but the reverse problem – server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they have mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like “Server X is authenticated”, or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user’s browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can’t be replayed by an attacker and it won’t reveal anything useful about the user’s password. …

Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can’t simulate is an interface he can’t predict. This is the principle that DSS relies on. We should make it easy for users to personalize their interfaces. Look at how popular screensavers, ringtones, and application skins are – users clearly enjoy the ability to personalize their interfaces. We can take advantage of this fact to build spoof resistant interfaces.
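
To make the skins idea concrete, here is a minimal sketch of how a browser might derive a matching abstract image from a negotiated session key. The function names, the SHA-256 step, and the hash-to-color mapping are my own illustration, not the actual visual-hash algorithm DSS uses:

```typescript
// A minimal sketch, not DSS's real visual-hash algorithm: browser and
// server each derive the same pattern from the mutually authenticated
// session key, so the two images match only if authentication succeeded.

async function deriveSkin(sessionKey: Uint8Array, cells = 64): Promise<string[]> {
  // Hash the key first so the displayed image reveals nothing useful
  // about the key or password if an attacker captures a screenshot.
  const digest = new Uint8Array(
    await crypto.subtle.digest("SHA-256", sessionKey),
  );
  const colors: string[] = [];
  for (let i = 0; i < cells; i++) {
    // Spread the 32 digest bytes deterministically across the grid.
    const b = digest[i % digest.length] ^ ((i * 37) & 0xff);
    colors.push(`hsl(${Math.round((b / 255) * 360)}, 70%, 55%)`);
  }
  return colors;
}

// Paint the derived pattern over the trusted password-entry window.
function renderSkin(canvas: HTMLCanvasElement, colors: string[]): void {
  const ctx = canvas.getContext("2d")!;
  const side = Math.sqrt(colors.length); // e.g. an 8x8 grid for 64 cells
  const w = canvas.width / side;
  const h = canvas.height / side;
  colors.forEach((c, i) => {
    ctx.fillStyle = c;
    ctx.fillRect((i % side) * w, Math.floor(i / side) * h, w, h);
  });
}
```

Because the pattern is a one-way function of the key, a captured image can’t be replayed and reveals nothing about the user’s password, which matches the properties Dhamija describes above.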

The structure & meaning of the URL as key to the Web’s success

From Clay Shirky’s “The Semantic Web, Syllogism, and Worldview”:

The systems that have succeeded at scale have made simple implementation the core virtue, up the stack from Ethernet over Token Ring to the web over gopher and WAIS. The most widely adopted digital descriptor in history, the URL, regards semantics as a side conversation between consenting adults, and makes no requirements in this regard whatsoever: sports.yahoo.com/nfl/ is a valid URL, but so is 12.0.0.1/ftrjjk.ppq. The fact that a URL itself doesn’t have to mean anything is essential — the Web succeeded in part because it does not try to make any assertions about the meaning of the documents it contained, only about their location.
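
Shirky’s point is easy to demonstrate with a standard URL parser; note that I’ve added the http:// scheme, which the URL constructor requires, to both of his examples:

```typescript
// Both of Shirky's examples are structurally valid URLs; parsing checks
// only location syntax (scheme, host, path), never the meaning of names.
const examples = ["http://sports.yahoo.com/nfl/", "http://12.0.0.1/ftrjjk.ppq"];
for (const u of examples) {
  const parsed = new URL(u);
  console.log(parsed.hostname, parsed.pathname); // a location, not an assertion
}
```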

The original description of Ajax

From Jesse James Garrett’s “Ajax: A New Approach to Web Applications”:

Ajax isn’t a technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

  • standards-based presentation using XHTML and CSS;
  • dynamic display and interaction using the Document Object Model;
  • data interchange and manipulation using XML and XSLT;
  • asynchronous data retrieval using XMLHttpRequest;
  • and JavaScript binding everything together.

The classic web application model works like this: Most user actions in the interface trigger an HTTP request back to a web server. The server does some processing — retrieving data, crunching numbers, talking to various legacy systems — and then returns an HTML page to the client. It’s a model adapted from the Web’s original use as a hypertext medium, but as fans of The Elements of User Experience know, what makes the Web good for hypertext doesn’t necessarily make it good for software applications. …

An Ajax application eliminates the start-stop-start-stop nature of interaction on the Web by introducing an intermediary — an Ajax engine — between the user and the server. It seems like adding a layer to the application would make it less responsive, but the opposite is true.

Instead of loading a webpage, at the start of the session, the browser loads an Ajax engine — written in JavaScript and usually tucked away in a hidden frame. This engine is responsible for both rendering the interface the user sees and communicating with the server on the user’s behalf. The Ajax engine allows the user’s interaction with the application to happen asynchronously — independent of communication with the server. So the user is never staring at a blank browser window and an hourglass icon, waiting around for the server to do something. …

Every user action that normally would generate an HTTP request takes the form of a JavaScript call to the Ajax engine instead. Any response to a user action that doesn’t require a trip back to the server — such as simple data validation, editing data in memory, and even some navigation — the engine handles on its own. If the engine needs something from the server in order to respond — if it’s submitting data for processing, loading additional interface code, or retrieving new data — the engine makes those requests asynchronously, usually using XML, without stalling a user’s interaction with the application.
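
Stripped to its essentials, the engine Garrett describes might look something like this sketch; the endpoint, element IDs, and function names are invented for illustration:

```typescript
// Sketch of the pattern described above: a user action becomes a JavaScript
// call to the engine, which answers locally when it can and talks to the
// server asynchronously when it must.

function validateLocally(field: string, value: string): boolean {
  // A response needing no server round-trip, e.g. simple data validation.
  return field !== "email" || /\S+@\S+\.\S+/.test(value);
}

function requestFromServer(url: string, onData: (doc: Document) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // true = asynchronous: the user keeps working
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200 && xhr.responseXML) {
      onData(xhr.responseXML); // update the page in place, no full reload
    }
  };
  xhr.send();
}

// Usage: the UI calls the engine instead of submitting a whole form.
if (validateLocally("email", "user@example.com")) {
  requestFromServer("/api/orders.xml", (doc) => {
    document.querySelector("#orders")!.textContent =
      doc.documentElement.textContent ?? "";
  });
}
```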

Create web sites with PONUR

From “Dive Into Mark”:

Web Content Accessibility Guidelines 2.0 working draft from April 24, 2002. Only 3 weeks old!

The overall goal is to create Web content that is Perceivable, Operable, Navigable, and Understandable by the broadest possible range of users and compatible with their wide range of assistive technologies, now and in the future.

  1. Perceivable. Ensure that all content can be presented in form(s) that can be perceived by any user – except those aspects of the content that cannot be expressed in words.
  2. Operable. Ensure that the interface elements in the content are operable by any user.
  3. Orientation/Navigation. Facilitate content orientation and navigation.
  4. Comprehendible. Make it as easy as possible to understand the content and controls.
  5. Technology Robust. Use Web technologies that maximize the ability of the content to work with current and future accessibility technologies and user agents.

I like that: perceivable, operable, navigable, understandable, and robust. That deserves to become a new acronym: PONUR.
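
As a rough sketch of what a few of those principles look like in practice (the field, IDs, and hint text are all my own illustration, not from the guidelines):

```typescript
// Hypothetical sketch: one form field that touches several principles -
// perceivable (a real <label>), operable (native, keyboard-friendly input),
// understandable (plain-language hint wired up via aria-describedby).

function addAccessibleEmailField(form: HTMLFormElement): void {
  const label = document.createElement("label");
  label.htmlFor = "email";
  label.textContent = "Email address"; // perceivable by screen readers too

  const input = document.createElement("input");
  input.type = "email";
  input.id = "email";
  input.required = true;
  input.setAttribute("aria-describedby", "email-hint");

  const hint = document.createElement("p");
  hint.id = "email-hint";
  hint.textContent = "We only use this to send your receipt."; // understandable

  form.append(label, input, hint);
}
```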

Test of Flock’s blog editor

I’m trying out a new web browser – Flock – which is basically Firefox with social software tools baked in, such as blogging, Flickr, del.icio.us, & so on. Basically, all the stuff I use anyway. So far, Flock looks pretty interesting, although I’m not at all sure that I would use it for my daily browsing. That said, I now tend to have both Firefox & Opera open at all times on my machine, so I may just add Flock as an option as well.

SSL in depth

I host Web sites, but we’ve only recently [2004] had to start implementing SSL, the Secure Sockets Layer, which turns http into https. I’ve been on the lookout for a good overview of SSL that explains why it is implemented as it is, and I think I’ve finally found one. “Secure Sockets Layer,” chapter 18 of Chris Shiflett’s HTTP Developer’s Handbook, is posted on his web site, and boy, is it good.

SSL has dramatically changed the way people use the Web, and it provides a very good solution to many of the Web’s shortcomings, most importantly:

  • Data integrity – SSL can help ensure that data (HTTP messages) cannot be changed while in transit.
  • Data confidentiality – SSL provides strong cryptographic techniques used to encrypt HTTP messages.
  • Identification – SSL can offer reasonable assurance as to the identity of a Web server. It can also be used to validate the identity of a client, but this is less common.
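
As a concrete illustration of the identification point, here is a small sketch in Node.js-flavored TypeScript that opens a TLS connection and inspects the certificate the server presents; the host name is just an example:

```typescript
// Sketch: the "identification" benefit in practice - connect over TLS and
// examine the server's certificate. "example.com" is illustrative.
import * as tls from "node:tls";

const socket = tls.connect(443, "example.com", { servername: "example.com" }, () => {
  // authorized is true only if the certificate chains to a trusted CA,
  // which is what gives "reasonable assurance" of the server's identity.
  console.log("verified:", socket.authorized);
  const cert = socket.getPeerCertificate();
  console.log("subject:", cert.subject?.CN, "valid until:", cert.valid_to);
  socket.end();
});
```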

Shiflett is a clear technical writer, and if this chapter is any indication, the rest of his book may be worth buying.

Mozilla fixes a bug … fast

One of the arguments anti-open sourcers often try to advance is that open source has just as many security holes as closed source software. On top of that, the anti-OSS folks go on to say that once open source software is as widely used as its closed source equivalents, it will suffer just as many attacks. Now, I’ve argued before that this is a wrong-headed attitude, at least as far as email viruses are concerned, and I think the fact that Apache is the most widely used Web server in the world, yet sees only a fraction of the constant stream of security disasters that IIS does, pretty much belies the argument.

Now a blogger named sacarny has created a timeline detailing a vulnerability that was found in Mozilla and the time it took to fix it. It starts on July 7, at 13:46 GMT, and ends on July 8, at 21:57 GMT – in other words, it took the Mozilla developers a little over 32 hours to fix a serious hole. And best of all, the whole process was open and documented. Sure, open source has bugs – all software does – but it tends to get fixed. Fast.
