How to deal with the fact that users can’t learn much about security

From Bruce Schneier’s “Second SHB Workshop Liveblogging (4)” (Schneier on Security: 11 June 2009):

Diana Smetters, Palo Alto Research Center …, started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs.

CCTV in your plane’s cabin?

From Michael Reilly’s “In-flight surveillance could foil terrorists in the sky” (New Scientist: 29 May 2008):

CCTV cameras are bringing more and more public places under surveillance – and passenger aircraft could be next.

A prototype European system uses multiple cameras and “Big Brother” software to try to automatically detect terrorists or other dangers posed by passengers.

The European Union’s Security of Aircraft in the Future European Environment (SAFEE) project uses a camera in every passenger’s seat, with six wide-angle cameras to survey the aisles. Software then analyses the footage to detect developing terrorist activity or “air-rage” incidents, by tracking passengers’ facial expressions.

“It looks for running in the cabin, standing near the cockpit for long periods of time, and other predetermined indicators that suggest a developing threat,” says James Ferryman of the University of Reading, UK, one of the system’s developers.

Other behaviours could include a person nervously touching their face or sweating excessively. A single such behaviour won’t trigger the system to alert the crew; only certain combinations of them will.
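A minimal sketch of the combination rule described above: no single indicator raises an alert, but a combination does. The indicator names and the threshold are hypothetical illustrations, not details from the SAFEE system.

```python
# Hypothetical behavioural indicators, loosely based on those quoted above.
SUSPICIOUS_INDICATORS = {
    "running_in_cabin",
    "loitering_near_cockpit",
    "touching_face_nervously",
    "excessive_sweating",
}

def should_alert(observed: set, min_combination: int = 2) -> bool:
    """A single indicator never triggers an alert; a combination does."""
    matched = observed & SUSPICIOUS_INDICATORS
    return len(matched) >= min_combination

# One behaviour alone is ignored; two together raise an alert.
print(should_alert({"excessive_sweating"}))                      # False
print(should_alert({"excessive_sweating", "running_in_cabin"}))  # True
```

Requiring a combination of indicators is one way to keep the false-alarm rate down, for exactly the reason Smetters gives in the first excerpt: a system that cries wolf gets ignored.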

The structure & meaning of the URL as key to the Web’s success

From Clay Shirky’s “The Semantic Web, Syllogism, and Worldview”:

The systems that have succeeded at scale have made simple implementation the core virtue, up the stack from Ethernet over Token Ring to the web over gopher and WAIS. The most widely adopted digital descriptor in history, the URL, regards semantics as a side conversation between consenting adults, and makes no requirements in this regard whatsoever: […] is a valid URL, but so is […]. The fact that a URL itself doesn’t have to mean anything is essential — the Web succeeded in part because it does not try to make any assertions about the meaning of the documents it contains, only about their location.
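Shirky’s point can be seen directly in how URL parsers work: the grammar constrains only structure (scheme, host, path, query), never meaning. A quick illustration with Python’s standard library, using made-up example.com URLs rather than the ones Shirky quoted:

```python
from urllib.parse import urlparse

# One URL with a human-readable path, one with an opaque path.
readable = urlparse("http://example.com/articles/semantic-web")
opaque = urlparse("http://example.com/x7Qz9a?id=0001")

# The parser treats both identically: it checks structure, not meaning.
for url in (readable, opaque):
    print(url.scheme, url.netloc, url.path)
```

Both parse without complaint, because nothing in the URL specification requires the path to describe its document.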

Terrorist social networks

From Technology Review’s “Terror’s Server”:

For example, research suggests that people with nefarious intent tend to exhibit distinct patterns in their use of e-mails or online forums like chat rooms. Whereas most people establish a wide variety of contacts over time, those engaged in plotting a crime tend to keep in touch only with a very tight circle of people, says William Wallace, an operations researcher at Rensselaer Polytechnic Institute.

This phenomenon is quite predictable. “Very few groups of people communicate repeatedly only among themselves,” says Wallace. “It’s very rare; they don’t trust people outside the group to communicate. When 80 percent of communications is within a regular group, this is where we think we will find the groups who are planning activities that are malicious.” Of course, not all such groups will prove to be malicious; the odd high-school reunion will crop up. But Wallace’s group is developing an algorithm that will narrow down the field of so-called social networks to those that warrant the scrutiny of intelligence officials. The algorithm is scheduled for completion and delivery to intelligence agencies this summer. …
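The 80-percent heuristic Wallace describes can be sketched in a few lines: given a log of who messaged whom, compute the fraction of a group’s outgoing messages that stay inside the group. The message log and group here are invented for illustration; Wallace’s actual algorithm is not public in this excerpt.

```python
def internal_fraction(messages, group):
    """Fraction of messages sent by group members that stay within the group.

    messages: list of (sender, recipient) pairs
    group: set of member identifiers
    """
    sent = [(s, r) for s, r in messages if s in group]
    if not sent:
        return 0.0
    internal = [(s, r) for s, r in sent if r in group]
    return len(internal) / len(sent)

# Hypothetical log: members a, b, c mostly talk among themselves;
# one message (a -> d) leaves the group.
messages = [
    ("a", "b"), ("b", "a"), ("a", "c"),
    ("c", "a"), ("b", "c"), ("a", "d"),
]
group = {"a", "b", "c"}

frac = internal_fraction(messages, group)
print(frac >= 0.8)  # True: 5 of 6 messages are internal, above the threshold
```

As the excerpt notes, a high internal fraction alone proves nothing malicious — the high-school reunion would score just as well — so a heuristic like this can only narrow the field for human scrutiny, not replace it.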