
9 reasons the Storm botnet is different

From Bruce Schneier’s “Gathering ‘Storm’ Superworm Poses Grave Threat to PC Nets” (Wired: 4 October 2007):

Storm represents the future of malware. Let’s look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. …

3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. …

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. …

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect. …
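[Note from Scott: the distributed C2 idea in point 4 can be sketched in code. This is a toy gossip-propagation model, not Storm’s actual protocol; all names (`gossip_broadcast`, `fanout`, etc.) are mine, chosen for illustration.]

```python
import random

def gossip_broadcast(peers, fanout=3, seed_count=2, rounds=10):
    """Toy epidemic-style propagation: with no central server, a
    command injected at a couple of peers spreads because each
    informed peer relays it to a few random neighbors per round."""
    informed = set(random.sample(sorted(peers), seed_count))
    for _ in range(rounds):
        newly = set()
        for node in informed:
            # Each node contacts only a handful of peers, so no
            # single host produces a detectable traffic spike.
            for target in random.sample(sorted(peers), fanout):
                if target not in informed:
                    newly.add(target)
        informed |= newly
        if len(informed) == len(peers):
            break
    return informed

peers = {f"bot{i}" for i in range(500)}
reached = gossip_broadcast(peers)
# Nearly the whole population learns the command, yet there is no
# central rendezvous point for defenders to seize or block.
```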

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” …
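[Note from Scott: a toy model of why fast flux frustrates IP blocklisting. This is not Storm’s code; the class and parameter names are hypothetical, but the mechanism — short-TTL answers drawn from a huge pool of compromised hosts — matches the technique described above.]

```python
import random

class FastFluxResolver:
    """Toy model of a fast-flux DNS zone: each lookup returns a
    different small slice of a large pool of compromised-host IPs,
    with a TTL so short that caches expire almost immediately."""

    def __init__(self, ip_pool, answers_per_query=3, ttl_seconds=180):
        self.ip_pool = list(ip_pool)
        self.answers_per_query = answers_per_query
        self.ttl_seconds = ttl_seconds

    def resolve(self, hostname):
        # A real flux network rotates through thousands of proxies;
        # here we just sample a few from the pool per query.
        ips = random.sample(self.ip_pool, self.answers_per_query)
        return {"host": hostname, "ttl": self.ttl_seconds, "a_records": ips}

pool = [f"10.0.{i // 256}.{i % 256}" for i in range(2000)]
resolver = FastFluxResolver(pool)

first = resolver.resolve("c2.example.invalid")
second = resolver.resolve("c2.example.invalid")
# Back-to-back lookups return different answer sets, so blocking
# the IPs seen in any one lookup accomplishes very little.
```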

6. Storm’s payload — the code it uses to spread — morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.
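[Note from Scott: a harmless toy illustration of why byte-signature matching fails against a morphing payload. This is not Storm’s packer — real polymorphic code is far more involved — but re-encoding an identical byte string under a fresh key shows the core problem: the behavior is unchanged while every byte-level signature differs.]

```python
import hashlib
import os

def xor_encode(payload: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR encoding, a stand-in for the re-packing
    a polymorphic worm applies to its spreading code. Applying the
    same key twice restores the original bytes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

payload = b"identical underlying logic"

# Re-encoding with a fresh random key on each interval changes the
# on-disk bytes even though the decoded payload is identical.
variant_a = xor_encode(payload, os.urandom(8))
variant_b = xor_encode(payload, os.urandom(8))

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()
# A signature written against one variant will not match the next.
```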

7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites — anything to entice users to click on a phony link. …

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. …

9. Last month, Storm began attacking anti-spam sites focused on identifying it — spamhaus.org, 419eater and so on — and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.


The latest on electronic voting machines

From James Turner’s interview with Dr. Barbara Simons, past President of the Association for Computing Machinery & recent appointee to the Advisory Board of the Federal Election Assistance Commission, at “A 2008 e-Voting Wrapup with Dr. Barbara Simons” (O’Reilly Media: 7 November 2008):

[Note from Scott: headers added by me]

Optical Scan: Good & Bad

And most of the voting in Minnesota was done on precinct-based optical scan machines: a paper ballot which is then fed into the optical scanner at the precinct. And the good thing about that is it gives the voter immediate feedback if there is any problem, such as over-voting, voting twice in a single race.

Well, there are several problems. First of all, as you say, because these things have computers in them they can be mis-programmed; there can be software bugs. You could conceivably have malicious code. You could have the machines give you a different count from the right one. There was a situation back in the 2004 race where Gephardt, in one of the primaries, received a large number of votes after he had withdrawn from the race. And this was done using paper ballots, using optical scan paper ballots. I don’t know if it was this particular brand or not. And when they were recounted it was discovered that in fact that was the wrong result; that he had gotten fewer votes. Now I never saw an explanation for what happened, but my guess is that whoever programmed these machines had mistakenly assigned the slot that was for Kerry to Gephardt and the slot that was for Gephardt to Kerry; that’s my guess. Now I don’t know if that’s true, but if that did happen I think there’s very little reason to believe it was malicious, because there was really nothing to be gained by doing that. So I think it was just an honest error, but of course errors can occur.

DRE Studies

Ohio conducted a major study of electronic voting machines called the Everest Study, which was commissioned by the current Secretary of State, Jennifer Brunner, and this study uncovered huge problems with most of these voting systems, these touch screen voting systems. They were found to be insecure, unreliable, and difficult to use. A similar study had been conducted in California not too much earlier, called the Top to Bottom Review, and the Ohio study confirmed all of the problems that had been uncovered in California and found additional problems, so based on that there was a push to get rid of a lot of these machines.

States Using DREs

Maryland and Georgia are entirely touch screen states, and so is New Jersey. In Maryland they’re supposed to replace them with optical scan paper ballots by 2010, but there’s some concern that there may not be the funding to do that. In fact, Maryland and Georgia both use Diebold (now called Premier) paperless touch screen voting machines. Georgia started using them in 2002, and that’s the race in which Max Cleland, the Democratic Senator and disabled Vietnam War vet, was defeated, and I know that there are some people who questioned the outcome of that race because the polls had shown him winning. And because those machines are paperless, there was no way to check the outcome. Another thing that was of concern in Georgia in 2002 was that there were last-minute software patches being added to the machines just before the election, and the software patches hadn’t really been inspected by any kind of independent agency.

More on Optical Scans

Well, I think scanned ballots certainly give you a good paper trail, the kind of paper trail you want. Though it’s not really a paper trail; it’s paper ballots, because they are the ballots. What you want is for it to be easy to audit and recount an election. And I think that’s something that people really hadn’t taken into consideration early on, when a lot of these machines were first designed and purchased.

Disabilities

One of the things that was investigated in California when they did the Top to Bottom Review was just how easy it is for people with disabilities to use these touch screen machines. Nobody had ever done that before, and the test results came back very negative. If you look at the California results, they’re very negative on these touch screen machines. In many cases people in wheelchairs had a very difficult time operating them correctly, and people who were blind sometimes had trouble understanding what was being said, or things were said too loudly or too softly, or they would get confused about the instructions, or some of the ways they had for manually inputting their votes were confusing.

There are these things called Ballot Generating Devices, which are not what we generally refer to as touch screen machines, although they can be touch screen. The most widely used one is called the AutoMARK. The way the AutoMARK works is you take a paper ballot, one of these optical scan ballots, and you insert it into the AutoMARK, and then it operates much the same way that these other, potentially paperless, touch screen machines work. It has a headset so that a blind voter can use it, and it’s possible for somebody in a wheelchair to vote, although in fact you don’t have to use this if you’re in a wheelchair; you can clearly vote optical scan. Somebody who has severe mobility impairments can vote on these machines using a sip-puff device, where a sip is a zero or a one and a puff is the opposite, or a yes or a no. The AutoMARK was designed with people with disabilities in mind from early on, and it fared much better in the California tests. What it does is, at the end, when the voter with disabilities is finished, he or she will say okay, cast my ballot. At that point the AutoMARK simply marks the optical scan ballot; it just marks it. And then you have an optical scan ballot that can be read by an optical scanner. There should be no problems with it, because it’s been generated by a machine. And you have a paper ballot that can be recounted.

Problems with DREs vs Optical Scans

There are a couple of things to keep in mind when thinking about replacing these systems. The first is that with these direct recording electronic systems, or touch screen systems as they’re called, the states and localities that buy them have to have maintenance contracts with the vendors, because they’re very complicated systems to maintain, and of course the software is secret. So some of these contracts are quite costly, and these are ongoing expenses with these machines. In addition, because they have software in them, they have to be securely stored and securely delivered, and those requirements create enormous problems, especially when you have to worry about delivering large numbers of machines to polling places prior to the election. Frequently these machines end up staying in people’s garages or in churches for periods of time when they’re relatively insecure.

And you need far fewer scanners, and the security issues with scanners are not as great, because you can do an audit and a recount. So altogether it just seems to me that the way to go is to move to paper-based optical scan systems with precinct scanners, so that the voter gets feedback on the ballot: if the voter votes twice for President, the ballot is kicked out and the voter can vote a new ballot.
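[Note from Scott: the precinct-scanner feedback Dr. Simons describes is easy to model. This is my own toy sketch, not any vendor’s logic; `check_ballot` and its parameters are hypothetical names.]

```python
def check_ballot(ballot, limits):
    """Toy precinct-scanner check: reject a ballot on the spot if
    any contest is over-voted, so the voter can fix the problem
    and cast a fresh ballot instead of losing the vote.

    ballot: dict mapping contest name -> list of marked choices
    limits: dict mapping contest name -> max selections allowed
    """
    problems = []
    for contest, marks in ballot.items():
        allowed = limits.get(contest, 1)
        if len(marks) > allowed:
            problems.append(f"over-vote in {contest}: "
                            f"{len(marks)} marks, {allowed} allowed")
    return {"accepted": not problems, "problems": problems}

limits = {"President": 1, "School Board": 2}

ok = check_ballot({"President": ["A"], "School Board": ["X", "Y"]}, limits)
bad = check_ballot({"President": ["A", "B"]}, limits)
# ok is accepted; bad is rejected, and a precinct scanner would
# return the rejected ballot to the voter rather than count it.
```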

And as I say, there is the AutoMARK for voters with disabilities to use; there’s also another system called Populex, but that’s not as widely used as the AutoMARK. There could be new systems coming forward.

1/2 of DREs Broken in Pennsylvania on Election Day

Editor’s Note: Dr. Simons wrote me later to say: “Many Pennsylvania polling places opened on election day with half or more of their voting machines broken — so they used emergency paper ballots until they could fix their machines.”


The X Window System defined

From Ellen Siever’s “What Is the X Window System” (O’Reilly Media: 25 August 2005):

X was intentionally designed to provide the low-level mechanism for managing the graphics display, but not to have any control over what is displayed. This means that X has never been locked into a single way of doing things; instead, it has the flexibility to be used in many different ways. Both the simplest window manager and the most complex desktop environment can, and do, use the X Window System to manage the display.

When you run the X Window System, the X server manages the display, based on requests from the window manager. The window manager is an application that is itself an X client, with responsibility for managing the appearance and placement of the windows on the screen.

X itself has no role in determining the appearance of the screen, or what users are allowed to do with windows. That is the job of the window manager. For example, some window managers allow you to double-click in a window’s title bar and roll up the window into the title bar like rolling up a window shade (this is referred to as shading). Other window managers don’t have that feature. X doesn’t care; it’s a window manager concern. The X server’s job is to provide the low-level support so the window manager and other applications can shade or not, as they choose.

The X server manages the display hardware. The server captures input events from the user via keyboard or mouse (or other input device) and passes the information to a client application that has requested it. It also receives requests from the application to perform some graphical action. For example, if you use your mouse to move a window on the screen, the X server passes the information to the window manager, which responds by telling the server where to reposition the window, and the X server performs the action. If the client is a calculator, such as xcalc, it might request that digits be displayed into the window as the user clicks on buttons to enter a number.
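The event-routing relationship described above can be sketched in a few lines. This is a toy model in Python, not the real X protocol or Xlib API; `ToyXServer`, `select_input`, and `dispatch` are illustrative names, though `select_input` deliberately echoes how real X clients select event masks on windows.

```python
class ToyXServer:
    """Minimal model of the X design described above: the server owns
    the display, and forwards each input event only to the clients
    that asked to hear about that event type on that window."""

    def __init__(self):
        # (window_id, event_type) -> list of client callbacks
        self.interests = {}

    def select_input(self, window_id, event_type, callback):
        # Analogous to a client selecting an event mask on a window.
        self.interests.setdefault((window_id, event_type), []).append(callback)

    def dispatch(self, window_id, event_type, detail):
        # Hardware input arrives at the server, which fans it out to
        # interested clients and imposes no policy of its own.
        delivered = 0
        for cb in self.interests.get((window_id, event_type), []):
            cb(detail)
            delivered += 1
        return delivered

server = ToyXServer()
clicks = []
# A window manager and an ordinary client can each subscribe
# independently; what they do with the event is up to them.
server.select_input("win1", "ButtonPress", clicks.append)
server.dispatch("win1", "ButtonPress", {"x": 10, "y": 20})
# Events on windows nobody selected input on go nowhere.
```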

The window manager controls the general operation of the window system; in particular, it controls the geometry and aesthetics of your X display. With the window manager you can change the size and position of windows on the display, reshuffle windows in a window stack, and so on.


Fat footers

Jerry wrote this & sent it to a client:

A fat footer is a means of showing secondary navigation, or showcasing primary navigation, or reinforcing selected pieces of your navigation. Here are some examples:

On a long-scroll blog page, put some choices at the bottom:
http://bokardo.com/

Put sales and branding at the top and navigation at the bottom:
http://www.dapper.net/

Promote the pages you really want them to visit:
http://www.blog.spoongraphics.co.uk/

Pizazz at the top, decision-making choices at the bottom:
http://www.apple.com/iphone/

We think it’s usually best to have a color change for the footer:
http://billyhughes.oph.gov.au/


Web design contrasted with graphic design

From Joshua Porter’s “Do Canonical Web Designs Exist?” (Bokardo: 14 November 2007):

… web designers necessarily approach design from a different perspective than graphic designers.

Graphic designers can judge by looking. Web designers cannot. Web designers must judge by doing (or observing others doing). The problem is that too many people judge web designs without actually using them. Instead, they look. When you use the shortcut of looking, you tend to judge what you’re looking at: the visuals. But when you use something, your relationship to that thing necessarily changes. I wonder how often Armin uses Google.

That’s why web design is different. Peer production, in particular, is extremely different. When I buy a book on Amazon, when you buy a book, we change the way the site works for someone else buying books, which is in turn changed by the reviews we write afterward. Is this not amazing design?


Sneaky advertising

I bought a mug that has no handles on it at all. I noticed that the accompanying slip of paper said, “Most Copco travel mugs are intended for right or left hand use.” Well, yes, if there are no handles, that would make sense. It goes on, “If your mug is handled, the lid is designed to fit securely in two positions, allowing for right or left hand use.” What fantastic advertising copy, creating something out of nothing! It’s like saying, “Our handles can be used by people who are right- OR left-handed! Amazing!”


Great, wonderfully-designed consumer products

From Farhad Manjoo’s “iPod: I love you, you’re perfect, now change” (Salon: 23 October 2006):

There are very few consumer products about which you’d want to read a whole book — the Google search engine, the first Mac, the Sony Walkman, the VW Beetle. Levy proves that the iPod, which turns five years old today, belongs to that club.


Failure every 30 years produces better design

From The New York Times’ “Form Follows Function. Now Go Out and Cut the Grass.”:

Failure, [Henry] Petroski shows, works. Or rather, engineers only learn from things that fail: bridges that collapse, software that crashes, spacecraft that explode. Everything that is designed fails, and everything that fails leads to better design. Next time at least that mistake won’t be made: Aleve won’t be packed in child-proof bottles so difficult to open that they stymie the arthritic patients seeking the pills inside; narrow suspension bridges won’t be built without “stay cables” like the ill-fated Tacoma Narrows Bridge, which was twisted to its destruction by strong winds in 1940.

Successes have fewer lessons to teach. This is one reason, Mr. Petroski points out, that there has been a major bridge disaster every 30 years. Gradually the techniques and knowledge of one generation become taken for granted; premises are no longer scrutinized. So they are re-applied in ambitious projects by creators who no longer recognize these hidden flaws and assumptions.

Mr. Petroski suggests that 30 years – an implicit marker of generational time – is the period between disasters in many specialized human enterprises, the period between, say, the beginning of manned space travel and the Challenger disaster, or the beginnings of nuclear energy and the 1979 accident at Three Mile Island. …

Mr. Petroski cites an epigram of Epictetus: “Everything has two handles – by one of which it ought to be carried and by the other not.”
