Human factors experts are often called upon to evaluate “things”. The general issue we confront is whether a product, procedure, workspace, tool, or interface will fulfill the goals of those designing and manufacturing it. So, how do we make this judgment?

Typically, we evaluate the product itself, its intended users, the conditions of use, and any external elements that might affect its use. In effect, we must try to foresee how the product will be used and who will use it before we can make a judgment regarding the adequacy of its design. In this column, I will describe some of the foreseeable components of human behavior.

Reasonable Prediction

First, let’s acknowledge that human behavior is extremely variable. Therefore, instead of trying to predict precisely what will happen in a given situation, we try to identify ranges of human performance. While we can’t say with absolute certainty what will or will not happen in a specific instance, we often know the sorts of design and task elements that are likely to lead to either acceptable or poor performance. Rather than describe all the ways in which products can be designed to promote good performance, I’ll list a few practices that almost always produce errors and unacceptable levels of performance.

Violating Design Guidelines

Some of the most serious, frequent, and foreseeable usability problems are caused by ignoring basic design guidelines. Over five decades of laboratory research, field studies, and empirical data gathering have produced a huge volume of detailed guidance for designing products, procedures, user interfaces, and systems. We ignore these design guidelines at our peril. For those of us in the human factors profession, the failure to follow such basic guidance seems inexplicable.

Not Testing

In the human factors community, our credo is to test early and often. My own philosophy is that we should test only when necessary, and then in the most cost-effective manner. There are some things we already know and do not have to test. On the other hand, if I’m considering a novel design feature, I don’t necessarily know whether it will work; a very simple test, however, can tell me. The idea is always to test as early as possible, using the lowest-fidelity (and lowest-cost) method at hand.

Ignoring Customer Experience

One obvious way of predicting what people will do is to evaluate what they’re doing now and what they’ve done in the past. Most products and systems are based on products that already exist. We can observe, interview, and survey existing product users to get an idea of how they’re using similar products. These techniques derive in large measure from the field of anthropology and are known as “ethnographic” methods. This seems like such a common-sense approach to predicting behavior that I wonder why it is not used more often.

Using Confusing Terminology

Ambiguous, misleading, and otherwise confusing terms in labels and instructions are a sure source of performance problems. Poor terminology is almost always the result of failing to consider the likely background, experience, education, and general knowledge of the product’s users. Designers and manufacturers almost never have the same background as the people who will use their products. The cardinal rule here: the broader the range of intended users, the less room there is for ambiguous or specialized terminology.

Violating Stereotypes

When we grow up in a particular culture, we come to expect things to work a certain way. In the United States, for example, we generally move a switch to the “up” position to turn on a light. Such common understandings are called population stereotypes, and they can vary from country to country. In most of Europe, for example, light switches are moved down to turn on the lights. When we design something that doesn’t function the way most users expect it to, we have violated a stereotype. Violating stereotypes is a sure way to cause errors and ensure lousy performance.

Requiring Mental Calculations

As human beings, we are good at some tasks and notoriously poor at others. We have a very hard time performing explicit mental calculations. For example, if we have to mentally convert from one set of units to another, we are very likely to commit errors. The same is true for doing any type of arithmetic.
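The design remedy is to have the product do the arithmetic so the user never has to. As a minimal sketch (the function name, interface, and unit choices here are hypothetical illustrations, not anything from this column), a display that converts units internally spares the user any mental calculation:

```python
# Illustrative sketch: the interface performs the unit conversion,
# so the user only reads a number and never calculates one.
# All names here are hypothetical.

MILES_TO_KM = 1.609344  # exact definition of the international mile

def display_distance(miles: float, unit: str = "mi") -> str:
    """Format a distance, converting to the user's preferred unit."""
    if unit == "km":
        value = miles * MILES_TO_KM
    else:
        value = miles
    return f"{value:.1f} {unit}"

print(display_distance(3.0))        # prints "3.0 mi"
print(display_distance(3.0, "km"))  # prints "4.8 km"
```

The point of the sketch is simply that the error-prone step (mental multiplication by a conversion factor) is moved out of the user’s head and into the design.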

Requiring Time Sharing

Have you ever tried to watch television and carry on a conversation with your spouse at the same time? I’m guessing that didn’t turn out too well. In fact, all humans (both male and female) are very poor at time-sharing among tasks. We are much better at concentrating on one thing at a time, completing it, and then moving on to something else. Any product or process that requires people to do more than one thing simultaneously will produce errors and cause poor performance.

So what is the bottom line regarding foreseeable behavior? From my perspective, there are essentially three categories of human behavior. These categories form a continuum in the realm of foreseeability. We can predict some behaviors with a great degree of confidence. There is no technical reason that such behavior cannot or should not be taken into account by designers and manufacturers.

At the other end of the foreseeability spectrum are behaviors that we simply cannot predict. Human behavior has a rather large random component, and it is not possible to know what every person will do in every situation.

In the middle of the spectrum are those behaviors that might not be immediately obvious, but could be identified by appropriate analysis and testing. This is where ethnographic methods, task analysis, and formal testing should come into the product design process.

The bottom line, from a human factors viewpoint, is that we already know enough to foresee some behavior, we can find out enough to foresee other behavior, and some behavior we cannot reasonably predict. The last category is not as large as many manufacturers would like to believe and the first two categories could be applied much more generally.

Michael E. Maddox is a senior scientist at the Human Center