by Bruce G. Allen and Elizabeth Buie

When you say something is intuitive, do you mean that absolutely everybody understands it right away? If you say a program is logical, what help is that to a user? Does user-friendly mean anything these days? (Did it ever?)

We create meaning by placing words in a context where they absorb their surroundings like herring in a sherry marinade. Use a word in a context that supports clear understanding of the intended meaning, and you make the word richer and stronger for yourself and for the time when someone else takes their turn at using it. Overlook your intended meaning, and you defocus the image — scatter the energy — of the word.

Word meanings can change over time; that’s all well and good. "Awful" used to mean "full of awe", and seven centuries ago "nice" meant "foolish" (some might say it still does!). The rich garden of English vocabulary has grown from the endless planting of new words from foreign sources, jostling for their place in our prose and poetry. A living language is always on the move — good thing for us.

There’s a bit of a problem, though, when we want a word to conjure up something more concrete than a poetic image in the reader’s mind. When we want to use a word as terminology. Terminology is to vocabulary as bulldog is to kennel: It demands a certain kind of care. We want a term to hold its value. It has to say the same thing to everyone who needs to read or hear it.

With that in mind, let’s take a look at some usability terms and how their meanings might be compromised.

  1. Intuitive: It is said that "the only intuitive user interface is the nipple," and how that might be is not within the scope of this article! Once we move beyond the primal food supply to more contrived appliances, the decision about what is intuitive and what is not becomes a lot more difficult. If intuitive means natural, in the sense that comprehension requires no thinking, where is the division between those things that are natural and those that are familiar?

    This isn’t just nitpicking — a rich and evocative word like intuitive is wasted as long as it sits in a fog of uncertain associations. So let’s help rescue it by saying this: An intuitive interface asks no more of the user than what they either already know, or can immediately deduce from previous life experience. Implied is that intuition, as a term in usability, is wisdom assumed and shared within a community — the community of users familiar with the task and with the environment in which it is performed.

    No Windows program, for instance, could bear the burden of being "intuitive" to someone who's never before seen a Windows program! What use is that little "ridged" wedge at the lower right corner of a resizable window, to someone who doesn't recognize it as a grip (from life experience with real grips) to click and drag (from familiarity with MS Windows)?

  2. User-friendly: This antiquated term dates back to a time when there wasn't much distinction between someone who knew applications and someone who knew computer systems. As applications began to be used by people whose lives were not immersed in computing, the assumptions made by developers about what the user knows had to change dramatically. Developers thought of users as constraints — ill-informed and hostile ones, at that — and they reluctantly cobbled user interfaces together as concessions to crankiness.

    We’ve moved on since the days when anyone who worked with computers was thought to know it all. Specialists in graphic design, interaction design, human factors, typography, databases, hypertext, documentation, and task-specific programming languages all contribute jointly to a successful design. Every specialist will have his or her agenda for the project and a personal picture of the user, and of what that user might think of as friendly behavior. Is fast response friendly or just a Good Thing? Are clearly labelled icons friendly or just easy to understand? If the application controls a nuclear reactor, will the user care whether a control screen is friendly, as long as it is easy to comprehend and fast acting?

    Users don't want the computer as a friend — they want it as a tool that will do stuff for them.

  3. Logical: An application that is logical in its internal design and produces accurate results may nevertheless be difficult to use. Of all the pitches one could devise to sell the merits of an interface, logical might be the most persuasive. On the surface of it, how could you argue against anything being logical? Well — try asking whose logic has been applied and why. Is it the logic of the analyst who broke the task into parts? Is it the logic of the programmers who coded input fields in alphabetical order because they weren't told what was wanted? Is it the logic embedded in the design of a legacy database, created by someone who retired to a cabin in the Rockies ten years ago?

    All of those practitioners had a reason for building their product the way they did — reasons that seemed sound to them at the time and within the scope of their work. But their logic applies to the internals of the application — the plumbing within the walls. If you find one sink in your house with the hot and cold backwards, you aren’t going to be very sympathetic when the plumber says it made his life easier to leave it that way!

  4. Heuristic: This word often appears in usability paperwork in the phrase "heuristic evaluation". A heuristic evaluation is a critical scrutiny of the interface by trained examiners who look for potential problems that a user might have, with reference to guidelines derived from research and practical experience (the heuristics). The guidelines may be a list of desirable features that the examiners look for and comment on, or they may form a full-blown rubric of the sort teachers use for student assignments.

    Unfortunately, "heuristic evaluation" has become a catchall label for several kinds of inspection. The point is that heuristics are general principles, and they are not the only kind of guidelines on which a usability inspection can be based. For example, when an interface must conform to a standard set by a project or contract, that standard can be very specific in its requirements, going well beyond heuristics. If that standard embodies usability principles, you're doing a usability inspection. Let's reserve "heuristic evaluation" for evaluations based on general heuristics, not on some other criteria.

  5. Subjects: This term comes into the usability field from human factors research, where studies compare design features and their support for human performance (effectiveness and efficiency) and safety. The term has been used even longer in medical and educational/psychological research, whose studies use "human subjects" to test and compare the safety and effectiveness of treatments (pharmaceuticals, surgical methods, instructional materials, counselling methods).

    However, the usability engineering community has come to view the people who participate in our studies not as subjects but as partners. To us it seems that calling them "subjects" dehumanizes them and discounts their contributions to the work. There are other differences as well: Usability tests do not deliver "treatments" to see how "subjects" improve or change; instead, we collect users' feedback and impressions of our products (subjective satisfaction) as well as look at the overall performance of the interaction between the two. Users’ opinions count.

    In addition, usability testing yields the most valid results when the users who participate have the feeling of using the product "naturally," as they would in their typical setting (office, home, etc.). We have found it important to stress to them that it is not they who are being tested, but that they are helping us test the product. We do not want the testing itself to put pressure on them, beyond what they would feel from just using the product to do whatever they use it for. The usability community generally believes that treating them as "subjects" can increase the pressure to perform. Most usability engineers call these people "participants" to reflect this perspective.

  6. Subjective: The statement "Design is all subjective" is flat wrong. This word subjective pops up repeatedly in reference to the design of consumer items from toasters to stereo equipment to websites, with the intent to say that there are no firm rules or principles governing the design process or the quality of its results. Everything is up for grabs — the user is fickle; the intent is to capture attention and differentiate oneself with novelty.

    People who say that design is all subjective are not talking about design for usability — they are talking about style. Where web sites are concerned, style is a major factor in successfully pitching a product or a point of view against a background of competition. Style is vital to brand identification.

    Not all user interfaces are billboards, though, and usability is far more than style. When the computer controls a vehicle, a power plant, a million-dollar account — the user in front of the display is presumably sold on the importance of the task, and what remains is to get that task done as smoothly as possible. The user is the operator of a machine doing a job. The interface to that machine doesn't have to grab attention or entertain — it has to facilitate accurate and efficient work. You don't design interfaces like that by having the graphic arts department try Miracle Pink for the first time, no matter how much they like it. There is a time to be pleasing, but this isn't it: What the operator needs is an interface that has been designed and tested to get that particular job done.

    One argument seems most telling against the notion that good design is subjective: the research findings that show no relationship between users' subjective ratings of a product and their objectively measured performance in using it.

  7. Tester: In software and system testing, the tester is normally a person who (among other things) sits in front of the computer to use the product. The tester knows what's being tested for, and actively looks for problems in the product. In beta testing, the tester is usually the end user. In usability testing, however, the tester is someone else altogether. Think about it: Who designs and plans the usability test, facilitates its conduct, collects the data and analyzes the results, devises recommendations for design changes, and writes the test report? It's not the user, but the usability engineer.

    Usability testing asks the user to focus not on cataloging perceived defects but on performing the tasks. The user's participation in the testing allows the usability engineer to learn how well the product is likely to support real users in doing the work it's designed to support. Although users will, of course, be asked for feedback, in most cases usability testing is concerned at least as much with performance as with satisfaction (if not considerably more). We want the performance to be as natural and realistic as possible, so we want the users thinking about the tasks and not unduly about the interface.

    When we conduct a usability test, we are the tester.

  8. Testing: This term is often misused as a synonym for the larger field of evaluation or verification. You may hear people say that they've "tested" their user interface when what they've done is demonstrate it to users or test the code that implements it. Testing, however, is only one method of evaluation, in usability as in other engineering disciplines. Software and system engineering, for example, define non-test methods such as analysis, simulation, inspection/examination, and demonstration. (Heuristic evaluation is a type of inspection, as described above.)

    Just as in other disciplines, testing in usability is empirical, relying on observations and sometimes measurements. Usability testing observes representative users as they employ the product in performing realistic tasks. It may aim to find usability problems so that they can be fixed, or to verify that the product meets its usability requirements or goals. Usability testing includes measurement of user task performance, observation of user behavior and identification of problems, and/or assessment of user satisfaction.

    We do not claim that testing is necessarily better than other forms of usability evaluation; in fact, we believe that usability testing and usability inspection complement each other because they tend to find different kinds of problems. What we insist is that, if you haven't subjected your product to an empirical method in which you observe users using the product to perform typical tasks, you haven't tested its usability.

"But all this is just semantics!" we hear you cry.

Let us say this outright: There is no such thing as "just" semantics. Semantics is all about the very meanings of the words we use, the intention with which we use them, and the understanding they create in our audience. In communication, nothing is more important than semantics.

Language shapes reality. It sets up for us the reassuring feeling that reality, or at least the small part of it that we're responsible for, can be comprehended in the way we intend. This is especially relevant to that small part of reality that belongs to usability, as its contributors come from diverse backgrounds and greatly need a shared terminology where meanings are controlled more rigorously than in general vocabulary.

We've looked at eight terms that are in frequent use in usability engineering or user interface design/development; and we have seen how, unless we use them with great care, they can create a reality that is different from the one we intend. To give our usability efforts a reality that is effective, efficient, and satisfying — particularly when talking to nonspecialists — we need to use our terminology carefully.


Brief biographies

Bruce Allen is an electronics technologist with Nav Canada in Ottawa, Ontario. A slothful and difficult personality, Bruce spends about half his time whining to his manager about spending half his time whining to software types who seem never to have heard of users or hardware. In his spare time Bruce stands in the cold and dark taking photographs where there's no light, or whines about mopping up after stirring brown powder into fruit juice in the hope of transmogrifying it into wine.

Elizabeth Buie [at the time of this writing was] a usability specialist with Computer Sciences Corporation in Rockville, Maryland. A cranky and contentious personality, Elizabeth has griped about usability and HCI design since 1977, on projects from spacecraft control to mobile phone service provisioning. These days she focuses on Web sites and applications for the US Government. In her spare time Elizabeth sits at one keyboard seeking to commune with the alto part in the choir's upcoming pieces, or at the other one seeking to grok the digital images that are mere shadows of her photographs.

Elizabeth's griping about usability and its language managed to convince Bruce, long linguistically disgruntled himself and ever ready to add another subject to his own whining. This article is their first effort to whine and gripe as a team.


This article was published in the March/April 2002 issue of interactions, the ACM/SIGCHI magazine of human-computer interaction. Copyright © 2002, Bruce G. Allen and Elizabeth A. Buie. All rights reserved. Permission is granted to print this page or link to it, as long as such use is personal or educational and is not for commercial gain or profit. This article may not be republished or redistributed without permission.

 
