References: "Usability Evaluation", in Soegaard, M. (ed.), The Encyclopedia of Human-Computer Interaction, 2nd Edition; Quesenbery, W., "What Does Usability Mean?", Annual Conference of the Society for Technical Communication.
An Introduction to Usability, by Andreas Komninos. Why Does Usability Matter? It's a way to let our ideals shine through in our software, no matter how mundane the software is.
You may think that you're stuck in a boring, drab IT department making mind-numbing inventory software that only five lonely people will ever use. But you have daily opportunities to show respect for humanity even with the most mundane software.

Effectiveness. Effectiveness is about whether users can complete their goals with a high degree of accuracy.

Efficiency. Effectiveness and efficiency are often blurred together, but efficiency is about speed: how quickly users can get their tasks done.

Engagement. Engagement refers to how pleasant and gratifying a system is to use.
Promoting error tolerance, according to Whitney Quesenbery, requires restricting opportunities to do the wrong thing: limit options to correct choices if you can, and give examples and support when asking people to provide data.
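To make this concrete, here is a minimal sketch of error-tolerant data entry; the departments, prompts and helper names are invented for illustration, not taken from any particular product:

```python
# Hypothetical command-line data entry that limits options to correct
# choices and shows an example of the expected format (all names invented).
VALID_DEPARTMENTS = {"warehouse", "shipping", "returns"}

def ask_department() -> str:
    """Keep asking until the answer is one of the known departments."""
    while True:
        answer = input(f"Department ({', '.join(sorted(VALID_DEPARTMENTS))}): ").strip().lower()
        if answer in VALID_DEPARTMENTS:
            return answer
        # Error tolerance: explain what went wrong and restate the valid options.
        print(f"'{answer}' is not a known department. Valid choices: "
              + ", ".join(sorted(VALID_DEPARTMENTS)))

def ask_quantity() -> int:
    """Ask for a count, giving an example of the expected format."""
    while True:
        raw = input("Quantity (a whole number, e.g. 12): ").strip()
        if raw.isdigit() and int(raw) > 0:
            return int(raw)
        print("Please enter a positive whole number, e.g. 12.")
```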
Dropbox, for example, has an undo function in case users accidentally delete items in their folders.

Turning to ways of further developing useful connections between visualization and psychology: to begin with, there is potential for considerably more integration between vision and visualization than currently exists; much more processing could be offloaded to the viewer's visual cortex.
As Stephen Few mentions, one way of doing so is by making use of simple preattentive properties such as length, orientation, and hue. But recent work in vision science has shown that the preattentive level of vision contains far more visual intelligence than that. Among other things, preattentive processes can determine shadows, extract three-dimensional orientation, and link scattered elements of the image into unified groups. These abilities could be exploited in higher-powered visualizations.
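As a small illustration of the first of these ideas, the sketch below (with invented data) uses a single hue difference so that one bar pops out preattentively, before the viewer has to read anything:

```python
import matplotlib.pyplot as plt

# Invented monthly values; the point is the encoding, not the data.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
values = [42, 45, 41, 44, 78, 43]

# Preattentive use of hue: every bar is a muted grey except the one the
# viewer should notice, which differs in hue and is found without searching.
colors = ["#9aa0a6"] * len(values)
colors[values.index(max(values))] = "#d62728"

plt.bar(months, values, color=colors)
plt.ylabel("Orders shipped")
plt.title("One hue difference is enough to make May stand out")
plt.show()
```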
Another area of recent progress is our understanding of visual attention and scene perception. Our visual perception of the world seems to be based on a just-in-time architecture in which attention is directed to the right object at the right time. If the co-ordination mechanisms involved can be handled correctly, it would open up the prospect of "seeing" abstract datasets in a way that is as natural and effortless as seeing the physical world.
A brief overview of these developments and their implications can be found in Rensink.

A related opportunity is the greater use of visual analogy or metaphor. Here, the emphasis is no longer on bypassing conscious thought, but on using modes of thought best suited for reasoning about visuospatial objects and processes. For example, when reasoning about physical force, a highly useful metaphor is the directed line, or arrow. A more modern example is the desktop metaphor, which allows a user to reason about possible actions on their computer.
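The arrow metaphor is easy to see in a sketch; the forces below are invented, and the drawing is only meant to show direction-as-direction and length-as-magnitude:

```python
import matplotlib.pyplot as plt

# Invented 2-D forces acting at the origin. The arrow is the metaphor:
# its direction stands for the direction of the force, its length for
# the magnitude.
forces = {"gravity": (0.0, -9.8), "thrust": (6.0, 2.0), "drag": (-2.5, 0.0)}

fig, ax = plt.subplots()
for name, (fx, fy) in forces.items():
    ax.annotate("", xy=(fx, fy), xytext=(0, 0),
                arrowprops=dict(arrowstyle="->", lw=2))
    ax.text(fx, fy, f" {name}", va="center")

ax.set_xlim(-12, 12)
ax.set_ylim(-12, 12)
ax.set_aspect("equal")
ax.set_title("Physical forces shown as directed lines")
plt.show()
```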
As in the case of visual perception, many - if not most - developments to date have been based on a relatively shallow understanding of the mechanisms involved.
But given that cognitive scientists have learned much more about metaphor, it may be time to consider its use in a more sophisticated fashion. Ultimately, visualizations might be able to create mental images that correspond in a natural way to the structure of any process or task. For an interesting discussion of this, see Paley.

A third direction of potential importance is the creation of more powerful evaluation methods based on the methodologies developed in experimental psychology.
Psychologists have spent centuries learning what to do and not to do to obtain precise measurements of various aspects of human behaviour. It would be good to learn from this. Of course, some of these techniques have already been adapted to evaluation.
But as in the case of cognitive and perceptual mechanisms, the transfer of knowledge here is far from complete, and there is much that could still be done. For example, consider evaluating how well a given scatterplot design conveys the correlation in a dataset. In the past, this was done by presenting the viewer with the scatterplot and asking for a numerical estimate of the perceived correlation. But a more powerful approach is to borrow the experimental methodology of measuring just-noticeable differences (JNDs): the viewer is presented with two side-by-side scatterplots and asked to choose the more correlated one.
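A minimal sketch of one such forced-choice trial follows; the correlation values, sample size and bivariate-normal stimuli are illustrative assumptions, not the published protocol:

```python
import numpy as np
import matplotlib.pyplot as plt

def correlated_scatter(r, n=100, rng=None):
    """Sample n points from a bivariate normal with Pearson correlation r."""
    rng = rng if rng is not None else np.random.default_rng()
    cov = [[1.0, r], [r, 1.0]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

# One two-alternative forced-choice trial: a base correlation and a test
# correlation slightly above it (values chosen only for illustration).
rng = np.random.default_rng(0)
r_base, r_test = 0.6, 0.7
plots = [correlated_scatter(r_base, rng=rng), correlated_scatter(r_test, rng=rng)]
order = rng.permutation(2)            # randomise left/right placement

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, idx in zip(axes, order):
    ax.scatter(plots[idx][:, 0], plots[idx][:, 1], s=10)
    ax.set_xticks([]); ax.set_yticks([])
fig.suptitle("Which plot looks more correlated?")
plt.show()

# Recording the viewer's answer over many such trials, while varying r_test,
# gives the just-noticeable difference around r_base.
```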
Results based on this approach show both precision and accuracy to be specified over all correlations by two functions governed by only two parameters.
As a consequence, a given scatterplot design can be completely evaluated based on just two simple measurements. For details, see Rensink and Baldridge.

A final direction to consider - perhaps the most challenging of all - is to develop a systematic way of ensuring that visualization designs make optimal, or at least good, use of human perception and cognition. In theory, this could result in a "science of design".
In practice, this might not be possible, if only because the number of possible designs is so immense and our understanding of human cognition so incomplete. But it may be possible to follow the example of several other areas of design, and aim for a set of principles that would at least constrain the space of possibilities to consider.
For example, constraints based on physical forces or material properties can be applied to any architectural design, determining whether or not it is viable. There is no a priori reason why a similar approach would not also work for visualization. The efforts of Bertin are perhaps a start in this direction, providing suggestions about the kinds of graphic representation that might be applied to various kinds of problems.
Work by Tufte, Mackinlay, Ware, and others has extended this further. But however useful these suggestions are, we are still a long way from a solid foundation for thinking about effective visualizations.
Many foundational issues are still poorly understood. What is really going on in a visualization? Is there a way to describe this process precisely and objectively? Is it even possible in principle to determine if a given visualization draws upon the perceptual and cognitive resources of the viewer in an optimal way?
The answers to these questions and others like them will be difficult to find. But they will determine the extent to which we can enable humans and machines to best combine their respective strengths.

Stephen Few wrote an excellent description of data visualization and the necessity of designing graphics to take advantage of our knowledge of human perception and cognition.
In this commentary I question who is responsible for the myriad of visualizations that ignore this knowledge: the software vendors, the software users, or others? In addition, I point out important work on the integration of geo-spatial and other forms of data display, a topic on Few's most-needed list, that deserves greater exposure. I end with additional sources for learning more.

Certainly, software vendors are responsible for offering many graph forms that hinder rather than help the reader to understand the data. The vendors offer graphs to wow the audience rather than to communicate clearly, and they create demand for ineffective graphs. But they are not solely responsible for the myriad of graphs with perceptual problems. People learn from what they see, and they see many ineffective graphs.
The software users then demand software that allows them to imitate these ineffective designs. This creates a chicken-and-egg situation: do vendors produce these awful visualizations because their customers demand them, or do customers become attracted to them when they see what vendors market? One example of these ineffective designs is the pseudo-third dimension in bar charts. Figure 1 shows a pseudo-three-dimensional bar chart produced in Excel.
Almost no one reads it correctly. I describe other problems with this graph in Creating More Effective Graphs [1].
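Figure 1 itself is not reproduced here, but the sketch below, with invented numbers, shows the kind of plain two-dimensional bar chart that avoids the problem: bar tops sit exactly at their values, so the reader never has to guess where a tilted pseudo-3-D bar "really" ends.

```python
import matplotlib.pyplot as plt

# Invented values standing in for the data behind a chart like Figure 1.
categories = ["Q1", "Q2", "Q3", "Q4"]
values = [23, 31, 28, 35]

fig, ax = plt.subplots()
bars = ax.bar(categories, values, color="#4c72b0")
ax.bar_label(bars)                     # exact values on each bar (matplotlib 3.4+)
ax.set_ylabel("Units sold (thousands)")
ax.set_title("The same kind of data, without the pseudo-third dimension")
plt.show()
```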
A number of graphic artists have made major contributions to the field of data visualization. However, some graphic artists have no appreciation of numbers and don't realize that the representation of numbers in a graph should be proportional to the numbers they represent. As a result, it is common to see graphs that are not drawn to scale. Some graph designers want to give the impression of better performance than is actually the case and intentionally design misleading graphs to achieve it.
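One common way this shows up is a truncated value axis. The sketch below, with invented figures, draws the same two numbers twice: once with the baseline at 95, where the bars are wildly out of proportion to the values, and once with the baseline at zero, where they are drawn to scale.

```python
import matplotlib.pyplot as plt

# Invented figures: Product B is only about 4% higher than Product A.
labels = ["Product A", "Product B"]
values = [96, 100]

fig, (misleading, honest) = plt.subplots(1, 2, figsize=(8, 4))

# Truncated baseline: B's visible bar is several times taller than A's,
# even though the underlying numbers differ by only a few percent.
misleading.bar(labels, values)
misleading.set_ylim(95, 101)
misleading.set_title("Baseline at 95: not to scale")

# Zero baseline: bar heights are proportional to the numbers they represent.
honest.bar(labels, values)
honest.set_ylim(0, 110)
honest.set_title("Baseline at 0: drawn to scale")

plt.tight_layout()
plt.show()
```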
Other graph designers may be more concerned with demonstrating their technological or artistic abilities than with communicating clearly and accurately. Until recently, our educational system did not provide training in communicating numbers.
Today, there are some excellent courses at the college level but the majority of people receive little, if any, training in presenting numerical information. Therefore, many graph designers are unaware of the principles of effective graphs.
Some of the problems arise from a lack of proofreading and from careless errors. As an analogy, consider a current style in fashion: high-heeled shoes. A quick search on "dangers of high heels" revealed that there has been an increase in the number of bunion operations on wearers of high heels, as well as foot pain, back pain and neck pain.
In some cases the Achilles tendon grows shorter. Balance is affected so that the risk of falls is greater. The list of problems goes on and on. Is the shoe designer, the shoe manufacturer, the retail outlet that sells the shoes or the customer who buys them responsible for this increase in medical problems? Is this situation analogous to the data visualization one? Both cause serious problems: poor business decisions in one case and pain and suffering as well as unnecessary medical expenses in the other.
I hope that these questions stimulate interesting discussion.

In his section on future directions, Few mentions areas that offer the potential for enrichment, including the integration of geo-spatial displays with other forms of display for seamless interaction and simultaneous use. Several researchers have made advances in this area. For example, the micromap designs of Dan Carr [1] and [2] add a geographic context to statistical information, allowing for the joint exploration of statistical and geographic patterns in data.
As illustrated in Figure 2, statistical graphics (here, dots) are linked to small maps by color. In the first row, we can see that Maryland is represented by red dots, and so Maryland is shaded red on the right-hand map. Sorting by poverty level, we see not only that poverty and education are inversely related, but also that there is a geographic clustering of southern U.S. states.
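Figure 2 is not reproduced here, but the simplified sketch below, with invented poverty rates and without the map panels, shows the core micromap idea: sort the rows by the statistic of interest and use colour within each group so that every dot can be matched to a similarly coloured region on an accompanying small map.

```python
import matplotlib.pyplot as plt

# Invented poverty rates for a handful of states; a real linked micromap
# would show all states plus small map panels shaded with the same colours.
rates = {"Mississippi": 19.6, "Louisiana": 18.6, "New Mexico": 18.2,
         "Kentucky": 16.3, "Arkansas": 15.9, "Colorado": 9.7,
         "Minnesota": 9.3, "Maryland": 9.0, "Utah": 8.9, "New Hampshire": 7.3}

# Sort by the statistic, then colour in groups of five so each dot could be
# linked to a matching colour on a map panel (omitted in this sketch).
ordered = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
group_colours = ["#d62728", "#1f77b4"]       # one colour per group of five
colours = [group_colours[i // 5] for i in range(len(ordered))]

names = [name for name, _ in ordered]
values = [value for _, value in ordered]

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(values, range(len(ordered)), c=colours)
ax.set_yticks(range(len(ordered)))
ax.set_yticklabels(names)
ax.invert_yaxis()                            # highest rate at the top
ax.set_xlabel("Poverty rate (%), invented data")
ax.set_title("Micromap-style sorted dot plot (map panels omitted)")
plt.show()
```

In Carr's full designs, each row group of the dot plot is paired with a small map in which the same colours shade the corresponding states, which is what lets statistical and geographic patterns be read together.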