
IT Management: Annotated Bibliography

Abstract

This paper provides an annotated bibliography of three related articles concerning the evaluation of an ambient display, human factors testing of a user interface, and the design of usable systems. Each article is further explored by examining its relevance to the field of Human Computer Interaction and by relating it to research on interactive design.

Annotated Bibliography on Interactive Designs
Evaluating an Ambient Display for the Home
The article presented a review of two methodologies employed in the evaluation of the CareNet Display, an ambient display designed to efficiently provide a network of caregivers with the necessary information about elders who need day-to-day care in their place of residence. More specifically, this ambient display prototype is an interactive digital picture frame that gives relevant information to the family and friends of an elder, who take turns attending to his or her regular needs at home.

The CareNet Display prototype that was the subject of evaluation contained regularly updated information about the status of the elder's meals, medications, outings, activities, mood, falls, and calendar. Caregivers can access this information through the display's touch-screen features, where details of the elder's activities can be viewed. The elder, however, can use the privacy features of the ambient display to choose which caregivers have access to the information that the elder wishes to disclose.

The two methodologies examined in the article were the 'Heuristic Evaluation of Ambient Displays' and an in situ, three-week-long Wizard of Oz evaluation.

Thirteen (13) participants, composed of elders and their respective family members, took part in the in situ evaluation, during which they were provided with CareNet Display prototypes for three weeks. Before and after the deployment of the prototypes, the participants were interviewed and asked to fill out questionnaires containing relevant evaluation questions.

Eight (8) evaluators with experience in user-centered evaluation were recruited, although none of them had prior experience with the heuristics used in the evaluation of the ambient display. These evaluators were provided with an email copy of the 12 heuristics used in the evaluation, a link to a website with a detailed description of how the ambient display worked, and access to a similar prototype of the CareNet Display containing only hypothetical information about the elders. Based on these materials, the evaluators were asked to list violations of the 12 heuristics (Mankoff et al.) with corresponding descriptions of the problems encountered during the evaluation of the ambient display. All the violations listed by the eight evaluators were aggregated and then rated by each of them to gain insight into the severity of the aggregated problems.
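
To make the aggregation step concrete, the following minimal Python sketch ranks aggregated violations by their mean severity across the eight evaluators. The violation descriptions, the assumed 0-4 severity scale, and all ratings are invented for illustration and are not taken from the study.

```python
from statistics import mean

# Hypothetical aggregated violations, each rated by all eight evaluators
# on an assumed 0-4 severity scale (0 = not a problem, 4 = catastrophe).
ratings = {
    "status icons hard to notice at a glance": [3, 4, 3, 4, 2, 3, 4, 3],
    "privacy settings buried in sub-menus":    [2, 2, 3, 1, 2, 2, 3, 2],
    "calendar labels use ambiguous wording":   [1, 2, 1, 1, 2, 1, 1, 2],
}

# Rank the violations by mean severity to surface the worst problems first.
for violation, scores in sorted(ratings.items(),
                                key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{mean(scores):.2f}  {violation}")
```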

By way of summary, the evaluations yielded the following results: (1) 39-55% of the known issues identified in the in situ evaluation were found by three to five heuristic evaluators; (2) of the eight known usability issues, zero to three were reported by any single evaluator; (3) all in all, 75% of the usability issues pertaining to the violations were reported by the eight evaluators; (4) six known heuristic violations were identified with 15 usability issues, but two of the known violations were not detected by the heuristic evaluators; (5) sixty additional heuristic violations (not detected by the in situ evaluation) were also identified, but none of them had corresponding usability issues; (6) heuristics 6 and 10, useful and relevant information and peripherality of display, were not addressed in the study; and (7) the most severe usability problems identified by the in situ evaluation were overlooked by the heuristic evaluators, and most of these had something to do with peripherality of display.

Despite the fact that the heuristics used in this evaluation were based on scenarios quite different from the evaluation of the CareNet Display, and despite the fact that some of the heuristic violations involving peripherality of display were missed, the authors concluded that the heuristic evaluation yielded successful results. The authors also saw the relevance of the study in terms of providing useful input to ambient display designers.

Heuristic evaluations normally miss domain-specific problems, especially if the evaluators are not really familiar with the interactive design, in this case an ambient display. The instance wherein the heuristic evaluators missed the 10th heuristic in the evaluation of the CareNet Display is classic evidence that domain-specific problems like peripherality of display may not be easily detected. This clearly emphasizes the strength of an in situ evaluation and how it complements heuristic evaluation in this particular case. In situ evaluation has a greater chance of detecting domain-specific problems, as they manifest at the very moment the ambient display is being used. One would therefore decide to use both evaluation methods (i.e., heuristic and in situ evaluation) in a complementary fashion to increase the chances of detecting more heuristic violations, including those that are domain-specific.

A question comes to mind regarding the instance described above: would the 10th heuristic have been overlooked had the heuristic evaluators been provided with video footage of the ambient display being evaluated in situ?

This only shows that the study of Human Computer Interaction involves thinking outside the box, and that evaluating the usability of a certain interactive design can benefit from different approaches that yield complementary and reliable results. Heuristic and in situ evaluation of ambient displays such as the one presented in this article indeed contribute to the growth of the body of knowledge that is Human Computer Interaction. Furthermore, studies such as the one discussed in this article highlight the evolution of the heuristics used in evaluating the usability of an interactive design. From Nielsen's heuristics to Mankoff's, heuristics continue to evolve as the field of Human Computer Interaction develops.

Heuristic evaluations as well as in situ evaluations of ambient displays similar to the one discussed in this article indeed yield a significant number of usability issues that are very helpful in improving the interactive design of ambient displays. The usability issues identified in this study will certainly guide improvements in the design of ambient displays. For this particular study, its social benefit is its contribution to the development of ambient displays that could make home caregiving less tedious and more efficient.

More efficient home caregiving will certainly help elders, their families, and their friends experience a greater quality of life, as the burden of old age at home is better managed through the deployment of more user-friendly ambient displays.

The insights and experiences generated by the methodologies tried in the study discussed in this article will surely become stepping stones toward improving future research aimed at refining the interactive design of ambient displays and other related systems.

The results of such research will serve as guides in the design or redesign of human computer interaction and will improve the usability of future interactive designs.

Human Factors Testing in the Design of Xerox's 8010 "Star" Office Workstation
The article provides an exposition of the human factors testing involved when Xerox's 8010 "Star" Office Workstation was being designed. Using the principles of cognitive psychology, the designers of the Star approached the task of developing its user interface through various human factors tests.

The user interface of the Star was patterned after the office, which is why the Star's screen used icons associated with the office environment, such as documents, folders, and file drawers. The handling of data icons was likewise represented by icons associated with mailing, filing, or printing, such as out-baskets, file drawers, and printers.

Under the Star user interface design, users first select the object of an action by pointing at its icon, and then invoke the operation that the document is to undergo. The pointing function, as indicated earlier, was based on the cognitive psychology principle that "recognition is generally easier than recall." Actions under the Star user interface were primarily performed by four function keys: delete, move, copy, and show properties.
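
As a rough illustration of this object-then-command (noun-verb) interaction model, the following Python sketch selects an icon first and then applies one of the four function keys to it. The class and method names are hypothetical and are not drawn from the Star implementation.

```python
class Icon:
    """An on-screen object such as a document, folder, or file drawer."""
    def __init__(self, name: str):
        self.name = name

class Desktop:
    """Select an object first (the noun), then press a function key (the verb)."""
    def __init__(self):
        self.selection = None

    def select(self, icon: Icon) -> None:
        # Pointing at an icon makes it the target of the next command.
        self.selection = icon

    def press(self, key: str) -> None:
        # The four Star function keys act on whatever is currently selected.
        if self.selection is None:
            return
        actions = {
            "DELETE": "Deleting",
            "MOVE": "Moving",
            "COPY": "Copying",
            "SHOW PROPERTIES": "Showing properties of",
        }
        verb = actions.get(key)
        if verb:
            print(f"{verb} {self.selection.name}")

desk = Desktop()
desk.select(Icon("quarterly-report"))
desk.press("COPY")   # -> Copying quarterly-report
```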

The article explains that the design of the Star user interface was the result of three human factors experiments: the selection schemes tests, the icon shape test, and the graphics test.

Selection Schemes Tests. Under these human factors tests, selection behaviors such as pointing, selecting, and extending a selection were performed by clicking the corresponding mouse buttons. The first test compared six selection schemes that differed in the mapping between mouse buttons and the three corresponding operations. The six schemes were assigned to six different groups, each composed of subjects who were either experienced or inexperienced in the use of the mouse. Each subject was trained and then made to perform editing tasks with the mouse under the assigned scheme. Among the six schemes, the sixth was found to be better than the others in that it entailed fewer button clicks and fewer selection errors.
Drawing on the lessons of the first test, the second test tried to minimize selection errors and excessive button clicking. Under the second test, the best selection scheme was rerun with modifications that avoided the selection errors experienced in the first tests and provided for quicker selection through fewer button clicks. The resulting experiment yielded the fastest selection times and fewer selection errors compared with the first tests, and the modified scheme was therefore chosen as the optimal selection scheme.
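
The comparison logic behind these tests can be sketched as follows. The per-trial figures below are invented; only the clicks-versus-errors comparison mirrors what the experimenters measured.

```python
from statistics import mean

# Hypothetical per-trial logs for two of the six schemes:
# (button clicks used, whether the trial ended in a selection error).
trials = {
    "scheme 1": [(5, True), (4, False), (6, True), (5, False)],
    "scheme 6": [(3, False), (2, False), (3, True), (2, False)],
}

for scheme, log in trials.items():
    clicks = mean(c for c, _ in log)
    errors = mean(e for _, e in log)   # booleans average to an error rate
    print(f"{scheme}: {clicks:.1f} clicks/trial, {errors:.0%} error rate")
```
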
Icon Shape Test. Under this human factors test, icon shapes were compared to determine which icons would be readily identifiable, distinguishable, and easy to learn. The test was primarily aimed at helping the icon designers pinpoint the difficulties that might be encountered in the use of a certain set of icons and at showing how those icons could be better designed to facilitate the selection of appropriate commands (i.e., deleting, moving, copying, and showing properties). The icon shape test required four different designers to come up with four different sets of 17 icons. Five subjects were assigned to each of the four sets, for a total of 20 subjects. The subjects performed a naming test, timed tests, and rating tests using the four sets of icons. Among the four sets, the experiment showed that Set 1 was the best choice, although the decision came with some refinements to the design of its icons.
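
Here is a minimal sketch of how the naming-test results might have been scored, assuming a per-subject accuracy over the 17 icons; all numbers are invented for illustration.

```python
from statistics import mean

# Hypothetical naming-test accuracy (fraction of 17 icons named correctly)
# for the five subjects assigned to each of the four icon sets.
naming_accuracy = {
    "Set 1": [0.94, 0.88, 0.94, 0.82, 0.88],
    "Set 2": [0.76, 0.71, 0.82, 0.65, 0.76],
    "Set 3": [0.82, 0.88, 0.76, 0.82, 0.71],
    "Set 4": [0.71, 0.76, 0.65, 0.71, 0.82],
}

for icon_set, scores in naming_accuracy.items():
    print(f"{icon_set}: mean naming accuracy {mean(scores):.0%}")

best = max(naming_accuracy, key=lambda s: mean(naming_accuracy[s]))
print(f"Best-performing set: {best}")
```
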
Graphics Test. The objective of this last human factors test was less clear-cut, but it was done to find out how easy the graphics portion of the user interface was to learn and to pinpoint where the difficulties were. Under this test, subjects were asked to complete a series of exercises, drawing lines and other shapes such as rectangles with clicks of the mouse using the original user interface. After analyzing video footage of the experiments, the designers found ways to redesign and retest the graphics function of the user interface, with enhancements that reduced selection errors, required fewer clicks, and eased the burden on the user.
The three human factors experiments made in the design of the Star user interface were indeed tedious, but they had a significant impact, as lessons from each experiment were used to improve the design of the Star's user interface.

The article has shown a process by which a user interface can gain user acceptance, since the process yielded results that made human computer interaction more efficient and less tedious. The article highlights the importance of painstakingly improving the user interface so that the technological sophistication of a system does not go to waste.

In other words, the process of understanding how users experience the interface, and of learning from the lessons gathered across those user experiences, serves as the key to the acceptance of the user interface. The point is that no amount of technological sophistication in the system design will matter if the users find the user interface difficult to accept. The work of the product engineers will simply go to waste if the usability specialists do not do their job of making the user interface more efficient, effective, and acceptable to the users.

Therefore, many technological innovations must rely on user interface design to turn their technological complexity into a usable, and thus marketable, product.

The article has shown that in order to arrive at an optimal user interface design, rigorous human factors and usability testing is a basic necessity. This is precisely the relevance that the article contributes to the field of human computer interaction.

The empirical testing required in the design of a user interface allows naive users to provide data on what works as expected and what does not. Only then will the designers be able to make the necessary adjustments in the design of the user interface to make the product usable and therefore acceptable.

A good user interface design, as a product of human factors testing and other usability testing, can spell the difference between product acceptance and rejection in the marketplace. This is the value of the article's discussion of human factors testing in the user interface of Xerox's 8010 Star Office Workstation.

Relating this article to research on interactive design, one would conclude that human factors testing is indeed a crucial ingredient in researching an interactive design. Human factors testing plays a significant role in addressing the usability issues of a particular interactive design, as pointed out in the preceding discussion.

Empirical testing, or in the case of this article, human factors testing, is part of the process involved in the research of interactive design. Interactive designs must address the cognitive psychology principles identified in this article, which were used as a basis for conducting the experiments on this particular user interface.

Furthermore, research in interactive design entails a tedious iterative process like the one described in this article in order to produce relevant results that can be used as a basis for improving a particular interactive design. Thus, the human factors tests described in this article (i.e., the selection schemes tests, the icon shape test, and the graphics test) give a glimpse into the importance of paying close attention to human factors when doing research on interactive designs.

Designing for Usability—Key Principles and What Designers Think
The article pointed out that attaining the goal of designing systems that are easy to learn, useful, easy to use, and pleasant to use involves following four basic principles: designers must understand who the users will be; expected users must work closely with the designers at the early stages of design formulation; intended users must be given the chance, early in the development process, to try the simulations and prototypes on real work, with their performance and reactions actually measured; and lastly, problems must be fixed along the iterative process of designing, testing, measuring, and redesigning.

These principles have been recommended to designers for some time, but because of their seemingly obvious nature, they do not seem to be actually considered when designers carry out their system designs. In a survey made to validate this observation, it was found that most of the participants failed to mention most of the four design principles. When asked about the key steps in the development process, some wrote goals for the system such as making it "easy to use," "user-friendly," "easy to operate," or "flexible": goals that are difficult to achieve without knowing the steps in the development process.

Examining the responses to the survey leads one to the conclusion that the principles enumerated earlier are not really obvious and should not be perceived as mere "common sense." The article argues that the designers' responses may appear similar to the recommended principles, but in terms of intent, the means by which designers think of carrying out these goals, and their impact, they sound quite different altogether.

The article further stresses the significance of the recommended principles and the seeming lack of emphasis given to them by designers. In giving further weight to the recommended principles, the article contrasts what the authors mean with what the designers say.

First, the authors recommend understanding the potential users rather than just "identifying or describing" them. This implies that designers must have direct contact with potential users instead of just hearing or reading about them through secondary sources.

Second, the authors recommended that potential users become part of the design team, even if only for a brief period. More importantly, the authors emphasized that if potential users are to become part of the design team, it should be at an early stage, when their input matters most, instead of merely "hearing them out" in the review process.

Third, the authors emphasized the significance of actual behavioral measurements of learnability and usability, conducted at various stages of the development process, and stressed that the tests conducted in this regard must be user-centered rather than system-centered.
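
The kind of behavioral measurement the authors call for can be illustrated with a short sketch. The session log and the two measures below (task success rate and time on task) are illustrative assumptions rather than details from the article.

```python
from statistics import mean

# Hypothetical session log from a user test of a prototype:
# one entry per participant attempting the same task.
sessions = [
    {"user": "P1", "time_s": 95,  "completed": True},
    {"user": "P2", "time_s": 140, "completed": True},
    {"user": "P3", "time_s": 210, "completed": False},
    {"user": "P4", "time_s": 120, "completed": True},
]

# Two simple user-centered measures: how many finished, and how fast.
success_rate = mean(s["completed"] for s in sessions)
mean_time = mean(s["time_s"] for s in sessions if s["completed"])
print(f"Task success rate: {success_rate:.0%}")
print(f"Mean time on task (successful attempts): {mean_time:.0f}s")
```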

Fourth, the authors highlighted the need to incorporate the results of the behavioral testing into new versions of the system, and noted that this must happen through more than a single iteration.

The article then proceeds to analyze why, in the survey conducted, designers did not seem to see a compelling reason to adopt the recommended principles and perceived them as mere "common sense." The authors observe that designers tend to disregard the diversity in the characteristics of users because they have lost direct contact with them. The article also points out that designers seem to get so lost in the rational analysis of systems design that they fail to recognize the crucial role of users in the empirical process. Furthermore, the article highlights the importance of soliciting the input of potential users to find out what they actually need from the system being designed. The article then points out that design guidelines alone are not sufficient; human psychology must enhance these design guidelines if they are to remain relevant to the changing demands of users.

The authors also point out that getting it right the first time is not possible in the field of user interface design, since it involves a whole community of users. This means that, at best, only a forecast of the most ideal user interface can be made empirically, and it needs to be validated and refined through actual user involvement.

Furthermore, the authors argue that the process of developing a system need not be lengthened, as user testing can be done even before a system is built. Finally, the authors emphasize that constant iteration ensures excellence in a system design. The article concludes that designers must not lose sight of the four principles put forward by the authors, as these have significant bearing on the usability of their system designs.

The emphasis placed by the article on the four principles of designing a user interface gives credence to their significance in the field of human computer interaction. This is because designing for usability is what human computer interaction is all about.

Human computer interaction places a high premium on potential users, with all their human characteristics and attributes. In this sense, the authors were right in bringing users to the center of the discussion of the four principles they were advocating. In other words, all the principles emphasized in the article revolve around the user, and this is precisely why the view expressed in the article aligns well with the goals of human computer interaction.

The concerns about focusing early on understanding potential users, letting potential users become part of the design team, putting greater emphasis on user tests rather than system tests, recognizing the diversity of potential users, and trying to know the needs of users are just some of the tenets behind human computer interaction. This means that the be-all and end-all of any system design is improving human performance, which goes back to the issue of putting users at the center of the discussion.

Relating the article to research on interactive design, it is important to point out that, during the research process, potential users are the source of the most relevant data. Research on a system design becomes less relevant when it is not anchored in the end users of the system. Therefore, it must also be emphasized that the user must be made the central focus of any research conducted for the development of a particular interactive design.

The article highlighted the importance of iteration, and this has significant bearing on how research on interactive design must be conducted. The article recommends that, for research on an interactive design to yield fruitful results, iteration must be done more than once. In other words, excellence in system design is made possible by iterative research.

More importantly, the article emphasized that empirical or experimental research involving actual behavioral measurement of learnability and usability is the only way in which the usability of a system can be determined. This preference for empirical research requires that the system actually be tested by end users; only then can the system be said to be truly usable. Again, the value of user testing as a crucial part of the research process for determining the usability of an interactive design must be highlighted.

Finally, putting the potential end user at the center of the research process must be constantly emphasized; this is the article's contribution to the discussion of how the user plays an indispensable role in research on interactive designs.

 
