Friday 30 March 2007

HCI Extended - Yoseph Samuel Sultan - Affective Interaction

Affective Interaction – by Yoseph Samuel Sultan 520976

Affective Interaction falls under the umbrella of Affective Computing: the study of computational systems that can be aware of and respond to human emotions [Höök]. This includes systems that intend to evoke certain emotions as well. Humans are not only cognitive beings but emotional ones, which is why it is important to consider emotions when creating an environment that feels natural and usable. Due to the length restrictions of this report, carefully selecting what to include and leave out was very important. In the next paragraph, emotional theory is introduced, as it provides the essential foundation for work in this field. After that, some of the major challenges posed by Affective Interaction are discussed. This is followed by a presentation of some real-world applications that allow us to see the theory applied. Finally, a conclusion rounds off this article and presents personal views on the matter.

Affective Interaction is a multidisciplinary task. Computer scientists who try to create such systems have gathered information about human emotion from fields such as medicine, neurology, psychology and even philosophy. One of the problems with implementing such a system is that the theory behind it remains very abstract. Furthermore, there is no consensus in the community as to which emotional model describes humans best. The bridge between current theory and implementation is still huge. Although many of the suggested theories are grand and attempt to encapsulate everything, when creating practical, useful affective interaction systems these theories should be used only as inspiration and a source of ideas, not as a model for implementation. For a good review of the most popular emotional models, I refer the reader to [Scherer 2002].

There are several challenges that must be addressed before an emotionally interactive system can be created. One of the major challenges is to bridge the gap between ‘the constructive rational nature’ of software and the ‘interpretative subjective complexity’ of a user’s personal experience. At the end of the day, users do not naturally rationalize about their daily emotions; that is part of what makes them emotions. This level of unconscious involvement leads to very subjective and personal behavior. Two people may react in completely different emotional ways to the very same stimulus. This tells us that Affective Interaction design must be very user-centered. Even if the universe does have a unifying explanation for everyone’s emotions, we currently do not, so a practical system will only be developed when the design is user-centered. This point cannot be stressed enough. There is a very big difference between the reality of emotion/affect and useful, practical implementations of the theory.

When discussing topics such as affective interaction, it is very useful at some point to step back and see what is actually ‘out there’. It is very easy to get caught up in the very interesting philosophical and psychological discussion, but we must not forget our drive, which is to create systems that are useful to users and that improve interaction. One such system was proposed by [Ståhl 2005]. This application attempts to provide emotive expression in mobile messaging; if this were achieved perfectly, it would be an application yearned for by many. The emphasis of the application was the use of “sub-symbolic expressions; colors, shapes and animations, for expressing emotions in an open-ended way” [Ståhl 2005]. The picture below shows some of the emotions expressed by the system. Another application that attempts to achieve affect interaction is a mobile application called eMoto [Sundström 2005]. This application aims to be guided via user gestures, which are measured with a stylus extended with an accelerometer and a pressure sensor. In general, neither application performs entirely successfully, but both act as stepping stones that move the entire field forward.
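
To make the idea of sub-symbolic expression a little more concrete, here is a small Python sketch of how accelerometer and pressure readings could, in principle, be mapped onto colour and animation parameters. This is purely my own illustration, not how eMoto or [Ståhl 2005] actually implement it; the weights, thresholds and the Expression structure are assumptions.

# Illustrative only: a toy mapping from gesture measurements to
# sub-symbolic expression parameters (colour, animation speed, shape).
# The weights and output ranges are assumptions, not eMoto's design.

from dataclasses import dataclass

@dataclass
class Expression:
    colour: tuple           # RGB, warmer colours for higher arousal
    animation_speed: float  # 0.0 (calm) .. 1.0 (energetic)
    shape_sharpness: float  # 0.0 (round) .. 1.0 (spiky)

def gesture_to_expression(shake_energy: float, pressure: float) -> Expression:
    """shake_energy: normalised accelerometer activity in [0, 1]
       pressure:     normalised stylus pressure in [0, 1]"""
    arousal = min(1.0, 0.6 * shake_energy + 0.4 * pressure)
    # More red for higher arousal, more blue for calmer gestures.
    colour = (int(255 * arousal), 64, int(255 * (1 - arousal)))
    return Expression(colour=colour,
                      animation_speed=arousal,
                      shape_sharpness=pressure)

# Example: a vigorous, hard-pressed gesture yields a fast, red, spiky expression.
print(gesture_to_expression(shake_energy=0.9, pressure=0.8))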


Personally, I cannot deny the potential brought on by successful affective interaction development. However, I remain skeptical as to the feasibility of the task. Some have suggested that fully understanding human emotion is an impossible task. On the contrary, I do not believe a ‘full understanding of human emotion’ is required to develop a useful system that integrates information about emotion. In my own words, what is required is ‘a useful understanding of human emotion’. By useful I mean as simplistic or as complicated as required, but where the focus remains on the user and on system usability. After all, nature-inspired design is not about replicating nature’s actions, but only about drawing inspiration from them. In the same manner, we should learn about human emotion, and scientifically experiment and test, in order to create systems that are functional. I believe that is one of the elements that distinguishes a computer scientist from a philosophical theorist. We must take theories, learn, adapt, change, truncate and produce a deliverable that is functional. This is not to say that further research on theoretical aspects is unnecessary, or that it is not the role of the computer scientist; it is necessary, and computer scientists should be involved. The only point made here is that there are two fronts to the problem: the theoretical, which relates to reality, and the practical, which relates to functional systems. Both must be worked on and both must be advanced.

by Yoseph Samuel Sultan 520976

References:

Boehner, K., DePaula, R., Dourish, P., and Sengers P., 2005. Affect: From Information to Interaction. Critical computing Conference 2005, Århus, Denmark http://portal.acm.org/citation.cfm?id=1094570&coll=portal&dl=ACM&CFID=66824627&CFTOKEN=24952427

Ståhl, A. (2006) Designing for Emotional Expressivity, Licentiate Thesis, Institute of Design, Umeå University, Umeå, Sweden. Chapter 4 (and for those extra interested, Chapter 3): http://www.sics.se/~annas/lic/Binder1.pdf

Höök, K (2004) User-Centred Design and Evaluation of Affective Interfaces, In: Zs. Ruttkay, C. Pelachaud (Eds.), From Brows till Trust: Evaluating Embodied Conversational Agents, Kluwer, 2004. http://www.sics.se/~kia/papers/hook1.pdf

Sundström, P. (2005) Exploring the Affective Loop, Licentiate Thesis, Stockholm University, Stockholm, Sweden, Chapter 3: http://www.sics.se/~petra/eMoto/licen.pdf

Anna Ståhl, Petra Sundström, and Kristina Höök (2005) A Foundation for Emotional Expressivity, In Proceedings of DUX 2005, San Francisco, CA, USA. http://www.sics.se/~petra/eMoto/stahl_affee.pdf

HCI Extended - Christos Yiacoumis - Context Aware Systems

ID: 544851

Context-Aware Systems

In the past, hardware and software were seen simply as units that processed data supplied by the user in order to produce some outcome. However, as users become more aware of and familiar with systems, their expectations rise: they no longer just want a system that produces some sort of outcome on their behalf, but demand that it adjust to their preferences and habits as well. This calls for systems that carry some sort of intelligence and know how to manipulate context in such a way that they help users carry out their tasks; hence the need for context-aware systems.

Context is defined as any information that can be used to characterize the situation of entities that are considered relevant to the interaction between a user and an application [1]. Context-aware systems make use of technologies such as sensors, with the help of heuristics and other semi-intelligent means [2], to predict what the user wants. A stated goal of context-aware systems is to reduce the overhead involved in executing tasks. If the system can provide the user with shortcuts for tasks that matter to them (e.g. writing text messages on a mobile), then the system is in effect learning from its user. Situational awareness can then be used to reduce the amount of input the user is required to supply.
Every person has their own way of doing things; we follow different patterns to achieve a goal. It is not possible to have one system that fully satisfies each user individually unless that system senses or remembers information about the user, and records and ‘understands’ their emotional and physical situation in order to reduce effort.
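
As a deliberately simplified illustration of this idea, the sketch below suggests a shortcut based on remembered habits instead of asking the user for full input. The context fields, history format and example actions are hypothetical and not taken from the cited papers.

# Minimal sketch: using remembered context (time of day, location) to
# suggest a shortcut instead of asking the user to type a full request.
# The context fields, history format and rules are hypothetical.

from collections import Counter

class ContextAwareSuggester:
    def __init__(self):
        # history maps a (time_of_day, location) context to past user actions
        self.history = {}

    def record(self, context, action):
        self.history.setdefault(context, Counter())[action] += 1

    def suggest(self, context):
        """Return the most frequent past action for this context, if any."""
        actions = self.history.get(context)
        if actions:
            return actions.most_common(1)[0][0]
        return None  # no shortcut known: fall back to asking the user

suggester = ContextAwareSuggester()
suggester.record(("morning", "kitchen"), "show bus timetable")
suggester.record(("morning", "kitchen"), "show bus timetable")
print(suggester.suggest(("morning", "kitchen")))  # -> "show bus timetable"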

The design of context-aware systems is thus based on sensing and modeling situations and recognizing rules of engagement. These systems are driven by models of the task, the user and the system. The user model consists of information about the user's background in doing tasks. It has two important elements that need to be fulfilled: user comfort and user congruency. The first affects usage, while the latter concerns the compatibility between a user's preferences and the designed artifact [1][2]. The system model refers to the system's ability to carry out a task according to the capabilities it possesses.
In context-aware systems there is a software side but also a hardware side to consider. On the software side, agents are used to monitor and understand the intentions of the user in order to improve the operation of the system. However, this may sometimes lead to inconsistencies between what the user wants and what the system understands. In other cases users adopt strategies in order to protect themselves against errors [5].

Dix, Finlay, Abowd and Beale suggest that context-aware applications should follow the principles of appropriate intelligence [3]:
1. Be right as often as possible, and useful when acting on these correct predictions.
2. Do not cause inordinate problems in the event of an action resulting from a wrong prediction.

On the hardware side, improvements in sensor devices reduce the computation needed. A big challenge, however, is how these hardware devices filter out unnecessary data and process only the data that matter for each task.
In designing such systems, an important factor that needs to be considered is the aesthetics of the system. Aesthetics affect the user's perception of the system and are as important as the system's ability to carry out a task.

As mentioned earlier, context data need to be processed in some intelligent way. This is referred to as context-reasoning. In “Reasoning in Context-Aware Systems”, four perspectives on approaching the context-reasoning problem are suggested [4]. These are:

a) low-level approach
b) application view approach
c) context monitoring
d) model monitoring

Each of these is explained in more detail in the paper mentioned before.
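
Purely to give a flavour of what reasoning over sensed context can look like in practice, here is a minimal rule-based sketch. The rules and context values are invented for illustration and are not taken from [4]; the paper itself should be consulted for the details of each perspective.

# A minimal, hypothetical rule-based context reasoner (not from [4]):
# each rule maps a predicate over the sensed context to an action.

rules = [
    (lambda ctx: ctx["light"] < 0.2 and ctx["user_present"], "turn_on_lights"),
    (lambda ctx: ctx["noise"] > 0.8, "raise_ring_volume"),
    (lambda ctx: ctx["in_meeting"], "silence_notifications"),
]

def reason(context: dict) -> list:
    """Return the actions whose conditions hold in the current context."""
    return [action for condition, action in rules if condition(context)]

print(reason({"light": 0.1, "user_present": True,
              "noise": 0.3, "in_meeting": True}))
# -> ['turn_on_lights', 'silence_notifications']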

Conclusion

This brief essay has looked at some of the basic concepts of context-aware systems. The importance of such systems is high, and a lot of research is being carried out into developing systems that can help users accomplish their tasks while also learning from the user. Such systems must be designed so that they fit users' preferences and habits. Their ultimate goal should be to reduce the communication barriers between user and system, and in no way to interfere with or obstruct users in executing their tasks.

References

[1]Context-aware design and interaction in computer systems by T. Selker and W. Burleson - http://www.research.ibm.com/journal/sj/393/part3/selker.html

[2] Out of context: Computer systems that adapt to, and learn from, context by H. Lieberman and T. Selker - http://www.research.ibm.com/journal/sj/393/part1/lieberman.html

[3] Human Computer Interaction by A. Dix, J. Finlay, G. Abowd, R. Beale - 3rd Edition

[4] Reasoning in Context-Aware Systems, P. Nurmi, P. Floreen (Helsinki Institute for Information Technology)

[5] D. A. Norman, “Some Observations on Mental Models,” Mental Models, D. Gentner and A. L. Stevens, Editors, Lawrence Erlbaum Associates, Hillsdale, NJ (1983), pp. 15-34.

[6] HCI 2 course website: http://www.cs.bham.ac.uk/~rxb/Teaching/HCI%20II/index.htm

[7] Context-Aware Computing – Thomas P. Moran (IBM Almaden Research Center), Paul Dourish (University of California, Irvine)

[8] User-Centered Task Modeling for Context-Aware Systems, Tobias Klug, Technical University of Darmstadt, Germany

HCI Extended - Stelios Katsavras - Affective Interaction

Affective Interaction

An affective human-computer interaction [1] is considered to be one in which emotional and similar information is communicated by the user to the computer in a comfortable and non-disruptive way. The computer interprets the information and tries to improve the interaction with the user. For a long time [2], emotions were considered undesirable for rational behaviour, but there is now evidence that they are essential to problem-solving and intelligence. This has resulted in a new emerging field, Affective Interaction.

Communicating emotional information is often not considered important in interaction design. There are systems that attempt to please the user but do not check whether their actions made the user angry or happy. The system should be able to receive some feedback from the user so it can constantly adapt, reinforcing or removing negatively-perceived actions [1]. A number of learning systems, mainly robots, have been designed with the ability to passively sense emotions. A well-known example is “Kismet”, a robot that learns from the positive or negative affective information that can be extracted from a person's speech.

No matter how usable or well designed an interface is, it will not please all types of users [1]. Modern software can be tweaked by the user to adapt to their needs or liking, but software that adapts itself is hard to design. A design that changes constantly, or one that does not change when it should, will be annoying. It is hard to design software that knows when to change and whether the changes it made affected the user positively or not. To attain the right balance in the interaction and please the user, human-human interactions can be used to inform human-computer interactions. Some “smart” systems [1] attempt to please the user by automating the tasks that the user is doing; however, the assumptions and decisions of such systems are often false. Affective interaction can address this kind of problem by observing the user's behaviour and emotions after a change and adapting accordingly. For example, if the user is happy when the software checks the spelling of some text, the system will continue the spell-checking process until the user becomes unhappy or frustrated with it. In the latter case, the system will either disable spell-checking or adapt in some other way.
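
A toy version of such an adaptation loop might look like the sketch below. The frustration value is assumed to come from whatever affect sensor the system has, and the smoothing factor and thresholds are arbitrary choices made only for illustration.

# Sketch of an affective adaptation loop for a single feature (spell-check).
# The sensed frustration is assumed to be a value in [0, 1] from an affect
# sensor; the smoothing factor and thresholds are arbitrary assumptions.

class AdaptiveSpellCheck:
    def __init__(self, threshold=0.7):
        self.enabled = True
        self.threshold = threshold
        self.frustration = 0.0  # smoothed estimate

    def update(self, sensed_frustration: float):
        # Exponential smoothing so one noisy reading does not flip the UI.
        self.frustration = 0.8 * self.frustration + 0.2 * sensed_frustration
        if self.enabled and self.frustration > self.threshold:
            self.enabled = False   # the user seems annoyed: back off
        elif not self.enabled and self.frustration < 0.3:
            self.enabled = True    # the user has calmed down: offer it again

checker = AdaptiveSpellCheck()
for reading in [0.9] * 8:          # sustained signs of frustration
    checker.update(reading)
print(checker.enabled)             # -> False: the system has backed off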

Sensors that are used to detect the affective information are either passive or require the user’s intent. Passive sensors though raise some privacy concerns and sometimes users prefer to express an emotion intentionally to a computer interface [1]. The devices that record and communicate affective information must be comfortable and not disruptive. For this purpose some prototypes were developed such as the Pressure Sensitive Mice (SqueezeMouse) and Frustration Feedback Widgets (Frustrometer).

Affective interfaces are often divided into three categories [3]: 1) those that express emotions, 2) those that process emotions and use affect as part of the system's intelligence, and 3) those that attempt to understand emotions. User evaluation is not done very often, and sometimes it is not clearly defined and understood what should be evaluated in a user study. The design of such systems should involve the user at all stages to constantly receive feedback. A user-centered approach to affective interaction involves the user in the so-called affective loop. In an affective loop [4], users express their emotions to a system through their physical, bodily behaviour, regardless of whether the emotion was felt at that moment or not. As the system responds through suitable feedback, the users become more involved with their expressions.

Affective Interaction is a field in which not much research has been done yet. Some argue that the interface characters used in affective interaction violate good usability principles [3] and confuse designers and users, whereas others insist that affective interaction is essential in order to please the user. Either way, it is a new field that will continue to grow as technology makes it possible to detect emotions and process them to entertain and please the user during their experience with interfaces. The user, in turn, should be involved in the design of such systems to provide the feedback that is necessary to improve the system and to understand its weaknesses.

References

[1] Carson Reynolds & Rosalind W. Picard, Designing for Affective Interactions
http://www.cs.chalmers.se/idc/ituniv/kurser/04/projektkurs/artiklar/TR-541.


[2] Ana Paiva, Affective Interactions: Toward a New Generation of Computer Interfaces?
http://books.google.com/books?hl=en&lr=&id=N3D84amSg50C&oi=fnd&pg=PA1&sig=Uy1pofGuGUqptzi4xpCAK5rr3Rs&dq=Affective+interaction#PPA3,M1


[3] Kristina Hook, Evaluation of Affective Interfaces
http://www.sics.se/safira/publications/P-AAMAS-Hook.pdf



[4] Petra Sundstrom, Anna Stahl & Kristina Hook, A User-Centered Approach to Affective Interaction

http://eprints.sics.se/149/01/kina.pdf


Wednesday 28 March 2007

Conclusion

The blog was set up as part of the requirements of HCI II and HCI II (Extended), in which a new product had to be designed using the User-Centered Design process.

After our brainstorming session where the members of our group came up with some really good ideas, we decided to design an eTrolley which will help people aged >60 do their shopping.

The next step was to create our personas who would help us identify some of the problems of the elderly people and to also take part in the testing and evaluation phase.

We started gathering the requirements for the eTrolley by giving out questionnaires to our personas, in order to learn more about their lifestyle and the way they carry out their shopping. An experiment also took place to find out the time needed to shop for some random products in a local store using nothing but a trolley. Another member of the group observed how elderly people shop and noted down some potential functionalities of the eTrolley that would help to reduce effort and shopping time.

We then came up with some creative designs of the hardware and the different screens of the system. Some discussions took place between the group members to identify weaknesses or improvements in our designs before beginning the prototyping phase. During each step of the cycle, each prototype was tested by our personas, who completed a questionnaire. This provided us with feedback to help us assess our design and improve it where possible.

Finally, our last prototype was evaluated using two well-known evaluation methods:

a) Heuristic evaluation, in which several evaluators critique a system independently to come up with potential usability problems.
b) Cognitive walkthrough

Monday 26 March 2007

Heuristic Evaluation

Heuristic evaluation is a method for structuring the critique of a system using a set of relatively simple and general heuristics. The general idea behind heuristic evaluation is that several evaluators critique a system independently to come up with potential usability problems.

1. Visibility of system status/Feedback

The user's constant interaction with the eTrolley provides enough feedback to show whether the system is active or not. For example, in the Nav Guide, which is the most used feature, the red dot representing the location of the customer in the store blinks and moves when the eTrolley is on the move. Moreover, the shopping list and total amount on the right side of the screen are updated whenever the customer adds a product to or removes one from the trolley.

2. User control and freedom

The user can select any function of the eTrolley to use at any time. The menu on the left has all the available options for the user to select. If they are using the Nav Guide and at the same time want to search for a product or request directions, they can do so without the system emptying the shopping list.

3. Match between system and the real world

Shopping with the eTrolley is very similar to shopping with a traditional trolley. Also, the maps that the eTrolley accesses in the store's central database match the arrangement of the shelves in the store, and checking out is the same process as going to the cashier, but done using the trolley instead.

4. Everyday language

Our system's language is very simple and understandable by the users. There are no technology-related words or expressions, and our personas did not have any difficulties when testing the system. The words used are those that the users read and speak when they are shopping.

5. Consistency

To improve consistency throughout the user interface, we changed the theme to something more colorful and the same color was used for almost all buttons in order to keep things simple for the user.

6. Recognition not Recall

The user doesn't have to remember or recall anything during their shopping experience with the eTrolley. Elderly people are often forgetful, and a system that required them to recall something would probably frustrate them. The options are always available through the menu on the left, and the shopping list is constantly updated in Nav Guide mode.

7. Flexibility and Ease of Use

The eTrolley is as easy to move around as any other trolley but makes shopping easier and less time consuming. The customer is able to remove any product from the eTrolley by re-scanning it, and they can check out instantly at any time. The system requires few button presses from the customers to produce the required result.

8. Error Prevention and Recovery

Our latest prototype now includes a "Help" section that allows the users to talk to a member of staff if any problem occurred during the use of the system or if the user has any questions about using some of the features.

9. Aesthetic Design

The whole design is quite minimalist and aesthetically pleasing. The colors used are not distracting and help the user identify the different buttons on the screen and clearly distinguish important information.

10. Documentation

No documentation was produced for the eTrolley due to the simplicity of its use. However, the system could be enhanced with a tutorial to help customers familiarise themselves quickly.

Cognitive Walkthrough

"In the cognitive walkthrough, experts follow the series of actions that an interface will require a user to perform in order to accomplish some task to check the interface for potential usability problems. Usually, the main focus of the cognitive walkthrough is to establish how easy a system is to learn. More specifically, the focus is on learning through exploration. Experience shows that many users prefer to learn how to use a system by exploring its functionality hands on, and not after sufficient training or examination of a user's manual. So the kinds of checks that are made during the walkthrough ask questions that address this exploratory kind of learning. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user." Definition given on course website.

Steps:
1. Start
2. Search
3. Search by Keyword
4. Enter keyword through the keyboard
5. A range of matching products is displayed
6. Select a product
7. The Nav Guide provides the user with the shortest route to the selected product

Sunday 25 March 2007

Use Case - Flash

Watch a use case...

It uses predictive text to intelligently guess what the user is trying to say.

Thursday 22 March 2007

Prototype III - Analysis Review of Results

After the end of prototype 3, many of the problems found in the previous stages were resolved. To mention a few: inserting data through the keyboard was time-consuming, but predictive text reduced the effort needed to search for a product; the shopping list is now displayed on the right side of the screen while the customer is using the "Nav Guide"; and the "Help" section was added, as it had previously been overlooked during the design of the interface.

Ingrid and Goldfinger did not suggest further improvements after experiencing the third prototype which means that the interface has become more usable through the whole prototyping phase. However, this does not necessarily mean that there is no margin for improvement. Testing and evaluation will continue after the final release of our product.

Prototype III - Questionnaire Response (Goldfinger)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
This version looks and feels so much better. Great work guys!

Prototype III - Questionnaire Response (Ingrid)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?

____________________________
____________________________
____________________________

Prototype III



This is the third and final prototype and it implements the suggestions from the latest survey.

Goals:

1. Predictive text
2. Display shopping list along with total amount in NAV mode
3. Include a HELP section to provide support to the customers

Saturday 17 March 2007

Prototype II - Analysis Review of Results

The questionnaires revealed several properties that can be improved in the prototype.

Although the keyword search functionality was added to the system, some users, particularly Ingrid, found it very difficult to type words. Older users may not be used to the QWERTY layout found on most keyboards. Furthermore, some longer words might take a long time to type in. For this reason, the next prototype will include predictive text. This will increase the speed with which data are input and thus reduce customer effort.
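
The predictive text does not need to be sophisticated: even a simple prefix match against the store's product list would cut down the typing considerably. The sketch below only illustrates the idea and is not the actual eTrolley implementation; the product names are made up.

# Illustrative prefix-based predictive text for the keyword search.
# The product names are made up; a real system would query the store database.

PRODUCTS = ["milk", "mince", "mushrooms", "natural yogurt",
            "onions", "organic butter", "oranges"]

def predict(prefix: str, limit: int = 3) -> list:
    """Return up to `limit` product names starting with what was typed so far."""
    prefix = prefix.lower().strip()
    return [p for p in PRODUCTS if p.startswith(prefix)][:limit]

print(predict("m"))   # -> ['milk', 'mince', 'mushrooms']
print(predict("or"))  # -> ['organic butter', 'oranges']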

During NAV mode customers are not given the option to view the products that are already in the trolley. The next prototype should provide the user with a list of these items through the user interface.

An important feature that was wrongly ignored during the design stage was to include a HELP section. It is vital to provide some sort of support to the customer. For instance let us assume that Ingrid wishes to talk to one of the staff members. It could be possible to do this through the HELP section by selecting the feature "Get In Touch with a Staff Member".

All the above were recorded and changes to the design and implementation are to be carried out during the next phase of the prototyping process.

Prototype II - Questionnaire Response (Ingrid)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
It takes too long to input using the letters on the screen (virtual keyboard). It took me 5 min to find where the 'y' key was. And don't get me started on how long it took me to find the space maker to separate words. Although, I liked the new larger screen.

Prototype II - Questionnaire Response (Goldfinger)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
I want to see all the items I have selected in a list when I'm in NAV mode

Prototype II




This is the second prototype. Its purpose is to implement the improvements identified from the surveys conducted.

Goals:

1. The ability to perform keyword searches.
2. Provide the shopping list to view all items that have been selected
3. Use a uniform button size to improve consistency
4. Use larger text
5. The buttons were too close to each other, so people would sometimes press the wrong button; the spacing between buttons has now been increased.
6. Increased screen size

Tuesday 13 March 2007

Prototype I - Testing/Questionnaire/Evaluation

The first prototype is now ready for deployment and thus needs critical testing. Ingrid and John, our two personas, are asked to try the system and answer a short questionnaire. The same procedure is followed in the next iteration.

-------------Questionnaire ---------------
1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
______________________________________
______________________________________
______________________________________
______________________________________

Prototype I

This is the first prototype. Its purpose is to set the framework of the interface.

Goals:
1. Create a template framework that can be used on most screens. This is a very important concept in HCI, as familiarity is crucial in making a system more usable.

2. Use buttons with familiar terminology, such as "Just Start Shopping". This will increase the obviousness of the system: users should be clear on what function each button serves.

3. Reduce the number of buttons on each screen. Keeping the interface simple will help prevent users from 'getting lost' in the system.


In the following post, we will test the prototype, highlighting its deficiencies and main successes. This will allow us to review the prototype.

Preparing For Prototyping

We are now at the stage of prototyping. We will be using all the information gathered from the previous stages, such as the questionnaire, social observation and other parts of the Creative Design to guide the prototyping.

We will be following an iterative process to encourage the development of the product. The stages of iteration we will follow are: Prototyping - Testing - Review. Doing this allows for adding useful functionality as well as discovering and fixing problems in the prototypes.

Prototype I - Analysis Review of Results

The questionnaires revealed several properties that can be improved in the prototype.
Firstly, the responses we received to the question concerning the size of the menus and text displayed showed that an increase in size was necessary. John stated that the size was "Just fine" while Ingrid found it "A bit too small". For people like Ingrid this can become really annoying, especially after a 30-minute shopping walk. The size of the menus and text has to be larger. Using capital letters might also make the system more usable.

Due to the 2-dimensional nature of the screen, buttons may not be easy to distinguish, especially for Ingrid, who hardly has any experience with technology. Furthermore, Ingrid is colorblind.
This has shown that the e-trolley user interface should clearly display buttons through intelligent use of colors and thick borders.

The original interface did not make it clear where to start. Although a modern user will intuitively look at the menus in the sidebar for guidance in the tasks they wish to complete, for the elderly this cannot be taken for granted. A clear sign must be given to indicate where to begin. This will be added to the next prototype that we create.

Finally, our personas made the following suggestions regarding system functionality:
a) The ability to perform keyword searches.
b) Provide the shopping list to view all items that have been selected.
c) Increase screen size


Prototype I - Questionnaire Response (Goldfinger)

1. The size of 'words' and menus was:
a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
I liked the idea. It would be nice if I could type a keyword for searching for items.

Prototype I - Questionnaire Response (Ingrid)

Ingrid's Responses:

1. The size of 'words' and menus was:
a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?

It would be nice if I could see all the items that I have in my shopping list. Also the screen size could be larger for people that can't see clearly.

Saturday 10 March 2007

Creative Design - Shop & Go




Shopping is an exhausting and stressful task most of the time. The long queues that most supermarkets have at their checkouts only make customers feel worse. The NAV GUIDE deals with this issue by introducing the Shop & Go concept. Shop & Go updates the customer's total as they are doing their shopping. When an item is added to the trolley, the sensors pick up the ID and price of the product and add it to the balance. If an item is taken out of the trolley, its price is of course taken off the total.
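
A rough sketch of how the running total could be maintained from the trolley's scan events is given below; the event interface and the prices are assumptions made for illustration only.

# Sketch of the Shop & Go running total. Each sensor event says whether an
# item went into or out of the trolley; prices here are invented examples.

class ShopAndGo:
    def __init__(self, price_lookup):
        self.price_lookup = price_lookup  # product id -> price (from store DB)
        self.contents = []                # product ids currently in the trolley
        self.total = 0.0

    def item_added(self, product_id):
        self.contents.append(product_id)
        self.total += self.price_lookup[product_id]

    def item_removed(self, product_id):
        if product_id in self.contents:
            self.contents.remove(product_id)
            self.total -= self.price_lookup[product_id]

prices = {"milk": 0.89, "eggs": 1.35, "cereal": 2.10}
trolley = ShopAndGo(prices)
trolley.item_added("milk")
trolley.item_added("cereal")
trolley.item_removed("cereal")        # customer changes their mind
print(round(trolley.total, 2))        # -> 0.89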

When the customers have finished their shop, they do not need to queue for a long time to pay. In fact, they do not have to queue at all, as the Shop & Go service allows them to pay at the NAV GUIDE by debit/credit card. This allows customers to pay and leave the store within minutes of finishing their shop, which decreases the amount of stress shoppers experience and gives them a much more relaxed environment to shop in.

Creative Design - Hardware


This is the first hardware design. To avoid making the eTrolley cumbersome, it is fitted with a 10.1" screen. This will allow the user to see enough information and make use of its touch-screen interface. It has a joystick which the customers can use to propel the trolley. The antenna gives the eTrolley wireless capabilities to connect to the database. The database stores information about the location of the shelves and their contents. It also stores information about navigating through the shop. This includes recognizing where the eTrolley currently is, so that the shortest distances can be calculated and a suitable short path suggested.
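
One simple way the shortest route could be computed is a breadth-first search over a grid model of the shop floor, as sketched below. The grid layout here is invented and is only meant to illustrate the idea, not the actual store map or database.

# Sketch: shortest route on a grid model of the shop floor using
# breadth-first search. '#' cells are shelves, '.' cells are walkable.
# The layout is a made-up example, not the real store map.

from collections import deque

GRID = ["....#....",
        ".##.#.##.",
        ".........",
        ".##...##.",
        "........."]

def shortest_route(start, goal):
    """Return the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(GRID), len(GRID[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] == "." and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

print(shortest_route((0, 0), (4, 8)))  # cell-by-cell route around the shelves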

Tuesday 6 March 2007

Creative Design - Checkout (2)

Since the customers can pay by either using the Checkout function of the eTrolley or by going to the cashier (only God knows why!), two buttons are added below the total amount to enable this functionality. If the customer prefers to go to the cashier, then the route from the current location to the cashier is displayed.

Creative Design - Checkout (1)


In the final screen, all products in the trolley are listed along with their prices and the total amount. If the customer wants to remove an item in the list, they just scan it again and remove it from the trolley. The total amount and the list are automatically updated.

Creative Design - Search by Shelf/Brand/Keyword




The search page will have the following three options:
  1. Search by shelf
  2. Search by brand
  3. Search by keyword
"Searching by shelf" and "Searching by brand" pages will display a list of shelves and brands respectively for the customer to select and they can only select one option at a time. In the "Searching by keyword" page where the customer has to enter the keyword to search for, a virtual keyboard will appear at the bottom of the screen.

Monday 5 March 2007

Creative Design - Nav Guide (6)


The final page with the route to the destination will be similar to the sketch above. If the destination shelf or product is not shown on the map (i.e. it is in a different section of the store not shown on the map at that moment), then the customer will need to move in the direction of the line to see where it ends. There also has to be some way to indicate whether or not the user is moving in the right direction.

Creative Design - Nav Guide (5)


After our last group meeting, it was decided to alter the "Get Directions" screen in order to allow the customer to select specific products as well. Thus, the screen is split in half with the names of the shelves on the left and the products of a shelf on the right part of the screen. For example, when the customer selects "Vegetables", the right part of the screen will show all available products on that shelf and as a destination, either a shelf on the left or a product on the right can be selected.
The "Detailed View" button at the bottom shows or hides the products on the right. This is to let the old people choose the level of detail they prefer, in case they find it too simple or too confusing to use.

Creative Design - Nav Guide (4)


An option that the Nav Guide component provides is that of requesting directions from the current location of the trolley to another shelf. A drop-down list can be used to present the names of the shelves to the customer and after the selection, a map will be displayed with a red line indicating the route to that shelf.

Creative Design - Nav Guide (3)


In the screen where the shelves and the location of the customer are displayed, there needs to be some functionality that allows the customer to explore the area and see what shelves are in different parts of the store that are not displayed in the map at that moment. As with online city maps, arrows can be added to make this possible. Touching one of the arrows will trigger the system to immediately update the map and display the other shelves.

Creative Design - Nav Guide (2)


Some group members suggested a basic menu on the side of the screen for easy access to other parts of the system such as "Offers", "Checkout" and "Help". The menu will be static, meaning that on each click only the contents of the main screen will change. Another idea is to shrink the map, place it in the bottom right corner of the screen and indicate the location of the customer when the Nav Guide is not active. This will save the user from having to return to the Nav Guide screen to check their location, but it may be difficult for the old people to read the names of the shelves on such a small map.

Creative Design - Nav Guide (1)


Firstly, the Nav Guide component must contain a screen in which the map of the store and the customer's position are constantly updated. Two buttons at the bottom of the screen allow the customer to return to the previous screen or to request directions from their current location to a shelf or specific product. The concept is to keep the screen clean and simple and not to confuse the old people with many buttons or options.

Friday 2 March 2007

Creative Design - Welcome and Login Screen










The welcome screen will have two buttons to allow the user to login or start shopping immediately. Since the system is equipped with fingerprint recognition capabilities, the user can login by either placing their finger on the sensor at the right of the screen or by using their credit card.

Task Analysis

It was important to break the tasks of the e-trolley down. Doing so gives a clear picture of the objectives of the project and allows us to plan the work better. Below is a task breakdown of the e-trolley. It is not a detailed description of the flow of the system, but it describes the functionality provided by the system.

Thursday 1 March 2007

Hardware Specification - Alex

The proposed human-computer interface (HCI) hardware for the e-trolley consists of:

  • An embedded computer with low-voltage embedded CPUs, which will be responsible for: 1) running the software which controls the motion of the e-trolley, 2) temporarily saving the current user's configuration, and 3) handling the wireless network connectivity which will transfer and save each user's configuration (favorite products, checkout etc.) to the supermarket's central database server. For network security, SSL and IPsec are proposed.
  • A 10.1" LCD Touch Screen Monitor with embedded speakers. Elderly people who have sight problems can use the screen reader functionality.
  • An Absorbed Glass Mat (AGM) battery to supply power to the wheel propulsion mechanism. An AGM battery does not leak or freeze and does not require any maintenance. Thus, the battery will not contaminate the food products within the supermarket if it is damaged, and the e-trolley can be stored for a long period of time without charging.
  • A force-sensing joystick, similar to the one electric powered wheelchairs use, to control the position of the e-trolley. It is intended as a human-friendly and ergonomic interface that lets elderly people operate our e-trolley with only limited wrist motion. The speed of the trolley should be between 2 and 4 miles per hour, but further research is needed to establish the optimal speed.
  • Elderly people have slower reaction times, therefore proximity sensors are suggested in order to identify obstacles and avoid collisions (a rough sketch of the idea follows this list).
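
Below is a hypothetical sketch of how the proximity sensor readings could throttle the joystick-commanded speed; the distance thresholds are assumptions and the real cut-off values would have to be established through testing with users.

# Hypothetical sketch: scaling the joystick-requested speed down as the
# nearest obstacle gets closer. Distances are in metres; the thresholds
# are assumptions, only the 4 mph upper bound comes from the spec above.

STOP_DISTANCE = 0.5   # closer than this: stop completely
SLOW_DISTANCE = 1.5   # closer than this: slow down proportionally
MAX_SPEED_MPH = 4.0   # upper bound mentioned in the spec

def safe_speed(requested_mph: float, nearest_obstacle_m: float) -> float:
    """Scale the joystick-requested speed down as obstacles get closer."""
    requested_mph = min(requested_mph, MAX_SPEED_MPH)
    if nearest_obstacle_m <= STOP_DISTANCE:
        return 0.0
    if nearest_obstacle_m < SLOW_DISTANCE:
        factor = (nearest_obstacle_m - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)
        return requested_mph * factor
    return requested_mph

print(safe_speed(4.0, 3.0))   # clear aisle  -> 4.0
print(safe_speed(4.0, 1.0))   # obstacle 1 m -> 2.0
print(safe_speed(4.0, 0.3))   # too close    -> 0.0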

Wednesday 28 February 2007

Shopping Experiment - Osama and Alex

As part of the design process of the e-trolley we went to Sainsbury's to do a shop and document the shopping process. A map of the store was kindly given to us and we started our shopping. Our shopping list was as follows:

Meat

  • Mince
  • Chicken breast
  • Whole chicken
  • Chicken legs

Vegetables

  • Onion
  • Couple of carrots
  • Couple of leek
  • Garlic
  • Lemon
  • Pepper (mixed)
  • Potatoes
  • Tomatoes
  • Ginger
  • Chillies
  • Mushrooms

Selection of fruits

  • Apple
  • Banana
  • Blueberries
  • Coriander
  • Avocado

Dairy

  • Cream
  • Milk
  • Natural Yogurt
  • Eggs
  • Soured cream
  • Organic Butter
  • Cheese

Household

  • Small Size Rubber Gloves- the pink ones
  • Softener
  • Tissue paper

Other
  • Couscous
  • Vegetable Oil
  • Tinned tomatoes
  • Tinned chickpeas
  • Cereal
The purpose of our shopping experiment was to discover whether the shopping process is indeed complicated enough to require a solution, especially when it comes to elderly people. The following is a description of the actual shop:

We entered the store facing the vegetable and fruit aisle. We started with the vegetables on our list. The vegetables are between two horizontal aisles that have four vertical aisles in between them. The vertical aisles are rectangular and have vegetables and fruits on all four sides. We started by going down the left side looking for the herbs, but the map doesn't say where the herbs are; it only classifies the products as vegetables or fruits.

As the shopping went along, this became a recurring problem with the map. The map keys are difficult to navigate with, as they are organised by category. Since there are hundreds of products in each section, going through them looking for an individual product was a very tedious task. We went in circles around the aisles many times; for instance, we looped the vertical vegetable aisles and went up and down different aisles several times looking for an item. At some stages, when searching for a product became impossible, we resorted to asking for help. The shop assistants were busy with other customers who could not find their products, so we had to wait a while until we could find an assistant ready to help us.
When we finished our shopping we had to queue for 15 minutes.

Below is the map of the store, with the route that we took to complete the shop drawn on it.



The e-trolley will help eradicate many of these problems. This means it will save customers time, because they will shop quicker, and it will give them a stress-free shopping environment. The e-trolley will guide the elderly customer through his/her shop, taking the customer to each product individually. As the products are added to the trolley, the trolley will add the amounts of the products to the total bill. This way, the customer will not have to wait at the end of the shop and queue for the till.

Tuesday 27 February 2007

Defining User Requirements/Observation - Stelios

In order to define what the user requirements are, which will then help us in the design stage, it was essential to go all the way down to the root of the problem. That meant visiting one of the large stores in my area and observing the way people do their shopping. Based on this we can then decide whether there is room for improvement in several areas of shopping.

Morning is usually the time older people choose to do their shopping, since they don’t work and they want to avoid congestion and long queues that occur in the afternoon. The plan was to follow 4-5 old people from the time they entered the shop and grabbed their trolley to the point they paid and left the store. I arrived at the store at 11am and got myself a trolley and started going round the store pretending I was shopping groceries, hoping they wouldn’t suspect that they were being followed!


Outcome

I was surprised to see that no one followed the order in which the products were written on their list, and all of them returned to an aisle they had been to before to get some other product that came later in the list. This means that they neither shop in the order the products are written down, nor make sure that there are no more products they need in an aisle before they proceed to the next one. This simply leads to extra walking, tiredness and delay. Three of the old people went around the store to find a specific product or food, and in some cases it took a total of 15 minutes to find it. The total shopping time for each individual ranged from 50 minutes to 1 hour 20 minutes including checkout time.


Suggestions

  • Total shopping time can be minimised by:
      • Giving directions to the customer for finding a specific product
      • Constructing a time-efficient shopping route according to the customer's previous purchases. The customer would have the option to modify the route by adding or removing products (a rough sketch of how such a route could be ordered is given after this list).
  • Checkout time can be reduced by adding some functionality to the trolleys that will allow customers to scan items before putting them in the trolley and pay using their cards as soon as they finish. No need to queue.
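
To illustrate what constructing such a route could involve, here is a rough sketch that orders the items on a shopping list using a greedy nearest-neighbour heuristic. The shelf coordinates are invented; a real system would use the store map and proper walking distances.

# Rough sketch of ordering a shopping list into a time-efficient route using
# a greedy nearest-neighbour heuristic. Shelf coordinates are invented;
# a real system would use the store map and proper path distances.

from math import dist

SHELF_POSITIONS = {"vegetables": (1, 1), "dairy": (8, 2),
                   "meat": (2, 7), "household": (9, 8)}

def order_route(shopping_list, start=(0, 0)):
    """Visit each needed shelf, always moving to the nearest unvisited one."""
    remaining, route, here = set(shopping_list), [], start
    while remaining:
        nearest = min(remaining, key=lambda s: dist(here, SHELF_POSITIONS[s]))
        route.append(nearest)
        here = SHELF_POSITIONS[nearest]
        remaining.remove(nearest)
    return route

print(order_route(["dairy", "household", "vegetables", "meat"]))
# -> ['vegetables', 'meat', 'household', 'dairy'] with these example coordinates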

Monday 26 February 2007

Questionnaire Results and Analysis - Christos

The questionnaire survey conducted had four objectives, which were defined under the post "Spec. Requirements / Design Questionnaire". This section summarizes the outcomes of this survey on the various issues the elderly encounter during their shopping.

  1. To identify the problems encountered by the elderly while grocery shopping - Related questions: 1,2,8,11,14
  2. To assess the familiarity with various technologies - Related questions: 3,4,5,6,7
  3. To define measures of improvement - Related questions: Most of the questions
  4. To identify age bias disabilities - Related questions: 13,14


In order to establish how technologically advanced the e-trolley should be, the personas were asked questions related to the type of technology they are familiar with.

Question 3 asks, "Do you have a computer at home?" .Both personas answered that they do have one. This indicates that at least the personas are either using a computer or someone else in the house is using it.
Question 4, "How often do you use the computer?", persona Ingrid Pollard has given the answer "I don't" while persona John Goldfinger answered "I use it often".
This indicates that we are dealing with a target audience that has either never or barely used a computer before, or the other extreme, where they have actually been using one due to their circumstances (their jobs, children, etc.).
On question 5 the personas were asked to select from a list of computer programs they are familiar with. Persona John Goldfinger listed the following: 1. Web Browser (e.g. Firefox, Internet Explorer) 2. Word Processor (e.g. Open Office, Microsoft Word) 3. Slideshow (e.g. Photo Album, Picture Viewer) 5. E-mail (e.g. Outlook, Thunderbird). This shows that the personas who are using a computer are familiar with the basic tools for doing a variety of tasks. From this we can conclude that at least this portion of the population has encountered the best-known user interfaces on the market. This is something that we need to take into consideration during the design stage of the product.

Questions 6 and 7 ask the personas whether they have a mobile phone in their possession and, if so, its brand. Both personas answered "Yes", indicating that they have both encountered mobile interfaces before. In the case of persona Ingrid Pollard the use of the mobile is restricted to just making phone calls, whereas John Goldfinger uses his mobile for taking pictures, sending SMS messages, etc. We can conclude once again that the target audience varies in skill when it comes to mobile devices. The fact, though, that a number of them have probably used a mobile in some way is at least a good sign when it comes down to designing the product.

Questions 13 and 14 aimed to discover what kinds of disabilities are common and in what ways the design of the system can be adjusted in order to avoid any disappointments.
Ingrid complained of back pains, which directly affect how long she can stand or walk at a time. John had a leg operation recently and his doctor advised him not to stress his leg. Both our personas have problems related to walking. Since shopping involves walking long distances and carrying heavy loads, it is the e-trolley's main priority to reduce the amount of shopping time by estimating and suggesting faster routes. Wandering around a store trying to figure out where an item is kept is both annoying and tiring; for the elderly it is yet another burden on top of the many they already have.