Friday 30 March 2007

HCI Extended - Yoseph Samuel Sultan - Affective Interaction

Affective Interaction – by Yoseph Samuel Sultan 520976

Affective Interaction falls under the umbrella of Affective Computing: the study of computational systems that can be aware of and respond to human emotions [Höök]. This includes systems that intend to evoke certain emotions as well. Humans are not only cognitive beings but emotional ones, which is why it is important to consider emotions when creating an environment that feels natural and usable. Due to the length restrictions of this report, careful selection of what to include was essential. In the next paragraph, emotional theory is introduced, as it provides the essential foundation for work in this field. After that, some of the major challenges posed by affective interaction are discussed, followed by a presentation of some real-world applications that show the theory applied. Finally, a conclusion rounds off this article and presents personal views on the matter.

Affective Interaction is a multidisciplinary task. Computer scientists who try to create such systems have gathered information about human emotion from fields such as medicine, neurology, psychology and even philosophy. One of the problems with implementing such a system is that the underlying theory remains very abstract. Furthermore, there is no consensus in the community as to which emotional model describes humans best. The gap between current theory and implementation is still huge. Although many of the suggested theories are grand and attempt to encapsulate everything, when creating practical, useful affective interaction systems these theories should only be used as inspiration and a source of ideas, not as a model for implementation. For a more detailed discussion of the various emotional models, I refer the reader to [Scherer 2002], which gives a good review of the most popular ones.

There are several challenges that must be addressed before an emotionally interactive system can be created. One of the major challenges is to bridge the gap between ‘the constructive rational nature’ of software and the ‘interpretative subjective complexity’ of a user’s personal experience. At the end of the day, users do not naturally rationalize about their daily emotions; that is part of what makes them emotions. This level of unconscious involvement leads to very subjective and personal behavior: two people may react in completely different emotional ways to the very same stimulus. This tells us that affective interaction design must be very user-centered. Regardless of whether the universe holds a unifying explanation for all people’s emotions, we currently do not have one, so a practical system will only be developed when the design is user-centered. This point cannot be stressed enough. There is a very big difference between the reality of emotion/affect and useful, practical implementations of theory.

When discussing topics such as affective interaction, it is very useful at some point to step back and see what is actually ‘out there’. It is easy to be caught up in the very interesting philosophical and psychological discussion, but we must not forget our goal, which is to create systems that are useful to users and improve interaction. One such system was proposed by [Ståhl 2005]. This application attempts to provide emotive expression in mobile messaging; if this were perfectly achieved, it would be an application yearned for by many. The emphasis of the application was the use of “sub-symbolic expressions; colors, shapes and animations, for expressing emotions in an open-ended way” [Ståhl 2005]. The picture below shows some of the emotions expressed by the system. Another application that attempts to achieve affective interaction is eMoto [Sundström 2005], a mobile application that is guided via user gestures, measured by a stylus extended with an accelerometer and a pressure sensor. In general, neither application performs entirely successfully, but both act as stepping stones that move the whole field forward.


Personally, I cannot deny the potential brought on by successful affective interaction development. However, I remain skeptical as to the feasibility of the task. Some have suggested that a full understanding of human emotion is impossible. On the contrary, I do not believe a ‘full understanding of human emotion’ is required to develop a useful system that integrates emotional information; in my own words, what is required is ‘a useful understanding of human emotion’. By useful I mean as simple or as complicated as needed, with the focus remaining on the user and on system usability. After all, nature-inspired design is not about replicating nature’s actions, but about drawing inspiration from them. In the same manner, we should learn about human emotion, and scientifically experiment and test, in order to create systems that are functional. I believe that is one of the elements that distinguishes a computer scientist from a philosophical theorist: we must take theories, learn, adapt, change, truncate and produce a deliverable that is functional. This is not to say that further theoretical research is unnecessary, or that it is not the role of the computer scientist; it is necessary, and computer scientists should be involved. The only point made here is that there are two fronts to the problem: the theoretical, which relates to reality, and the practical, which relates to functional systems. Both must be worked on and both must be advanced.


References:

Boehner, K., DePaula, R., Dourish, P., and Sengers P., 2005. Affect: From Information to Interaction. Critical computing Conference 2005, Århus, Denmark http://portal.acm.org/citation.cfm?id=1094570&coll=portal&dl=ACM&CFID=66824627&CFTOKEN=24952427

Ståhl, A. (2006) Designing for Emotional Expressivity, Licentiate Thesis, Institute of Design, Umeå University, Umeå, Sweden. Chapter 4 (and for those extra interested, Chapter 3): http://www.sics.se/~annas/lic/Binder1.pdf

Höök, K (2004) User-Centred Design and Evaluation of Affective Interfaces, In: Zs. Ruttkay, C. Pelachaud (Eds.), From Brows till Trust: Evaluating Embodied Conversational Agents, Kluwer, 2004. http://www.sics.se/~kia/papers/hook1.pdf

Sundström, P. (2005) Exploring the Affective Loop, Licentiate Thesis, Stockholm University, Stockholm, Sweden, Chapter 3: http://www.sics.se/~petra/eMoto/licen.pdf

Anna Ståhl, Petra Sundström, and Kristina Höök (2005) A Foundation for Emotional Expressivity, In Proceedings of DUX 2005, San Francisco, CA, USA. http://www.sics.se/~petra/eMoto/stahl_affee.pdf

HCI Extended - Christos Yiacoumis - Context Aware Systems

ID: 544851

Context-Aware Systems

In the past, hardware and software were considered units for processing data supplied by the user in order to produce some outcome. However, as users become more aware of and familiar with systems, their expectations rise: they no longer want just a system that produces some outcome on their behalf, but demand one that adjusts to their preferences and habits as well. This calls for systems that carry some sort of intelligence and know how to manipulate context in a way that helps users carry out their tasks; hence the need for context-aware systems.

Context is defined as any information that can be used to characterize the situation of entities that are considered relevant to the interaction between a user and an application [1]. Context-aware systems make use of technologies such as sensors, with the help of heuristics and other semi-intelligent means [2], to predict what the user wants. A stated goal of context-aware systems is to reduce the overhead of executing certain tasks. If the system can create and provide the user with shortcuts for doing things that are of great importance (e.g. writing text messages on mobiles), then the system learns from its user. Situational awareness can then be used to reduce the amount of input the user is required to supply.
Every person has their own way of doing things; we follow different patterns to achieve a goal. It is not possible to have one system that fully satisfies each user individually unless that system senses or remembers information about the user, and records and ‘understands’ their emotional and physical situation in order to reduce effort.

The design of context-aware systems is thus based on sensing and modeling situations and recognizing rules of engagement. These systems are driven by models of the task, the user and the system. The user model consists of information about the user's background in performing tasks, and has two important elements that need to be fulfilled: user comfort and user congruency. The first affects usage, while the latter concerns the compatibility between a user's preferences and the designed artifact [1][2]. The system model refers to the system's ability to carry out a task according to the capabilities it possesses.
In context-aware systems we have both a software and a hardware side to consider. On the software side, agents are used to monitor and understand the intentions of the user in order to improve the operation of the system. However, this may sometimes lead to inconsistencies between what the user wants and what the system understands. In other cases, users adopt strategies to protect themselves against errors [5].

Dix, Finlay, Abowd and Beale suggest that context-aware applications should follow the principles of appropriate intelligence [3]:
1. Be right as often as possible and useful when acting on these correct predictions
2. Do not cause inordinate problems in the event of an action resulting from a wrong prediction.

On the hardware side, improvements in sensor devices reduce the computation needed. However, a big challenge is how these devices filter out unnecessary data and process only what is relevant to each task.
An important factor to consider in designing such systems is aesthetics. Aesthetics affect the user's perception of the system and are as important as the system's ability to carry out a task.

As mentioned earlier, context data need to be processed in some intelligent way. This is referred to as context reasoning. In “Reasoning in Context-Aware Systems”, four perspectives on the context-reasoning problem are suggested [4]. These are:

a) the low-level approach
b) the application-view approach
c) context monitoring
d) model monitoring

Each of these is explained in more detail in the paper mentioned above.

Conclusion

This brief essay has looked at some of the basic concepts of context-aware systems. Such systems are important, and a lot of research is being carried out on developing systems that help users accomplish their tasks while also learning from the user. These systems must be designed so that they fit users' preferences and habits. Their ultimate goal should be to reduce the communication barriers between user and system, and in no way interfere with or obstruct users from executing their tasks.

References

[1]Context-aware design and interaction in computer systems by T. Selker and W. Burleson - http://www.research.ibm.com/journal/sj/393/part3/selker.html

[2] Out of context: Computer systems that adapt to, and learn from, context http://www.research.ibm.com/journal/sj/393/part1/lieberman.html

[3] Human Computer Interaction by A. Dix, J. Finlay, G. Abowd, R. Beale - 3rd Edition

[4] Reasoning in Context-Aware Systems, P. Nurmi, P. Floreen (Helsinki Institute for Information Technology)

[5] D. A. Norman, “Some Observations on Mental Models,” Mental Models, D. Gentner and A. L. Stevens, Editors, Lawrence Erlbaum Associates, Hillsdale, NJ (1983), pp. 15-34.

[6] HCI 2 course website: http://www.cs.bham.ac.uk/~rxb/Teaching/HCI%20II/index.htm

[7] Context-Aware Computing - Thomas P. Moran (IBM Almaden Research Center), Paul Dourish (University of California, Irvine)

[8] User-Centered Task Modeling for Context-Aware Systems, Tobias Klug, Technical University of Darmstadt, Germany

HCI Extended - Stelios Katsavras - Affective Interaction

Affective Interaction

An affective human-computer interaction [1] is one in which emotional and similar information is communicated by the user to the computer in a comfortable, non-disruptive way. The computer interprets this information and tries to improve the interaction with the user. For a long time [2], emotions were considered undesirable for rational behaviour, but there is now evidence that they are essential to problem-solving capabilities and intelligence. This has resulted in a new emerging field, Affective Interaction.

Communicating emotional information is often not considered important in interaction design. There are systems that attempt to please the user but do not check whether their actions made the user angry or happy. A system should be able to receive feedback from the user in order to constantly adapt, reinforcing or removing negatively perceived actions [1]. A number of learning systems, mainly robots, were designed with the ability to passively sense emotions. A well-known example is “Kismet”, a robot that learns from the positive or negative affective information that can be extracted from a person's speech.

No matter how usable or well-designed an interface is, it will not please all types of users [1]. Modern software can be tweaked by the user to adapt to their needs or likings, but software that adapts itself is hard to design. A design that changes constantly, or one that does not change when it should, will be annoying. It is hard to design software that knows when to change and whether the changes made affected the user positively or not. To attain the right balance and please the user, human-human interactions can be used to inform human-computer interactions. Some “smart” systems [1] attempt to please the user by automating the tasks the user is doing; however, the assumptions and decisions of such systems are often wrong. Affective interaction can address this kind of problem by observing the user's behaviour and emotions after a change and adapting accordingly. For example, if the user is happy when the software checks the spelling of some text, the system will continue spell-checking until the user becomes unhappy or frustrated with it. In the latter case, the system will either disable spell-checking or adapt in some other way.
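As a rough illustration of the adaptation loop described above (a hypothetical sketch of our own, not an implementation from the cited papers; the class name, scoring scheme and threshold are all invented), a system might keep a running affect score per automated action and withdraw the action once negative reactions dominate:

```python
# Hypothetical sketch: adapt an automated action to affective feedback.
# Reactions are simplified to +1 (positive) / -1 (negative); real systems
# would infer these from sensors or behaviour, which is far harder.

class AdaptiveFeature:
    """Tracks user reactions to an automated action (e.g. spell-checking)
    and disables the action once negative reactions dominate."""

    def __init__(self, name, threshold=-2):
        self.name = name
        self.score = 0            # running affect score
        self.threshold = threshold
        self.enabled = True

    def record_reaction(self, affect):
        """affect: +1 for a positively perceived action, -1 for a negative one."""
        self.score += affect
        if self.score <= self.threshold:
            self.enabled = False  # withdraw the action rather than frustrate

spellcheck = AdaptiveFeature("spell-check")
for reaction in [+1, +1, -1, -1, -1, -1]:
    spellcheck.record_reaction(reaction)

print(spellcheck.enabled)  # False: repeated frustration disabled the feature
```

Even this toy version shows the design tension from the paragraph above: the threshold decides how quickly the system reacts, and setting it too aggressively produces exactly the constantly-changing behaviour users find annoying.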

Sensors that detect affective information are either passive or require the user's intent. Passive sensors, though, raise some privacy concerns, and sometimes users prefer to express an emotion intentionally to a computer interface [1]. The devices that record and communicate affective information must be comfortable and non-disruptive. For this purpose, prototypes such as Pressure Sensitive Mice (SqueezeMouse) and Frustration Feedback Widgets (Frustrometer) were developed.

Affective interfaces are often divided into three categories [3]: 1) those that express emotions, 2) those that process emotions and use affect as part of the system's intelligence, and 3) those that attempt to understand emotions. User evaluation is not done very often, and sometimes it is not clearly defined and understood what should be evaluated in a user study. The design of such systems should involve the user at all stages to constantly receive feedback. A user-centered approach to affective interaction involves the user in the so-called affective loop. In an affective loop [4], users express their emotions to a system through physical, bodily behaviour, regardless of whether the emotion was felt at that moment or not. As the system responds through suitable feedback, the users become more involved with their expressions.

Affective Interaction is a field in which not much research has been done yet. Some have argued that interface characters used in affective interaction violate good usability principles [3] and confuse designers and users, whereas others insist that affective interaction is essential to pleasing the user. In any case, it is a new field that will continue to grow as technology makes it possible to detect emotions and process them to entertain and please the user during their experience with interfaces. The user, in turn, should be involved in the design of such systems to provide the feedback necessary to improve the system and understand its weaknesses.

References

[1] Carson Reynolds & Rosalind W. Picard, Designing for Affective Interactions
http://www.cs.chalmers.se/idc/ituniv/kurser/04/projektkurs/artiklar/TR-541.


[2] Ana Paiva, Affective Interactions: Toward a New Generation of Computer Interfaces?
http://books.google.com/books?hl=en&lr=&id=N3D84amSg50C&oi=fnd&pg=PA1&sig=Uy1pofGuGUqptzi4xpCAK5rr3Rs&dq=Affective+interaction#PPA3,M1


[3] Kristina Höök, Evaluation of Affective Interfaces
http://www.sics.se/safira/publications/P-AAMAS-Hook.pdf



[4] Petra Sundström, Anna Ståhl & Kristina Höök, A User-Centered Approach to Affective Interaction

http://eprints.sics.se/149/01/kina.pdf


Wednesday 28 March 2007

Conclusion

This blog was set up as part of the requirements of HCI II and HCI II (Extended), in which a new product had to be designed using the User-Centered Design process.

After our brainstorming session, where the members of our group came up with some really good ideas, we decided to design an eTrolley to help people aged over 60 do their shopping.

The next step was to create our personas, who would help us identify some of the problems of elderly people and also take part in the testing and evaluation phase.

We started gathering the requirements for the eTrolley by giving questionnaires to our personas, in order to learn more about their lifestyle and the way they carry out their shopping. An experiment also took place to find out the time needed to shop for some random products from a local store using nothing but a trolley. Another member of the group observed how elderly people shop and noted down some potential functionalities of the eTrolley that would help reduce effort and shopping time.

We then came up with some creative designs for the hardware and the different screens of the system. Discussions took place between the group members to identify weaknesses or possible improvements in our designs before beginning the prototyping phase. During each step of the cycle, each prototype was tested by our personas, who completed a questionnaire. This provided us with feedback to help us assess our design and improve it where possible.

Finally, our last prototype was evaluated using two well-known evaluation methods:

a) Heuristic evaluation, in which several evaluators critique a system independently to come up with potential usability problems.
b) Cognitive walkthrough.

Monday 26 March 2007

Heuristic Evaluation

Heuristic evaluation is a method for structuring the critique of a system using a set of relatively simple and general heuristics. The general idea behind heuristic evaluation is that several evaluators critique a system independently to come up with potential usability problems.

1. Visibility of system status/Feedback

The user's constant interaction with the eTrolley provides enough feedback to show whether the system is active or not. For example, in Nav Guide, the most used feature, the red dot representing the location of the customer in the store blinks and moves when the eTrolley is on the move. Moreover, the shopping list and total amount on the right side of the screen are updated whenever the customer adds or removes a product from the trolley.

2. User control and freedom

The user can select any function of the eTrolley at any time. The menu on the left has all the available options for the user to select. If they are using the Nav Guide and at the same time want to search for a product or request directions, they can do so without the system emptying the shopping list.

3. Match between system and the real world

Shopping with the eTrolley is very similar to shopping with a traditional trolley. Also, the maps that the eTrolley accesses in the store's central database match the arrangement of the shelves in the store, and checking out is the same process as going to the cashier, but done through the trolley instead.

4. Everyday language

Our system's language is very simple and understandable by users. There are no technology-related words or expressions, and our personas did not have any difficulties when testing the system. The words used are those that users read and speak when they are shopping.

5. Consistency

To improve consistency throughout the user interface, we changed the theme to something more colorful and the same color was used for almost all buttons in order to keep things simple for the user.

6. Recognition not Recall

The user doesn't have to remember or recall anything during their shopping experience with the eTrolley. Elderly people are often forgetful, and a system that required them to recall something would probably frustrate them. The options are always available through the menu, which is always on the left, and the shopping list is constantly updated in Nav Guide mode.

7. Flexibility and Ease of Use

The eTrolley is as easy to move around as any other trolley but makes shopping easier and less time-consuming. The customer is able to remove any product from the eTrolley by re-scanning it, and can check out instantly at any time. The system requires few button presses from the customer to produce the required result.

8. Error Prevention and Recovery

Our latest prototype now includes a "Help" section that allows users to talk to a member of staff if any problem occurs during the use of the system, or if they have questions about using some of its features.

9. Aesthetic Design

The whole design is quite minimalist and aesthetic. The colors used are not distracting, and they help the user identify the different buttons on the screen and clearly distinguish important information.

10. Documentation

No documentation was produced for the eTrolley due to the simplicity of its use. However, the system could be enhanced with a tutorial to help customers familiarise themselves quickly.

Cognitive Walkthrough

"In the cognitive walkthrough, experts follow the series of actions that an interface will require a user to perform in order to accomplish some task to check the interface for potential usability problems. Usually, the main focus of the cognitive walkthrough is to establish how easy a system is to learn. More specifically, the focus is on learning through exploration. Experience shows that many users prefer to learn how to use a system by exploring its functionality hands on, and not after sufficient training or examination of a user's manual. So the kinds of checks that are made during the walkthrough ask questions that address this exploratory kind of learning. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user." Definition given on course website.

Steps:
1. Start
2. Search
3. Search by keyword
4. Enter keyword through keyboard
5. A variety of matching products is displayed
6. Select a product
7. Navigator provides the user with the shortest route to the selected product
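The "shortest route" in the final step could be computed in many ways. As a hedged sketch (the grid layout, coordinates and function name below are our own illustration, not part of the actual eTrolley design), a breadth-first search over a grid map of the store, with `#` marking shelves, finds the fewest-step walkable path:

```python
# Illustrative only: BFS over a toy store grid to find a shortest route.
from collections import deque

def shortest_route(grid, start, goal):
    """Return the length of the shortest walkable path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        # Explore the four neighbouring cells, skipping shelves ('#').
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

store = ["....",
         ".##.",
         "...."]
print(shortest_route(store, (0, 0), (2, 3)))  # 5
```

BFS is the simplest choice here because every step has equal cost; a real store navigator might instead weight aisles by congestion and use Dijkstra's algorithm.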

Sunday 25 March 2007

Use Case - Flash

Watch a use case...

It uses predictive text to intelligently guess what the user is trying to say.

Thursday 22 March 2007

Prototype III - Analysis Review of Results

After the end of prototype 3, many of the problems found in the previous stages were resolved. To mention a few: inserting data through the keyboard was time-consuming, but predictive text reduced the effort needed to search for a product; the shopping list is now displayed on the right side of the screen while the customer is using the "Nav Guide"; and a "Help" section was added, as it had previously been overlooked during the design of the interface.

Ingrid and Goldfinger did not suggest further improvements after experiencing the third prototype, which suggests that the interface has become more usable over the whole prototyping phase. However, this does not necessarily mean there is no margin for improvement. Testing and evaluation will continue after the final release of our product.

Prototype III - Questionnaire Response (Goldfinger)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
This version looks and feels so much better. Great work guys!

Prototype III - Questionnaire Response (Ingrid)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?

____________________________
____________________________
____________________________

Prototype III



This is the third and final prototype and it implements the suggestions from the latest survey.

Goals:

1. Predictive text
2. Display shopping list along with total amount in NAV mode
3. Include a HELP section to provide support to the customers

Saturday 17 March 2007

Prototype II - Analysis Review of Results

The questionnaires revealed several properties that can be improved in the prototype.

Although keyword search functionality was added to the system, some users, particularly Ingrid, found it very difficult to type words. Older users may not be used to the QWERTY layout found on most keyboards, and some longer words can take a long time to type in. For this reason, the next prototype will include predictive text. This will increase the speed with which data are entered and thus reduce customer effort.
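As a minimal sketch of the kind of predictive text planned for the next prototype (the function, product names and purchase frequencies below are purely illustrative, not real store data), a typed prefix can be matched against product names ranked by how often they are bought, so fewer keystrokes are needed:

```python
# Illustrative only: prefix-based product prediction ranked by popularity.

def predict(prefix, product_counts, limit=3):
    """Suggest up to `limit` products matching the prefix, most popular first."""
    matches = [(name, count) for name, count in product_counts.items()
               if name.startswith(prefix.lower())]
    # Sort by descending popularity, breaking ties alphabetically.
    matches.sort(key=lambda item: (-item[1], item[0]))
    return [name for name, _ in matches[:limit]]

# Invented purchase frequencies for the sketch.
counts = {"milk": 120, "mince": 40, "mint tea": 15, "bread": 90}
print(predict("mi", counts))  # ['milk', 'mince', 'mint tea']
```

A production version would use a trie for large catalogues and could personalise the counts per customer, which is exactly the "system learns from its user" behaviour discussed in the context-aware essay above.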

In NAV mode, customers are not given the option to view the products that are already in the trolley. The next prototype should provide the user with a list of these items through the user interface.

An important feature that was wrongly omitted during the design stage was a HELP section. It is vital to provide some sort of support to the customer. For instance, suppose that Ingrid wishes to talk to one of the staff members; it could be possible to do this through the HELP section by selecting "Get In Touch with a Staff Member".

All the above were recorded and changes to the design and implementation are to be carried out during the next phase of the prototyping process.

Prototype II - Questionnaire Response (Ingrid)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
It takes to long to input using the letters on the screen (virtual keyboard). It took me 5 min to find where the 'y' key was. And don't get me started on how long it took me to find the space maker to separate words. Although, i liked the new larger screen.

Prototype II - Questionnaire Response (Goldfinger)

1. The size of 'words' and menus was:

a) Too large
b) A bit too large
c) Just fine
d) A bit too small
e) Too small

2. How easy was to recognize button icons on the screen:

a) I can see the buttons clearly
b) I sometimes confuse the buttons with the background
c) What buttons?

3. How easy were the menus to navigate:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

4. How easy was it to find a desired product:

a) Very easy
b) Moderately easy
c) Sometimes confusing
d) I could never find what i wanted

5. After selecting a product, the e-trolley directed you:

a) Very quickly to the item
b) Adequately to the item
c) In a long path to the item
d) I could never find what i wanted

6. The e-trolley display shows:

a) Way too much information
b) Too much information
c) Just the right amount
d) Not enough information
e) Very little information

7. The subtotal price of the products in the trolley so far is:

a) very useful
b) useful
c) not important
d) useless

8. How was your overall experience with the e-trolley?

a) I was very satisfied
b) Satisfied
c) Neutral
d) Disappointed
e) Very disappointed

9. Do you have any further comments?
I want to see all the items I have selected in a list when I'm in NAV mode

Prototype II




This is the second prototype. Its purpose is to implement the improvements identified in the surveys conducted.

Goals:

1. The ability to perform keyword searches
2. Provide the shopping list to view all items that have been selected
3. Use a uniform button size to improve consistency
4. Use larger text
5. Increase the spacing between buttons, which were previously so close that people sometimes pressed the wrong one
6. Increased screen size