Category Archives: Better websites

Don’t move my stuff! Reaction to change AND when to measure usability

Systems, websites and apps get old and need to be updated, which may result in new functionality or new user flows. Some users like – or at least tolerate – change, but some find it very upsetting, on the verge of a crisis, when you “move their stuff”. Measuring usability in the middle of that crisis would not give the new system a fair result.

First, a little about the emotional journey of the user. The emotional curve a user goes through when they get upset because you updated their favourite website can be the same curve as in a personal crisis. The feelings are not as deep, hurtful and painful, but they follow the same pattern.

There are ways to handle this – usually with information: the right kind of information at the right time.
Continue reading →


Arrogant iPhone users or dedicated Android users?

A company’s apps were tested: one app for Android and one for iPhone. Both apps are very similar in features, design and function.

When recruiting users for the UX tests, 40 Android users and 50 iPhone users – who had provided feedback in the respective app during the past two months – were contacted. Both groups received identical e-mails explaining that I wanted to ask a few questions about the app but needed their phone number, as they had only provided an e-mail address on the feedback form.

A strange phenomenon occurred: significantly more Android users than iPhone users responded to the e-mail. 22 (or 55%) of the Android users and 6 (or 12%) of the iPhone users replied with their phone number, willing to be interviewed.

The samples are small, but the difference in response rate is hard to ignore. Why are Android users more engaged, eager to participate and provide feedback? Unfortunately, I have no clear answer. I do, however, have a few guesses:
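As an aside (my own sketch, not part of the original recruitment story), a two-proportion z-test is one standard way to gauge whether a gap in response rates like this could plausibly be chance:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """How many pooled standard errors apart are two response rates?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # pooled standard error
    return (p_a - p_b) / se

# Android responders vs iPhone responders
z = two_proportion_z(22, 40, 6, 50)
```

By convention, |z| above roughly 1.96 is read as significant at the 5% level; with samples this small, though, such a number should be treated as a rough indication rather than proof.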
• It has something to do with Android attracting more open-source-minded users, who are more eager to share.
• The selection of apps is smaller on Android, so users are more concerned with ensuring the quality of those that do exist.
• Android is more of a newcomer on the market, so users feel more like part of a community.

Anyone else have a theory? Please share!

Quick guide to consumer insights – how, when and where should you use what?

Very simplified, there are four types of consumer insights. You can run user tests and interview users to find out how they think, use eye tracking to see what the user sees, look in Google Analytics to see what the user does, or ask users to fill in a questionnaire to see what they say. All four methods are good and fill a purpose, but most of the time it is more effective to use several of them in combination. You can use:

User tests – what the user thinks

User tests or user interviews work very well before a project starts, to feed into the project brief. They can also be used to generate ideas together with the users, or during the course of a project to test solutions in the shape of wireframes – does the user understand how to navigate through them, and how did they experience the process? Finally, they can be used at the end of a project to verify that the final solution meets the user’s goals.

Continue reading →

Success rate assessment – test of specific flows and functions on a website

Success rate assessments are a good way to identify flows that work well – or badly – on a website. It also pays to know why you are assessing the website: do you want to track how you are progressing in a project, or are you planning to make changes if the results go down?

The purpose of the assessments

The purpose of a success rate assessment is to see how well a workflow or function on a website is working. How many of the users could finish a task without problems, how was their experience while doing so, and finally, how satisfied were they with it?
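The headline number can be sketched in a few lines. This is a minimal sketch under one common convention – counting partial completions as half a success – which is my assumption, not something the post prescribes:

```python
def success_rate(succeeded, partial, total):
    """Task success rate in percent.

    Assumption (not from the post): partial completions count as half,
    a convention often used in UX success-rate reporting.
    """
    if total <= 0 or succeeded + partial > total:
        raise ValueError("counts must be non-negative and sum to at most total")
    return (succeeded + 0.5 * partial) / total * 100

# e.g. 7 full successes and 2 partial ones out of 10 participants -> 80.0
rate = success_rate(7, 2, 10)
```

Whether partial successes should count at all – and at what weight – depends on what "finishing the task" means for the flow being tested.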

Continue reading →

SUS – System Usability Scale, a summary

SUS stands for System Usability Scale. It was developed by Brooke (1996) as a “quick and dirty” assessment tool, used to quickly and cheaply assess a product or a service (1) – a low-cost usability test.

A SUS test is supposed to cover:

  • Effectiveness (the ability of users to complete tasks).
  • Efficiency (how difficult it was to perform the task).
  • Satisfaction (how the users felt when performing the task).

A system can be a car, a website or a product. SUS assessment has a reliability value of 0.85 (2).

A SUS assessment consists of 10 statements, each rated on a 5-point scale according to how much the user agrees or disagrees with it. The result can vary between 0 and 100, where a higher score indicates better usability. The odd-numbered statements are positive and the even-numbered ones are negative, so both the user and the person analyzing the results have to stay focused!
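The scoring rule behind that 0–100 number can be sketched as follows (Brooke’s standard scoring; the function name is mine):

```python
def sus_score(responses):
    """SUS score from ten Likert responses (1 = strongly disagree, 5 = strongly agree).

    Odd-numbered statements are positively worded: contribution = response - 1.
    Even-numbered statements are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A user who fully agrees with every positive statement and fully
# disagrees with every negative one scores the maximum:
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # -> 100.0
```

Note that the alternating item polarity is exactly why the raw responses cannot simply be averaged.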

Continue reading →

Quality assessment: Measuring specific experiences of a website

Apart from a SUS (System Usability Scale) assessment – an established method of measuring usability – there are several other assessments that can be used for a website.

The SUS assessment is a good method for measuring the users’ experience of a website and how they grade the usability of a system, but it gives no indication of what can be improved or where the users get stuck. A quality assessment can give you more detail. Note that we are still talking about experiences of a website – the feeling associated with the system – not specific tasks or paths a user takes through the website. Before you start, you should also have a good reason for measuring and a plan for how to act on the results.

Continue reading →