Define goals

Define your goals and research questions clearly. Ask yourself: what am I trying to prove or validate? Do I want to validate design choices? Do I want to validate the layout? Does the language make sense? Does the functionality assist the user? It helps to break testing goals down into categories (for example, functional, editorial and design goals).

Strong, simple goals help you to:

  • see if users understand what they need to do and can complete all relevant tasks
  • identify usability issues
  • generate ideas for how to improve

Specify target group

Your research goals, together with your user personas, will help you select your target group: consider demographics, digital skills, social and economic status, and education level.

Review the criteria with your team to make sure you’re selecting the right people for the user testing.


Before planning any testing session, work with your team to agree the research questions, the types of users and the type of prototype you want to test. Prototypes can take the form of paper sketches, wireframes or interactive user journeys, for example.

User testing sessions can take up to 60 minutes, depending on the complexity of the tasks. Allow at least 15 minutes between sessions.
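As a quick planning aid, the timings above can be turned into a day's schedule. This is a minimal sketch: the 60-minute session length and 15-minute buffer come from the guidance, while the start time and number of sessions are assumed examples.

```python
from datetime import datetime, timedelta

SESSION = timedelta(minutes=60)  # maximum session length, per the guidance above
BUFFER = timedelta(minutes=15)   # minimum gap between sessions

def schedule(start: str, sessions: int) -> list[str]:
    """Return the start time of each session, leaving a buffer after each one."""
    t = datetime.strptime(start, "%H:%M")
    times = []
    for _ in range(sessions):
        times.append(t.strftime("%H:%M"))
        t += SESSION + BUFFER
    return times

# Four sessions starting at 09:00
print(schedule("09:00", 4))  # ['09:00', '10:15', '11:30', '12:45']
```

Adjusting the session length or buffer shows immediately how many participants fit into a day.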

Design the tasks

Tasks need to be designed carefully to make sure they answer your research questions. Well-defined goals will make this process easier.

According to GDS, good test tasks:

  • set a clear goal for participants to try and achieve
  • are relevant and believable to participants
  • are challenging enough to uncover usability issues
  • don’t give away ‘the answer’ or hint at how a participant might complete them

Rehearse and revise the tasks with colleagues. Rehearsing the test will reduce on-the-day stress.

Maintain a record of each test session. Ideally, you'd want users to voice their thoughts and feelings out loud as they navigate the test.

Run user test sessions

For each participant, introduce yourself, explain the research and remind them about things like how you are recording the session.

Explain each test task, telling the participant what you want them to do using clear instructions. Ask the participant to share their thoughts as they run through the task.

During the task, mostly watch and listen.

At the end of the session, ask follow-up questions about the things you observed and check if the participant has any final thoughts.


Organise the findings into categories: functionality, editorial, design, or per task. Did a majority of users stumble during a particular task? Did the layout leave them unsure how to proceed? Was the language confusing?

Next, rank the findings by severity. Ranking findings helps the team understand how critical each issue is. Don't forget to include positive findings as well, letting the team know what's already working.
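The grouping and ranking steps above can be sketched as a simple tally. The categories come from the guidance; the finding records and the severity scale (0 = positive, 1 = cosmetic, 3 = critical) are assumed examples, not a prescribed format.

```python
from collections import defaultdict

# Hypothetical findings from a round of sessions: (category, severity, note).
# Positive findings are recorded with severity 0 so they stay in the report.
findings = [
    ("functionality", 3, "most users stumbled on the payment task"),
    ("editorial", 2, "the term 'remittance' confused participants"),
    ("design", 1, "the icon colour was hard to see"),
    ("editorial", 0, "task instructions were clear (positive)"),
]

# Group findings by category
by_category = defaultdict(list)
for category, severity, note in findings:
    by_category[category].append((severity, note))

# Within each category, list the most severe issues first
for category, items in by_category.items():
    for severity, note in sorted(items, reverse=True):
        print(f"{category}: [{severity}] {note}")
```

Even a lightweight structure like this makes it easy to see at a glance where the most critical issues cluster.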