A how-to guide for inclusive user testing

Creating inclusive digital products isn’t just about checking accessibility boxes; it’s about understanding the full spectrum of human experience.

Inclusive user testing is how we bridge that gap between intention and impact, building experiences that truly work for everyone.

This how-to guide brings together practical advice, proven methodologies, and a human-centered mindset to help you plan, conduct, and reflect on inclusive user research that makes a difference.

[Image: A blind woman smiling while using a smartphone, seated in a brightly lit library or classroom with bookshelves in the background.]

What is inclusive user testing?

Inclusive user testing involves observing real people, especially those with different access needs, using your website, app, or service. It goes beyond generic usability testing to surface the specific barriers faced by users who rely on assistive technologies or experience digital spaces differently.

Involving a diverse group of users ensures that:

  • You discover issues that both automated and expert manual testing will miss
  • You uncover friction in real-world use cases
  • You gather empathy-driven insights to improve your design
  • You co-create better digital experiences for all

Step 1: Build inclusion into your research plan

Start by asking: who isn’t in the room?

Design your research to proactively include diverse participants:

  • People with permanent, temporary, or situational disabilities
  • Neurodivergent users
  • Older people
  • People from different cultural and language backgrounds
  • Users across socioeconomic and educational backgrounds

Create multiple ways for people to participate in your research, from user interviews conducted remotely or in person, to usability tests with assistive technology, to surveys. Use accessible platforms and clear, jargon-free language in all your communication and session content.

Step 2: Recruit diverse participants

Representation isn’t optional; it’s foundational. Reach out to disability advocacy organisations, inclusive recruitment agencies, or local community networks to source participants with a range of access needs. Aim to test with:

Assistive technology users:

  • Blind screen reader users across devices and operating systems (e.g. VoiceOver on iOS, NVDA on Windows)
  • People with low vision who rely on screen magnifiers or browser zoom
  • Motor-impaired participants who use tools like switch devices, voice commands, or head tracking

Non-assistive technology users with access needs:

  • A deaf or hard of hearing participant
  • An older adult (e.g. over 75) who may have limited tech confidence
  • Someone with dyslexia
  • A participant with ADHD or a learning disability

Use inclusive screener questions to identify preferred tech setups, assistive tools, and accommodations. Make sure your sample includes a mix of experience levels with digital products and assistive technologies.

Rather than testing the same setup with every user, match assistive technologies to the platforms you’re launching on, and be ready to explain how each participant group contributes to your testing goals. A thoughtful approach ensures meaningful representation without inflating scope.

At Arc Inclusion, we’ve found that blending disabled and non-disabled participants in the same research rounds can reduce costs and accelerate learning.

Step 3: Prepare an accessible setup

Create a remote or in-person environment that supports inclusion:

  • Confirm compatibility with assistive technology
  • Offer alternative formats and languages
  • Provide a clear agenda and test flow in advance
  • Use accessible platforms (Zoom with captions, for example)
  • Offer flexible scheduling across time zones

Bonus: Include a visual self-description in your introduction to create comfort and inclusion.

Step 4: Facilitate with empathy and precision

Moderating inclusive sessions means creating a safe, respectful environment:

  • Be patient: allow time for navigation and response
  • Stay neutral and curious
  • Don’t assume: ask users to show and describe their experience
  • Use clear, simple instructions
  • Let participants guide the pace
  • Don’t be afraid to go back and repeat certain parts of the user journey

Listen deeply. Validate your participants’ input. And let silence work its magic: powerful insights often come after “the end.”

Step 5: Use multiple methods for deeper insights

Combine:

  • Interviews to explore participants’ lived experiences, motivations, and barriers in detail.
  • Usability testing to observe users completing typical tasks on your digital product, identifying usability issues and accessibility blockers.
  • Session recordings to analyse real user sessions for patterns, hesitations, intentions, and drop-offs without direct moderation.
  • Focus groups to bring together small groups of users, spark discussion, gather diverse perspectives, and explore shared challenges.

Each layer enriches your understanding of the user experience.

Step 6: Watch, listen, and learn from assistive tech users

Nothing replaces firsthand observation. During testing:

  • Observe how screen readers read out content
  • Track how a screen magnifier user scans the page
  • Watch how speech recognition users navigate forms
  • Note where frustration or delight emerges
  • Track technically compliant elements that produce a poor accessibility UX

Step 7: Reflect and improve the process

At the end of each session:

  • Ask what worked and what didn’t
  • Invite feedback on the testing experience itself
  • Capture follow-up questions and opportunities to go deeper
  • Document accommodation requests and adjust your process

Inclusive research is iterative. Every session helps you do better next time.

Best practices to embed in your team

  • Make inclusive testing a default, not a one-off
  • Budget and plan for accessibility from the start
  • Coach your team to design and moderate inclusively
  • Prioritise user stories over compliance metrics
  • Treat accessibility testing as ongoing maintenance, not a launch checklist

Why inclusive user testing pays off

Beyond ethics and compliance, the ROI of inclusive user testing is undeniable:

  • Better design: Address usability for all, not just some
  • Customer loyalty: Users feel valued when you listen and respond
  • Wider reach: Better accessibility means your service is more usable for a greater number of people
  • Fewer bugs: Real-world testing catches edge cases missed in QA
  • Culture change: Empathy becomes embedded in how your team works

Final Thought

Inclusive user testing is one of the most powerful tools we have to create digital experiences that truly work for everyone. It takes effort, curiosity, and humility, but the result is more than just accessible interfaces.

Start where you are. Learn as you go. And let real users lead the way.


FAQs

What is the difference between user testing and usability testing?

‘User testing’ is a broad term for observing how real users interact with a product to understand their overall experience and identify any issues they encounter. It is the more holistic of the two, focusing on broad questions about user experience and satisfaction.

The main goals of user testing are identifying user needs, evaluating the experience, uncovering pain points, and gathering user feedback.

Usability testing, on the other hand, is a subset of user testing that focuses specifically on how easily and effectively users can complete tasks. This method targets the product’s design, navigation, and functionality, highlighting issues that hinder users’ ability to use the product efficiently.

What makes user testing inclusive?

User testing becomes inclusive when it intentionally involves participants with a wide range of abilities, backgrounds, and lived experiences, rather than relying on a uniform group of participants.

The aim is to uncover the barriers real users face and design products that work for everyone.

What are the main challenges of inclusive user testing?

The main challenges of inclusive user testing are often related to planning and logistics. Recruiting participants with diverse needs can take extra time and requires extra care, and setting up accessible platforms or environments demands additional preparation.

Sessions may also take longer, as facilitators need to allow flexibility, provide clear instructions, and accommodate different ways of interacting with technology.

Despite these challenges, the insights gained are far richer than those from testing with a narrow group, helping teams build usable and effective products.

What common mistakes should you avoid?

To help your user testing sessions run smoothly, watch out for these common mistakes, which can limit the quality of your insights:

  • Recruiting too narrow a group – relying on internal teams or the same users, rather than a diverse mix.
  • Treating testing as a one-off – testing only occasionally instead of treating it as a continuous, iterative process.
  • Asking leading questions – steering users instead of letting them show how they naturally interact, which biases your results.
  • Focusing only on compliance checks – accessibility goes well beyond compliance; lived experience and real barriers should also be a focal point.
  • Failing to act on feedback – collecting valuable insights but not prioritising or implementing the changes needed.

What is website accessibility monitoring?

Website accessibility monitoring is the process of scanning your website to detect issues that could prevent users with disabilities from using it. Automated web accessibility monitoring tools continuously check for accessibility issues across your site, providing instant alerts for new and updated content as well as an overview of your site’s health.

They track compliance with standards like the Web Content Accessibility Guidelines (WCAG) and show you how accessible your site is, where it should be, and what improvements to make to deliver a better experience for all users.

In addition to measuring your compliance, they also provide a clear picture of your progress over time, so you can track the impact of your improvements and maintain ongoing accessibility.
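Some of these compliance checks are precisely specified. Colour contrast, for example, is defined by a formula in WCAG 2.x: each colour is converted to a relative luminance, and the ratio of the lighter to the darker (each offset by 0.05) must reach at least 4.5:1 for normal body text. As a minimal sketch, here is that calculation in Python:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB colour given as 0-255 channels."""
    def linearise(c):
        c = c / 255
        # Piecewise sRGB linearisation as defined in WCAG 2.x
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours; WCAG AA requires 4.5:1 for body text."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # -> 21.0
```

Automated monitoring tools apply essentially this formula to the text and background colour pairs they find on your rendered pages.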

What are the main types of accessibility monitoring?

The two main types are automated and manual monitoring. Together, they provide a comprehensive view of how accessible your site is and where improvements are needed.

  • Automated monitoring uses specialised web accessibility monitoring tools to scan your website for non-compliant features and common issues, such as missing alt text, poor colour contrast, or keyboard navigability problems. These tools can also provide instant alerts when site elements present accessibility risks, plus site health reports so you can prioritise issues.
  • Manual monitoring is where accessibility experts and testers review your site as a real user would, often using assistive technologies like screen readers. They typically check how easy it is to navigate through pages, interact with content, and understand messages or instructions, with the aim of identifying areas that may present barriers for people with disabilities.
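To make the automated side concrete, here is a toy sketch of one of the simplest checks such tools perform: flagging img elements that have no alt attribute. It uses only Python’s standard html.parser and is an illustration, not a substitute for a real monitoring tool:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute (WCAG 1.1.1, non-text content)."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "img" and "alt" not in {name for name, _ in attrs}:
            self.violations.append(dict(attrs))

page = '<img src="logo.png" alt="Company logo"><img src="hero.jpg">'
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # -> [{'src': 'hero.jpg'}]
```

Production tools run hundreds of such rules against rendered pages on a schedule, which is what makes continuous monitoring at scale practical.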

Why does accessibility monitoring matter?

Accessibility monitoring is crucial for ensuring that everyone can use and experience your site in the same way, regardless of ability. It is also essential for staying compliant with standards like WCAG and with laws like the European Accessibility Act, which applies from 2025.

Without regular monitoring, accessibility issues can easily appear when new pages are added, content is updated, or designs are changed.

Continuous website accessibility monitoring gives you a framework to:

  • Stay compliant
  • Improve user experience
  • Respond to issues quickly
  • Track progress over time

How often should you monitor accessibility?

Accessibility monitoring should be integrated into your process rather than treated as a one-time check. Websites change frequently, with new pages, designs, and content, and each update can introduce accessibility issues.

Continuous monitoring, both manual and through an automated website monitor, is recommended to catch issues as soon as they appear, particularly after big changes such as adding interactive elements or redesigning pages, and whenever legal or accessibility guidelines are updated.

Even without significant changes, monitoring should be a consistent part of your organisation’s website maintenance.

The more you test, the better; for those looking for a concrete cadence, a monthly check is a good starting point for catching emerging issues.
