Application artifact · Maura K. Randall · April 2026

The next chapter of UserTesting’s mission: AI that keeps up with the teams you serve

A history of removing barriers

Recruitment used to take six weeks and three people. Testing required a budget line item you had to fight for. Synthesis meant a researcher manually reviewing hours of footage before anyone else could act. One by one, UserTesting made those barriers disappear — and with each one gone, more teams could say: this is just how we work now.

The risk

AI-augmented teams are now prototyping in hours and shipping in days. The risk isn’t that AI makes user testing obsolete. The risk is that development velocity outpaces the research infrastructure designed to keep customers at the center — and teams start making fast, confident decisions without the human signal that makes those decisions trustworthy.

The opportunity

UserTesting’s next barrier to topple: make user testing scale at the same speed as the teams it serves — so that no matter how fast AI accelerates development, the customer’s voice never gets left behind.

What that looks like in practice

AI-augmented experiences across every stage of the research lifecycle — from assembling the right test to attracting the right participants to getting findings to the right people in the right format. Not AI replacing the research process. AI keeping pace with it, so the human insight loop stays intact no matter how fast the build cycle moves.

Target: Reach the right audience

What exists today

  • Global participant network
  • Demographic and behavioral targeting
  • Screener design tools
  • Fast recruitment at a fraction of historical cost and time

Where AI takes it further

  • Study design generated from a roadmap item or product brief
  • Screener criteria drafted from plain-language input
  • Recruitment messaging written and optimized automatically
  • Weak hypotheses flagged before recruiting begins — not after

Human judgment stays here

  • Is this the right question to test right now?
  • Does this participant profile represent the user we’re actually worried about?
  • AI sharpens the research question. It doesn’t set it.

Gather: Comprehensive testing capabilities

What exists today

  • Unmoderated and moderated testing
  • Live conversation features
  • Video, audio, and session transcription
  • Prototype and live site testing across devices

Where AI takes it further

  • Real-time behavioral tagging as sessions unfold
  • Follow-up questions surfaced to moderators mid-session
  • High-signal moments flagged automatically — nothing gets buried
  • Agentic facilitation for unmoderated tasks, within carefully designed guardrails

Human judgment stays here

  • This stage requires the most careful AI governance
  • An AI that cuts off a productive tangent misses the insight entirely
  • Expand AI’s role here deliberately, with continuous validation of what it gets wrong

Analyze: Identify insights and measure performance

What exists today

  • AI Insight Summary — LLM synthesis with video citations
  • Insights Discovery — natural language queries across studies
  • Sentiment analysis and Smart Tags
  • Path Flows behavioral analysis
  • “Trust, but click” — every AI insight linked to source

Where AI takes it further

  • Domain-tuned models that learn your team’s language and priorities
  • Automatic hypothesis verification — did this test answer the question we asked?
  • Gap detection — what did we not learn that we needed to?
  • Cross-study pattern surfacing across an org’s full research history

Human judgment stays here

  • “Trust, but click” extends beyond verifying summaries
  • What the data means for this product, this team, this moment requires full context
  • AI surfaces. Humans conclude.

Amplify: Share and scale insights across your organization

What exists today

  • Insights Hub for centralizing and sharing research
  • Integrations with Figma, Slack, Jira
  • Research democratization — PMs and designers running their own studies
  • Cross-team visibility into findings

Where AI takes it further

  • Audience-tailored packages — engineer’s, executive’s, and marketer’s versions of the same test, generated automatically
  • Insights flowing directly into sprint planning and roadmap tools
  • Closing the loop — findings mapped back to the original roadmap item, changes flagged automatically

Human judgment stays here

  • Insight without advocacy is just data
  • Someone has to walk the finding into the room and make the case for why it matters now
  • AI packages the evidence. It can’t replace the person who champions it.

The principle running through all four stages

UserTesting’s “trust, but click” philosophy — every AI insight linked to its source — is the right instinct applied to one stage. The Head of Internal AI role is about extending that principle across the entire lifecycle: AI keeps pace with the development velocity of the teams UserTesting serves, and humans stay in the decision seat at every stage that matters. Not as a constraint on AI. As the thing that makes the insights worth trusting.

A note on this artifact

Built from two years of daily human-AI practice, direct experience using UserTesting at Atlassian, and careful study of what UserTesting has built and where they’re going. Not a roadmap proposal — a point of view worth pressure-testing with people who know the platform from the inside. The best version gets smarter in that conversation.

Maura K. Randall

maurakrandall@gmail.com · Austin, TX