User Testing Guide
🧪 How to Run Effective User Tests
This approach is fast, reliable, and replaces days of manual setup and analysis with AI.
Context
We used to run user testing following the process from Google Ventures’ Design Sprint. At my last business, a design studio called Deep Work that worked with early-stage startups, we kept the heart—align, prototype, observe—and stripped it down to the essentials, improving the process after each client.
- Align. Gather the team, agree on one sharp question to answer.
- Prototype. Build the quickest workable version—a clickable app, a paper sketch, or a staged space.
- Observe. Test one-on-one, watch the user work, listen for friction.
Most sessions happen over Zoom with a digital interface on-screen, but the framework is elastic. Swap the screen for a counter and the bakery itself becomes the prototype: walk, talk, and note what the customer does next. Same rhythm, same insights—just a different stage.
I created this guide as a short overview of how to run effective user tests and use AI to speed things up as much as possible.
Step 1: Define Your Hypotheses
Start with 1–3 clear hypotheses about what you expect or want to learn. For example:
- “Users don’t notice the call-to-action.”
- “People understand the pricing model.”
- “The onboarding flow is too long.”
Good hypotheses are focused, testable, and tied to user behaviour—not your opinion.
Step 2: Find the Right Testers
Use your customer base or network to recruit 5–6 users who match your ideal user profile.
Research shows this number reveals about 85% of usability issues without becoming data-heavy.
Keep it lightweight: one-on-one is best. No need for big panels or endless scheduling.
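The ~85% figure comes from a standard usability model (Nielsen and Landauer): the share of problems found by n testers is roughly 1 − (1 − L)^n, where L ≈ 0.31 is the average probability that a single tester surfaces a given issue. A minimal sketch:

```python
def problems_found(n_testers: int, l: float = 0.31) -> float:
    """Estimated share of usability issues found by n testers.

    Nielsen-Landauer model: 1 - (1 - L)^n, where L is the average
    probability that one tester hits a given issue.
    """
    return 1 - (1 - l) ** n_testers

for n in (1, 3, 5, 6):
    print(f"{n} testers -> {problems_found(n):.0%} of issues")
```

Running this shows the curve flattening quickly past five testers, which is why larger panels add little.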
Step 3: Write a Script
Your goal isn’t to lead users—it’s to observe them.
Create a loose script with:
- A short intro (who you are, why they’re here)
- Tasks to complete (e.g. “Find a plan you’d buy.”)
- Open-ended prompts (e.g. “What would you do next?”)
Avoid yes/no questions. You're listening for confusion, hesitation, and unexpected paths.
Note: you can quickly generate a script by putting your hypotheses and details of your prototype into ChatGPT.
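As a sketch of that note, here is a helper that just assembles the prompt text to paste into ChatGPT. The function name and template wording are illustrative, not a fixed recipe:

```python
def build_script_prompt(hypotheses: list[str], prototype_description: str) -> str:
    """Assemble a ChatGPT prompt that asks for a user-testing script.

    Only builds the text; paste the result into ChatGPT (or send it
    via whatever API client you use).
    """
    bullet_list = "\n".join(f"- {h}" for h in hypotheses)
    return (
        "I'm running moderated user tests on this prototype:\n"
        f"{prototype_description}\n\n"
        "Hypotheses to test:\n"
        f"{bullet_list}\n\n"
        "Write a loose test script with: a short intro, 3-5 tasks, "
        "and open-ended prompts. Avoid yes/no questions."
    )

prompt = build_script_prompt(
    ["Users don't notice the call-to-action."],
    "A clickable Figma mock of our pricing page.",
)
print(prompt)
```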
Step 4: Run the Tests
Do each test live over Zoom or in person.
While the user talks:
- Take simple notes (what they do, where they pause, what they say).
- Always record the session. AI tools like Otter.ai can transcribe automatically.
It’s less about what they say they’ll do—watch what they actually do.
Step 5: AI-Generated Report
After all sessions:
- Drop your raw notes and original hypotheses into ChatGPT.
- Prompt it:
“Here are 5 user testing transcripts and 3 hypotheses I wanted to test. Write a summary report with findings, patterns, and suggested actions.”
This reduces subjective bias and turns a day of analysis into 15 minutes.
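One way to mechanise this step, assuming your session transcripts are saved as plain-text files (the directory layout and prompt wording here are illustrative):

```python
from pathlib import Path

def build_report_prompt(transcript_dir: str, hypotheses: list[str]) -> str:
    """Concatenate session transcripts and hypotheses into one report prompt."""
    transcripts = sorted(Path(transcript_dir).glob("*.txt"))
    sessions = "\n\n".join(
        f"--- Session {i}: {p.name} ---\n{p.read_text()}"
        for i, p in enumerate(transcripts, start=1)
    )
    goals = "\n".join(f"{i}. {h}" for i, h in enumerate(hypotheses, start=1))
    return (
        f"Here are {len(transcripts)} user testing transcripts and "
        f"{len(hypotheses)} hypotheses I wanted to test.\n\n"
        f"Hypotheses:\n{goals}\n\n"
        f"Transcripts:\n{sessions}\n\n"
        "Write a summary report with findings, patterns, and suggested actions."
    )
```

Paste the returned string into ChatGPT as-is; keeping the hypotheses in the same prompt is what lets the model tie findings back to what you set out to learn.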
Step 6: Go Deeper with AI
Use follow-up prompts in ChatGPT like:
- “Which hypothesis was most strongly validated?”
- “What are recurring usability issues?”
- “Summarise what confused users and how we might fix it.”
It’s a conversation. Think of GPT as your co-pilot, not a report generator.
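The conversational loop can be pictured as one growing message history. A sketch assuming the common chat-completion message format; the model call itself is omitted and the report content is a placeholder:

```python
# Keep one message list per analysis session so each follow-up prompt
# is answered in the context of the transcripts and report before it.
history = [
    {"role": "user", "content": "Here are 5 transcripts and 3 hypotheses... "
                                "Write a summary report."},
    {"role": "assistant", "content": "<generated report>"},
]

def ask_followup(history: list[dict], question: str) -> list[dict]:
    """Append a follow-up question; send the full history to your chat model."""
    history.append({"role": "user", "content": question})
    return history

for q in [
    "Which hypothesis was most strongly validated?",
    "What are recurring usability issues?",
]:
    ask_followup(history, q)
```

Starting a fresh chat for each question would throw away the shared context, which is exactly what makes the follow-ups useful.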
Step 7: Decide on Actions
Review the report with your team. You can do this as a quick async review or in a short workshop.
Pick 1–3 changes you’ll make based on what you learned. Then test again.
Why This Works
This process blends what we did at Deep Work and Google Ventures sprints with the speed of AI.
📌 Less subjective
📌 Faster turnaround
📌 Action-oriented insights
📌 Works well with lean teams