Exploring Human–AI Interaction @ HP.com (2025)
Evaluating the Hewlett-Packard (HP) Virtual Assistant (VA) Customer Experience.
How well do AI-powered customer service tools meet real user expectations?
This case study evaluates a real-time interaction with HP’s Virtual Assistant—focusing on clarity, accuracy (both perceived and observed), and overall usability in a live support scenario.
Project Purpose:
Evaluated a real-time AI support interaction to assess clarity, effort, accuracy, and trust.
Applied prompt engineering as a UX lens—examining how phrasing shaped the experience.
Used findings to strengthen my research process and inform future human–AI design decisions.
UX Questions Explored:
Is the VA findable, usable, and efficient?
Does it return accurate, context-aware answers?
How much effort does the user need to reach their goal?
Does the interaction build trust — or lead to frustration?
Observed Task:
Find accurate memory (RAM) specs for a specific HP product using the product ID.
Participant Snapshot
Age range: 55–65
Moderate tech literacy.
Limited prior experience with AI chat tools.
Task Flow
Found the HP Virtual Assistant on the support homepage.
Entered a question using the product ID.
Reviewed the responses and checked whether the information made sense.
Reworded the prompt multiple times to try to get a better answer.
Repeated this process, or considered giving up if the results stayed unclear.
Methods & Data
Methods Used:
Observed a real-time user task with no script or setup
Asked reflection questions after the task
Reviewed the transcript to gather missed details
Data Collected:
Quantitative: Task time, number of prompts, accuracy of answers (see the logging sketch below)
Qualitative: Usability issues, signs of frustration, emotional tone
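To make the quantitative measures concrete, here is a minimal sketch of how a session like this could be logged and summarized. It is illustrative only: the PromptAttempt and SessionLog structures, their field names, and the record() and summary() helpers are hypothetical and are not part of HP's tooling or the actual study instrument.

```python
# Minimal, hypothetical sketch of logging the quantitative session metrics
# (task time, number of prompts, answer accuracy). Not HP code; illustration only.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class PromptAttempt:
    text: str                  # what the participant typed into the VA
    answered_accurately: bool  # did the reply match the product's real RAM spec?
    timestamp: datetime = field(default_factory=datetime.now)


@dataclass
class SessionLog:
    started_at: datetime = field(default_factory=datetime.now)
    attempts: list[PromptAttempt] = field(default_factory=list)

    def record(self, text: str, accurate: bool) -> None:
        """Append one prompt/response pair to the log."""
        self.attempts.append(PromptAttempt(text, accurate))

    def summary(self) -> dict:
        """Return the metrics noted above: task time, number of prompts,
        and whether any response was accurate."""
        elapsed = (datetime.now() - self.started_at).total_seconds()
        return {
            "task_time_seconds": round(elapsed, 1),
            "prompt_count": len(self.attempts),
            "any_accurate_answer": any(a.answered_accurately for a in self.attempts),
        }
```

In practice, a researcher could call record() after each prompt and summary() at the end of the task to capture the same numbers that were gathered by hand in this study.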
Closing Thoughts
Even one session can reveal where things break down.
This task showed how confusing prompts, unclear system responses, and repeated rewording can cause frustration and erode trust in both the tool and the brand.
It also highlighted how much impact prompt clarity, feedback, and guidance have on user effort—especially in AI support tools.
Limitations
Single-user session, so the findings are not broadly generalizable.
Used a live system with no ability to adjust the design or flow.
Insights were based on user-facing behavior; the assistant’s logic wasn’t visible.
Next Steps
Revisit the research question with clearer focus and more data-backed framing.
Review literature on conversational UX to strengthen the next phase and support deeper insight.
Test refined design ideas with more participants to gather broader usability feedback and improve generalizability.
Share insights with HP or similar teams as a case for improving AI-driven support.