Exploring Human–AI Interaction @ HP.com (2025)

Evaluating the Hewlett-Packard (HP) Virtual Assistant (VA) Customer Experience.

How well do AI-powered customer service tools meet real user expectations?
This case study evaluates a real-time interaction with HP’s Virtual Assistant, focusing on accuracy (both perceived and observed), usability, and other user considerations in a live support scenario.

Study Goals:

    • Explore human–AI interaction:
      Use a real-time AI support case to examine early UX questions around system guidance, support quality, and user expectations.

    • Spot UX breakdowns:
      Investigate how clarity, task flow, and user confidence were affected during a guided task.

    • Analyze prompt behavior:
      Study how phrasing and system feedback influenced trust, decision-making, and user interpretation.

Research Questions:

    • Is the HP VA findable, usable, and efficient?

    • Does it return accurate, context-aware answers?

    • How much effort does the user need to reach their goal?

    • Does the interaction build trust, or lead to frustration?

Study Details:

    • Age range: 55–65.

    • Moderate tech literacy.

    • Limited prior experience with AI chat tools.

    • Task: Find memory (RAM) specs for a specific HP product.

Observed Task:

    1. Found the HP Virtual Assistant on the support homepage.

    2. Entered a question using the product ID.

    3. Reviewed responses and checked if the info made sense.

    4. Reworded the prompt multiple times to try getting a better answer.

    5. Repeated this process, or considered giving up.

Methods:

  • Observed a real-time user task with no script or setup.

  • Asked reflection questions after the task.

  • Reviewed the transcript afterward to capture details missed during live observation.

Data:

  • Quantitative: Task time, number of prompts, accuracy of answers (see the tabulation sketch after this list).

  • Qualitative: Usability issues, signs of frustration, emotional tone.
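To make the quantitative measures concrete, here is a minimal sketch, in Python and purely illustrative, of how task time, prompt count, and answer accuracy could be tabulated from a timestamped session transcript. The TranscriptEntry structure, field names, and the example session are assumptions for illustration, not part of HP's tooling or this study's actual instrumentation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TranscriptEntry:
    """One turn in the chat transcript (hypothetical structure, not HP's format)."""
    timestamp: datetime            # when the message was sent
    speaker: str                   # "user" or "assistant"
    text: str                      # message content
    judged_accurate: bool = False  # manual rating applied to assistant replies

def session_metrics(entries: list[TranscriptEntry]) -> dict:
    """Derive task time, prompt count, and answer accuracy from one session."""
    user_turns = [e for e in entries if e.speaker == "user"]
    assistant_turns = [e for e in entries if e.speaker == "assistant"]
    # Task time: elapsed seconds from the first to the last transcript entry.
    task_seconds = (entries[-1].timestamp - entries[0].timestamp).total_seconds()
    accurate = sum(e.judged_accurate for e in assistant_turns)
    return {
        "task_time_s": task_seconds,
        "prompt_count": len(user_turns),
        "accurate_answers": accurate,
        "accuracy_rate": accurate / len(assistant_turns) if assistant_turns else 0.0,
    }

# Example: a short hypothetical session with one reworded prompt.
session = [
    TranscriptEntry(datetime(2025, 1, 15, 10, 0, 0), "user", "What RAM does my product support?"),
    TranscriptEntry(datetime(2025, 1, 15, 10, 1, 30), "assistant", "Generic reply", judged_accurate=False),
    TranscriptEntry(datetime(2025, 1, 15, 10, 3, 0), "user", "Reworded question with product ID"),
]
print(session_metrics(session))  # task_time_s: 180.0, prompt_count: 2, ...
```

In practice the accuracy flag would come from manually checking each assistant reply against the product's published specifications; the qualitative observations (usability issues, signs of frustration, emotional tone) were recorded separately during the session.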

Closing Thoughts:

Even one session can reveal where things break down.

This study showed how confusing prompts, unclear system responses, and repeated rewording can lead to frustration and lower trust, both in the tool and the brand.

It also highlighted how much impact prompt clarity, feedback, and guidance have on user effort, especially in AI support tools.

Next Steps:

    • Revisit the research question with clearer focus and more data-backed framing.

    • Review literature on conversational UX to strengthen the next phase and support deeper insight.

    • Test refined design ideas with more participants to increase usability feedback and generalizability.

    • Share insights with HP or similar teams as a case for improving AI-driven support.

Limitations:

    • Single-user session; findings aren't broadly generalizable.

    • Used a live system with no ability to adjust the design or flow.

    • Insights were based on user-facing behavior; the assistant’s logic wasn’t visible.