Exploring Human–AI Interaction @ HP.com (2025)

Evaluating the Hewlett-Packard (HP) Virtual Assistant (VA) Customer Experience.

How well do AI-powered customer service tools meet real user expectations?
This case study evaluates a real-time interaction with HP’s Virtual Assistant, focusing on accuracy (both perceived and observed), usability, and other user considerations in a live support scenario.

    • Explore human–AI interaction:
      Use a real-time AI support case to examine early UX questions around system guidance, support quality, and user expectations.

    • Spot UX breakdowns:
Investigate how clarity, task flow, and user confidence were affected during an unguided task.

    • Analyze prompt behavior:
      Study how phrasing and system feedback influenced trust, decision-making, and user interpretation.

Key Questions:

    • Is the HP VA findable, usable, and efficient?

    • Does it return accurate, context-aware answers?

    • How much effort does the user need to reach their goal?

    • Does the interaction build trust, or lead to frustration?

Study Details:

    • Participant age range: 55–65.

    • Moderate tech literacy.

    • Limited prior experience with AI chat tools.

    • Task: Find memory (RAM) specs for a specific HP product.

Observed Task:

    1. Found the HP Virtual Assistant on the support homepage.

    2. Entered a question using the product ID.

    3. Reviewed responses and checked if the info made sense.

    4. Reworded the prompt multiple times to try getting a better answer.

5. Repeated this process, at times considering giving up.

Methods:

    • Observed a real-time user task with no script or setup.

    • Asked reflective, open-ended questions post-task to clarify observed behavior.

    • Reviewed and transcribed field notes to extract reference points, refine variable definitions, and outline future research opportunities.

Data Collected:

    • Quantitative: Task time, number of prompts, accuracy of answers.

    • Qualitative: Usability issues, signs of frustration, emotional tone.
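
To illustrate how these measures could be organized for comparison across future sessions, below is a minimal sketch in Python; the class name, field names, and sample values are hypothetical and not drawn from the actual session data.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VASession:
        """One observed Virtual Assistant support session (hypothetical structure)."""
        task: str                     # e.g., "Find RAM specs for a specific HP product"
        task_time_minutes: float      # elapsed time from first prompt to task end
        prompts: List[str] = field(default_factory=list)    # each prompt the user typed
        accurate_answers: int = 0     # responses judged accurate by the observer
        usability_issues: List[str] = field(default_factory=list)
        frustration_signs: List[str] = field(default_factory=list)

        @property
        def prompt_count(self) -> int:
            return len(self.prompts)

        @property
        def accuracy_rate(self) -> float:
            # Share of prompts that produced an accurate answer (0 if none logged).
            return self.accurate_answers / len(self.prompts) if self.prompts else 0.0

    # Hypothetical record; the values shown are placeholders, not the observed results.
    session = VASession(
        task="Find memory (RAM) specs for a specific HP product",
        task_time_minutes=15.0,
        prompts=["RAM specs for <product ID>?", "How much memory does <product ID> support?"],
        accurate_answers=1,
        usability_issues=["Unclear system responses"],
        frustration_signs=["Repeated prompt rewording"],
    )
    print(session.prompt_count, round(session.accuracy_rate, 2))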

Closing Thoughts:

Even one session can reveal where things break down.

This study demonstrated how confusing prompts, unclear system responses, and repeated rewording can lead to frustration and lower trust in both the tool and the brand.

It also highlighted the significant impact that prompt clarity, feedback, and guidance have on user effort, especially in AI support tools.

Next Steps:

    • Review literature on conversational AI, UX, and technology to strengthen the next phase.

    • Refine the research question with sharper focus, updated inquiry, and data-backed framing.

    • Re-test the design using defined research methods to address validity and scope.

    • Present the UX case study to HP Design or similar teams to support and guide user-centered strategies in AI-driven support systems.

Limitations:

    • Single-user session; findings aren't broadly generalizable.

    • Used a live system with no ability to adjust the design or information flow.

    • Insights were based on user-facing behavior; the VA’s logic wasn’t visible.