Zendesk

Introducing automation to customer support

Support automation

Trust and reliability

B2B2C experience

10–12%

Ticket deflection in early deployments

4,500+

Tickets resolved automatically

12+ min

Estimated agent time saved per ticket

Overview

As Zendesk customers scaled, support teams faced rising volumes of repetitive, low-complexity tickets, despite having Help Centre content in place. On mobile, users frequently escalated to human support before finding relevant answers, increasing operational cost and response times.

Answer Bot was introduced to automatically surface relevant Help Centre articles using non-generative machine learning. Mobile was a high-risk, high-impact surface for this work: incorrect or overconfident automation could quickly erode trust, but even modest deflection could materially reduce support load.

I led design for Answer Bot across Zendesk’s mobile SDKs, shaping how early ML-driven automation behaved, failed, and handed off to human support inside third-party consumer apps.

Product

Zendesk mobile SDKs

Role

Lead Product Designer

Platform

iOS, Android

Team

Product manager, mobile engineers, web-widget designers (Melbourne), researcher (Singapore)

Business context

Zendesk provides customer support software used by businesses to manage conversations across email, chat, messaging, and in-app support.

Although Zendesk sold to businesses, the mobile SDK experience was used by end consumers inside third-party apps, making this a B2B2C problem. The experience needed enterprise-grade reliability while operating in consumer-grade mobile environments with low tolerance for friction or error.

Problem

Rising ticket volume created compounding risks:

  • Slower response times for complex issues

  • Increased operational cost for support teams

  • Poor consumer experiences for users seeking fast, in-context answers

The challenge was not maximising automation, but introducing it safely, without damaging trust or blocking access to human support.

Constraints

Several constraints shaped how Answer Bot could operate on mobile. The underlying system relied on non-generative machine learning, meaning it could only retrieve and rank existing Help Centre articles rather than generate answers or resolve ambiguity. At the same time, knowledge base quality varied significantly between customers, as content was customer-managed and often inconsistent.

These limitations were amplified by the mobile context, where small screens, short sessions, and frequent interruptions increased the cost of friction and error. In this environment, user trust was fragile. Overconfident or unclear automation would quickly reduce adoption and push users back to human support.

Given these constraints, the goal was predictable behaviour and safe failure, not conversational depth or technical sophistication.
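
To make "retrieval-only" concrete, the sketch below shows the shape of that contract. The names and signature are illustrative assumptions, not Zendesk's actual SDK API; the point is that the model could only score existing articles, never compose an answer.

// Hypothetical names, not the real SDK API: a retrieval-only model
// can rank existing Help Centre articles against a query, nothing more.
data class ArticleSuggestion(
    val articleId: Long,
    val title: String,
    val confidence: Double // relevance score from the ranking model, 0.0 to 1.0
)

interface ArticleRetriever {
    // Returns existing articles ranked best match first.
    // An empty list means nothing matched; the model never generates answers.
    fun rank(query: String, maxResults: Int = 3): List<ArticleSuggestion>
}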

What shaped the experience

Retrieval-only ML

Inconsistent content

Mobile friction

Trust risk

Role and collaboration

I worked day to day with a local Product Manager and mobile engineering team to design and ship the mobile experience.

In parallel, I partnered with:

  • A web-widget design team in Melbourne, working in tandem to align interaction models

  • Machine learning engineers in Melbourne, translating model confidence into user-facing states

  • A shared researcher based in Singapore, supporting both mobile and web workstreams

Across teams, we jointly explored interactions, aligned on shared language, and ensured Answer Bot behaved consistently across platforms while respecting surface-specific constraints.

Key insight

Early ML systems do not earn trust by sounding human.

Trust comes from predictable behaviour, transparency, and fast recovery, especially in mobile contexts where errors are costly.

This reframed the work from “making the bot smarter” to making its behaviour legible and safe under uncertainty.

Trust in early ML doesn’t come from sounding human. It comes from predictable behaviour and safe recovery.

Design approach

We treated Answer Bot as a structured decision system, taking a logic-first approach that aligned the interface closely with the model’s retrieval capabilities.

Given the limitations of early, non-generative ML and the cost of error in mobile contexts, the goal was not to simulate human dialogue, but to guide users through clear, bounded decisions with predictable outcomes.

This framing allowed us to translate abstract trust requirements into concrete design principles and enforce them consistently across interactions.
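
As a rough illustration of what "bounded decisions" meant, the sketch below maps ranked suggestions to one of three predictable outcomes, reusing the hypothetical ArticleSuggestion type from the constraints section. The threshold values are placeholders, not the tuned production values.

// Illustrative thresholds only; the shipped values were tuned, not these.
sealed class BotDecision {
    data class ShowArticle(val best: ArticleSuggestion) : BotDecision()
    data class AskToClarify(val options: List<ArticleSuggestion>) : BotDecision()
    object AdmitUnrecognised : BotDecision()
}

fun decide(ranked: List<ArticleSuggestion>): BotDecision {
    val best = ranked.firstOrNull()
    return when {
        best == null -> BotDecision.AdmitUnrecognised               // nothing matched: say so plainly
        best.confidence >= 0.75 -> BotDecision.ShowArticle(best)    // confident match: surface inline
        best.confidence >= 0.40 -> BotDecision.AskToClarify(ranked) // uncertain: ask, never best-guess
        else -> BotDecision.AdmitUnrecognised                       // below the floor: admit the limit
    }
}

Escalation deliberately sits outside this decision function: it was always available in the interface, never gated behind a confidence score.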

Principles

Predictable behaviour

Explicit uncertainty

Reliability

Decisions

Short, task-focused interactions

Visible confidence thresholds

Always-available escalation paths

Core support journeys

The mobile experience was designed around explicit system states rather than open-ended conversation. Each journey balanced automation with clear recovery paths, ensuring users could move forward without friction or loss of context.

Article-based resolution

When a user’s query could be confidently matched, a relevant Help Centre article was surfaced inline. The presentation prioritised readability and actionability to support fast comprehension on small screens.

Each successful resolution ended with a clear completion state, allowing users to confirm the outcome, provide lightweight feedback, and deliberately exit the flow.

Guided clarification

When confidence was insufficient for a direct match, the system avoided presenting a best-guess result. Instead, users were guided to refine their input or choose from clearer options, keeping uncertainty visible without blocking progress.

Escalation to human support

When automation was not appropriate, users could escalate without re-entering information. Escalation respected each customer’s configured support channels, routing users to chat, talk, or ticket creation as appropriate.
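
As a small sketch of that hand-off, the routing below picks the first available configured channel and falls back to ticket creation. The channel names and priority order are assumptions for illustration, not the SDK's actual configuration model.

// Channel names and priority order are assumptions for illustration.
enum class SupportChannel { CHAT, TALK, TICKET }

// Pick the first channel the customer has configured, falling back to
// ticket creation. Conversation context travels with the hand-off so
// the user never re-enters information.
fun escalationChannel(configured: Set<SupportChannel>): SupportChannel = when {
    SupportChannel.CHAT in configured -> SupportChannel.CHAT
    SupportChannel.TALK in configured -> SupportChannel.TALK
    else -> SupportChannel.TICKET
}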

Unrecognised input

If a query could not be interpreted, the system explicitly communicated that limitation and prompted the user to rephrase, rather than returning irrelevant content.

Outcomes

Answer Bot shipped to Early Access Programme customers, where the focus was validating behaviour, trust, and failure handling rather than optimising metrics.

After establishing the core interaction model and platform patterns, I transitioned to a separate project. Subsequent rollout and metric tracking were led by partner teams, building on the foundations established during EAP.

Later deployments reported:

  • 10–12% ticket deflection

  • Some customers deflecting 1,600+ tickets in 30 days

  • An estimated 12+ minutes of agent time saved per automated resolution

What this enabled

This project established foundational patterns for how automation should behave in high-risk, customer-facing contexts:

  • Making system uncertainty visible

  • Preserving access to human support

  • Preventing early trust erosion

These patterns informed later automation and AI initiatives across Zendesk products.

Reflection

This work reinforced that introducing automation is primarily a trust and risk problem, not a capability problem.

By constraining scope, making uncertainty explicit, and prioritising safe recovery, we delivered meaningful value while reducing organisational risk, particularly in mobile environments where failure is amplified.

Open to new roles

Open to full-time or contract roles on thoughtful product work.

Let's chat

© 2026 Reiner Wendland