Zendesk
Making customer support reliable at mid-market scale
Overview
As Zendesk expanded from SMB customers into the mid-market, mobile needed to handle real, in-flight support work, not just quick check-ins. While the Zendesk Support mobile apps were widely used, confidence in key workflows was uneven, and it wasn’t clear which improvements would most effectively support mid-market adoption.
As Lead Mobile Product Designer, I partnered closely with Product and Engineering, using Voice of the Customer and usage data to prioritise high-impact work. Mobile search emerged as a clear point of failure. Improving it helped deepen engagement and made mobile feel like a reliable execution surface for agents working under pressure.
Product
Zendesk Support mobile apps
Role
Lead Product Designer
Platform
iOS, Android
Team
Product manager, Technical architect, Mobile engineers, Internal support advocates
Business context
Zendesk provides customer support software that helps teams manage customer conversations across email, chat, messaging, and in-app support.
By 2018, the Support mobile apps had grown to 100k+ monthly active users and were embedded in daily workflows, particularly for SMB teams. As Zendesk invested in mid-market expansion, expectations of mobile changed:
Larger ticket queues
More concurrent, active tickets
Less tolerance for friction or ambiguity
Mobile could no longer be just a companion to desktop. It needed to reliably support progressing and resolving work.
Problem
Several areas of the mobile app were showing strain, but the core risk was loss of trust.
Agents relied on mobile search to:
Find active tickets quickly
Resume ongoing conversations
Confirm status before acting
When search results felt unpredictable, agents abandoned mobile and returned to desktop, undermining mobile’s role in daily workflows.
The challenge wasn’t fixing search in isolation, but identifying which problems most directly blocked mid-market adoption.
Prioritisation
I established and maintained a shared Voice of the Customer system for the mobile apps, partnering with the Product Manager to ensure it informed roadmap decisions.
This consolidated:
App Store and Play Store reviews
In-app feedback via the Support SDK
Help Centre feedback
Input from internal customer support advocates
Alongside VOC, usage data showed that mobile was already supporting meaningful work:
40% of sessions resulted in tickets being solved
42% resulted in tickets being updated
This told us two things: mobile already mattered, and improving confidence in existing workflows could have an outsized impact without needing to change agent behaviour.
Why search?
When we compared Voice of the Customer themes with usage data, mobile search stood out immediately.
Agents relied on search to find active tickets, resume conversations, and double-check status before acting, yet it was one of the most criticised parts of the app.
“With 4000+ results, the feature becomes entirely useless.”
The issue wasn’t discoverability. Agents didn’t trust the results enough to act, which often led to abandoning mobile for desktop.
To clarify search intent, we ran a short survey with 30 support agents. The results showed that most mobile searches were highly intentional, focused on specific tickets and, to a lesser extent, people.
This confirmed that mobile search needed to support fast confirmation and action, not deep exploration.
Chart: what agents most frequently search for on mobile (survey of 30 support agents using the Zendesk mobile apps).
Constraints
Mobile search relied on a shared Search API designed primarily for desktop.
Desktop parity was neither feasible nor desirable. A feature-rich but unreliable experience would have caused more harm than a simpler, predictable one.
The goal was maximum reliability under real-world conditions, not maximum power.
System constraints
API result caps
API payload latency
Network instability
Partial result risk
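To make these constraints concrete, here is a minimal sketch of how a mobile client might interpret a search response defensively. The types and names are hypothetical, not Zendesk’s actual Search API; the point is that capped and partial results are surfaced to the interface explicitly rather than silently rendered as a complete list.

```swift
import Foundation

// Hypothetical model: the shared Search API caps results and can
// return partially loaded pages under poor network conditions.
struct SearchPage {
    let results: [String]      // ticket identifiers, simplified
    let totalMatches: Int      // server-reported total
    let isComplete: Bool       // false if the payload was truncated
}

enum SearchOutcome {
    case complete([String])
    case capped([String], totalMatches: Int)  // more matches exist than the API returns
    case partial([String])                    // payload cut short mid-transfer
    case failed(Error)
}

// Interpret a page defensively so the UI can distinguish
// "this is everything" from "this is all we could get".
func interpret(_ page: SearchPage, resultCap: Int = 100) -> SearchOutcome {
    if !page.isComplete {
        return .partial(page.results)
    }
    if page.totalMatches > resultCap {
        return .capped(page.results, totalMatches: page.totalMatches)
    }
    return .complete(page.results)
}

// Example: a capped response should be surfaced, not hidden.
let page = SearchPage(results: ["#4312", "#4299"], totalMatches: 4000, isComplete: true)
switch interpret(page) {
case .capped(let results, let total):
    print("Showing \(results.count) of \(total) matches")
default:
    break
}
```

Under this framing, the “4000+ results” case becomes something the interface can name honestly, rather than an endless, untrustworthy list.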
Key insight
Agents weren’t using mobile search to explore or browse.
They were using it to quickly confirm what to do next, often under time pressure and in interrupted sessions.
In practice, this meant answering a small set of time-critical questions:
Is this the right ticket?
Is it still active?
Do I need to take action now?
If those questions couldn’t be answered confidently from search results alone, agents abandoned mobile and returned to desktop.
This insight became the north star for the solution, shaping decisions around result ordering, segmentation, and the amount of context shown in each result.
Agents weren’t searching to explore. They were searching to confirm and act.
Turning insight into a direction
To move quickly from insight to a viable solution, we ran a focused 3-day design sprint with Product, Engineering, and internal customer support advocates.
The sprint combined VOC synthesis, usage data, and technical constraints with rapid exploration and high-fidelity prototyping. Rather than exploring many concepts, we used the sprint to pressure-test assumptions about what would most improve trust in mobile search.
We tested whether agents could identify the right ticket without opening it, whether recency-first ordering felt reliable, and which controls reduced cognitive load versus adding confusion.
Usability testing showed that agents were more confident acting directly from search when results prioritised recency, segmentation, and context over configurability. These learnings allowed us to narrow scope and commit to a simpler, more predictable search experience.
Insight
Agents search to confirm and act
Principles
Recency
Context at a glance
Strong defaults
Reliability
Decisions
Segmented results
Recency-first ordering
Focused filters
Solution
We reframed mobile search from a flat list into a reliable triage system:
Segmented results for Tickets, Users, and Organisations
Recency-first ordering with strong defaults
Context-rich results that support action without opening tickets
Focused filters for status, type, tags, assignee, and date
Clear loading, empty, and error states to build trust at system edges
Several ideas were intentionally deprioritised, including saved searches and explicit “order by” controls, to preserve predictability.
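As an illustration of how these decisions might translate into a client-side model, the sketch below uses hypothetical types (not the production codebase) to show segmented results with recency-first ordering and explicit loading, empty, and error states.

```swift
import Foundation

// Hypothetical view state for the redesigned search screen: results are
// segmented by entity type, ordered by recency, and the loading/empty/error
// states are explicit rather than implied by an empty list.
struct TicketResult {
    let id: String
    let subject: String
    let status: String
    let updatedAt: Date
}

enum SearchViewState {
    case loading
    case empty(query: String)
    case error(message: String)
    case loaded(tickets: [TicketResult], users: [String], organisations: [String])
}

// Recency-first ordering as the strong default.
func orderedByRecency(_ tickets: [TicketResult]) -> [TicketResult] {
    tickets.sorted { $0.updatedAt > $1.updatedAt }
}

// Example: the most recently updated ticket surfaces first.
let tickets = [
    TicketResult(id: "#4299", subject: "Refund request", status: "Open",
                 updatedAt: Date(timeIntervalSinceNow: -3_600)),
    TicketResult(id: "#4312", subject: "Login failure", status: "Pending",
                 updatedAt: Date(timeIntervalSinceNow: -300)),
]
let state = SearchViewState.loaded(tickets: orderedByRecency(tickets),
                                   users: [], organisations: [])
if case .loaded(let ordered, _, _) = state {
    print(ordered.first?.id ?? "none")   // "#4312"
}
```

Treating the empty and error cases as first-class states is what makes “trust at system edges” enforceable in the interface, rather than an aspiration.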
Outcome
A vendor change during rollout limited access to some planned metrics. We relied instead on adoption trends, session depth, and qualitative feedback to assess impact.
Following release:
Average session duration increased by 60 seconds
Mobile usage grew by 25k monthly active agents
iOS App Store rating improved, with Android sentiment maintained
Together, these signals showed that mobile had become a more reliable place for agents to progress and resolve work, supporting Zendesk’s broader mid-market strategy.
Reflection
This project reinforced that good product design starts with choosing the right problem.
By combining Voice of the Customer, behavioural data, and technical constraints, we focused effort where it mattered most. Improving mobile search wasn’t about adding power. It was about restoring trust, so agents could rely on mobile when it mattered.