
How PetDesk Used Quality Automations to Create Operational Clarity

With a few targeted quality rules and a fearless testing approach, PetDesk's CS Ops team cleared the noise from Guru and changed hearts and minds about the value of knowledge maintenance.

“Because of the Quality automations, what’s unverified now isn’t just noise anymore; it’s the stuff that actually needs love.”

— Shona Fenner, Director of CX Ops

The Challenge: Years of Growth Left Knowledge Scattered and Distrusted

PetDesk is a growing pet care software company that has expanded in part through acquisitions, bringing together multiple brands, teams, and processes under one roof. As the organization grew, so did their Guru workspace — but not always in the best ways.

Duplicate Processes, Conflicting Cards

As teams merged at PetDesk, documentation multiplied. Simple, foundational processes — like how to manage an escalation or hand off between a CS and sales rep — existed in different forms across different teams, each with slight variations inherited from the company they came from. Shona Fenner, Director of CX Ops, found eight different Guru cards about risk management alone, all reflecting slightly different approaches. Looking at them in isolation, there was no clear answer about which process was actually correct.

“We had eight different Guru cards from different teams about risk management, post-sale. Looking at these in isolation, you wouldn’t know which way to turn.”

— Shona Fenner, Director of CX Ops

A Knowledge Base with Trust Issues

The bigger problem wasn’t just duplicate content — it was the perception of inaccuracy that had settled over Guru as a whole. Cards aged, verifiers got busy, and the general vibe became: this might be outdated. The team wasn’t wrong to feel that way. But they also weren’t always right. Accurate content was sitting next to genuinely stale content, and there was no reliable way to tell the difference.

PetDesk’s Guru verification score had hovered in the 60–70% range for years. The unverified queue was cluttered with cards that had simply timed out of their verification cycle, not cards that were actually wrong. But experts who were already stretched thin didn’t have time to sift through the queue to find the cards that genuinely needed updating. The signal was buried in noise.

Governance Without a Way to Act on It

Shona had already built a governance model and set verification goals before Quality automations were released to Guru. The internal structure was there. What was missing was a way to surface the right things to the right people at the right time — without flooding stakeholders with long lists of cards to review every six months. Getting a collection owner to take documentation seriously is hard when the task feels endless.

The Approach: Turn It On, Monitor Results, Iterate from There

PetDesk’s approach to implementing Guru’s quality automation was deliberate in one key way: they didn’t overthink it.

“If you want to solve problems, you’ve got to experiment and fail fast... rip off the band-aid and get it in there. You’ll spend less time if you just iterate than if you try to drink the ocean and think of every eventuality first.”

— Shona Fenner, Director of CX Ops

1. Start with Defaults, Then Adjust for Reality

Rather than mapping out every possible edge case before going live, Shona turned on Guru’s default quality rules and gave it a day. Within 24 hours it was apparent which rules needed adjusting for their org, because they could see exactly why the automations verified or unverified each source.

“It was very clear after one day of review — here’s the things that are idiosyncrasies to our org. Either they’re over-indexed in the default or under-indexed. And that was very small tweaks because we could see it in the real world.”

— Shona Fenner, Director of CX Ops

Two rule changes had an outsized impact. First, flagging answers with more than two thumbs-down in the past month gave Shona a clear, undeniable signal that a card needed attention — no debate required. Second, excluding admin views from the “viewed frequently” threshold and raising that threshold to 45 views was a quiet fix with a significant effect: cards that only Shona and a few ops teammates were opening had previously appeared active when they weren’t.
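Guru exposes these rules as settings in the product rather than as code, but the logic behind the two tweaks is easy to picture. The sketch below is purely illustrative — the `Card` fields, function names, and the ≥45 comparison are assumptions for this example, not Guru’s actual data model or API:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """Hypothetical card record; Guru's real data model differs."""
    title: str
    thumbs_down_last_30d: int = 0
    # Roles of viewers in the past month, e.g. "admin" or "agent".
    viewer_roles_last_30d: list = field(default_factory=list)

def needs_attention(card: Card) -> bool:
    """Rule 1: flag cards with more than two thumbs-down in the past month."""
    return card.thumbs_down_last_30d > 2

def viewed_frequently(card: Card, threshold: int = 45) -> bool:
    """Rule 2: count only non-admin views toward the 'viewed frequently' bar."""
    non_admin_views = [r for r in card.viewer_roles_last_30d if r != "admin"]
    return len(non_admin_views) >= threshold
```

Under these assumed rules, a card opened 50 times by admins alone would no longer register as “viewed frequently,” which is exactly the false signal PetDesk eliminated.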

2. Use the Product Expert Agent as a Testing Ground

PetDesk started with their Product Expert Knowledge Agent — the one covering their largest product line, with the most documentation built out, and the one getting the most internal flak for “inaccuracy.” It was a useful test case because Shona knew the content well enough to evaluate whether the automations were surfacing real problems or just creating noise.

What it surfaced was specific: a handful of genuinely outdated cards mixed in with a lot of content that was actually fine. That specificity made internal conversations much easier. When PetDesk's Director of Product saw a focused list of what needed attention — not a sprawling review queue — she enthusiastically reorganized her entire collection, merged legacy cards, and assigned clearer ownership going forward.

3. Use Quality Signals to Reopen Conversations

The process of turning on quality automation didn’t just improve cards in isolation — it opened doors. Shona used the insights to go back to collection owners with something concrete: here’s what’s getting flagged, here’s why, here’s what a small fix would do. That framing changed how stakeholders engaged with knowledge management. Instead of a vague ask to “review your content,” it was a specific, solvable problem with a clear return on the time invested.

“With what’s getting surfaced, if I see that little green checkbox, I know I can trust it a bit better now. I don’t feel quite as much like I’m swimming in an ether of mystery where it could be totally accurate, but I just really don’t know.”

— Shona Fenner, Director of CX Ops

With a clear line of sight into how small content and architectural changes could meaningfully serve the org, experts like the Director of Product were eager to take action rather than dreading it. That cognitive shift is a big win for the entire org, which directly benefits from accurate knowledge in Guru.

Content improvements followed naturally, like adding brief descriptions to the top of cards so Knowledge Agents can easily discern each card’s purpose, and making titles more specific. Both ideas came from Shona asking Guru’s Knowledge Agent how to make card content more AI-friendly, so she could help other experts improve their content.

The Results

  • 📈 Verification score rose from 76% to 85% — an easy 9-point increase since implementing quality automation
  • 🔥 Unverified queue now reflects real problems — auto-verified content cleared the noise; what remains unverified is genuinely worth fixing
  • 📋 Eight duplicate risk management cards consolidated into a single, cohesive process across four departments
  • ✅ New stakeholders actively engaging with Guru — leaders who hadn’t mentioned the tool in four years began celebrating wins in Slack
  • 🚀 Quality automation expanded to two Knowledge Agents — with additional agents being piloted across the org

The most meaningful shift wasn’t the score itself — it was what the score unlocked. When stakeholders could see that the unverified queue represented real gaps rather than a slew of cards that had reached the end of their verification intervals, they were willing to take action. Collection owners who previously ignored verification pings started making updates on their own, in some cases editing cards without being asked because the bar felt achievable and the impacts felt tangible.

That change management impact surprised Shona most. For years, getting teams to take ownership of content was a push. Quality automation made it a pull — people could see the problem clearly, understand what fixing it would do, and take a small action with a visible result.

“These features showed enough value to get people who literally hadn’t talked about Guru in years to celebrate success with the tool.”

— Shona Fenner, Director of CX Ops

What’s Next

PetDesk is expanding quality automation beyond its initial Product Expert Agent. New agents are already being piloted by teams across the business, some by people who discovered the capability on their own and started sharing results in Slack before Shona even knew they were running with it. The momentum is organic now.

Shona’s advice for teams sitting on the fence? Stop waiting for the perfect plan. Turn on the defaults, see what surfaces, and make small adjustments from there. The cost of trying is low — everything in Guru can be undone easily in Card Manager or Source Management — and the return on even a few hours of iteration can reshape how your whole team relates to their knowledge base.

“If you’ve got longtime users who need to breathe life back into Guru, Quality automations are a really great way to do it. You just need 2 or 3 engaged stakeholders who are willing to fix the problems that bug them – in ways that are scalable and iterative.”

— Shona Fenner, Director of CX Ops

Published on April 6, 2026
