Meet Guru’s Search Team
We’re always working to enhance and improve our users’ experiences with Guru, from the way knowledge is created in our editor to how it’s shared through Slack, Teams, and beyond. One area that holds a special place in our team’s heart is our search functionality, which is core to how our platform is used to seek and share knowledge. Last November, we shared a glimpse into how we use product data to improve search within Guru. Since then, we haven’t slowed down a bit, making incremental enhancements to our search UI within our web app and browser extension, as well as directly to our algorithm. Today, we’ll dive into a Q&A session with two members of our dedicated search team to better understand how we make sure that search at Guru is always improving.
Thank you for joining us, Nora and Yev! Can you introduce yourselves and tell us a little bit about what you do at Guru?
Nora: Thanks for having us! My name is Nora West, and I’m the Senior Product Manager for the search and authoring teams at Guru.
Yev: Thanks, Sydney. My name is Yev Meyer, and I’m a Staff Data Scientist at Guru.
To kick things off, I want to ask a little bit about our search team (“pod”) here at Guru. A lot of people might not even know that we have an entire team dedicated to the search experience — can you tell us a little bit about the team?
Yev: Our search pod is a cross-functional team that is entirely dedicated to a single task: delivering a seamless search experience for our customers. The search pod brings together designers, front-end developers, back-end engineers, architects, data scientists, machine learning engineers, and product managers to plan and execute a balanced and sound approach to augmenting our search capabilities.
Nora: Yep, exactly. Regardless of our exact titles, we work together as a team to create an amazing search experience focusing on both the external design of search and the internal algorithm function. I help prioritize our work based on the feedback we're seeing, company goals, and relevant market insights.
Yev: I help the team infuse natural language processing (NLP) and machine learning (ML) more generally into all aspects of search. I also help the team figure out our experimentation strategy, which carefully balances customer feedback, search performance metrics and team/technology insights.
Search isn’t something that people give much thought to, but it’s a core functionality of tools like Guru. Can you give us a basic overview of how Guru’s search works?
Yev: Not only is search incredibly important, but even Google considers it an unsolved, incredibly hard problem. While most people don’t give much thought to search in software products (because they are so used to “googling” things), there is a lot that happens behind the scenes. From understanding the search query (e.g., inferring intent, extracting semantic meaning, correcting spelling mistakes, rewriting the query using synonyms or other approaches to better capture intent) to incorporating search context, to retrieving and ranking results, all at scale — it’s a hard and interesting problem. Guru builds on top of groundbreaking work in search by the teams behind the Lucene, Solr, and Elasticsearch open source projects, as well as teams at companies like Lucidworks, Elastic, Google, and AWS, to make sure we surface the most relevant knowledge to our users.
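To make one of those behind-the-scenes steps concrete, here is a minimal sketch of synonym-based query rewriting, one of the techniques Yev mentions. The synonym table and function names are purely illustrative, not Guru's actual implementation.

```python
# Illustrative sketch: expanding a query with synonyms so retrieval
# better captures intent. The synonym table is hypothetical.
SYNONYMS = {
    "pto": ["vacation", "time off"],
    "comp": ["compensation"],
}

def expand_query(query):
    """Return the normalized query terms plus any synonym expansions."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query("PTO policy"))  # ['pto', 'policy', 'vacation', 'time off']
```

A real system would do this (and spell correction, intent inference, and more) inside the search engine's analysis pipeline rather than in application code, but the idea is the same: the query the engine actually runs is richer than the one the user typed.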
What are some indicators you look at to determine how “well” our search is working? How do you identify opportunities to improve and/or enhance search within Guru?
Yev: We look at both qualitative and quantitative indicators. On the quantitative side, we have spent a lot of time building event tracking into the product, so that we can track user-product interaction data. By looking at that interaction data, we can measure quite precisely how well search is performing. Are we returning relevant results? Are users interacting with them? How? In what position do these results appear when users interact with them? Besides recall, mean average precision (MAP), and other metrics typically used to answer these questions, we also look at user frustration. Are people searching for something else without interacting with search results? Are they reformulating their search queries? These are just a few general examples, and each question can be further refined to a particular portion of the product, particular context, integration, etc.
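For readers unfamiliar with mean average precision, here is a short, self-contained sketch of how MAP can be computed from ranked results and known-relevant items. The data and function names are illustrative examples, not Guru's actual metrics pipeline.

```python
# Illustrative sketch: mean average precision (MAP) over a set of queries.
def average_precision(ranked_results, relevant):
    """Average of precision@k taken at each rank k where a relevant item appears."""
    hits = 0
    precisions = []
    for k, doc in enumerate(ranked_results, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: list of (ranked_results, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)

# Toy example: two queries, each with known relevant Cards
queries = [
    (["card_a", "card_b", "card_c"], {"card_a", "card_c"}),  # AP = (1/1 + 2/3) / 2
    (["card_x", "card_y"], {"card_y"}),                      # AP = 1/2
]
print(round(mean_average_precision(queries), 3))  # 0.667
```

MAP rewards putting relevant results near the top, which is why it (alongside recall) is a common way to answer "are we returning relevant results, and in what position?"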
Nora: As Yev stated, data gives us incredible insight into the actions our users are taking, which allows us to measure search performance over time. With these insights, we can optimize for actions we see users continuously taking, and assist where we see poor outcomes. For example, we saw that users’ queries often included words that are in the title of the Card they’re looking for, so we introduced quick title search to help them get to those Cards faster. Right now, we're focusing efforts on improving performance for longer searches. Data also helps us confirm a change before bringing it into the product. With our testing, we can see if proposed algorithm changes will improve results before they are released to customers — so we can be sure that any change we do release improves the search experience.
Yev: On the qualitative side, we constantly examine customer feedback, and talk to customers in real-time when possible to determine what’s working and what’s not.
Nora: Yes, we talk with our users as much as we can — data allows us to infer a great deal, but talking with users helps us understand the motivation behind the actions. This helps us to verify or refute the trends that we're seeing in the data. For example, the Cards users consistently access are often limited to a few Collections and Boards. When we discuss this with users, however, they are usually not aware of the organizational structure of their Guru team. This tells us that additional organizational filters in search could potentially increase confusion, rather than making it easier to find the Card that they were looking for.
It seems like search algorithm changes can impact users’ experiences finding knowledge in Guru. How do you test potential changes to see the impact they’ll have? How do you decide to set them live (or not)?
Yev: Great question! At Guru, we embrace the culture of experimentation, and our incredible search pod has built out a search trial framework that allows us to quickly replay search queries to test many ideas without affecting the live search functionality. Once we analyze data and confirm that the tested hypothesis indeed results in improvement, we then do a limited live test directly in the product for a small subset of teams and users. If that test clears, we then roll out the change to our customers.
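The offline replay step Yev describes can be sketched in a few lines: run the same logged queries through both the current and the candidate ranker, score each with a relevance metric, and only proceed to a live test if the candidate clearly wins. Everything below (the metric, the rankers, the data) is a hypothetical toy example, not Guru's search trial framework.

```python
# Illustrative sketch of an offline query replay: compare a candidate
# ranker against the current one on logged queries before any live test.
def reciprocal_rank(ranked, relevant):
    """1/rank of the first relevant result, or 0 if none is returned."""
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / k
    return 0.0

def replay_trial(logged_queries, current_ranker, candidate_ranker, metric):
    """Score both rankers on the same logged queries; return mean scores."""
    current_scores = [metric(current_ranker(q), rel) for q, rel in logged_queries]
    candidate_scores = [metric(candidate_ranker(q), rel) for q, rel in logged_queries]
    n = len(logged_queries)
    return sum(current_scores) / n, sum(candidate_scores) / n

# Toy data: each logged query has one known-relevant Card
logged = [("onboarding checklist", {"card_1"}), ("expense policy", {"card_2"})]
current = lambda q: ["card_3", "card_1", "card_2"]
candidate = lambda q: ["card_1", "card_2", "card_3"]

base, cand = replay_trial(logged, current, candidate, reciprocal_rank)
print(cand > base)  # True -> candidate is promising; proceed to a limited live test
```

Because the replay runs against logged queries rather than live traffic, many candidate changes can be evaluated quickly and safely, and only the winners ever reach the limited live test described above.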
Thank you both for sharing all of this with us today! Before we go, can you tell us what’s next up for Guru’s search?
Yev: A ton of improvements!
Nora: Yes, lots of improvements ahead. This quarter, we've focused on improving the search experience for longer searches, and throughout the year we'll continue making improvements to the algorithm itself. We also upgraded our systems to increase the speed at which we can test and release changes to our users.
To stay up-to-date with iterative improvements to Guru’s search functionality, subscribe to our blog and keep an eye out for upcoming feature releases.