Demystifying the Security Unknowns of AI Add-ons

Companies shouldn’t have to put their sensitive data at risk to reap AI's benefits. Here’s what Guru is doing to bring you AI innovations while keeping your company data safe.
Security is always top of mind at Guru, especially when it comes to the security of customer data. Thanks to recent advancements in AI, there’s never been a tighter focus on security in the tech world. As we’ve worked to release the steadily growing number of AI features you can use in Guru, we’ve also been hard at work enhancing our security features. Luckily, that’s somewhat simple when you’re already working with a solid foundation.

A foundation of security and protection

Guru adheres to best practices for securing our customers’ data. You can read about our security and compliance practices in full on our security page, but here’s a brief rundown of what you need to know.

Whether it’s business continuity or building access, configuration management or key rotation, we monitor standards and capture artifacts to support our compliance. We routinely review industry standards and privacy regulations and adapt our internal controls as needed. Along with our annual SOC 2 audit to demonstrate our year-round security oversight, we’re always working hard to create a safe and secure environment for your most important data.

Guru, AI, and your company data

As much as we’re focused on security, our belief in innovation is equally strong. We’re constantly seeking ways to integrate new features and capabilities that add value to the product. There’s no denying that AI has advanced to the point where it legitimately complements a number of SaaS products, Guru included. Failure to consider AI and its potential for Guru simply isn’t an option, which is why we’ve worked so hard to safely bring these innovations to our product and customers as soon as possible.

Generative AI may prove to be one of the most transformative technological advancements of our age, but it comes with its own unique concerns. This brings us back to the sanctity of customer data. It’s only natural for customers and companies to be hesitant about sending their sensitive information to new and unknown destinations. We’re hearing the same questions again and again: Where is my information going? How will it be used? Who’s responsible for protecting it? How long will it remain there?

As Guru begins to explore third-party AI services, we do so with clear security tenets that center on three core themes: the platforms we use, the customer data we collect, and the ways we manage data storage and deletion.

Critical ongoing evaluations of third-party AI platforms

To be clear: We will not allow any AI partner to access Guru’s valued data without thoroughly vetting their technology, security compliance, service terms, and privacy practices. We already abide by a clear set of criteria when we onboard third parties, and AI vendors will be no exception. We will review their record of performance and the integrity of their infrastructure, then hold them contractually accountable to privacy and security norms.

This is not a decision made by a single individual, but a collaborative process involving subject matter experts looking through a lens of security. We assess the criticality of the data in question, the health of the company, the integrity of its hosting and tooling, and its documented record of compliance (e.g., external audits and penetration tests). A provider is only allowed to touch Guru production data after a formal endorsement from our CTO, and, as standard practice, that partnership is revalidated at least annually.

Using less data to do more

Data security gets a bit more complicated with AI because of how models are trained. Where traditional vendors simply process or store customer data, AI providers also seek to train their models by incorporating as many data sets as possible.

While model training is a natural and necessary part of an AI service, Guru will limit the release of customer data to only what’s relevant to the task at hand. We will feed the third-party model only what it needs to execute the function and prohibit the use of that data for wide-scale model training. This directly reflects OpenAI’s data usage policy, which says, “By default, OpenAI will not use API data to train OpenAI models or improve OpenAI’s service offering.”

But even before customer data reaches OpenAI, it’s limited to what’s absolutely needed on the Guru end first, thereby keeping the vast majority of information within Guru’s protected hosting environment. In short, we don’t simply throw open the floodgates of customer data for consumption by OpenAI. Rather, we retrieve and reconcile data internally based on the initial customer task, allowing only that information segment to enter the AI stream.
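
To make that pattern concrete, here’s a minimal Python sketch of the flow described above. It is purely illustrative, not Guru’s actual implementation: the in-memory knowledge base, the keyword retrieval, and the model name are stand-in assumptions. The point is that only the excerpt relevant to the task ever leaves the protected environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stand-in for content that stays inside the protected hosting environment.
KNOWLEDGE_BASE = {
    "expense policy": "Meals are reimbursable up to $50 per day with a receipt.",
    "pto policy": "Employees accrue 1.5 PTO days per month.",
}

def find_relevant_excerpts(question: str, limit: int = 3) -> list[str]:
    """Naive keyword retrieval standing in for internal search and reconciliation."""
    q = question.lower()
    hits = [text for topic, text in KNOWLEDGE_BASE.items()
            if any(word in q for word in topic.split())]
    return hits[:limit]

def answer_question(question: str) -> str:
    # Only the excerpts relevant to this specific task are sent to the model.
    context = "\n\n".join(find_relevant_excerpts(question)) or "No relevant content found."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_question("What is the expense policy for meals?"))
```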

Regular data deletion and maintenance

Customer data cannot live on in perpetuity, and we will make rolling data deletion a standard clause in any AI provider relationship or self-hosted solution. We will limit the footprint of customer data in generative AI systems by establishing roll-off rules. The goal is to maintain positive control of all customer data, implementing “time to live” constraints wherever possible.
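
As a rough illustration of what a “time to live” constraint looks like in practice, here’s a small Python sketch. The 30-day window and the record shape are assumptions made for the example, not Guru’s actual retention rules.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # hypothetical roll-off window

def is_expired(created_at: datetime, now: datetime) -> bool:
    """Return True once a stored record has outlived its TTL and should be purged."""
    return now - created_at > RETENTION

# A scheduled cleanup job could then flag and delete anything past its TTL:
now = datetime.now(timezone.utc)
records = [
    {"id": "conv-1", "created_at": now - timedelta(days=45)},
    {"id": "conv-2", "created_at": now - timedelta(days=3)},
]
expired_ids = [record["id"] for record in records if is_expired(record["created_at"], now)]
print(expired_ids)  # ['conv-1']
```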

OpenAI recognizes that many users do not want their histories saved, so they offer the ability to delete past prompts. Again, per their usage terms, “When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse before permanently deleting.”

While AI generally benefits from amassing and aggregating as much data as possible, we feel that where Guru users are concerned, the peace of mind that comes with data deletion outweighs the benefit of open-ended retention.

Guru, AI, and the future

Generative AI offers a variety of improvements in how our users retrieve and make sense of information. While the way we use AI is limited only by our imagination, we will ensure we approach it with due care and consistency. Our third-party vendor processes are time-tested (and annually audited), and AI providers will undergo the same scrutiny as any other vendor accessing Guru.

As we continue to explore potential avenues for machine learning, we will not compromise our underlying security position in the process. We understand that if we erode trust and confidence in the product, then the AI boon has gained us nothing. We remain committed to the underlying truth that a ready source of knowledge empowers people to do their best work, and we will harness AI’s capabilities only to the extent that they improve Guru.

Experience the power of the Guru platform firsthand – take our interactive product tour