How AI Companies Can Quickly Become Enterprise Ready

AI companies reach Enterprise Readiness earlier than other industries, driven by the downstream impact of SOC 2 compliance, strong safeguards for sensitive customer data processed in LLMs, and a focus on core product development over non-proprietary features.


AI has taken the world by storm, as companies like OpenAI and Anthropic redefine the way we work through their continued advancements in large language models (LLMs). Unlike in the past, when AI was largely an ambiguous academic concept, today's AI is practical, accurate, and versatile. This has spurred the creation of hundreds of generative AI solutions in the B2B SaaS space, designed to enhance efficiency across every business function.

As these solutions gain rapid adoption, becoming “Enterprise Ready” has never been more important. Beyond initial growth fueled by strong product-led growth (PLG), AI companies must meet the stringent requirements that larger organizations demand in order to win enterprise-sized contracts.

WorkOS has seen tremendous growth with AI companies using the platform, powering organizations like Jasper, Copy.ai, AI21, Hex, and more. This article explains key priorities every AI company should embrace to achieve Enterprise Readiness, unlocking the next phase of growth.

AI requires “Day 1” Enterprise Readiness

Enterprise Readiness refers to a SaaS product's capability to fulfill the security, compliance, reliability, and support requirements of large-scale organizations, typically with over 1000 employees and complex evaluation, procurement, and decision-making structures.

According to Umair Akeel, who led AI at Twilio for many years and now leads an AI-focused startup, one of the first things he prioritized with his design partners was implementing SSO on “Day 1.” Because his company provides generative AI tooling for sales teams, the solution processes huge volumes of sensitive customer information before outputting personalized sequences. He further explained, “We are targeting growth-stage companies, but the trend is that even for these relatively smaller companies, compliance, privacy, and security are extremely important. These companies would never consider sending CRM data to a vendor that doesn’t take security seriously, so it was crucial for us to have SSO and SOC 2 in place before they even asked.”

Closely aligned with Umair’s approach, one of the major themes in the recently published Enterprise Readiness Guide for SaaS Product Managers was the need to adopt a “Day 1” Enterprise Readiness mindset. Previously, companies typically deferred SOC 2 compliance or enterprise-grade security features like SSO until there was visible traction with large customers. However, as securing higher average contract values (ACV) and reducing churn have become more important, rolling out these capabilities from the outset has become the standard.

In the AI space in particular, the shift to become Enterprise Ready occurs even earlier than in other industries for a few reasons:

  • Downstream impact of SOC 2 compliance.
  • Strong safeguards for sensitive customer data being processed in LLMs.
  • Prioritization of core product development over non-proprietary features.

Downstream impact of SOC 2 compliance

A beneficial by-product of achieving SOC 2 compliance is that it helps customers remain SOC 2 compliant themselves. While there are other compliance standards such as ISO 27001, GDPR, HIPAA, and FedRAMP, SOC 2 is often perceived as the de facto standard in North America because it specifically focuses on the security, availability, processing integrity, confidentiality, and privacy of customer data in technology solutions.

For SOC 2, there are two types:

SOC 2 Type 1: This audit captures a snapshot of an organization's systems and assesses if their design aligns with trust principles on a given date. It focuses on control design but doesn't evaluate operational effectiveness over time.

SOC 2 Type 2: This audit is a more in-depth review spanning six months to a year and examines both the design and operational effectiveness of controls throughout the period. It ensures controls are not only suitably designed but also consistently applied and effective.

For AI companies, SOC 2 compliance is not just a matter of ticking a regulatory box; it's a critical component of their value proposition. When these companies achieve SOC 2 compliance, they assure their customers, particularly those handling sensitive data, that their services are reliable and secure. This assurance is crucial in industries where data breaches can have severe consequences, such as in healthcare or finance, where AI applications are increasingly common.

Moreover, when AI companies are SOC 2 compliant, it simplifies the compliance journey for their customers. By providing SOC 2 compliant services, AI companies enable their clients, who may also be working towards or maintaining their SOC 2 compliance, to integrate these services without jeopardizing their compliance status.

Strong safeguards for sensitive customer data being processed in LLMs

For companies building on LLMs, establishing strong safeguards for sensitive customer data is not just a best practice but a necessity. Due to their design, LLMs can inadvertently store or recall sensitive information. To counter this, AI companies must implement robust data protection measures. Encryption of data, both in transit and at rest, is essential. Additionally, rigorous access controls and authentication protocols such as single sign-on (SSO), multi-factor authentication (MFA), and role-based access control (RBAC) must be put in place, ensuring that only authorized personnel can access sensitive data.
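
As a rough sketch of how these layers fit together (assuming the Python `cryptography` package; the role names and key handling are purely illustrative, not a production key-management design), the example below encrypts a sensitive field at rest and gates decryption behind a simple role check:

```python
# Minimal sketch: field-level encryption at rest plus a role check before decryption.
# Assumes the `cryptography` package is installed; key management (KMS, rotation,
# audit logging) is out of scope here, and all names are hypothetical.
from cryptography.fernet import Fernet

ALLOWED_ROLES = {"support_admin", "security_engineer"}  # hypothetical RBAC roles

key = Fernet.generate_key()   # in practice, load this from a secrets manager
fernet = Fernet(key)

def store_sensitive_field(value: str) -> bytes:
    """Encrypt a sensitive value before writing it to the datastore."""
    return fernet.encrypt(value.encode("utf-8"))

def read_sensitive_field(token: bytes, requester_role: str) -> str:
    """Decrypt only for roles explicitly allowed to view the plaintext."""
    if requester_role not in ALLOWED_ROLES:
        raise PermissionError(f"role {requester_role!r} may not view this field")
    return fernet.decrypt(token).decode("utf-8")

ciphertext = store_sensitive_field("jane.doe@example.com")
print(read_sensitive_field(ciphertext, "security_engineer"))
```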

AI companies are also encouraged to employ advanced techniques such as data anonymization and pseudonymization, which are vital in protecting user privacy.

Data anonymization involves completely stripping away personally identifiable information (PII) from data sets, so that individuals cannot be identified, tracked, or linked to the data. Anonymization is commonly used in AI for training models where personal data privacy is a concern, such as in finance. By removing identifiers, AI companies can utilize large datasets for machine learning without compromising individual privacy.

Unlike anonymization, pseudonymization replaces private identifiers with artificial identifiers, or pseudonyms. The data remains usable for analytics and processing, while the risk associated with data breaches or unauthorized access is reduced. This technique is particularly useful in scenarios where data needs to be re-identified under controlled conditions, such as in personalized marketing or customer service applications.
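
To make the distinction concrete, the sketch below (with hypothetical field names and a placeholder key) anonymizes a record by dropping identifying fields outright, and pseudonymizes it by replacing the customer ID with a keyed HMAC, so that re-linking is only practical for whoever holds the key:

```python
# Sketch contrasting anonymization and pseudonymization on a toy record.
# Field names and the secret key are illustrative assumptions.
import hashlib
import hmac

PII_FIELDS = {"name", "email", "phone"}
PSEUDONYM_KEY = b"load-from-secrets-manager"  # keep separate from the data store

record = {
    "customer_id": "cus_123",
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "phone": "+1-555-0100",
    "plan": "enterprise",
    "monthly_spend": 4200,
}

def anonymize(rec: dict) -> dict:
    """Drop identifying fields entirely; the result cannot be linked back."""
    return {k: v for k, v in rec.items() if k not in PII_FIELDS and k != "customer_id"}

def pseudonymize(rec: dict) -> dict:
    """Replace the identifier with a keyed hash; re-linking requires the key."""
    out = {k: v for k, v in rec.items() if k not in PII_FIELDS}
    out["customer_id"] = hmac.new(
        PSEUDONYM_KEY, rec["customer_id"].encode(), hashlib.sha256
    ).hexdigest()
    return out

print(anonymize(record))
print(pseudonymize(record))
```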

Many AI companies, especially those dealing with sensitive information, incorporate these techniques as a part of their data handling and processing protocols. For instance, AI platforms in healthcare might use anonymization to protect patient data while analyzing medical records, or a customer relationship management (CRM) tool might use pseudonymization to safely process customer data. This increased focus on data security propels AI companies into a state of Enterprise Readiness much earlier than seen in other industries.

For developers and technical leads in the AI space, this means designing and implementing security measures right from the inception of the product. Unlike traditional software development, where extensive security protocols might be developed in response to growth, AI companies must consider these aspects from the ground up, especially when working with LLMs.

Prioritization of core product development over non-proprietary features

In AI, where innovation occurs at an unparalleled velocity, delivering exceptional products is critical. Companies face stiff competition, which means teams of highly skilled engineers, who are in great demand, should focus their efforts on developing and refining the product's core differentiation. A common pitfall organizations face when deciding to build enterprise features like SSO and SCIM user provisioning in-house is underestimating the time and resources required to build and maintain these services. In fact, in the Enterprise Readiness Guide, product leaders explicitly highlighted the sheer amount of resources building in-house would require:

  • “Yes, it is sometimes necessary to build features in-house, but it is critical to realize that processes will be even more complex and time-consuming than you expect.” - Patrick Malatack, former VP Product, Twilio
  • “If you do decide to build in-house, adopt a mindset that it is okay with the first few iterations simply not working. It’s also super annoying to have to worry about all the edge cases but those are inevitable.” - Thomas Schiavone, former VP Product, Sift

Here are additional considerations that explain the complexities of building a feature like SSO in-house:

Initial Setup Challenges: Establishing SSO requires in-depth knowledge of various Identity Providers (IdPs) and their specific authentication protocols like SAML, OAuth 2.0, and OpenID Connect. Each IdP (e.g., Okta, Azure AD, Google) presents unique integration challenges, ranging from different SSO token formats to varying user attribute mappings. Developers must ensure their SSO solution can normalize these discrepancies, a process that requires extensive coding and configuration. The development phase also includes implementing secure token handling and validation mechanisms, along with setting up secure communication channels like TLS to prevent man-in-the-middle attacks.
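
As one small illustration of that normalization work, the sketch below maps user attributes returned under different names by different IdPs onto a single internal profile. The attribute names are examples only and depend entirely on each IdP's configuration; the OIDC claims shown are standard, while the SAML names are hypothetical:

```python
# Sketch: normalize user attributes from different IdP responses into one profile.
# Attribute names vary by IdP and per-customer configuration; these are examples.
ATTRIBUTE_MAPS = {
    "okta_saml":    {"email": "Email", "first": "FirstName", "last": "LastName"},
    "azure_ad_oidc": {"email": "preferred_username", "first": "given_name", "last": "family_name"},
    "google_oidc":  {"email": "email", "first": "given_name", "last": "family_name"},
}

def normalize_profile(idp: str, raw_attributes: dict) -> dict:
    """Map an IdP-specific attribute payload onto the app's internal user profile."""
    mapping = ATTRIBUTE_MAPS[idp]
    return {
        "email": raw_attributes[mapping["email"]],
        "first_name": raw_attributes.get(mapping["first"], ""),
        "last_name": raw_attributes.get(mapping["last"], ""),
    }

profile = normalize_profile(
    "azure_ad_oidc",
    {"preferred_username": "jane@acme.com", "given_name": "Jane", "family_name": "Doe"},
)
print(profile)
```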

Ongoing Maintenance: Post-deployment, the SSO solution demands continuous monitoring and updating. This includes keeping up with the changing security standards and protocols of IdPs, regular application of security patches, and adjusting to API changes from IdP providers. The development team must also stay on top of emerging security vulnerabilities, requiring frequent updates to security audits and practices.
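
One recurring piece of this maintenance is watching for IdP signing-certificate rotation. The sketch below (using the standard SAML metadata and XML-DSig namespaces; the warning threshold and function names are assumptions) parses IdP metadata and reports how many days remain before each signing certificate expires:

```python
# Sketch: warn when an IdP's SAML signing certificate is close to expiry.
# Uses the standard SAML metadata / XML-DSig namespaces; threshold is arbitrary.
import base64
import datetime
import xml.etree.ElementTree as ET
from cryptography import x509

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

def days_until_cert_expiry(metadata_xml: str) -> list[int]:
    """Return days remaining for each X.509 certificate found in IdP metadata."""
    root = ET.fromstring(metadata_xml)
    remaining = []
    for cert_el in root.findall(".//ds:X509Certificate", NS):
        cert = x509.load_der_x509_certificate(base64.b64decode(cert_el.text.strip()))
        # not_valid_after is a naive UTC datetime, so compare against utcnow()
        remaining.append((cert.not_valid_after - datetime.datetime.utcnow()).days)
    return remaining

def check_rotation(metadata_xml: str, warn_days: int = 30) -> None:
    for days in days_until_cert_expiry(metadata_xml):
        if days < warn_days:
            print(f"IdP signing certificate expires in {days} days; plan a rotation")
```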

Operational Challenges: Service availability is critical, and any downtime in the SSO service can lock out users, disrupting access to the core product. This requires a robust disaster recovery and redundancy plan. As new enterprise clients onboard, each potentially requiring custom SSO integration or specific IdP configurations, the support and engineering teams must engage in detailed, client-specific implementation processes. This not only adds to the support workload but also requires a high level of technical expertise to address each client’s unique requirements and potential integration issues.
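
One way teams keep these per-client differences manageable is a per-tenant connection registry that records each customer's protocol, metadata location, and attribute mappings in one place. A minimal sketch, with all fields and values hypothetical:

```python
# Sketch: a per-tenant SSO connection registry capturing client-specific IdP details.
# Fields and example values are hypothetical; a real system would persist these in a
# database and validate them during customer onboarding.
from dataclasses import dataclass, field

@dataclass
class SSOConnection:
    tenant_id: str
    protocol: str                  # e.g. "saml" or "oidc"
    idp_metadata_url: str          # where to fetch signing certs and endpoints
    attribute_map: dict = field(default_factory=dict)

REGISTRY: dict[str, SSOConnection] = {}

def register_connection(conn: SSOConnection) -> None:
    REGISTRY[conn.tenant_id] = conn

register_connection(SSOConnection(
    tenant_id="acme",
    protocol="saml",
    idp_metadata_url="https://idp.acme.example/metadata.xml",
    attribute_map={"email": "Email", "first_name": "FirstName"},
))
```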

Achieving Enterprise Readiness with WorkOS

Embracing Enterprise Readiness from the start is increasingly critical for companies, particularly those operating in the highly competitive and fast-paced AI landscape. WorkOS plays a pivotal role in helping high-growth AI companies like AI21, Copy.ai, Jasper, and Hex achieve Enterprise Readiness. By providing scalable and secure enterprise features, WorkOS enables these companies to focus on their core AI competencies while effortlessly meeting enterprise demands for security, compliance, and reliability. This partnership not only accelerates their journey to becoming Enterprise Ready but also positions them to capture significant opportunities in the ever-expanding realm of AI.
