Six months ago, you vetted a CRM integration. Clean SOC 2, reasonable data access, standard OAuth scopes. Last month, that same vendor shipped a “GenAI Summarization” feature that pipes your customer data through a third-party LLM API. The announcement was buried in a Terms of Service update that nobody on your team read.
This is not theoretical. It’s happening across SaaS stacks right now, and it’s reshaping what enterprise procurement teams expect from vendor security assessments.
If you sell B2B SaaS into enterprise accounts, you need to understand two converging threats that are rewriting the rules of vendor due diligence: shadow AI and OAuth token exposure. Both were niche concerns 18 months ago. In 2026, they’re showing up in DDQs — alongside NIS2 supply chain requirements that hold covered entities responsible for their vendors’ security posture.
Shadow AI: The Vendor Risk You Didn’t Scope
Shadow AI is not an employee installing ChatGPT on their laptop. That’s the version most security articles describe, and it’s the least interesting version.
The real shadow AI risk for SaaS vendors is this: AI capabilities embedded inside software your organization already uses, enabled by default, operating on existing OAuth permissions — with no separate security review.
A collaboration tool adds an AI assistant that reads message history. A project management platform ships an AI summarizer that accesses all project data. A support tool integrates an LLM for ticket classification that processes customer PII. None of these trigger a new vendor assessment because the vendor relationship already exists. The procurement team approved the original integration. The new AI feature rides on the same OAuth token.
According to Gartner, 75% of security failures by 2026 will stem from mismanaged identities — not infrastructure vulnerabilities. Shadow AI accelerates this because AI features consume data through identity and permission layers that were scoped for a different purpose.
The Torii 2026 report found that more than half of the most widely adopted shadow applications discovered in enterprise environments are now AI-first tools, many relying on OAuth-based permissions that connect directly to corporate data stores.
For SaaS vendors selling into enterprise: your prospects’ InfoSec teams are starting to ask whether your product has AI features, what data they access, and whether customers can disable them. If you’ve added any AI capability to your product — even a summarization feature, even an AI-assisted search — expect questions about it in the next DDQ you receive. Our vendor security assessment checklist now includes an AI governance section covering exactly these questions.
OAuth Token Exposure: From Access to Attack Surface
OAuth is the plumbing that connects SaaS products together. It’s also the attack vector that defined 2025.
In August 2025, threat actor UNC6395 compromised OAuth tokens from the Drift chatbot integration with Salesforce. Because Drift had delegated access to customer Salesforce environments, the stolen tokens gave attackers trusted access to over 700 organizations — including financial institutions, healthcare providers, and government agencies. MFA was irrelevant. The tokens represented completed authentication. Obsidian Security researchers estimated the blast radius was 10x greater than previous direct Salesforce breaches.
This wasn’t a vulnerability in Salesforce or in Drift’s code. It was a supply chain attack exploiting the trust model of OAuth itself: a third-party integration with broad access, a stolen token, and no runtime monitoring of what that token was doing after issuance. For a deeper look at token lifecycle risks, see our guide on OAuth token security in vendor risk assessments.
The pattern has repeated across 2025 and into 2026. Token theft — through phishing, browser compromise, or supply chain breaches — has become the dominant SaaS attack vector. Attackers don’t need your password or your MFA device. They need one OAuth token from one integration in your stack.
For SaaS vendors, this has two implications:
As a product that issues tokens: Enterprise prospects will ask about your token lifecycle management. How long do OAuth tokens live? Are they scoped to minimum necessary permissions? Do you support token rotation? Can customers revoke third-party access without disabling the integration entirely?
As a product that consumes tokens: If your product integrates with customer systems via OAuth, your vendor assessment exposure now includes the security posture of every integration you connect to. The Drift/Salesforce incident proved that one compromised integration poisons the entire trust chain.
What Enterprise Procurement Teams Are Now Asking
Based on the DDQ evolution we’re seeing in 2026, here are the new questions appearing in vendor security assessments — and how to prepare for them.
AI Governance Questions
- Does your product use AI/ML features? If yes, what customer data do they access?
- Can AI features be disabled per-tenant?
- Do AI features transmit data to third-party APIs (including LLM providers)?
- What is your AI data retention policy? Is customer data used for model training?
- Do you have an AI governance policy? Who owns it?
How to prepare: Document every AI feature in your product, including experimental ones. For each, map the data flow: what data it reads, where it’s processed, whether it leaves your infrastructure. If you use a third-party LLM API, your data processing agreement with that provider needs to be audit-ready.
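That data-flow map can live as a machine-readable inventory rather than a prose document. The sketch below (plain Python, with hypothetical feature names and field choices of our own) flags any AI feature that would draw DDQ scrutiny because it sends customer data outside your infrastructure or into model training:

```python
from dataclasses import dataclass

@dataclass
class AIFeature:
    name: str
    data_accessed: list      # categories of customer data the feature reads
    processor: str           # "internal" or the name of a third-party LLM provider
    used_for_training: bool  # is customer data used to train models?

# Hypothetical inventory -- replace with your product's actual AI features.
inventory = [
    AIFeature("ticket-summarizer", ["ticket_body", "customer_email"],
              "external-llm-api", used_for_training=False),
    AIFeature("semantic-search", ["document_titles"],
              "internal", used_for_training=False),
]

def ddq_flags(features):
    """Return features that leave your infrastructure or feed model
    training -- the two things the new DDQ questions probe first."""
    return [f.name for f in features
            if f.processor != "internal" or f.used_for_training]

print(ddq_flags(inventory))  # -> ['ticket-summarizer']
```

Keeping the inventory in code (or YAML) means the audit-ready answer to "what customer data do your AI features access?" is a query, not a scramble.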
OAuth and Third-Party Access Questions
- What OAuth scopes does your product request? Are they documented?
- What is your token expiration and rotation policy?
- Do you monitor for anomalous API access patterns using issued tokens?
- Have you experienced any security incidents involving third-party integrations in the past 24 months?
- Can customers audit which third-party integrations have active access to their data?
How to prepare: Audit your own OAuth implementation. List every scope you request and justify each one against actual product functionality. If you request read:all when you only need read:contacts, you have a scoping problem that will surface in due diligence. Implement token rotation if you haven’t. Build a customer-facing integration dashboard that shows active connections and permissions.
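The scope audit itself can be a set comparison: what you request versus what each feature can justify. This is a minimal sketch with illustrative scope names (not tied to any specific provider's scope taxonomy):

```python
# Scopes your app actually requests during the OAuth flow.
requested_scopes = {"read:all", "write:contacts", "offline_access"}

# Scopes each feature can justify against real product functionality.
needed_by_feature = {
    "contact-sync": {"read:contacts", "write:contacts"},
    "token-refresh": {"offline_access"},
}

needed = set().union(*needed_by_feature.values())

# Anything requested but not justified is the scoping problem that
# surfaces in due diligence (here: read:all instead of read:contacts).
over_requested = requested_scopes - needed

print(sorted(over_requested))  # -> ['read:all']
```

Running a check like this in CI keeps scope creep from reappearing after the initial audit.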
Continuous Monitoring Questions
- How frequently do you scan your own application for vulnerabilities?
- Do you have automated security monitoring (not just annual pen tests)?
- Can you provide evidence of your current security posture dated within the last 90 days?
How to prepare: This is where continuous scanning — as opposed to one-off pen tests — becomes a competitive advantage. A 14-month-old pen test report doesn’t answer the “current posture” question. A scan from this week does. See our continuous security monitoring guide for how to set up automated evidence generation, or automate it with SaaSFort’s CI/CD integration to run scans on every deployment.
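Two of the "current posture" checks above are simple enough to sketch: is your latest scan evidence inside the 90-day window, and does a response carry the security headers scanners look for? The header list and date handling here are assumptions for illustration, not a standard:

```python
from datetime import date, timedelta

# Headers commonly checked by automated header scans (illustrative set).
REQUIRED_HEADERS = {
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
}

def evidence_is_current(scan_date: date, today: date,
                        max_age_days: int = 90) -> bool:
    """True if the scan evidence falls inside the buyer's freshness window."""
    return (today - scan_date) <= timedelta(days=max_age_days)

def missing_headers(response_headers: dict) -> set:
    """Return required security headers absent from a response."""
    present = {h.lower() for h in response_headers}
    return REQUIRED_HEADERS - present

print(evidence_is_current(date(2026, 1, 10), date(2026, 3, 1)))  # -> True
print(missing_headers({"Strict-Transport-Security": "max-age=63072000"}))
```

The point is not the checks themselves but that they run on a schedule, so the 90-day evidence question always has a fresh answer.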
From Point-in-Time to Continuous Assessment
The thread connecting shadow AI and OAuth risk is the same: point-in-time vendor assessments are no longer sufficient.
A vendor assessed clean in January can introduce AI features in March, suffer an OAuth token compromise in June, and none of those events trigger a reassessment under the traditional annual review model. The Cloud Security Alliance now recommends “trigger-based reviews” for high-risk SaaS vendors — meaning any material change to the vendor’s product, data processing, or integration architecture should initiate a security review.
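The trigger-based model reduces to a simple rule: certain vendor events start a reassessment immediately instead of waiting for the annual cycle. The event names below are illustrative, not from the CSA guidance:

```python
# Material vendor changes that should fire an out-of-cycle review.
REVIEW_TRIGGERS = {
    "ai_feature_added",   # new AI capability shipped
    "tos_updated",        # data processing terms changed
    "new_subprocessor",   # integration or architecture change
    "security_incident",  # breach or token compromise disclosed
}

def needs_review(event: str) -> bool:
    """True if a vendor event should trigger an immediate reassessment."""
    return event in REVIEW_TRIGGERS

print(needs_review("ai_feature_added"))  # -> True
print(needs_review("ui_redesign"))       # -> False
```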
For SaaS vendors selling into enterprise, this means your security posture needs to be continuously demonstrable, not just annually documentable. The shift from “pass the audit once” to “prove your posture continuously” is the single biggest change in vendor risk management happening right now.
Practically, this means:
- Automate your own scanning. Run OWASP, SSL/TLS, header, and API security checks on a regular schedule — weekly at minimum. Have current evidence ready at all times.
- Document AI features proactively. Don’t wait for the DDQ to ask. Publish an AI transparency page or include AI governance in your security documentation. Prospects notice when you’ve anticipated the question.
- Audit your OAuth implementation. Review every scope, every integration, every token lifetime. Publish a customer-facing integrations dashboard. Make revocation easy.
- Monitor your own supply chain. The vendors you integrate with are part of your attack surface. If one of your dependencies ships an AI feature or suffers a breach, your customers’ procurement teams will want to know how you responded. A robust third-party risk management program is essential for tracking these dependencies.
The Competitive Advantage of Proactive Transparency
Here’s the counterintuitive insight: the SaaS vendors that address shadow AI and OAuth risk first won’t just avoid losing deals — they’ll win them faster.
Enterprise procurement teams are drowning in vendors that can’t answer the new questions. When a SaaS vendor shows up with a current OWASP scan, documented AI governance, audited OAuth scopes, and a clear data processing map, they stand out immediately. Not because they’re more secure than everyone else, but because they can prove it. Our guide on how enterprise buyers actually evaluate SaaS security breaks down the exact scoring model procurement teams use.
The bar hasn’t risen to perfection. It’s risen to transparency. And transparency is something you can build in days, not months. Adopting a zero trust approach to vendor integrations accelerates this shift. Package your AI governance documentation alongside your security evidence package so it’s ready when the DDQ arrives. Download our SaaS Security Playbook 2026 for the complete AI governance and OAuth risk framework.
SaaSFort runs continuous OWASP, SSL, and header scans against your domains — so your security evidence is always current when enterprise buyers come calling. Start a free scan to see your score. Need NIS2-specific compliance evidence? Generate a NIS2 compliance PDF in 7 seconds.
Frequently Asked Questions
Q: What is shadow AI in the context of SaaS vendor risk?
Shadow AI refers to AI capabilities embedded inside software your organization already uses — enabled by default, operating on existing OAuth permissions, with no separate security review. Unlike shadow IT (employees installing unapproved tools), shadow AI rides on approved vendor relationships. A collaboration tool adding an AI assistant that reads message history is a typical example.
Q: How can enterprises detect shadow AI features in their SaaS stack?
Monitor vendor release notes and terms of service updates for AI feature announcements. Audit OAuth scopes periodically — if a vendor’s permissions haven’t changed but they’ve added AI features, those features are consuming data through existing access grants. Implement trigger-based vendor reviews that fire whenever a vendor announces a material product change, not just on annual cycles.
Q: What OAuth governance controls should SaaS vendors implement?
At minimum: scope tokens to minimum necessary permissions, enforce token expiration and rotation, monitor for anomalous API access patterns, and provide customers with a dashboard showing active integrations and their permissions. Enterprise buyers now specifically ask whether customers can revoke third-party access without disabling the integration entirely.
Q: Are enterprise buyers really asking about AI governance in DDQs?
Yes. In 2026, new DDQ questions cover whether your product uses AI/ML features, what customer data those features access, whether AI features can be disabled per-tenant, whether data is transmitted to third-party LLM providers, and whether you have a documented AI governance policy. If you’ve added any AI capability to your product, expect these questions in your next enterprise assessment.
Q: How does OAuth token theft differ from traditional credential compromise?
OAuth token theft bypasses passwords and MFA entirely. A stolen token represents a completed authentication — the attacker doesn’t need credentials or a second factor. Tokens also inherit the permissions of the original integration, which may be broader than any individual user’s access. The Drift/Salesforce incident in 2025 demonstrated that one compromised integration token can give attackers trusted access to hundreds of organizations simultaneously.
References:
- Gartner, “Managing Privileged Access in Cloud Infrastructure” (2025) — 75% identity-related security failures projection
- Torii 2026 SaaS Management Report — AI-first shadow application discovery statistics
- Obsidian Security / Palo Alto Unit 42 — Drift OAuth supply chain attack analysis (August 2025, 700+ organizations affected)
- Cloud Security Alliance, “Why SaaS and AI Security Will Look Very Different in 2026” (January 2026)
From Reading to Action
Scan your domain for free. First results in under 10 seconds, no signup required.