The New Category: Why AI Vendor Velocity Is Breaking Third-Party Risk
By Guy Halfon, CEO at Rescana

The old buckets no longer hold
Every market has a moment when its categories stop making sense. Third-party risk is at that moment now. For years, vendors fit neatly into familiar buckets: SaaS providers, infrastructure partners, outsourced services. Reviews were slow because vendors were slow.
Annual assessments worked because change was incremental. Trust was something you established, documented, and revisited later. AI vendors don’t fit those buckets.
They behave less like traditional SaaS and more like living systems. They reason, act, integrate, and evolve continuously. They introduce new execution paths, new data flows, and new failure modes long after procurement has signed the contract. Yet we continue to evaluate them using frameworks designed for a static world. That mismatch is no longer theoretical. It’s operational.
Velocity is the new risk multiplier
The defining characteristic of modern AI vendors isn’t novelty. It’s speed:
- Speed of releases.
- Speed of adoption.
- Speed at which risk can emerge.
We’ve already seen what happens when velocity outpaces oversight. Microsoft Copilot surfaced how prompt injection could lead to unintended data access. Cursor, an AI-powered IDE embedded deep in developer workflows, had to patch a vulnerability where crafted prompts could result in unintended code execution. Anthropic’s MCP tooling showed that risk often lives not in the model, but in the ecosystem around it. Perplexity demonstrated how something as simple as shareable links and tokens could expose sensitive information.
These weren’t obscure startups. They were enterprise-relevant vendors doing what modern AI companies do best: moving fast. The lesson wasn’t that these vendors should never have been approved. It was that approval, as we define it today, is no longer sufficient.
The real job-to-be-done has changed
Most third-party risk programs still optimize for the wrong outcome.
They optimize for completion: questionnaires filled, reviews closed, approvals granted.
But the job that security leaders are actually hired to do is different:
- Maintain awareness of vendor risk as it evolves
- Detect meaningful change early
- Focus human attention where it matters most
In other words, the job is no longer to decide once, but to continuously understand. This is a fundamentally different problem from traditional vendor risk management. And it requires a different category of solution.
Introducing a new category: Continuous Vendor Trust
What’s emerging is not “faster TPRM” or “AI questionnaires.” It’s something else entirely. Continuous Vendor Trust is about treating trust as a living signal rather than a static status. It assumes vendors will change. It assumes documentation will lag reality. And it assumes human judgment is scarce and should be applied intentionally, not exhaustively.
In practice, this means:
- Trust centers, security pages, and public commitments become primary inputs, not supporting artifacts
- Baseline verification happens automatically, without human friction
- Change detection becomes more important than initial approval
- Escalation is driven by signals, not schedules
This isn’t about reviewing vendors more often. It’s about reviewing smarter.
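To make the shift concrete, here is a minimal, hypothetical sketch of signal-driven monitoring: fingerprint a vendor's public trust page, compare it against a stored baseline, and raise a change signal only when something actually moved. The file name, function names, and "trust page hash" heuristic are illustrative assumptions, not a description of any particular product or vendor API.

```python
# Minimal sketch: signal-driven vendor monitoring instead of calendar-driven reviews.
# All names, URLs, and thresholds are illustrative assumptions.
import hashlib
import json
import urllib.request
from datetime import datetime, timezone

BASELINES_FILE = "vendor_baselines.json"  # hypothetical local store of last-known states


def fetch_fingerprint(url: str) -> str:
    """Download a vendor's public trust/security page and return a content hash."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()


def check_vendor(name: str, trust_page_url: str, baselines: dict) -> dict | None:
    """Compare today's fingerprint to the stored baseline; emit a change signal if it moved."""
    current = fetch_fingerprint(trust_page_url)
    previous = baselines.get(name)
    baselines[name] = current
    if previous is not None and previous != current:
        # A change signal, not a verdict: whether it escalates to human review is a separate decision.
        return {
            "vendor": name,
            "signal": "trust_page_changed",
            "detected_at": datetime.now(timezone.utc).isoformat(),
        }
    return None


def run_once(vendors: dict[str, str]) -> list[dict]:
    """Check every vendor once and return only the change signals worth a human's attention."""
    try:
        with open(BASELINES_FILE) as f:
            baselines = json.load(f)
    except FileNotFoundError:
        baselines = {}
    signals = [s for name, url in vendors.items() if (s := check_vendor(name, url, baselines))]
    with open(BASELINES_FILE, "w") as f:
        json.dump(baselines, f)
    return signals  # vendors that haven't changed generate no review work at all
```

The design choice is the point: the calendar stops being the trigger. Unchanged vendors produce no work, and human attention is spent only where a signal says something moved.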
Why the old functions feel the strain first
This category shift shows up differently depending on where you sit. For GRC teams, the work quietly moves from collection to interpretation. The challenge is no longer chasing answers, but understanding scope, mapping inconsistent evidence to controls, and explaining why a risk decision still holds after a vendor changes.
For CISOs, the problem becomes one of focus. When dozens of AI vendors enter the organization, not all risks deserve equal attention. The ability to distinguish between low-impact experimentation and high-blast-radius tooling becomes critical.
TPRM teams feel the breaking point most directly. Volume explodes while timelines compress. The traditional model of deep manual reviews for every vendor collapses. Differentiation becomes the only viable strategy.
Procurement sits in the middle. Deals slow down not because security is “too strict,” but because risk is unclear. Clarity creates speed. Uncertainty creates friction.
The inevitable evolution of the stack
Every category shift eventually produces infrastructure built specifically for the new reality.
Rescana was built around this exact inflection point. Not to replace existing GRC or TPRM functions, but to give them a system designed for velocity. Evidence collection, normalization, and change detection happen continuously. Human expertise is reserved for ambiguity, immaturity, and meaningful risk.
This isn’t automation for automation’s sake. It’s an acknowledgment that trust, at AI speed, cannot be managed manually.
Early signals that the category is real
You can see the signals already:
- Security teams relying more on trust centers than questionnaires
- CISOs asking “what changed?” instead of “did we review?”
- TPRM teams redesigning intake flows around escalation, not uniformity
- Procurement pushing for earlier, clearer risk signals
These are not edge cases. They’re early indicators of a market reorienting itself around a new job-to-be-done.
The bottom line
AI vendor velocity didn’t break third-party risk because teams failed. It broke it because the category evolved.
The organizations that adapt will stop forcing new vendors into old buckets. They’ll adopt systems designed for change, not stability.
The question is no longer whether AI vendors will change after approval. It’s whether you’re operating in the old category - or the new one.


