The New SR 26-02 Guidance from the OCC

April 28, 2026

The new MRM guidance didn’t just update SR 11-7. It exposed a much bigger problem.

For years, model risk management has been treated as a structured discipline—defined by inventories, validation cycles, and documentation standards that became, in practice, a checklist. That era is now over. The latest interagency guidance shifts the industry toward judgment. Regulators are stepping back from prescriptive expectations and emphasizing a risk-based, proportional approach. On the surface, that’s a welcome correction.

But there’s a second-order effect that deserves far more attention. As the definition of a “model” narrows, a growing share of real-world risk is being pushed outside the boundaries of formal MRM. Spreadsheets, end-user tools, rule-based processes, and automation layers may no longer qualify as models—but they continue to drive critical decisions across the enterprise.

And then there’s AI. Acknowledged, but effectively deferred. At a moment when financial institutions are actively embedding generative and agentic AI into workflows, decisioning, and customer interactions, the guidance stops short of addressing how these capabilities should be governed within an MRM framework.

How is that possible?

The answer is that regulation moves deliberately, and AI is evolving faster than any prior risk domain. Agencies are cautious by design—they avoid locking in frameworks too early, especially when the underlying technology is still shifting. But the absence of clear guidance does not slow adoption. It leaves institutions to run in different directions. Firms are already making decisions about how to classify, validate, and control AI-driven tools—without a consistent regulatory lens. Some are forcing AI into existing model frameworks. Others are treating it as a technology risk. Many are doing both, inconsistently.

This is where new risk is emerging: not from AI itself, but from the lack of a coherent governance approach around it.

At CIMCON, we see this dynamic clearly. The same environments that have long housed end-user computing risk (spreadsheets, EUCs, automation) are now becoming the entry point for AI-driven capabilities. And yet, these sit largely outside the scope of traditional MRM.

So we’re left with a gap: less prescriptive MRM, expanding non-model risk, and the fastest-growing risk category, artificial intelligence, still undefined. The firms that navigate this well will not wait for regulators to catch up. They will build governance frameworks that align control with actual risk, regardless of whether something is formally labeled a model.

Because the question is not “Does this fall under MRM?” It’s “Are we appropriately governing the risks we’re actually taking?”

AI Risk Management Policy
Explore the realm of Artificial Intelligence (AI) with our AI Risk Management Policy. This concise guide covers the spectrum of AI models, including supervised, unsupervised, and deep learning, and emphasizes making AI trustworthy based on the NIST AI Risk Management Framework. Learn to assess and manage AI risk, cultivate a culture of risk awareness, and use periodic testing with tools like ours. This policy is your essential toolkit for responsible and effective AI use in your organization.