Two years ago, the typical board conversation about a company's models was either absent or perfunctory. The audit committee asked whether the team was complying with whichever regulation applied to the industry, the answer was yes, and the conversation moved on. The board treated the topic as an engineering and legal matter that did not require their judgment.
That posture is changing fast. The boards we brief on AI capability and risk have moved from delegating the topic to engaging with it directly. The change is not driven by a single regulation or a single incident. It is driven by a cumulative recognition that automated decisions are now central to the business, that their failure modes are non-obvious, and that the board's customary diligence on financial and operational risk has not yet been extended to cover model risk.
Audit and assessment teams who get called in when a board engages with this topic should know what the conversation actually looks like at the board level, because that conversation is shaping the kind of audit work that will be commissioned over the next few years.
The most consistent shift is that boards are asking for inventories. The first question, repeated almost verbatim across our briefings, is some version of "what models are running in production, what decisions do they affect, and who can produce a list." The question is not adversarial. The board has come to understand that without an inventory, the rest of the conversation is hypothetical. Audit teams who arrive with the inventory as a deliverable in its own right, separate from the audit itself, are answering the question the board actually has.
The second shift is that boards are increasingly distinguishing between high-stakes and low-stakes automated decisions and asking the company to do the same. The interesting decisions, in the board's view, are not the ranking on the homepage. They are the decisions that affect a customer's price, eligibility, treatment, or experience in a meaningful way. Audit programs that triage by stakes, and apply more rigor to the high-stakes decisions, match what boards are actually asking for. Programs that treat all models uniformly are misaligned with the board-level conversation.
The third shift is that boards have started treating point-in-time audits as a starting point rather than as a deliverable. The first audit produces a baseline. The board now expects something that runs continuously or on a known cadence. Audit teams who can offer a continuous monitoring component, alongside the periodic deeper audit, fit into the board's mental model of the topic. Teams who only offer the periodic audit are positioning themselves for a smaller share of the work over time.
The fourth shift is on remediation. Boards have learned, often through painful examples, that an audit without remediation is not safety. They are now asking pointed questions about which findings have been addressed, which are accepted, and which are pending. Audit teams who deliver findings with a remediation plan attached, and who follow up on the plan in subsequent engagements, are addressing the question the board is actually asking.
The fifth shift is on disclosure. Boards are increasingly thinking about how the company's model risk posture would look if it became externally visible. This is partly a reputational concern and partly an anticipation of regulatory disclosure obligations that are emerging in several jurisdictions. Audit deliverables that produce both an internal report and an external-ready summary are filling a real need.
For audit and assessment teams adapting their practice to this environment, the working pattern is to design engagements with the board's questions in mind from the start. The deliverable is not just the audit report. It is the inventory, the triage by stakes, the continuous monitoring, the remediation plan, and the external-ready summary. Each of these maps to a question the board has been asking, and each of them is more valuable to the board than a thicker version of the original audit document.
The audit teams who make this shift are positioning themselves for the next phase of the work. The teams who do not are likely to find that boards have started commissioning the missing pieces from someone else.
This is a guest post from the team at Stator AI, who run executive briefings and capability reviews for boards and senior leaders engaging with applied AI.