Episode 22 — Design the AI Risk Operating Model: People, Process, Tools, and Cadence (Domain 2)

In this episode, we’re going to turn the charter from a statement of intent into something you can run week after week without reinventing it each time: an AI risk operating model. Beginners often think governance lives in policies and committees, but the day-to-day reality of risk control is operational. It depends on who does the work, what steps are followed, what information is captured, and how often everything is reviewed. An operating model is the practical design that answers those questions in a repeatable way, so AI risk management is not dependent on one heroic person remembering to check something. It also prevents the most common program failure, where the organization writes strong policies and standards but has no consistent rhythm for applying them, so controls exist on paper and not in practice. A good operating model balances speed and safety by scaling oversight based on impact and by making the approved path easier than shadow behavior. By the end, you should be able to explain what an operating model is, why it matters for AI risk, and how people, process, tools, and cadence fit together to produce consistent control and defensible reporting.

Before we continue, a quick note: this audio course is a companion to our two course books. The first book covers the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Start with the idea that an operating model is the program’s daily operating system, not a diagram for a slide deck. It describes how work moves from request to decision to monitoring and back again, with clear handoffs and clear responsibility at each stage. In AI risk, the operating model must handle two pressures at once: the pressure for rapid adoption of AI capabilities and the pressure for careful oversight because harms can be severe. If the model is too heavy, teams bypass it and create shadow AI use, which increases risk dramatically. If the model is too light, high-impact systems slip into production with weak evidence and unclear accountability, which creates the kind of surprise incidents leaders hate. A strong operating model uses proportionality, meaning it applies stronger controls to higher-impact use cases and lighter controls to lower-impact uses, while still maintaining baseline rules for data protection and accountability. It also connects to Enterprise Risk Management (E R M) so AI risk is not isolated, and it uses consistent documentation so decisions are traceable. Beginners should see that an operating model is not about bureaucracy; it is about making responsible behavior repeatable.

People is the first element, and it is more than a list of job titles because it is really about responsibilities and decision rights. The operating model should define who owns AI use cases from a business outcome perspective, because the business owner is accountable for the consequences of relying on AI. It should define who owns the technical operation, such as configuring systems, maintaining integrations, and running monitoring, because controls fail when operational ownership is fuzzy. It should define the governance roles, such as risk management, compliance, legal, privacy, and security, and clarify what reviews they perform and what authority they have to require changes. It should also define who maintains the central inventory and documentation repository, because visibility is a control that must be owned by someone rather than assumed. Another key people element is escalation, meaning who is notified when Key Risk Indicators (K R I s) exceed tolerance and who has authority to pause or restrict AI use. Without named roles and decision rights, the operating model becomes a set of suggestions and meetings rather than a functioning system. For beginners, the key insight is that governance only works when people know who is supposed to act, and the operating model is the place where those responsibilities become explicit.
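If it helps to see how decision rights can be written down, here is a minimal sketch in Python. Every role name, decision name, and escalation contact in it is hypothetical and illustrative, not a prescribed structure; the point is simply that named roles and decision rights can be recorded explicitly enough to be checked.

```python
# Hypothetical decision-rights register; role and decision names are illustrative only.
DECISION_RIGHTS = {
    "approve_low_impact_use_case": ["business_owner"],
    "approve_high_impact_use_case": ["ai_risk_committee"],
    "accept_residual_risk": ["business_owner", "chief_risk_officer"],
    "pause_or_restrict_ai_use": ["chief_risk_officer", "ai_risk_committee"],
    "maintain_ai_inventory": ["ai_governance_office"],
}

# Who gets notified when something crosses a line.
ESCALATION_CONTACTS = {
    "kri_threshold_exceeded": ["ai_governance_office", "chief_risk_officer"],
    "ai_incident": ["incident_response_lead", "chief_risk_officer"],
}

def can_decide(role: str, decision: str) -> bool:
    """Return True only if the named role holds the decision right."""
    return role in DECISION_RIGHTS.get(decision, [])
```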

It also helps to understand that an effective people design includes separation of duties, not because you distrust anyone, but because separation reduces blind spots and improves defensibility. The team that wants to deploy AI for business value may not be the best team to judge whether the evidence is sufficient, especially under pressure to ship. The technical team that builds the system may not be the best team to decide whether the use case is ethically acceptable or legally compliant, because that requires different expertise and accountability. The operating model can address this by ensuring that different perspectives are involved at the right decision points, especially for high-impact use cases. At the same time, the model should avoid creating so many handoffs that nothing moves, because delays encourage shadow behavior. This is where proportionality matters again, because high-impact cases justify more review, while low-impact cases should flow quickly under standard controls. A mature people design also includes training and enablement roles, because employees need guidance on policy boundaries and responsible use. If training is ignored, the organization will rely on enforcement alone, which is inefficient and often ineffective. People design is therefore both accountability design and behavior shaping.
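A separation-of-duties rule can be just as explicit. The sketch below, again with hypothetical names, encodes the idea that the requester of a high-impact use case cannot also be its approver, while lower-impact cases flow under standard controls.

```python
def valid_approval(requester: str, approver: str, impact: str) -> bool:
    """Illustrative separation-of-duties check: for high-impact use cases,
    the person asking for the deployment cannot be the person approving it."""
    if impact == "high" and requester == approver:
        return False
    return True
```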

Process is the second element, and process is the set of steps that turns governance intentions into repeatable decisions. A practical AI risk process begins with intake, where new AI use cases, new tools, and major changes enter the governance system through a known channel. Intake is where the organization prevents hidden adoption, because if there is no clear intake path, teams will build or buy AI quietly and only reveal it when it is already embedded in operations. After intake, the process typically includes classification, where the use case is categorized by impact so the organization knows what level of review is required. Next comes review and assessment, where required documentation and evidence are gathered, such as intended use, data sources, evaluation results, and planned controls. Then comes decision and approval, where decision rights are exercised and conditions are recorded, such as requiring human review or limiting automation scope. After approval, the process includes deployment readiness checks, ensuring monitoring and controls are in place before the system influences high-impact outcomes. Finally, the process includes ongoing monitoring, periodic reassessment, and incident handling, because AI risk can increase over time due to drift, misuse, and changing environments. The process must include an explicit loop for change management, because updates to data sources, model behavior, or workflow integration can change risk materially. For beginners, the important point is that process creates predictability, and predictability is what makes governance usable and enforceable.
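One way to picture that flow is as a small state machine. The Python sketch below uses stage and evidence names invented for illustration; the real list would come from your own standards. What it shows is the principle that a use case cannot advance until the evidence required at its current stage exists.

```python
from enum import Enum, auto

class Stage(Enum):
    # Lifecycle stages drawn from the narration; names are illustrative.
    INTAKE = auto()
    CLASSIFICATION = auto()
    REVIEW = auto()
    DECISION = auto()
    DEPLOYMENT_READINESS = auto()
    MONITORING = auto()

# Evidence a hypothetical program might require before leaving each stage.
REQUIRED_EVIDENCE = {
    Stage.INTAKE: {"intended_use_statement"},
    Stage.CLASSIFICATION: {"impact_rating"},
    Stage.REVIEW: {"data_sources", "evaluation_results", "planned_controls"},
    Stage.DECISION: {"approval_record", "approval_conditions"},
    Stage.DEPLOYMENT_READINESS: {"monitoring_plan", "control_checklist"},
}

def can_advance(stage: Stage, evidence: set[str]) -> bool:
    """A use case only moves forward when the stage's required evidence exists."""
    return REQUIRED_EVIDENCE.get(stage, set()) <= evidence
```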

A useful way to judge process quality is to ask whether the process produces the evidence and traceability needed for defensibility. Each stage should generate or update documentation artifacts that answer who approved what, why it was approved, what limitations are known, and what monitoring exists to detect harm. Process should also explicitly connect to escalation triggers, meaning it should define what happens when a K R I crosses a threshold or when an incident occurs. If the process ends at approval, it is incomplete, because approval is not the end of risk; it is the beginning of operational reliance. Another process quality factor is clarity of decision points, because uncertainty about who decides leads to delay and conflict. The operating model should define which decisions can be made by standardized controls and which require committee review or executive escalation. That decision routing is part of making the process scalable, because as AI adoption grows, the volume of use cases will increase. A process that requires senior review for every minor use will become a bottleneck, and people will route around it. A scalable process treats governance like traffic control, directing high-risk vehicles into deeper inspection while allowing low-risk vehicles to move with minimal friction.
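The traffic-control idea can also be expressed as a routing rule. This is a sketch under assumptions, not a standard: the impact tiers, the personal-data condition, and the review paths are all placeholders for whatever your classification criteria actually say.

```python
def route_review(impact: str, uses_personal_data: bool) -> str:
    """Illustrative routing: deeper inspection for high-risk cases,
    minimal friction for low-risk ones."""
    if impact == "high":
        return "committee_review"
    if impact == "medium" or uses_personal_data:
        return "standard_risk_review"
    return "baseline_controls"
```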

Tools is the third element, and tools are not just software; they are the mechanisms that support visibility, evidence capture, and monitoring. Beginners sometimes hear tools and assume you must buy an expensive platform, but an operating model can start with simple mechanisms as long as they are consistent and enforceable. The most important tool capability is inventory management, which provides a living view of AI systems, vendors, data flows, and usage contexts. Another essential tool capability is documentation management, meaning a reliable way to store and retrieve intended use statements, approvals, evaluation evidence, monitoring plans, and change histories. A third tool capability is monitoring and reporting, which supports K R I tracking, trend analysis, and escalation when thresholds are exceeded. Tools also support workflow, such as routing intake submissions to reviewers and recording decision outcomes, because manual tracking tends to break under scale. However, the operating model should treat tools as enablers of process and evidence, not as replacements for governance decisions. A tool cannot define risk appetite, cannot assign accountability, and cannot make ethical tradeoffs; it can only support humans in applying those decisions consistently. For beginners, the key is to see tools as part of control reliability, because the more consistent the tools, the less governance depends on memory and informal conversations.
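Even the inventory can start as something this simple. The record below is a minimal sketch assuming a handful of fields mentioned in this episode; it is not a prescribed schema, and a shared spreadsheet with the same columns would serve the same purpose as long as it is owned and kept current.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    # Minimal illustrative inventory record; field names are assumptions.
    system_name: str
    business_owner: str
    technical_owner: str
    vendor: str | None
    intended_use: str
    impact_rating: str                      # e.g. "low", "medium", "high"
    data_sources: list[str] = field(default_factory=list)
    approval_date: date | None = None
    approval_conditions: list[str] = field(default_factory=list)
    monitoring_plan: str | None = None
    last_reviewed: date | None = None
```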

Tools must also support the organization’s ability to detect and control shadow AI, because shadow use is a major risk pathway that weakens governance. In practice, this means the operating model should include mechanisms for discovering unapproved AI use, encouraging reporting, and providing approved alternatives. The tool aspect might be supported by monitoring of network usage patterns or application inventory, but the deeper tool is often cultural and procedural, such as creating a safe way for teams to disclose AI use without fear of punishment, especially when misuse was accidental. A strong operating model makes it easy to do the right thing by offering approved tools and clear policy boundaries, which reduces the incentive for shadow behavior. The tools should also support vendor management, because vendor AI features can appear through product updates, and the organization needs a way to track those features and determine whether they require review. Another tool function is supporting assurance, meaning the program can verify that reviews occurred, monitoring reports were produced, and controls were maintained over time. Even simple dashboards or reports can serve this role if they are reliable and tied to accountability. The operating model becomes stronger when tools reinforce the idea that governance is continuous rather than one-time.
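Reconciling what is observed against what is approved is the simplest shadow-AI check of all. Assuming you can pull a list of tools in use from something like an application inventory, the sketch below flags anything that never went through intake.

```python
def shadow_ai_candidates(observed_tools: set[str], approved_tools: set[str]) -> set[str]:
    """Tools seen in use but absent from the approved inventory; candidates
    for follow-up and disclosure, not automatic punishment."""
    return observed_tools - approved_tools
```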

Cadence is the fourth element, and cadence is where the operating model becomes a living rhythm rather than a collection of good intentions. Cadence defines how often key activities occur, such as inventory updates, review meetings, monitoring reviews, risk reporting, and reassessments of high-impact systems. Without cadence, governance becomes reactive, meaning things are only reviewed when someone complains or when an incident occurs, which is too late for many AI harms. With cadence, the program creates regular checkpoints where risk signals are reviewed and decisions are revisited based on new evidence. Cadence should be proportional, meaning high-impact systems may require more frequent monitoring and review, while low-impact systems can be reviewed less frequently. Cadence also supports change management, because regular reviews can catch when a system’s use has expanded beyond its original boundaries. For beginners, it helps to think of cadence as the heartbeat of the program, because it keeps oversight alive even when attention shifts to other priorities. A program with no heartbeat is a program that slowly stops functioning, even if its documents remain.
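Proportional cadence is easy to encode once impact tiers exist. The intervals below are assumptions chosen only to illustrate the shape of the rule; the real numbers belong to your risk appetite, not to this sketch.

```python
from datetime import date, timedelta

# Illustrative review intervals that scale with impact; the numbers are assumptions.
REVIEW_INTERVAL_DAYS = {"high": 90, "medium": 180, "low": 365}

def next_review(impact: str, last_reviewed: date) -> date:
    """Schedule the next periodic review in proportion to impact."""
    return last_reviewed + timedelta(days=REVIEW_INTERVAL_DAYS[impact])

def overdue(impact: str, last_reviewed: date, today: date) -> bool:
    """Flag systems whose heartbeat has stopped, meaning a review is overdue."""
    return today > next_review(impact, last_reviewed)
```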

Cadence also needs to connect to executive reporting, because leaders need a predictable view of risk posture rather than occasional crisis updates. A good operating model defines when leadership receives updates on AI risk, what those updates include, and how escalation occurs between regular reports if urgent risk emerges. This is where K R I s connect to cadence, because K R I thresholds can trigger out-of-cycle escalation when tolerance is exceeded. Cadence should also include periodic assurance activities, such as reviewing whether documentation is current, whether monitoring is operating, and whether exceptions are being handled properly. These checks prevent governance theater, where policies exist but controls are not actually operating. Cadence should be designed to be realistic, because an overly ambitious cadence will lead to missed reviews and erode credibility. It is better to have a cadence that is consistently followed than a cadence that looks impressive but collapses under workload. For beginners, the lesson is that cadence is a control because it creates predictability, and predictability is what makes risk management sustainable.
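Out-of-cycle escalation is just a threshold comparison run on a schedule. In the sketch below the K R I names, the tolerance values, and the notification step are all hypothetical; what matters is that a breach triggers a defined action rather than waiting for the next regular report.

```python
# Hypothetical K R I tolerances; metric names and values are illustrative.
KRI_THRESHOLDS = {
    "hallucination_complaint_rate": 0.02,   # share of sessions with a complaint
    "days_since_last_monitoring_report": 35,
    "unapproved_ai_tools_detected": 0,
}

def breached_kris(observations: dict[str, float]) -> list[str]:
    """Return every K R I whose observed value exceeds its tolerance."""
    return [name for name, limit in KRI_THRESHOLDS.items()
            if observations.get(name, 0) > limit]

def escalate_if_needed(observations: dict[str, float]) -> None:
    breaches = breached_kris(observations)
    if breaches:
        # Out-of-cycle escalation to the owners named in the operating model.
        print(f"Escalating outside the regular cadence: {breaches}")
```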

Now let’s connect people, process, tools, and cadence into a single coherent picture so you can see how the operating model works as a system rather than as four separate topics. People define responsibility and authority, ensuring each step has an owner and that decisions can be enforced. Process defines how work flows, ensuring intake, classification, review, approval, monitoring, and change management occur consistently. Tools support visibility and evidence, ensuring inventory and documentation are reliable and monitoring is actionable. Cadence ensures the system keeps running, ensuring reviews happen regularly and escalation occurs when thresholds are crossed. When these four pieces align, governance becomes a normal part of operating rather than a special event, and AI risk becomes manageable even as AI adoption accelerates. When any piece is missing, the model becomes fragile, because people do not know who acts, process becomes inconsistent, tools fail to provide evidence, or cadence collapses into reactive crisis management. Beginners should see that the operating model is what turns a charter into reality, because it defines how the charter’s objectives are achieved and measured in daily practice. This is why Domain 2 focuses on operating models, because execution is where risk programs succeed or fail.

A common beginner misunderstanding is thinking that an operating model must be complex to be effective, when in reality complexity can reduce compliance and increase shadow behavior. The operating model should be as simple as possible while still controlling high-impact risk, and that simplicity comes from standardization and proportionality. For low-impact use cases, standard controls and lightweight documentation may be enough, allowing rapid adoption without heavy review. For high-impact use cases, deeper review is justified, but even then the process should be predictable and criteria-driven so teams can plan. Another misunderstanding is thinking tools can replace governance, when tools only support the process and cannot determine acceptable tradeoffs. A third misunderstanding is thinking cadence is optional, when cadence is what prevents drift, policy decay, and surprise incidents. These misunderstandings matter because they show up in real program design choices, such as whether to route everything through a central committee or whether to empower distributed ownership under common standards. The strongest designs often create a centralized set of standards and oversight with distributed execution, because that scales better and reduces bottlenecks. For beginners, recognizing these design principles will help you answer exam questions about what makes an operating model effective and sustainable.

To make this concrete, imagine an organization where business teams are launching AI features in customer-facing products while employees are also using external AI tools for productivity. An effective operating model would define the people who own customer outcomes, the technical teams who operate AI systems, and the governance functions that review high-impact uses and data handling. The process would include intake for new product use cases and for new tools, impact classification to route high-impact cases to deeper review, and required documentation and evidence before deployment. Tools would include a living inventory that captures both product AI features and employee tool usage contexts, plus a documentation repository and monitoring dashboards tracking K R Is and control health. Cadence would include regular review meetings for high-impact systems, periodic inventory refresh, and executive reporting on AI risk posture, with escalation triggers when K R I thresholds are exceeded. This example shows how the operating model creates a controlled path for both formal projects and informal usage, reducing the gap where shadow AI often grows. It also demonstrates that the operating model is not only a governance concern; it shapes the organization’s daily habits around AI use. When that daily habit exists, risk becomes visible and manageable.

To close, designing an AI risk operating model is about creating a repeatable system that makes responsible AI use sustainable under real-world pressure. People define accountability, supporting roles, and decision rights, so governance is enforceable rather than advisory. Process defines the flow from intake through classification, review, approval, monitoring, and change management, so decisions are consistent and traceable. Tools support visibility and evidence capture, including inventory, documentation management, monitoring, and reporting, so the program does not depend on memory or informal conversations. Cadence provides the heartbeat of the program, ensuring regular review, assurance checks, and predictable leadership reporting, with escalation when K R I thresholds indicate rising risk. When these four elements align, the program becomes faster and safer at the same time, because teams know what to do, what evidence is required, and how to respond when conditions change. This operating model foundation prepares you for the next step, where we design an intake process that brings new AI use cases under control without blocking responsible innovation.
