Episode 23 — Stand Up an AI Risk Intake Process: Bring New Use Cases Under Control (Domain 2)

In this episode, we’re going to build one of the most practical pieces of an AI risk program: an intake process that captures new AI use cases before they quietly become production reality. When beginners hear intake process, they sometimes picture a slow approval gate that exists mainly to say no, but the best intake process is designed to do the opposite. It creates a predictable path for teams to bring ideas forward, get clarity on what is required, and move faster because they are not guessing about rules and reviews. AI makes intake especially important because teams can adopt AI through vendor features, personal accounts, or simple configuration changes that bypass normal project planning. Without intake, the organization’s first awareness of AI use can come from a complaint, an incident, or an audit question, which is the worst time to discover something. The goal today is to show how to design intake so it is easy to use, hard to bypass, and strong enough to route high-impact work to deeper review without drowning low-impact work in bureaucracy.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A strong intake process starts with a simple truth that organizations often ignore: AI use cases will appear whether governance is ready or not, and the choice is whether they appear in the open or in the shadows. If the approved path is unclear or slow, teams will use whatever tool is available because they are under pressure to deliver. That behavior is not usually malicious, but it creates uncontrolled data exposure, unmanaged decision influence, and confusion about accountability. Intake works by making discovery routine, meaning new AI use is detected and recorded as a normal operating activity rather than as a crisis response. It also works by making expectations visible early, so teams know what evidence and controls will be required before they invest heavily in a solution. This early clarity reduces wasted work, because teams do not build a high-impact system only to learn later that they cannot use the data or cannot meet oversight requirements. A good intake process therefore protects innovation by reducing surprise and rework, which is exactly what leaders want when they ask for responsible AI adoption. When you design intake well, you are building a practical bridge between curiosity and controlled deployment.

The first teaching beat to lock in is what intake means in the AI risk context, because it is broader than project intake in traditional information technology. Intake is the set of steps that ensures any new AI use, any expansion of AI use, and any material change to an AI system is declared, recorded, and routed to the right level of governance review. That includes obvious projects like building a model, but it also includes enabling a vendor feature that uses organizational data, adopting a new external service, or changing a workflow so AI output becomes a decision trigger instead of a suggestion. Intake is also the moment where the program can identify whether the use case is low-impact or high-impact, which determines what happens next. If intake captures only major projects, it will miss the real-world risk, because many high-impact uses of AI arrive through small decisions that do not look like projects. A beginner misunderstanding is thinking that only data science teams need intake, when in reality intake must capture business, vendor, and employee use patterns. Intake is therefore a visibility control, and visibility is the foundation of every other control.

To make intake workable, you need a clear entry point, because people will not follow a process they cannot find or do not understand. The entry point should be simple enough that a busy team can submit a use case without feeling like they are writing a thesis. At the same time, it must gather the minimum information needed to route the request correctly, because the worst kind of intake is one that collects too little information and then forces endless back-and-forth. A good intake entry point collects a plain-language description of what the AI will do, who will use the output, what decisions it influences, and what data is involved at a high level. It also asks whether the use is internal-only or customer-facing, whether it touches sensitive data, and whether the team is proposing a pilot or production deployment. This information is not meant to be a burden, because it is the same information any responsible team should already be thinking about. The difference is that intake captures it consistently and stores it as evidence. For beginners, the key is to design intake around questions people can answer without specialized technical knowledge, because intake is the front door for non-technical teams as well.
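
For readers following along in the companion text, here is a minimal sketch of what such an entry point might capture, written as a simple Python data model. The field names, the example team, and the example values are illustrative assumptions, not a prescribed form.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class IntakeRequest:
    """Minimal intake record capturing the plain-language answers a
    non-technical team should be able to give at the front door.
    Field names and example values are illustrative assumptions."""
    submitted_by: str                  # requesting team or named owner
    description: str                   # what the AI will do, in plain language
    output_users: str                  # who will use the output
    decisions_influenced: str          # what decisions the output influences
    data_involved: str                 # high-level description of data sources
    customer_facing: bool              # internal-only vs. customer-facing
    touches_sensitive_data: bool       # personal or confidential business data
    deployment_stage: str              # "pilot" or "production"
    submitted_on: date = field(default_factory=date.today)
    inventory_id: Optional[str] = None  # linked once an inventory entry exists


# Example submission from a hypothetical team
request = IntakeRequest(
    submitted_by="Claims Operations",
    description="Summarize incoming claim documents for adjusters",
    output_users="Internal claims adjusters",
    decisions_influenced="Prioritization of the claim review queue",
    data_involved="Claim forms and correspondence, which may include personal data",
    customer_facing=False,
    touches_sensitive_data=True,
    deployment_stage="pilot",
)
```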

Once the entry point exists, intake must include triage, because not every use case deserves the same pathway. Triage is the step where the program decides whether a request can proceed under standard controls or whether it needs deeper review due to impact, sensitivity, or uncertainty. Triage is not the same as approval, because triage is about routing, not about deciding final acceptability. Triage should identify high-impact use cases where being wrong could cause severe harm, where the decision affects rights or safety, or where legal and trust obligations are strong. It should also identify sensitive data concerns, such as personal information or confidential business data, especially when data might be shared with an external service. Another triage focus is whether the AI output is advisory or determinative, because determinative use raises risk sharply. For low-impact uses, triage should route the request to a quick path that still enforces basic data rules and documentation, because speed matters for adoption and compliance. For high-impact uses, triage should route to formal assessment and governance review, because proportionality is what keeps the process both effective and usable.
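
To show how triage can stay about routing rather than approval, here is a small sketch of a routing rule. The specific conditions treated as high impact are assumptions drawn from the examples in this episode; a real program would define them in policy.

```python
from enum import Enum


class ReviewPath(Enum):
    QUICK_PATH = "standard controls: basic data rules and documentation"
    FORMAL_REVIEW = "formal assessment and governance review"


def triage(customer_facing: bool,
           touches_sensitive_data: bool,
           output_is_determinative: bool,
           affects_rights_or_safety: bool) -> ReviewPath:
    """Route a request to a pathway; triage decides routing, not final approval.
    The conditions treated as high impact here are illustrative assumptions."""
    high_impact = (
        affects_rights_or_safety          # being wrong could cause severe harm
        or output_is_determinative        # determinative use raises risk sharply
        or customer_facing                # stronger legal and trust obligations
        or touches_sensitive_data         # especially if shared with an external service
    )
    return ReviewPath.FORMAL_REVIEW if high_impact else ReviewPath.QUICK_PATH


# An advisory, internal-only pilot on non-sensitive data takes the quick path
print(triage(False, False, False, False))  # ReviewPath.QUICK_PATH
```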

A reliable intake process also needs to treat changes as first-class events, because AI risk can increase dramatically without a brand-new system being introduced. Changes include adding new data sources, expanding the user population, changing the decision context, enabling an automation step that removes human review, or switching vendors or vendor settings. These changes often feel minor to teams, but they can alter fairness, reliability, privacy exposure, and accountability. Intake should therefore define what kinds of changes must be submitted, not in legalistic language, but in practical terms that match how work evolves. If the intake process ignores change, then systems drift into new uses without review, and the program loses the ability to claim it governs high-impact use. A beginner mistake is to assume change management is separate from intake, when in reality change is just another form of intake because it introduces new risk. When changes are captured, the program can reassess impact classification, verify documentation updates, and adjust monitoring thresholds based on new conditions. This is how intake becomes a living control rather than a one-time gate.
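
One way to picture changes as first-class intake events is a short list of change categories that trigger re-entry, paired with the follow-up actions the program takes. The category names mirror the examples above, but the exact list is an assumption a real program would set for itself.

```python
# Change categories that should re-enter intake; the list is illustrative.
MATERIAL_CHANGE_TYPES = {
    "new_data_source",
    "expanded_user_population",
    "new_decision_context",
    "human_review_removed",
    "vendor_or_setting_change",
}


def change_follow_ups(change_type: str) -> list:
    """Treat a material change as another form of intake and return the
    follow-up actions the program takes; other changes return no actions."""
    if change_type not in MATERIAL_CHANGE_TYPES:
        return []
    return [
        "reassess impact classification",
        "verify documentation updates",
        "adjust monitoring thresholds for the new conditions",
    ]
```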

To bring use cases under control, intake must connect tightly to the inventory you built earlier, because a request is not truly known to the program until it is recorded as part of the organization’s AI landscape. Intake should create or update an inventory entry as a normal outcome of submission, so the program always knows what exists, who owns it, and what the system touches. This also helps with accountability, because inventory records can link to approvals, documentation evidence, and monitoring plans. When the intake process automatically feeds the inventory, the program reduces the risk of disconnected records where a system is approved but not tracked, or tracked but not monitored. The intake process is also a good moment to detect duplicate efforts, such as multiple teams proposing similar AI solutions, which can increase risk and waste resources if not coordinated. A complete inventory supports better governance decisions because leaders can see how a proposed use case fits into the existing portfolio and whether similar risks have already been managed elsewhere. For beginners, the important point is that intake is not a standalone form; it is the pipeline that keeps the inventory accurate and keeps the program’s visibility current.
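
Here is a rough sketch of intake feeding an inventory, using a toy in-memory dictionary. A real program would use a governed system of record, and the duplicate check shown is deliberately naive; the structure and names are assumptions for illustration.

```python
from typing import Dict, List

# Toy in-memory inventory keyed by system name; illustrative only.
inventory: Dict[str, dict] = {}


def register_in_inventory(name: str, owner: str, description: str,
                          data_involved: str) -> str:
    """Create or update an inventory entry as a normal outcome of intake, so
    every submission is tracked, owned, and linkable to approvals and evidence."""
    entry = inventory.setdefault(
        name, {"approvals": [], "evidence": [], "monitoring_plan": None}
    )
    entry.update(owner=owner, description=description, data_involved=data_involved)
    return name  # the identifier the intake record links back to


def possible_duplicates(description: str) -> List[str]:
    """Deliberately naive duplicate check: flag existing entries that share
    several keywords with a new submission so similar efforts can be coordinated."""
    words = set(description.lower().split())
    return [
        name for name, entry in inventory.items()
        if len(words & set(entry.get("description", "").lower().split())) >= 3
    ]
```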

Evidence expectations also start at intake, because the program must communicate what documentation and evaluation will be required before approval, especially for restricted and high-impact uses. A common failure pattern is collecting the request and then leaving the team to guess what comes next, which causes delays and encourages side routes around the process. A strong intake design includes a clear set of evidence categories that scale with impact, such as intended use boundaries, data sources and flows, evaluation plans, ownership and decision rights, and monitoring intentions. This does not mean the team must provide all evidence at intake, because early-stage proposals may not have all details. Instead, intake should establish the expectation that evidence will be required and should outline what the next step will request, so teams can plan. When teams know the evidence path, they can build their project plan around producing that evidence rather than treating it as an afterthought. This also supports defensibility because the organization can show that it requires evidence as a condition of deployment. For beginners, it helps to remember that evidence is the currency of risk governance, and intake is where the organization begins collecting that currency consistently.
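
One way to make evidence expectations explicit at intake is a simple mapping from impact tier to required evidence categories. The tier names and lists below are assumptions for illustration; a real program defines them in policy.

```python
# Evidence categories that scale with impact classification (illustrative).
EVIDENCE_BY_IMPACT = {
    "low": [
        "intended use boundaries",
        "data sources and flows (high level)",
        "named owner",
    ],
    "high": [
        "intended use boundaries",
        "data sources and flows",
        "evaluation plan and results",
        "ownership and decision rights",
        "monitoring plan with named reviewers",
    ],
}


def evidence_checklist(impact: str) -> list:
    """Tell the team at intake what the next step will request, so evidence
    is planned into the project rather than treated as an afterthought."""
    return EVIDENCE_BY_IMPACT.get(impact, EVIDENCE_BY_IMPACT["high"])
```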

A well-designed intake process also reduces friction by clarifying who participates in review and how handoffs occur, because handoffs are where processes often stall. Intake triage should route requests to the right reviewers based on impact and data sensitivity, such as privacy, security, legal, compliance, and risk functions, without requiring the requesting team to personally hunt down approvals. This routing can be managed through the operating model, where roles and decision rights are predefined, making the process predictable. The requesting team should also understand who will make the final decision and on what basis, because uncertainty about decision rights creates frustration and encourages teams to bypass governance. Intake should therefore generate transparency about the path ahead, including approximate review steps and required artifacts, while still keeping the details flexible. It should also support dialogue, because risk management is often about adjusting a use case to fit boundaries rather than simply approving or rejecting. If intake feels like a one-way submission into a black hole, teams will stop using it. If intake feels like a guided route to safe adoption, teams will use it willingly because it helps them succeed.
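
A small sketch of predefined reviewer routing follows. The review functions named here are hypothetical examples; which functions exist, and when they are pulled in, varies by organization.

```python
def reviewers_for(touches_sensitive_data: bool,
                  customer_facing: bool,
                  uses_external_vendor: bool) -> set:
    """Predefined reviewer routing so the requesting team never has to hunt
    down approvals; the function names are hypothetical examples."""
    reviewers = {"AI risk function"}            # always part of intake review
    if touches_sensitive_data:
        reviewers |= {"Privacy", "Security"}
    if customer_facing:
        reviewers |= {"Legal", "Compliance"}
    if uses_external_vendor:
        reviewers |= {"Third-party risk"}
    return reviewers
```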

Another key element is designing intake to address shadow AI in a constructive way, because shadow use is often the first signal that the organization’s approved path is not meeting real needs. An intake process can serve as a safe channel for teams to disclose AI use that already exists, especially when that use began informally. This does not mean the program ignores policy violations, but it means the program responds in a way that prioritizes risk reduction and practical correction rather than punishment as a first move. If the organization reacts only with enforcement, teams will hide more, and the program will lose visibility. Intake can be positioned as the place where teams can bring a tool into compliance by switching to approved alternatives, changing data handling practices, or adding required oversight. This approach reduces risk because it turns hidden behavior into managed behavior. It also gives the program intelligence about where employees are feeling pressure, such as needing summarization support or faster drafting tools, which can guide approved tool selection and training. For beginners, the key insight is that shadow AI is a governance design problem as much as it is a user behavior problem, and intake can be a bridge from informal use to controlled use.

Intake also needs to connect to risk acceptance and exceptions, because not every use case will meet every requirement immediately, and a mature program needs a controlled way to handle that reality. If the program has no exception pathway, teams may treat governance as impossible and bypass it entirely. If the program allows informal exceptions, then boundaries are meaningless and the program becomes inconsistent. A defensible approach is that intake captures when a team cannot meet a requirement, documents why, and routes the exception request to the appropriate decision right holder, especially for high-impact cases. The exception should be time-limited, conditions should be clear, and compensating controls should be required where possible. This exception handling is part of the intake system because exceptions often arise early when teams are trying to move forward and discover constraints. When exceptions are managed properly, the organization can make deliberate tradeoffs within risk appetite and tolerance instead of accidental tradeoffs driven by speed. For beginners, it is important to see exceptions not as loopholes but as documented risk decisions that must be owned and monitored. Intake is the mechanism that makes exception decisions visible and accountable.
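
To make the exception pathway concrete, here is a sketch of an exception record as a documented, owned, time-limited risk decision. The 90-day default and the field names are assumptions, not a standard.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List


@dataclass
class ExceptionRecord:
    """An exception captured through intake: a documented, owned, time-limited
    risk decision rather than an informal workaround. The 90-day default and
    the field names are illustrative assumptions."""
    use_case: str
    requirement_not_met: str
    justification: str
    approved_by: str                            # the decision right holder
    compensating_controls: List[str] = field(default_factory=list)
    expires_on: date = field(
        default_factory=lambda: date.today() + timedelta(days=90)
    )

    def is_expired(self) -> bool:
        # Expired exceptions must be re-reviewed, not silently extended.
        return date.today() >= self.expires_on
```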

A strong intake process also prepares the organization for monitoring by capturing the right operational details early enough that monitoring is not bolted on at the end. Monitoring plans depend on understanding what the system is supposed to do, what data it uses, and what harms matter most, which are all things intake can capture at least at a high level. Intake should therefore prompt teams to consider what signals will indicate rising risk, such as increased error rates in high-impact categories, increasing overrides, increasing complaints, or unusual output shifts. It should also encourage teams to identify who will monitor those signals and how often they will review them, because monitoring without ownership is not monitoring. When the intake process makes these expectations explicit, teams build their implementations around controllability, not just functionality. This is especially important for high-impact systems where the organization needs early warning and rapid response capability. A beginner misunderstanding is that monitoring is something you do only after an incident, when in reality monitoring is what helps you prevent incidents by detecting risk trends early. Intake is where that prevention mindset can begin, because it shapes design and resource planning before deployment.
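
Finally, here is a sketch of a monitoring plan captured at intake. The signal names, thresholds, and review cadence are placeholder assumptions to show the shape of the record, not recommended values.

```python
# A minimal monitoring plan recorded at intake; values are placeholders.
monitoring_plan = {
    "use_case": "Claims document summarization",
    "owner": "Claims Operations risk lead",   # monitoring without ownership is not monitoring
    "review_cadence_days": 30,
    "signals": {
        "error_rate_high_impact_categories": {"threshold": 0.05},
        "human_override_rate": {"threshold": 0.20},
        "complaints_per_month": {"threshold": 10},
        "output_distribution_shift": {"threshold": 0.15},
    },
}


def signals_breaching(observed: dict, plan: dict) -> list:
    """Return signals whose observed values exceed their thresholds, giving
    early warning so rising risk is caught before it becomes an incident."""
    return [
        name for name, cfg in plan["signals"].items()
        if observed.get(name, 0) > cfg["threshold"]
    ]


# Example: an override rate of 0.25 would trip the human_override_rate signal
print(signals_breaching({"human_override_rate": 0.25}, monitoring_plan))
```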

For exam-ready reasoning, it is helpful to connect intake to the kinds of failures it prevents, because that connection shows why intake is a Domain 2 capability rather than an administrative detail. Intake prevents unknown AI use, which is the root cause of many surprises, because the organization cannot govern what it cannot see. Intake prevents inconsistent oversight, because it routes similar use cases through similar review steps, improving defensibility. Intake prevents uncontrolled data exposure, because it flags sensitive data and vendor data flows early, before habits form. Intake prevents uncontrolled expansion of scope, because changes and new uses are captured and reassessed instead of being treated as casual adjustments. Intake also prevents governance theater, because it creates records of submissions, routing decisions, and approvals that can be reviewed for assurance. When you see exam scenarios where an organization is surprised by AI behavior, cannot explain where AI is used, or discovers a vendor feature after a problem occurs, the missing control is often an effective intake process. Strong answers typically establish intake as the front door that feeds inventory, documentation, classification, and monitoring, because those are the building blocks of a real program. If you can see intake as the beginning of the control chain, you can reason through many scenario questions calmly.

To close, standing up an AI risk intake process is about creating a predictable, usable front door that brings new AI use cases, new tools, and meaningful changes into governance before they become uncontrolled reality. Intake works when it is easy to find, simple to use, and designed for non-technical teams as well as technical teams, because AI adoption is organization-wide. Triage within intake routes low-impact uses quickly under standard controls while directing high-impact and sensitive uses to deeper review, which keeps governance proportional and enforceable. Intake must capture changes and expansions, not just new projects, because risk often increases through small shifts in reliance and workflow integration. It should feed the inventory, establish evidence expectations, route reviews efficiently, and provide a constructive path for bringing shadow AI into compliance. It should also handle exceptions as documented, time-limited risk decisions with clear ownership, and it should set monitoring expectations early so controllability is built in rather than bolted on. When intake is designed well, it accelerates responsible adoption by reducing guessing, reducing rework, and ensuring that risk boundaries are applied consistently before harm occurs.
