Episode 14 — Inventory AI Systems Completely: Models, Data, Vendors, and Shadow AI (Domain 1)
In this episode, we’re going to take a very practical step that separates organizations that feel in control of AI from organizations that are guessing: building a complete inventory of AI systems. If you cannot name what AI exists in your environment, you cannot set boundaries, you cannot monitor outcomes, and you cannot defend decisions when someone asks what you are doing with AI and why. Beginners sometimes assume inventory is just a technical asset list, but for AI it is much more like a map of influence, because AI can shape decisions quietly through tools, vendor services, and embedded features that people forget are even there. A complete inventory helps you connect AI use to ownership, governance approvals, documentation evidence, and risk tolerance, because it gives you a single view of what must be controlled. It also helps you find surprises, like a team using a public AI tool with sensitive data or a vendor feature turned on by default. By the end, you should understand what it means to inventory AI completely and why that inventory is one of the strongest controls in an AI risk program.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A complete AI inventory starts with a mindset shift: you are not only tracking software, you are tracking where automated judgment touches business outcomes. That includes systems that predict, classify, rank, recommend, summarize, generate content, or route decisions, even when the word AI is not used in their product name. It also includes systems that influence humans, like tools that propose what to say to customers or which cases deserve attention first, because those outputs can shape behavior and therefore shape outcomes. The inventory is a way of making invisible influence visible, so the organization can apply governance consistently. Without that visibility, teams may apply strong controls to one high-impact model while ignoring another that is equally influential but hidden inside a vendor product. A beginner misconception is thinking inventory only matters after an incident, when in reality inventory is what prevents many incidents by revealing where controls are missing. When you know what exists, you can classify impact, assign ownership, and set oversight expectations before harm occurs. This is why inventory is not a clerical task; it is a foundational risk control.
To build an inventory that is actually complete, you need to understand what counts as an AI system in the first place. Artificial Intelligence (A I) in an organization often shows up as a model, meaning a learned pattern that produces outputs from inputs, but it can also show up as a service that wraps models in a user interface or a workflow feature. Some AI is obvious, like a chat assistant or a content generator, but some is embedded, like scoring engines used in fraud detection, prioritization features in service desks, or recommendation tools in marketing platforms. A system can also be AI-adjacent, where the model itself is outside your environment but your data is sent to it and the output is used in your processes. For inventory purposes, what matters is the functional behavior and the decision influence, not where the model physically runs. If a tool produces an output that changes what people do, it belongs on the inventory map. That functional definition helps avoid the trap of relying on vendor marketing labels, which can be inconsistent and misleading.
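If it helps to see that functional definition written down, here is a minimal sketch in Python of how an intake question might be encoded. The field names and the example tool are illustrative assumptions, not a prescribed schema.

# Minimal sketch: decide whether a tool belongs on the AI inventory based on
# functional behavior and decision influence, not on vendor labels.
def belongs_on_inventory(tool: dict) -> bool:
    ai_behaviors = ("predict", "classify", "rank", "recommend",
                    "summarize", "generate", "route")
    exhibits_ai_behavior = any(b in tool.get("behaviors", []) for b in ai_behaviors)
    influences_decisions = tool.get("output_changes_what_people_do", False)
    # AI-adjacent counts too: the model runs elsewhere, but our data goes in
    # and the output comes back into our processes.
    uses_external_model = tool.get("sends_data_to_external_model", False)
    return bool(exhibits_ai_behavior or influences_decisions or uses_external_model)

# A service-desk feature that ranks tickets belongs on the inventory even
# though "AI" never appears in its product name.
ticket_ranker = {"behaviors": ["rank"], "output_changes_what_people_do": True}
print(belongs_on_inventory(ticket_ranker))  # True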
One major section of the inventory is models, and beginners should think of this as tracking the decision engines that generate AI outputs. A model might be built internally, bought as part of a product, provided by a vendor service, or embedded in a platform feature. The inventory should capture enough information to distinguish one model from another, because models can have different purposes and different risk profiles even inside the same product. It should also capture what the model is used for, what type of output it produces, and whether the output is advisory or directly triggers actions. In many organizations, multiple models exist across departments, and the same model may be reused for more than one purpose, which creates risk if boundaries are unclear. A model that is safe for low-impact triage might be unsafe for high-impact eligibility decisions, even if it performs well overall. By inventorying models explicitly, you create the ability to ask whether each model is being used only within its intended purpose and whether the right oversight exists for the context where it operates. That is how inventory connects directly to responsible governance.
Another major section of the inventory is data, because AI risk is often driven as much by data as by the model itself. The inventory should identify what data types each AI system uses, where that data comes from, and whether sensitive or regulated data is involved. This matters because data drives privacy risk, fairness risk, and sometimes security risk, and those risks cannot be assessed if data flows are unknown. Beginners often assume data is only a training concern, but operational data matters just as much, because data sent to a model during daily use can be exposed, retained, or used in ways that violate policy. Data inventory also supports drift monitoring, because if data sources change or if the population changes, the model’s behavior can change. When you track data sources and flows, you can notice when a vendor adds a new data integration, when a team starts using a new field as input, or when a tool begins capturing information that was not previously included. Those changes can raise risk silently unless they are recorded and reviewed. Data tracking is therefore an essential part of complete AI inventory, not a separate project.
Vendors are another crucial part of the inventory, because many organizations rely on AI through products and services they did not build. When AI comes from a vendor, you still own the risk of how it is used, even if you do not control how the model was trained. That means the inventory should document which vendors provide AI capabilities, what those capabilities do, and what your organization’s responsibilities are under contracts and policies. Vendor AI is often a risk blind spot because it is easy to assume the vendor handled everything, and then discover later that you cannot explain outcomes, cannot access evaluation evidence, or cannot verify how data is handled. The inventory should also capture whether AI features are optional, whether they are enabled by default, and who has authority to turn them on or off. This is important because default-enabled features can create AI use without deliberate approval, especially when products are updated automatically. A strong vendor inventory also supports better governance decisions, because it allows comparison of vendor options based on transparency, limitations, data controls, and support for oversight. Without that vendor view, organizations treat AI as a scattered set of purchases rather than a controlled capability.
Shadow AI is the part of inventory that often surprises organizations, and it deserves careful, beginner-friendly explanation because it is a common pathway to harm. Shadow AI is AI use that exists outside official governance processes, often through employees using public tools, unsanctioned browser extensions, personal accounts, or unapproved features inside approved platforms. Shadow AI is rarely malicious; it is usually a productivity shortcut taken by people trying to do a good job quickly. The risk is that these uses can involve sensitive data, can generate content that is shared externally without review, or can influence decisions without accountability. Shadow AI also hides from monitoring, so harmful patterns can persist unnoticed. A complete inventory is the way you bring shadow AI into the light, not to punish people, but to understand what is happening and to create safer paths. When organizations ignore shadow AI, they create an environment where risk grows quietly and then explodes during an incident. When organizations inventory shadow AI, they can replace risky behavior with approved tools and clear rules.
A complete inventory also needs to capture how AI is embedded into workflows, because risk is determined by how outputs are used, not just by the model’s existence. Two teams might use the same AI feature in very different ways, one using it as a drafting assistant and another using it as an automated decision trigger. The inventory should therefore record the use context, such as whether the output is reviewed by a human, whether it is customer-facing, whether it affects eligibility or prioritization, and whether it is tied to high-impact outcomes. This context helps governance teams apply the right oversight level and helps monitoring teams choose the right signals to track. Beginners sometimes assume that once a tool is approved, its use is automatically safe, but approvals are typically tied to a particular intended use and set of boundaries. If the tool drifts into new uses, the risk profile changes, and the original approval may no longer apply. Inventory that captures workflow context helps prevent that drift in purpose by making expansions visible and reviewable. It also supports defensibility because the organization can show it understood not only what tool existed, but how it was used.
Ownership is another inventory element that must be captured clearly, because a list without owners is just a catalog of future confusion. Each AI system should have an accountable owner who can answer why it exists, what it is used for, what controls are in place, and what happens when risk increases. It should also have named responsibilities for technical maintenance, monitoring, and incident response, even if those responsibilities are shared across teams. Without ownership, issues linger because everyone assumes someone else is handling them, and monitoring becomes performative because no one is empowered to act. The inventory is the place where ownership becomes visible, because it connects systems to decision rights and escalation paths. This is also where inventory supports culture, because people learn that AI is not an unowned feature, it is a governed capability with accountability. For beginners, the key point is that ownership is part of what makes AI use defensible, because the organization can show that every system has someone responsible for oversight and someone authorized to intervene. A system without ownership is a system without control.
Completeness also depends on how the inventory is discovered and maintained, because AI environments change quickly and inventories become stale unless they are living processes. New vendor features appear, new internal projects start, and employees adopt new tools in response to pressure and convenience. If the inventory is created once and then forgotten, it will give leaders a false sense of safety, which can be worse than knowing you have gaps. A strong program treats inventory as ongoing, meaning it is updated when new use cases are proposed, when systems change, and when new tools are introduced. It is also reviewed periodically to confirm that recorded systems still exist, still have owners, and are still being used within boundaries. Beginners should understand that this is not about perfection; it is about building a reliable habit of visibility. When visibility is reliable, governance decisions become faster because reviewers do not start from scratch each time. When visibility is unreliable, every decision becomes a crisis investigation, and that is not sustainable.
Another reason inventory matters is that it supports consistent documentation expectations, because you cannot demand evidence for systems you do not know exist. Once an AI system is on the inventory, it can be linked to required documentation artifacts like intended use, impact classification, data sources, evaluation evidence, approvals, controls, and monitoring plans. This linkage turns the inventory into a control hub rather than a static list, because it becomes a map of what evidence exists and what evidence is missing. It also supports audits and executive reporting, because leaders can see the scope of AI adoption and the coverage of oversight controls. Beginners sometimes assume documentation is a separate activity that happens only for big projects, but responsible AI programs treat documentation as a baseline expectation that scales with impact. Inventory makes scaling possible because it tells you how many systems exist, where the high-impact ones are, and which ones need immediate attention. Without inventory, documentation efforts can be misdirected toward visible projects while hidden systems remain uncontrolled. This is why inventory is often the first real step toward an operational AI risk program.
Inventory also plays a direct role in monitoring and early warning, because monitoring must be targeted to the systems that actually influence outcomes. If you do not know which systems are making recommendations or scoring customers, you cannot choose meaningful signals to track. Once systems are inventoried with context, monitoring can focus on the right measures, such as performance changes, incident reports, drift indicators, and unusual patterns in outcomes. Inventory also supports escalation because it clarifies which systems are high-impact and which owners and leaders need to be notified when thresholds are exceeded. Beginners should notice how this connects to risk tolerance, because tolerance boundaries are only enforceable when you can identify which systems they apply to and who is responsible for acting. Monitoring is also where shadow AI becomes especially risky, because unapproved tools are invisible to oversight and therefore invisible to early warning systems. A complete inventory shrinks that blind spot and improves the organization’s ability to detect harm before it becomes severe. In practical terms, inventory is a prerequisite for credible monitoring.
To bring this all together, it helps to think of a complete AI inventory as a structured story that can be told consistently across the organization. The story begins with what AI systems exist, including models and embedded features, and it continues with how each system is used and what outcomes it influences. The story includes what data each system touches, where that data flows, and what vendor relationships are involved. It includes who owns the system, who monitors it, and what governance approvals apply. It includes whether the system is within policy boundaries and whether evidence exists to support reliance on it. When an organization can tell that story, leaders can defend their AI posture and teams can move faster because expectations are clear. When an organization cannot tell the story, every inquiry becomes a scramble, and risk grows in the gaps between teams. Beginners should see that inventory is how you replace scattered adoption with governed adoption.
To close, inventorying AI systems completely is one of the most powerful Domain 1 practices because it creates visibility, and visibility is the foundation of governance, accountability, documentation, and monitoring. A complete inventory tracks models and AI features as sources of automated judgment, tracks data sources and flows as the fuel that shapes behavior and risk, tracks vendors as shared responsibility relationships that must be understood, and tracks shadow AI as the hidden adoption that most often causes surprise. It also captures use context and workflow influence so oversight scales with impact, and it captures ownership so controls are enforceable rather than theoretical. When the inventory is treated as a living process, it stays accurate as the environment changes and prevents false confidence. Most importantly, inventory turns AI risk management from guesswork into disciplined control, because the organization can see what exists, apply boundaries consistently, and intervene quickly when harm signals appear. That is what it means to manage AI responsibly in the real world, and it is why inventory is never a side task in a serious AI risk program.