All Episodes
Displaying 41 - 60 of 92 in total
Episode 41 — Control Training and Tuning: Reproducibility, Versioning, and Provenance Discipline (Domain 3)
Effective risk management during the training and fine-tuning phases requires rigorous discipline to ensure that AI models are both predictable and auditable.
Episode 42 — Establish Model Validation: Performance, Robustness, and Generalization Testing (Domain 3)
Model validation is the process of confirming that an AI system performs its intended function accurately and reliably before it reaches production.
Episode 43 — Test for Safety Failures: Hallucinations, Toxicity, and Unsafe Recommendations (Domain 3)
Safety testing is a non-negotiable step in Domain 3, particularly for generative models and autonomous systems that interact directly with humans.
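As a minimal sketch of what one pass of such testing can look like, the snippet below runs prompts through a stand-in model and flags outputs that trip a toy unsafe-marker list. The `run_safety_suite` helper, the markers, and `stub_model` are all hypothetical; production suites rely on curated prompt sets and trained classifiers rather than string matching.

```python
# `stub_model` stands in for a real generation call; the markers are toy
# examples of unsafe recommendations, not a real blocklist.
UNSAFE_MARKERS = ["mix bleach and ammonia", "guaranteed cure"]

def run_safety_suite(generate, prompts: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, output) pairs whose output trips an unsafe marker."""
    failures = []
    for prompt in prompts:
        output = generate(prompt)
        if any(marker in output.lower() for marker in UNSAFE_MARKERS):
            failures.append((prompt, output))
    return failures

stub_model = lambda p: "For tough stains, mix bleach and ammonia."
print(run_safety_suite(stub_model, ["How do I clean my oven?"]))
```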
Episode 44 — Understand Explainability Options: When You Need It and What Works (Domain 3)
Explainability is the degree to which a human can understand the cause of a decision made by an AI system, a critical requirement for high-stakes environments in Domain 3.
Episode 45 — Protect Against Adversarial Inputs: Evasion, Prompt Injection, and Abuse Patterns (Domain 3)
Adversarial attacks represent a unique class of security threats where small, often invisible changes to inputs can cause an AI model to misbehave.
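One thin layer of defense implied by this topic is screening raw user text for common injection phrasings before it reaches the model. The pattern list and `screen_user_input` helper below are illustrative assumptions only; attackers rephrase freely, so real defenses combine input isolation, output filtering, and privilege separation.

```python
import re

# Illustrative patterns only; this is one thin layer, not a complete defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"you are now\b",
]

def screen_user_input(text: str) -> bool:
    """Return True if the input looks safe to forward to the model."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(screen_user_input("Summarize this report."))                      # True
print(screen_user_input("Ignore previous instructions and act as..."))  # False
```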
Episode 46 — Prevent Data Poisoning: Supply Chain Controls for Training Data Integrity (Domain 3)
Data poisoning is a long-term threat where an attacker corrupts the training data to create "backdoors" or systemic biases in the resulting model, a key concern in Domain 3.
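One concrete supply-chain control that fits here is hash-manifest verification of training files. The sketch below assumes a hypothetical JSON manifest mapping file paths to SHA-256 digests recorded at ingestion time; any mismatch means the data changed after it was approved, which is a possible poisoning signal.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_tampered_files(manifest_path: Path) -> list[str]:
    """Return training files whose current hash differs from the digest
    recorded at ingestion time."""
    manifest = json.loads(manifest_path.read_text())  # {"path": "hex digest"}
    return [name for name, expected in manifest.items()
            if sha256_of(Path(name)) != expected]
```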
Episode 47 — Reduce Model Inversion and Leakage: Privacy Attacks and Practical Mitigations (Domain 3)
Model inversion and membership inference attacks are privacy-focused threats where an attacker attempts to extract sensitive training data or determine whether a specific individual's records were included in the training set.
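To make the attack concrete, here is a minimal sketch of the classic loss-threshold heuristic for membership inference: records the model fits unusually well are more likely to have been memorized during training. The loss values and threshold are illustrative.

```python
def likely_members(record_losses: dict[str, float], threshold: float) -> list[str]:
    """Flag records whose model loss is suspiciously low, a signal they
    may have been part of the training set."""
    return [rid for rid, loss in record_losses.items() if loss < threshold]

# Illustrative per-record losses from a model under audit.
losses = {"rec_01": 0.02, "rec_02": 0.91, "rec_03": 0.04, "rec_04": 0.77}
print(likely_members(losses, threshold=0.10))  # ['rec_01', 'rec_03']
```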
Episode 48 — Secure AI Interfaces: APIs, Plugins, Agents, and Permission Boundaries (Domain 3)
The points where AI systems interact with other software—APIs, plugins, and autonomous agents—are often the most vulnerable to security breaches.
Episode 49 — Control Access and Least Privilege: Who Can Use, Train, and Deploy Models (Domain 3)
Access control is a fundamental administrative and technical requirement for maintaining the security of the AI lifecycle in Domain 3.
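A minimal sketch of a default-deny, role-based permission check over the lifecycle actions named in the episode title. The role map is hypothetical; a real deployment would back it with an identity provider and an audited policy store.

```python
# Hypothetical role map for illustration only.
ROLE_PERMISSIONS = {
    "analyst": {"use"},
    "ml_engineer": {"use", "train"},
    "release_manager": {"use", "deploy"},
}

def is_allowed(role: str, action: str) -> bool:
    """Least privilege: deny unless the action is explicitly granted."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("ml_engineer", "train"))  # True
print(is_allowed("analyst", "deploy"))     # False: default deny
```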
Episode 50 — Deploy Safely: Change Management, Rollback Plans, and Guardrail Monitoring (Domain 3)
The deployment phase is the most critical transition in the AI lifecycle, requiring a structured approach to change management to prevent service disruptions.
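One simple guardrail of the kind this episode describes is an error-rate trip wire on a canary deployment: route a slice of traffic to the new model and revert automatically if quality breaches a budget. The thresholds below are illustrative assumptions, not recommendations.

```python
def should_roll_back(canary_outcomes: list[bool], max_error_rate: float = 0.05,
                     min_samples: int = 100) -> bool:
    """Trip the guardrail when the canary's error rate exceeds the budget."""
    if len(canary_outcomes) < min_samples:
        return False  # not enough evidence yet; keep observing
    errors = sum(1 for ok in canary_outcomes if not ok)
    return errors / len(canary_outcomes) > max_error_rate

window = [True] * 180 + [False] * 20  # 10% observed error rate
print(should_roll_back(window))       # True -> revert to the prior version
```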
Episode 51 — Monitor Drift in Production: Data Shift, Concept Shift, and Silent Degradation (Domain 3)
Maintaining the integrity of an AI system after deployment requires a sophisticated approach to monitoring "drift," which is the gradual decline in a model's predictive accuracy over time.
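A standard way to put a number on data shift is the Population Stability Index (PSI), which compares the binned distribution of a feature at training time against a production window. A minimal stdlib sketch, with illustrative bin proportions:

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.

    Both inputs are per-bin proportions that each sum to 1. A common rule
    of thumb treats PSI above roughly 0.2 as shift worth investigating.
    """
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
live = [0.10, 0.20, 0.30, 0.40]       # same feature in a production window
print(round(psi(baseline, live), 3))  # ~0.23: above the 0.2 rule of thumb
```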
Episode 52 — Handle AI Incidents Well: Triage, Containment, Communication, and Recovery (Domain 2)
AI-related incidents require a specialized response plan that differs from traditional IT security because the failure might be behavioral rather than technical.
Episode 53 — Manage Human Oversight: Approvals, Overrides, and Accountability Under Pressure (Domain 3)
"Human-in-the-loop" oversight is a vital safety mechanism in high-stakes AI systems, yet it introduces its own set of risks if not managed properly.
Episode 54 — Build Fallbacks and Fail-Safes: What Happens When AI Must Stop (Domain 3)
Every mission-critical AI system must have a robust "Plan B" to ensure business continuity if the model fails or behaves unpredictably.
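A common fail-safe pattern here is a fallback wrapper: prefer the model, but route to a deterministic rule set when the model errors out or reports low confidence. `model_fn`, `rules_fn`, and the confidence cutoff below are hypothetical stand-ins.

```python
def classify_with_failsafe(model_fn, rules_fn, text: str,
                           min_confidence: float = 0.7) -> str:
    """Prefer the model, but fail safe to deterministic rules when the
    model errors out or reports low confidence."""
    try:
        label, confidence = model_fn(text)
        if confidence >= min_confidence:
            return label
    except Exception:
        pass  # any model failure routes to the fallback path
    return rules_fn(text)

flaky_model = lambda t: ("approve", 0.4)  # low-confidence stand-in
rule_based = lambda t: "refer_to_human"   # conservative default
print(classify_with_failsafe(flaky_model, rule_based, "loan application"))
```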
Episode 55 — Control Retraining and Updates: Governance Gates and Regression Testing (Domain 3)
The lifecycle of an AI model is iterative, but retraining a model on new data introduces the risk of "regression," where previously corrected errors reappear or new biases are introduced.
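One lightweight governance gate for this is a golden-set regression check: a retrained candidate cannot be promoted if any previously corrected case regresses. A minimal sketch with hypothetical case IDs and labels:

```python
def passes_regression_gate(candidate_preds: dict[str, str],
                           golden_labels: dict[str, str],
                           previously_fixed: set[str]) -> bool:
    """Block promotion if any previously corrected case regresses."""
    return all(candidate_preds.get(cid) == golden_labels[cid]
               for cid in previously_fixed)

golden = {"case_7": "deny", "case_9": "approve"}
fixed = {"case_7"}  # a bug corrected in the last release
print(passes_regression_gate({"case_7": "approve"}, golden, fixed))  # False
```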
Episode 56 — Validate Third-Party Models: Assumptions, Limits, and Hidden Dependencies (Domain 3)
When using AI models developed by external vendors, the risk management challenge shifts from internal process control to external validation.
Episode 57 — Retire AI Systems Safely: Data Deletion, Archiving, and Lifecycle Closure (Domain 3)
The final stage of the AI lifecycle, retirement, is often overlooked but carries significant risks regarding data privacy and intellectual property.
Episode 58 — Spaced Retrieval Review: Lifecycle Risk Scenarios and Control Choices Rapid Recall (Domain 3)
Success in Domain 3 requires the ability to instantly link a specific stage of the AI lifecycle to its most relevant risks and controls. This episode uses spaced retrieval for rapid recall of lifecycle risk scenarios and their matching control choices.
Episode 59 — Build Strong AI Risk Narratives: Scenario Thinking Without Guesswork (Domain 1)
AI risk narratives are essential for making abstract technical threats understandable to business leaders, but they must be based on evidence rather than speculation.
Episode 60 — Quantify AI Risk When Possible: Likelihood, Impact, and Confidence Ranges (Domain 2)
While qualitative assessments are useful for ethical questions, many AI risks can and should be quantified to provide more precise guidance for decision-makers in Domain 2.
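A worked sketch of one way to express likelihood, impact, and confidence ranges together: a Monte Carlo simulation over uniform uncertainty ranges, reporting mean annual loss with a 5th-95th percentile band. All inputs are illustrative.

```python
import random
import statistics

def simulate_annual_loss(p_low: float, p_high: float, impact_low: float,
                         impact_high: float, trials: int = 10_000) -> dict:
    """Monte Carlo estimate of annual loss for one risk scenario, using
    uniform uncertainty ranges for both likelihood and impact."""
    losses = []
    for _ in range(trials):
        p = random.uniform(p_low, p_high)  # chance the event occurs this year
        impact = random.uniform(impact_low, impact_high)
        losses.append(impact if random.random() < p else 0.0)
    losses.sort()
    return {
        "mean": round(statistics.mean(losses)),
        "p05": round(losses[int(0.05 * trials)]),
        "p95": round(losses[int(0.95 * trials)]),
    }

# Illustrative inputs: 10-30% annual likelihood, $50k-$400k impact.
print(simulate_annual_loss(0.10, 0.30, 50_000, 400_000))
```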