James Bonar recently emphasized that “AI won’t replace humans—but humans with AI will replace humans without AI.” This leadership imperative raises a critical question for Boards and C-suite executives: How do we transition from understanding AI’s strategic importance to governing its implementation effectively?
The answer lies in mastering four interconnected challenges that define the new leadership mandate for the AI age.
1. The Trust Calibration Challenge
AI doesn’t just make mistakes—it makes mistakes confidently.
I was initially skeptical about integrating AI into my process. What I didn’t expect was that somewhere in this process of testing, calibrating, and gradually expanding my use of AI tools, I would cross a crucial threshold: I had formed a working relationship with AI, and I couldn’t imagine operating without it now.
While my initial skepticism was personal, the dilemma it raises is systemic: how do you develop trust in a system that demands perpetual skepticism? This challenge is fundamental to cultural governance. Boards must ensure that Management actively fosters a culture of healthy skepticism and the psychological safety necessary to challenge AI output and properly “calibrate trust.”
The answer lies in what I call “strategic delegation”: instructing AI to create output within defined parameters, subject to rigorous quality controls.
Perhaps most importantly, I’ve learned to watch for bias reinforcement. AI tends to build on the assumptions and frameworks that I provide rather than challenging them. This means that we, the human partners, must deliberately ask it to consider alternative perspectives or play devil’s advocate.
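The delegation-and-challenge pattern above can be made concrete in code. The sketch below is illustrative only: `DelegationScope` and `delegate` are hypothetical names, and the `model` parameter stands in for whatever AI interface an organization actually uses. It shows the two controls described here, defined parameters on the task and a forced devil's-advocate pass to counter bias reinforcement.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DelegationScope:
    """Defined parameters for a delegated AI task (illustrative)."""
    task: str
    max_words: int = 500              # quality-control limit on output size
    require_counterargument: bool = True

def delegate(model: Callable[[str], str], scope: DelegationScope) -> dict:
    """Run a delegated task, then force a devil's-advocate pass so the
    model challenges its own framing rather than reinforcing ours."""
    draft = model(scope.task)
    critique: Optional[str] = None
    if scope.require_counterargument:
        critique = model(
            "Play devil's advocate: list the strongest objections to "
            "the following analysis.\n\n" + draft
        )
    # Quality control: flag output that exceeds the agreed scope.
    return {
        "draft": draft,
        "critique": critique,
        "within_scope": len(draft.split()) <= scope.max_words,
    }
```

The human reviewer then reads the draft alongside its critique, which keeps the "calibrate trust" step explicit rather than implicit.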
2. The Cognitive Dependency Risk
The ease of AI collaboration—perfectly patient, always responding thoughtfully—creates a new baseline for work interactions. This convenience presents two distinct risks to human capability:
Social Atrophy: If we become accustomed to streamlined, frictionless interaction, how will teams hone the essential skills we build through navigating the natural complexities of human relationships? Reading social cues, managing different communication styles, handling pushback, and working through disagreements remain foundational for team effectiveness.
Critical Thinking Dilution: There is a deeper concern about cognitive dependency—that regular AI collaboration might gradually weaken our ability to think through problems independently, analyze information critically, and generate original insights without technological assistance. For Boards, this is a talent risk. Boards must ensure AI deployment structures actively promote up-skilling to prevent the strategic risk of cognitive dependency in the workforce. AI must serve as an amplifier, not a cognitive crutch.
3. The Great Risk Expansion
AI fundamentally expands what we must govern—and transforms how we create value. Yes, AI can amplify our work through pattern recognition, policy drafts, and enhanced analysis. But AI cannot be held accountable, think critically about context, or own the outcome.
The risk function isn’t shrinking; it is evolving into something more critical. As risk professionals, we now must govern AI use itself, manage new attack surfaces, address model risk, and oversee digital workers (AI agents). AI agents making autonomous decisions require robust governance, accountability frameworks, and rigorous oversight mechanisms. This expanded scope is the new baseline for organizational risk.
4. Navigating the AI Paradox
AI’s greatest strength—its velocity and scale—also represents its greatest risk without human oversight.
To manage the risks defined above, Boards must focus on oversight: enforcing critical human review and human checkpoints wherever context, judgment, or ethics are material. With global frameworks still developing, here’s a 6-point navigation system for Board discussions and implementation:
- Assume Dual-Use and Red Team Relentlessly – Assume adversaries will weaponize every AI capability you deploy.
- Human-AI Partnerships: Critical Thinking Required – Design a clear scope of authority: AI for pattern recognition and speed; humans for context, judgment, and accountability.
- Strategic Human Checkpoints – Define the materiality of decisions requiring human-in-the-loop oversight.
- Velocity Governors – Build in intentional friction at critical junctures where human oversight is non-negotiable.
- Transparency as Security – Make AI decision-making visible and auditable.
- Cross-Functional AI Governance – Security ensures agent and identity guardrails; Privacy and Compliance review training data lineage and model risk; Data Owners approve AI use on datasets; IT enforces technical controls; Business Units own outcomes and risk appetite.
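Three of the points above (strategic human checkpoints, velocity governors, and transparency as security) can be sketched in one routing mechanism. This is a minimal illustration, not a production design: the `VelocityGovernor` class, the materiality threshold, and the confidence field are all assumptions introduced here for clarity.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    impact_usd: float    # estimated financial materiality of the decision
    confidence: float    # model's self-reported confidence, 0.0 to 1.0

@dataclass
class VelocityGovernor:
    """Routes AI decisions: auto-approves low-stakes ones, holds material
    or low-confidence ones for human review, and logs every routing call."""
    materiality_usd: float = 10_000.0   # human-in-the-loop threshold
    min_confidence: float = 0.85
    audit_log: list = field(default_factory=list)

    def route(self, d: Decision) -> str:
        # Intentional friction: material or uncertain decisions must wait.
        needs_human = (d.impact_usd >= self.materiality_usd
                       or d.confidence < self.min_confidence)
        status = "HOLD_FOR_HUMAN" if needs_human else "AUTO_APPROVED"
        # Transparency as security: every routing decision is auditable.
        self.audit_log.append({
            "ts": time.time(),
            "action": d.action,
            "impact_usd": d.impact_usd,
            "confidence": d.confidence,
            "status": status,
        })
        return status
```

The design choice worth noting: the governor never blocks silently. Every decision, approved or held, lands in the same audit log, so oversight bodies can inspect what the AI did and why it was or wasn’t escalated.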
The Path Forward
As with any emerging risk, the key is conscious navigation rather than unconscious drift. The AI paradox is a permanent condition to manage. Success lies not in eliminating the risk, but in building organizations that can manage the paradox while maintaining human agency.
AI adoption is about amplifying human expertise while maintaining the human elements essential for complex decision-making and teamwork. That’s the practical expression of AI leadership that Boards and C-suite executives must champion throughout their organizations.
The evolution of purposeful leadership and ethical risk management is underway.
About the Author:
Astrid Yee-Sobraques, FRM, CISSP is the senior risk executive Fortune 100 financial institutions call when complex risk and regulatory challenges bring previous efforts to a standstill. Over 25 years at GE Capital, AIG, Citibank, and PwC, she has honed expertise across Enterprise Risk Management, Strategic Advisory, Regulatory Compliance, Digital Transformation, and Cyber Security spanning Banking, Asset Management, and P&C Insurance globally. As the Risk Sherpa, she is known for her “risk connectivity” approach—integrating people, processes, and data to strengthen how organizations anticipate, manage, and respond to cascading risks. Astrid serves on GARP’s New York Chapter Advisory Committee and in October 2025, she released the Connected Enterprise framework—the first systematic approach to elevate ERM from foundational discipline to strategic engine.