On July 22, 2025, Minneapolis played host to the ElevateIT-sponsored Technology Summit, bringing together leaders and innovators to discuss the future of enterprise technology. A focal point of the event was the participation of John Barker, SVP AI at The Bonar Institute for Purposeful Leadership, whose expertise in AI governance and purposeful leadership took center stage in both the opening and closing sessions.
Opening Keynote: “The Rise of the Digital Vanguard”
The opening keynote was titled “The Rise of the Digital Vanguard: Leading in the Age of Innovation.” Moderated by security expert James McQuiggan, the panel featured distinguished voices including Bill Kloster, Wells Larsen, and John Barker himself. The panel explored the challenges and opportunities facing leaders as organizations grapple with rapid technological transformation, particularly the rise of agentic AI—self-directed AI agents that increasingly shape organizational strategy, workflows, and risk profiles.
John contributed insights as an enterprise governance strategist and legal advisor. He stressed the necessity of board and C-suite oversight of agentic AI, warning that ambiguity around definitions and responsibilities creates risk, and observed that few executives fully grasp its implications or the requirements for effective governance.
Closing Keynote: “Agentic AI and AI Agents: Governing Cultural, Legal, Regulatory, and Technical Risks”
Barker’s closing keynote urged leaders to engage deeply with the governance of AI agents and agentic AI systems. He articulated highlights and best practices for board and C-suite involvement:
- Definitional Clarity: Organizations must move beyond buzzwords and establish clear, enterprise-wide definitions for agentic AI to ensure consistent governance, risk, and compliance approaches. International standards like ISO/IEC 22989:2022 and frameworks from NIST and OWASP were cited as benchmarks.
- Oversight Structures: Barker advocated for robust human oversight models, distinguishing between “human-in-the-loop,” “human-on-the-loop,” and “human-out-of-the-loop” approaches. For boards and executive teams, this means instituting lines of defense and clear accountability for AI oversight.
- Risk Mitigation: Barker detailed emerging threat models—memory poisoning, tool misuse, privilege compromise, cascading hallucination attacks, and more. Boards must allocate adequate resources for compliance programs and establish incident response protocols tailored to AI risks.
- Culture and Ethics: Culture creates the “human firewall” for AI adoption. Barker highlighted findings from recent enterprise surveys showing tensions and even active internal resistance to generative AI adoption. Senior leadership must promote healthy, ethical cultures that balance innovation with compliance.
- Legal and Regulatory Vigilance: Agentic AI multiplies exposure to legal and regulatory scrutiny. Boards must anticipate issues stemming from agency law, consumer protection, contract breaches, intellectual property disputes, privacy violations, and product liability. Board education and legal oversight are essential to avoid missteps as legislation like the Colorado AI Act and the EU AI Act come online.
Looking Ahead: Executive Leadership in the AI Era
John Barker’s sessions emphasized that as agentic AI transforms the enterprise landscape, boards and C-suites cannot afford passive engagement. Leadership must proactively address definitional, technical, legal, and cultural challenges—building governance frameworks that adapt to the pace of AI innovation.
For executives and directors, the imperative is clear: embrace continuing education, foster cross-functional collaboration, and institute controls that blend resilience with ethical purpose.
For more on the Minneapolis Technology Summit 2025, visit the official event page at https://eitevents.com/event_pages/minneapolis-technology-summit-2025/.