Governance

Agent Misalignment and Insider Threats: A Strategic Risk for AI Governance

Anthropic’s 2025 paper, Agentic Misalignment: How LLMs Could Be Insider Threats, highlights a risk that boards, regulators, and investors should address directly: large language models (LLMs), if not properly governed, could behave like insider threats by leaking sensitive information, undermining decisions, or misusing internal workflows. The paper goes beyond technical vulnerabilities to examine how integration of […]


The EU AI Act, DORA, and MiCA: The Emerging Structure of Board Accountability in Digital Regulation

With the adoption of the EU AI Act, DORA (the Digital Operational Resilience Act), and MiCA (the Markets in Crypto-Assets Regulation), the European regulatory framework has expanded in both scope and complexity. Each regulation addresses a distinct technological domain (artificial intelligence, ICT resilience, and crypto-assets), but together they introduce a coherent supervisory approach: one that places increased responsibility for emerging […]


Board-Level AI Oversight and Independent Director Responsibilities in Light of the 2025 CSSF/BCL Review

In May 2025, the Banque Centrale du Luxembourg (BCL) and the Commission de Surveillance du Secteur Financier (CSSF) published their second thematic review of the use of artificial intelligence in Luxembourg’s financial sector. Its scope was significantly expanded compared with the 2023 review, and it now provides the most comprehensive regulatory snapshot to date. While […]

