Gallagher Updates Regulation for Artificial Intelligence


Gallagher’s Cyber practice maintains a sharp focus on emerging technologies and the associated risks as organizations adopt them. In 2024, our attention is centered on evolving compliance requirements related to artificial intelligence (AI). Recent proposals for AI-specific regulations at the state, federal, and international levels are of particular interest. This summary serves as an update to our Q1 summary, “The Latest Regulation for Artificial Intelligence,” highlighting important developments.

State Regulation:

Currently, 17 states have introduced legislation aimed at regulating AI: California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia, and Washington.

  • Four states emphasize interdisciplinary collaboration: Illinois, New York, Texas, and Vermont.
  • Four states prioritize protection from unsafe or ineffective systems: California, Connecticut, Louisiana, and Vermont.
  • Eleven states focus on safeguarding against abusive data practices: California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, and Virginia.
  • Three states — California, Illinois, Maryland — and New York City focus on transparency.
  • Three states concentrate on protection from discrimination: California, Colorado, and Illinois.
  • Twelve states emphasize accountability: California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Virginia, and Washington.

Federal and Industry Sector Regulation:

On March 27, 2024, the US Department of the Treasury published a report titled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.” This report offers recommendations to financial institutions for utilizing AI technologies securely and effectively while mitigating operational risks, cybersecurity threats, and fraud challenges. Key recommendations include addressing capability gaps, improving regulatory coordination, and enhancing the National Institute of Standards and Technology (NIST) AI Risk Management Framework.

On March 28, 2024, the US Office of Management and Budget issued a memorandum mandating government agencies to appoint chief AI officers (CAIOs). These officers will be responsible for promoting AI innovation, coordinating agency AI usage, managing associated risks, and expanding reporting on AI use cases.

Global Regulation:

On March 13, 2024, the European Union passed the Artificial Intelligence (AI) Act, the world’s first comprehensive legal framework for AI. The act aims to foster trustworthy AI by ensuring adherence to fundamental rights, safety, and ethical principles while addressing risks associated with impactful AI models.

The key points from the AI Act include the following.

Risk Classification:

The AI Act classifies AI systems based on risk:

  • Unacceptable risk: Certain AI systems (e.g., social scoring systems and manipulative AI) are prohibited.
  • High-risk AI systems: These systems are regulated and subject to extensive obligations. Providers (i.e., developers) of high-risk AI systems must comply with requirements related to transparency, safety, and accountability.
  • Limited risk AI systems: These systems — including chatbots and deepfakes — are subject to lighter transparency obligations, as long as users are aware the content is AI generated.
  • Minimal risk AI systems: Systems such as AI-enabled video games and spam filters remain unregulated.

Most obligations fall on providers that intend to place high-risk AI systems on the EU market or use their output within the EU.

General-Purpose AI:

  • All general-purpose AI (GPAI) model providers are required to comply with the terms of the Directive on Copyright in the Digital Single Market (also called the Copyright Directive). They must also provide users with instructions for use and technical documentation for the platform.
  • Providers of GPAI models that present a systemic risk must conduct model evaluations and adversarial testing, document and report serious incidents, and implement cybersecurity controls.

Prohibited AI Systems:

The AI Act prohibits certain types of AI systems:

  • Those deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm.
  • Those exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior, causing significant harm.
  • Biometric categorization systems inferring sensitive attributes (e.g., race, political opinions, sexual orientation), except for specific lawful purposes.

Deployers of AI Systems:

Deployers of high-risk AI systems have obligations, though fewer than providers. These obligations apply to deployers located in the EU and to third-country deployers whose AI systems’ output is used in the EU.

Risk Management Strategies:

Organizations affected by these new AI compliance requirements should communicate them to key stakeholders and consider leveraging Cyber insurance policies that offer regulatory compliance guidance. It’s essential to embed a formal risk management plan for AI usage into overall enterprise risk management programs and coordinate efforts between various stakeholders.

In summary, today’s AI regulation cuts across multiple industry sectors and jurisdictions — including financial services, healthcare, technology, education, real estate, and municipalities — and will undoubtedly spread to others in short order. Any organization adopting generative AI tools should embed a formal risk management plan for AI usage into its overall enterprise risk management program. This will require a cross-divisional effort among key stakeholders: risk managers should coordinate between legal, compliance, human resources, operations, IT, marketing, and others while closely monitoring emerging risks as AI systems become more widely used.