Massachusetts Attorney General Issues Advisory on AI Regulation
The Massachusetts Attorney General’s Office (AGO) has issued an advisory clarifying that existing Massachusetts law applies to artificial intelligence (AI) in the same way it applies to any other product in commerce.
Massachusetts Attorney General Andrea Campbell is the first state attorney general in the country to issue such guidance on AI. The advisory acknowledges AI’s potential societal benefits and emphasizes Massachusetts’s significant role in guiding the technology’s development.
However, the primary purpose of the advisory is to warn AI developers, suppliers, and users that Massachusetts law, including the Massachusetts Consumer Protection Act (Chapter 93A), applies to AI. This Act prohibits unfair or deceptive business practices in Massachusetts.
The AGO provided the following examples of unfair or deceptive AI business practices:
- Falsely advertising the quality, value, or usability of AI systems.
- Supplying a defective, unusable, or impractical AI system for the advertised purpose.
- Misrepresenting the reliability, performance, safety, or conditions of an AI system, including claims of bias-free operation.
- Selling an AI system that breaches warranty by not being fit for its ordinary or specified purpose.
- Misrepresenting audio or video of a person to deceive others into business transactions or sharing personal information, as in cases of deepfakes, voice cloning, or fraudulent chatbots.
- Failing to comply with Massachusetts laws intended to protect public health, safety, or welfare.
The advisory also reminds businesses that AI systems must comply with privacy protection, anti-discrimination, and federal consumer protection laws.
Increasing AI Regulation
AI is expected to face increasing regulation and litigation at both state and federal levels. At the national level, the Biden administration issued an Executive Order in October 2023, directing federal agencies to address AI’s growing utility and risks. Following this, the Federal Trade Commission proposed a rule prohibiting AI from impersonating humans, and the Department of Labor announced principles for AI systems in the workplace. Other federal agencies are also taking action.
In 2024, Colorado and Utah passed AI laws likely to serve as models for other states. The Colorado Artificial Intelligence Act and Utah’s Artificial Intelligence Policy Act integrate AI use into existing state consumer protection frameworks. Consistent with the AGO’s warning, plaintiffs have already begun asserting privacy and consumer protection claims based on AI technology used on business websites.
Internationally, the EU Artificial Intelligence Act, adopted by the European Parliament on March 13, 2024, categorizes AI applications by risk level and regulates them accordingly. Applications posing unacceptable risk are banned outright, while high-risk applications are subject to strict precautionary measures and oversight. AI developers and suppliers doing business in Europe should ensure compliance with the EU AI Act.
Preparing for AI Compliance, Enforcement, and Litigation Risks
Given the uncertainty surrounding future AI deployment and how laws will be applied, compliance obligations and enforcement risks are likely to increase. Businesses should consult with experienced counsel before deploying new AI tools to mitigate risks. Organizations should consider the following measures:
- Develop an internal AI policy governing the use of AI in the workplace.
- Update due diligence practices to understand third-party vendors’ use of AI, including data collection, transmission, storage, and usage in training AI tools.
- Monitor state and federal laws for new legal developments affecting compliance obligations.
- Ensure appropriate governance processes, including continuous monitoring and testing for AI quality and absence of bias.
- Provide clear disclosure about AI tools, functions, and features, including notifications when customers engage with an AI assistant.
- Modify privacy policies and terms and conditions to explain AI technology use and available opt-out or dispute resolution options for customers.
- Review and update third-party contracts for AI-related terms, disclosure obligations, and liability allocation.
By taking these steps, businesses can better navigate the evolving regulatory landscape and ensure compliance with emerging AI laws.
Source: natlawreview.com