EU Finalizes AI Code of Practice: What U.S. Policymakers Should Know
On July 10, 2025, the European Commission published its much-anticipated General-Purpose AI Code of Practice, offering companies a voluntary roadmap to align with the EU’s new Artificial Intelligence Act before its general-purpose AI obligations take effect on August 2, 2025.
Here’s what’s included:
Transparency Requirements: Companies are encouraged to disclose details about training data, compute usage, and system limitations.
Copyright Compliance: The framework urges respect for content owner controls (like robots.txt) and content attribution; a minimal sketch of a robots.txt check appears after this list.
Risk & Safety Controls: Especially for frontier models, developers are asked to implement risk assessments, safeguards, and incident reporting protocols.
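To make the robots.txt point concrete, here is a minimal sketch of honoring a site's machine-readable opt-outs before collecting content, using Python's standard urllib.robotparser module. The crawler user agent "ExampleAIBot" and the example URLs are hypothetical assumptions for illustration; the code of practice does not prescribe this particular mechanism.

```python
# Minimal sketch: check robots.txt before fetching a page for data collection.
# "ExampleAIBot" is a hypothetical user-agent token; real AI crawlers publish
# their own identifiers that site owners can allow or disallow.
from urllib.robotparser import RobotFileParser

def may_fetch(page_url: str, robots_url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt permits this crawler."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    url = "https://example.com/articles/some-page"
    if may_fetch(url, "https://example.com/robots.txt"):
        print("Allowed: proceed with fetching", url)
    else:
        print("Disallowed: skip", url)
```

The design point is simply that the opt-out check happens before any content is retrieved, which is the behavior the code's copyright chapter is pushing providers toward.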
While the code is not legally binding, companies that adhere to it can receive a “rebuttable presumption” of compliance, lowering their regulatory exposure under the AI Act.
U.S. Implications: From Guidance to Guardrails
With a federal moratorium on state AI laws off the table for now, this EU blueprint could influence policy discussions here in the U.S., particularly at the state level:
Regulatory Benchmarking
States like California, New York, and Illinois can draw on the EU’s structured approach to develop voluntary codes of conduct for developers and deployers, fostering responsible innovation while avoiding heavy-handed enforcement.
Transparency as a Default
Expect growing momentum for documentation requirements, e.g., disclosures around model provenance, capabilities, and safety limits, which are increasingly seen as the baseline for public trust.
Copyright & IP Pressures
The EU’s emphasis on protecting creators will likely accelerate similar debates in the U.S., especially as courts and Congress navigate questions around AI-generated content and fair use.
Proactive Risk Governance
The code’s focus on risk mitigation for advanced systems offers a valuable model for U.S. policymakers weighing incident response frameworks, audit standards, or safety gating requirements.
Where the U.S. Goes Next
For U.S. lawmakers and regulators, the EU’s move reinforces a key lesson: voluntary frameworks can build industry buy-in and regulatory momentum, especially in emerging sectors like AI, where flexibility and clarity are both essential.
U.S. enforcement agencies, such as the Federal Trade Commission and state attorneys general, could require AI model documentation forms for transparency, mirroring the EU’s rigorous model reporting.
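What such a documentation form might capture can be sketched in a few lines. The record below is a hypothetical illustration assembled from the disclosure categories the code emphasizes (training data, compute usage, capabilities, and limitations); the field names and sample values are assumptions, not the EU's official Model Documentation Form schema.

```python
# Hypothetical minimal model-disclosure record; fields are illustrative
# assumptions drawn from the disclosure categories discussed above, not an
# official regulatory schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDisclosure:
    model_name: str
    provider: str
    training_data_summary: str      # high-level description of data sources
    compute_estimate_flops: float   # approximate training compute
    capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

disclosure = ModelDisclosure(
    model_name="example-model-1",
    provider="Example Labs",
    training_data_summary="Licensed corpora plus web data gathered with opt-outs honored",
    compute_estimate_flops=1e24,
    capabilities=["text generation", "summarization"],
    known_limitations=["may produce inaccurate statements", "English-centric"],
)

# Serialize to JSON, the kind of machine-readable format a regulator could ingest.
print(json.dumps(asdict(disclosure), indent=2))
```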
States that move early can set the tone for federal guidance, much as the California Consumer Privacy Act shaped national privacy policy. The window is open for bold, balanced action.