EU Publishes Final AI Code of Practice to Guide Compliance for AI Companies

The European Commission said Thursday (July 10) that it published the final version of a voluntary framework designed to help artificial intelligence companies comply with the European Union’s AI Act.

The General-Purpose AI Code of Practice seeks to clarify legal obligations under the act for providers of general-purpose AI models such as ChatGPT, especially models posing systemic risks, like the potential to help bad actors develop chemical or biological weapons.

The code’s publication “marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” Henna Virkkunen, the commission’s executive vice president for tech sovereignty, security and democracy, said in a statement. The commission is the EU’s executive arm.

The code was developed by 13 independent experts who gathered input from 1,000 stakeholders, including AI developers, industry organizations, academics, civil society organizations and representatives of EU member states, according to a Thursday (July 10) press release. Observers from global public agencies also participated.

The EU AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe and transparent, as well as respectful of fundamental human rights.

The act classifies AI applications into risk categories — unacceptable, high, limited and minimal — and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply with the act. Fines can go up to 7% of global annual revenue.

The code is voluntary, but AI model companies that sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission. The next step is for the EU’s 27 member states and the commission to endorse it.


Inside the Code of Practice

The code is structured into three core chapters: Transparency; Copyright; and Safety and Security.

The Transparency chapter includes a model documentation form, described by the commission as a “user-friendly” tool to help companies demonstrate compliance with transparency requirements.

The Copyright chapter offers “practical solutions to meet the AI Act’s obligation to put in place a policy to comply with EU copyright law.”

The Safety and Security chapter, aimed at the most advanced systems with systemic risk, outlines “concrete state-of-the-art practices for managing systemic risks.”

The drafting process began with a plenary session in September 2024 and proceeded through multiple working group meetings, virtual drafting rounds and provider workshops.

The rules for general-purpose AI models take effect Aug. 2, but the commission’s AI Office will begin enforcing them on new AI models after one year and on existing models after two years.

A spokesperson for OpenAI told The Wall Street Journal that the company is reviewing the code to decide whether to sign it. A Google spokesperson said the company would also review the code.

Source: https://www.pymnts.com/