Meta Opts Out of EU AI Code of Practice, Citing Legal Concerns
Meta has declined to sign the EU's AI code of practice, raising concerns about legal ambiguities.
Key Points
- Meta declines to sign the EU's AI code of practice.
- Joel Kaplan critiques the EU's AI regulatory path.
- Microsoft is considering signing the code, while OpenAI and Mistral have already committed.
- The EU's code mandates transparency and prohibits the use of pirated content.
Meta Platforms has officially declined to sign the European Union's (EU) voluntary code of practice on artificial intelligence (AI), a decision announced by Joel Kaplan, the company's chief global affairs officer. In a LinkedIn post, Kaplan wrote that Europe is 'heading down the wrong path on AI.' He argued that the code creates significant legal uncertainty for developers, imposing requirements that go beyond those set out in the forthcoming AI Act, the EU's law regulating AI use.
The EU's code of practice, drafted by a panel of 13 independent experts, sets out transparency mandates, including an obligation for signatories to disclose the sources of their training data and to comply with copyright law. Among other requirements, the code bars the use of pirated content in AI training and requires companies to provide regular updates on the AI tools they deploy. In contrast to Meta's rejection, Microsoft is reportedly considering signing the code, with President Brad Smith signaling a supportive stance pending a thorough review. OpenAI and Mistral have both signed, reflecting a more cooperative approach to EU regulation.
For broader context: earlier in July, multiple tech firms requested a delay to the AI Act's implementation, but the European Commission held to its schedule. New guidelines for AI model providers are set to take effect by August 2, 2027, affecting companies including OpenAI, Anthropic, Google, and Meta.