ChatGPT Creator Sam Altman Says the Company Might Leave EU Due to Over-Regulation

The CEO of OpenAI, the company behind ChatGPT, has said the company could leave the European Union (EU) if it is unable to comply with upcoming legislation on artificial intelligence (AI).

The EU is preparing to introduce the first specific regulations for AI, which may include requirements for generative AI companies to disclose the copyrighted material used to train their systems.

Sam Altman of OpenAI has expressed reservations about the current draft of the EU AI Act, calling it overly restrictive, though he remains hopeful that the legislation will be revised. Altman believes that some of the Act’s safety and transparency provisions would be technically infeasible for OpenAI to implement, as reported by Time magazine. He recently met with UK Prime Minister Rishi Sunak and the heads of AI companies DeepMind and Anthropic to discuss the risks associated with AI, including disinformation and national security, as well as the voluntary actions and regulations needed to address them.

While some experts express concerns about the potential threats posed by super-intelligent AI systems, Sunak believes that AI has the power to positively transform humanity and improve public services. At the G7 summit in Hiroshima, the leaders of several nations, including the US and the UK, agreed on the need for international collaboration in creating trustworthy AI.

Before the EU legislation takes effect, the European Commission aims to develop an AI pact with Alphabet, Google’s parent company. EU industry chief Thierry Breton emphasizes the importance of international cooperation in regulating AI and proposes working with AI developers to establish a voluntary AI pact ahead of the legal deadline.

French President Emmanuel Macron (L) meets OpenAI CEO Sam Altman (R) at the Elysee Palace in Paris on 23 May 2023, to discuss AI issues. | Photo: Reuters

Tim O’Reilly, a veteran of Silicon Valley and the founder of O’Reilly Media, suggests that a good starting point for AI regulation would be to mandate transparency and create regulatory institutions for enforcing accountability. He emphasizes the need for companies developing advanced AI to collaborate on defining comprehensive metrics that can be regularly reported to regulators and the public, with a process in place to update those metrics as new best practices emerge. O’Reilly also warns against succumbing to fearmongering about AI, which, combined with regulatory complexity, could lead to analysis paralysis.
