OpenAI may own the ‘AI’ domain name, but does that give it the power to sway the power players who make laws in its favour? The EU AI Act, touted as the most stringent AI law in the world, now looks like a mockery. According to a TIME report, OpenAI lobbied the European Union (EU) to weaken the much-talked-about AI Act and reduce the regulatory burden on the company.
OpenAI’s frontman Sam Altman has been crusading for AI regulation across the world. At the US Senate hearing, on his world tour and in many other forums, Altman has spoken extensively about regulating AI to mitigate its potential harms. What was surprising was his proactive approach, suggesting regulations long before any government body asked for them. The EU lobbying claim, however, only strengthens the suspicion that OpenAI has been trying to sway the stakeholders who hold controlling power in its favour.
OpenAI Way or the Highway
OpenAI lobbied against categorising generative AI systems such as ChatGPT and DALL-E as ‘high risk’ simply because they can produce content that appears human-generated. Instead, the company suggested measures like content labelling and user disclosure. It even stated that though GPT-3 is not high risk in itself, it possesses capabilities that could be used in high-risk applications.
Ironically, last month Altman threatened to exit the EU if he felt the company would be ‘over-regulated’, a statement that drew mixed responses from lawmakers. Romanian member of the European Parliament Dragos Tudorache was open to talks with Altman, whereas EU industry chief Thierry Breton retorted that the draft rules were not up for negotiation. In the end, however, the EU draft favoured Altman.
OpenAI’s lobbying endeavours yielded results: the final iteration of the draft legislation does not categorise generative AI systems as inherently high risk, focusing instead on foundation models and obligations on their providers. The proposed EU Act was approved on June 14 and is expected to be finalised in January.
Take Your Business Elsewhere
While the EU takes a fluctuating stance on AI regulation, incorporating inputs from the very person who triggered the AI revolution, India has held a consistently firm stance on regulation. Union Minister Rajeev Chandrasekhar recently called Altman a ‘smart man’ and said that while Altman may have his own ideas on how AI should be regulated, India also has ‘smart brains’ with their own views on how AI guardrails should be framed.
On his recent visit to India, Altman met prominent leaders and government officials, including Prime Minister Narendra Modi. Though both sides claimed to have had fruitful conversations, there were no discussions on AI regulation or on Altman’s views about setting up a regulatory body.
Big Tech Lobbyists
OpenAI is not the only company that has been trying to shape the EU AI draft. Google and Microsoft have also been lobbying against the proposed regulations, asserting that generative AI systems are general-purpose and not inherently high risk. They argue that the real danger arises when smaller companies deploy these systems in high-risk use cases.
Painting a Misleading Picture
Ever since the US Senate hearing where he rallied for AI regulation, Altman has been trying to project a democratised model. With million-dollar grants for building AI regulatory frameworks and for fixing cybersecurity problems, OpenAI has been positioning itself as a company working for the public good. Yet by pushing for regulatory bodies to control AI across countries, and, in the EU’s case, even influencing lawmakers to draft frameworks that favour OpenAI, the company’s agenda remains unclear. With big expansion plans ahead, the path remains obscure.
Playing Evasive
Amidst inconclusive progress on AI regulation, problems with the chatbot continue. ChatGPT is still riddled with data privacy issues, and a number of companies continue to bar their employees from using it. Yesterday, over 1 lakh ChatGPT user accounts were found exposed and sold on the dark web. Instead of addressing these data threats, OpenAI is concentrating on bringing conformity through regulation, probably in a bid to gain an upper hand in future product releases. The company is also considering a marketplace-style model for AI software.