22/05/2023

Regulators Review Their Policies In Order To Address Issues With Generative AI Like ChatGPT
As the race to build more powerful artificial intelligence services like ChatGPT accelerates, some regulators are falling back on old rules to govern a technology that could fundamentally alter how societies and businesses operate.
 
Rapid advances in the generative AI technology behind OpenAI's ChatGPT have raised privacy and safety concerns. The European Union is at the forefront of drafting new AI rules that could become the global standard.
 
However, it will take some time before the law is put into effect.
 
"In absence of regulations, the only thing governments can do is to apply existing rules," said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.
 
"If it's about protecting personal data, they apply data protection laws, if it's a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable."
 
In April, Europe's national privacy watchdogs set up a task force to address concerns about ChatGPT after the Italian regulator Garante ordered the service taken offline, accusing OpenAI of violating the EU's GDPR, a sweeping privacy law passed in 2018.
 
ChatGPT was reinstated after the American company agreed to install age verification tools and let European users opt out of having their information used to train the AI model.
 
A person close to Garante told Reuters that the agency would begin examining other generative AI tools more closely. The data protection authorities in France and Spain also opened inquiries in April into OpenAI's compliance with privacy laws.
 
Generative AI models have become well known for making mistakes, or "hallucinations," spitting out false information with eerie certainty.
 
Such errors could have serious consequences. If a bank or government agency used AI to speed up decision-making, people could be unfairly rejected for loans or benefit payments. Big tech companies, including Alphabet's Google and Microsoft Corp, have stopped using AI products deemed ethically risky, such as financial products.
 
According to six regulators and experts in the US and Europe, regulators plan to apply existing laws covering copyright and data privacy to two key issues: the data fed into models and the content they produce.
 
Agencies in the two regions are being urged to "interpret and reinterpret their mandates," according to Suresh Venkatasubramanian, a former White House technology advisor.
 
He cited the U.S. Federal Trade Commission's (FTC) investigations of algorithms under its existing regulatory authority.
 
The EU's proposed AI Act would require companies such as OpenAI to disclose any copyrighted material, such as books or photographs, used to train their models, exposing them to potential legal challenges.
 
However, proving copyright infringement will not be straightforward, according to Sergey Lagodinsky, one of several lawmakers involved in drafting the EU proposals.
 
"It's like reading hundreds of novels before you write your own," he said. "If you actually copy something and publish it, that's one thing. But if you're not directly plagiarizing someone else's material, it doesn't matter what you trained yourself on.
 
According to Bertrand Pailhes, the technology lead at the French data regulator CNIL, the agency has begun "thinking creatively" about how existing laws might apply to AI.
 
In France, for instance, discrimination claims are typically handled by the Defenseur des Droits (Defender of Rights). However, that body's lack of expertise in AI bias has prompted CNIL to take the lead on the issue, he said.
 
"We are looking at the full range of effects, although our focus remains on data protection and privacy," he told Reuters.
 
The authority is considering using a GDPR provision that protects individuals from automated decision-making.
 
"At this stage, I can't say if it's enough, legally," Pailhes said. "It will take some time to build an opinion, and there is a risk that different regulators will take different views."
 
In Britain, several state agencies, including the Financial Conduct Authority, are drafting new rules for AI. To deepen its understanding of the technology, it is consulting the Alan Turing Institute in London along with other legal and academic institutions, a spokesperson told Reuters.
 
While regulators adjust to the pace of technological change, some industry insiders have called for closer engagement with corporate leaders.
 
So far, the dialogue between regulators and companies has been "limited," said Harry Borovick, general counsel at Luminance, a firm that uses AI to process legal documents.
 
"This doesn’t bode particularly well in terms of the future," he said. "Regulators seem either slow or unwilling to implement the approaches which would enable the right balance between consumer protection and business growth."
 
(Source: www.reuters.com)

Christopher J. Mitchell
