On Tuesday, a number of present and past workers at artificial intelligence (AI) firms, such as Alphabet’s Google DeepMind and Microsoft-backed OpenAI, expressed concerns about the hazards associated with the developing technology.
Eleven present and former employees of OpenAI and one current and one former employee of Google DeepMind said in an open letter that the profit-driven nature of AI businesses impedes efficient supervision.
The letter went on, “We do not think bespoke structures of corporate governance are sufficient to change this.”
The letter also raises concerns about the dangers of unchecked AI, including the spread of misinformation, the loss of control over autonomous AI systems, and the deepening of existing inequalities, all of which, it warns, could ultimately lead to the “extinction of humans.”
Despite laws prohibiting such content, researchers have discovered instances of image generators from businesses like Microsoft and OpenAI creating images with misleading information about elections.
The letter stated that AI businesses cannot be relied upon to voluntarily share information with governments about the capabilities and limitations of their systems, and that they have only “weak obligations” to do so.
The open letter is the most recent to express worries about the safety of generative AI technology, which can generate text, images, and audio that resembles that of a human quickly and affordably.
The group has asked AI companies to refrain from enforcing confidentiality agreements that forbid criticism, and instead to create a channel through which present and past employees can raise concerns about risks.
Separately, OpenAI, the company founded by Sam Altman, announced on Thursday that it had thwarted five covert influence operations that sought to use its artificial intelligence models for deceptive activity online.