In response to regulatory concerns in the artificial intelligence (AI) industry, OpenAI has launched a program that will provide $100,000 grants to fund research toward developing a democratic process for deciding the norms AI systems should follow.
A Democratic AI Framework’s Development
Rapid breakthroughs in the field of artificial intelligence, and in generative AI in particular, have raised worries around the world because the technology operates with no dedicated regulation and little government oversight.
In an effort to persuade stakeholders to consider a regulatory framework for the technology, Elon Musk and many other well-known figures in the tech sector signed an open letter calling for a pause on any experiments more advanced than OpenAI’s GPT-4 until regulations are put in place.
OpenAI, the company behind ChatGPT, has shown support for regulating the technology in a number of ways, including by introducing this initiative. According to a statement from the company, OpenAI, Inc. “is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.”
The company added:
“The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence.”
Through the program, OpenAI hopes to realize its stated vision of a democratic process, which it defines as one in which “a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision-making process.”
Additionally, OpenAI made clear that the experiments would not be binding on its decisions, but would instead inform the development of democratic tools and shape future choices made using such tools.
The $100,000 grants will be given to teams that present compelling frameworks for addressing issues such as whether AI systems should criticize or condemn public figures, how AI should represent divergent viewpoints in its outputs, and whether AI should represent the ‘median individual’ from the user’s country or from around the world.
Because the process is meant to be democratic, winning teams will be required, after receiving the money, to demonstrate prototypes built with the participation of at least 500 people. Teams must also open-source all of their source code and make their intellectual property publicly accessible.
Is OpenAI Sincere in Seeking AI Regulation?
Some allege that Sam Altman’s and OpenAI’s push for legislation is driven purely by financial gain. There are a few reasons why taking part in shaping the regulations for one’s own sector can be quite advantageous.
The first is that complying with regulations typically comes at a cost to businesses, whether in direct fees or broader operational burdens. An established leader can absorb those costs more easily than a newcomer, so new regulations may only strengthen OpenAI’s position as the leading provider of generative AI while wiping out potential rivals.
Whether Altman is sincere or not, it will be difficult to trust any regulatory framework that emerges from this “democratized” effort, because OpenAI, which has an incentive to promote regulation favorable to itself, is directly paying for it.
The program was announced just three days after OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever openly called for an organization to regulate artificial intelligence on an international scale, akin to the one that oversees nuclear power.
The trio argued that, given the speed at which artificial intelligence is evolving, the present legislative framework cannot adequately oversee the technology. They said:
“We are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.”
It is unclear exactly what standard Altman holds such laws to: he claimed that the current European Union AI Act proposal was “overregulating” and backed its withdrawal, and he further cautioned that OpenAI would cease all activities in the region if it could not comply with EU standards.