Helen Toner and Tasha McCauley Argue That Regulation Is Essential to Control Market Forces for the Benefit of Humanity
Can private companies pushing forward the frontier of a revolutionary new technology be expected to operate in the interests of both their shareholders and the wider world?
When we were recruited to the board of OpenAI—Tasha in 2018 and Helen in 2021—we were cautiously optimistic that the company’s innovative approach to self-governance could offer a blueprint for responsible AI development. But based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives. With AI’s enormous potential for both positive and negative impact, it’s not sufficient to assume that such incentives will always be aligned with the public good.
For the rise of AI to benefit everyone, governments must begin building effective regulatory frameworks now.
If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI. The organisation was originally established as a non-profit with a laudable mission: to ensure that AGI, or artificial general intelligence—AI systems that are generally smarter than humans—would benefit “all of humanity”. Later, a for-profit subsidiary was created to raise the necessary capital, but the non-profit stayed in charge.
The stated purpose of this unusual structure was to protect the company’s ability to stick to its original mission, and the board’s mandate was to uphold that mission. It was unprecedented, but it seemed worth trying. Unfortunately it didn’t work.
Last November, in an effort to salvage this self-regulatory structure, the OpenAI board dismissed its CEO, Sam Altman. The board’s ability to uphold the company’s mission had become increasingly constrained by long-standing patterns of behaviour exhibited by Mr Altman, which, among other things, we believe undermined the board’s oversight of key decisions and internal safety protocols.
Multiple senior leaders had privately shared grave concerns with the board, saying they believed that Mr Altman cultivated “a toxic culture of lying” and engaged in “behaviour [that] can be characterised as psychological abuse”. According to OpenAI, an internal investigation found that the board had “acted within its broad discretion” to dismiss Mr Altman, but also concluded that his conduct did not “mandate removal”. OpenAI relayed few specifics justifying this conclusion, and it did not make the investigation report available to employees, the press or the public.
The question of whether such behaviour should generally “mandate removal” of a CEO is a discussion for another time. But in OpenAI’s specific case, given the board’s duty to provide independent oversight and protect the company’s public-interest mission, we stand by the board’s action to dismiss Mr Altman. We also feel that developments since he returned to the company—including his reinstatement to the board and the departure of senior safety-focused talent—bode ill for the OpenAI experiment in self-governance.
Our particular story offers the broader lesson that society must not let the roll-out of AI be controlled solely by private tech companies. Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions and without external oversight, this kind of self-regulation will prove unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.
And yet, in recent months, a rising chorus of voices—from Washington lawmakers to Silicon Valley investors—has advocated minimal government regulation of AI. Often, they draw parallels with the laissez-faire approach to the internet in the 1990s and the economic growth it spurred. However, this analogy is misleading.
Inside AI companies, and throughout the larger community of researchers and engineers in the field, the high stakes—and large risks—of developing increasingly advanced AI are widely acknowledged. In Mr Altman’s own words, “Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history.”
The level of concern expressed by many top AI scientists about the technology they themselves are building is well documented and very different from the optimistic attitudes of the programmers and network engineers who developed the early internet.
It is also far from clear that light-touch regulation of the internet has been an unalloyed good for society. Certainly, many successful tech businesses—and their investors—have benefited enormously from the lack of constraints on commerce online. It is less obvious that societies have struck the right balance when it comes to regulating to curb misinformation and disinformation on social media, child exploitation and human trafficking, and a growing youth mental-health crisis.
Goods, infrastructure and society are improved by regulation. It’s because of regulation that cars have seat belts and airbags, that we don’t worry about contaminated milk and that buildings are constructed to be accessible to all. Judicious regulation could ensure the benefits of AI are realised responsibly and more broadly. A good place to start would be policies that give governments more visibility into how the cutting edge of AI is progressing, such as transparency requirements and incident-tracking.
Of course, there are pitfalls to regulation, and these must be managed. Poorly designed regulation can place a disproportionate burden on smaller companies, stifling competition and innovation. It is crucial that policymakers act independently of leading AI companies when developing new rules.
They must be vigilant against loopholes, regulatory “moats” that shield early movers from competition, and the potential for regulatory capture. Indeed, in light of these pitfalls, Mr Altman’s own calls for AI regulation should be read as potentially self-serving. An appropriate regulatory framework will require agile adjustments, keeping pace with the world’s expanding grasp of AI’s capabilities.
Ultimately, we believe in AI’s potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritising the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology.
Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.
Source: The Economist