PauseAI Newsletter: August 12, 2024
A new robot steps onto the stage, AI lobbyists fight against California regulation, and the Shakespearean OpenAI saga continues.
This issue’s Guest Post describes the long and storied history of OpenAI, and how, exactly, we got this close to the brink.
By PauseAI volunteer Aslam Husain
Figure 02 Steps Onto the Stage
On August 6th, the company Figure introduced Figure 02, which they described as “The world’s first commercially-viable autonomous humanoid robot.” Figure 02 features OpenAI-powered speech-to-speech reasoning, which, as shown in the product demonstration of its predecessor Figure 01, allows the robot to understand and respond to a user’s voice commands while describing and interacting with its immediate environment.
Though widespread rollout of Figure 02 has not yet occurred, there already appears to be interest in practical applications of this technology. BMW has tested the robot in its manufacturing plant, where Figure 02 “successfully completed production steps that can save employees from having to perform ergonomically awkward and tiring tasks.” While saving employees from potentially strenuous or dangerous work is commendable, many people are concerned about the possibility of jobs being automated out of existence. According to a 2023 Gallup poll, “three in four Americans believe AI will reduce jobs.” While automation has traditionally existed alongside workers, a robot able to take on more human-like abilities could ultimately replace human workers altogether. As AI and robotics scale, it may become less expensive to purchase robots than to pay human workers, who require health insurance and sick leave on top of their usual salary.
The foray of AI-integrated robots into the manufacturing space is not only concerning with regard to job security; it also represents an initial incursion of AI into the systems that power and shape our world. Considering the serious risks that future AI may pose to humanity, any new bridge linking AI to the physical world gives a powerful AI misaligned with human values another avenue to use our systems in harmful or destructive ways. While this is a longer-term concern, handing AI the keys to manufacturing, or to any critical infrastructure, creates a potential lose-condition for humanity.
Even with current AI technology, there is the potential for hallucinations (where the AI confidently states incorrect information), which could be dangerous in a factory environment where accuracy may be vital. Jailbreaking is another concern: bad actors may use the AI linked to Figure 02 in ways neither OpenAI nor Figure intended. Hallucinations and jailbreaking would be extremely dangerous not only in a factory environment, but in any environment in which a humanoid robot could be deployed. If the goal of automation is to spare workers dangerous tasks, it is all the more critical to ensure that the automation tools themselves meet a high standard of safety.
Given the current and future risks the merger of AI and robotics poses, it is vital to pause the development of AI systems until we are confident they can be safely deployed. At a minimum, we need time to put regulations and testing standards in place, and we must be able to demonstrate that those standards have been met.
AI lobbyists fight against California regulation, while scientists and the general public support it
AI lobbyists have worked themselves into a frenzy over SB-1047, a California bill which would be the first in the US to regulate frontier AI models. The bill’s provisions include:
Mandating pre-deployment safety testing and cybersecurity protections for models trained with more than 10^26 FLOP of compute — a threshold above that of current frontier models;
Enabling the California Attorney General to hold AI developers accountable if their models cause mass casualties or $500 million in damages;
Establishing whistleblower protections for AI company employees reporting unsafe practices.
Two of the loudest critics of this bill have been Y Combinator and Andreessen Horowitz, both influential firms with deep ties to the AI industry. Marc Andreessen (of Andreessen Horowitz) sits on the board of Facebook, and Y Combinator has invested in OpenAI.
These firms have spread brazen misinformation about SB-1047 — including that the bill would send model developers to jail for failing to anticipate misuse (unambiguously false) and that the bill will stifle innovation and restrict startups (also false, as the bill’s provisions only apply to training runs above $100 million). In a response letter, Sen. Scott Wiener, SB-1047’s lead sponsor, refuted each of these claims.
The few AI scientists who oppose the bill — such as Meta Chief AI Scientist Yann LeCun or “godmother of AI” Fei-Fei Li — often have financial incentives to do so. Meta is one of the few AI companies large enough to be affected by the bill’s provisions, and Li’s billion-dollar startup received investment backing from Andreessen Horowitz.
Meanwhile, the most knowledgeable AI scientists with the fewest conflicts of interest support the bill. An open letter by Geoffrey Hinton, Yoshua Bengio, and Stuart Russell — the two most-cited AI researchers on Earth, plus the author of the most widely used AI textbook — states that:
“As some of the experts who understand these systems most, we can say confidently that these risks are probable and significant enough to make safety testing and common-sense precautions necessary.”
The general public agrees, overwhelmingly. Polling shows that 65% of Californians support SB-1047, with support from both Democrats and Republicans. Only 23% say the bill should be made less strict.
Even some smaller AI companies are behind the bill. Simon Last, co-founder of the AI company Notion, argues that the bill will establish common-sense standards without hurting startup innovation. This perspective makes complete sense, considering that SB-1047 would apply only to the developers of the largest and most dangerous models — despite claims to the contrary.
Will truth, expertise, and public support be enough to counteract tech billionaires who lie through their teeth? We will soon find out.
SB-1047 is not enough to keep us safe. It is certainly nowhere near an AI Pause — it doesn’t cap the size of training runs, allowing companies to build ever-larger models (provided they can pass safety tests). The open letter by the world’s leading AI scientists reflects this fact, calling the bill the “bare minimum for effective regulation of this technology.”
But the bill, at least, would get the ball rolling, paving the way for more ambitious federal legislation in the coming years. Yet even that will not be enough: no matter how ambitious any one country’s laws become, the need for international regulation will remain just as urgent as it is today.
The fact that big tech will fight tooth and nail against even this light-touch regulation shows that they will stop at nothing when more ambitious regulation — enough to actually keep us safe — is on the table. The best defense we have is widespread, grassroots activism. These firms may have the money and the lobbyists, but we have the scientists and the people. And we are just getting started.
Whether or not SB-1047 passes despite the protestations of the AI lobbyists, we should not rest for a moment. There’s still work to be done.
Thank you for reading PauseAI’s newsletter. If you support our efforts to achieve a safer world, please consider donating to PauseAI.
We are a small, grassroots group, up against the biggest AI companies in the world. We rely on donations to fund volunteer events, initiatives, community projects, and much more.
You can donate on our website: https://pauseai.info/donate
This issue was coauthored by PauseAI members Aslam Husain, Bridgett Kay, and Felix De Simone.
Thank you again for what you do.