Inside the secret meeting that led to the AI political resistance

In early January, a group of 90 or so political, community and thought leaders gathered in a New Orleans Marriott for a secret conference on artificial intelligence — so secret, in fact, that no one knew who else had been invited until they walked into the room. Church leaders and conservative academics were sitting next to labor union representatives. Progressive power brokers who’d drafted Bernie Sanders to run for president suddenly found themselves breathing the same air as MAGA talking heads. And the AI thought leaders who’d invited them to New Orleans were hoping that none of them would kill each other.

On Wednesday, the Future of Life Institute, one of the most authoritative voices in the world of AI safety, released the results of that meeting: the Pro-Human AI Declaration, a concise document with five guidelines on how AI development must be centered on humanity first, with a pointed focus on preventing the concentration of power in the hands of a few; protecting the well-being of children, families and communities; and preserving human agency and liberty. It has the broadest range of signatories that I personally have ever seen on a single political document.

Powerful civic organizations well outside the tech world have signed onto the Declaration: major unions like the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild; religious organizations like the G20 Interfaith Forum Association and the Congress of Christian Leaders; the Progressive Democrats of America, the group that drafted Bernie Sanders to run as a Democrat in 2016; think tanks like the conservative Institute for Family Studies and advocacy groups like Parents RISE!.

The individual signatories range even further: former presidential candidate Ralph Nader, AFT president Randi Weingarten, Signal Foundation president Meredith Whittaker, The Blaze’s Glenn Beck, War Room’s Steve Bannon, Virgin Group founder Sir Richard Branson, former National Security Advisor Susan Rice, SAG-AFTRA members, leaders of major evangelical organizations. More are expected to sign on in the next several days.

The meeting was held under the Chatham House Rule, and the list of attendees remains private. But the participants who agreed to speak to The Verge about the experience said that they’d been invited by Max Tegmark, the co-founder of FLI and an MIT professor who had been named to the TIME 100 AI list. “We spent a lot of time talking to him over the course of the last few months,” Weingarten, a powerful teachers’ union advocate, told The Verge in a phone interview. Though she was unable to make it to New Orleans, she was involved in drafting the document, and she’d found remarkable similarities between FLI’s worldview and AFT’s own “common sense guardrails” for using AI in schools. “We’ve been on parallel tracks for quite a while without knowing it.”

Joe Allen, the cofounder of Humans First and a former correspondent for Bannon’s show War Room, told The Verge that Tegmark had also invited him to New Orleans, as well as to an earlier proof-of-concept meeting in Manhattan. Though the wide range of attendees was jarring and the political tensions weren’t completely gone, Allen was surprised by how quickly they all agreed on the same points: autonomous lethal weapons should not be solely AI-powered. AI companies should not leverage children’s emotional attachment for profit. AI should not be granted legal personhood. (The least popular position in the Declaration was still approved by 94% of attendees.)

“I think about it like, if there’s knowledge that there’s poison in the water supply, or that drugs are flooding schools — anything like that, in general — most people are going to be against it and it isn’t partisan,” he said. AI was slightly trickier in that people’s general opinion about specific AI models divided along party lines — Grok was the “based” AI and Anthropic was the “woke” AI — but to Allen, the distinction was meaningless. “Like, what does ‘based’ and ‘woke’ even mean at this point?”


Nearly a decade ago, FLI had laid out a more optimistic set of principles for AI research — 23 principles, to be exact, written during the 2017 Asilomar Conference for Beneficial AI, which drew over 100 tech luminaries of the day. Signatories and endorsers of the Asilomar AI Principles included AI leaders like Sam Altman, Elon Musk, and Demis Hassabis; luminaries like Stephen Hawking and Ray Kurzweil; and representatives from major companies like Google, Intel, and Apple.

But this time, no one from the industry was invited, to say nothing of people on the level of Altman and Musk. “That was actually a very deliberate design choice,” Emilia Javorsky, the director of the Futures Program at FLI, told The Verge. Whenever she’d attended conferences and events about AI’s impact across society, she noticed that corporate interests would eventually become the dominant perspective in the room, “just by nature of their size and weight and funding capabilities.” Instead, the invitees were from civil society organizations, all of whom were experiencing mass disruption due to artificial intelligence, and all of whom were fed up with Big Tech shrugging off their concerns.

Anthony Aguirre, another co-founder of FLI and a prominent cosmology professor at UC Santa Cruz, emphasized that this declaration was not their attempt to redo the Asilomar Principles, but a somber acknowledgement of a dark new reality — one where their former colleagues were now the heads of major corporations, trying to achieve artificial general intelligence before their rivals did and satisfy shareholders before addressing safety. The power to steer AI’s development was increasingly concentrated in the hands of the few, and the Trump administration’s aggressive deregulation had further empowered them. “Other than the overall mass of humanity, there was one entity that would have put meaningful control on what they could do, and that was the US government,” he told The Verge. “Now that it’s backing them and wants to keep them unrestrained, the only thing that’s a real threat are other companies.”


In the absence of Big Tech and public scrutiny, said Javorsky, there was something unique about how quickly this group coalesced around the same issues and came to the same conclusions. Over the course of the next few days, Javorsky kept hearing the same refrain: “‘We will not have the luxury of debating all of those other issues if we don’t get this thing right. So let’s get this thing right.’”

In Weingarten’s view, the Declaration served as the mission statement of what she called a “key demanding coalition” — a strategic alliance of political opponents — and a way to keep all their efforts coordinated against a government that elevated enterprise over society. “What is really important is that there are other people who have said, let’s try to create a bigger coalition to say that we need humanity to be at the center of AI,” she noted. On its own, AFT could have perhaps pushed the issue of child safety, but there was only so much pressure they could exert on lawmakers. But if they joined forces with several other trade unions, plus religious organizations, plus some allies on the other side of the aisle? Now those lawmakers would be nervous. “If the government won’t do it, then the people have to force the government to do it. And you start with a statement of principles.”

“If there’s one statement I would make about the whole thing, which is what I said to the group when I had their attention, is that no one is going to engineer a pro-human movement. The only thing you can do is inspire it,” said Allen. “I do think that statements like this should inspire a pro-human movement. Like a fundamental document that’s setting the tone…There’s no amount of social engineering, or money, or media, or any of that, that’s really gonna do it.”

Exactly what that looks like, however, remains unclear — or at least, not easily translated into elections. FLI is running an ad campaign called “Protect What’s Human,” but as a 501(c)(3), it cannot endorse or campaign for candidates or ballot initiatives during the midterms. It did, however, conduct a poll with Tavern Research in February, testing the popularity of the Declaration’s principles among voters. Though respondents split neatly down partisan lines in whom they voted for and which party they belonged to, they overwhelmingly supported the statements that appeared in the Declaration. The worst-performing principle — that AI must not create monopolies or concentrate control in a few hands — still garnered 69% support from respondents. The best-performing principle — that humans need to stay in charge of AI and prevent it from harming children, families, and communities — won 80% support.

To Javorsky, the poll results validated the conference’s points. “It’s one thing to have a whole bunch of civil society actors in a room together and think something’s representative. But you have to actually validate those with real people. This is actually resonating with them.”

When we spoke on Thursday, Anthropic, which had recently floated the possibility that its AI had gained consciousness, was in the middle of a fight with the Pentagon over whether the military could use its AI for autonomous lethal weapons without human oversight. By Friday evening, OpenAI had thrown Anthropic under the bus to score its own Pentagon contract. In the days after that fight resolved, the United States used Anthropic-powered tools to assassinate the Ayatollah of Iran, several more reports of looming AI layoffs emerged, and the scale of the Pentagon’s demands for mass surveillance became clearer. Alan Minsky, the CEO of the Progressive Democrats of America and a meeting attendee, told The Verge that he could not foresee any political opposition to the Declaration, from either the left or the right.

“Altman and Musk, certainly, have taken a flippant manner towards what are serious threats to communities: the psychological deterioration of a population that lives increasingly online, the impact of continual economic maldistribution of wealth, and, of course, contempt for the idea that basic protection must come before profits,” he said. “The risk of an existential threat to humanity is no longer something they even blink at. As the public realizes that this is their attitude, that they have utter contempt for the average person’s welfare — yes, we think the public will be on our side.”
