Effective Altruism (EA) gets some attention from Politico for being a "hot new philosophy" funded by deep pockets:
EAs are particularly fixated on the possibility that future AI systems could combine with gene synthesis tools and other technologies to create bioweapons that kill billions of people — a phenomenon that’s given more traditional AI and biosecurity researchers a front row seat as Silicon Valley’s hot new philosophy spreads across Washington.
Many of those researchers claim that EA’s billionaire backers — who often possess close personal and financial ties to companies like OpenAI and Anthropic — are trying to distract Washington from examining AI’s real-world impact, including its tendency to promote racial or gender bias, undermine privacy and weaken copyright protections.
...
The generally white and privileged backgrounds of EA adherents have also prompted suspicion in Washington, particularly among Black lawmakers concerned about how existing AI systems can harm marginalized communities.
So on the one hand you have lobbyists (Effective Altruism) concerned that AI could kill billions of people, and on the other hand you have people concerned that all this doomsaying distracts from DEI concerns. Hmm. Let's weigh these concerns. Clearly the first should have the higher priority, right?
Nope.
Here's a quote from liberal technocrat Cory Booker--
“I don’t mean to create stereotypes of tech bros, but we know that this is not an area that often selects for diversity of America,” Sen. Cory Booker (D-N.J.) told POLITICO in September.
“This idea that we’re going to somehow get to a point where we’re going to be living in a Terminator nightmare — yeah, I’m concerned about those existential things,” Booker said. “But the immediacy of what we’ve already been using — most Americans don’t realize that AI is already out there, from resumé selection to what ads I’m seeing on my phone.”
Of course some "AI apocalypse" (eye roll) is a possibility, but let's get real. What's important is getting marginalized folks up the meritocratic ladder so they can get their share of the technocracy and its bounty.
While the article does its due diligence in quoting sane people who are EA allies, its main thrust is to suggest that Effective Altruism is a "cult" of fanatical true believers who have more influence than they should because they get a lot of funding from Dustin Moskovitz and Open Philanthropy. They are bothering everybody with all their doom and gloom.
It's like a scene in Don't Look Up.
I don't know, maybe it is cult-ish. But who cares? It's like saying that Daniel Ellsberg took himself too seriously. That's just bad form. Not done. These EA folks are naifs who don't understand the Washington game. They're too sincere. They just don't fit here. They should go back to flaky California, where they do.
The article ends with these lines--
... many AI and biosecurity experts continue to fret that deep-pocketed doomsayers are distracting Washington with fears of the AI apocalypse.
“The irony is that these are people who firmly believe that they’re doing good,” Connell said. “And it’s really heartbreaking.”
Poor dears. It's so touching that they should be concerned about the future of humanity. They just don't understand how Washington works.