"i ain't reading all that" meme combined with the logos of various AI-related companies

Between the recent kerfuffle at OpenAI, statements from politicians about “risk of extinction from AI”, and AI investor/philanthropist Sam Bankman-Fried’s trial for fraud, there’s been a lot of talk lately about the dangers of artificial intelligence, AI regulation, and a movement called “effective altruism”.

After trying and failing to explain this brain-melting rabbit hole to my friends and family, I felt the need to write a simplified explanation of just what the heck is going on with major AI companies. My hope is that people outside of the tech bubble can be spared the psychic damage of being exposed to this content firsthand.


In the beginning, there was a group of science enthusiasts, authors, and computer scientists. After realizing that technology was getting ever more powerful and watching too much science fiction, they grew very worried about artificial intelligence. To be specific, they got scared that advanced computers would allow AI to become sentient and attain god-like powers, including the potential to end the entire human race if it so chose. This hypothesized world-changing event would become known as “The Singularity”.

Meanwhile, a handful of privileged philosophers from the University of Oxford and other fancy universities got together and said something like this to themselves:

“Philanthropy is pretty cool, but we can do it even better. We have the technology. We have the capability to optimize our donations to charity so that they’re super effective! Let’s come up with new kinds of Ethical Math so we can objectively calculate the best causes to support and make the world a better place!”

This idea sounds great in theory - I too love math, optimization, and helping people! Some of the early causes supported by the formulas were animal rights and disease prevention in Africa, both of which are cool with this African-American vegan. But outside of those successes, putting this system into practice very quickly became problematic, especially once AI got involved. Here’s how:

  1. Effective altruists (known as EAs) get to design the Ethical Math formulas and run the numbers all by themselves.
  2. By definition, any project that the Ethical Math formulas endorse is automatically considered one of the most important causes in the world. This is great for the egos of everyone involved.
  3. Once the EAs learned about The Singularity, they plugged “Evil God AI” into their Ethical Math formulas and discovered that stopping it from killing us is the most important project in the history of mankind. This is great for the egos of everyone involved. It’s also great for the wallets of some of the people involved, especially when those people work in the field of AI Safety and are already friends with rich EAs like Sam Bankman-Fried. Unfortunately, it’s not so great for people who are worried about real issues being caused by AI today.
  4. Next, effective altruists did some more Ethical Math and found that one of the most effective ways of spending money is by recruiting more effective altruists. The idea is that these new EAs will then spend their time and money on the most Effective causes while also Effectively recruiting more EAs. This is great for the egos of everyone involved.
  5. Then, they took some future population projections and started imagining how their Ethical Math formulas would affect the world 10, 100, 1000, or even 10,000 years from now, and called it Longtermism. These projections make the Ethical Math case for stopping the Evil God AI even more important, while also bringing other far-fetched ideas like colonizing space into play.
  6. Because stopping Evil God AI is one of the most important causes in history, EAs have started lobbying governments for strict regulations around large AI systems. This is great for the bank accounts of the big companies currently trying to make Good God AI and annoying for their competitors.

One byproduct of this Ethical Math is that an Evil God AI killing everyone becomes a worse problem than climate change “just” killing or displacing millions. This is a problem for rich techies who can afford to run from climate change but don’t think they can run from Evil God AI. Luckily, a Good God AI could potentially solve climate change with its “superintelligence”, which leads them to the conclusion that using lots of electricity trying to build it is actually good for the planet.
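For the curious, that ranking falls out of basic expected-value arithmetic. Here’s a rough sketch of how the math tends to go, with numbers that are purely made up for illustration (the probability and population figures are my assumptions, not anything from an official EA formula):

$$
\underbrace{0.01}_{\text{assumed chance of Evil God AI}} \times \underbrace{10^{15}}_{\text{hypothetical future people}} = 10^{13} \text{ “expected” lives lost} \;\gg\; 10^{7} \text{ people harmed by climate change}
$$

Multiply a small probability by a big enough imagined future population, and almost any present-day problem loses on paper.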

The effective altruist pitch of saving humanity using math and technology is also great for the egos of ambitious university students, which in turn makes for a good recruiting environment. For example, the student-run Existential Risk Initiative club at Yale recently rejected a prospective member due to her inexperience and told her to listen to more EA-related podcasts.

In 2023, we’re now in a position where a subset of people in this overlapping effective altruism/AI Safety network believe that God AI is going to exist very soon, possibly in the form of GPT. This has caused a schism within the movement about how, if at all, the God AI can be controlled by “aligning it with human values”. The pessimists within this group are currently losing their minds about the end of the world, while the optimists are excited for a new era of AI-guided humanity.

OpenAI is caught in the middle of this fight. The company was founded as a non-profit in 2015 before restructuring and accepting a multi-billion dollar investment from Microsoft a few years later, leading to uncertainty about the direction of the company with regard to safety and profit-seeking — disagreements which almost certainly played into Sam Altman’s firing.


Well, there you have it, folks. That’s pretty much the shortest possible explanation of all the strangeness in Silicon Valley that’s been in the news.

After reading this piece, one might come to the conclusion that the field of AI “existential risk” kinda seems like a new form of pseudo-religion that’s tied up in a bunch of corporate drama for clout and money.

That conclusion would be correct.