AI Manhattan Project Proposed. The US’ Suicide or Salvation?

Is AI A Nuclear-level Threat?

Shocked.

That’s how most people felt after reading the annual report by the US-China Economic and Security Review Commission. It concludes that the race to AGI (Artificial General Intelligence, or a ‘God AI’) is a matter of extreme national security and a guarantee of (or death sentence for) US global supremacy, similar to how nuclear bombs were treated decades ago.

Specifically, the report openly calls for a Manhattan Project-like program to ensure the US reaches AGI before China.

In other words, this report considers the development of AGI of similar strategic importance to the development of nuclear bombs.

But is the promise of building AGI indeed a matter of life and death for the US, and if so, is a Manhattan Project-like program a good idea or a national suicide?

You are probably sick of AI newsletters that simply report the news. That is easy, and anyone can do it, which is why there are so many and why you have grown to abhor them.

But explaining why it matters is another story. That requires knowledge, investigation, and deep thought… all attributes of the people who engage weekly with TheTechOasis, the newsletter that aims to answer the most pressing questions in AI in a thoughtful yet easy-to-follow way.

The Race is On

To understand why people are asking for nuclear-level treatment of AI, let’s revisit the defining moments that led to the development of nuclear weaponry.

A Chain Reaction of Events

When Otto Hahn and Fritz Strassmann demonstrated that you could bombard uranium with neutrons to split its nucleus into lighter elements (nuclear fission), thereby releasing insane amounts of energy, it only took a month for Lise Meitner and Otto Frisch to explain the phenomenon theoretically.

A few months afterward, Leo Szilard, who had theorized as early as 1933 that if neutrons striking a nucleus could release more neutrons, a chain reaction would follow (fission events triggering further fission events), proved alongside Enrico Fermi that a self-sustaining chain reaction could indeed emerge in materials like uranium.

This event prompted Einstein’s famous letter to FDR (Franklin Delano Roosevelt, the US President at the time), which argued that, based on Szilard and Fermi’s discovery, the risk that Germany could build a nuclear bomb was very real.

This letter kickstarted the race for the nuclear bomb and the creation of the Manhattan Project.

Two things could be done with this discovery:

  • Nuclear reactors, which operate at a multiplication factor of k = 1, meaning that, on average, each neutron-induced fission leads to exactly one further fission event. When this happens, we say the chain reaction has reached ‘criticality,’ meaning it is self-sustaining.

The heat liberated by the fission events turns water into steam, which is funneled through a turbine that spins a generator, producing electricity. That is how nuclear plants work.

  • Nuclear bombs, with k > 1, where enough fissile mass is assembled to sustain a supercritical reaction: each fission event produces more than one subsequent fission event, so the number of fissions grows exponentially, as the short sketch below illustrates. Since each fission liberates a lot of energy, this basically becomes an explosion releasing unfathomable amounts of energy and heat.
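To make the multiplication factor concrete, here is a toy sketch in Python. The numbers are purely illustrative, not real reactor physics: the fission count after g generations is simply n₀ · k^g, which stays flat at k = 1 and explodes for k > 1.

```python
# Toy model of the multiplication factor k: each generation of fissions
# triggers, on average, k fissions in the next generation.
# All numbers are illustrative, not actual reactor physics.

def fissions_after(k: float, generations: int, n0: float = 1.0) -> float:
    """Fission count after `generations` steps, starting from n0 events."""
    return n0 * k ** generations

for k in (0.90, 1.00, 1.05):
    print(f"k={k:.2f} -> {fissions_after(k, 100):>12,.2f} fissions after 100 generations")

# k=0.90 -> the reaction dies out (subcritical)
# k=1.00 -> steady, self-sustaining chain (critical: a reactor)
# k=1.05 -> ~131x growth and climbing (supercritical: a bomb)
```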

Why is heat a by-product of the reaction?

When a nucleus fissions, the combined mass of the resulting fragments is slightly smaller than the original mass. But as Lavoisier’s law of conservation of mass tells us, mass isn’t created or destroyed, only transformed.

Thus, that mass has to go somewhere. Luckily, Einstein proved the relationship between mass and energy (the mass-energy equivalence principle) through his E = m*c² formula. Hence, this “missing mass,” also known as the ‘mass defect,’ is actually transformed into kinetic energy (expressed as heat). And since ‘c’ is the speed of light, even a tiny mass defect yields an enormous energy release.
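As a quick back-of-envelope check (the one-gram fuel figure below is my own illustrative assumption, chosen only to show the scale):

```python
# E = m * c^2 for a small mass defect. Fissioning uranium converts roughly
# 0.1% of the fuel's mass into energy; the one-gram fuel figure is an
# illustrative assumption, not a reactor specification.

C = 299_792_458  # speed of light, in m/s

fuel_grams = 1.0
mass_defect_kg = (fuel_grams / 1000) * 0.001      # ~0.1% of the fuel's mass
energy_joules = mass_defect_kg * C ** 2

TNT_TON_JOULES = 4.184e9                          # energy of one ton of TNT
print(f"{energy_joules:.2e} J (~{energy_joules / TNT_TON_JOULES:.0f} tons of TNT)")
# ~9.0e10 J, on the order of 21 tons of TNT, from a single gram of fuel
```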

In nuclear reactors, this heat is used to create electricity, and in nuclear bombs, to kill people.

But from first principles, the idea is the same, which is fascinating considering the same phenomenon can be used to improve human lives… or end them.

But how does this relate to AI?

A Matter of National Security

The achievement of nuclear power before other nations cemented the US’ dominance for decades as a ‘natural’ deterrent. Now, the US-China Economic and Security Review Commission thinks AI should be as important to national security as nuclear fission.

But many doubts quickly come to mind. If we think about nuclear weapons, it didn’t take long before other powers developed the technology too, which eventually forced them to reach agreements on the management and treatment of nuclear power (in all its forms).

Thus, if other powers got there pretty quickly too (in fact, the Soviet Union holds the record for the largest nuclear bomb ever detonated, the Tsar Bomba, on October 30th, 1961), what’s the point of this Manhattan-like Project?

Simple: timing.

In other words, the Commission implicitly argues that the question isn’t whether other powers reach AGI too, but who gets there first. The original Manhattan Project was instrumental to the Allies’ victory in World War II and to the US’ ability to project power upon its enemies, but is AGI truly comparable in that regard?

But let’s not get ahead of ourselves; what on Earth is AGI?

Projection of Power & The Promise of Great Economic Value

Although many definitions are thrown around (as with everything in AI these days), the typical one is the moment when an AI (or group of AIs) can perform tasks at the level of virtuoso humans, in the 99th percentile of performance, according to Google DeepMind’s categorization (they define several levels of AGI; I’m going straight to the highest, which is the one the Commission most likely had in mind).

Alternatively, you can take Sam Altman’s definition of AGI as ‘the moment AI can execute most tasks of economic value.’

Either way, it’s viewed as a seminal moment for civilization: the moment machines can perform most economic activities autonomously. Indeed, that’s a compelling technology to have for a country intent on sustaining its power.

Just to name a few examples:

  • It would be extremely deflationary regarding the costs of producing goods and services, as AIs are far cheaper to serve by the hour than humans (if you don’t believe so, just look at GPT-4o mini’s price per token, and you tell me; see the back-of-envelope sketch after this list). This would make US companies unfathomably more competitive on a global scale.
  • As Sam Altman always mentions, if we truly build said AI, it would accelerate scientific discovery, which could lead to a given country improving its technology orders of magnitude faster than other nations.
  • Potentially, it would also give the US a cybersecurity advantage, as this ‘God AGI’ could be used to crack the security systems of other nations.
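To ground that first point, a quick sketch using GPT-4o mini’s approximate API prices at the time of writing (roughly $0.15 per million input tokens and $0.60 per million output tokens; treat both the prices and the workload below as assumptions, since they vary):

```python
# Back-of-envelope cost of AI text work at GPT-4o mini's approximate API
# prices at the time of writing (assumptions; prices change often).

USD_PER_M_INPUT = 0.15    # ~$0.15 per 1M input tokens
USD_PER_M_OUTPUT = 0.60   # ~$0.60 per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * USD_PER_M_INPUT + output_tokens * USD_PER_M_OUTPUT) / 1_000_000

# Illustrative workload: an hour of drafting, reading ~20k tokens and
# writing ~10k. The model bill is under a cent, versus tens of dollars
# per hour for a human doing comparable text work.
print(f"${cost_usd(20_000, 10_000):.4f}")  # -> $0.0090
```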

In short, it is equivalent to one country having electricity while the others heat their stoves with fire. A true differentiator.

But is this AGI vision actually possible? And does this ‘God AGI’ definition even make sense?

Let’s put our cynical hats on.

A Way to Secure The Survival of Incumbents

I would be extremely surprised if this Commission hasn’t been heavily lobbied by all AI incumbents.

The reason for this is simple:

Survival.

‘Math Ain’t Mathing’

Today, most frontier AI labs have almost identical tech features and capabilities.

You can count at least seven labs (OpenAI, Anthropic, Google DeepMind, xAI, and arguably the Chinese DeepSeek and Alibaba and the French Mistral) with almost identical technological IP.

While there are certainly product differences (ChatGPT is a superior product to Gemini, for instance), open-source looms as the biggest threat to their survival.

Mainly through Meta, Mistral, and the Chinese labs, the IP behind Large Language Models (LLMs), meaning how to train and run these models, is open to the public. Importantly, you can download the models for free and store them safely in your own IT systems.

This means developing an ‘AI moat’ is almost impossible. Even OpenAI’s Sora and o1 models, which were unique when they were released, have ultimately been matched by open source, the latter very recently by Nous Research’s Forge API and DeepSeek’s r1 lite model.

In other words, labs are investing billions of dollars into developing the next frontier AI models only to have to slash prices massively, even subsidizing clients, to remain competitive as rivals release comparable models to the public for free, leading to a burdensome race to the bottom on price.

And if you factor in that pre-training scaling laws may be plateauing, frontier labs find themselves in an almost impossible conundrum:

  • Far from impressive revenues (except for OpenAI), leading to frankly absurd valuations (xAI has just been valued at $50 billion which, against an estimated $100 million in annual recurring revenue, means it’s valued at 500 times projected revenues; utter madness).
  • Mounting investor pressure to deliver, especially on Big Tech, given its significant AI spending (more than $50 billion combined every quarter, or $200 billion per year).
  • No way to differentiate themselves besides product development and, especially, price.

Long story short, the party may stop anytime unless something happens.

Consequently, if the US Government suddenly dubs AI a matter of extreme national security, that would undoubtedly lead to a ban on open source, which would protect these for-profit labs by preventing anyone else from competing (good luck getting this past Congress or the FTC, but anything is possible in today’s state of the US).

Moreover, the other point that makes me cynical about all this is timing.

‘Gradually then Suddenly’

In Ernest Hemingway’s breakout novel, The Sun Also Rises, when Mike was asked how he went broke, he responded “gradually, then suddenly.” Now, this exact framing is being used to describe AI’s achievement of AGI.

In simple terms, the expectation is that the creation of AGI will be a ‘Trinity test’-type event: AI slowly becomes more intelligent until, suddenly, out of the blue, a God AGI is created (AI’s singularity, as many refer to it).

The Trinity test was the first successful nuclear bomb test. A few weeks later, Hiroshima and Nagasaki happened.

Gradually, then suddenly, World War II was over.

But will it, though? There are plenty of reasons to believe that won’t be the case.

For instance, with test-time training, the hyped new post-training approach to improving AI capabilities, AI is moving from learning by induction (seeing loads of data and finding general laws that apply to new data) to learning by transduction: performing small weight updates on each task so that the model learns at the same time it runs inference on that particular task, without aiming to generalize (induction is still the main learning method during pre-training; transduction is used in post-training).
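As a minimal sketch of the idea (assuming a PyTorch-style model; this illustrates transductive adaptation in general, not any lab’s actual recipe): clone the model, take a few gradient steps on the task’s own demonstrations, then answer with the adapted copy.

```python
import copy
import torch
import torch.nn as nn

def test_time_train(model: nn.Module, demo_x, demo_y, steps: int = 5, lr: float = 1e-4):
    """Adapt a throwaway copy of the model to a single task's demonstrations."""
    adapted = copy.deepcopy(model)            # the base model's weights stay untouched
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):                    # just a handful of weight updates
        opt.zero_grad()
        loss_fn(adapted(demo_x), demo_y).backward()
        opt.step()
    return adapted                            # specialized for this one task

# Toy usage: a tiny regression "task" (purely illustrative).
base = nn.Linear(4, 1)
specialist = test_time_train(base, torch.randn(8, 4), torch.randn(8, 1))
with torch.no_grad():
    answer = specialist(torch.randn(1, 4))    # inference with task-adapted weights
```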

This is extremely murky business for a foundation model: with excessive post-training transductive learning, it may narrow its capabilities to these new tasks to the detriment of its previously broad ones (i.e., fine-tuning on task-specific data may lead to broad capability loss).

Long story short, the point I’m trying to make is that it’s becoming clear that this ‘AGI’ we refer to won’t be a single model but a compound of models working together, each specialized in its own set of tasks.

And if that’s the case, there will be no such thing as ‘gradually, then suddenly.’ No singularity.

Therefore, in that event, this idea of racing toward AGI before others is pure nonsense, as timing won’t matter. If the US builds this AGI system in October 2045, China will have it by November 2045 or even the very same month.

All things considered, I can’t help but be extremely cynical about this idea, which is not good.
