Elon Musk Wants to ‘Pause’ GPT-5 to Save Humanity, but Should We?

In an unprecedented move, a group of leading thinkers and prominent public figures, including Tesla CEO Elon Musk and Yoshua Bengio, have signed an open letter petitioning for a halt to the development of more advanced Generative AI models.

What's at stake? Potentially, the survival of humanity.

However, the consequences of such a halt could themselves be disastrous, leaving us in a far worse scenario than not stopping Generative AI at all.

GPT-5: Building what we don’t understand

There’s a huge problem with all the impressive developments we’re seeing from the likes of ChatGPT, Claude, Stable Diffusion, and Bard: we’re building high-impact technologies that we don’t understand and, even worse, can’t control.

A parrot like no other

Funnily enough, explaining how ChatGPT works is extremely easy… although nuanced.

Like any other Large Language Model (LLM), GPT-4, the LLM behind ChatGPT, has a really straightforward way of working.

For every text or image it receives, it outputs the most probable next token (a chunk of roughly three to four characters; a word can be a single token or several of them), drawn from a probability distribution.

In layman’s terms, it lays out a list of potential token candidates and chooses the one with the highest probability of being correct.
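To make that concrete, here is a minimal sketch of what "pick the most probable next token" looks like in code. The vocabulary and the scores are made up for illustration; a real model scores every token in a vocabulary of roughly 100,000 entries.

```python
import math

# Toy illustration only: a made-up five-token vocabulary and made-up raw scores (logits).
vocabulary = ["cat", "dog", "car", "the", "ran"]
logits = [2.1, 1.3, -0.5, 0.2, 1.7]  # raw scores for the next token

# Softmax turns the raw scores into a probability distribution that sums to 1.
exp_scores = [math.exp(s) for s in logits]
total = sum(exp_scores)
probs = [e / total for e in exp_scores]

# Greedy decoding: pick the token with the highest probability.
best = max(range(len(vocabulary)), key=lambda i: probs[i])
print(vocabulary[best], round(probs[best], 3))
```

In practice, decoders often sample from this distribution (controlled by a temperature setting) instead of always taking the single most probable token, but the principle is the same.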

In other words, it isn’t giving a second thought to the meaning of what it’s outputting.

Arguably, it doesn’t understand the response it’s giving you.

Looked at this way, these tools can be seen as “stochastic parrots”: models that essentially mimic, or “parrot,” human language by generating text based on the patterns and probabilities they learned from large datasets of text during training.

But if you give that description to Sam Altman, the CEO of OpenAI, he will most probably disagree with you, and in all likelihood will define GPT-4 not as a probabilistic engine, but as a reasoning engine.

And this is due to the still poorly understood phenomenon of emergent behaviors: capabilities these LLMs develop as they increase in size.

GPT-5: Learning beyond our expectations

Large Language Models are not large… they are absurdly huge.

For reference, Meta’s smallest model has ‘only’ 7 billion parameters.

Therefore, explaining how these 7 billion parameters jointly decide on the best response to an input is outright impossible, at least by today’s standards.

The only thing we know is that these models predict reasonable responses impressively well, but we can’t explain, at the neuron level, how they learned to do it or why they chose the token they did.

Also, as they’re great imitators, we don’t know if they are merely imitating reasoning or actually reasoning their way to their responses.

The reason is that, although we fully understand the training procedure (minimizing cross-entropy), we can’t explain what other complex capabilities, like the capacity to reason, the model has developed along the way.
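To ground that phrase: "minimizing cross-entropy" just means penalizing the model by the negative log of the probability it assigned to the token that actually came next, then nudging the weights so that penalty shrinks. A toy sketch with made-up numbers:

```python
import math

# Toy sketch: a made-up predicted distribution over a five-token vocabulary,
# and the token that actually comes next in the training text.
predicted_probs = {"cat": 0.60, "dog": 0.20, "car": 0.05, "the": 0.10, "ran": 0.05}
actual_next_token = "dog"

# Cross-entropy loss for this position: minus the log of the probability
# the model assigned to the token that really appeared.
loss = -math.log(predicted_probs[actual_next_token])
print(round(loss, 3))  # ~1.609; training adjusts the weights so this number shrinks
```

Everything beyond this simple objective, including any apparent capacity to reason, is emergent rather than explicitly programmed.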

And this awesome feature of these models is, at the same time, their greatest threat.

GPT-5: An emergent reasoning behavior

When you give a text to GPT-4, it automatically tokenizes it (breaks it into subunits of data called tokens) and encodes it, compressing it into numeric vectors, called embeddings, that capture the context of the text.
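The tokenization half of that step is easy to see for yourself. A quick sketch using OpenAI's open-source tiktoken library, assuming it is installed (cl100k_base is the publicly documented encoding for GPT-4-era models):

```python
# Requires: pip install tiktoken (OpenAI's open-source tokenizer library).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Elon Musk wants to pause GPT-5"
token_ids = enc.encode(text)

print(token_ids)                             # a short list of integers
print([enc.decode([t]) for t in token_ids])  # the text chunk behind each integer
```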

The reason for this numeric conversion is two-fold:

  • Machines only understand numbers, not letters
  • With embeddings, we can measure relatedness (the closeness between vectors) and thus teach machines to understand complex relationships in natural language (see the sketch below).

These embeddings are then sent through the model’s latent space (the compressed internal space shaped by the model’s learned parameters and probabilities), and the model constructs its answer based on the input text and those learned representations.
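How "relatedness" is measured in practice is typically cosine similarity between embedding vectors. A minimal sketch with made-up four-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and their values are learned, not hand-written):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy embeddings for three words.
embedding = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.8, 0.9, 0.2, 0.3],
    "car":   [0.1, 0.2, 0.9, 0.7],
}

print(cosine_similarity(embedding["king"], embedding["queen"]))  # high: related concepts
print(cosine_similarity(embedding["king"], embedding["car"]))    # lower: unrelated concepts
```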

When the model is sufficiently trained, this process achieves amazing results, as you’re probably well aware by now.

But it has a huge problem.

LLMs are black boxes, and that’s a serious issue.

GPT-5: Emerging threats

Since neural networks are loosely based on the way our brains work, and as we know very little about our very own thinking muscles, we fall strikingly short of explaining how our most advanced neural network systems, like GPT-4, work.

This leaves us in a situation where we can hypothesize about what the model is doing, but we have no way whatsoever of proving it.

In other words, we may suspect that GPT-4 is a reasoning engine that has moved beyond mere raw probability distributions, but we can’t prove it.

Hence, if we suspect that our models are developing new emergent behaviors as they grow, how can we guarantee that they won’t develop behaviors that pose an existential threat to our very survival?

And that fear alone is what has led this group of people to sign an open letter calling for a halt to the development of “better-than-GPT-4” models.

But is this feasible, and what could be the consequences?

Scary implications galore

Artificial Intelligence has a critical difference from any other technology.

We simply don’t understand it.

And the more you know about it, the scarier it becomes, as Paul Graham beautifully stated in a recent tweet.

Thus, creating a machine with intelligence that surpasses humans could represent one of the biggest innovations in the history of mankind and, at the same time, our own demise.

Ungoverned progress

As the open letter states, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources”.

This, added to the “out-of-control” race between AI labs that the letter argues the industry is locked in, creates the perfect recipe for clumsy, no-guardrails-at-all decision-making in the name of profits and margins.

Thus, as these models develop new behaviors and “intelligent” capabilities, the sheer absence of control and understanding over them leaves humans very exposed to the impending risks these technologies bring.

The risks are such that, according to the signatories of this letter, we’re at risk of “losing control of our civilization”.

The matter is so serious that they demand that, unless AI research labs agree to this halt, governments step in and impose moratoriums (outright prohibitions) on these developments.

Reckless measures for desperate times.

But how could we bring AI to a halt?

GPT-5: No easy way

As Jon Stokes brilliantly described in a blog post, the best way to cripple the AI industry is to attack its most revered resource: GPUs.

GPUs have become so important to the AI industry that they are even used as a yardstick for how advanced an AI company or lab is.

Demand for this hardware is so high that some chips, like the NVIDIA A100, sell for up to $10,000 apiece.

Therefore, as these chips are required both for training AI models and for running inference (executing the models), seizing them would automatically cripple the industry and pause development.

But such a measure isn’t simple to pull off.

It would require a coordinated global effort from every country in the world, countries that have conflicting interests and that are explicitly “at war” when it comes to developing cutting-edge AI.


Information Source: https://medium.com/@ignacio.de.gregorio.noblejas/elon-musk-wants-to-pause-gpt-5-a81421c5bcbf
