AI Models Are Undertrained by 100-1000 Times – AI Will Be Better With More Training Resources

The Chinchilla compute-optimal point for an 8B (8 billion parameter) model would be to train it for ~200B (billion) tokens (if you were only interested in getting the most "bang for the buck" with respect to model performance at that size). So this is training ~75X beyond that point, which is unusual, but personally, [Karpathy] thinks it is extremely welcome, because we all get a very capable model that is very small and easy to work with and run inference on. Meta mentions that even at this point, the model doesn't seem to be "converging" in the standard sense. In other words, the LLMs we work with all the time are significantly undertrained, by a factor of maybe 100-1000X or more, and nowhere near their point of convergence. [Karpathy] really hopes people carry this trend forward and start training and releasing even more long-trained, even smaller models.

Karpathy is saying that with more training compute and data, today's models could be trained much closer to convergence, yielding noticeably better AI performance at the same model size.
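
As a back-of-the-envelope sketch (not an exact reproduction of the Chinchilla fit), the compute-optimal data budget is often approximated as roughly 20-25 training tokens per model parameter, and Meta reported training Llama 3 on about 15 trillion tokens; both figures are assumptions plugged in below to recover the "~75X beyond optimal" claim:

```python
# Back-of-the-envelope check of the "~75X beyond compute-optimal" claim.
# Assumes the commonly cited ~20-25 tokens-per-parameter Chinchilla rule of
# thumb and Meta's reported ~15T-token Llama 3 pretraining budget.

params = 8e9                      # Llama 3 8B parameters
tokens_per_param = 25             # rough Chinchilla-style ratio (assumption)
actual_tokens = 15e12             # reported Llama 3 pretraining tokens

optimal_tokens = params * tokens_per_param          # ~2e11 = ~200B tokens
overtrain_factor = actual_tokens / optimal_tokens   # ~75x

print(f"Compute-optimal budget: ~{optimal_tokens/1e9:.0f}B tokens")
print(f"Overtraining factor:    ~{overtrain_factor:.0f}x")
```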

Congrats to @AIatMeta on Llama 3 release!! 🎉 https://t.co/fSw615zE8S
Notes:

Releasing 8B and 70B (both base and finetuned) models, strong-performing in their model class (but we’ll see when the rankings come in @ @lmsysorg :))
400B is still training, but already encroaching…

— Andrej Karpathy (@karpathy) April 18, 2024

If a large language model is undertrained by a factor of 1,000, it has not been trained on enough data, or for enough iterations, to reach its full potential. In other words, the model has not learned as much from the data as it could for the tasks it was designed for.

To illustrate this, let’s use an analogy. Imagine you’re trying to learn a new language. If you only study for 10 minutes a day, it will take you much longer to become fluent than if you studied for 10 hours a day. Similarly, if a large language model is trained on a small dataset or for a short period of time, it will not be able to learn as much as it could if it were trained on a larger dataset or for a longer period of time.

The performance of a large language model is often measured by its perplexity, the exponential of its average next-token cross-entropy loss, which reflects how well the model predicts the next word in a sequence. A lower perplexity indicates better performance. If a model is undertrained, its perplexity will be higher than it would be if the model were trained closer to convergence.
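
As a minimal sketch of that relationship (illustrative only, using made-up next-token probabilities rather than a real model), perplexity is just the exponential of the average negative log-probability the model assigns to the correct next tokens:

```python
import math

def perplexity(next_token_probs):
    """Perplexity = exp(average negative log-probability of the true next tokens)."""
    nll = [-math.log(p) for p in next_token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical probabilities a model assigns to the correct next word at each step.
well_trained = [0.6, 0.5, 0.7, 0.4]    # confident, mostly correct predictions
undertrained = [0.1, 0.05, 0.2, 0.08]  # probability spread over many wrong words

print(perplexity(well_trained))   # ~1.9: lower perplexity, better next-word prediction
print(perplexity(undertrained))   # ~10.6: higher perplexity, undertrained model
```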

How much improvement proper training can deliver depends on several factors, including the size of the model, the quality of the data, and the specific task the model is being trained for. In general, though, scaling-law studies show that loss keeps falling smoothly, if slowly, as training data grows, so a model trained well past its nominal compute-optimal point can still improve significantly.

For example, scaling a language model from 1.5 billion parameters (roughly GPT-2 scale) to 175 billion parameters (GPT-3 scale) was reported to yield as much as a 10-fold improvement on some tasks. This suggests that larger models can be far more powerful than smaller ones, but only if they are also trained on enough data.
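
To make the interaction between model size and data concrete, here is a hedged sketch of the parametric loss form fitted in the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens; the coefficients below are the approximate published fit and should be treated as illustrative rather than exact:

```python
# Chinchilla-style parametric loss: L(N, D) = E + A / N**alpha + B / D**beta
# N = parameters, D = training tokens. Coefficients are the approximate fit
# reported by Hoffmann et al. (2022); treat the exact values as illustrative.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# An 8B model at its ~200B-token "compute-optimal" point vs. much longer training:
print(loss(8e9, 200e9))   # compute-optimal budget for this size
print(loss(8e9, 15e12))   # same model, ~75x more data: lower loss, still improving
print(loss(8e9, 1e15))    # hypothetical far larger budget: loss keeps creeping down
```

The diminishing but non-zero returns in the last two lines are the quantitative version of "the model doesn't seem to be converging" in Karpathy's note above.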

In summary, an LLM that is undertrained by a factor of 1,000 has not seen nearly enough data or training iterations to reach its full potential. Trained closer to convergence, the same model could achieve a significant improvement in performance.

Together AI's RedPajama-Data-v2 dataset from Oct/2023 continues to hold the crown as the largest open text training dataset, with 30 trillion tokens in 125 terabytes. Notably, all major AI labs have now expanded beyond text into multimodal datasets, especially audio and video, for training frontier multimodal models like Gemini, Claude 3 Opus, GPT-4o, and beyond.
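
As a quick sanity check on those figures (rough arithmetic only; the 15-trillion-token training budget is an assumed Llama-3-scale example), 125 terabytes over 30 trillion tokens works out to roughly 4 bytes of raw text per token, which is in the ballpark for typical English tokenization:

```python
# Rough sanity check on the RedPajama-v2 figures quoted above (approximate).

corpus_bytes = 125e12        # ~125 TB of text
corpus_tokens = 30e12        # ~30 trillion tokens
train_tokens = 15e12         # assumed Llama-3-scale pretraining budget

bytes_per_token = corpus_bytes / corpus_tokens       # ~4.2 bytes of text per token
corpus_fraction = train_tokens / corpus_tokens       # ~half the corpus, once

print(f"~{bytes_per_token:.1f} bytes of raw text per token")
print(f"A {train_tokens/1e12:.0f}T-token run covers ~{corpus_fraction:.0%} of the corpus")
```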

What is in one of the major 5-trillion-token (20-30 terabyte) text AI training datasets?

