AI Models Are Undertrained by 100-1000 Times – AI Will Be Better With More Training Resources

The Chinchilla compute-optimal point for an 8B (8 billion parameter) model would be to train it for ~200B (billion) tokens (if you were only interested in getting the most “bang for the buck” with respect to model performance at that size). So this is training ~75X beyond that point, which is unusual, but personally [Karpathy] thinks this is extremely welcome, because we all get a very capable model that is very small and easy to work with and run inference on. Meta mentions that even at this point, the model doesn’t seem to be “converging” in a standard sense. In other words, the LLMs we work with all the time are significantly undertrained, by a factor of maybe 100-1000X or more, nowhere near their point of convergence. [Karpathy] really hopes people carry this trend forward and start training and releasing even longer-trained, even smaller models.

Karpathy seems to be saying that with more training compute and data, models at today’s sizes could be trained much closer to their ideal level, which would meaningfully improve AI performance.
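As a rough sanity check on those figures, here is a back-of-the-envelope sketch. It assumes the commonly cited Chinchilla rule of thumb of roughly 20-25 training tokens per parameter and Meta’s reported ~15 trillion training tokens for Llama 3 8B; the exact ratio depends on the fitted scaling-law constants.

```python
# Back-of-the-envelope check of the "75X beyond Chinchilla-optimal" claim.
# Assumes ~20-25 training tokens per parameter as the compute-optimal ratio.

params = 8e9                    # Llama 3 8B parameters
tokens_per_param = 25           # rough Chinchilla-optimal ratio (~20-25)
optimal_tokens = params * tokens_per_param   # ~200B tokens

actual_tokens = 15e12           # Llama 3 was reportedly trained on ~15T tokens

print(f"Compute-optimal budget: ~{optimal_tokens / 1e9:.0f}B tokens")
print(f"Actual training budget: ~{actual_tokens / 1e12:.0f}T tokens")
print(f"Over-training factor:   ~{actual_tokens / optimal_tokens:.0f}x")
# -> ~200B tokens optimal, ~15T actual, ~75x beyond the optimal point
```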

Congrats to @AIatMeta on Llama 3 release!! 🎉 https://t.co/fSw615zE8S
Notes:

Releasing 8B and 70B (both base and finetuned) models, strong-performing in their model class (but we’ll see when the rankings come in @ @lmsysorg :))
400B is still training, but already encroaching…

— Andrej Karpathy (@karpathy) April 18, 2024

If a large language model is undertrained by 1000 times, it means that the model has not been trained on a sufficient amount of data or for a sufficient number of iterations to reach its full potential. In other words, the model has not learned enough from the data to perform well on the tasks it was designed for.

To illustrate this, let’s use an analogy. Imagine you’re trying to learn a new language. If you only study for 10 minutes a day, it will take you much longer to become fluent than if you studied for 10 hours a day. Similarly, if a large language model is trained on a small dataset or for a short period of time, it will not be able to learn as much as it could if it were trained on a larger dataset or for a longer period of time.

The performance of a large language model is often measured in terms of its perplexity, which is a measure of how well the model predicts the next word in a sequence. A lower perplexity score indicates better performance. If a model is undertrained, its perplexity score will be higher than it could be if it were trained properly.
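To make that concrete, here is a minimal sketch of how perplexity relates to the probabilities a model assigns to the true next tokens; the probability values below are invented purely for illustration.

```python
import math

# Perplexity is the exponential of the average negative log-likelihood (NLL)
# the model assigns to each actual next token. Toy, made-up probabilities:
predicted_probs = [0.40, 0.10, 0.25, 0.05, 0.30]   # P(actual next token)

avg_nll = -sum(math.log(p) for p in predicted_probs) / len(predicted_probs)
perplexity = math.exp(avg_nll)

print(f"Average NLL: {avg_nll:.3f} nats")
print(f"Perplexity:  {perplexity:.2f}")
# A better-trained model assigns higher probability to the true next token,
# which lowers the average NLL and therefore the perplexity.
```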

The amount of improvement that can be achieved by training a model properly depends on a variety of factors, including the size of the model, the quality of the data, and the specific task the model is being trained for. However, in general, it is possible for a model to achieve a significant improvement in performance if it is trained properly.
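One way to see how data volume drives performance is the parametric scaling law fitted in the Chinchilla paper, which models loss as L(N, D) = E + A/N^α + B/D^β for N parameters and D training tokens. The sketch below evaluates that formula with the approximate constants reported by Hoffmann et al. (2022); treat the outputs as illustrative rather than as predictions for any particular model.

```python
# Chinchilla-style parametric scaling law:
#   L(N, D) = E + A / N**alpha + B / D**beta
# Constants are the approximate fitted values reported by Hoffmann et al. (2022).
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

N = 8e9  # an 8B-parameter model
for D in (2e11, 2e12, 2e13):  # 200B, 2T, 20T training tokens
    print(f"D = {D:.0e} tokens -> predicted loss {predicted_loss(N, D):.3f}")
# Loss keeps falling (with diminishing returns) as D grows, which is the sense
# in which a model stopped at the compute-optimal point is still "undertrained".
```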

For example, scaling a large language model from 1.5 billion parameters to 175 billion parameters (roughly the jump from GPT-2 to GPT-3) has been reported to yield order-of-magnitude improvements on some tasks. This suggests that larger models can be far more capable than smaller ones, but only if they are trained properly.

In summary, if a large language model is undertrained by 1000 times, it has not seen nearly enough data or training iterations to reach its full potential, and training it further could yield a significant improvement in performance.

Together AI’s RedPajama dataset (RedPajama-Data-v2, released in October 2023) continues to hold the crown with 30 trillion tokens in 125 terabytes. Notably, all major AI labs have now expanded beyond text into multimodal datasets, especially audio and video, for training frontier multimodal models like Gemini, Claude 3 Opus, GPT-4o, and beyond.
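As a quick consistency check on those dataset figures, the implied storage-per-token ratio can be computed directly. This assumes decimal terabytes; the roughly 4 bytes per token it yields matches the usual rule of thumb for tokenized English web text.

```python
# Consistency check on the RedPajama-v2 figures quoted above:
# ~30 trillion tokens stored in ~125 terabytes of text.
tokens = 30e12
size_bytes = 125e12          # 125 TB (decimal terabytes)

bytes_per_token = size_bytes / tokens
print(f"Implied ratio: ~{bytes_per_token:.1f} bytes per token")
# ~4.2 bytes/token, in line with the common estimate that one BPE token
# covers roughly 3-5 bytes of English web text.
```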

What is in one of the major 5 trillion token (20-30 Terabyte) text AI training datasets?

