Rethinking The Doomsday Clamor That Generative AI Will Fall Apart Due To Catastrophic Model Collapse



Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) whose AI columns have amassed over 7.4 million views. A seasoned CIO/CTO executive and high-tech entrepreneur, he combines practical industry experience with deep academic research.

There is an ongoing and quite heated debate over whether generative AI and large language models (LLMs) will end up collapsing. In today’s column, I continue my coverage of the latest trends and controversies in the field of AI, especially in the realm of generative AI. The focus of this discussion is the hair-raising claim that generative AI will suffer catastrophic model collapse, a contention that has garnered keen interest and plenty of anxious hand-wringing.

Why would anyone believe or assert that generative AI and LLMs are heading toward a massive catastrophic collapse? As I just mentioned, generative AI and LLMs are devised by scanning lots and lots of data. Without that data, the spate of modern-day AI that seems so fluent would still be stuck in the backwater dark days of clunky, old-fashioned natural language processing.

One research analysis has warned: “Our analysis indicates that the stock of high-quality language data will be exhausted soon; likely before 2026.” That forecast puts some weighty questions into play.

Some charge that humans might indeed stop writing due to becoming reliant on generative AI; as a society, we would somehow decide that writing by human hand is no longer needed and rely totally and exclusively upon generative AI to do our writing. I am skeptical about that particular proposition. A more commonly proposed remedy for the looming data shortage is synthetic data, namely text that generative AI itself produces. The beauty of this approach is that we can make pretty much as much synthetic data as we want. Consider telling a generative AI app to start rattling off everything that can be said about the life of Abraham Lincoln: tell the story of Honest Abe over and over again, in dozens, hundreds, thousands of variations. The volume of data produced could be astronomical, as the sketch below suggests.
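To make the idea concrete, here is a minimal Python sketch of mass-producing such variations. Everything in it is illustrative: llm_generate is a hypothetical placeholder, not a real library call, and the list of framing styles is my own assumption.

```python
# Hypothetical stand-in for a real generative AI API call; swap in
# your provider's client library here (this stub is NOT a real API).
def llm_generate(prompt: str) -> str:
    raise NotImplementedError("connect a real generative AI service")

# Assumed framing styles for varying the retellings (illustrative only).
STYLES = [
    "as a straightforward biography",
    "as a chronological timeline",
    "as a children's bedtime story",
    "as a newspaper retrospective",
]

def synthesize_corpus(topic: str, variations_per_style: int) -> list[str]:
    """Mass-produce synthetic essays on one topic in many variations."""
    corpus = []
    for style in STYLES:
        for i in range(variations_per_style):
            prompt = f"Tell the story of {topic} {style}, variation #{i + 1}."
            corpus.append(llm_generate(prompt))
    return corpus

# synthesize_corpus("Abraham Lincoln", 1_000) would yield 4,000 essays.
```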

It is the old line about what happens when you make a copy of a copy: the copy loses something in the process and isn’t of the same high quality as the original. Make a copy of that copy, and the result gets worse; each successive copy degrades further. Suppose the widely and wildly popular ChatGPT were further data-trained using synthetic data. First, we tell ChatGPT to generate zillions of essays on gazillions of topics, producing a ton of synthetic data. Next, we feed that data back into ChatGPT, performing additional data re-training of the AI app. If one round of this is seemingly good, we ought to repeat our endeavors.
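For intuition on why each round can degrade matters, here is a self-contained toy simulation (my own illustrative sketch, not anything from an actual LLM pipeline). Each “generation” is a model that knows only the mean and spread of its training data, generates a finite synthetic corpus, and is then refit on that corpus. With a finite sample, the fitted spread tends to shrink generation after generation, like a copy of a copy losing fidelity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100       # finite training set per generation (assumed)
n_generations = 30

mu, sigma = 0.0, 1.0  # generation 0 is trained on "real" data

for gen in range(n_generations):
    synthetic = rng.normal(mu, sigma, n_samples)   # model emits a corpus
    mu, sigma = synthetic.mean(), synthetic.std()  # next model refits on it
    print(f"gen {gen + 1:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```

Run it and the printed sigma drifts downward over the generations: the re-trained models become progressively narrower and more confident about an ever smaller slice of what the original data contained.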

Others would argue that you are deluding yourself if that’s what you think would arise. They contend that you are going to suffer the so-called curse of recursion: upon using recursion in this fashion, the generative AI is going to be mashed into pure drivel. Much like the game of telephone or the plot of the movie noted earlier, your generative AI will sink to a new low, unrecognizable relative to the stellar capabilities it once had. One way the curse bites is shown in the sketch below.
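The curse shows up most vividly in the tails of the data. Here is another toy sketch (again my own illustration, with an assumed vocabulary size and corpus size): a model repeatedly retrained on its own finite output permanently loses rare tokens, because once a token fails to appear in one generation’s corpus, no later generation can ever produce it again.

```python
import numpy as np

rng = np.random.default_rng(1)

vocab_size = 50
probs = np.ones(vocab_size)
probs[5:] = 0.05              # tokens 5..49 are the rare "tail"
probs /= probs.sum()

n_samples = 200               # corpus size per generation (assumed)
for gen in range(20):
    corpus = rng.choice(vocab_size, size=n_samples, p=probs)
    counts = np.bincount(corpus, minlength=vocab_size)
    probs = counts / counts.sum()          # retrain on the synthetic corpus
    surviving = int((probs > 0).sum())
    print(f"gen {gen + 1:2d}: {surviving} of {vocab_size} tokens survive")
```

Generation by generation, the surviving vocabulary shrinks toward only the most common tokens, which is the flattening-into-drivel that the curse-of-recursion camp is worried about.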

 
