AI trained on AI churns out gibberish garbage

Mack DeGeurin is a tech reporter who’s spent years investigating where technology and politics collide. His work has previously appeared in Gizmodo, Insider, New York Magazine, and Vice.

Large language models like those offered by OpenAI and Google famously require vast troves of training data to work. The latest versions of these models have already scoured much of the existing internet, which has led some to fear there may not be enough new data left to train future iterations. Some prominent voices in the industry, like Meta CEO Mark Zuckerberg, have posited a solution to that data dilemma: simply train new AI systems on the outputs of older ones.

According to the researchers, that approach eventually breaks down. After enough generations of recursive training, models trained on other models' outputs drift so far from the original training data that they forget key aspects of it and lose the plot entirely. It's at this stage that models begin generating completely meaningless gibberish. When this happens, the researchers say, the model's "indiscriminate" self-cannibalizing of its own previous outputs "causes irreversible defects in the resulting model."
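The failure mode can be reproduced in miniature. The sketch below is a toy illustration, not the experiment from the study the article describes: each "generation" fits the simplest possible generative model, a Gaussian, to samples drawn from the previous generation's model, so every model trains only on its predecessor's outputs. Sampling error compounds with each step, and the learned distribution drifts away from the original data.

```python
# Toy illustration of "model collapse" (an assumption-laden sketch,
# not the researchers' actual setup). Generation 0 is "real" data;
# each later generation fits a Gaussian to samples produced by the
# previous generation's Gaussian, i.e., it trains only on AI output.
import numpy as np

rng = np.random.default_rng(seed=42)

n_samples = 50  # small samples make the compounding error visible
data = rng.normal(0.0, 1.0, n_samples)  # generation 0: original data

for gen in range(1, 101):
    mu, sigma = data.mean(), data.std(ddof=1)  # "train" on current data
    data = rng.normal(mu, sigma, n_samples)    # next gen sees only outputs
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run it and the printed standard deviation tends to wander and eventually shrink toward zero: rare "tail" behavior vanishes first, a toy analogue of the forgetting the researchers describe.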

 
