Here’s why turning to AI to train future AIs may be a bad idea

Training large language models on their own data risks model collapse


Training generative AI models only on their own responses could increase bias in the text they create or cause that text to break down into nonsense, research suggests.

Vertigo3d/Getty Images

ChatGPT, Gemini, Copilot and other AI tools whip up impressive sentences and paragraphs from as little as a one-line text prompt. To generate those words, the underlying large language models were trained on reams of text written by humans and scraped from the internet.
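The core worry is easy to see even without a language model. The toy sketch below is not the researchers' actual setup; it assumes a simple one-dimensional Gaussian stands in for a "model" and repeatedly refits it to its own generated samples, generation after generation, to show how diversity can erode when each model learns only from the previous one's output.

```python
# Toy illustration of "model collapse" (illustrative only, not the study's method):
# a simple Gaussian "model" is repeatedly refit to data sampled from the previous
# generation's fitted model. Over generations, the rarer tails of the original
# human data tend to fade and the estimated spread drifts.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: stand-in for "human-written" data from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for generation in range(10):
    # "Train" the model: estimate the distribution from the current data.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on this model's own outputs.
    data = rng.normal(loc=mu, scale=sigma, size=1000)
```

Running this, the estimated standard deviation tends to wander away from the original value rather than staying put, a small-scale analogue of how repeatedly training on synthetic output can narrow or distort what a model "knows."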