The doctors were better, until someone yanked the tool away. That’s how every tool works! Even going from a handsaw to a table saw and back will make you lose some skill with the handsaw, because your brain moved on to higher-level goals and stopped practicing the finer motions. That’s not proof a table saw is bad for woodworking. The problem is “and back.”
since apparently AI can’t feed into AI without collapse
Have you checked on that narrative? It’s been a while. Generated images stopped drifting yellow. Improvements continued.
Have you checked on that narrative?
The only workaround known so far seems to be to make sure enough data is fresh: https://www.inria.fr/en/collapse-ia-generatives https://en.wikipedia.org/wiki/Model_collapse But read for yourself.
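For what it’s worth, that “workaround” is nothing exotic. Here’s a toy sketch of what “make sure enough data is fresh” amounts to in practice (my own illustration, not code from those links; every function is a hypothetical stand-in): each generation trains on a mix in which real data never drops out and the synthetic share is capped.

```python
import random

def train_model(data):
    # Hypothetical stand-in "model" that just memorizes its training set.
    return list(data)

def generate_samples(model, n):
    # Hypothetical stand-in generator: resamples from what the model "knows".
    return [random.choice(model) for _ in range(n)]

def training_mix(real, synthetic, synthetic_frac=0.3):
    # Cap the synthetic share so real (or freshly collected) data always
    # anchors the distribution -- the mitigation those links describe.
    cap = int(len(real) * synthetic_frac / (1 - synthetic_frac))
    return real + random.sample(synthetic, min(cap, len(synthetic)))

real = [f"human_example_{i}" for i in range(1000)]   # fresh human-made data
data = real
for generation in range(5):
    model = train_model(data)
    synthetic = generate_samples(model, n=1000)
    data = training_mix(real, synthetic)             # real data never drops out
```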
That’s a lot of “could” and “will” from a year-old article, primarily about concerns from two years ago, while image models today keep getting smaller and better. They didn’t find a second internet’s worth of JPEGs. Better training on the same data, or even better labels on less data, beats a simple obsession with scale.
Yes, photocopying a photocopy will degrade, but diffusion is a denoising algorithm: un-degrading an image is its central function. “Make it look less AI” is how you get generative adversarial networks.
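To be concrete, here’s a minimal sketch of one reverse-diffusion step in the standard DDPM formulation: estimate the noise, subtract it, repeat. The linear schedule is the textbook one; `predict_noise` is a hypothetical stand-in for a trained network, since only the shape of the update matters here.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t):
    # Hypothetical stand-in for a trained noise-prediction network eps(x_t, t).
    return np.zeros_like(x_t)

def denoise_step(x_t, t, rng):
    # One DDPM reverse step: x_t -> x_{t-1} via the posterior mean,
    # plus sampled noise on every step except the last.
    eps = predict_noise(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z   # sigma_t^2 = beta_t variant

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64, 3))      # start from pure noise
for t in reversed(range(T)):              # denoise all the way down
    x = denoise_step(x, t, rng)
```

Every sampling step is an un-degrading step; “degrade on recopy” is the opposite of what the sampler is built to do.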
Anyway, the grim truth is that the central concern is mistaken. Training data for cancer screening does not require that the patient lived.
The article links a study. What’s your study that collapse isn’t a concern?
For what it’s worth, my worry was never focused on cancer; these doctors were just an example where the likely universal unlearning effect was measured.
I again submit the last two years, in which model collapse did not happen. The doom-and-gloom predictions (some rather gleeful) plainly missed the mark. The proliferation of generated content has not in fact ruined the content generators, and it’s certainly not because we’re any good at marking generated content. Early symptoms went away entirely, and the problem has been practically addressed.
As for “unlearning,” its universality is exactly why it’s a made-up problem: every tool displaces some of the skill it replaces. Nobody loudly complains that X-rays make doctors worse at feeling around for lumps.