In-reply-to » Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are implicitly learn ... ⌘ Read more

I like this comment from the Slashdot discussion linked above:

LLMs don’t have an understanding of anything. They can only regurgitate derivations of what they’ve been trained on and can’t apply that to something new in the same ways that humans or even other animals can. The models are just so large that the illusion is impressive.

So true.
