That’s the thing about LLMs though. They don’t understand anything at all. They just say stuff that sounds coherent. It turns out that if you just say whatever seems like the most reasonable response at all times, you can get pretty close to simulating understanding in some scenarios. But even though these newer language models are quite good at some things, they are no closer to understanding or conceptualizing anything.
I was using the term “understand” as shorthand for “trained on content containing the relevant material” and “given enough context about what is being asked.”