Can’t LLMs take an insane number of tokens as context now? (I think we’re up to 1M)
Anywho, he just like me fr
I’ve always said that Turing’s Imitation Game is a flawed way to determine if an AI is actually intelligent. The flaw is the assumption that humans are intelligent.
Humans are capable of intelligence, but most of the time we’re just responding to stimuli in predictable ways.
Look at it another way: we succeeded too well, and instead of making a superior AI we made a synthetic human with all our flaws.
Realistically, LLMs are just complex models built from our own past creations. So why wouldn’t they mirror their creator, good and bad?
Shit, another existential crisis. At least I’ll forget about it soon.
As a large language model, I cannot answer that question.