Post inspired by the bot threat that people on Lemmy have been talking about. I’m not asking how an expert would design it, but how you would design it if you were tasked with it.
Bard, however, does not seem to get the answer right:
Seems like it got kind of close with “The box is both yellow and red, so it is both good and happy”… but then it falls apart afterwards.
Edit: I tried to debate with it:
Me:
Bard:
Which is interesting, to say the least; it’s almost like it’s looking a bit too deeply into the question lol.
Bing had no trouble.
Bing is GPT-4 based, though I don’t think it’s the same version as ChatGPT. Either way, GPT-4 can solve these types of problems all day.
Not surprised. I got access to Bard a while back, and it hallucinates quite a lot more than even GPT-3.5.
Though doubling down on the wrong answer even when corrected is something I’ve seen GPT-4 do in some cases too. It seems like once it says something, it usually sticks to it.