• fnmain@programming.dev
    8 months ago

    Can’t LLMs take an insane number of tokens as context now? (I think we’re up to 1M.)

    Anywho, he just like me fr

  • SpaceCowboy@lemmy.ca
    8 months ago

    I’ve always said that Turing’s Imitation Game is a flawed way to determine if an AI is actually intelligent. The flaw is the assumption that humans are intelligent.

    Humans are capable of intelligence, but most of the time we’re just responding to stimuli in predictable ways.

  • Rhaedas@fedia.io
    8 months ago

    Look at this another way: we succeeded too well, and instead of making a superior AI, we made a synthetic human with all our flaws.

    Realistically, LLMs are just complex models built from our own past creations. So why wouldn’t they be a mirror of their creators, good and bad?