• 1rre@discuss.tchncs.de · 18 hours ago

    After being skeptical for a while, I’ve started using AI pretty heavily for writing code in languages I’m not as confident in (especially JS and SQL), as well as for code which can be described briefly but is tedious to write. I think the problem here is “by”: it would be better to say “with”.

    You don’t say that 90% of code was written by code completion plugins, because it still takes someone to pick the right suggestion from the list, check the docs to see it’s right, etc.

    It’s the same for AI. I check the “thinking”/planning logs to make sure the logic is right. Sometimes it is, sometimes it isn’t, at which point you can write a brief pseudocode outline of what you want it to do. Sometimes it starts on the right path and then goes off, at which point you can say “no, go back to this point”, and generally that works well.
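
    For example, a brief might look something like the comment block below (the task, names, and TypeScript are invented for illustration; this is the sort of routine code the model then fills in):

        // Brief handed to the model:
        //   take an array of orders {userId, amount},
        //   group by userId, sum the amounts per user,
        //   return pairs sorted by total, descending.

        type Order = { userId: string; amount: number };

        function totalsByUser(orders: Order[]): [string, number][] {
          const totals = new Map<string, number>();
          for (const { userId, amount } of orders) {
            totals.set(userId, (totals.get(userId) ?? 0) + amount);
          }
          // sort the summed totals, highest first
          return [...totals.entries()].sort((a, b) => b[1] - a[1]);
        }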

    I’d say this kind of code is maybe 30-50% of what I write, the other 50-70% being more technically complex and in a language I’m more experienced in. So I can’t fully believe the 30% figure: some people are wasting time by not using it where it would be a speedup, while others are using it too much and wasting time trying to implement things more complex than it’s capable of. The latter irks me especially, after having to spend 3½ hours yesterday reviewing a new hire’s MR, time they could’ve spent actually learning the libraries, or that I could’ve spent implementing the whole ticket with some left over to teach them.

    • TonyTonyChopper@mander.xyz · 14 hours ago

      Large language models can’t think. The “thinking” they spit out to explain the other text they spit out is pure bullshit.

      • 1rre@discuss.tchncs.de · 14 hours ago

        Why do you think I said "thinking"/planning instead of just calling it thinking…

        The “thinking” stage is actually just planning: it lists out the facts and then tries to find inconsistencies, patterns, solutions, etc. I think planning is a perfectly reasonable thing to call it, as it matches the distinction between planning and execution in other algorithms, like navigation.

        • AliasAKA@lemmy.world · 14 hours ago

          “Thinking” is just an arbitrary process for generating additional prompt tokens. The vendors have realized that people suck at writing prompts, and that their models lack causal or state models of anything; the models are simply good at substituting words into a context similar enough to the prompt they’re given. So the fix for sucky prompt writing, and a way to sell people on the models’ capacity (think full self driving: it’s never been full self driving, but it’s marketed that way to make people think it’s super capable), is to have the model look up better templates within its training data that tend to result in better looking and sounding answers.

          The thinking is not thinking. It’s fancier probabilistic lookup.
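
          A toy sketch of what I mean (entirely my own illustration, not any real model’s internals): the “reasoning” trace and the answer come out of the same next-token sampler, so the trace is just more generated text, not a separate faculty.

              // Toy bigram "model": both phases use the exact same sampler.
              const table: Record<string, string[]> = {
                "<think>": ["the", "user"],
                the: ["user", "answer"],
                user: ["wants", "asked"],
                wants: ["the"],
                asked: ["the"],
                answer: ["is"],
                is: ["<end>"],
              };

              function nextToken(prev: string): string {
                const options = table[prev] ?? ["<end>"];
                return options[Math.floor(Math.random() * options.length)];
              }

              function generate(start: string, maxLen: number): string[] {
                const out = [start];
                while (out.length < maxLen && out[out.length - 1] !== "<end>") {
                  out.push(nextToken(out[out.length - 1]));
                }
                return out;
              }

              // Same loop, different start token: "thinking" is just more sampling.
              console.log(generate("<think>", 8)); // pseudo reasoning trace
              console.log(generate("the", 8));     // pseudo answer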

      • 1rre@discuss.tchncs.de · 15 hours ago

        That kind of matches my experience, but some of the negatives they bring up can be fixed by monitoring thinking mode. If it starts to make assumptions on your behalf, or goes down the wrong path, you can interrupt it and tell it to pursue the correct line without polluting the context.