


  • You missed the point of my “can be wrong” bit. The focus was on the final clause of “and recognize that it was wrong”.

    But I’m kinda confused by your last post. You say that only computer scientists are giving it feedback on its “correctness” and therefore it can’t truly be conscious, but that’s trivially untrue and clearly irrelevant.

    First, feedback on correctness can be driven by end users. Anyone can tell ChatGPT “I don’t like the way you did that,” and it would be trivially easy to add that to a feedback loop that influences the model over time (rough sketch of what I mean at the bottom of this comment).

    Second, find me a person whose only feedback loop is internal. People are told “no, that’s wrong” or “you’ve messed that up” all the time. That’s what makes us grow as people, and it’s arguably the core underpinning of what makes something intelligent: the ability to take ideas from other people (computer scientists or not) and have them influence the way you think about things.

    Like, it seems like you think that the “consciousness program” you describe would count as an intelligence, but then say it doesn’t because it’s only getting its external information from computer scientists, which seems like a distinction without a difference.
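
    For concreteness, here’s a rough Python sketch of the kind of end-user feedback loop I mean. Nothing here is a real API; the file name and function names are made up for illustration. The point is just that capturing “I don’t like the way you did that” as data is trivial, and a periodic fine-tuning or preference-training job could consume the log later:

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "feedback.jsonl"  # made-up file name, purely illustrative

def record_feedback(prompt: str, response: str, rating: int) -> None:
    """Log one user reaction so it can later feed a fine-tuning or
    preference-training job. rating: +1 if approved, -1 if the user objected."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "rating": rating,
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def load_feedback() -> list[dict]:
    """Read the accumulated user judgments back as training examples."""
    with open(FEEDBACK_LOG, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# A user pushes back on an answer; the objection becomes training signal.
record_feedback(
    prompt="Summarize the report in two sentences.",
    response="(the model's summary)",
    rating=-1,
)
```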


  • I think literally all of those are scenarios a driving AI would be able to measure and heuristically say, “in scenarios like this from my training set, here’s what typically follows.” Like, do you think the training set has no instances of people pulling out of blind spots illegally? Of course that’s a scenario the model would have been trained on.

    And second, those are all scenarios that “real intelligences” fail at very regularly, so saying AI isn’t a real intelligence because it might fail in those scenarios doesn’t logically follow.

    But I think what you’re really arguing is that AI drivers aren’t as good as “actual intelligence” drivers, which is immaterial to the point I’m making, and is ultimately quantifiable. As the data comes in, we’ll know in a very objective way whether an AI driver is safer on average than a human. But regardless of the answer, it has no bearing on whether the AI is in fact “intelligent.” Blind people are intelligent, but I don’t want a blind person driving me around either.


  • The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and a feedback loop (there’s a rough sketch of that kind of wrapper at the end of this comment). So that limitation is artificial and easy to overcome, and has been done in a number of different studies.

    And it’s also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.

    And is having wants and desires necessary to be an “intelligence”? That’s getting into the philosophy side of the house, but I would argue that’s superfluous.
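
    For what it’s worth, here’s the rough sketch I mentioned above. `call_model` is a made-up stand-in for whatever text generator you’re wrapping, not a real API; the point is just that the “internal monologue” is an ordinary loop that feeds the model’s own output back to it as input before it answers:

```python
def call_model(prompt: str) -> str:
    """Stand-in for whatever text generator you're wrapping (not a real API)."""
    return f"(model output for: {prompt[:40]}...)"

def answer_with_monologue(question: str, rounds: int = 2) -> str:
    """Draft -> self-critique -> revise loop: the 'internal monologue' is just
    the model's own output fed back to it as input before the final answer."""
    draft = call_model(f"Answer this question: {question}")
    for _ in range(rounds):
        critique = call_model(
            f"Question: {question}\nDraft: {draft}\n"
            "Point out anything wrong or missing in the draft."
        )
        draft = call_model(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Write an improved answer."
        )
    return draft

print(answer_with_monologue("Why does the moon have phases?"))
```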


  • Okay, two things.

    First, that’s just not true. Current driving models track all moving objects around them and what they’re doing, including pedestrians and objects like balls. And that counts towards “things happening in the moment”. Everything in sensor range is stuff happening “in the moment”.

    Second, and more philosophically, humans also don’t know how to react to situations they’ve never seen before; they just make a best guess based on prior experience. That’s, like, arguably the definition of intelligence. The only real difference is that humans are better at it.


  • Skipping over the first two points, which I think we’re in agreement on.

    To the last, it sounds like you’re saying, “it can’t be intelligent because it’s wrong sometimes and doesn’t have a way to intrinsically know it was wrong.” My argument to that would be: neither do people. When you say something incorrect, it takes external input from some other source to alert you to that fact before you can correct it.

    That event can then be added to your “training set,” as it were, helping you avoid the mistake in the future. The same thing can be done with the AI: one more addition to the training set that’s “just enough to bridge that final gap” to the right answer.

    Maybe it’s slower at changing. Maybe it doesn’t make the exact decisions or changes a human would make. But does that mean it’s not “intelligent”? The same might be said for a dolphin or an octopus or an orangutan, all of which are widely considered to be intelligent.


  • I don’t really get the “what we are calling AI isn’t actual AI” take, as it seems to me to presuppose a definition of intelligence.

    Like, yes, ChatGPT and the like are stochastic machines built to generate reasonable-sounding text. We all get that. But can you prove to me that isn’t how actual “intelligence” works at its core?

    And you can argue that actual intelligence requires memories or long-running context, but it’s trivial to jerry-rig a framework around ChatGPT that provides exactly that (it’s been done a few times already; rough sketch at the end of this comment).

    Idk man, I have yet to see one of these videos actually take the time to explain what makes something “intelligent” and why they believe that’s the right definition of intelligence.

    Whether something is “actually” AI seems much more a question for a philosophy major than a computer science major.
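
    Since I keep saying the memory part is trivial to jerry-rig, here’s a rough Python sketch of what I mean. `call_model` and the `MemoryChat` class are made up for illustration, not a real API; the “memory” is just a transcript that gets prepended to each new prompt so a stateless model acts like it has long-running context:

```python
def call_model(prompt: str) -> str:
    """Stand-in for a stateless chat-completion call (not a real API)."""
    return f"(model output for: {prompt[:40]}...)"

class MemoryChat:
    """Jerry-rigged memory: keep a transcript and prepend the recent turns to
    every new prompt, so a stateless model behaves as if it remembers."""

    def __init__(self, max_turns: int = 20) -> None:
        self.history: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def ask(self, message: str) -> str:
        context = "\n".join(
            f"User: {u}\nAssistant: {a}"
            for u, a in self.history[-self.max_turns:]
        )
        reply = call_model(f"{context}\nUser: {message}\nAssistant:")
        self.history.append((message, reply))
        return reply

chat = MemoryChat()
chat.ask("My name is Sam.")
print(chat.ask("What's my name?"))  # the earlier turn rides along as context
```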