• 0 Posts
  • 112 Comments
Joined 2 years ago
Cake day: June 17th, 2023






  • Pennomi@lemmy.world to 196@lemmy.blahaj.zone · “have you rule?” · 16 upvotes · 3 months ago

    But let’s run the numbers: an RTX 3090 pulls around 450 watts (reference spec; it’s probably actually lower), and it reportedly takes around 40 seconds to generate an image using Flux on that card. Modern optimizations have cut that back a lot, but let’s just use these numbers. We’re looking at around 5 watt-hours of energy per image.

    So an artist’s brain (running at roughly 12 watts) is more efficient if they take less than ~25 minutes to create a picture.

    I mean, that’s assuming the output of the AI is actually worth anything, of course.
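
    A quick sanity check of that back-of-the-envelope math, as a throwaway Python sketch (the 450 W, 40 s, and 12 W figures are just the assumptions quoted above):

    ```python
    # Energy comparison using the figures assumed in the comment above.
    gpu_power_w = 450        # assumed RTX 3090 draw (reference spec)
    gen_time_s = 40          # assumed time per Flux image on that card

    # 450 W * 40 s = 18,000 J = 5 Wh per image
    energy_wh = gpu_power_w * gen_time_s / 3600

    brain_power_w = 12       # brain power figure used in the comment

    # Time at which the artist has burned the same energy: 5 Wh / 12 W ≈ 25 min
    break_even_min = energy_wh / brain_power_w * 60

    print(f"{energy_wh:.1f} Wh per image; break-even at about {break_even_min:.0f} minutes")
    ```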






  • Pennomi@lemmy.world to 196@lemmy.blahaj.zone · “Rule berry” · 100 upvotes · 4 months ago

    I suspect it’s more like “use the tool correctly or it will give bad results”.

    Like, LLMs are a marvel of engineering! But they’re also completely unreliable for use cases where you need consistent, logical results. So maybe we shouldn’t use them in places where we need consistent, logical results. That makes them unsafe to use in most business settings.