QuadratureSurfer@lemmy.world to Hardware@lemmy.ml • "Microsoft's 'Copilot+' AI PC requirements are embarrassing for Intel and AMD" (English)
I’m just glad to hear that they’re working on a way for us to run these models locally rather than forcing a connection to their servers…
Even if I would rather run my own models, at the very least this incentivizes Intel and AMD to start implementing NPUs (or maybe we’ll actually see plans for consumer-grade GPUs with more than 24 GB of VRAM?).
Similar use cases to what I’m doing right now: running LLMs like Mixtral 8x7B (or whatever’s better by the time we start seeing these), Whisper (speech-to-text), or Stable Diffusion.
I use a fine-tuned version of Mixtral (Dolphin Mixtral) for coding.
Transcribing live audio for notes/search, or translating audio from other languages with Whisper (especially useful for verifying claimed translations of Russian/Ukrainian/Hebrew/Arabic content, given all of the fake information being thrown around).
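For anyone wanting to try something similar, here’s a rough sketch of the notes/search side. The helper function and sample data are mine, but the segment shape matches what the open-source `openai-whisper` package returns from `model.transcribe(...)["segments"]` (in real use you’d get them via something like `whisper.load_model("medium").transcribe("clip.mp3", task="translate")`):

```python
def format_segments(segments):
    """Turn Whisper-style segment dicts into [mm:ss]-prefixed lines for notes/search."""
    lines = []
    for seg in segments:
        # Whisper reports segment start times in seconds (float)
        m, s = divmod(int(seg["start"]), 60)
        lines.append(f"[{m:02d}:{s:02d}] {seg['text'].strip()}")
    return "\n".join(lines)

# Hypothetical segments in the shape Whisper's transcribe() returns:
sample = [
    {"start": 0.0, "text": " Hello everyone"},
    {"start": 75.4, "text": " Let's check that translation"},
]
print(format_segments(sample))
# → [00:00] Hello everyone
#   [01:15] Let's check that translation
```

Timestamped lines like these are what make the transcripts actually searchable afterwards.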
Combine the two models above with a text-to-speech (TTS) system, a vision model like LLaVA, and some animatronics, and I’ll have my own personal GLaDOS: https://github.com/dnhkng/GlaDOS
And then there’s Stable Diffusion for generating images for DnD recaps, concept art, or even just avatar images.