

Twitter was a thing because celebrities won’t text at you directly.
Not that texting can’t be toxic and social, though. Whatsapp groups are a major vector of far right radicalization, among other super shady stuff. All social media was a mistake.
Honestly, I think it’s because Europeans went to sleep and let the Americans alone in there for too long.
Best I can tell from the rather aimless freakout is that some obscure metric went down and people took it as a sign of failure or something. The first weird thing I saw was some guy claiming he wouldn’t leave Twitter because half his followers wouldn’t come with him to BS, and then it died down before the mass US hysteria today.
I don’t know if the verification thing has anything to do with it, frankly.
This is a fair point. Although it’d be fun, in the absolutely extreme scenario people are presenting here, if all the coders lost their coding jobs but got coding jobs generating random AI training data instead.
Some of the ways both critics and shills present the future of this technology are kinda nuts.
But that point is not the same as LLMs degrading when trained on their own data.
Again, it may be the same as the problem of “how do you separate AI generated data from human generated data”, so a filtering issue.
But it’s not the same as the problem of degradation due to self-training. Which I’m fairly sure you’re also misrepresenting, but I REALLY don’t want to get into that.
But hey, if you don’t want to keep talking about this that’s your prerogative. I just want to make it very clear that the reasons why that’s… just not a thing have nothing to do with training on AI-generated data. Your depiction is a wild extrapolation even if you were right about how poisonous AI-generated data is.
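Just to make the “filtering issue” concrete, here’s a rough sketch of what that would even mean in practice. Everything in it is hypothetical: the detector function is a stand-in, since nobody actually has a reliable “was this written by a machine” classifier, which is kind of the whole problem.

```python
# Hypothetical sketch only: assumes some detector exists that scores how
# machine-generated a code sample looks. No real classifier is implied.
def filter_corpus(samples, looks_machine_generated, threshold=0.5):
    """Keep the samples the (imaginary) detector considers human-written."""
    kept = []
    for sample in samples:
        score = looks_machine_generated(sample)  # 0.0 = human-ish, 1.0 = machine-ish
        if score < threshold:
            kept.append(sample)
    return kept
```

That’s the shape of it: a data curation problem, not some mechanism by which training implodes on contact with machine output.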
Hm. That’s rolling the argument back a few steps there. None of the stuff we’ve talked about in the past few posts has anything to do with the impact of AI-on-AI training.
I mean, you could stretch the idea and argue that there is a filtering problem to be solved or whatever, but that aside everything I’m saying would still be true if AI training exploded any time it’s accidentally given a “Hello world” written by a machine.
I appreciate that you didn’t mean to say what you said, but words mean things. I can only respond to what you say, not what you meant.
Especially here, where the difference entirely changes whether you’re right or not.
Because no, “less human code” doesn’t mean “less AI training”. It could mean a slowdown in how fast you can expand the training dataset, but again, old code doesn’t disappear just because you used it for training before. You don’t need a novel training dataset to train. The same data we have plus a little bit of new data is MORE training data, not less.
And less human code is absolutely not the same thing as “new human code will stop being created”. That’s not even a slip of the tongue, those are entirely different concepts.
There is a big difference between arguing that the pace of improvement will slow down (which is probably true even without any data scarcity) and saying that a lack of new human created code will bring AI training to a halt. That is flat out not a thing.
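To put that in toy numbers (completely made up, just to show the direction of the math): even if new human-written code slows to a trickle, the corpus available for training only ever grows, because the old stuff doesn’t go anywhere.

```python
# Toy illustration only; the figures are invented.
corpus = 100.0  # arbitrary units of existing human-written code
new_per_year = [10.0, 5.0, 2.0, 1.0]  # hypothetical slowdown in new human code

for year, added in enumerate(new_per_year, start=1):
    corpus += added
    print(f"year {year}: corpus = {corpus} units")  # strictly increasing: 110, 115, 117, 118
```

Slower growth, sure. “Less training data”, no.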
That this leads to “less developments and advancements in programming in general” is also a wild claim. How many brilliant programmers need to get replaced by AI before that’s true? Which fields are generating “developments and advancements in programming”? Are those fields being targeted by AI replacements? More or less than other fields? Does that come from academia or the private sector? Is the pace of development slowing down specifically in that area? Is AI generating “developments and advancements” of its own? Is it doing so faster or slower than human coders? Not at all?
People say a lot of stuff here. Again, on both sides of the aisle. If you know the answers to any of those questions you shouldn’t be arguing on the Internet, you should be investing in tech stock. Try to do something positive with the money after, too.
I’d say it’s more likely you’re just wildly extrapolating from relatively high level observations, though.
You are saying a lot of things that sound good to you without much grounding. Your claim that this is a “widespread and significant issue” is going to need some backing up. I may be cautious about not claiming more knowledge than I have, but I know enough to tell you that it’s not particularly well understood, that nobody is in a position to predict the workarounds, and that it’s by no means the only major issue. The social media answer would be to go look it up, but it’s the weekend and I refuse to let you give me homework. I have better things to do today.
That’s the problem with being cautious about things. Not everybody has to be. Not everybody knows they should, or when. I don’t know if you’re Dunning-Kruger incarnate or an expert talking down to me (pretty sure it’s not the second, though).
And I’m pretty sure of that because yeah, it is an infinite doomsday slippery slope scenario. That I happen to know well enough to not have to be cautious about not having done all the reading.
I mean, your original scenario is that. You’re sort of walking it back here where it’s just some effect, not the endgame. And because now you’re not saying “if AI actually replaces programmers wholesale” anymore the entire calculation is different. It goes back to my original point: What data will AI use to train? The same data they have now. Because it will NOT in fact replace programmers wholesale and the data is not fungible, so there still will be human-generated code to train on (and whatever the equivalent high enough quality hybrid or machine-generated code is that clears the bar).
AI has a problem with running out of (good) data to train on, but that only tells you there is a hard limit to the current processes, which we already knew. Whether current AI is as good as it’s going to get or there is a new major breakthrough in training or model design left to be discovered is anybody’s guess.
If there is one, then the counter gets reset and we will see how far that can take the technology, I suppose. If there is not, then we know how far we’ve taken it and we can see how far it’s growing and how quickly it’s plateauing. There is no reason to believe it will get worse, though.
Will companies leap into it too quickly? They already have. We’re talking about a thing that’s in the past. But the current iteration of the tech is incapable of removing programmers from the equation. At most it’s a more practical reference tool and a way to blast past trivial tasks. There is no doomsday loop to be had unless the landscape shifts significantly, despite what AI shills have been trying to sell people. This is what pisses me off the most about this conversation, the critics are buying into the narrative of the shills aggressively in ways that don’t really hold up to scrutiny for either camp.
My “credibility on this topic” is of zero interest to me. I am not here to appeal to authority. I know you didn’t mean it like that, but man, it’s such a social media argument point to make it jumped right at me. For the record, it’s not that I haven’t heard about problems with training on AI-generated content (and on filtering out that content). It’s that I don’t need to flaunt my e-dick and will openly admit when I haven’t gone deep into an issue. I have not read the papers I’ve heard of and I have not specifically searched for more of them, so I’ll get back to you on that one if and when I do.
Anyway, that aside, you are presenting a bizarre scenario. You’re arguing that corporations will be demonstrably worse off by moving all coding to be machine-generated but they will do it anyway. Ad infinitum. Until there are no human coders left. At which point they will somehow keep doing it despite the fact that AI training would have entirely unraveled as a process by then.
Think you may have extrapolated a bit too far on that one? I think you may have extrapolated a bit too far on that one. Corpos can do a lot of dumb shit, but they tend to be very sensitive about stuff that costs them money. And even if that wasn’t the case, the insane volume of cheap skilled labor that would generate pretty much guarantees some competing upstart would replace them with the, in your sci-fi scenario, massively superior alternative.
FWIW, no, that’s not the same as outsourcing. Outsourcing hasn’t “often been a bad idea”. Having been on both sides of that conversation, it’s “a bad idea” when you have a home base with no incentive to help their outsourced peers and a superiority complex. There’s nothing inherently worse about an outsourced worker/developer. The thing that closes the gap on outsourcing cost/performance is, if anything, that over time outsourced workers get good and expect to get paid to match. I am pretty much okay with every part of that loop. Different pet peeve, though, we may want to avoid that rabbit hole.
Bit of a tautology, that. Presumably for AI to “replace programmers wholesale” it would need to produce human-quality code. Presumably human-quality code would not degrade anything because it’d be… you know, human-quality.
From what I can tell the degradation you’re talking about relates to natural language data. Stuff like physics simulations seems to be working fine to train models for other tasks, and presumably functional code is functional code. I don’t know if there is any specific analysis about code, though, I’ve only seen a couple of studies and then only as amplified by press.
I haven’t looked into it specifically because it really seems like a bit of a pointless hypothetical. Either AI can get better from the training data available or it can’t, and then it is as good as it’s going to get unless the training methods themselves improve. At the moment there sure seems to be a ton of research claiming both that growth will continue and that growth is stalling, and both keep getting disproven by implementation almost faster than the analysis can be produced.
This argument mostly matters to investors itching to get ahead of a trend where they can fully automate a bunch of jobs and services and I’m more than happy to see them miss that mark and learn what the tech can do the hard way.
To be absolutely clear, AI is not “going to put everything else out of business”. Certainly LLMs won’t. Not even in programming.
It’s bad enough when people spend longer berating the OP for their question-asking etiquette than it would take to answer the question.
However, it’s nothing compared to the absolute deviants who do provide an answer but do so in a deliberately oblique fashion that requires much more research to understand than the original problem.
It’s volunteer tech support, not testing that I’m pure of heart so I can access a mystic sword, you can just say the thing.
I mean… the same data they use now? And presumably other LLM output based on that, which is something that may or may not affect things a lot, nobody really knows.
Even if AIs consumed data when they trained on it so it couldn’t be used for training again, which they do not, it’s not like code stops being created, stored and datamined by the people who own the creation and storage.
See also “the Linux community”.
ducks for cover
Oh, yeah, that’s a branch of this argument I had almost forgotten. Such violent swings in the stylization wars.
I think these days it’s less aesthetics/graphics and it’s more photorealistic graphics/minimalist graphics, except minimalist graphics don’t register as graphics at all in some cases.
In the middle there we also have the “graphics haven’t improved since the Xbox 360” crowd. I think remembering that we spent like a decade playing games in black and white will become the new “PSOne games looked terrible and we didn’t realize” in a minute. It’s due, because now we’re in the wave of “PSOne games looked awesome, here’s a lo-fi stylized game people think took no effort to make for some reason” after people stopped referring to pixel art as “retro”.
I have to say I wasn’t ready for how much getting old makes these nerdy arguments start to pile up in sediment layers. It’s been a long trip.
No, there are definitely tradeoffs with TAA. Just… not extreme ghosting trails like the stuff you posted unless something is kinda glitchy. Which is where the weird layers of misinformation seem to be creeping out. You have a layer of people talking about how they find soft looking TAA images annoying and what seems to be an expanding blob of people attributing a whole bunch of other stuff to the thing as if it was the standard, which it absolutely isn’t.
FWIW, I took a peek at that subreddit and it’s mostly relatively informed nerds obsessing over maxing out for a specific thing (edge sharpness, presumably) over anything else. I was pleasantly surprised to see they’re not as much of a cultish thing where soft edges or upscaling are anathema and instead they mostly seem interested in sharing examples of places where temporal upscaling works better/worse than TAA.
Most of them are doing so in video so compressed it’s impossible to tell what looks better or worse at all, but hey, it’s at least not entirely delusional.
Somebody once told me about cinephiles that “some people really like movies, other people really like the movies they like”.
I like games, man.
There are very few types of games I outright reject. At most I’ll tell you I’m pretty antisocial and I don’t like multiplayer stuff as much, but it’s not a hard rule.
It’s not “an Nvidia-only thing”. It’s not a thing at all.
I mean, ghosting artifacts are a thing. Normally not a TAA thing, they are typically a problem with older upscaling methods (your FSR 1s and whatnot). You caaaan get something like that with bad raytracing denoising, but it doesn’t look like that. And your examples are extreme, so it’s either an edge case with a particular situation and a particular configuration or something else entirely.
This is one of those wild claims that become hard to disprove by being so detached from reality it’s hard to start. How do I disprove the idea that hundreds of millions of people who have been playing games using TAA for about a decade are all just ignoring vaseline-smeared visuals on par with the absolute worst artifacts of early DLSS? I mean, I can tell you I played multiple games today and none of them do that, that I’ve played a ton of Cyberpunk and it doesn’t do that and that this is not the default state of a very well understood piece of technology.
It feels weird to even try to be nice about it and bargain. You MAY have stumbled upon a particular bug in a particular config or a game. You MAY be just mistaking “TAA” for temporal upscaling and just using some weird iteration of it in a worst case scenario. I mean, if you’re not outright trolling I can see what you call “too mild in most cases” just being some DLSS ghosting and you’re just lumping several things that cause ghosting as “TAA”. But all that is just… too much credit to the idea, if I’m being honest.
I’d still like to know what specific GPUs you’re using and how you set up the games when it “happened” on all those computers. Direct video capture wouldn’t be a bad idea, either. I don’t know why I’m even entertaining this as anything other than some weird videogame iteration of flat earth stuff, but I’m still fascinated by how brazen it is and kinda want to know now.
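For reference, since “TAA” is doing a lot of heavy lifting in this thread: at its core it just blends the current frame with a reprojected history buffer, and the history gets clamped against the current frame’s neighborhood precisely so stale colors (i.e. ghosting) get rejected. Here’s a minimal per-pixel sketch, very much simplified and not lifted from any particular engine; real implementations live in shaders and add motion vectors, variance clipping and so on.

```python
import numpy as np

# Simplified TAA resolve for one pixel. Illustrative only; not any engine's actual code.
def taa_resolve(current, history_reprojected, neighborhood_min, neighborhood_max, alpha=0.1):
    # Clamp the reprojected history to the current frame's local color range.
    # This rejection step is what keeps stale history (ghost trails) in check;
    # skip it or get it wrong and you get the smearing people complain about.
    history_clamped = np.clip(history_reprojected, neighborhood_min, neighborhood_max)
    # Blend mostly history with a little of the current frame to smooth aliasing over time.
    return (1.0 - alpha) * history_clamped + alpha * current
```

The point being that heavy ghosting is a sign the rejection step is missing or broken somewhere, not the baseline behavior of the technique.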
This view on things where a guy in a suit is telling a bunch of passionate artists how to do their day to day jobs is entirely separate from reality.
Don’t get me wrong, there are plenty of big, mercenary operations out there, but this is a) not how those play out, and b) very easy to sus out without needing a roll call of the studio.
Case in point: Baldur’s Gate 3, which people keep weirdly excluding from “AAA”, was made by a large team that ballooned into a huge team during the course of development. That never seemed like a budget, size or graphics problem to me. Or about what degree people in the studio happen to have, for that matter.
If you don’t want to play hyper-compulsive live services built around a monetization strategy, that is perfectly legitimate. Gaming is very broad and not everybody needs to like everything. It’s just the categorizations people use to reference things they supposedly dislike that seem weird to me.
That’s a very random question asked in a very random place.
I don’t know, what year are we on and how am I feeling that day? I’ve played thousands of games, “my favorite” is entirely meaningless.
Currently I’m trying to find time to get through Expedition 33, and I just found out that there is apparently a Tetris the Grand Master 4, so I’m messing around with that. I’ve booted up Capcom Fighting Collection 2, and I am staring at the 80 bucks price point on Doom Dark Ages and reminding myself I won’t have time to play that for at least a few weeks and should wait. Steam says my most played games are Metaphor Re: Fantazio, Dragon Ball FighterZ, Street Fighter 6 and Metal Gear V. Nintendo says it’s Breath of the Wild. I have 100%-ed the Insomniac Spider-Man trilogy twice. I can beat Streets of Rage 2 in speedrun-worthy times, and I’ve played through 4 a bunch of times, too.
This is not a question, it’s an existential crisis.
Honestly, all I can think of is “that’s a really good Harrison Ford face sculpture”.