

Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.
People who like cute art more than gameplay.
Sensationalist headline about science fiction.
No shit it’s “possible” despite being hopelessly beyond today’s technology.
Xcode would randomly stop syntax highlighting for years because their engineering was so shit.
As long as you can keep the vibe coded pieces tiny and modular you’d probably be fine. But that takes a robust knowledge of Unity and gamedev architecture, and at that point you should probably just write it yourself.
Complex, math-heavy stuff like gaming usually is too much for an AI. It’s better at like, basic Python scripts or writing a bunch of dirty CSS.
So my backyard is a utilities easement and it pretty much looks like this.
Not sure what you mean, boilerplate code is one of the things AI is good at.
Take a straightforward Django project for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI.
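A minimal sketch of that kind of boilerplate (the `Book` model and its fields are hypothetical, just to illustrate; in a real project models.py and admin.py are separate files):

```python
# Hypothetical models.py -- illustrative, not from any real project
from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateField()

# The matching admin.py is pure boilerplate: no decisions to make,
# just mechanically mirroring the model's fields.
from django.contrib import admin

@admin.register(Book)
class BookAdmin(admin.ModelAdmin):
    list_display = ("title", "published")
    search_fields = ("title",)
```

Given the model definition, filling in the admin registration is exactly the kind of mechanical transcription an LLM handles reliably.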
More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.
Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.
I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)
Why would I trust a drill press when it can’t even cut a board in half?
Sometimes it’s about the class of plane and the weather on the specific route you’re taking.
Well yeah, it’s working from an incomplete knowledge of the code base. If you asked a human to do the same they would struggle.
LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.
Cherry picking the things it doesn’t do well is fine, but you shouldn’t ignore the fact that it DOES do some things easily also.
Like all tools, use them for what they’re good at.
Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and in many cases that’s still true, e.g. I don’t have the AI do anything with OpenCV or complex math) but it’s very routine these days for common use cases like web development.
To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.
LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.
In other words, “anti-theists” instead of just atheists.
Most people whose personalities revolve around being anti-something are insufferable. It’s far better to be for something than against something.
Like, I grew up Mormon, and left when I grew old enough to think for myself. Among my friends who also left the church, there are two major categories: the “post-mormons” and the “anti-mormons”. The anti-mormons are miserable to be around while the rest of us decided we’d rather build our lives around what we love, not what we hate.
I suspect you are all too right.
I am self-employed. So myself, I guess.
It’s risk/reward. If brain chips made me twice as productive or intelligent, I’d probably tolerate a lot more risk than if it was just a way to check my Instagram notifications without pulling out my phone.
I don’t think a grad student could handle the volume here, and they’d still have some bias, even if the bias comes from “I didn’t get great sleep last night”.
Classification algorithms have been around for decades and are the perfect use of AI.
I don’t think I suggested it wasn’t worrisome, just that it’s expected.
If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing AI is optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that seems like intelligence is literally just a side effect as it forever marches onward toward becoming convincing to humans.
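As a toy illustration of that incentive (the candidate answers and preference scores below are made up, not from any real model): a preference-trained reward signal pushes the policy toward whichever answer raters like, regardless of whether it’s true.

```python
# Toy sketch of the RLHF incentive. The scores are hypothetical stand-ins
# for a reward model trained purely on human preference ratings.
candidates = {
    "confident but wrong": 0.9,  # raters find confidence convincing
    "hedged but correct": 0.6,   # accurate, but less satisfying to read
}

def pick(cands):
    # The RL update drives probability mass toward the highest-reward
    # output -- reward here measures convincingness, not truth.
    return max(cands, key=cands.get)
```

Swap in truth-scored rewards and the preferred answer flips, which is the whole point: the objective, not the architecture, decides what the model gets good at.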
“Hey, I’ve seen this one before!” You might say. Indeed, this is exactly what happened to social media. They optimized for “engagement”, not truth, and now it’s eroding the minds of lots of people everywhere. AI will do the same thing if run by corporations in search of profits.
Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.