Have any other devs tried using LLMs for work? They’ve been borderline useless for me.
Also, the notion of creating a generation of devs who have no idea what they're writing and no practice resolving problems “manually” seems insanely dumb.
Honestly, I don't understand how other devs are using LLMs for programming. The fucking thing just gaslights you into random made-up shit.
As a test, I gave it a made-up problem. I mean, it could have been a real problem, but I made it up to see what it would do. And it went "ah yes. This is actually a classic problem in (library name) version 4. What you did wrong is you used (function name) instead of the new (new function name). Here is the fixed code:"
And all of it was just made up. The original function did still exist in that version, and the new function it suggested was completely fabricated. It has zero idea what the fuck it's doing. And if you tell it it's wrong, it goes “oh my bad, you’re right hahaha. Function (old function name) still exists in version 4. Here is the fixed code:”
And again it made shit up. It's absolutely useless, and I don't understand how people use it to make anything beyond the most basic “hello world” type of shit.
Often it also just gives you the same code over and over, acting like it changed and fixed something, but it's the exact same as the previous response.
I do admit LLMs can be nice to brainstorm ideas with. But write code? It has zero idea what it's doing; it's just copy-pasting shit from its training data and gaslighting you into thinking it came up with it itself and that it's correct.
I find them quite useful in some circumstances. I once went from very little Haskell knowledge to knowing how to use cabal, talk to a database, and build a REST API, all with the help of an AI (I’ve done analogous things in Java before, but never in Haskell). This is my favourite example, and for this kind of introduction I think it’s very good. And maybe half the time it’s at least able to point me in the right direction on new problems.
Copilot-like AI that just produces autocomplete is very useful to me, often writing exactly what I want for some repetitive tasks, testing in particular. Just take everything it outputs with great scepticism and it’s pretty useful.
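To illustrate the kind of repetitive test code I mean (a hedged sketch, not anything Copilot actually produced for me; the test class and cases are my own invention, exercising the standard JDK String.strip()):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical example of the repetitive test shape where autocomplete shines:
// each case is a tiny variation on the last, so after the first one is typed,
// the tool can usually fill in the rest.
class StripTest {
    @Test void removesLeadingWhitespace()  { assertEquals("a", "  a".strip()); }
    @Test void removesTrailingWhitespace() { assertEquals("a", "a  ".strip()); }
    @Test void removesBothSides()          { assertEquals("a", "  a  ".strip()); }
    @Test void keepsInnerWhitespace()      { assertEquals("a b", " a b ".strip()); }
}
```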
The saddest part is that the devs who aggressively use AI will probably keep their jobs over the “non-AI” devs. I still acknowledge there IS a use for LLMs, but we’ve been rapidly losing our humanity, especially in the States, for a decade now, and I don’t wanna lose more.
I’ve used them on some projects, but it feels like Copilot is still the best application of the tech, and even that is very, ummm, hit or miss.
Writing whole parts of the application with AI usually led to errors I then needed to debug, and the coding style and syntax were all over the place. Everything has to be thoroughly reviewed all the time, and sometimes the AI codes itself into a dead end and has to be stopped.
Unfortunately, I think this will lead to small businesses vibe-coding some kind of solution with AI and then resorting to real people to debug whatever garbage they “coded”, which will create a lot of unpleasant work for devs.
Some LLMs are better than others. ChatGPT is pretty good at Python code. It is very limited in its ability to write fully functioning programs, but it can toss together individual functions fairly well. I think most people lack a fundamental understanding of how to phrase a question and set parameters for LLMs, and that is what’s driving the “AI Bad!” circlejerk. It’s no different from any other new tool.
It is nice for when you need a quick and dirty little fix that would otherwise require you to read a lot of documentation and skim through a lot of info you will never need again, like converting obsolete config file format #1 to obsolete format #2. Or to summarize documentation in general, although one needs to be careful with hallucinations. Basically, you need a solid understanding already, so you can judge whether something is plausible or not. Also, if you need standard boilerplate, of course.
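As a sketch of the kind of throwaway conversion I mean (the concrete formats are stand-ins I picked, since no real ones are named here): read a legacy key=value .properties file and dump it back out as JSON with Jackson.

```java
import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import java.util.TreeMap;

import com.fasterxml.jackson.databind.ObjectMapper;

// Quick-and-dirty one-off converter: legacy key=value config -> JSON.
public class ConvertConfig {
    public static void main(String[] args) throws Exception {
        // Load the old format (stand-in for "obsolete format #1").
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("legacy.properties")) {
            props.load(in);
        }
        // Copy into a sorted map so the JSON output is deterministic and diffable.
        TreeMap<String, String> sorted = new TreeMap<>();
        for (String key : props.stringPropertyNames()) {
            sorted.put(key, props.getProperty(key));
        }
        // Write the new format (stand-in for "obsolete format #2").
        new ObjectMapper().writerWithDefaultPrettyPrinter()
                          .writeValue(new File("config.json"), sorted);
    }
}
```

Exactly the sort of ten-minute script you run once and delete, which is why reading a pile of docs for it feels so wasteful.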
It sucks most when you need any kind of contextual knowledge, obviously. Or accountability. Or reliable handling of complexity. Or something new and undocumented.
Last time I used one, I was trying to get help writing a custom naming strategy for a Java ObjectMapper. I’ve mostly written Python in my career, so I just needed the broad strokes filled in.
It gave me some example code that looked plausible but was in fact the exact inverse of how you’re supposed to implement it. It took me like a day and a half to debug; I reckon I could have written it in an afternoon by going straight to the documentation.
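For reference, assuming the ObjectMapper in question is Jackson’s (the usual one in Java), the hook works like this: you extend PropertyNamingStrategies.NamingBase, and translate() maps from the Java property name to the JSON key, not the other way around. I can’t know what the AI’s inverted version looked like, so this is just a minimal sketch of the correct direction:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.PropertyNamingStrategies;

// Minimal custom naming strategy for Jackson's ObjectMapper.
// Key point: translate() receives the *Java* property name ("firstName")
// and returns the *JSON* key ("first-name"). Implementing the reverse
// mapping is exactly the kind of inversion described above.
public class MyKebabStrategy extends PropertyNamingStrategies.NamingBase {
    @Override
    public String translate(String propertyName) {
        StringBuilder out = new StringBuilder();
        for (char c : propertyName.toCharArray()) {
            if (Character.isUpperCase(c)) {
                out.append('-').append(Character.toLowerCase(c));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    // Simple bean to serialize; its field names are what translate() receives.
    public static class Example {
        public String firstName = "Ada";
        public int shoeSize = 42;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        mapper.setPropertyNamingStrategy(new MyKebabStrategy());
        // -> {"first-name":"Ada","shoe-size":42}
        System.out.println(mapper.writeValueAsString(new Example()));
    }
}
```

(Jackson actually ships this exact mapping as PropertyNamingStrategies.KEBAB_CASE; the hand-rolled class is only to show which way the translation runs.)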
Might as well only have a single speaker if you’re gonna put them this close together. Mono-ass setup.
Then have them study AI and be even more fucked when it potentially fails.
Is that a coffee percolator fish tank?
Yeah and it’s waaay too small for that fish bro
Also I hope it’s fake because that’s the tiniest aquarium and there’s no plants.
You guys are thinking about this from a selfish perspective. You have to look at it from your employer’s point of view. If they don’t do it, other companies will, and then they’ll feel left out. Have you ever been to a yacht party where you’re the only one who hasn’t fired your employees? Goddamned miserable. /s
The working class is SO selfish
How about making people study for ten years and pay tens of thousands of dollars to do it … then telling them they didn’t do the right kind of studying for the work you want done, so you don’t want to hire them.
Also, hire someone who dropped out of school and got a chatbot to lie on their resume.
They’re a Leafs fan. They’re used to disappointment.
Then tell them a holy parable about finding cheese.
Is it the one about the guy who finds a nice brick of aged cheddar in his fridge that he’d forgotten about and gets to enjoy the salty deliciousness? No… wait… that’s just me fantasizing about tasty cheese.