What's with all the tech layoffs?
I want to offer my perspective on the AI thing from the point of view of a senior individual contributor at a larger company. Management loves the idea, but any company that tries to lean on it instead of good devs will end up with a lot of developers fixing auto-generated code full of bad practices and mysterious bugs. A large language model has no concept of good or bad, and it has no logic. It'll happily generate string-templated SQL queries that are ripe for SQL injection; I've had to fix this myself. Things get even worse when you have to deal with a shit language like Bash, which is absolutely full of God-awful footguns. Sometimes you have to use that wretched piece of trash language, and the scripts it generates are horrific. Remember that time when Steam on Linux was effectively running

rm -rf /*

on people's systems? I've had to fix that same type of issue multiple times at my workplace. (Sketches of both failure modes below.)

I think LLMs will genuinely transform parts of the software industry, but I absolutely do not think they're going to stand in for competent developers in the near future. Maybe they can help junior developers who don't have a good grasp on syntax and patterns yet. I've personally felt no need to use them, since I spend about 95% of my time on architecture, testing, and documentation.
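To make the SQL failure concrete, here's a minimal sketch of the anti-pattern, with a made-up sqlite3 script and table (none of this is from a real codebase):

#!/usr/bin/env bash
# Anti-pattern: user input spliced straight into the SQL string.
# An input like  x' OR '1'='1  rewrites the WHERE clause to match every row.
name="$1"
sqlite3 users.db "SELECT * FROM users WHERE name = '$name';"
# The fix is a bound parameter in a real driver (e.g. Python's
# cursor.execute("SELECT * FROM users WHERE name = ?", (name,))),
# not more string escaping.

The query and the data have to travel to the database separately; no amount of ad-hoc quoting on the template makes this class of bug go away.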
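And the Bash failure: my loose reconstruction of the infamous Steam pattern (not the exact script) is a recursive delete rooted at a variable that can silently end up empty:

# If the cd fails, STEAMROOT ends up as the empty string...
STEAMROOT="$(cd "${0%/*}" && echo $PWD)"
# ...and this line then expands to: rm -rf "/"*  (wiping everything the user can touch)
rm -rf "$STEAMROOT/"*
# Defensive forms: rm -rf "${STEAMROOT:?}/"* aborts if the variable is unset or
# empty, and set -u catches unset variables script-wide.

An LLM has no instinct for that edge case; it pattern-matches on the thousands of scripts that do it the dangerous way.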
Now, do the higher-ups think the way that I do? Absolutely not. I've had senior management ask me about how I'm using AI tooling, and they always seem so disappointed when I explain why I personally don't feel the need for it and what I feel its weaknesses are. Bossman sees it as a way to magically multiply IC efficiency for nothing, so I absolutely agree that it's likely playing a part in at least some of these layoffs.
I'm pretty excited about LLMs being force multipliers in our industry. GitHub's Copilot has been pretty useful (at times). If I'm writing a little utility function, I can basically just write out the function signature and it'll fill out the meat. It often makes little mistakes, but I just need to follow up with small tweaks and tests (which it'll also often write).
It also somehow seems to pick up the context of my overall work and occasionally infers what I'll do next, to my astonishment.
It's absolutely not replacing me any time soon, but it sure can be helpful in saving me time and hassle.
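For a sense of that workflow, here's a made-up example (hypothetical function, not a real Copilot transcript): I write the comment and the signature, and the tool proposes the body.

# slugify: lower-case a string and squeeze runs of non-alphanumerics into single dashes
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs '[:alnum:]' '-' \
    | sed 's/^-//; s/-$//'
}
slugify "Hello, World!"   # prints: hello-world

The suggested body is usually plausible like this, and the little mistakes tend to be edge cases (empty input, locale-dependent character classes) that a quick test catches.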
Those little mistakes drove me nuts. By the end of my second day with Copilot, I felt exhausted from looking at bad suggestions and second-guessing whether I was the idiot or Copilot was. I just can't. I'll use ChatGPT for working through broad issues, catching arcane errors, explaining uncommented code, etc., but the only LLM whose code output doesn't generally cost me time is Cody.
If you tried Copilot early on, it's improved a lot since then; it's now using GPT-4.