34 points · submitted 6 days ago by chobeat@lemmy.ml to c/technology@hexbear.net

I'm pretty sure a lot of tech workers don't trust it either

[-] 7bicycles@hexbear.net 30 points 6 days ago

My experience with it is technical knowledge correlates negatively to AI enthusiasm

[-] fox@hexbear.net 25 points 6 days ago

The more knowledge you have about what an LLM is and how it works the worse it feels to watch people use the damned things. I've got a friend who uses them for marketing copy for her job (sure, whatever), for grocery lists (uhh), for writing scripts for difficult conversations (uuuuhhhh), and as a source-of-truth search engine (UUUHH). Like she's fully surrendering her ability to think to the Machine That Only Tells Lies. Butlerian Jihad can't come fast enough

[-] DragonBallZinn@hexbear.net 19 points 6 days ago

I overheard someone at the gym the other day pitching this great idea for a book, and then they said with a straight face, “Someone should totally get AI to write that!”

I just died a little on the inside that day. It’s so depressing seeing the world be almost gleeful about delegating all creativity to an algorithm. doomjak

[-] 7bicycles@hexbear.net 14 points 6 days ago

yeah you hit sort of an x-point of technologically illiterate, digital-forward people, and I do believe they're going to get unbelievably fucked by the whole ordeal on account of trusting their decision making to the hallucination machine

[-] semioticbreakdown@hexbear.net 4 points 5 days ago

Yeah it's fucked. These things are Completely unmoored from reality. People are outsourcing their higher cognitive functions to a machine that provably doesn't have them. Absolutely terrifying.

[-] Le_Wokisme@hexbear.net 18 points 6 days ago

gun pointed at my printer, etc

[-] tim_curry@hexbear.net 23 points 6 days ago

As a long time tech worker i just hate computers entirely.

There are two types of tech worker: the ones who want to find a way to turn themselves into a literal monkey and never look at a computer ever again, and the ones who love tech hype trains but couldn't program their way out of a paper bag. The latter usually end up in management, which drives the former to want to metamorphose into le-monke

[-] DragonBallZinn@hexbear.net 13 points 5 days ago

Unironically I’ve begun to realize I don’t hate STEM, I just dislike tech.

Unfortunately for me, the S in STEM is silent.

[-] lapis@hexbear.net 12 points 5 days ago
[-] chobeat@lemmy.ml 5 points 5 days ago

in Italy, the monkey is one of the symbols of the Tech Workers Movement

[-] DragonBallZinn@hexbear.net 9 points 5 days ago

As far as I’m concerned, return to monke should be another immortal meme like loss.

[-] lapis@hexbear.net 6 points 5 days ago

it's immortal for me!

...so it'll live for 1-70 more years, at least.

[-] TheaJo@hexbear.net 3 points 5 days ago

and repeatedly get its ass handed to it by even the most paltry villains and then fuck off to a cabin somewhere

[-] GeneralSwitch2Boycott@hexbear.net 19 points 6 days ago

The anti-IoT techies must all hate AI even more. The inverse must be an overlapping circle too.

[-] lapis@hexbear.net 11 points 6 days ago

I was about to point out that I have IoT stuff and hate AI, but my IoT stuff is mostly using zigbee instead of wifi so it can’t try to phone home…

[-] GeneralSwitch2Boycott@hexbear.net 21 points 6 days ago

I'm anti-wave and anti-trend. Clear the air and put everything back into wires.

"Yes, I'd like 2.4 GHz of music, please"
Said no one ever.

We have been played for absolute fools.

I guess we could have an IoT where it's all wired like a Simon Stålenhag artwork. It'd be cool to see what a tech-centric world where wireless signals never evolved would look like.

[-] lapis@hexbear.net 9 points 6 days ago* (last edited 6 days ago)

everyone carries a dumb smartphone-like hand terminal but they have to physically jack into a network access point to download new content and messages.

capitalist hellhole side-plot: tech workers are expected to do so every 15 minutes, even on days off, to ensure no missed slack/teams/etc. notifications.

[-] AstroStelar@hexbear.net 5 points 5 days ago* (last edited 5 days ago)

smartphone-like hand terminal

jack into

You're describing Mega Man Battle Network down to the terminology

[-] lapis@hexbear.net 4 points 5 days ago

hell yeah and here I was just using standard scifi/cyberpunk terms

[-] Dessa@hexbear.net 7 points 5 days ago

Everyone carries a spool of cable attached to their phone

[-] lapis@hexbear.net 5 points 5 days ago

we'd have to do a physical inspection of access points for data skimmers like we had to do with gas pump credit card scanners before chips were widespread.

[-] GeneralSwitch2Boycott@hexbear.net 6 points 6 days ago* (last edited 6 days ago)

Rich people have servants called data-fetchers who bring them constant updates via high-speed floppy sticks that they transfer off a data terminal, constantly updating whatever electronic devices their master requires. People who make bootleg ports into the wirework (internet) are called data lords, and these places are frequent targets for theft, sabotage, tertiary greymarket dealings, etc.

[-] Collatz_problem@hexbear.net 5 points 5 days ago

monkey paw curls

Military drones are now controlled via optical fiber.

[-] Dirt_Owl@hexbear.net 40 points 6 days ago

Pretty sure people in tech hate it too outside of those trying to sell it

[-] Cadende@hexbear.net 20 points 6 days ago* (last edited 6 days ago)

There's a contingent that likes it. For some, they don't even have to pretend to have social skills, since they can outsource their writing to AI. They're also increasingly using it in place of google/copy-pasting from stackoverflow/etc to get "quick fix" solutions to their problems. It's not particularly good at those tasks IMO, but I genuinely think that for some people the dopamine hit of copy-pasting something directly from chatgpt, not having to so much as lift a finger, and it working first try is addictive. Even though they usually have to troubleshoot it, re-prompt, and then make changes by hand, they just keep trying for that sweet no-effort fix. Some of them seem to treat it like a junior coworker you can offload all your work onto, forever.

In my experience (I've literally never used it but had coworkers try to feed its answers to me when we're working together on something, or giving what it spit out to me to fix for them), it tends to do okay for common use-cases, ones that you can almost always just look up in documentation or stackoverflow anyhow, but in more niche problems, it will often hallucinate that there's a magic parameter that does exactly what you want. It will never tell you "Nope, can't be done, you have to restructure around doing it this other way", unless you basically figure it out yourself and prompt it into doing so.
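To make the "magic parameter" failure mode concrete, here's a toy, stdlib-only Python sketch (the `pretty` parameter is invented on purpose, to mimic the kind of plausible-sounding flag an LLM dreams up; it is not a real `json.dumps` option):

```python
import json

# A real parameter: json.dumps actually accepts sort_keys.
ok = json.dumps({"b": 2, "a": 1}, sort_keys=True)

# A hallucinated parameter of the kind an LLM might invent:
# "pretty=True" sounds plausible but doesn't exist, so it fails.
try:
    json.dumps({"a": 1}, pretty=True)
    invented_param_worked = True
except TypeError:
    invented_param_worked = False
```

The real parameter works; the invented one dies with a TypeError, which is exactly the kind of thing you only discover after pasting the answer in.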

in more niche problems, it will often hallucinate that there's a magic parameter that does exactly what you want. It will never tell you "Nope, can't be done, you have to restructure around doing it this other way"

This was why, in spite of it all, I had a brief glimmer of hope for DeepSeek -- it's designed to reveal both its sources and the process by which it reaches its regurgitated conclusions, since it was meant to be an open-source research aid rather than a proprietary black-box chatbot.

[-] Cadende@hexbear.net 6 points 6 days ago

I have been meaning to try deepseek for a chuckle/to see what the hype is about. I have pretty much no drive to use AI instead of learning or doing the work myself, but I am willing to accept that, free of the shackles of capitalism, it might be useful and non-destructive technology for some applications in the future, and maybe it's a tiny glimpse towards that

[-] semioticbreakdown@hexbear.net 2 points 5 days ago

make weird prompts and get it to do weird outputs, it's kind of fun. I put in such a strange prompt that I got it to say something suuuuper reddity like "le bacon is epic xD" completely unprompted. I think my prompt involved trying to make the chatbot's role an unhinged quirky catgirl or something. unfortunately this tends to break the writing model pretty quickly and it starts injecting structural tokens back into the stream, but it's very funny

Anthropic's latest research shows the chain of thought reasoning shown isn't trustworthy anyway. It's for our benefit, and doesn't match the actual reasoning used internally 1:1.

[-] semioticbreakdown@hexbear.net 5 points 5 days ago

thank you anthropic for letting everyone know your product is hot garbage

crank theory: I don't even think LLMs are reasoning in chain-of-thought, because they aren't a cognitive model at all, just a writing model. By forcing the model to write out all the intermediate steps of a computation, the model's sign-interpretation ability allows it to probabilistically choose a more correct result without any sort of cognition or actual "reasoning" as we would think of it happening. This is the reason CoT is both untrustworthy and easily added to existing LLMs by CoT prompting: the model uses the CoT examples in the prompt to produce output in a way that significantly reduces the chance that it will hallucinate wrongly. It's non-cognitive and still suffers from the ability to hallucinate, and this will happen as long as it isn't meta-aware of what it's doing. I think LLMs are basically a solved problem. Great job. You made a really good model of language itself. Maybe move on??

As you say, hallucinating can be solved by adding meta-awareness. Seems likely to me we'll be able to patch the problem eventually. We're just starting to understand why these models hallucinate in the first place.

[-] semioticbreakdown@hexbear.net 2 points 4 days ago

I don't think hallucination is that poorly understood, tbh. It's related to the grounding problem to an extent, but it's also a result of the fact that it's a non-cognitive generative model: you're just sampling from a distribution. It's considered "wrong output" because to us, truth is obvious. But if you look at generative models beyond language models, the universality of this behavior is obvious. You cannot have the ability to make a picture of minion JD Vance without LLMs hallucinating (or the ability to have creative writing, for a same-domain analogy). You can see it in humans too, in things like wrong words or word salads/verbal diarrhea/certain aphasias. Language function is also preserved in some cases even when logical ability is damaged.

With no way to re-examine and make judgements about its output, and no relation to reality (or some version of it), the unconstrained output of the generative process is inherently untrustworthy. That is to say: all LLM output is hallucination, and it only has relation to the real when interpreted by the user. Massive amounts of training data are used to "bake" the model such that the likelihood of producing text that we would consider "true" is better than random (or pretty high in some cases). This extends to the math realm too, and is likely why CoT improves apparent reasoning so dramatically (and also likely why CoT reasoning only works when a model is of sufficient size). They are just dreams, and they only gain meaning through our own interpretation. They do not reflect reality.
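A minimal, stdlib-only sketch of the "just sampling from a distribution" point (toy logits and token names made up for illustration, not from any real model):

```python
import math
import random

# Toy next-token logits after a prompt like "The capital of France is":
logits = {"Paris": 4.0, "Lyon": 1.0, "Berlin": 0.5}

# Softmax turns logits into a probability distribution over tokens.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# Sampling: the mechanism is identical for "true" and "false" tokens;
# a wrong answer is just a lower-probability draw, not a different process.
random.seed(0)
token = random.choices(list(probs), weights=probs.values(), k=1)[0]
```

Nothing in this step checks truth; "Berlin" and "Paris" come out of the exact same mechanism, just with different probabilities.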

REALLY crank thoughts: it's more than just a patch tbh, it's a radical difference in structure compared to LLMs; the language model becomes a language module. Also, it doesn't solve the grounding problem - the only thing that can do that is multi-modality; the network needs to have a coherent representational system that is also related to the real in some way. Further question: if such a system of meta-awareness is responsible for p-consciousness in humans, would incorporating it into an AI system also provide p-consciousness? Is it inherently impossible to create the desired artificial intelligence systems without imbuing them with subjective experience? I'm beginning to suspect that might be the case

[-] umbrella@lemmy.ml 5 points 5 days ago

many chuds work in tech. you will sadly see plenty of them bootlicking musk, ai and shit

[-] Beaver@hexbear.net 27 points 6 days ago

When all the AI startup bros ever fucking talk about is how AI is going to let them throw workers out on their ass, then of course workers aren't going to trust AI.

[-] ChaosMaterialist@hexbear.net 2 points 5 days ago
this post was submitted on 14 Apr 2025