submitted 2 months ago* (last edited 2 months ago) by cyrano@lemmy.dbzer0.com to c/asklemmy@lemmy.world
[-] jg1i@lemmy.world 41 points 2 months ago

I absolutely hate AI. I'm a teacher and it's been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don't bother to think critically about the answers the AI gives and just assume it's 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

People seem to think there's an "easy" way to learn with AI, that you don't have to put in the time and practice to learn stuff. News flash! You can't outsource creating neural pathways in your brain to some service. It's like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don't allow any electronic devices during exams.

[-] polle@feddit.org 10 points 2 months ago

As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it's way too complex. I've seen people waste hours like that. It's insane.

[-] vk6flab@lemmy.radio 36 points 2 months ago

Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff, and the subsequent news reports about how it told us we should eat rocks, or some variation thereof, there's been no impact whatsoever on my personal life.

In my professional life as an ICT person with over 40 years of experience, it's helped me identify which people understand what it is and, more specifically, what it isn't (intelligent), and respond accordingly.

The sooner the AI bubble bursts, the better.

[-] Vinny_93@lemmy.world 6 points 2 months ago

I fully support AI taking over stupid, meaningless jobs if it also means the people that used to do those jobs have financial security and can go do a job they love.

Software developer Afas has decided to give certain employees one day a week off with pay, and let AI do their job for that day. If that is the future AI can bring, I'd be fine with that.

Caveat is that that money has to come from somewhere so their customers will probably foot the bill meaning that other employees elsewhere will get paid less.

But maybe AI can be used to optimise business models, make better predictions. Less waste means less money spent on processes which can mean more money for people. I then also hope AI can give companies better distribution of money.

This of course is all what stakeholders and decision makers do not want for obvious reasons.

[-] vk6flab@lemmy.radio 10 points 2 months ago

The thing that's stopping anything like that is that the AI we have today is not intelligence in any sense of the word, despite the marketing and "journalism" hype to the contrary.

ChatGPT is predictive text on steroids.

Type a word on your mobile phone, then keep tapping the next predicted word and you'll have some sense of what is happening behind the scenes.

The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.

It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.

There is no understanding of the text at all, no true or false, right or wrong, none of that.
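To make that concrete, here's a toy next-word predictor, just a bigram frequency table. It's my own illustration with a made-up corpus, nothing like a real model's scale, but the basic mechanism (pick a likely next word, repeat) is the same idea:

```python
from collections import defaultdict

# A tiny made-up "training corpus".
corpus = "the cat sat on the mat the dog sat on the rug".split()

# The whole "model" is a table of which words follow which.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def predict_next(word):
    # Like a phone keyboard: suggest the most frequent follower.
    candidates = followers.get(word)
    if not candidates:
        return None
    return max(set(candidates), key=candidates.count)

# "Keep tapping the suggestion" to generate text.
sentence = ["the"]
for _ in range(4):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))
```

There's no meaning anywhere in that table, just counts; scale the table and the context window up by many orders of magnitude and you get something that looks like understanding.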

AI today is Assumed Intelligence

Arthur C Clarke says it best:

"Any sufficiently advanced technology is indistinguishable from magic."

I don't expect this to be solved in my lifetime, and I believe that the current methods of "intelligence" are too energy-intensive to be scalable.

That's not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.

Source: I have 40+ years experience in ICT and have an understanding of how this works behind the scenes.

[-] LovableSidekick@lemmy.world 30 points 2 months ago* (last edited 2 months ago)

Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre's entrance, with the previously described characters reacting in their own ways.

I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I'm short on time.

My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it's a super valuable tool.

[-] Norin@lemmy.world 25 points 2 months ago* (last edited 2 months ago)

For work, I teach philosophy.

The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn't their work.

Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.

As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.

[-] AFKBRBChocolate@lemmy.world 14 points 2 months ago

My son is in a PhD program and is a TA for a geophysics class that's mostly online, so he does a lot of grading of assignments/tests. The number of things he gets that are obviously straight out of an LLM is really disgusting. Sometimes they leave the prompt in. Sometimes they submit it even when the LLM responds that it doesn't have enough data to give an answer and refers to ways the person could find out. It's honestly pretty sad.

[-] MonkeMischief@lemmy.today 10 points 2 months ago

convinced that every student needs to use the LLMs in order to find a career after graduation.

Yes, of course, why are bakers learning to use ovens when they should just be training on app-enabled breadmakers and toasters using ready-made mixes?

After all, the bosses will find the automated machine product "good enough." It's "just a tool, you guys."

Sheesh. I hope these students aren't paying tuition, and even then, they're still getting ripped off by admin-brain.

I'm sorry you have to put up with that. Especially when philosophy is all about doing the mental weightlifting and exploration for oneself!

[-] PonyOfWar@pawb.social 23 points 2 months ago

As a software developer, the one usecase where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it's pretty crap. I don't see any use for it in my personal life.

I think its influence is negative overall. Right now it might be useful for programming questions, but only because it's fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out as people use ChatGPT instead, which will have the inverse effect: in the future, AI will have less useful training data and so become less useful for new problems, while having effectively killed those useful sites in the process.

Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.

[-] Tyfud@lemmy.world 12 points 2 months ago* (last edited 2 months ago)

I wish more people understood this. It's short term, mediocre gains, at the cost of a huge long term loss, like stack overflow.

[-] Routhinator@startrek.website 19 points 2 months ago

I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.

And, even if they pull it, I don't think I'll ever go back. No more cloud drives, no more 'apps'. Webpages and local files on a file share I own and host.

[-] Nostalgia@lemmy.world 16 points 2 months ago

AI has completely killed my desire to teach writing at the community college level.

[-] Caboose12000@lemmy.world 16 points 2 months ago* (last edited 2 months ago)

I got into Linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.

Probably a very friendly expert or mentor, or even just a regular established Linux user, could've done a better job; the AI had me do weird things semi-often. But I didn't have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.

[-] LogicalDrivel@sopuli.xyz 15 points 2 months ago

It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous and made it so things that would normally take me 30 seconds now took 5-10 minutes of "prompt engineering". I went along with it for a while but after a few weeks I gave up and stopped using it. When boss asked why I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything. I continued to refuse to use it (it was optional) and my work never suffered. In fact some of our customers specifically started going through me because they couldn't stand dealing with the obvious AI slop my manager was shoveling down their throat. This pissed off my manager hard core but she couldn't really say anything without admitting she may be wrong about GPT, so she just ostracized me and then fired me a few months later for "attitude problems".

[-] JudahBenHur@lemm.ee 8 points 2 months ago

I'm sorry.

Managers tend to be useless fucking idiots.

[-] Skanky@lemmy.world 6 points 2 months ago

Curious - what type of job was this? Like, how was AI used to interact with your customers?

[-] LogicalDrivel@sopuli.xyz 9 points 2 months ago

It was just a small e-commerce store. Online sales and shipping. The boss wanted me to run emails I would send to vendors through GPT, and any responses to customer complaints were put through GPT. We also had a chat function on our site for asking questions and whatnot, and the boss wanted us to copy the customer's chat into GPT, get a response, rewrite it if necessary, and then paste GPT's response into our chat. It was so ass-backwards I just refused to do it. Not to mention it made response times super high, so customers were just leaving rather than wait (which of course was always the employees' fault).

[-] AFKBRBChocolate@lemmy.world 15 points 2 months ago

I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human rated (human lives depend on it), so we have to be careful. Also, it's a hard no on putting anything controlled or proprietary in a public LLM (the company now has one in-house).

You can't put trust into an LLM because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I've had employees use it to come up with an algorithm or find an error, but I think it's risky to have one generate large pieces of code.

[-] traches@sh.itjust.works 15 points 2 months ago

I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.

I’m a software dev and copilot is sorta ok sometimes, but also calls my code a hack every time I start a comment and that hurts my feelings.

[-] weeeeum@lemmy.world 12 points 2 months ago

Scam emails are a lot more coherent now

[-] 2ugly2live@lemmy.world 11 points 2 months ago

I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on a story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.

[-] sudneo@lemm.ee 11 points 2 months ago

After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes those tasks even more boring or useless.

Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for labs (training), like AWS policies, or to build commands from CLI flags, small things like that. For code it goes wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own. I don't use it for anything that has to do with communication; I find that unnecessary and disrespectful, since it's quite clear when the output is from an LLM.

For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is even longer useless communication, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people find it easier to envision shortcutting bullshit processes via LLMs than simply changing or removing them.

[-] PeriodicallyPedantic@lemmy.ca 10 points 2 months ago* (last edited 2 months ago)

It's changed my job: I now have to develop stupid AI products.

It has changed my life: I now have to listen to stupid AI bros.

My outlook: it's for the worst; if the LLM suppliers can make good on the promises they make to their business customers, we're fucked. And if they can't then this was all a huge waste of time and energy.

Alternative outlook: if this was a tool given to the people to help their lives, then that'd be cool and even forgive some of the terrible parts of how the models were trained. But that's not how it's happening.

[-] MNByChoice@midwest.social 10 points 2 months ago

Impact?

My company sells services to companies trying to implement it. I have a job due to this.

Actual use of it? Just wasted time. The verifiable answers are wrong, the unverifiable answers don't get me anywhere on my projects.

[-] GiantChickDicks@lemmy.ml 9 points 2 months ago

I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I'm AI in chat, which makes sense. A surprising number think I'm a bot when they call in, because I guess my voice sounds like a recording.

Most of the time it's just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can't believe the abuse people hurl out when they assume it's not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.

It's not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people's exhaustion with being forced to deal with robots from my own experiences when I've needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we're treating the people offering them decently so they want to continue offering that to us.

[-] Fedegenerate@lemmynsfw.com 9 points 2 months ago

It's my rubber duck/judgement-free space for homelab solutions. Have a problem? ChatGPT, then Google its suggestions. Find a random command line? ChatGPT, what does this do?

I understand that I don't understand it, so I sanity-check everything going into and coming out of it. Every detail is a placeholder, for security. Mostly, it's just a space to find out why my solutions don't work, find out what solutions might work, and a final check before implementation.

[-] Phoenicianpirate@lemm.ee 8 points 2 months ago

I am going to say that so far it hasn't done that much for me. I did originally ask it some silly questions, but I think I will be asking it for questions about coding soon.

[-] Brkdncr@lemmy.world 8 points 2 months ago

For me, the amount of people and time spent in meetings that talk about AI grossly outweighs any benefit of AI.

[-] acchariya@lemmy.world 7 points 2 months ago

It is extremely useful for suggesting translations and for translating unclear foreign-language sentences.

[-] theplanlessman@feddit.uk 7 points 2 months ago

How do you know the output is an accurate translation?

[-] IMNOTCRAZYINSTITUTION@lemmy.world 7 points 2 months ago

My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers' work declining in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren't even checking the stuff before sending it out. Part of my job was to review and critique their work, and I started having to send way more back than before. I tried it out but found that it took more time to fix all of its mistakes than to just write things myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.

[-] Skanky@lemmy.world 7 points 2 months ago

It's made our marketing department even lazier than they already were

[-] That_Devil_Girl@lemmy.ml 7 points 2 months ago

It has helped tremendously with my D&D games. It remembers past conversations, so world building is a snap.

[-] Sludgehammer@lemmy.world 6 points 2 months ago

Searching the internet for information about... well anything has become infuriating. I'm glad that most search engines have a time range setting.

[-] MonkeMischief@lemmy.today 6 points 2 months ago

"It is plain to see why you might be curious about Error 4752X3G: Allocation_Buffer_Fault. First, let's start with the basics.

  • What is an operating system?"

AGGHH!!!

[-] Binette@lemmy.ml 6 points 2 months ago

Not much. Every single time I asked it for help, it either gave me a recursive answer (e.g. if I ask "how do I change this setting?", it answers: by changing this setting) or a wrong answer. If I can't already find it on a search engine, then it's pretty useless to me.

[-] Burninator05@lemmy.world 6 points 2 months ago

It seemingly has little impact. I've attempted to use LLMs a couple of times to ask very specific technical questions (on this specific model, running this specific OS version, how do I do this very specific thing) to try and cut down on the amount of research I would have to do to find a solution. The answer every time has been wrong. Once it was close enough to the answer I was able to figure it out but "close enough" doesn't seem worth bothering with most of the time.

When I search for things I always skip the AI summary at the top of the page.

[-] higgsboson@dubvee.org 6 points 2 months ago

Main effect is lots of whinging on Lemmy. Other than that, minimal impact.

[-] vrighter@discuss.tchncs.de 5 points 2 months ago

my face hurts from all the extra facepalms

[-] Kaiyoto@lemmy.world 5 points 2 months ago

Not much impact personally. I just read all the terrible implications of it online. Pressure in the professional world to use it, though fuck if I know what to use it for in this job. I don't like using it for my writing because I don't want to rely on something like that and because it's prone to errors.

Wish something that used a ton of resources would actually have a great impact to make it worth the waste.

[-] icogniito@lemmy.zip 5 points 2 months ago

It helps me tremendously with language studies, outside of that I have no use for it and do actively detest the unethical possibilities of it

[-] MonkeMischief@lemmy.today 5 points 2 months ago

Man, so much to unpack here. It has me worried for a lot of the reasons mentioned: The people who pay money to skilled labor will think "The subscription machine can just do it." And that sucks.

I'm a digital artist as well, and while I think genAi is a neat toy to play with for shitposting or just "seeing what this dumb thing might look like" or generating "people that don't exist" and it's impressive tech, I'm not gonna give it ANY creative leverage over my work. Period. I still take issue with where it came from and how it was trained and the impact it has on our culture and planet.

We're already seeing the results of that slop pile generated from everyone who thought they could "achieve their creative dreams" by prompting a genie-product for it instead of learning an actual skill.

As for actual usefulness? Sometimes I run a local model for funsies and just bounce ideas off of it. It's like a parrot combined with a "programmer's rubber ducky." Sometimes that gets my mind moving, in the same way "autocomplete over and over" might generate interesting thoughts.

I also will say it's pretty decent at summarizing things. I actually find it somewhat helpful when YouTube's little "ai summary" is like "This video is about using this approach taking these steps to achieve whatever."

When the video description itself is just like "Join my Patreon and here's my 50+ affiliate links for blinky lights and microphones" lol

I use it to explain concepts to me in a slightly different way, or to summarize something for which there's a wealth of existing information.

But I really wish people were more educated about how it actually works, and there's just no way I'm trusting the centralized "services" for doing so.

[-] RalphFurley@lemmy.world 5 points 2 months ago

I love using it for writing scripts that need to sanitize data. One example: I had a bash script that looped through a CSV containing domain names and ran AXFR lookups to grab the DNS records and dump them into a text file.

These were domains on a Windows server that was being retired. The Python script I had Copilot write cleaned up the output and made the new zone files ready for import into PowerDNS. It made sure the SOA and all that junk was set. PowerDNS would then import the new zone files into a SQL backend.

Sure, I could've written it myself, but I'm not a Python developer. It took about 10 minutes of prompting, checking the code, re-prompting, then testing. Saved me a couple hours of work, easy.
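For flavour, here's a rough sketch of what the cleanup step can look like. To be clear, this isn't the actual Copilot output; the record format, zone name, and SOA values are all invented for illustration:

```python
# Hypothetical sketch: take raw AXFR dump lines and rewrite them as a
# zone file with the SOA record first, the shape an import tool like
# pdnsutil expects. Everything below is made-up example data.

RAW_DUMP = """\
example.com. 3600 IN A 192.0.2.10
www.example.com. 3600 IN CNAME example.com.
example.com. 3600 IN MX 10 mail.example.com.
"""

# Placeholder SOA; real serial/refresh/retry values would come from the site.
SOA_TEMPLATE = (
    "{zone} 3600 IN SOA ns1.{zone} hostmaster.{zone} "
    "2024120101 7200 900 1209600 3600"
)

def make_zone_file(zone: str, dump: str) -> str:
    # Keep only records belonging to this zone, then put the SOA on top.
    records = [
        line for line in dump.splitlines()
        if line.strip()
        and (line.split()[0] == zone or line.split()[0].endswith("." + zone))
    ]
    return "\n".join([SOA_TEMPLATE.format(zone=zone)] + records) + "\n"

zone_file = make_zone_file("example.com.", RAW_DUMP)
print(zone_file)
```

The real script also had to handle the quirks of the Windows DNS export, but the core of it is just this kind of filter-and-reassemble text munging.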

I use it all the time to output simple automation tasks when something like Ansible isn't apropos

this post was submitted on 01 Dec 2024
100 points (96.3% liked)
