
I promise this question is asked in good faith. I do not currently see the point of generative AI, and I want to understand why there's hype. There are ethical concerns, but we'll set ethics aside for this question.

In creative works like writing or art, it feels soulless and poor in quality. In programming, at best it's a shortcut to avoid deeper learning; at worst it spits out garbage code that you spend more time debugging than you would have spent just writing it yourself.

When I see AI ads directed at individuals, the selling point is convenience. But I would feel robbed of the human experience if I used AI in place of human interaction.

So what's the point of it all?

[-] arken@lemmy.world 3 points 2 weeks ago

There are some great use cases; for instance, transcribing handwritten records and making them searchable is really exciting to me personally. These models can also be a great tool if you learn to work with them (perhaps most importantly, learn when not to use them, which in my line of work is most of the time).

That being said, none of these use cases, nor any of the others in this thread, is going to recoup the huge amounts of money now being invested in AI.

[-] Xavienth@lemmygrad.ml 1 points 2 weeks ago

Generative AI is actually really bad at transcription. It imagines dialogue that never happened. There was some institution, a hospital I think? They found that every transcription had at least one major error like that.

[-] octochamp@lemmy.ml 3 points 2 weeks ago

This is an issue if it's unsupervised, but the transcription models are good enough now that, with oversight, they're usually useful: checking and correcting the AI-generated transcription is almost always quicker than transcribing entirely by hand.

If we approach tasks like these assuming that they are error-prone regardless of whether they are done by a human or a machine, and will always need some oversight and verification, then the AI tools can be very helpful in very non-miraculous ways. I think it was Jason Koebler who said on a recent 404 Media podcast that at Vice he used to transcribe every word of every interview he did as a journalist; now he transcribes everything with AI and has saved hundreds of work hours doing so, but he still manually checks every transcript to verify it.
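
For what that workflow looks like in practice, here is a minimal sketch (not what Koebler himself uses; it assumes the open-source openai-whisper package, and the model size and file names are placeholders):

```python
# Minimal sketch of the "AI transcribes, human verifies" workflow described above.
# Assumes the open-source openai-whisper package (pip install openai-whisper).
import whisper

model = whisper.load_model("base")          # small model, fine for a first draft
result = model.transcribe("interview.mp3")  # returns a dict with the full text

# Write the draft out so someone can check it against the recording by hand.
with open("interview_draft.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```

The point is that the model only produces the draft; the pass where a human reads it against the audio stays in the loop.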

[-] red_concrete@lemmy.ml 3 points 2 weeks ago

My understanding is that it will eventually be used to improve autocorrect, once they get it working properly.

[-] robot_dog_with_gun@hexbear.net 3 points 2 weeks ago

for simple tasks you can verify yourself, you're just rolling dice for some time saved. everything else is kinda shit

[-] orcrist@lemm.ee 3 points 2 weeks ago

There is no point. There are billions of points, because there are billions of people, and that's the point.

You know there are hundreds or thousands of reasonable uses of generative AI, whether it's customer support or template generation or brainstorming; the list goes on and on. Obviously you know that. So I'm not sure you're asking a meaningful question. People are using a tool to solve various problems, but you don't see the point in that?

If your position is that they should use other tools to solve their problems, that's certainly a legitimate view and you could argue for it. But that's not what you wrote and I don't think that's what you feel.

[-] corsicanguppy@lemmy.ca 2 points 2 weeks ago* (last edited 2 weeks ago)

Ha! I use it to write Ansible.

In my case, YAML is a tool of Satan and Ansible is its 2001-era minion of stupid, so when I need to write Ansible I let the robots do that for me and save my sanity.

I understand that using a bot to write the 'code' for me will make me less likely to ever learn Ansible; I consider that another benefit, since I won't need to develop a pot habit later in the hopes of killing the brain cells that record my memory of learning Ansible.

[-] weeeeum@lemmy.world 3 points 2 weeks ago

I think LLMs could be great if they were used for education and learning and trained on good data. The Encyclopaedia Britannica is building an AI trained exclusively on its own data.

That also leaves room for writers to keep adding to the database, broadening the AI's knowledge, so people keep their jobs.

[-] hamid@vegantheoryclub.org 3 points 2 weeks ago

I use it to re-tone and clarify corporate communications that I regularly have to send to my clients and internally. It has cut down a lot on the time I used to spend copy-editing my own work: I've saved myself hours of something I don't really like (copy-editing) and gained more time for the stuff I do like (engineering).

[-] thepreciousboar@lemm.ee 3 points 2 weeks ago

I know they are being used for, and are decently good at, extracting a single piece of information from a big document (like a datasheet). Since you can easily confirm whether the extracted information is correct, it's quite a nice use case.
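
As a rough sketch of that kind of extraction (assuming the OpenAI Python client; the model name and file name are placeholders, and any LLM with a long enough context window would do):

```python
# Pull one specific value out of a long datasheet, then verify it by hand.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

datasheet = open("datasheet.txt").read()  # placeholder: text extracted from the PDF

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "What is the absolute maximum supply voltage? "
                   "Quote the exact line where it appears.\n\n" + datasheet,
    }],
)
print(response.choices[0].message.content)
```

Asking it to quote the exact line is what makes the answer easy to confirm against the original document.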

[-] GaMEChld@lemmy.world 3 points 2 weeks ago

I like using it to help get the ball rolling on stuff and organize my thoughts. Then I do the finer tweaking on my own. Basically I use a sliding scale: once refining the AI output starts taking longer and longer for smaller and smaller improvements, that's when I switch to manual.

[-] graymess@hexbear.net 2 points 2 weeks ago

I recently had to digitize dozens of photos from family scrapbooks, many of which had annoying novelty pattern borders cut out of the edges. Sure, I could have just cropped the photos more to hide the stupid zigzagged missing portions. But I had the beta version of Photoshop installed with the generative fill function, so I tried it. Half the time it was garbage, but the other half it filled in a bit of grass or sky convincingly enough that you couldn't tell the photo was damaged. +1 acceptable use case for generative AI, I guess.

[-] GuyFi@lemmy.sdf.org 2 points 2 weeks ago

I have personally found it fantastic as a programming aid, and as a writing aid for song lyrics. The art it creates lacks soul and any sense of being actually good, but it's great as an "oh, I could do this cool thing" inspiration machine.

[-] UnRelatedBurner@sh.itjust.works 2 points 2 weeks ago

Just today I needed a PDF with filler English text, not lorem ipsum. ChatGPT was perfect for that. Other times, when I'm writing something, I use it to check grammar. It's way better at that than Grammarly imo, and faster, and it makes the decisions for me, BUT PROOF-READ IT: if you really fuck the tenses up it won't know how to correct them, it'll make things up. Besides these: text manipulation. I could learn vim or write a script, or I could just paste the text, type "remove the special characters", hit enter, done.
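
For comparison, the throwaway script that one-line prompt replaces might look something like this (a rough sketch, assuming "special characters" just means anything outside letters, digits, whitespace, and basic punctuation):

```python
# Strip "special characters" from whatever comes in on stdin.
import re
import sys

text = sys.stdin.read()
# Keep letters, digits, whitespace, and a bit of basic punctuation; drop the rest.
cleaned = re.sub(r"[^\w\s.,!?'-]", "", text)
print(cleaned)
```

Whether writing and debugging that is faster than typing the prompt is exactly the trade-off being described.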

I use Perplexity for syntax. I don't code with it, but it's the perfect one-stop shop for "how does this work in this language again" moments while coding. For advanced/new/unpopular APIs it's back to the old-school docs, though you can try giving it the link so it parses the docs for you; that's usually wonky, though.

[-] sgtlion@hexbear.net 2 points 2 weeks ago* (last edited 2 weeks ago)

Programming quick scripts, and as a replacement for Google/Wikipedia more than anything. I chat with it in an app to ask about various facts or info I want to know, and it usually gets in-depth pretty quickly.

Also cooking. I've basically given up on recipe sites, except for niche, specific things. AI gets stuff relatively right and quickly adjusts if I need substitutions. (And again, hands free for my sticky flour fingers).

And ideation. Whether I'm coming up with names, or a specific word, or clothes, or a joke, I can ask AI for 50 examples and I can usually piece together a result I like from a couple of those.

Finally, I'll admit I use it as a sounding board to think through topics, when a real human who can empathise would absolutely be better. Sadly, the way modern life is, one isn't always available. It's a small step up from ELIZA.

The key is that AI is part of the process. Just as I would never say "trust the first Google result with your life", because it's some internet rando who might say anything, so too should you not let AI have the final word. I frequently question or correct it, but it still helps the journey.

[-] SplashJackson@lemmy.ca 2 points 2 weeks ago* (last edited 2 weeks ago)

I wish I could have an AI in my head that would do all the talking for me because socializing is so exhausting

[-] tetris11@lemmy.ml 2 points 2 weeks ago* (last edited 2 weeks ago)

Other people would then have AIs in their heads to deal with the responses.

A perfect world, where nothing is actually being said, but goddamn do we sound smart saying it

[-] Tartas1995@discuss.tchncs.de 2 points 2 weeks ago

I hate questions like this because of one major issue.

A generative AI with "error-free" output is useful in a very different way than one whose output isn't.

Imagine an AI that answered any question objectively and without bias. Would that threaten jobs? Yeah. Would it be a huge improvement for humankind? Yeah.

Now imagine the same AI with a 10% bullshit rate: how could you trust anything it tells you?

Currently, generative AI is very, very flawed. That is what we can evaluate, and it is obvious. It is mostly useless, as it produces mostly slop and consumes far more energy and water than you would expect.

A "better" one would be useful in a different way, but just as killing half of the world's population would help against climate change, the cost of getting there might not be one we want to pay, and it might not be worth it.

Current market practices, costs, and results lead me to say it is effectively useless and probably a net negative for humankind. There is no legitimate use, because any use legitimizes those market practices and costs, given the results.

[-] CanadaPlus@lemmy.sdf.org 2 points 2 weeks ago* (last edited 1 week ago)

"In creative works like writing or art, it feels soulless and poor in quality. In programming, at best it's a shortcut to avoid deeper learning; at worst it spits out garbage code that you spend more time debugging than you would have spent just writing it yourself."

I'd actually challenge both of these. The property of "soullessness" is very subjective, and AI art has won blind competitions. As for programming, it has empirically been made faster by half again, even with the intrinsic requirement for debugging.

It's good at generating things. There are some things we want to generate. Whether we actually should, like you said, is another issue, and one that doesn't impact anyone's bottom line directly.

[-] nairui@lemmy.world 1 points 1 week ago

Winning a competition doesn't really speak to the purpose of art, which is communication. AI has nothing to communicate; it approximates a mishmash of its dataset to mimic, often very successfully, the things it has seen, but it is ultimately meaningless in intention. It would be a disservice to muddy the art and writing created by and for human beings who want to communicate with algorithmic outputs that have no discernible purpose.

[-] scytale@lemm.ee 1 points 2 weeks ago

A winter storm was set to arrive while I was traveling, and I needed to drip our faucets to keep the pipes from bursting. I didn't want to waste the water from a dripping faucet for more than a week, so I asked DuckDuckGo AI to calculate how much water one drop per second accumulates and whether it would overflow a standard bathtub with the drain closed. I could do the math myself, but it's easier to have the AI do it.
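
For reference, the back-of-the-envelope arithmetic being asked for looks like this (a rough sketch; the drop volume and tub capacity are ballpark assumptions, not measured values):

```python
# Does one drop per second overflow a bathtub over roughly nine days?
DROP_VOLUME_ML = 0.05        # ballpark: about 20 drops per millilitre
SECONDS_PER_DAY = 24 * 60 * 60
DAYS = 9                     # "more than a week"
TUB_CAPACITY_L = 150         # typical bathtub capacity, assumed

total_litres = DROP_VOLUME_ML * SECONDS_PER_DAY * DAYS / 1000
print(f"{total_litres:.1f} L accumulated vs. {TUB_CAPACITY_L} L capacity")
# prints: 38.9 L accumulated vs. 150 L capacity
```

With those assumptions the drip adds up to roughly 39 L over nine days, nowhere near a full tub.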

[-] robot_dog_with_gun@hexbear.net 2 points 2 weeks ago

that could've gone catastrophically wrong lmao. LLMs don't do mathematical computation.

[-] SnotFlickerman@lemmy.blahaj.zone 1 points 2 weeks ago* (last edited 2 weeks ago)

"So what’s the point of it all?"

To reduce wages.

Instead of using tech to reduce work and allow humans to thrive and make art, we use tech to make art and force humans into long hours of drudgery and repetitive bitch work, just because CEOs like to watch other people suffer, I guess.

[-] ReCursing@lemmings.world 1 points 2 weeks ago

Art. It's a new medium, get over it.

[-] dingus@lemmy.world 1 points 2 weeks ago

Never used it until recently. Now I use it to vent because I'm a crazy person.

[-] yogthos@lemmy.ml 1 points 2 weeks ago

Learning languages is a great use case. I'm learning Mandarin right now, and being able to chat with a bot is really great practice for me. Another use case I've found handy is using it as a sounding board. The output it produces can stimulate new ideas in my own head, which makes it a good exploration tool that lets me pull on different threads of thought.

[-] boredtortoise@lemm.ee 1 points 2 weeks ago

Documentation work, synthesis, sentiment analysis

[-] Jolteon@lemmy.zip 1 points 2 weeks ago

Making dynamic templates.
