top 50 comments
[-] BlameTheAntifa@lemmy.world 1 points 3 hours ago

This reminds me of some stuff in Charles Stross’ Accelerando. The book describes how AI was constantly filing patents, lawsuits, and all this stuff by itself. It was terrifying as a fictional idea, but here we are, it's real.

[-] merc@sh.itjust.works 53 points 1 day ago

All this really does is show areas where the writing requirements are already bullshit and should be fixed.

Like, consumer financial complaints. People feel they have to use LLMs because when they write in using plain language they get ignored, and they're probably right. It suggests that these financial companies are under-regulated and overly powerful. If they weren't, they wouldn't be able to ignore complaints that aren't written in lawyerly language.

Press releases: we already know they're bullshit. No surprise that now they're using LLMs to generate them. These shouldn't exist at all. If you have something to say, don't say it in a stilted press-release way. Don't invent quotes from the CEO. If something is genuinely good and exciting news, make a blog post about it by someone who actually understands it and can communicate their excitement.

Job postings. Another bullshit piece of writing. An honest job posting would probably be something like: "Our sysadmin needs help because he's overworked, he says some of the key skills he'd need in a helper are X, Y and Z. But, even if you don't have those skills, you might be useful in other ways. It's a stressful job, and it doesn't pay that well, but it's steady work. Please don't apply if you're fresh out of school and don't have any hands-on experience." Instead, job postings have evolved into some weird cargo-culted style of writing involving stupid phrases like "the ideal candidate will..." and lies about something being a "fast paced environment" rather than simply "disorganized and stressful". You already basically need a "secret decoder ring" to understand a job posting, so yeah, why not just feed a realistic job posting to an LLM and make it come up with some bullshit.

[-] ilovepiracy@lemmy.dbzer0.com 16 points 22 hours ago

Exactly. LLMs assisting people in writing soul-sucking corporate drivel is a good thing. I hope this changes the public perception of the whole umbrella of 'formal office writing' (including internal emails, job applications, etc.). So much time-wasting bullshit that produces nothing productive.

[-] merc@sh.itjust.works 6 points 19 hours ago

LLMs assisting people in writing soul-sucking corporate drivel is a good thing

I don't think so, not if the alternative is simply getting rid of that soul-sucking corporate drivel.

[-] JackbyDev@programming.dev 11 points 18 hours ago

Reminds me of the one about

  1. See? The AI expands the bullet point into a full email.
  2. See? The AI summarizes the email into a single bullet point.
[-] JackbyDev@programming.dev 5 points 18 hours ago

Job postings are wild. Like, "Java Spring Boot developer with 8+ years experience" would be fine 90% of the time.

[-] merc@sh.itjust.works 2 points 4 hours ago

Even that is often treated as non-negotiable by the HR people reviewing applicants, when the group that actually needs the dev would probably say "ok, this guy doesn't have 8 years experience, but clearly knows his shit" or "so what if she doesn't have any Spring Boot experience, look at all the rest of this, she'll pick it up in no time".

[-] Matriks404@lemmy.world 29 points 1 day ago* (last edited 1 day ago)

As a person who is interested in linguistics, I wonder how ~~AI~~ LLMs will affect real languages. I wonder if there are any research papers on this.

[-] lvxferre@mander.xyz 7 points 10 hours ago* (last edited 5 hours ago)

I'm not aware of any paper about this; especially with how recent LLMs are, it's kind of hard to detect tendencies.

That said, if I had to take a guess, the impact of LLMs on language will be rather subtle:

  • Some words will become more common because bots use them a lot, and people become more aware of those words. "Delve" comes to my mind. (Urgh. I hate this word.)
  • Swearing will become more common too. I wouldn't be surprised if we saw an uptick of "fuck" and "shit" after ChatGPT was released. That's because those bots don't swear, so swearing is a good way to show "I'm human".
  • Idiosyncratic language might also increase, as a mix of the above and to avoid sounding "bland and bot-like", including letting some small typos go through on purpose.

Text-to-speech, mentioned by @Shelbyeileen@lemmy.world, is another can of worms; it might reinforce non-common pronunciations until they become common. This should not be a big issue in a language like Italian (which uses mostly regular spelling), but it might be noticeable in English.

[-] NigelFrobisher@aussie.zone 10 points 22 hours ago* (last edited 22 hours ago)

I dunno. If people can’t be bothered to write stuff anymore, I doubt they will be bothered to read it either. Also, the model trends towards the mean by its very design.

[-] Mycatiskai@lemmy.ca 2 points 21 hours ago

If people can't be bothered to write anymore, then I will be very picky about what I read. I will probably do more research and make sure it is someone I trust to have written it themselves, not someone who relied on trash machines.

[-] Comment105@lemm.ee 9 points 20 hours ago* (last edited 20 hours ago)

I've been picky about what I read ever since human-written slop seemed to peak in the late 2010s. Articles written by humans to appeal to search engines are almost as worthless as AI slop.

This volume of garbage is certainly much more concerning, though.

[-] Shelbyeileen@lemmy.world 2 points 18 hours ago

Not quite the same, but I'm waiting for the day when people will pronounce street names like the GPS does, instead of how they are actually pronounced. The street Schoenherr, in my neck of the woods, is pronounced "Shane urr" (yes, like the planet Omicron Persei 8, cause Detroit (Day twah) is weird), but the GPS says "Shown her". I'm really curious to see how long it takes for the computer voice to be considered the correct one.

[-] barsoap@lemm.ee 2 points 12 hours ago

The GPS is definitely closer to the proper German pronunciation.

[-] Vespair@lemm.ee 18 points 1 day ago

I am not saying the two are equally comparable, but I wonder if the same "most rapid change in human written communication" could also have been said of the proliferation of computer-based word processors equipped with spelling and grammar checks.

[-] ayyy@sh.itjust.works 23 points 1 day ago

LLM detectors are snake oil, 100% of the time. Anyone claiming otherwise is lying for personal gain.

[-] nyamlae@lemmy.world 2 points 21 hours ago
[-] ayyy@sh.itjust.works 2 points 14 hours ago

They make the claim, so the burden of proof is on them. Please look at the paper; there is so much hand-waving it could be a parade.

[-] nyamlae@lemmy.world 1 points 6 hours ago

That's not how the burden of proof works. Regardless of what they're doing, you're also making a claim, and are refusing to back it up.

[-] Jax@sh.itjust.works 1 points 17 hours ago

The source is their fingers when they typed in the message. Silly goose.

[-] msage@programming.dev 85 points 1 day ago

I just want to point out that there were text generators before ChatGPT, and they were ruining the internet for years.

Just like there are bots on social media, pushing a narrative, humans are being alienated from every aspect of modern society.

What is a society for, when you can't be a part of it?

[-] Schadrach@lemmy.sdf.org 20 points 1 day ago

I just want to point out that there were text generators before ChatGPT, and they were ruining the internet for years.

Hey now, King James Programming was pretty funny.

For those unfamiliar, King James Programming is a Markov chain trained on the King James Bible and the Structure and Interpretation of Computer Programs, with quotes posted at https://kingjamesprogramming.tumblr.com/

4:24 For the LORD will work for each type of data it is applied to.

In APL all data are represented as arrays, and there shall they see the Son of man, in whose sight I brought them out

3:23 And these three men, Noah, Daniel, and Job were in it, and all the abominations that be done in (log n) steps.

I was first introduced to it when I started reading UNSONG.
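
For anyone curious how that sort of bot works: below is a rough sketch of a word-level Markov chain text generator in Python. It's just an illustration of the general technique, not the actual code behind King James Programming, and the corpus file names are made up.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=30):
    """Start from a random key and walk the chain, picking successors at random."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        successors = chain.get(key)
        if not successors:
            break
        output.append(random.choice(successors))
        key = tuple(output[-len(key):])
    return " ".join(output)

# Hypothetical file names; training on both corpora at once means word
# sequences shared between them let the chain wander from one style
# into the other mid-sentence.
corpus = open("kjv.txt").read() + " " + open("sicp.txt").read()
print(generate(build_chain(corpus)))
```

The bot has no model of meaning at all; it just stitches together word sequences that happen to overlap between the two books, which is why the output reads like scripture and a CS textbook at the same time.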

[-] Fedop@slrpnk.net 6 points 22 hours ago

This was such a good idea, so many of these are fire.

then shall they call upon me, but I will not cause any information to be accumulated on the stack.

How much more are ye better than the ordered-list representation

evaluating the operator might modify env, which will be the hope of unjust men

[-] taiyang@lemmy.world 110 points 1 day ago

I'm the type to be in favor of new tech but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I'll be grading them next week. I'm already seeing people try to pass off GPT as their own, but the quality of answers has really dropped in the past year.

Just this last week, I was grading a quiz on persuasion where, for fun, I have students pick an advertisement to analyze. You know, to personalize the experience; this was after the Super Bowl, so we're swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.

60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike Just Do It.

Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It'd be wrong, cause you have to reference the book, but at least it wouldn't be as blatant.

I didn't unilaterally give them 0s, but they usually got it wrong anyway, so I didn't really have to. I did warn them that using it this way on the midterm will likely get them in trouble, though, as it is against the rules. I don't even care that much because, again, it's usually worse quality anyway, but I have to grade this stuff, and I don't want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.

[-] faythofdragons@slrpnk.net 2 points 18 hours ago

Why even cheat on that? The universe has a billion ad examples.

I'm not one of your students, but I do remember how I thought in high school. Both of my parents worked, so I was the one who had to cook dinner and help my little brothers with their homework, and then I had multiple hours of my own homework to do.

While I do enjoy analyzing media, the homework I struggled with would get priority. I was the oldest, so I didn't have anybody to ask for help, and often had to spend much longer than intended on topics I struggled with. So I'd waste the whole night fighting with algebra and chemistry, then do the remaining 'easy' assignments as quickly and carelessly as possible so I could get to bed before midnight. Getting points knocked off for shoddy work is far preferable to getting a zero for not doing it at all, and if I could get to bed at a reasonable time, I wouldn't lose points in the morning class for falling asleep.

It just... makes sense to cheat sometimes.

[-] RunawayFixer@lemmy.world 4 points 22 hours ago

Students cheating is always going to be a thing; only the technology evolves. It's always been an interesting cat-and-mouse game imo, as long as you're not too personally affected (sorry).

I was a student when the internet started to spread and some students had internet at home, while most teachers were still oblivious. There was a French book report due, and 4 kids had picked the same book because they had found a good summary online. 3 of the kids hand-wrote a summary of the summary; 1 kid printed out the original summary and handed that in. 3 kids received a 0, and the 4th got a warning to not let others copy his work :D

[-] taiyang@lemmy.world 3 points 21 hours ago

Lol, well, it sounds like a bad assignment if you can get away with just a summary, although I guess since it's a language class(?) that's more reasonable. I'm not really shaken up over this type of thing, though. I'm not pro-cheating, but it's not about justice or morality; it's cause education is for the students' benefit and they're missing out on growth. We really need more critical thinkers in this world. Like, desperately need them. Lol

[-] RunawayFixer@lemmy.world 3 points 15 hours ago

Yep, a French language class in high school that was too large. If the class had been smaller, then the teacher would have definitely gone for more presentations by the students.

Keep up the good fight, I'm certain that many of your students appreciate what they learn from you.

[-] Shou@lemmy.world 30 points 1 day ago

As someone who has been a teenager: cheating is easy, and class wasn't as fun as video games. Plus, what teenager understands the importance of an assignment? Of the skill it is supposed to make them practice?

That said, I stopped copying summaries when I heard I had to talk about the books I "read" as part of the final exams in high school. The examiner would ask very specific plot questions, often not included in the online summaries people posted... unless those summaries were too long to read. We had no other option but to take it seriously.

As long as there's nothing GPT can't do the work for, they won't learn how to write or do the assignment themselves.

Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points/give 0s.

[-] ICastFist@programming.dev 10 points 1 day ago

Last November, I gave some volunteer drawing classes at a school. Since I had limited space, I had to pick and choose a small number of 9-10yo kids, so I asked the interested students to do a drawing and answer "Why would you like to participate in the drawing classes?"

One of the kids used ChatGPT or some other AI. One of the parts that gave it away was that, while everyone else wrote something like "I want because", he went on with "By participating, you can learn new things and make friends". I called him out in private and he tried to bullshit me, but it wasn't hard to make him contradict himself or admit to "using help". I then told him that it was blatantly obvious he had used AI to answer for him, and that what really annoyed me wasn't so much that he used it, but that he managed to write all of that without reading it, and thought I would be too dumb or lazy to bother reading it or to notice any problems.

[-] pezhore@infosec.pub 182 points 1 day ago

I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look up how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having WiFi and Ethernet ports - confusing network routers with the power tool. Any human/editor would catch that mistake, but here it is.

I can only see this get worse.

[-] null_dot@lemmy.dbzer0.com 112 points 1 day ago

It's not just the internet.

Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.

I genuinely believe that the general effectiveness of written communication has regressed.

[-] based_raven@lemm.ee 4 points 1 day ago

Yep. My work has pushed AI shit massively. Something like 53% of staff are using it. They're using it to write reports for them for clients, all sorts. It's honestly mad.

this post was submitted on 02 Mar 2025
759 points (98.8% liked)
