this post was submitted on 15 Feb 2024
Science
The pace at which AI can generate bullshit not only currently vastly outstrips the ability of individual humans to vet it, but is actually accelerating. We cannot manually solve this by saying "people just need to catch it." Look at YT with CSAM or other federal violations - they literally can't keep up with the content coming in despite having armies of people (with insane turnover, I might add) trying to do it. So the bar has been changed from "you can't have any of this stuff" to "you must put in reasonable effort to minimize it," because we've simply accepted it can't be done with humans - and that's with the assistance of their current algorithms constantly scouring their content for red flags. Bear in mind this is an international, massive company with resources these journals can't even dream of, and almost all this content has been generated and uploaded by individual people.
These people, I'm sure, are perfectly capable of catching AI-generated nonsense most of the time. But as the content gets more sophisticated and voluminous, the problem is only going to get worse. Stuff is going to get through. So we are at a crossroads where we either throw up our hands and say "well, there's not much we can do, good luck separating the wheat from the chaff," or we get creative. And this isn't just in academic journals either. This is crossing into more and more industries, in particular anything that involves writing. Someone is throwing money and resources at getting AI to do it faster and cheaper than people can.
I feel like two different problems are being conflated into one, though.
Point two can contribute to point 1, but for that a bunch of stuff needs to happen. Correct me if I am wrong, but as far as I understand it, the peer-review process is supposed to go something along the lines of:
If at point 3 people don't do the things I highlighted in bold, then to me it seems a bit silly to make this about AI. If at point 4 the editor ignores most feedback from the peer reviewers, then it again has very little to do with AI and everything to do with a broken underlying process.
To summarize: yes, AI is going to fuck up a lot of information; it already has. But by just shouting "AI is at it again with its antics!" at every turn, instead of looking further at other core issues, we will only make things worse.
Edit:
To be clear, I am not even saying that peer reviewers or editors should "just do their job already". But fake papers have been increasingly an issue for well over a decade as far as I am aware. The way the current peer review process works simply doesn't seem to scale to where we are today. And yes, AI is not going to help with that, but it is still building upon something that already was broken before AI was used to abuse it.
I feel like this is the third time people are selectively reading into what I have said.
I specifically acknowledge that AI is already causing all sorts of issues. I am also saying that there is another issue at play, one that might be exacerbated by the use of AI but at its root isn't caused by AI.
In fact, in this very thread people have pointed out that *in this case* the journal in question is simply the issue. https://beehaw.org/comment/2416937
In fact, the only reason people likely noticed at all is, ironically, the fact that AI was being used.
And again, I fully agree: AI is causing massive issues already and disrupting a lot of things in destructive ways. But that doesn't mean all the bullshit out there is caused by AI, even when AI is tangibly involved.
If that still, in your view, somehow makes me sound like a defensive AI evangelist, then I don't know what to tell you...
The fact that you respond specifically to this one highly specific thing, while I have clearly written more, is exactly what I mean.
*shrugs*
No, that's not really what I'm asking for. I'm also not looking for responses that isolate a single sentence from my longer messages and ignore the context. I'm not sure how to make my point any clearer than in my first reply to you, where I started with two bullet points. You seemed to focus on the second, but my main point was about the first. If we do want to talk about standard behavior in human conversation, generally speaking, people do acknowledge that they have heard/read something someone said even if they don't respond to it in detail.
Again, I've been agreeing that AI is causing significant problems. But in the case of this specific tweet, the real issue is a pay-to-publish journal where the peer-review process is failing, not AI. This key point has mostly been ignored. Even if that were not the case, if you want to have any chance of combating the emergence of AI, I think it is pretty reasonable to question whether the basic processes in place are even functioning in the first place. My thesis (again, if this weren't a pay-to-publish journal) would be that this is likely not the case, since in that entire process clearly nobody looked closely at these images. And just to be extra clear, I am not saying that AI will never be an issue, etc. But if reviewing already isn't happening at a basic level, how are you ever hoping to combat AI in the first place?
The context of this tweet, saying "It’s finally happened. A peer-reviewed journal article with what appear to be nonsensical AI-generated images. This is dangerous.", does imply that. I've been responding with this in mind, which should be clear. This is the sort of thing I mean by selective reading: you seemingly take it as me saying that you personally said exactly that. Which is a take, but not one I'd say is reasonable if you take the whole context into account.
And in that context, I've said:
Which I stand by. In this particular instance, in this particular context, AI isn't the issue and the tweet is somewhat clickbait. That makes most of what you argued, while valid concerns, less relevant here. YouTube struggling with moderation, SEO + AI blog spam, etc. are all very real and concerning examples of AI causing havoc. But in the context of me calling a particular tweet clickbait, they are also much less relevant. If you just wanted to discuss the impact of AI in general and step away from the context of this tweet, then you should have said so.
Now, about misrepresenting arguments:
Have you looked back at your own previous comments when you wrote that? Because while having this slightly bizarre conversation, I have gone back to mine a few times, just to check whether I actually did mess up somewhere or said things differently than I thought I did. The reason I am asking is that you have thrown a few of these remarks at me where I could have responded with the above quote myself. Things like "It’s passing the buck and saying that AI in no way, shape, or form, bears any responsibility for the problem."