[-] MagicShel@lemmy.zip 27 points 4 months ago* (last edited 4 months ago)

A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

What the actual fuck? You couldn't spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

So they did. Why are we talking about ChatGPT then? You could just leave that part out. It's useless. Obviously a fake photo has been manipulated. Why bother asking?

[-] Deestan@lemmy.world 21 points 4 months ago

I tried the image of this real actual road collapse: https://www.tv2.no/nyheter/innenriks/60-mennesker-isolert-etter-veiras/12875776

I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.

[-] Atropos@lemmy.world 16 points 4 months ago

God damn I hate this tool.

Thanks for posting this, great example

[-] plantfanatic@sh.itjust.works -3 points 4 months ago* (last edited 4 months ago)

Wait, you’re surprised it did what you asked of it?

There’s a massive difference between asking if something is fake, and telling it it is and asking why.

A person would make the same type of guesses and explanations if given the same task.

All this shows is that you and A LOT of other people just don’t know enough about AI to be able to have a conversation about it.

It even says “suggests”; it’s making no claim that it’s real or fake. The lack of basic comprehension is the issue here.

[-] sem@piefed.blahaj.zone 6 points 4 months ago

A person would have the agency to ask, "Why do you think it's fake?"

[-] plantfanatic@sh.itjust.works 1 points 4 months ago* (last edited 4 months ago)

Why would it have to? Both the AI and a person doing this task already know to do whatever task is put in front of them. For all either of them knows, it’s one of a hundred photos.

You are adding context and instructions that don’t exist. The situation would be that both are doing whatever task is presented to them. A human who stopped to ask would fail and be removed; they failed order number one.

You could also set up a situation where the AI and the human were both allowed to ask questions; the AI won’t do what it’s not asked to. That’s the comprehension that’s lacking.

[-] sem@piefed.blahaj.zone 6 points 4 months ago

When people use a conversational tool, they expect it to act human, which it INTENTIONALLY DOES but without the sanity of a real human.

[-] plantfanatic@sh.itjust.works 0 points 4 months ago* (last edited 4 months ago)

It’s not a conversational tool when you present it with a specific task…

Do you not understand even the basic premise of how AI works?

[-] sem@piefed.blahaj.zone 0 points 4 months ago

When we are talking about LLM chat bots, they have a conversational interface. I am not talking about other types of machine learning. I don't have time to keep responding.

[-] plantfanatic@sh.itjust.works 0 points 4 months ago* (last edited 4 months ago)

I am not talking about other types of machine learning.

Then you are making up your own conversation instead of following the thread?

The person presented a specific task to an AI, so where does a chatbot come in? You seem to be confused about what AI is, and that’s what I pointed out. Thanks for making it clear.

[-] sem@piefed.blahaj.zone 0 points 4 months ago* (last edited 4 months ago)

They are asking ChatGPT. If you think that interface is not conversational, let me know how I can help you.

[-] plantfanatic@sh.itjust.works 0 points 4 months ago* (last edited 4 months ago)

Seriously? A chatbot is one function of an AI, not the other way around. So when you give the AI a different task or set of instructions, it’s no longer the chatbot anymore; it’s whatever function is needed for that task.

I weep for humanity if you’re any indication of the general education on AI…

If you ask it to create an image, are you seriously expecting it to have a conversation and point out where you messed up? That’s not how any of this works, lmfao. “Hey, I need to point out that ducks don’t have scales, and the sky isn’t green.” No, it does what it’s asked. But now suddenly it’s different? Why?

[-] sem@piefed.blahaj.zone 0 points 4 months ago

Please see my above comments.

[-] Weslee@lemmy.world 1 points 4 months ago

I think if a person were asked to do the same, they would actually look at the image and make genuine remarks. Look at the points it has highlighted: the boxes are placed around random spots, and the references to those boxes are unrelated (i.e. yellow talks about branches when there are no branches near the yellow box, and red talks about a bent guardrail when the red box on the guardrail is over an undamaged section).

It has just made up points that "sound correct". Anyone actually looking at this can tell there is no intelligence behind it.

[-] plantfanatic@sh.itjust.works -1 points 4 months ago* (last edited 4 months ago)

Yet that wasn’t the point they even made! Lmfao nice reaching there.

Those would be the same type of points a human would make to accomplish the task.

You seem to be ignoring the facts. It was told the image was fake and told to explain why. Even a human who knew it was real would still do what was asked of them.

The person told the AI a very specific thing to do, with no room for variance. It wasn’t even stated as a question; they made a demand, and any human in the same position would act the same way. If you’re expecting to have to tell a human a hundred times, “yes, the image is real, can you do the task presented?”, is that more efficient and better than the task just being done?

Now, you could also present the task with both being able to question it; the AI would follow those instructions better.

Back to situation one: with the human, you would be constantly interrupted. Is that a good employee, or one you would immediately replace because they can’t even follow basic instructions? AI or human, you would point them to the task at hand. Yes, critical thinking is important, but not for this stupid task. Stop applying instructions and context that never existed in the first place. In a one-for-one example, the AI would question too; if you can’t understand this, you shouldn’t be commenting on AI.

AI sucks, but don’t ignore reality to make your asinine point.

[-] Deestan@lemmy.world 0 points 4 months ago

Wait, you’re surprised it did what you asked of it?

No. Stop making things up to complain about. Or at least leave me out of it.

[-] plantfanatic@sh.itjust.works 2 points 4 months ago* (last edited 4 months ago)

Then what are you doing? Complaining that it did exactly what you instructed it to do?

What else did you expect?

I get that circlejerking against AI is hip and fun, but this isn’t even one of the valid errors it makes. This is just pure human error, lmfao.

[-] WhyJiffie@sh.itjust.works 4 points 4 months ago* (last edited 4 months ago)

Clearly, they asked it a question the average Joe would ask, and it has shown again that it's full of overly confident lies. It did not just reinforce the user's original belief that the image was fake; it also hallucinated a bunch of professional-sounding statements that are false if you take the time to check them. Most people won't check them, though, and will straight-up believe what it spit out and think, "oh, this is so smart! Outrageous that people call me dumb for asking it life advice!"

[-] IcyToes@sh.itjust.works 7 points 4 months ago

They needed time for their journalists to get there. They're too busy on the beaches counting migrant boat crossings.

[-] BanMe@lemmy.world 4 points 4 months ago

I am guessing the reporter wanted to remind people tools exist for this, however the reporter isn't tech savvy enough to realize ChatGPT isn't one of them.

[-] 9bananas@feddit.org 5 points 4 months ago* (last edited 4 months ago)

afaik, there actually aren't any reliable tools for this.

the highest accuracy rate I've seen reported for "AI detectors" is somewhere around 60%; barely better than a random guess...

edit: to be fair, I think that figure is for text/LLM detectors.

kinda doubt images are much better though... happy to hear otherwise if there are better ones!

[-] rockerface@lemmy.cafe 3 points 4 months ago

The problem is that any AI detector, if it's publicly available, can be used to train an AI to fool it.

[-] 9bananas@feddit.org 3 points 4 months ago* (last edited 4 months ago)

exactly!

using a "detector" is how (not all, but a lot of) AIs (LLMs, GenAI) are trained:

have one AI that's a "student" and one that's a "teacher", and pit them against one another until the student fools the teacher nearly 100% of the time. this is what's usually called "adversarial training".

one can do very funny things with this tech!

for anyone that wants to see this process in action, here's a great example:

Benn Jordan: Breaking The Creepy AI in Police Cameras
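edit 2: here's a rough toy sketch of that student/teacher loop, for the curious. purely illustrative: the "teacher" is a one-line statistic and the "student" just hill-climbs, standing in for real networks and gradient descent.

```python
import random

# Toy sketch of the student/teacher loop described above. The detector
# statistic and the hill-climbing "student" are illustrative stand-ins
# for real neural networks trained by gradient descent.

REAL_MEAN = 0.5    # statistic the "teacher" learned from real data
THRESHOLD = 0.05   # how far from real the teacher tolerates

def teacher(sample):
    """Flag a sample as fake if its mean is too far from the real mean."""
    mean = sum(sample) / len(sample)
    return abs(mean - REAL_MEAN) > THRESHOLD

def train_student(steps=5000, size=16, seed=1):
    """Hill-climb a fake sample until the teacher stops flagging it."""
    rng = random.Random(seed)
    fake = [rng.random() * 2 for _ in range(size)]  # starts clearly fake
    for _ in range(steps):
        if not teacher(fake):
            break                       # student now fools the teacher
        i = rng.randrange(size)
        candidate = fake[:]
        candidate[i] = rng.random()     # random tweak to one "pixel"
        old = abs(sum(fake) / size - REAL_MEAN)
        new = abs(sum(candidate) / size - REAL_MEAN)
        if new < old:                   # keep tweaks that look more real
            fake = candidate
    return fake

fake = train_student()
print(teacher(fake))  # False: the detector no longer catches the fake
```

a real GAN replaces the hill-climb with gradient updates on both networks at once, but the end state is the same: the teacher can no longer tell fake from real, which is exactly why publishing a detector hands attackers a training signal.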

[-] Wren@lemmy.today 3 points 4 months ago

My best guess is SEO. Journalism that mentions ChatGPT gets more hits. It might be they did use a specialist or specialized software and the editor was like "Say it was ChatGPT, otherwise people get confused, and we get more views. No one's going to fact check whether or not someone used ChatGPT."

That's just my wild, somewhat informed speculation.

[-] Tuuktuuk@piefed.ee 1 points 4 months ago* (last edited 4 months ago)

Here's hoping the reporter then looked at the image and noticed, "oh, true! That's an obvious spot there!"

[-] Railcar8095@lemmy.world 0 points 4 months ago

Devil's advocate: the AI might be an agent that detects tampering, with an NLP frontend.

Not all AI is LLMs.

[-] MagicShel@lemmy.zip 2 points 4 months ago* (last edited 4 months ago)

A "chatbot" is not a specialized AI.

(I feel like maybe I need to put this boilerplate in every comment about AI, but I'd hate that.) I'm not against AI or even chatbots. They have their uses. This is not using them appropriately.

[-] Railcar8095@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

A chatbot can be the user facing side of a specialized agent.

That's actually how the original chatbots worked. Siri didn't know how to get the weather; it was able to classify the question as a weather question, parse the time and location, and decide which APIs to call in those cases.
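That classify-then-dispatch pattern can be sketched like this (a hypothetical toy: the intent names and keyword rules are made up for illustration, not Siri's actual design):

```python
import re

# Hypothetical sketch of the "classify, then route to an API" pattern.
# The intents and keyword rules are illustrative, not any real
# assistant's design.

INTENTS = {
    "weather": re.compile(r"weather|rain|temperature", re.IGNORECASE),
    "timer": re.compile(r"timer|remind", re.IGNORECASE),
}

def classify(utterance):
    """Return the first intent whose keywords appear in the utterance."""
    for intent, pattern in INTENTS.items():
        if pattern.search(utterance):
            return intent
    return "fallback"

def handle(utterance):
    """Dispatch to the backend a real assistant would call."""
    intent = classify(utterance)
    if intent == "weather":
        # a real system would also parse location and time here
        return "weather-api"
    if intent == "timer":
        return "timer-api"
    return "sorry, I didn't catch that"

print(classify("What's the weather in Oslo tomorrow?"))  # weather
print(handle("Set a timer for ten minutes"))             # timer-api
```

The chatbot part is just the thin conversational layer on top; the actual work happens in whichever specialized backend the classifier routes to.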

[-] MagicShel@lemmy.zip 1 points 4 months ago* (last edited 4 months ago)

Okay I get you're playing devil's advocate here, but set that aside for a moment. Is it more likely that BBC has a specialized chatbot that orchestrates expert APIs including for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I'm wrong, what is the message to the audience? That ChatGPT can investigate just as well as BBC. Which may well be the case, but it oughtn't be.

My second point still stands. If you sent someone to look at the thing and it's fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.

[-] Railcar8095@lemmy.world 0 points 4 months ago

It's not like BBC is a single person with no skill other than a driving license and at least one functional eye.

Hell, they don't even need to go, just call the local services.

For me, it's more likely that they have a specialized tool correctly detecting tampering with the photo than an LLM.

But if you say it's unlikely you're wrong, then I must be wrong I guess.

this post was submitted on 07 Dec 2025
88 points (98.9% liked)
