
This is the humanless future, hurray!

top 26 comments
[-] fearout@kbin.social 45 points 1 year ago* (last edited 1 year ago)

I was curious how current LLMs might handle this with proper instructions, so I asked chatGPT this: “What can you tell me about this Reddit post? Would you write a news article about this? Analyze the trustworthiness of this information:” and pasted the text from the post. Here’s a part of its reply:

This Reddit post appears to be discussing updates in the context of […] Hearthstone and World of Warcraft. However, there are several factual inaccuracies which raise questions about the trustworthiness of the post:

  1. Hearthstone was first released in 2014, not 1994. This error may be a typo or a misunderstanding by the user.
  2. Glorbo does not appear to be a known character in either the Hearthstone or World of Warcraft universe […]
  3. Chen Stormstout is a known character from the Warcraft universe, but the claim about him being the end boss of a new version of Karazhan doesn't match with any known information […]
  4. Dragonflight is not known as a video game developer or associated term in the context of Hearthstone or World of Warcraft […]

Given these points, it seems this post might be either a joke, speculation, or misinformation. […] So I'd recommend taking this information with a grain of caution and verify it from reliable sources.

[…] I would caution against using the information in this post for a news article due to the factual inaccuracies and lack of verifiable sources. Good journalism relies on accuracy, fact-checking, and use of reliable sources. This Reddit post contains several claims that contradict established knowledge, and without further verification from reliable, official sources (like game developers or official news releases), it would not be advisable to use this information in a news article.

So it’s not even an issue with current models, just bad setup. An autoGPT with several fact-checking questions added in can easily filter this stuff.
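As a rough illustration of that setup, here's a minimal sketch of such a filter. The `ask` callable stands in for whatever LLM call you'd actually use, and the questions and threshold are invented for illustration, not taken from any real autoGPT configuration:

```python
# Hypothetical sketch of "autoGPT with several fact-checking questions".
# Each question is phrased so that a "no" answer is a point in the post's favour.

FACT_CHECK_QUESTIONS = [
    "Does this post contradict established knowledge about the game?",
    "Does this post make claims with no verifiable source?",
    "Could this post be a joke, speculation, or misinformation?",
]

def passes_fact_check(post_text, ask, min_score=2):
    """Return True if the post clears enough checks to be used for an article.

    `ask` is any callable that sends a prompt to an LLM and returns its reply.
    """
    score = 0
    for question in FACT_CHECK_QUESTIONS:
        answer = ask(f"{question}\n\nPost:\n{post_text}")
        # Treat a "no" answer to these red-flag questions as a point in favour.
        if answer.strip().lower().startswith("no"):
            score += 1
    return score >= min_score
```

Posts that trip too many red flags simply never reach the article-writing step.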

[-] Marsupial@quokk.au 13 points 1 year ago

That’s not the right approach.

ChatGPT is going off of old information, which isn't good for “breaking” news.

An example is not knowing that Dragonflight is the name of the WoW expansion from late 2022.

So theoretically Glorbo could be a real character who was only introduced recently, or that stout guy could’ve had a character arc that makes this a plausible outcome, and current LLMs would fail to pick up on that.

[-] fearout@kbin.social 7 points 1 year ago

Half of the deleted […] parts are chatGPT mentioning its 2021 knowledge cutoff and suggesting double-checking that info. It did so in this case as well.

If it were an autoGPT with internet access, I think these would prompt an automated online lookup to fact-check it.

[-] ericjmorey@beehaw.org 9 points 1 year ago

Try these questions about sources of recent information that you believe are accurate.

[-] fearout@kbin.social 9 points 1 year ago

So I tried it on this BBC article (a current top story), and this /r/Hearthstone post. It did pretty well. I won't copy-paste the whole reply, but here are some excerpts:

The post you've shared describes a series of recent climate records related to temperature, ocean heat, and Antarctic sea ice, and their concerning implications. [...] The trustworthiness of this information depends on the credibility of the sources cited and the accuracy of the climate records mentioned. Given that these are attributed to credible individuals and institutions like the UN, they likely hold a high degree of trustworthiness. [...] A pattern of increasing intensity and frequency of extreme weather events is consistent with what scientists expect from climate change. [...] However, for a comprehensive analysis, it would be prudent to cross-check this information with authoritative climate research bodies, like the Intergovernmental Panel on Climate Change (IPCC), National Aeronautics and Space Administration (NASA), or National Oceanic and Atmospheric Administration (NOAA).

The post you've shared appears to be an announcement about an upcoming expansion for Hearthstone. [...] The new expansion, named "Legendary Titans and Keepers," seems to introduce some new gameplay elements, including the "Titan" keyword and "Forge" keyword. Assessing the trustworthiness of this information can be tricky without an official source. Ideally, the announcement should be verified on Blizzard Entertainment's official website or through their official social media channels. The details mentioned, such as the gameplay mechanics for the new Titan and Keeper cards, as well as the new Forge keyword, are specific and elaborate, which might lend some credibility to the post. [...] If this information came from an official announcement from Blizzard Entertainment or a reliable insider, it would be newsworthy content for audiences interested in Hearthstone or gaming in general.

So it guessed correctly in both cases and suggested where to fact-check the info to be sure.

[-] fearout@kbin.social 1 points 1 year ago

Did you intend to paste or attach something? Your comment doesn't show anything on kbin besides that one sentence.

I think you may be misunderstanding this comment (no shade). I think they’re not saying “try these (that I will now provide) questions”. They’re saying “try these questions (that you asked in your previous query), and ask those same questions about sourced material that you trust or believe to be true.”

[-] FaceDeer@kbin.social 21 points 1 year ago

It's the humanless present. The AIs will get better in the future, presumably learning the things that human journalists have known for centuries - verify your sources.

For now, though, this was a fun gag.

[-] TwilightVulpine@kbin.social 10 points 1 year ago

Could a language model actually independently discern if a source is trustworthy? Seems that's something difficult to determine when it comes to possible leaks. The kinds of AIs that we have today can't really conceptualize a world outside the texts they process, they can only check based on other texts and user input.

[-] fearout@kbin.social 3 points 1 year ago* (last edited 1 year ago)

I mean, chatGPT with its knowledge cutoff and no internet connection figured it out. See my comment below, I asked it and posted its response.

The guys who run that news website just didn’t include any checks in their algorithm. It doesn’t seem like an LLM problem at this point. A properly set up AutoGPT with the ability to look stuff up online would have no problem sorting through and fact-checking posts to decide which ones to use for an article.

[-] FaceDeer@kbin.social 2 points 1 year ago

It would need to be told to do so, of course. I can think of a couple of approaches. You could have it use a database to track the identities of information sources, so the AI would know whether it was coming from new or well-established sources. It could check to see if the news is appearing in other sources. A lot of this isn't strictly large-language-model-based capability, but it would be using LLMs to interpret its inputs.
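A toy version of those two checks, combining a reputation lookup with cross-source corroboration, might look like this. The reputation table, site names, and threshold are all invented placeholders; a real system would back this with the database described above:

```python
# Publish only if the source is well-established, or if other outlets
# independently carry the same claim. All values here are illustrative.

SOURCE_REPUTATION = {          # would live in a real database
    "reddit.com/r/wow": 0.3,   # user-generated, low trust on its own
    "blizzard.com": 0.9,       # official publisher
}

def is_corroborated(claim, headlines_by_source, min_sources=2):
    """Count how many independent sources carry the same claim."""
    carriers = [site for site, headlines in headlines_by_source.items()
                if any(claim.lower() in h.lower() for h in headlines)]
    return len(carriers) >= min_sources

def should_publish(source, claim, headlines_by_source):
    reputable = SOURCE_REPUTATION.get(source, 0.0) >= 0.8
    return reputable or is_corroborated(claim, headlines_by_source)
```

The LLM's role would be interpreting the scraped posts and headlines into `claim` strings; the gatekeeping itself is ordinary lookups.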

[-] MagicShel@programming.dev 1 points 1 year ago

Analysis of social media through the lens of tracking source reliability would be damned useful even without AI, and if that could easily be done I think it would already have been. I've thought about this for about five years, thinking we could track bots and disinformation based on the patterns of who promotes/upvotes it, but it's beyond my meager means.

[-] nickajeglin@lemmy.one 2 points 1 year ago* (last edited 1 year ago)

I think certain places (Reddit?) have been using algorithms to find and stamp out bots/vote manipulation for quite a while. I remember at least one major wave of bans for smurfed accounts participating in manipulation.

[-] FaceDeer@kbin.social 1 points 1 year ago

Human journalists already do this, though. All I'm suggesting is that these automated journalists should do likewise. That clearly wasn't the case in this particular instance.

[-] barsoap@lemm.ee 1 points 1 year ago

Beep bop, I'm [citation needed] bot, a large language model. The information you referenced in your post can neither be found in official Blizzard material including release notes, nor in community wikis. Have a nice day!

[-] FaceDeer@kbin.social 1 points 1 year ago

Exactly. An AI journalist could easily have a rule that tells it to go web-searching for other sources when something new like this pops up. If the WoW subreddit is going on about Glorbo rumors, check the other WoW fora to see if the same thing is being talked about there. Perhaps it would have found a post where someone talked about what those silly Redditors are up to with their fake Glorbo antics, or at the very least there would have been a suspicious silence.
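The "suspicious silence" rule could be as simple as this sketch, where the site list and the `search` function are placeholders rather than a real API:

```python
# Toy version of the rule described above: before running a story, search a
# few other community sites for the same term and flag a suspicious silence.

OTHER_FORA = ["wowhead.com", "mmo-champion.com", "us.forums.blizzard.com"]

def glorbo_check(term, search):
    """`search(site, term)` returns the number of mentions found on that site."""
    mentions = {site: search(site, term) for site in OTHER_FORA}
    if all(count == 0 for count in mentions.values()):
        return "suspicious silence: subreddit-only topic, do not publish"
    return "corroborated elsewhere: safe to investigate further"
```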

Right now AI journalists are only being used to replace the absolute bottom-of-the-barrel human journalism, because that's really cheap and easy, and since that bottom-of-the-barrel journalism doesn't earn much revenue, finding a cheaper way to churn it out is useful. So AI journalism is getting a bad reputation. I hope that when more refined AI journalists start putting out higher-quality material, it won't stick too badly. I've seen how whole disciplines of technology can be tarred with stereotypes that prevent them from being used in applications where they would be a genuine boon.

[-] jarfil@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

The kinds of AIs that we have today can't really conceptualize a world outside the texts they process

The LLMs we have today process “tokens”, which can represent anything. That they happen to look “more intelligent” to humans when used as “text goes in, text comes out” is a purely human bias, not a limitation of the AI.

Don't be mistaken: LLMs can process, conceptualize, and output anything that can be represented with a token, including the initial, intermediary, or final states of other AIs, for which even humans lack a token/word. That's how multimodal AIs with plugins work right now.

Using text (with or without emojis) as an external input/output system is just a way to interact with humans, with other AIs designed to input/output text, and to feed back (reflect) on themselves.

[-] Zeus@lemm.ee 19 points 1 year ago* (last edited 1 year ago)

the whole archived article is worth reading as well - i particularly like

Reddit user malsomnus hails it as the best change since the quest to depose Quackion, the Aspect of Ducks.

[-] Jamie@jamie.moe 13 points 1 year ago

"Quackion, the Aspect of Ducks" sounds like a title Dwarf Fortress would generate.

[-] frog@beehaw.org 17 points 1 year ago

Reading the Reddit thread was worthwhile, if only for this. Yep, a Redditor tricked IMDB into listing them as the voice actor for Glorbo.

[-] Stillhart@lemm.ee 10 points 1 year ago

I mean, it was a fun gag, but I really AM excited about glorbo!

[-] mPony@kbin.social 8 points 1 year ago

The article is great. The top comment under the article caught my eye:

but at least I'm real…last I checked.

That's very suspicious since that is exactly what I would expect a bot to write.

Point of order: in order for a bot to write that text, it would need to have been already written by someone else. So if a bot didn't write it before, it might the next time.

[-] MagicShel@programming.dev 8 points 1 year ago* (last edited 1 year ago)

That's not entirely accurate. AI can put together novel sentences that have never been written before. Everything is written one token at a time, so something having been written before makes it more likely (as you would expect), but it absolutely does not preclude novel combinations.

[-] thevoyagekayaking@lemmy.nz 8 points 1 year ago

In fairness to the AI, human writers are far from infallible. Anyone remember the Cambodian midget fighting league?

[-] chinpokomon@beehaw.org 4 points 1 year ago

I think a human might consider the meaning of what is being said, whereas an LLM is only going to consider which token is the best one to use next. Humans might not be infallible, but they are presently better at detecting obvious BS that would slip undetected past an AI.

Maybe this is an opportunity we haven't considered: the chance to create a Turing CAPTCHA test. We can't use Glorbo to do so, because it has now been written down, but perhaps it makes sense to have a nonsensical code phrase people can use to identify AIs: markers intentionally added to LLM training data, buried in articles written by human authors, plus a challenge/response that is never written down and only passed verbally through real human-human interactions.

[-] athos77@kbin.social 5 points 1 year ago

I swear Collider is the worst, it's all just ten best/worst [whatever], according to reddit. I absolutely hate that Google continues to include them in their news feed.

this post was submitted on 21 Jul 2023
262 points (100.0% liked)

Technology
