[-] A_Porcupine@lemmy.world 21 points 1 year ago

The saying "ask a stupid question, get a stupid answer" comes to mind here.

[-] UnderpantsWeevil@lemmy.world 39 points 1 year ago

This is more an issue of the LLM not being able to parse simple conjunctions when evaluating a statement. The software takes shortcuts when analyzing logically complex statements, producing answers that are obviously wrong to an actually intelligent reader.

These questions serve as a litmus test of the system's general function. If you can't reliably converse with an AI about two separate ideas in a single sentence (eating watermelon seeds AND driving drunk), there's little reason to believe the system can process more nuanced questions and yield reliable answers where the errors are less obviously wrong (can I write a single block of code that outputs the numbers 1 to 5 and is executable in both Ruby and Python?)
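For what it's worth, that last question does have a real answer. A sketch of one Ruby/Python polyglot, assuming Python 3 and any modern Ruby: it exploits the fact that `0` is truthy in Ruby but falsy in Python, so each interpreter takes a different branch, with the language-specific code hidden inside strings so both parsers accept the whole file.

```python
# Polyglot: the same file runs under both `ruby poly.rb` and `python3 poly.rb`.
# Ruby: 0 is truthy -> evaluates the left branch, whose `or 1` keeps it truthy,
#       so the right-hand eval (Python-only syntax) is never executed.
# Python: 0 is falsy -> skips the left branch entirely and runs the right eval.
# Both languages use `#` comments, so these lines are safe in either.
0 and (eval("puts [1,2,3,4,5]") or 1) or eval('print("\\n".join(map(str, [1,2,3,4,5])))')
```

Under both interpreters this prints the numbers 1 through 5, one per line. The point stands, though: whether an LLM can produce something like this reliably is exactly what examples like the watermelon-seed question cast doubt on.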

The primary utility of the system is bound up in the reliability of its responses. Examples like this degrade trust in the AI as a reliable responder and discourage engineers from incorporating these features into their next line of computer-integrated systems.

[-] TheGreenGolem@lemmy.dbzer0.com 5 points 1 year ago

Unfortunately that ship has sailed, but this is what I've said from the start: don't call them Artificial Intelligence. There is absolutely zero intelligence there.

[-] Even_Adder@lemmy.dbzer0.com 2 points 1 year ago

They didn't use Bing Chat, which is the actual AI-powered search.

[-] Ultraviolet@lemmy.world 6 points 1 year ago

If a search engine is going to put a One True Answer in a massive font above all other results, they should be pretty confident in it. Yes, tech-literate people know the "featured snippet" thing is dogshit and to ignore it, but there are a lot of people that just look at that and think they have their answer.

[-] Even_Adder@lemmy.dbzer0.com 1 points 1 year ago

That's a completely separate problem from confusing two different products.

[-] Chunk@lemmy.world 1 points 1 year ago

We have a new technology that is extremely impressive and is getting better very quickly. It was the fastest-growing product ever. So in this case you cannot dismiss the technology just because it doesn't understand trick questions yet.

[-] UnderpantsWeevil@lemmy.world 1 points 1 year ago

> new technology that is extremely impressive

Language graphs are a very old technology. What OpenAI and other firms have done is drastically increase the processing power and disk space allocated to pre-processing. Far from cutting-edge, this is a heavy-handed brute-force approach that can only happen with billions in private lending to prop it up.

> It was the fastest growing product ever

this post was submitted on 27 Dec 2023
1299 points (96.0% liked)
