submitted 7 months ago* (last edited 7 months ago) by reallyzen@lemmy.ml to c/asklemmy@lemmy.ml

A "natural language query" search engine is what I need sometimes.

Edit: directly reachable with the !ai bang
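(For anyone unfamiliar with bangs: prefix any query in the regular DuckDuckGo search box with `!ai`, e.g. `!ai how do reverse proxies work`, and it drops you straight into the chat interface. The example query is just an illustration; any question works.)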

top 24 comments
[-] FeelThePower@lemmy.dbzer0.com 40 points 7 months ago

I just want this trend to be over.

[-] BigTechMustBurn@lemmy.ml 8 points 7 months ago
[-] BrikoX@lemmy.zip 15 points 7 months ago

Useless. Unless you are dumb enough to trust the result without verifying it yourself. And if you do verify it, you end up spending more time than a regular search would have taken.

[-] umbrella@lemmy.ml 5 points 7 months ago

I find it's useful for dipping your toes into a new topic, summarized in a neat way. Most of the actual search results that do that are now AI garbage too anyway.

Of course, you should always verify.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone -1 points 7 months ago* (last edited 7 months ago)

I think that's a little unfair: not everyone has the know-how to verify, and not everyone who can has the know-how to do original research on every potential topic they want to learn about.

If we all went by your logic here, none of us would put any stock in books, essays, encyclopedias, or anything else.

Yes, comprehending what you read is important, but expecting everyone to do original research on everything they want to learn is just not practical.

AI can be a valuable tool, in addition to critical thinking skills, if used properly.

[-] BrikoX@lemmy.zip 11 points 7 months ago

You are missing the point. You don't have to become a subject expert to verify the information. Not all sources are the same: some are incorrect on purpose, some are incorrect due to lax standards. As a thinking human being, you can decide to trust one source over another. But LLMs see all the information they are trained on as 100% correct, so they can generate factually incorrect information while presenting it to you as 100% factually correct.

Using LLMs as a shortcut to find something is like playing Russian roulette: you might get correct information 5 out of 6 times, but that one time is guaranteed to be incorrect.

No, I understood that. That's why I said "if sourced ethically & responsibly."

[-] Vendetta9076@sh.itjust.works 8 points 7 months ago

If you think that LLMs are anything like encyclopedias, you fundamentally misunderstand what an LLM is. It's a storyteller. It's not designed to be right; it's designed to be engaging.

Encyclopedias are designed to be knowledge bases, things you can rely on to give correct answers. LLMs are not. They can be pushed towards that, but their very foundation is antithetical to that, and it makes them very hard to believe.

If you think that LLMs are anything like encyclopedias, you fundamentally misunderstand what an LLM is.

I never said I think they're anything like encyclopedias; I said that being so skeptical that you feel you have to personally verify every little thing you hear or read or watch would be akin to not trusting second- or third-party sources, such as encyclopedias, books, essays, documentaries, expert opinions, etc.

It's a storyteller. It's not designed to be right; it's designed to be engaging.

That heavily depends on how it's designed and for what purpose. It is not a hard-and-fast rule.

their very foundation is antithetical to that, and it makes them very hard to believe.

Current iterations maybe. But future iterations will improve. As they say, it's a learning process.

[-] Vendetta9076@sh.itjust.works 6 points 7 months ago

That heavily depends on how it's designed and for what purpose. It is not a hard-and-fast rule.

Every current LLM is built this way, so it is a hard-and-fast rule.

Current iterations maybe. But future iterations will improve. As they say, it's a learning process.

I'm only talking about current iterations. No one here knows what the next iterations will be, so we can't comment on them. And right now it's incredibly foolish to believe what an LLM tells you. They lie, like a lot.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone -1 points 7 months ago* (last edited 7 months ago)

Every current LLM is built this way, so it is a hard-and-fast rule.

No, that is a trend, not a rule, and I would argue the trend isn't even universal. Claude, in my experience of using it, seems to be designed to be more conversational and factual, not strictly entertaining.

And right now it's incredibly foolish to believe what an LLM tells you. They lie, like a lot.

I never said you should believe everything an LLM says. Of course a critical mind is important, but one can't just assume any answer they give is wrong either, simply because they're LLMs. Especially at this stage of LLM development; the technology is still maturing, still in its infancy.

I'm only talking about current iterations. No one here knows what the next iterations will be, so we can't comment on them.

Generally, the more a technology matures out of its infancy, the better it becomes at the job it's designed for. If an AI is designed to be entertaining, then yes, it will get better at that in time; but likewise if it's designed for factual accuracy. And I already said what I think about the current state of development in that regard.

Therefore, I think it's a reasonable assumption that as time goes on, the frequency of hallucinations will go down. We're still working out the kinks, as it is.

[-] Vendetta9076@sh.itjust.works 2 points 7 months ago* (last edited 7 months ago)

Rule or trend, whatever word you use is semantics at this point. And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way. We have proof they are all built the same way.

If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?

We can sit down and speculate all day about what could be, but that has no bearing on what is, which is the entire point of this discussion.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone -1 points 7 months ago* (last edited 7 months ago)

Rule or trend, whatever word you use is semantics at this point.

Hardly. There is a very clear distinction between a rule & a trend.

And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way. We have proof they are all built the same way.

They are not all built the same, though. Claude, for instance, is built with a framework of values called "Constitutional AI". It's not perfect, as the developers even state, but it is a genuine step in the right direction compared to many of its contemporaries in the AI space.

If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?

Humans are not tools that can be improved upon. They are sentient beings that have conscious choice. LLMs are the former, and are not the latter.

They are not 1:1 comparisons as you claim.

[-] Vendetta9076@sh.itjust.works 1 points 7 months ago

You are wrong and tiresome. Goodbye.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 1 points 7 months ago* (last edited 7 months ago)

And yet I've provided sources for each and every one of my assertions, while you have not.

Good day.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 5 points 7 months ago* (last edited 7 months ago)

I actually think it's really fucking (ducking?) cool.

I'm not gonna lie: it actually changed my perception of AI chat engines.

I truly believe now that it CAN be very good as a technology if used (and sourced) ethically. ChatGPT is very problematic in this respect, but Claude—though limited as a result—seems like a good step in the right direction.

[-] BaroqueInMind@lemmy.one -2 points 7 months ago* (last edited 7 months ago)

You realize "Claude" literally just using an API for ChatGPT 3.5 and using Bing is better because it's leveraging GPT version 4? GPT-4 is multi modal and significantly faster. And also DDG AI is just as intrusive as Bing AI in the shit that it is tracking from you.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 0 points 7 months ago* (last edited 7 months ago)

Can you please provide a source for that claim?

Because I can find multiple sources that state they use separate LLMs, but not one that states Claude uses ChatGPT's LLM:

Claude's 195K context exceeds ChatGPT's 4K context in GPT-3.5.

Claude 3 was trained with data up to August 2023. In contrast, ChatGPT was trained on data leading up to January 2022.

Source: "4 things Claude AI can do that ChatGPT can't", ZDNet, 2024-04-20

ChatGPT [and] Claude ... all use different language models (LLMs) to process and respond to prompts ...

ChatGPT uses GPT-3.5 and GPT-4. If you're using the free version of ChatGPT, you'll be interacting with GPT-3.5. But if you're using ChatGPT Plus, OpenAI's paid chatbot version, you'll be interacting with GPT-4.

Claude, created by Anthropic, uses its most recent LLM version, Claude 2.1.

Source: "ChatGPT vs. Perplexity vs. Claude: AI Chatbot Tools Compared", How-To Geek, 2023-12-11

 

Also, Claude has a number of privacy & ethical restrictions baked into its protocols:

https://support.anthropic.com/en/articles/7996885-how-do-you-use-personal-data-in-model-training

[-] BaroqueInMind@lemmy.one 0 points 7 months ago* (last edited 7 months ago)

It literally says it when you first click on the fucking link OP provided:

DuckDuckGo AI Chat is a private AI-powered chat service that currently supports OpenAI’s GPT-3.5

Corollary:

Some key differences that should convince you Claude is useless and was only added by DDG to try to differentiate themselves, but which ended up being a weak attempt that leans on other companies and sacrifices privacy:

  1. Company: Claude is developed by Anthropic, while ChatGPT is developed by OpenAI.

  2. AI Model: Claude has three versions: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. ChatGPT has different versions like GPT-3.5, GPT-4, and GPT-4 Turbo.

  3. Context Window: Claude can process up to 200,000 tokens (and up to 1,000,000 tokens for certain use cases), while ChatGPT can process up to 32,000 tokens.

  4. Internet Access: ChatGPT has internet access, while Claude does not.

  5. Image Generation: ChatGPT can generate images (DALL·E), while Claude cannot.

  6. Supported Languages: Claude officially supports English, Japanese, Spanish, and French, but in testing, it supported even less common languages. ChatGPT supports 95+ languages.

  7. API Pricing: Claude offers cheaper API access compared to ChatGPT.

  8. Capabilities: ChatGPT is more versatile, with features like image generation and internet access. Claude, on the other hand, offers a much larger context window, meaning it can process more data at once (see the token-counting sketch after the sources below).

Source:

(1) Claude vs. ChatGPT: What's the difference? [2024] - Zapier. https://zapier.com/blog/claude-vs-chatgpt/.

(2) Claude AI vs ChatGPT: Which is Better? | Lifehacker. https://lifehacker.com/tech/claude-ai-versus-chatgpt-which-is-better.

(3) Claude vs ChatGPT: Which is Better in 2024? | EM360Tech. https://em360tech.com/tech-article/claude-vs-chatgpt.

(4) What Is the Difference Between Claude 2 and ChatGPT? [2023]. https://aboutechs.com/what-is-the-difference-between-claude-2-and-chatgpt/.
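To make those context-window numbers a bit more concrete, here is a minimal token-counting sketch using the open-source tiktoken library. It only approximates OpenAI-style tokenization; Claude uses its own tokenizer, and the file name is a placeholder, so treat the result as a ballpark figure.

```python
# Rough illustration of what a "context window" limit means in practice:
# count how many tokens a document consumes and check whether it fits.
import tiktoken

# cl100k_base is the encoding used by GPT-3.5/GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

with open("some_long_document.txt") as f:  # placeholder file name
    text = f.read()

tokens = len(enc.encode(text))
print(f"Document length: {tokens} tokens")
print("Fits in a 32,000-token window (ChatGPT):", tokens <= 32_000)
print("Fits in a 200,000-token window (Claude 3):", tokens <= 200_000)
```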

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 1 points 7 months ago* (last edited 7 months ago)

Except DuckDuckGo AI Chat ≠ Claude

Claude = Anthropic's AI

ChatGPT = OpenAI's AI

DuckDuckGo AI Chat = a chat framework that a) lets you choose between ChatGPT and Claude, and b) acts as an intermediary between you and the model you choose, ensuring that the model does not remember conversations once you refresh your browser tab. That gives you relative privacy and means your conversations through DDG are not used to train the AI you chose.
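If it helps to picture what that intermediary role looks like, here is a minimal sketch of a stateless, anonymizing chat proxy. It is purely illustrative: the framework, upstream endpoint, and request shape are assumptions made for the example, not DuckDuckGo's actual implementation.

```python
# Illustrative sketch only -- NOT DuckDuckGo's real code. A stateless proxy:
# the browser sends a prompt, the proxy forwards only that text to an
# upstream model API under its own key, and nothing is logged or stored,
# so no conversation history survives a page refresh.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

# Hypothetical upstream endpoint and key; the OpenAI-style payload below is
# an assumption for illustration.
UPSTREAM_URL = "https://api.example-llm-provider.com/v1/chat/completions"
UPSTREAM_KEY = "proxy-owned-api-key"  # the user's own identity is never sent

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.json.get("prompt", "")
    upstream = requests.post(
        UPSTREAM_URL,
        headers={"Authorization": f"Bearer {UPSTREAM_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            # Only the prompt text is forwarded: no IP, cookies, or account info.
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    answer = upstream.json()["choices"][0]["message"]["content"]
    # Nothing is persisted; once the response is returned, the proxy has no
    # memory of the exchange.
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(port=8080)
```

The privacy property falls out of the structure: the upstream provider only ever sees the proxy's key and the prompt text, and because nothing is stored between requests, there is no history to train on or leak.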

As for your list of points, your original point was that Claude is just another chatbot using the ChatGPT LLM, not that one is better than the other. I assume the fact that you now state they're different means you agree with my original point? Cool, thread closed, I guess.

[-] BaroqueInMind@lemmy.one 2 points 7 months ago

Thanks for clarifying that I am really dumb. Also thanks for the informative summary; their tool sounds really nice from a privacy standpoint.

[-] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 2 points 7 months ago* (last edited 7 months ago)

You're cool. Sorry if I came across as douchey.

That being said, you're not dumb; you just didn't know.

And yeah it's not perfect, not by a LONG shot, but I like it. I wouldn't have ever given it a try if it wasn't for DDG providing such an interface, and for free no less. (I'm broke. Lol.)

[-] Fizz@lemmy.nz 2 points 7 months ago

After this post I started using it, and I actually really like it. However, I wish it would search instantly instead of requiring me to click again.

[-] Omega_Haxors@lemmy.ml 1 points 7 months ago* (last edited 7 months ago)

Seriously?? I guess... it makes sense; DDG stopped being good like 2 or 3 years ago.

this post was submitted on 22 Apr 2024