Yes, and I largely disagree with it :/
https://stract.com/ is the new kid on the block
Lol. I typed the name of my hometown and the first two results were escort sites from that area.
I mean, either it knows me really well and its privacy claims are wrong 🤭 Or it has a funny way of ranking results.
Thanks
On the other hand, it doesn't really matter so much anymore.
LLMs are the new search. I can ask the actual question I have and get an answer. If it's not exactly what I need, I can ask follow-up questions to narrow it down.
Contrast that with a search engine, which just gives me a ton of links to sift through to see whether they actually answer my question or are just clickbait.
Of course there are still times when you need search, like finding an actual website or a source reference, but for me the need is greatly reduced now.
Be careful relying on LLMs for "searching". I'm speaking from experience here: getting genuinely accurate results from the current generation of LLMs, even with RAG, is difficult. You might get accurate results most of the time (even 80% or more), but the inaccurate ones can be hard to spot because of the confidence with which models present their output when hallucinating.
Also, if your LLM isn't doing retrieval-augmented generation (RAG), it isn't actually searching anything and won't find results more recent than its training data.
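To make the distinction concrete, here's a minimal sketch of the RAG idea. Everything in it is illustrative: the corpus, the naive keyword scoring, and the placeholder where a real model call would go. It's not any particular library's API, just the shape of the technique.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Toy corpus and naive keyword retrieval, purely for illustration.

CORPUS = {
    "doc1": "Stract is an open source search engine written in Rust.",
    "doc2": "Retrieval-augmented generation grounds LLM answers in retrieved text.",
    "doc3": "An inverted index maps each term to the documents containing it.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query: str) -> str:
    """Build a prompt grounded in retrieved text.

    A real system would send this prompt to an LLM; the model call is
    left out here as a placeholder.
    """
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(answer("what is retrieval-augmented generation?"))
```

The point is that the model only sees what the retriever hands it, so the answers can stay current with the corpus. Without that retrieval step, the model can only recite (or hallucinate) from its training data.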
I know. But I'm often not really looking for accuracy. I just need to know something for myself. Most of the stuff I look up is absolutely not critically important. It's not like I'm trying to write a PhD dissertation or something.
I know it can be inaccurate, but I can verify the results (and they're usually totally fine).
I think you're underestimating how huge of an undertaking a half-decent search index is, much less a good one.
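For what it's worth, the core data structure (an inverted index) fits in a few lines; the undertaking is crawling, deduplicating, ranking, and operating it at web scale. A toy sketch, with an illustrative two-document corpus and whitespace tokenization standing in for real text processing:

```python
from collections import defaultdict

# Toy inverted index: term -> set of document ids.
# The structure is trivial; building and ranking one over billions
# of pages is the hard part.
docs = {
    1: "open source search engines",
    2: "search engines need huge indexes",
}

index: dict[str, set[int]] = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# Query: intersect the posting sets of the query terms.
query = "search indexes"
hits = set.intersection(*(index.get(t, set()) for t in query.lower().split()))
print(hits)  # {2}
```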