All the posts about Reddit blocking everyone except Google and Brave got me thinking: what if SearXNG was federated? I.e., when data is retrieved via a provider's API, that data is then federated to all other instances.

It would spread the API load out amongst instances, removing the API bottlenecks that come from search providers.

It would allow for more anonymous search, since users could cycle between instances and get the same results.

Geographic bias would be a thing of the past.

Other than ActivityPub overhead and storage, which could be reduced by federating text-only content, I fail to see any downside.

Thoughts?

[-] kbal@fedia.io 61 points 1 month ago

I think you are not a computer programmer. Trying to build an index of the web by querying other search engines is not an efficient or sensible way to do things. Using ActivityPub for it is insane. Sharing query results in the obvious way might help a little during events where everyone searches for the same thing all at once, but in a relatively small pool of relatively sophisticated Internet users I don't think that happens often enough to justify the enormous amount of work and complexity.

On the other hand a distributed web crawler that puts its results in a free and decentralized database (one appropriate to the task; not blockchain) might be interesting. If the load on each node could be made light enough and the software simple enough that millions of people could run it at home, maybe it could be one way to build a new search engine. If that needs doing and someone has several hundred hours of free time to get it started.
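A toy, in-memory sketch of what "distributed crawlers feeding a shared index" could look like. Everything here is hypothetical: the dict-backed `SharedIndex` stands in for a real decentralized store (e.g. a DHT), and the crawler just tokenizes text it is handed instead of fetching pages.

```python
# Illustrative sketch only: nodes crawl pages and publish
# (term -> url) postings into a shared index. A real system would
# replace SharedIndex with a decentralized database such as a DHT.
from collections import defaultdict


class SharedIndex:
    """Stand-in for a decentralized key-value store."""

    def __init__(self):
        self._postings = defaultdict(set)  # term -> set of URLs

    def publish(self, term, url):
        self._postings[term].add(url)

    def lookup(self, term):
        return sorted(self._postings[term])


class CrawlerNode:
    """One home node: crawls a few pages, publishes postings."""

    def __init__(self, index):
        self.index = index

    def crawl(self, url, text):
        # Tokenize the page and publish one posting per unique term.
        for term in set(text.lower().split()):
            self.index.publish(term, url)


index = SharedIndex()
node_a = CrawlerNode(index)
node_b = CrawlerNode(index)
node_a.crawl("https://example.org/a", "federated search engines")
node_b.crawl("https://example.org/b", "federated crawling at home")
print(index.lookup("federated"))
# -> ['https://example.org/a', 'https://example.org/b']
```

The point of the sketch is that no single node needs the whole index; each one only publishes what it crawled, and queries are lookups against the shared store.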

[-] hendrik@palaver.p3x.de 27 points 1 month ago* (last edited 1 month ago)

If you're looking for a distributed crawler and index:

https://en.wikipedia.org/wiki/YaCy

YaCy already exists and has been around for two decades.

[-] fmstrat@lemmy.nowsci.com 6 points 1 month ago

This is close to what I was thinking, but rather than crawling independently, leverage the API results from queries to build a list of sites (and then perhaps crawl). Potentially a tag index of sorts. I'm not solid on any idea as I haven't investigated SearXNG enough to see how it works under the hood, but yes, on the same plane of thought.

[-] Max_P@lemmy.max-p.me 11 points 1 month ago

I ran a YaCy instance for a while, about a decade ago. It does federate index requests, and when you search it propagates the request across a bunch of nodes. When my node came online it almost immediately started crawling stuff, and it did get a bunch of search queries. But the network was still pretty small back then and the search results were... not great. That's the price of independence from Google's and Microsoft's giant server farms; it's hard to compete with that size.

But at the rate Google and Bing are enshittifying, I think it's worth revisiting.

Using ActivityPub for this would be immensely wasteful. It's just not feasible for all instances to have the whole index, because it's so large; back when I tried it, the network already had several TBs worth of indexed pages. This is firmly in the realm of distributed P2P systems. One could have an ActivityPub plugin, however, to receive updates from social media near-instantly and index those immediately with less overhead. But you still want to index Wikipedia, forums, blogs, whatever the crawlers can find.

[-] hendrik@palaver.p3x.de 5 points 1 month ago

Sure. SearX is a meta-search engine: it only forwards queries to other search engines and aggregates the results. YaCy, on the other hand, is itself a search engine; it has the data available and doesn't query other engines. In theory you could combine the two concepts in one piece of software, but that requires some clever thinking. The ranking returned by (say) Google only applies to the exact search term, so it's questionable whether you can store it and do anything useful with it except when another user searches for exactly the same thing. The returned teaser texts are also very short and tailored to the search query, so they may be useless too. It'd be hard.
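To make the "only the exact same search term" limitation concrete, here is a minimal sketch of an upstream-result cache, with all names hypothetical. The cache key is just the normalized query string, so two queries with the same words in a different order miss each other, which is exactly why stored upstream rankings are so hard to reuse.

```python
# Minimal sketch: cached upstream results are only reusable when the
# normalized query string matches exactly. Names are illustrative.
import time


class QueryCache:
    def __init__(self, ttl=3600):
        self.ttl = ttl
        self._store = {}  # normalized query -> (timestamp, results)

    @staticmethod
    def _key(query):
        # Lowercase and collapse whitespace; word ORDER is preserved.
        return " ".join(query.lower().split())

    def get(self, query):
        entry = self._store.get(self._key(query))
        if entry is None:
            return None
        stamp, results = entry
        if time.time() - stamp > self.ttl:
            return None  # expired
        return results

    def put(self, query, results):
        self._store[self._key(query)] = (time.time(), results)


cache = QueryCache()
cache.put("YaCy search", ["https://yacy.net/"])
print(cache.get("yacy  search"))  # hit: same terms, same order
print(cache.get("search yacy"))   # miss: different term order
```

Anything smarter than this (reusing results across related queries) would need its own ranking logic, which is the "clever thinking" part.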

One thing you could do is crawl the results that users actually click on. And I think YaCy already does that. AFAIK they had a browser add-on or a proxy or something to intercept visited pages (and hence search results).

[-] seang96@spgrn.com 2 points 1 month ago

I really want to use this, but from what I read it basically requires a minimum of 20-30GB of RAM to be performant. Also, the documentation appears to be a mess and highly outdated. I'd also want to cluster it internally while still connecting with outside peers, which seems possible, but the large resource requirement makes that less feasible with my setup.

[-] Buelldozer@lemmy.today 13 points 1 month ago* (last edited 1 month ago)

basically requires a minimum of 20-30GB of RAM to be performant.

That's odd, the project page states 256 Megabytes and practically speaking that's nothing. Where did you find 20-30G? Are you sure you're not confusing the memory requirement with the suggested free hard drive space?

Even if it does need 32 GB of RAM to perform well, that's not a very high hurdle. 32 GB of DDR4 can be had used for less than $75. Toss that in an old 8th- or 9th-gen Core i5 desktop, install your preferred flavor of Linux, add Docker, and you're off to the races.

[-] Im_old@lemmy.world 4 points 1 month ago

I've run it in containers and it never used that many resources. The whole server (running a few dozen containers) had 32 GB, and it wasn't noticeably impacted.

[-] hendrik@palaver.p3x.de 3 points 1 month ago

That is misinformation. It doesn't need anywhere close to that amount of RAM. It's pretty much like other webapps, and I used to run it on an old computer. It'll fill up your hard disk, though, if you let it.

[-] seang96@spgrn.com 1 points 1 month ago

Well, initial setup was definitely interesting. I didn't want to expose port 8090 and wanted it behind a web proxy; I finally got that working and actually received my first remote crawl overnight. I had to change to 80/443 internally so it would map correctly for P2P connections; the public port setting apparently doesn't cut it. I kinda dislike the whole setup with it micromanaging CPU load, but otherwise it doesn't seem atrocious for a new peer, at least. I guess this and the web proxy problems are likely awkward due to the age of the software.

[-] seang96@spgrn.com 1 points 1 month ago

There also seem to be a lot of settings, so perhaps they had it misconfigured. It's also Java, so I wouldn't put it past such a monolith of a Java program to require that much to be performant. Perhaps I'll try a cluster of them and see how it fares.

[-] aldalire@lemmy.dbzer0.com 3 points 1 month ago

One of the things that can get annoying about searxng is that often search engines will rate limit if a lot of people are using one searxng instance. Maybe a “federated” approach would be, if results are rate limited -> send query to another trusted searx instance -> receive the results and send back to user. That way, people can stick to their favorite searxng instance without having to manually change their instance if the search engines were rate limiting.
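A minimal sketch of that fallback idea, with everything hypothetical: `RateLimited`, the instance objects, and the fake in-process "instances" standing in for real SearXNG servers behind HTTP.

```python
# Hedged sketch: try the local instance first; on a rate limit,
# forward the query to trusted peer instances in order.
# RateLimited and the instance classes are illustrative stand-ins.

class RateLimited(Exception):
    pass


def federated_search(query, local, peers):
    """Return results from local, falling back to peers on rate limits."""
    for instance in [local, *peers]:
        try:
            return instance.search(query)
        except RateLimited:
            continue  # this instance is throttled; try the next one
    raise RateLimited("all instances throttled for: " + query)


class FakeInstance:
    """In-process stand-in for a remote SearXNG instance."""

    def __init__(self, name, limited=False):
        self.name, self.limited = name, limited

    def search(self, query):
        if self.limited:
            raise RateLimited(self.name)
        return [f"{self.name}: result for {query!r}"]


local = FakeInstance("searx.local", limited=True)
peers = [FakeInstance("searx.peer1"), FakeInstance("searx.peer2")]
print(federated_search("reddit api", local, peers))
# -> ["searx.peer1: result for 'reddit api'"]
```

The nice property is that users keep their favorite instance as the entry point; the fallback only kicks in when that instance's upstream engines throttle it.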

[-] fmstrat@lemmy.nowsci.com 3 points 1 month ago

Well, I am, including products in the Fediverse. And I never said federate the search queries.

Trying to build an index of the web by querying other search engines is not an efficient or sensible way to do things.

Never made this suggestion.

On the other hand a distributed web crawler that puts its results in a free and decentralized database

Now you're getting there.

[-] kbal@fedia.io 3 points 1 month ago

Okay, sorry! There's still a long way to go before the idea becomes sufficiently well specified to make much sense to me, though. Perhaps an examination of YaCy could give you a concrete example of the ways in which such things are complicated. One would need to do much better to end up with a suitable replacement for the ways many of us use searx.

It was wanting to use ActivityPub and the "I fail to see any downside" which led me to read the rest of your post in a way that might've been overly pessimistic about its merits.

[-] fmstrat@lemmy.nowsci.com 3 points 1 month ago

Yeah, another user suggested passing the request along to other instances when API limits are hit. That sounds like a better model for SearXNG specifically.

[-] mesamunefire@lemmy.world 5 points 1 month ago

I self-host with YunoHost; it's a good way to not bog down the system.

[-] mesamunefire@lemmy.world 16 points 1 month ago

I recall there is a federated search engine... somewhere. Anyone know what it was called?

[-] toothbrush@lemmy.blahaj.zone 18 points 1 month ago
[-] kbal@fedia.io 7 points 1 month ago* (last edited 1 month ago)

Ah, I wondered if something like that had been tried before. Looks like it is maybe still running: https://yacy.net/

The demo isn't giving me useful search results.

[-] Buelldozer@lemmy.today 8 points 1 month ago* (last edited 1 month ago)

There have only been about 700 YaCy peers online in the last 30 days, which is pretty low for a "crowd-sourced" search engine, especially since many of those are, I think, temporary peers that come and go. It looks like it has maybe only 200 "master" servers, which wouldn't be nearly enough to keep up with the Internet these days.

The good news is that if there are websites/URLs you care about, you can point your own YaCy instance at them and schedule crawls to keep up with content changes.

I remember reading about YaCy some years ago, and now that I've bumped into it again it's sparked my interest. I may stand up a Docker instance and play with it for a while. If nothing else it could make a very useful "arrrrr" search engine.

[-] Wxnzxn@lemmy.ml 6 points 1 month ago

I ran an instance for a while out of curiosity a few years back. Building the database seemed to work fine and looked like a good idea; it was fun to watch the connections with other servers and my crawler filling in holes of unknown space. But I think the search algorithm itself was (and most likely still is) not sophisticated enough: it just didn't give relevant results often enough, and it was extremely vulnerable to very simple SEO tactics that push trash to the top.

[-] seang96@spgrn.com 8 points 1 month ago

Besides YaCy, there is a project for building decentralized apps that lists search as an example. It's very early, and nothing has been built on it yet.

https://github.com/freenet/freenet-core

[-] NataliaTheDrowned2@kbin.run 5 points 1 month ago

There was an NLnet project for the former SearX before it got discontinued.

[-] catloaf@lemm.ee 2 points 1 month ago

So everyone stores a part of the search index? I think you've invented a machine-readable website index with extra steps.

[-] fmstrat@lemmy.nowsci.com 2 points 1 month ago

Hah, could be.

[-] BaroqueInMind@lemmy.one 2 points 1 month ago

This is a great idea that will never work because it's too expensive to maintain.

this post was submitted on 26 Jul 2024
106 points (92.7% liked)

Fediverse
