Here's what interests me: obviously on a single instance you need to scale the infrastructure as more users join. But how much do you need to scale to account for the usage of federated instances?
I ask because the answer in this thread is broadly "it's sustainable because each instance is low-cost, and you can always add more instances." But that doesn't hold if, after the number of instances grows 100x, every existing instance faces higher costs even without gaining many users, because each one receives more messages from federated instances and has to download and store content from those instances for its local users.
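To make that concrete, here's a toy model (the function and every number in it are invented, just to illustrate the shape of the problem). As far as I understand, a Lemmy community pushes each activity to a subscribing instance once, no matter how many local users follow it, so inbound load should scale with how many distinct remote communities my users follow and how busy those communities are, rather than directly with the total number of instances:

```python
# Toy model of inbound federation load; all numbers are invented.
# A remote community pushes each activity (post/comment/vote) to my
# instance once if at least one local user subscribes, so the driver is
# the union of subscribed remote communities, not the instance count.

def inbound_activities_per_day(distinct_remote_communities: int,
                               activities_per_community_per_day: float) -> float:
    """Each subscribed remote community sends its activity stream once,
    however many local users follow it."""
    return distinct_remote_communities * activities_per_community_per_day

# Small instance, users collectively following 500 remote communities:
print(inbound_activities_per_day(500, 200))    # ~100,000 activities/day

# The network grows 100x, but if my users still follow the same 500
# communities, my costs only rise insofar as those get busier (say 10x):
print(inbound_activities_per_day(500, 2_000))  # ~1,000,000 activities/day
```

So under that (admittedly crude) model, my worry reduces to: how much busier do the communities my users already follow get as the network grows?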
That's a very interesting question and I'm not sure of the answer.
Obviously, on some level, the cost of the infrastructure scales with the number of people using it. But so does the ability to crowdfund: if there are 100x more instances, then theoretically there are 100x more potential donors to meet the cost.
One clear way to tilt the scaling in our favor would be to use instances with clear themes and purposes. If everybody on a particular instance is interested in the same content, that reduces wasted computational resources compared to an instance whose users are interested in different topics, and thus subscribed to a much wider variety of communities.
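A quick sketch of why theming helps (the community names are made up; the set arithmetic is the point). What an instance has to fetch and store is roughly the union of its users' remote subscriptions, and overlapping interests shrink that union:

```python
# Toy illustration with invented communities: the cost driver is the
# *union* of remote communities an instance must replicate, so
# overlapping interests shrink it dramatically.

themed_instance = [
    {"programming", "rust", "linux", "selfhosted"},        # user A
    {"programming", "linux", "opensource", "selfhosted"},  # user B
    {"rust", "programming", "opensource"},                 # user C
]

general_instance = [
    {"gardening", "astronomy", "boardgames", "cooking"},   # user A
    {"hiking", "photography", "jazz", "woodworking"},      # user B
    {"chess", "birding", "homebrewing", "poetry"},         # user C
]

def communities_to_replicate(users):
    """An instance fetches and stores each remote community once,
    however many local users subscribe to it."""
    return set().union(*users)

print(len(communities_to_replicate(themed_instance)))   # 5 distinct feeds
print(len(communities_to_replicate(general_instance)))  # 12 distinct feeds
```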
My intuition is that as long as the platform only hosts text and images, the costs should be manageable, especially with the improvements in computational efficiency that are likely to come as Lemmy matures. For instance, I believe there is a patch that reduces storage utilization which should ship with the next version (0.19).
What my line of thinking really comes down to is: what optimisations are necessary to allow Lemmy to scale this way? Large social media sites have very interesting designs for dealing with the huge amount of data flowing through them, caching as much as possible close to where it will be needed. I don't know Lemmy's design, though, so I don't have a good idea of how that would shape the optimisations.
To take an example I remember from reddit (actually I had to re-read about it because I didn't really remember it...): reddit caches ordered lists of things, for example the list of posts on the homepage. The problem is that the ordering has to be re-evaluated constantly, because it can change whenever someone votes (assuming we're looking at a listing that incorporates voting). To make that work efficiently, the reddit programmers made vote processing update not just the backing store, but the "cache" directly, rather than merely invalidating it; the cache becomes more like a denormalised view of the backing data. The way this was done meant that later, when the rate of votes increased, there were problems again, because all of that processing was contending on those denormalised views. I suspect the federated aspect complicates this further, because federation is a source of updates separate from the local ones.
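Roughly what I mean, in Python (simplified from memory, not reddit's or Lemmy's actual code): votes write through to the cached, denormalised listing instead of invalidating it, which saves rebuilds but makes the shared view a contention point.

```python
# Sketch of the caching strategy described above; simplified from memory,
# not anyone's real implementation.

class ListingCache:
    """Keeps a denormalised 'hot posts' listing that votes update in place,
    instead of invalidating it and rebuilding the sorted list on read."""

    def __init__(self, posts):
        self.scores = dict(posts)  # post_id -> score (backing copy)
        self.listing = sorted(self.scores, key=self.scores.get, reverse=True)

    def vote(self, post_id, delta):
        # Naive approach: write the store, invalidate the cached listing,
        # and re-sort everything on the next read (expensive at high vote
        # rates). Denormalised approach (shown here): update the cached
        # view directly as part of processing the vote.
        self.scores[post_id] += delta
        self.listing.sort(key=self.scores.get, reverse=True)
        # Re-sorting a mostly-sorted list is cheap; the contention problem
        # appears when many concurrent vote writers all hit this shared view.

    def front_page(self, n=25):
        return self.listing[:n]

cache = ListingCache({"post_a": 10, "post_b": 8, "post_c": 12})
cache.vote("post_b", 7)
print(cache.front_page())  # ['post_b', 'post_c', 'post_a']
```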
I would also say that you can't expect linear scaling for donations: early adopters are going to be enthusiasts, and correspondingly more enthusiastic with their money!
You mean, say I have an instance with 100 users.
If they all hang out in communities on my server (more or less), no problemo at all.
But if they all roam around and subscribe to thousands of active communities on other servers, my server will be under water.
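To put made-up numbers on it: if those 100 users each follow 50 remote communities with almost no overlap, that's up to 100 × 50 = 5,000 distinct feeds my server has to receive, store, and serve; with heavy overlap it might be only a couple of hundred.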
I love thinking about stuff like this (P=NP, complexity, etc.), and I don't see much of it concerning the Lemmyverse, which is IMO a shame.
I'm planning on setting up a Lemmy build so I can tinker with it, but you know, time and stuff. I also spent a lot of time just setting up the Docker version, so maybe it's quite a job :-)