946
submitted 1 year ago* (last edited 1 year ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Another day, another update.

More troubleshooting was done today. What did we do:

  • Yesterday evening @phiresky@lemmy.world did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
  • @cetra3@lemmy.ml created a Docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix
  • We started using this image, and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 for /api/v3/ws in the nginx config (see the sketch after this list).
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, including the issue with replying to DMs
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) use only 1 container, or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx.
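For reference, the websocket workaround is roughly the nginx snippet below. This is a minimal sketch of the idea, not our exact config:

```nginx
# Old pre-0.18 clients still try to open websocket connections on /api/v3/ws,
# which no longer exists. Answer them with a 404 directly in nginx instead of
# letting the requests hammer the Lemmy backend.
location /api/v3/ws {
    return 404;
}
```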

Currently we're running with 1 lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with 1.

Thanks to @phiresky@lemmy.world, @cetra3@lemmy.ml, @stanford@discuss.as200950.com, @db0@lemmy.dbzer0.com, @jelloeater85@lemmy.world, @TragicNotCute@lemmy.world for their help!

And not to forget, thanks to @nutomic@lemmy.ml and @dessalines@lemmy.ml for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy docker image with the PRs.

Edit: So as soon as the US folks wake up (hi!) we seem to need the second Lemmy container for performance. So that's now started, and I noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 for each upstream, which does actually work.
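For anyone curious, that upstream workaround looks roughly like the snippet below. The container names, the port, and the fail_timeout value are placeholders for illustration, not our actual config:

```nginx
# Two Lemmy containers behind one upstream. With max_fails=5, nginx only marks
# a server as unavailable after 5 failed attempts within fail_timeout, instead
# of taking it out of rotation on the first error (which is what produced the
# 502s with the defaults).
upstream lemmy {
    server lemmy-1:8536 max_fails=5 fail_timeout=30s;
    server lemmy-2:8536 max_fails=5 fail_timeout=30s;
}
```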

[-] NatoBoram@lemmy.world 7 points 1 year ago

It works so well, that's very refreshing

[-] nostalgicgamerz@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

Can we have an update on the status of Lemmy.world and how close a relationship we're going to have with Meta's Threads? Threads is going to support ActivityPub, but time has shown that this is an attempt to kill this open platform and eventually replace it with theirs once they get everyone in their ecosystem (embrace, extend, extinguish). Mastodon has said today that they don't mind sleeping with vipers, even though their demise / dissolution is in Meta's best interest.

Please tell me we are defederating from Meta....or let us know what to expect

EDIT: I originally stated that Mastodon told them to fuck off, but I got confused with Fosstodon (who did that). Mastodon doesn't mind being in bed with Meta.

[-] lichkain@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

Would HAProxy work better as a load balancer? At work we switched to it due to some issues with NGINX; so far the service has been much more consistent, with pretty much no downtime, even when restarting server hosts.

[-] ruud@lemmy.world 4 points 1 year ago

Can be considered; for now it's working...

[-] pickledredonions@lemmy.world 6 points 1 year ago

Amazing work, thank you so much!

[-] Sekemoto@lemmy.world 6 points 1 year ago

It is much smoother than it was previously. Thank you!

[-] jcg@lemmy.world 6 points 1 year ago

Wow I applied these PRs on my server as well, running waaay lighter now. And it seems the federation misses have cleared up! Bravo Lemmy.world team!

[-] NegativeCool@lemmy.world 6 points 1 year ago

This was cool to read.

[-] ramblechat@lemmy.world 6 points 1 year ago

Seems a lot faster today - great work!

[-] sv1sjp@lemmy.world 6 points 1 year ago

Thank you guys for your awesome work!

Also to other people: DONATE TO FOSS PROJECTS. If 50,000 people donate only €0.50, we have €25,000 for funding the servers, coding, motivating people, etc. Just skip a cup of coffee for one day. There are already 2 million of us across Lemmy instances. We can build a decentralized world together!!

[-] wmrch@lemmy.world 4 points 1 year ago

You can pry my cup of coffee from my cold, dead hands.

Will donate anyway, I really want this project to keep going.

[-] sma3in@lemmy.world 4 points 1 year ago

good vibes!! thank you for your work

[-] InverseParallax@lemmy.world 4 points 1 year ago

That's some pretty heroic shit right there.

You just took lemmy from something I'm willing to live with in the short term in hope it gets better, to something I am fully satisfied with.

Now let's grow so we can fuck it up all over again!!!

[-] akippnn@lemmy.world 4 points 1 year ago

Thank you so much for the hard work, time and money you put into making lemmy.world run so smoothly. This much transparency is awesome for something that's being used so heavily.

[-] mkhopper@lemmy.world 4 points 1 year ago

Awesome news. Thanks for all the hard work.

[-] sunnyxiongster@lemmy.world 4 points 1 year ago

Every time I open a post and go back to the previous page, it scrolls back to the top. Is this fixable? I'm on Windows 11, Chrome.

[-] cloudless@feddit.uk 6 points 1 year ago

Try wefwef. It remembers exactly where you were when you press back.

[-] wilberfan@lemm.ee 4 points 1 year ago

I love the smell of updates in the morning.

[-] SapienSRC@lemmy.world 4 points 1 year ago

Thank you for all the hard work and transparency as always! Everything is running perfectly knocks on wood

[-] sirnak@lemmy.world 4 points 1 year ago

Am I getting this correct: the whole lemmy.world instance runs in one single container on one single host?

[-] cley_faye@lemmy.world 4 points 1 year ago

You'd be surprised at how much performance this kind of setup can squeeze out. Often the limitation is more on the DB/storage side than on network handling and processing power.

[-] eek2121@lemmy.world 4 points 1 year ago

This. Most of the time, the bottleneck will be the database backend.

Curious if lemmy.world uses separate reader/writer instances.

[-] bappity@lemmy.world 4 points 1 year ago

If it runs this well on a single container, considering the number of users it has, I fear the power it'd hold with more.

[-] ArrogantAnalyst@feddit.de 3 points 1 year ago

Memmy is great! Been using it for about two weeks now. Really feel at home.

[-] Richard@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Minor thing, but overnight both the wefwef and Memmy clients are showing the wrong comment score (karma) against my profile, and given they're showing the same amount I assume it's related to the API-fed data. The value was correct yesterday. It's easy for me to confirm given I have only 2 dozen posts and the value has dropped to single digits.

Not a biggie, but figured I'd report it in case there was some issue causing that. Might be that some optimisation around indexing or something has intentionally or unintentionally impacted it.

Otherwise the service feels much more stable currently. No timeouts today where it’s been very frequent the past few days. Nice job. 👍

[-] ruud@lemmy.world 4 points 1 year ago

I notified the wefwef dev of this.
