For the moment at least. Whatever problem we had before, it seemed to get worse over time, eventually requiring a restart. So we’ll have to wait and see.
Well, I've been on this instance through a few updates now (since Jan 2023), and my impression is that it's a pretty regular pattern (i.e., certain APIs, like those for replying to a post/comment or even posting, show increasing latency as uptime goes up).
Sounds exactly like the problem I fixed (and mostly caused):
https://github.com/LemmyNet/lemmy/pull/4696
Nice! Also nice to see some SQL wizardry getting applied to Lemmy!
My server seems to get slower until it needs a restart every few days; hoping this provides a fix for me too 🤞
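One way to check whether it's really the database queries that degrade with uptime (rather than, say, load) is to watch pg_stat_statements over a few days. A minimal sketch, assuming the extension is loaded via shared_preload_libraries; the particular columns and limit here are just illustrative:

```sql
-- Enable statement-level timing stats (requires pg_stat_statements in
-- shared_preload_libraries and a server restart to take effect).
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Snapshot the slowest statements by mean execution time; comparing
-- snapshots over several days shows whether latency climbs with uptime.
SELECT left(query, 80)                    AS query,
       calls,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       round(total_exec_time::numeric, 2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
```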
Try switching to PostgreSQL 16.2 or later.
What’s new in postgres?
Nothing in particular, but there was a strange bug in previous versions that, in combination with Lemmy, caused a small memory leak.
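If you're not sure which version your instance is actually running (for example inside the Docker or Ansible setup), the server will report it directly; a quick check before deciding whether an upgrade to 16.2+ is needed:

```sql
-- Report the running server version (e.g. "16.2").
SHOW server_version;
```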
In my case it's Lemmy itself that needs to be restarted, not the database server. Is this the same bug you're referring to?
Yes, restarting Lemmy somehow resets the memory use of the database as well.
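A hedged guess at why that would be: Postgres backend memory is per connection, so if the growth lives in the long-lived backends behind Lemmy's connection pool, restarting Lemmy closes those connections and frees that memory. One way to check is whether the pool's connection age tracks Lemmy's uptime (the 'lemmy' database name below is an assumption):

```sql
-- List backends serving the Lemmy database with their connection age;
-- if these ages match Lemmy's uptime, the pool holds its connections
-- open for the life of the process.
SELECT pid,
       usename,
       application_name,
       now() - backend_start AS connection_age,
       state
FROM pg_stat_activity
WHERE datname = 'lemmy'   -- assumed database name; adjust to your setup
ORDER BY backend_start;
```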
Hm, weird bug. Thanks for the heads up ❤️ I've been using the official Ansible setup, but it might be time to switch away from it.