There were optimizations related to database triggers; these are probably responsible for the speedup.
For the moment at least. Whatever problem we had before, it seemed to get worse over time, eventually requiring a restart. So we’ll have to wait and see.
Well, I've been on this instance through a few updates now (since Jan 2023), and my impression is that it's a pretty regular pattern (i.e., certain APIs, like those for replying to a post/comment or even posting, have increasing latencies as uptime goes up).
Sounds exactly like the problem I fixed (and mostly caused).
Nice! Also nice to see some SQL wizardry get involved with lemmy!
My server seems to get slower until requiring a restart every few days, hoping this provides a fix for me too 🤞
Try switching to PostgreSQL 16.2 or later.
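If you're not sure which version your instance is currently running, one quick way to check (assuming you can reach the database with `psql`; the host, user, and database names below are placeholders, so swap in your own) is:

```shell
# Ask the running PostgreSQL server for its version.
# Host/user/database are placeholders, not Lemmy defaults --
# adjust them for your deployment (or run inside your DB container).
psql -h localhost -U lemmy -d lemmy -c "SHOW server_version;"
```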
What’s new in postgres?
Nothing in particular, but there was a strange bug in previous versions that, in combination with Lemmy, caused a small memory leak.
In my case it’s lemmy itself that needs to be restarted, not the database server, is this the same bug you’re referring to?
Yes, restarting Lemmy somehow resets the memory use of the database as well.
Hm, weird bug. Thanks for the heads up ❤️ I’ve been using the official ansible setup but might be time to switch away from it
Reddthat has 0.19.4 too, and it does indeed feel snappier
Interesting. It could be for the same reason I suggested for lemmy.ml, though. Do you notice latencies getting longer over time?
It's a smaller server, so I guess latency issues would appear at a slower pace than on lemmy.ml
Makes sense ... but still ... you're noticing a difference. Maybe a "boiling frog" situation?
I would say it still feels snappier today than before the update (a couple weeks ago?), so definitely an improvement