23
submitted 1 year ago* (last edited 1 year ago) by belidzs@fost.hu to c/selfhosted@lemmy.world
[-] iso@lemmy.com.tr 2 points 1 year ago

Have you been able to load balance with multiple containers? I'm not really familiar with k8s.

[-] belidzs@fost.hu 1 points 1 year ago
[-] redcalcium@c.calciumlabs.com 5 points 1 year ago* (last edited 1 year ago)

I also use Kubernetes to run my Lemmy instance. Sadly, pictrs uses its own "database" file which can only be opened by a single pod: it refuses to run if the "database" lock is already held by another pod, which makes scaling up the number of pods impossible. I wish it used Postgres instead of inventing its own database. I suspect this is one of the reasons why the large Lemmy instances have difficulty scaling up their servers.
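A rough sketch of how that constraint is usually accommodated: pin pictrs to exactly one replica backed by a single ReadWriteOnce volume, so only one pod ever touches the database file. The image tag, claim name, and mount path below are placeholders, not details from this thread.

```yaml
# Sketch only: keep pictrs at a single pod, because its embedded
# database file can only be locked by one process at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pictrs
spec:
  replicas: 1            # must stay at 1; a second pod can't acquire the lock
  selector:
    matchLabels:
      app: pictrs
  template:
    metadata:
      labels:
        app: pictrs
    spec:
      containers:
        - name: pictrs
          image: asonix/pictrs:0.4     # placeholder tag
          volumeMounts:
            - name: pictrs-data
              mountPath: /mnt          # placeholder data path
      volumes:
        - name: pictrs-data
          persistentVolumeClaim:
            claimName: pictrs-data     # ReadWriteOnce PVC
```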

[-] tyfi@wirebase.org 1 points 1 year ago

This is a really interesting observation. Curious whether the devs are aware that this breaks simple scalability efforts.

[-] Ducks@ducks.dev 1 points 1 year ago

You mean pictrs can't scale, or the other pods can't either? I separated lemmy-ui, the backend, and pictrs into different pods. I haven't tried scaling anything yet, but I did notice the database issue with pictrs during a rolling restart, so I had to switch the deployment strategy to Recreate.
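For reference, that switch is just the Deployment's update strategy; a minimal sketch of the relevant stanza (the field names are standard Kubernetes, everything else is assumed):

```yaml
# Recreate terminates the old pictrs pod before starting the new one,
# so the replacement never races for the database lock during a rollout.
spec:
  strategy:
    type: Recreate   # the default RollingUpdate briefly runs old and new pods side by side
```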

[-] redcalcium@c.calciumlabs.com 3 points 1 year ago

Only pictrs can't scale. The Lemmy UI and backend seem to be stateless.

[-] Ducks@ducks.dev 1 points 1 year ago

Great to hear, that will make it super easy if I start allowing users on my instance.

[-] iso@lemmy.com.tr 3 points 1 year ago

I saw that the Lemmy container has scheduled jobs. How did you handle that? I'm not sure Lemmy is really "stateless".

https://lemmy.world/post/920294

[-] belidzs@fost.hu 7 points 1 year ago

Right, that's a good point.

So far it's working quite well, though for a micro-sized instance that's no surprise. Worst case, I can do the same thing the lemmy.world admins did: create a dedicated scheduling pod using the same Docker image as the normal ones, but exclude it from the Service's selector so it won't receive any incoming traffic.

The rest of the pods can then be dedicated to serving traffic, with their scheduling functionality disabled.
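A sketch of that layout, assuming the backend exposes some way to turn off its scheduled tasks (the `DISABLE_SCHEDULED_TASKS` variable below is purely illustrative; check the Lemmy docs for the real switch, and treat image tags and names as placeholders):

```yaml
# Traffic-serving backend pods: scaled out, matched by the Service selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy-serve
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lemmy
      role: serve
  template:
    metadata:
      labels:
        app: lemmy
        role: serve
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:0.18          # placeholder tag
          env:
            - name: DISABLE_SCHEDULED_TASKS     # hypothetical switch, see Lemmy docs
              value: "true"
---
# Single scheduler pod: same image, runs the scheduled jobs,
# but its labels don't match the Service selector below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lemmy
      role: scheduler
  template:
    metadata:
      labels:
        app: lemmy
        role: scheduler
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:0.18
---
# The Service only targets role=serve pods, so the scheduler
# never receives incoming traffic.
apiVersion: v1
kind: Service
metadata:
  name: lemmy
spec:
  selector:
    app: lemmy
    role: serve
  ports:
    - port: 8536
      targetPort: 8536
```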

[-] tyfi@wirebase.org 1 points 1 year ago

Do they have a write-up on their setup?
