
Hi all, putting out a call for assistance. I’m considering hosting a Lemmy instance (assuming I can pass the wife test on costs), and I’m looking for some guidance on specs.

Can anyone who’s currently hosting an instance (or who knows the inner workings of one) please reply with:

  • specs of the hardware / VPS that’s hosting your instance
  • how many users / posts it’s supporting
  • what the system load looks like with the above
  • if you’re hosting locally, the kind of bandwidth requirements you’re seeing

I previously posted this in the wrong community, and one of the responses asked how many users I'm expecting. To preemptively answer - I don't know. I'm just trying to get an idea of relative sizing.

Thank you!!

[-] moira@femboys.bar 17 points 1 year ago* (last edited 1 year ago)

I'm running an instance for two people in a pretty small LXC container on my home server: 1 vCore, 512MB of RAM and 8GB of storage. Currently it uses around 5% of CPU, ~250MB of RAM (+260MB of swap), and ~2GB of storage (split nearly 50/50 between pictrs and postgres). In terms of network traffic I see an average of 20kB/s, depending on how many communities you're subscribed to.

My home server runs on an i3-4150 with 16GB of RAM and a couple of SSDs, using Proxmox VE as the hypervisor.
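
For reference, an LXC container with roughly those resources can be created on Proxmox with something like this (the VMID, storage pool and template name below are placeholders, adjust to your setup):

    pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname lemmy \
        --cores 1 --memory 512 --swap 512 \
        --rootfs local-lvm:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --unprivileged 1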

edit: typo

Huh, Proxmox on an i3-4150? That doesn’t choke on CPU? TBF, I’m assuming you’re running several other VMs. Also, why not Docker?

[-] moira@femboys.bar 2 points 1 year ago

Proxmox itself is pretty lightweight, and yes, I'm also running other VMs and LXC containers (not many, about 9 containers with some light services like a TeamSpeak server, a couple of bots, Deluge and HestiaCP, Prometheus, k3s for testing, and a "VDI" in a VM). Actually, I am running Docker - inside LXC containers. Not the prettiest way to do it, but it works fine.
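
For anyone copying that setup: running Docker inside an unprivileged LXC container usually needs the nesting (and often keyctl) features enabled, roughly like this (110 is a placeholder VMID):

    pct set 110 --features nesting=1,keyctl=1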

Fair enough. There are no rules for homelab; do what you want!

Out of curiosity, are you running a repurposed 1L OEM box? I’ve picked up a handful of those for dirt cheap, and they’re kinda fun to play around with!

[-] ThorrJo@lemmy.sdf.org 2 points 1 year ago

Not the one you were replying to, but I'm two-thirds of the way through switching my servers over to the 1L form factor and am liking it. It's amazing how much compute can be crammed into a tiny space these days.

[-] moira@femboys.bar 2 points 1 year ago

Close enough! I'm using an HP Z230 SFF. It's not as small as those 1L USFF boxes, but it's pretty practical for a small home server: it has a couple of PCIe slots to expand, and it can hold 2x HDD (if you count replacing the 5.25" optical drive with a tray) or multiple SSDs wherever they fit. Pretty happy with this build; day to day it draws about 18-50W from the wall, depending on load.

[-] Shit@sh.itjust.works 1 points 1 year ago

How are you routing it to the internet?

[-] Stubborn9867@lemmy.jnks.xyz 6 points 1 year ago

I use Nginx Proxy Manager to route all my services. Just forward ports 80 and 443 from your router to it.

[-] moira@femboys.bar 2 points 1 year ago

I'm using HestiaCP to host some websites anyway, so I just added a new nginx template to create a reverse proxy to the lemmy + lemmy-ui containers.
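
Roughly, a Lemmy reverse proxy looks something like the sketch below - API/federation requests go to the lemmy backend and everything else to lemmy-ui. The domain is a placeholder and the ports assume the default docker-compose setup, so adjust to your own:

    upstream lemmy {
        server 127.0.0.1:8536;    # lemmy backend (default port)
    }
    upstream lemmy-ui {
        server 127.0.0.1:1234;    # lemmy-ui (default port)
    }

    server {
        listen 443 ssl;
        server_name lemmy.example.com;   # placeholder domain

        # send API/federation traffic to the backend, everything else to the UI
        location / {
            set $proxpass "http://lemmy-ui";
            if ($http_accept ~ "^application/.*$") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }

        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }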

[-] Shit@sh.itjust.works 3 points 1 year ago

I really want to figure out if it's possible to stick it behind Cloudflare or something. I would rather not expose any IP address directly to the internet. I'm leaning toward just setting up a reverse proxy on a cheap cloud instance back to my home.

[-] moira@femboys.bar 4 points 1 year ago

My instance is actually behind Cloudflare and it works fine, but remember that it's still possible to "expose" the IP of your server due to federation, as your server talks to other servers directly (that traffic won't go over Cloudflare). So if you're paranoid about that, I would recommend setting up a WireGuard tunnel to a cloud instance and forwarding the traffic that way, or just setting up Lemmy on that instance.
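
For reference, the WireGuard part is just two small configs; the keys, addresses and endpoint below are placeholders:

    # /etc/wireguard/wg0.conf on the cloud instance
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <cloud-private-key>

    [Peer]
    # home server
    PublicKey = <home-public-key>
    AllowedIPs = 10.8.0.2/32

    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <home-private-key>

    [Peer]
    # cloud instance
    PublicKey = <cloud-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.8.0.1/32
    PersistentKeepalive = 25

The reverse proxy on the cloud instance then just forwards 80/443 to 10.8.0.2, so only the cloud IP is ever publicly visible.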

[-] Shit@sh.itjust.works 1 points 1 year ago

Thanks, that's kind of what I was thinking. Have you used cloudflared before?

[-] moira@femboys.bar 1 points 1 year ago

No, I haven't, but I think it should also work over cloudflared.

[-] terribleplan@lemmy.nrd.li 11 points 1 year ago* (last edited 1 year ago)

To answer what I think you are getting at: Lemmy scales based on two things:

  1. Database size (and write volume) scales mostly with which communities are being federated to you. Unless you are .world, the volume of remote content is going to massively outweigh local content. On my (mostly) single-user instance I have found this to be true of pictrs as well, as it is mostly eating storage for federated thumbnails.
  2. Database read load scales mostly with the number of users you have. For a single-user instance this is pretty minimal. For an instance like .world (with thousands of users) I imagine it is significant, and they are likely scaling Postgres with read-only replicas to handle that load.

~18 hours ago I wrote:

My instance has been running for 23 days, and I am pretty much the only active local user:

7.3G    pictrs
5.3G    postgres

I may have a slight Reddit Lemmy problem

As of right now:

7.5G    pictrs
5.7G    postgres

So my storage is currently growing at around 1G per day, though pictrs is mostly cached thumbnails, so that growth should level out at some point as the cache expires.
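
If you want to see what inside Postgres is actually growing, a couple of queries like these work (assuming the database is named lemmy, which it may not be in your setup):

    -- total database size
    SELECT pg_size_pretty(pg_database_size('lemmy'));

    -- ten largest tables, to see where the growth is going
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM pg_catalog.pg_statio_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 10;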

To answer your stated question: I run an instance on a mini PC with 32G of RAM (using <2G including all the Lemmy things such as pg, pictrs, etc., plus any OS overhead) and a quad-core i5-6500T (CPU load usually around 0.3). You could probably easily run Lemmy on a Pi so long as you use an external drive for storage.
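
For a sense of what "all the Lemmy things" means, the stock deployment is basically four containers. A stripped-down docker-compose sketch follows; image tags, ports and credentials are illustrative, so treat the official compose file as the source of truth:

    services:
      lemmy:
        image: dessalines/lemmy:0.18.0        # pick the current release
        volumes:
          - ./lemmy.hjson:/config/config.hjson
        depends_on:
          - postgres
          - pictrs
      lemmy-ui:
        image: dessalines/lemmy-ui:0.18.0
        environment:
          - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
          - LEMMY_UI_LEMMY_EXTERNAL_HOST=example.com   # placeholder domain
        depends_on:
          - lemmy
      pictrs:
        image: asonix/pictrs                  # pin a version compatible with your Lemmy release
        volumes:
          - ./volumes/pictrs:/mnt
      postgres:
        image: postgres:15-alpine
        environment:
          - POSTGRES_USER=lemmy
          - POSTGRES_PASSWORD=change-me       # placeholder
          - POSTGRES_DB=lemmy
        volumes:
          - ./volumes/postgres:/var/lib/postgresql/data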

[-] GustavoM@lemmy.world 9 points 1 year ago

AFAIK even an RPi Zero can serve a Lemmy instance just fine.

[-] haakon@lemmy.sdfeu.org 10 points 1 year ago

But it probably couldn't have hosted lemmy.world. The answer depends on what the plans are for the instance, I suppose.

[-] manitcor@lemmy.intai.tech 8 points 1 year ago* (last edited 1 year ago)
[-] Shit@sh.itjust.works 2 points 1 year ago

How fast is the disk use growing for you?

[-] manitcor@lemmy.intai.tech 1 points 1 year ago

Not terrible; the DB grows by about ~100MB a day. I've been running for about 20 days now and have 4.4GB in images.

Thinking about a mod to move images off to IPFS.

[-] sudneo@lemmy.world 4 points 1 year ago

I read about using S3 storage for pictures. I'm planning to maybe use Backblaze for that, or, if I end up taking the beefy server, a separate MinIO instance. This is also great for scaling horizontally in the future, maybe.
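
Standing up a separate MinIO is simple enough; a minimal docker-compose sketch is below (credentials and paths are placeholders - how pictrs then gets pointed at the S3 endpoint depends on the pict-rs version, so check its docs):

    services:
      minio:
        image: minio/minio
        command: server /data --console-address ":9001"
        environment:
          MINIO_ROOT_USER: lemmy-pictrs          # placeholder credentials
          MINIO_ROOT_PASSWORD: change-me-please
        volumes:
          - ./minio-data:/data
        ports:
          - "9000:9000"   # S3 API
          - "9001:9001"   # web console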

[-] manitcor@lemmy.intai.tech 1 points 1 year ago

federated forum, federated storage imo

[-] b3nsn0w@pricefield.org 8 points 1 year ago* (last edited 1 year ago)

I'm currently hosting an instance for about 20 users on a dual-core EPYC 7002-based cloud VM with 2 GB of RAM and, currently, a 50 GB SSD volume. Memory tends to sit around halfway, and total disk usage is 14 GB, of which 4.5 GB is the picture server and 2.3 GB is the database for now; I'm monitoring both in case upgrades are needed. CPU usage is quite low, usually sitting between 5-10%, and it has never gone above 25%. It was highest during a spambot attack when they tried to register hundreds of accounts -- speaking of which, enable captcha (broken on 0.18.0) or set registrations to approve-only.

I'm paying about $10-15 per month currently, which includes a cache to keep the instance snappy.

[-] InverseParallax@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

Storage seems to be the main requirement, so even a Raspberry Pi 4 should be fine (though you'll want the 4GB RAM model); you just want a large SSD attached somehow.

IIRC it doesn't really like NFS either.

Mine is running on a low-performance VM on my mini PC under my bed lol. I've had absolutely no lag or errors. No problems at all, very smooth.

[-] Ducks@ducks.dev 5 points 1 year ago* (last edited 1 year ago)

It's pretty lightweight. I've given each container 1/3Gi of memory and a 1 CPU limit, with low requests, and I'm using the Kubernetes HPA to scale containers under load, up to 4 replicas. It only scales when a user takes large actions (like subscribing to hundreds of communities that are new to the instance all at once), but once the initial federation work is done it seems to scale back down quickly. The biggest bottleneck is pictrs since it is stateful.
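
Roughly, an HPA like that looks something like this (the names and threshold here are illustrative rather than my exact manifest):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: lemmy               # illustrative deployment name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: lemmy
      minReplicas: 1
      maxReplicas: 4
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80   # illustrative threshold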

So far the database and pictrs only use about 2Gi of storage, but I've allocated 25Gi to each since I have a lot to spare at the moment.

I need to play with the HPA more since I'm not happy with my settings yet. I have 2 users and 1 bot on my instance.

I'd like to start contributing to Lemmy's codebase so I wanted to host my own instance to learn the inner workings.

My Postgres is a single replica at the moment, but I may scale that if stability becomes a problem.

[-] hitagi@ani.social 3 points 1 year ago

I'm running mine (ani.social) on 4 cores and 16GB of RAM for 17 users as of now. There aren't a lot of posts/comments coming from us yet, but there are a couple of images uploaded already.

The current load average is only 0.10, the Postgres DB is at 1.6 GB, and pictrs is only at 430 MB. The database has been growing a lot faster than expected, but it seems manageable.

[-] morethanevil@lmy.mymte.de 1 points 1 year ago

I would like to be able to select more than one community when I create a post; it could help smaller instances get more activity.

At the moment only crossposts are possible 🤔

[-] hitagi@ani.social 3 points 1 year ago

That sounds like a nice feature, but perhaps there should be a limit on how many communities you can post to at once, to avoid abuse from bad actors.

[-] morethanevil@lmy.mymte.de 1 points 1 year ago

Yes, a limit would be okay, like 3 or 4 communities. Maybe we could make a feature request? 🤔

[-] Dax87@forum.stellarcastle.net 2 points 1 year ago* (last edited 1 year ago)

I'm running my instance as a containerized app on an i9-12900H, 64GB of DDR4 RAM, a 128GB Intel Optane as a swap drive (my mobo maxes out at 64GB of RAM), and a SATA SSD. My bottleneck is my internet, which is stuck on 5G home internet. Serving any service from behind CGNAT has been a challenge, but thanks to ZeroTier and a VPS reverse proxy, it's been possible.
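
The VPS side of that can be as simple as a TCP passthrough in nginx's stream module, pointed at the home server's ZeroTier address (the IP below is a placeholder):

    # on the VPS, in nginx.conf (needs the stream module)
    stream {
        server {
            listen 443;
            proxy_pass 10.147.17.2:443;   # home server's ZeroTier IP (placeholder)
        }
    }

TLS then terminates on the home server, so the VPS never needs the certificates.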

[-] redcalcium@c.calciumlabs.com 1 points 1 year ago

a 128GB Intel Optane as a swap drive

Interesting, does it actually help when your system runs out of memory? My system became completely unusable when it started swapping at one point (some app was leaking memory and exhausted the RAM), so I decided to turn off swap (I'd rather have it crash than have an unusable system).

[-] Dax87@forum.stellarcastle.net 1 points 1 year ago

It won't match RAM speeds, but it's supposed to be significantly faster than swapping to a regular SSD. The caveat is that it's not functioning the way Optane Memory is supposed to; I just opted to make the whole drive a swap partition, since that was simpler to do.

I've never used enough of my RAM to warrant heavy swap usage, though. Swappiness is at its default of 60.
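
If it ever did become a problem, swappiness is easy to tune (the value below is just an example):

    # check the current value
    sysctl vm.swappiness
    # prefer keeping things in RAM; only swap under real memory pressure
    sudo sysctl vm.swappiness=10
    # make it persistent across reboots
    echo "vm.swappiness=10" | sudo tee /etc/sysctl.d/99-swappiness.conf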

[-] borlax@lemmy.borlax.com 2 points 1 year ago

Mine is running on a VPS with 1 vCPU and 1GB of RAM; it's mostly okay except for going OOM on occasion. Luckily it's just me on this instance right now lol. You may want to opt for more RAM depending on your planned usage, though.
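
If anyone hits the same occasional OOM on a small VPS, a modest swapfile is a common stopgap (the size is just an example):

    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # make it persistent across reboots
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab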
