[-] terribleplan@lemmy.nrd.li 1 points 1 year ago

I have owned and otherwise dealt with a few different Startech 4-post open racks and have been very happy with them. I currently use one of their 25U racks for my lab, but am running out of space...

[-] terribleplan@lemmy.nrd.li 2 points 1 year ago* (last edited 1 year ago)

A few of these servers were stacked on top of each other (and a monitor box, to get the stack off the ground) in a basement for several years; it's a journey.

[-] terribleplan@lemmy.nrd.li 2 points 1 year ago

There is an "Actions" feature coming that is very similar to GitHub actions for CI and similar use-cases. It's still behind a feature flag as it's not quite ready for prime-time, but you can enable it on a self-hosted instance if you want. I believe this is also in Gitea as well, so you don't have to use the Forgejo fork, but I have moved my instance over due to the whole situation leading to the fork.

[-] terribleplan@lemmy.nrd.li 1 points 1 year ago

Federation, haha.

[-] terribleplan@lemmy.nrd.li 1 points 1 year ago

Your logs (at debug level at least, which is where I keep my server, haha) should have entries along the lines of:

  • Receiving configuration from the file provider
  • What routers and services it sets up based on the configuration
  • Whether certificate generation is needed for the routers
  • What happens when LEGO tries to generate the certificate (created account, got challenge, passed/failed challenge, got cert, etc)
[-] terribleplan@lemmy.nrd.li 2 points 1 year ago

Yeah, you could also set up some sort of caching proxy in the cloud just for images and host those on a different domain (e.g. cdn.lemmyinstance.com) if you still want to host large images while staying as self-hosted as possible given the constraints.

[-] terribleplan@lemmy.nrd.li 1 points 1 year ago* (last edited 1 year ago)

Is traefik successfully getting the cert via LE? It sounds like for one reason or another it is still using the built-in/default cert for those services. You can check the LEGO lines in the traefik logs, and/or look at your /letsencrypt/acme.json.

In my example I specified entrypoints.https.http.tls.domains, but I think that is only necessary when you're doing wildcard domains with a DNS solver.

edit: You may need to use the file provider rather than trying to specify stuff in the main config toml... traefik differentiates between "static" config (which it has to know at boot time and can't change) and "dynamic" config (routers, services, and so on).
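
As a rough, untested sketch (the router/service names and backend URL are made up): you'd point traefik at a dynamic config file with something like --providers.file.filename=/data/dynamic.yml in the static config, and put the routers/services in that file:

http:
  routers:
    foo-example-com:
      rule: Host(`foo.example.com`)
      entryPoints: [https]
      service: foo-example-com
      tls:
        certResolver: le
  services:
    foo-example-com:
      loadBalancer:
        servers:
          - url: http://192.168.1.20:8080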

[-] terribleplan@lemmy.nrd.li 2 points 1 year ago

Traefik. It has a GUI that I can use to see things, and (depending on your setup) you define the routes and stuff as part of your container definitions, so there's minimal extra work and setup and teardown are a breeze. It is also nice that you can use it in all sorts of places: I have used it as a Kubernetes ingress and as the thing that routed traffic to a Nomad cluster.

I went from Apache to Nginx (manually configured, including ACME) to Traefik over the course of the past ~10 years. I tried Caddy when I was making the switch to Traefik and found it very annoying to use, too much magic in the wrong places. I have never actually used NPM, as it doesn't seem useful for what I want...

Anyway, with traefik you can write your services in docker compose like this, and traefik will just pick them up and do the right thing:

version: "3"
services:
  foo-example-com:
    image: nginx:1.24-alpine
    volumes: ['./html:/usr/share/nginx/html:ro']
    labels:
      'traefik.http.routers.foo-example-com.rule': Host(`foo.example.com`)
    restart: unless-stopped
    networks:
      - traefik
networks:
  traefik:
    name: traefik-expose-network
    external: true

It will just work most of the time, though sometimes you'll have to specify 'traefik.http.services.foo-example-com.loadbalancer.server.port': whatever (or other labels according to the traefik docs) if you want specific behaviors or middleware.
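
For example, a service that listens on a port traefik can't guess might look something like this as a snippet under services: in the same compose file (the name, image, and port are made up for illustration):

  bar-example-com:
    image: some-webapp:latest
    labels:
      'traefik.http.routers.bar-example-com.rule': Host(`bar.example.com`)
      'traefik.http.services.bar-example-com.loadbalancer.server.port': '3000'
    networks:
      - traefik
    restart: unless-stopped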

And your deployment of traefik would look something like this:

version: '3'
services:
  traefik:
    image: traefik:v2
    command: >-
      --accesslog=true
      --api=true
      --api.dashboard=true
      --api.debug=true
      --certificatesresolvers.le.acme.dnschallenge.provider=provider
      --certificatesresolvers.le.acme.storage=acme.json
      [ ... other ACME stuff ... ]
      --entrypoints.http.address=:80
      --entrypoints.http.http.redirections.entrypoint.to=https
      --entrypoints.http.http.redirections.entrypoint.scheme=https
      --entrypoints.https.address=:443
      --entrypoints.https.http.tls.certresolver=le
      --entrypoints.https.http.tls.domains[0].main=example.com
      --entrypoints.https.http.tls.domains[0].sans=*.example.com
      --entrypoints.https.http.tls=true
      --global.checknewversion=false
      --global.sendanonymoususage=false
      --hub=false
      --log.level=DEBUG
      --pilot.dashboard=false
      --providers.docker=true
    environment:
      [ ... stuff for your ACME provider ... ]
    ports:
      # this assumes you just want to do simple port forwarding or something
      - 80:80
      - 443:443
      # - 8080:8080 uncomment if you want to hit port 8080 of this machine for the traefik gui (you'd also need to add --api.insecure=true to the command above)
    working_dir: /data
    volumes:
      - ./persist:/data
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    restart: unless-stopped
networks:
  traefik:
    name: traefik-expose-network
    external: true

Note that you'd have to create the traefik-expose-network manually (docker network create traefik-expose-network) for this to work, as that network is how traefik will talk to your different services. You can get even fancier and set it up to expose your sites by default and auto-detect what to call them based on container name and stuff, but that is beyond the scope of a comment like this.

Technically my setup is a little more complex to allow for services on many different machines (so I don't use the built-in docker provider), and to route everything from the internet through frp (with proxy protocol) so I don't expose my home IP... I think this illustrates the point well regardless.

[-] terribleplan@lemmy.nrd.li 1 points 1 year ago* (last edited 1 year ago)

I expect the moderators of communities to do sufficient policing of their community to make sure it follows the rules of the instance it is on and the rules of that community. If those rules permit something you disagree with (or don't permit something you do want to see), the power is in your hands as a user to not participate in or even see that community. The only way for a user to guarantee they won't interact with someone from instance X (whether that is exploding-heads or lemmygrad or whatever you don't like) is to only interact with communities on instances that have them defederated. There are places you can get a more curated and aggressively moderated experience, and I have been recommending places such as beehaw to anyone looking for that.

I will take action against:

  • Local users harassing someone
  • Local users breaking local rules
  • Local users repeatedly breaking remote rules
  • Local communities that break local instance rules
  • Remote users harassing local users
  • Remote users repeatedly breaking local rules
  • Remote instances that repeatedly allow its users to break local rules
  • Remote instances that repeatedly allow its users to harass my users

The first rule on my instance is a catch-all "Be welcoming", that will be wielded to aggressively remove far more than just "racism, sexism, homophobia and transphobia".

As an admin I don't have the time or desire to police:

  • Local users interacting on remote communities, so long as they are following remote rules
  • Remote communities
  • Remote users interacting with remote users/communities

I do hope for a way to better curate (or just disable for now) the "All" feed, at the very least for anyone who isn't logged in. Given the general rules above that feed may include disagreeable posts, and is not a good representation of my instance or the type of community most users there will experience.

[-] terribleplan@lemmy.nrd.li 1 points 1 year ago* (last edited 1 year ago)

Mikrotik stuff is pretty good and quite inexpensive. They all run RouterOS, which is their linux distro for switches/routers. Fully managed and can do basically anything L2 or L3 you could want. Worth noting that availability and pricing have been pretty variable from their resellers for a while, and you will pay a premium via Amazon. Here's a list of relevant product pages with the port counts and MSRPs:

Also these may help for some corner cases or change your ideas on what you want to build:

[-] terribleplan@lemmy.nrd.li 1 points 1 year ago

Based on the hardware you have I would go with ZFS (using TrueNAS would probably be easiest). Generally with such large disks I would suggest using at least 2 parity disks, though seeing as you only have 4 that means losing half your raw capacity in order to survive 2 disk failures. The reason for (at least) 2 parity disks is that, especially with identical disks, the risk of a second failure during a rebuild is pretty high: there is a lot of data to read from the surviving disks and write to the new one (it will probably take more than a day).

Can't talk much about backup as I just have very little data that I care enough about to back up; I throw that into cloud object storage as well as onto my local high-reliability storage.

I have tried many different solutions, so will give you a quick overview of my experiences, thoughts, and things I have heard/seen:

Single Machine

Do this unless you have to scale beyond one machine

ZFS (on TrueNAS)

  • It's great, with a few exceptions.
  • Uses data checksums so it can detect bitrot when performing a "scrub".
  • Super simple to manage, especially with the ~~FreeNAS~~ TrueNAS GUI.
  • Can run some services via Jails and/or plugins
  • It only works on a single machine, which became a limiting factor for me.
  • You can't add disks one at a time; you have to add an entire vdev (another set of drives in RAID-Z or whatever layout you choose).
  • You have to upgrade all disks in a vdev to use higher capacity disks.
  • Has lots of options for how to use disks in vdevs:
    • Stripe (basically RAID-0, no redundancy, only for max performance)
    • Mirror (basically RAID-1, same data on every disk in the vdev)
    • RAID-Zx (basically RAID-5, RAID-6, or <unnamed raid level better than 6>, uses x # of disks for parity, meaning that many disks can be lost)
  • ZFS send seems potentially neat for backups, though I have never used it

MDADM

  • It's RAID, just in your linux kernel.
  • Has been in the kernel for years, is quite reliable. (I've been using it for literally years on a few different boxes as ZFS on Linux was less mature at the time)
  • You can make LVM use it mostly transparently.
  • I would probably run ZFS for new installs instead.

BTRFS

  • Can't speak to this one with personal experience.
  • Have heard it works best on SSDs, not sure if that is the case any more.
  • The RAID offerings used to be questionable, pretty sure that isn't the case any more.

UnRaid

  • It's a decently popular option.
  • It lets you mix disks of different capacity, and uses your largest disk for parity
  • Can just run docker containers, which is great.
  • Uses a custom solution for parity, so likely less battle-hardened and with fewer eyes on it vs ZFS or MDADM.
  • Parity solution reminds me of RAID-4, which may mean higher wear on your parity drive in some situations/workloads.
  • I think they added support for more than one parity disk, so that's neat.

Raid card

  • Capabilities and reliability can vary by vendor
  • Must have battery backup if you are using write-back for performance gains
  • Seemingly have fallen out of favor to JBODs with software solutions (ZFS, BTRFS, UnRaid, MDADM)
  • I use the PERCs in my servers for making a RAID-10 pool out of local 2.5in disks on some of my servers. Works fine, no complaints.

JBOD

  • Throwing this in here as it is still mostly one machine, and worth mentioning
  • You can buy what is basically a stripped-down server (just a power supply and a special SAS expander card) that you put disks in, which lets you connect that shelf of storage to your actual server
  • May let you scale some of those "Single Machine" solutions beyond the number of drive bays you have.
  • Puts a number of eggs in one basket as far as hardware goes if the host server dies; up to you to decide how you want to approach that.

Multi-machine

Ceph

  • Provides block (RBD), FS, and Object (S3) storage interfaces.
  • Used widely by cloud providers
  • Companies I've seen run it often have a whole (small) team just to build/run/maintain it
  • I had a bad experience with it
    • Really annoying to manage (even with cephadm)
    • Broke for unclear reasons while it appeared everything was working
    • I lost all the data I put into it during testing
    • My experience may not be representative of what yours would be

SeaweedFS

  • Really neat project
  • Combines some of the best properties of replication and erasure coding
    • Stores data in volume files of X size
    • Read/Write happens on replica volumes
    • Once a volume fills you can set it as read only and convert it to erasure coding for better space efficiency
    • This can make it harder to reclaim disk space, so depending on your workload it may not be right for you
  • Has lots of storage configuration options for volumes to tolerate machine/rack/row failures.
  • Can shift cold data to cloud storage, and I think it can even back itself up to cloud storage
  • Can provide S3, WebDAV, and FUSE storage natively
  • Very young project
  • Management story is not entirely figured out yet
  • I also lost data while testing this, though the root cause there was unreliable hardware

Tahoe LAFS

  • Very brief trial
  • Couldn't wrap my head around it
  • Seems interesting
  • Seems mostly designed for storing things reliably on untrusted machines, so my use case was probably not ideal for it.

MooseFS/LizardFS

  • Looked neat and had many of the features I want
  • Some of those features are only in the (paid) MooseFS Pro, or in LizardFS (which seems abandoned/unmaintained)

Gluster

  • Disks can be put into many different volume configurations depending on your needs
    • Distributed (just choose a disk for each file, no redundancy)
    • Replicated (store every file on every disk, very redundant, wastes lots of space, only as much space as the smallest disk)
    • Distributed Replicated (Distributed across Replicated sets: you add X disks at a time as a Replicated set, and each file is stored on every disk of one chosen set; this is how you scale Replicated volumes, and each set only has as much space as its smallest member disk)
    • Dispersed (store each file across every disk, using X disks' worth of space for parity; tolerates X disk failures; total space is the smallest disk * (number of disks - X), so you only lose X disks' worth to parity)
    • Distributed Dispersed (Distributed across Dispersed sets: you add X disks at a time as a Dispersed set with Y parity, and each file is stored across the X disks of one chosen set using Y for parity; this is how you scale Dispersed volumes, and each set only has as much space as its smallest disk * (X - Y))
  • Also gets used by enterprises
  • Anything but Dispersed stores full files on a normal filesystem (vs Ceph, which uses its own special on-disk format, or Seaweed, which stores things in volume files), meaning in a worst-case recovery scenario you can read the files off the disks directly.
  • Very easy to configure
  • I am using it now, my testing of it went well
    • Jury is still out on what maintenance looks like

Kubernetes-native

Consider these if you're using k8s. You can use any of the single-machine options (or most of the multi-machine options) and find a way to use them in k8s (natively for gluster and some others, or via NFS). I had a lot of luck just using NFS from my TrueNAS storage server in my k8s cluster.
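
As a sketch of that NFS approach (the server IP, export path, and size are placeholders for whatever your NAS exposes):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: truenas-nfs-example
spec:
  capacity:
    storage: 500Gi
  accessModes: [ReadWriteMany]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.50        # hypothetical TrueNAS address
    path: /mnt/tank/k8s-example # hypothetical NFS export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: truenas-nfs-example
spec:
  accessModes: [ReadWriteMany]
  storageClassName: ""
  volumeName: truenas-nfs-example
  resources:
    requests:
      storage: 500Gi

(There are also NFS provisioners that automate this per-PVC, but static PVs like the above are the simplest way to get going.)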

Rook

  • Uses Ceph under the hood
  • Used it very briefly and it seemed fine.
  • I have heard good things, but am skeptical given my previous experience with Ceph

Longhorn

  • Project by the folks at Rancher/SUSE
  • Replicates the volume
  • Worked well enough when I was running k8s with some light workloads in it
  • Only seems to provide block storage, which I am much less interested in.

OpenEBS

  • Never used it myself
  • Only seems to provide block storage, which I am much less interested in.
[-] terribleplan@lemmy.nrd.li 1 points 1 year ago

I own Sublime Merge because it was cheap when I upgraded to ST4, but never use it. It's not bad or anything, but honestly the CLI is more convenient to use (and all the GUIs I've used involve a lot of clicking). I don't know that you're going to find something better than the CLI, especially given your requirement of "comfortable to use with only a keyboard".
