[-] terribleplan@lemmy.nrd.li 5 points 1 year ago* (last edited 1 year ago)

Whoo, can't wait for this season of "Wait, I thought we made progress last episode/chapter!?"

I am a bit behind on the manga, but it has been really hard to stay motivated to read it. It feels like any minuscule piece of progress is followed by immediate regression. I was very much in the mindset of "Fuck you, I'll see you next week" for a while, haha.

I'll comment my thoughts after I get around to watching the episode a bit later today.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Lemmy and Akkoma, both in docker with Traefik in front.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Business in the front:

  • Mikrotik CCR2004-1G-12S+2XS, acting as the router. The 10g core switch plugs into it, as does the connection to upstairs.
  • 2u cable management thing
  • Mikrotik CRS326-24S+2Q+; most 10g-capable things hook into this. It uses its QSFP+ ports to uplink to the router and downlink to the (rear) 1g switch.
  • 4u with a shelf holding 4x mini-PCs; most of them have a super janky 10g connection via an M.2-to-PCIe riser.
  • "echo", Dell R710. I am working on migrating off of/decomissioning this host.
  • "alpha", Dell R720. Recently brought back from the dead. Recently put a new (to me) external SAS card into it, and it acts as the "head" unit for the disk shelf I recently bought.
  • "foxtrot", Dell R720xd. I love modern-ish servers with >= 12-disks per 2u. I would consider running a rack full of these if I could... forgive the lack of a label, my label maker broke at some point before acquiring this machine.
  • "delta", "Quantum" something or other, which is really just a whitelabeled Supermicro 3u server.
  • Unnamed disk shelf, "NFS04-JBOD1" to its previous owner. Some Supermicro JBOD that does 45 drives in 4u, hooked up to alpha.

Party in the back:

  • You can see the cheap monitor I use for console access.
  • TP-Link EAP650, sitting on top of the rack. Downstairs WAP.
  • Mikrotik CRS328-24P-4S+, rear-facing 1g PoE/access switch. The downstairs WAP hooks into it, as does the one mini-PC I didn't put a 10g card in. It also provides power (but not connectivity) to the upstairs switch. It used to get a lot more use before I went to 10g basically everywhere. Bonds 4x SFP+ to uplink to the 10g switch in front.
  • You can see my cable management, which I would describe as "adequate".
  • You can see my (lack of) power distribution and power backup strategy, which I would describe as "I seriously need to buy some PDUs and UPSs".

I opted for a smaller rack as my basement is pretty short.

As far as workloads:

  • alpha and foxtrot (and eventually delta) are the storage hosts running Ubuntu and using gluster. All spinning disks. ~160TiB raw
  • delta currently runs TrueNAS; I am working on moving all of its storage into gluster and adding it to that pool. ~78TiB raw, with some bays used for SSDs (L2ARC/ZIL) and 3 used in a mirror for "important" data.
  • echo, currently running 1 (Ubuntu) VM in Proxmox. This is where the "important" (frp, Traefik, DNS, etc) workloads run right now.
  • mini-PCs, running Ubuntu, with all sorts of random stuff (dockerized), including this Lemmy instance. They mount the gluster storage where necessary, and also have a gluster volume amongst themselves for highly redundant SSD-backed storage.

The gaps in the naming scheme:

  • I don't remember what happened to bravo. It was another R710; I'm pretty sure it died, or I may have given it away, or it may be sitting in a disused corner of my basement.
  • We don't talk about charlie, charlie died long ago. It was a C2100. Terrible hardware. Delta was bought because charlie died.

Networking:

  • The servers are all connected over bonded 2x10g SFP+ DACs to the 10g switch.
  • The 1g switch is connected to the 10g switch with QSFP+ breakout to bonded 4x SFP+ DAC
  • The 10g switch is connected to the router with QSFP+ breakout to bonded 4x SFP+ DAC
  • The router connects to my ISP router (which I sadly can't bypass...) using a 10GBASE-T SFP+.
  • The router connects to an upstairs 10g switch (Mikrotik CRS305-1G-4S+) via a SFP28 AOC (for future upgrade possibilities)
  • I used to do a lot of fancy stuff with VLANs and L3 routing and stuff... now it's just a flat L2 network. Sue me.
[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

If you find a decent alternative let me know. I have been looking for a while and haven't found anything that supports the full feature set I want (including Twilio).

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Yeah, I think how most Lemmy clients (including the default web UI) handle display name is a real mistake.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago* (last edited 1 year ago)

"Initial sync" isn't a thing. Things only federate from communities after you subscribe to it. Old posts will make their way over if someone interacts with it (comments/votes on it). I think old comments may make their way over under the same conditions. Old votes will not make their way over so your vote count on old posts will never be right.

You can search for a post or comment to force your instance to load it (copy the federation link, the rainbow-web-looking icon) just like you would do for communities. I think there are scripts out there that may automate this process to force your instance to load old content, but you're putting more load on an already strained system.
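
A script for that can be pretty short. Here's a rough sketch of the idea (purely illustrative, not from the Lemmy docs: the instance URL, token, and post URL are made up, and it assumes the /api/v3/resolve_object endpoint with auth passed as a parameter, which newer Lemmy versions replace with a bearer header):

```python
import requests

# Illustrative only: the instance URL, token, and post URL below are made up.
INSTANCE = "https://lemmy.example.com"
JWT = "your-login-token-here"  # auth token for a local account

def force_fetch(object_url: str) -> None:
    """Ask the local instance to resolve (and therefore fetch) a remote post/comment."""
    r = requests.get(
        f"{INSTANCE}/api/v3/resolve_object",
        params={"q": object_url, "auth": JWT},  # newer Lemmy versions expect a bearer header instead
        timeout=30,
    )
    r.raise_for_status()

# Federation links are what you get from the rainbow-web-looking icon.
for url in ["https://lemmy.world/post/123456"]:
    force_fetch(url)
```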

And yes, lemmy.world is probably overloaded. Usually this just means that federation from it isn't instant and may take a little time.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Which is exactly why you should self-host. No one to blame but yourself when your instance goes down/away.

Sadly this idea doesn't mesh well with how communities work given those are inherently tied to an instance, unlike e.g. hashtags on Mastodon. It would suck if some community goes away just because the instance admin got tired of running it.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Communities are inherently tied to the instance on which they are created and cannot be moved. If the instance is overloaded then that community will not federate properly. If the instance goes down nobody can post to the community. If the instance goes away that community goes away (except for the "cache" that other instances have).

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Yeah... it is kinda hypocritical for this community to be based on .world, haha. There are plenty of people here running instances; who wants to volunteer as tribute and sign up to be on call?

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago* (last edited 1 year ago)

lemmy.nrd.li - The domain is pronounced Nerd-ly. I welcome anyone who considers themselves a nerd and any community someone feels like being nerdy about.

I also have like 20+ other domains... Most of which are unused... I may have a problem.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago* (last edited 1 year ago)

I agree that we need far stronger admin and moderation tools to fight spam and bots. I disagree with the idea of a whitelist approach, and think taking even more from email (probably the largest federated system ever) could go a long way.

With email, there is no central authority granting "permission" for me to send stuff. There are technologies like SPF, DKIM, DMARC, and FcRDNS that act as a minimum bar to reach before most servers trust you at all; then server-side spam filtering gets applied on top, at a user, domain, IP, and sometimes netblock level. When rejections occur, receiving servers provide rejection information that lets me figure out what is wrong and contact the admins of that particular server. (Establish a baseline of trust, punish if trust is violated.)
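
To make the "minimum bar" idea concrete, here is a toy sketch (my own illustration, nothing Lemmy does today) of the kind of cheap checks a receiving mail server can run before it even bothers with content filtering: FcRDNS on the connecting IP and a lookup to see whether the sending domain publishes an SPF policy.

```python
import socket

import dns.exception
import dns.resolver  # both from the third-party dnspython package

def fcrdns_ok(ip: str) -> bool:
    """Forward-confirmed reverse DNS: the IP's PTR name must resolve back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
        return ip in forward_ips
    except OSError:
        return False

def publishes_spf(domain: str) -> bool:
    """Check whether the sending domain publishes any SPF policy at all."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except dns.exception.DNSException:
        return False
    return any(b"v=spf1" in b"".join(record.strings) for record in answers)

# A sender failing either check would likely be rejected or heavily penalized.
print(fcrdns_ok("8.8.8.8"), publishes_spf("gmail.com"))
```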

A gray-listing system for new users or domains could generate reports once there is a sufficient amount of activity, easing the information gathering an admin would have to do in order to trust a certain domain. Additionally, I think establishing a way for admins to share their blacklisting actions regarding spam or other malicious behavior (with verifiable proof) could achieve similar outcomes to whitelisting without forcing every instance operator to buy into a centralized (or one of a few centralized) authority on this. This would basically be an RBL (which admins could choose to use) for Lemmy. It could be very customizable and allow for network effects ("I trust X admin, apply any server block they make to my instance too" sort of stuff).
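
As a very rough sketch of what I mean (completely hypothetical; none of these names exist in Lemmy and the thresholds are made up), the admission logic for incoming federated activity could look something like this:

```python
import time

# Completely hypothetical sketch; none of these names exist in Lemmy and the
# 24h greylist window is made up.
GREYLIST_PERIOD = 24 * 3600          # seconds of history required before full trust
first_seen: dict[str, float] = {}    # remote domain -> when we first saw activity from it
shared_blocklist: set[str] = set()   # domains blocked by admins whose lists I subscribe to

def classify_instance(domain: str, now: float | None = None) -> str:
    """Return 'reject', 'greylist', or 'accept' for an incoming federated activity."""
    now = time.time() if now is None else now
    if domain in shared_blocklist:
        return "reject"              # RBL-style: a trusted admin blocked them, so we do too
    seen = first_seen.setdefault(domain, now)
    if now - seen < GREYLIST_PERIOD:
        return "greylist"            # hold/rate-limit and generate a report for the admin
    return "accept"

print(classify_instance("new-instance.example"))  # "greylist" on first contact
```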

I think enhancements to Lemmy itself would also help. Lemmy could provide a framework for filtering, and could report relevant information when an instance refuses a federated message, allowing admins to make informed decisions (and see when there are potential problems on their own instance). It would also help to have ways to attach proof of bad behavior to instance-level federated bans, and some way to federate bans (again with proof) from servers that aren't a user's home instance.

Finally, as far as I can tell, everything following a "Web of Trust" model (basically what you are proposing) has struggled to gain widespread adoption. I have never been to a key-signing party. I once made a few proofs on Keybase, but that platform never really went anywhere. This doesn't mean your solution won't work, it just concerns me a little.

I expanded a bit more on how email-style tooling could be used within Lemmy in this comment as well. My ideas aren't fully baked yet, but I hope they at least make some sense.

[-] terribleplan@lemmy.nrd.li 5 points 1 year ago

Yeah, as I said, IDK what device timelines are, but for some reason I can't imagine Apple not releasing an iPhone in the EU for 4 years... the charge port mandate was not super impactful/difficult for Apple to comply with. I am still not convinced Apple isn't going to drop the charge port entirely in favor of their MagSafe wireless thing (again, anti-consumer IMO), or at the very least put out an EU-only SKU with USB-C.
