I think in the current version it is just mostly undocumented and behind flags to turn it on. It definitely is in the latest Forgejo with flags as I described, which I migrated my Gitea to. I haven't had a chance to play with it yet, but am looking forward to it. I love GitHub Actions and it is super similar.
I mean, you've got the answers I normally give in your last paragraph. I personally like mini PCs, I usually look for something refurb'd, as new a processor generation as I can find, and that I can upgrade the RAM to 16 or 32 gigs. M.2 is nice but not necessary, I broke my M.2 slot out to a PCI-E riser to add a 10g card in the jankiest way possible. Also, if you're doing transcoding with Jellyfin you're probably not going to have too great a time with mini PCs if you want more than like 2 or 3 streams at a time unless you make sure to get a higher-end processor.
I was unaware that was the case, that's pretty expensive and inefficient to do it for every user (because each user's "Subscribed" feed is likely different) every 15 minutes regardless of how recently they've visited. In that case the inflection point where even single-user instances become beneficial is probably lower.
Summarizing the relevant parts of an eerily similar conversation I had the other day:
If you are using the built-in mail relay then you aren’t signing your mail with DKIM, don’t have SPF set up right, don’t have a DMARC policy, and don’t have FcRDNS, all of which basically any mail provider will require from you to even consider accepting your mail. Basically without all of that literally anyone can pretend to be whatever.com
and send email from it. They really shouldn’t be shipping that mail relay at all IMO, it just leads to confusion. More than likely you would already know if you need a mail relay and be able to set it up yourself if so.
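For a concrete picture, the DNS side of what I'm describing looks roughly like the records below. This is only an illustrative sketch: `example.com`, the IP, the `mail` selector, and the truncated key are all placeholders, and the DKIM public key comes from whatever software signs your outgoing mail.

```
; SPF: only the listed address may send mail claiming to be example.com
example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 -all"

; DKIM: public key published under a selector your signer chooses
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public key here>"

; DMARC: tell receivers to reject mail that fails SPF/DKIM alignment
_dmarc.example.com.           IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"

; FcRDNS: the PTR for your sending IP must resolve back to that same IP
10.113.0.203.in-addr.arpa.    IN PTR mail.example.com.
```

The PTR record usually has to be set through whoever owns the IP space (your VPS or ISP), not your own zone.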
Sendgrid and Postmark are popular transactional mail services (which is the sort of email you will be sending, google that term to find more options). If you want some help getting your own mail server set up in a dockerized way: I run my mail using docker-mailserver and, if set up only for outgoing mail, it is pretty easy to run, though you will probably run into deliverability issues as the large providers (Google, Microsoft, Apple, etc) can be real assholes and assume anything from a non-large provider is spam. Feel free to ask me about how to do it if you are interested though, the more people run their own mail the better it gets for all of us.
If my threat model realistically involved TLAs or other state-sponsored actors I would not be advertising what I do or do not know on a public forum such as Lemmy, haha.
This conversation was in the context of running Unbound, which is a recursive resolver, and AFAIK DNS "encryption" isn't a thing in a way that helps in this scenario... DoH, DoT, and DNSCrypt are all only deployed between clients and recursive resolvers, meaning Unbound's own upstream queries aren't using those. DNSSEC only provides authentication (preventing tampering) of the response, not any sort of encryption/hiding.
Scrot is a screenshot tool in Linux that you can run from the command line, I think the implication is that OP didn't do this on purpose so "may have been hacked", or had something heavy fall on their "Prnt Scrn".
Yeah, you're basically on the right track. I do a couple things in a possibly interesting way that you may find useful:
- I run multiple `frps`s on different servers. Haven't gotten around to setting up a LB in front or automatically removing them from DNS, but doing that sort of thing is the eventual plan. This means running as many `frpc`s as I have `frps`s. I also haven't gotten to the point of figuring out what to do if e.g. one service exposed via `frps` is healthy but another is not. It may make sense to run HAProxy in front of it or something... sounds terrible...
- I have multiple `frpc.ini`s, they define all of the connection details for a particular `frps` then use `includes = /conf.d/*.ini` to load up whatever services that `frpc` exposes.
- I run `frpc` in docker and use volumes to manage e.g. putting the right `frpc.ini` and `/conf.d/<service>.ini` files in there.
- I use QUIC for the communication layer between `frpc` and `frps`, using certificates for client authentication.
- I run my `frpc`s (one container per `frps`, I'm considering ways to combine them to make it less annoying to deploy) right alongside the service I am exposing remotely, so I run e.g. one for Traefik, one for ~~gogs~~ ~~gitea~~ forgejo ssh, etc. If you are using docker-compose I would put one (set of) `frpc` in that compose file to expose whatever services it has. Similar thought for k8s, I would do sidecar containers as part of your podspec.
- If I have more than one instance of a service, such as e.g. running multiple Traefik "ingress" stacks, I run a set of `frpc`s per deployment of that service.
- Where possible I use proxy protocol via `proxy_protocol_version = v2` to easily preserve the incoming IP address. Traefik supports this natively, which is the most important service to me as most of what I run connects over HTTP(S).
- I choose to terminate end-user SSL using Traefik within my homelab, so the full TLS session gets sent as a plain TCP stream. There is support for HTTPS within frp using `plugin = https2http`, but I like my setup better.
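Putting the pieces above together, a sketch of what one of those split configs looks like in the classic frp INI format (hostnames, ports, and paths here are made up for illustration, and exact option names can vary between frp versions):

```ini
# frpc.ini — connection details for one particular frps
[common]
server_addr = frps-1.example.com
server_port = 7000
protocol = quic                     ; QUIC transport between frpc and frps
tls_enable = true
tls_cert_file = /certs/client.crt   ; client certificate for authentication
tls_key_file = /certs/client.key
tls_trusted_ca_file = /certs/ca.crt
includes = /conf.d/*.ini            ; pull in per-service proxy definitions

# /conf.d/traefik.ini — one exposed service, mounted in via a docker volume
[traefik-https]
type = tcp
local_ip = traefik
local_port = 443
remote_port = 443
proxy_protocol_version = v2         ; preserve the client IP for Traefik
```

One `frpc.ini` like this per `frps`, with the `/conf.d/` volume shared between them, keeps the service definitions in one place.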
As to your question of "what happens when the `frpc`s go offline?", it depends on the service type. I only use services of `type = tcp` and `type = udp`, so can't speak to anything beyond that with experience.

In the case of `type = tcp`, you can run multiple `frpc`s and the `frps` will load-balance to them, meaning if you run multiple you should get some level of HA: if one connection breaks it should just use the other, killing any still-open connections to the failed `frpc`. Same thought there as how e.g. `cloudflared` using their Tunnels feature makes two connections to two of their datacenters. If there is nothing to handle a particular TCP service on an `frps` I think the connection gets refused, it may even stop listening on the port, but I'm not sure of that.

Sadly, in the case of `type = udp` the `frps` will only accept one `frpc` connection. I still run multiple `frpc`s, but those particular connections just fail and keep retrying until the "active" `frpc` for that UDP service dies. I believe this means that if there is nothing to handle a particular UDP service on an `frps` it just drops the packets, since there isn't really a "connection" to kill/refuse/reset; the same thing about stopping listening may apply here as well, but I am also unsure in this case.
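If I remember right, getting an `frps` to treat multiple `frpc`s as interchangeable backends for the same TCP service uses frp's group feature: each `frpc` declares the same `group` and shared `group_key` for the proxy. A sketch, with illustrative names:

```ini
# /conf.d/web.ini — identical on every frpc instance backing this service
[web]
type = tcp
local_port = 443
remote_port = 443
group = web                    ; frps balances across all members of this group
group_key = some-shared-secret ; must match on every frpc in the group
```

With that in place, losing one `frpc` should just shift traffic to the remaining members of the group.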
My wishlist for frp is, in no particular order:

- `frpc` making multiple connections to a server
- `frpc` being able to connect to multiple servers
- Some sort of native ALPN handling in `frps`, and the ability to use a custom ALPN protocol for frp traffic (so I can run client traffic and frp traffic on the same port)
- `frps` support for load-balancing UDP to multiple `frpc`s via some sort of session tracking, triple-based routing, or something else
- `frps` support for clustering or something, so even if one `frps` didn't have a way to route a service it could talk to another `frps` nearby that did
  - or even support for tiering the idea of "locality": first it tries on the local machine, then in the same zone/region/etc
- It'd be super neat if there were a way to do something like Cloudflare's "Keyless SSL"
Overall I am pretty happy with frp, but it seems like it is trying to solve too much (e.g. "secret" services, p2p, HTTP, TCP multiplexing). I would love to see something focused purely on TCP/UDP (and maybe TLS/QUIC) edge-ingress emerge and solve a narrower problem space even better. Maybe there is some sort of complex network-level solution with a VPN and routing daemons (BGP?) and firewall/NAT stuff that could do this better, but I really just want a tiny executable and/or container I can run on both ends and have things "just work".
Or if you just don't want to give your "proper" phone number out to every single company out there to add to their spam list, sell on to anyone else, and give away for free every time they have a data breach. I use GV out of necessity for blocking spam calls.
I may very well do that once I've gotten a prototype of my mailing list thing going. I'm hoping to get an initial version of that out today or tomorrow. I am overengineering it a bit for this first release with most everything going through job queues so it (hopefully) doesn't fall apart as soon as people actually start using it and can scale somewhat easily if necessary.
P.S. You may find my mailing-list thing useful for letting admins (or other interested parties) subscribe to get updates about Fediseer from you as PMs rather than relying on them happening to see updates posted to communities. If you're interested LMK and I can set you up as one of the beta users once it's ready.
I hope my feedback on your earlier post didn't come off as mean or anything. I know I gave you a bit of a hard time since we're not exactly aligned on how to solve some of these problems, but I do appreciate your attempts and the work you are doing.
I don't know how hard it would be, but could Fediseer be something admins can send messages to directly within Lemmy? Could it make sense to just have this work over PMs as a primary interface? This would let onboarding for unclaimed guaranteed instances be as simple as admins responding to the PM they got.
Alternatively, I think there is probably a lot that could be done in some sort of web interface whenever Lemmy decides to be an OIDC provider, so admins could log in to Fediseer via their instance. But that is probably a long way out, there is so much to be done within Lemmy.
Yeah, my current one is 11th gen, just before they started doing that, so I don't know how good or bad the "efficiency" cores are or if the power savings is worth it.
Yeah, I think the problem comes if you don't want to manually configure "Add-ons". Using this feature is only supported on their OS or using "Supervised". "Supervised" can't itself be in a container AFAIK, only supports Debian 12, requires the use of network manager, "The operating system is dedicated to running Home Assistant Supervised", etc, etc.
My point is they heavily push you to use a dedicated machine for HASS.