1
submitted 4 hours ago* (last edited 1 hour ago) by mrcheeseman@lemmy.world to c/selfhosted@lemmy.world

I am diving into the world of self-hosting by setting up a small server on my Raspberry Pi 4. I already have Tailscale, Pi-hole, and Navidrome set up in containers on my Pi; I now want to add Lidarr or some alternative to work with Navidrome. When I try to build the Lidarr Docker container, it says “no matching manifest for linux/arm/v8 in the manifest list entries”. I am trying to pull the LinuxServer.io Docker image, whose page says it supports arm64, but so far I have not gotten it to work. Any and all suggestions are welcome. Thank you.

Edit: I’ve tried adding platform: linux/arm64 to force the container to use my platform, but then the container constantly crashes and says “exited with code 159”. I have tested this with Lidarr and calibre-web, both with the same result.
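For reference, pinning the platform in Compose looks like the sketch below (the image tag is the one LinuxServer.io publishes; the service layout is a placeholder). One thing worth checking: Docker reporting the host as linux/arm/v8 rather than linux/arm64 can indicate a 32-bit userland even on a 64-bit Pi 4, which `uname -m` (armv7l vs aarch64) will confirm.

```yaml
services:
  lidarr:
    image: lscr.io/linuxserver/lidarr:latest
    # Forcing the 64-bit image only helps if the host kernel AND userland
    # are actually 64-bit (uname -m should print aarch64)
    platform: linux/arm64
```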

2
Best Practice Ideas (startrek.website)

So I have rebuilt my Production rack with very little in terms of an actual software plan.

I host mostly Docker-contained services (Forgejo, Ghost blog, OpenWebUI, Outline), and I was previously hosting each one in its own Ubuntu Server VM on Proxmox, thus defeating the purpose.

So I was going to run a VM on each of these ThinkCentres to form a Kubernetes cluster and then run everything on that. But that also feels silly, since these PCs are already clustered through Proxmox 9.

I was thinking about using LXC, but part of the point of the Kubernetes cluster was to learn a new skill that might be useful in my career, and I don't know how it would work with Cloudflare Tunnels (cloudflared), which is my preferred means of exposing services to the internet.

I'm willing to take a class or follow a whole bunch of "how-to" videos, but I'm a little frazzled on my options. Any suggestions are welcome.

3

VoidAuth is a self-hosted Single Sign-On solution that aims to be easy to set up and use while feeling seamless to your users. Release v1.1.0 brings a few new features I have been working on and am excited about:

  • Passkey-only Users, the option on sign-up to use a passkey instead of a password.
  • Admin Notification Emails, so admins know when they have new tasks such as user registrations to approve.
  • Approval Emails for New Users, so new users awaiting approval know when they have been approved.
  • DEFAULT_REDIRECT back to your main page for invitations, logouts, etc.
  • and more!
4
submitted 10 hours ago by notes@piefed.social to c/selfhosted@lemmy.world
5

FreshRSS is a self-hosted RSS feed management tool, which is compatible with a number of open-source mobile apps.

Excerpts from the Changelog:

A few highlights ✨:

  • Implement support for HTTP 429 Too Many Requests and 503 Service Unavailable, obey Retry-After
  • Add sort by category title, or by feed title
  • Add search operator c: for categories like c:23,34 or !c:45,56
  • Custom feed favicons
  • Several security improvements, such as:
    • Implement reauthentication (sudo mode)
    • Add Content-Security-Policy: frame-ancestors
    • Ensure CSP everywhere
    • Fix access rights when creating a new user
  • Several bug fixes, such as:
    • Fix redirections when scraping from HTML
    • Fix feed redirection when coming from WebSub
    • Fix support for XML feeds with HTML entities, or encoded in UTF-16LE
  • Docker alternative image updated to Alpine 3.22 with PHP 8.4 (PHP 8.4 for default Debian image coming soon)
  • Start supporting PHP 8.5+
  • And much more…
6
submitted 1 day ago* (last edited 14 hours ago) by KarnaSubarna@lemmy.ml to c/selfhosted@lemmy.world

Does anyone have experience successfully self-hosting a Signal server using Docker?

Thanks in advance.

EDIT: Thanks all for your responses. I gave up on Signal and am installing a Matrix server instead.

7

Hi! I've never built a NAS before and only one custom gaming PC, so I'd love it if any of you more experienced folks could take a look at my parts selection and possibly suggest better options.

Of course first my use cases:

  • Nextcloud
  • Immich
  • Jellyfin
  • Possibly more, similar to the above

Planning on using TrueNAS with RAIDZ (RAIDZ1? That's one-disk failure tolerance) and running most of my stuff in Docker containers. The number of users will likely stay at or below 3, certainly at or below 5, so it doesn't need to handle that much.

Here's my parts list:

  • CPU: AMD Ryzen 5 Pro 4650G
    • iGPU, power efficient, AM4 so cheaper, performant enough (I think)
  • Case: Jonsbo N3
    • This is the component I started with, since I really like the form factor. It did limit my choice on motherboards heavily though.
  • Motherboard: Gigabyte A520I AC
    • I was trying to go for one with ECC memory support, but at least on PCPartPicker I struggled to find boards in this form factor that support it. However, from reading through forum threads, ECC isn't critically important for a more "casual" build like mine, just a nice-to-have.
  • Memory: Found about 16GB of DDR4 in my old PC; they worked before, so I didn't bother looking at them in detail
    • Cheap
  • Storage:
    • OS: Western Digital Black SN770 1 TB M.2-2280
      • Where I live the 500GB version is actually more expensive
    • Cache: Samsung 870 Evo 500 GB
      • Cheap enough, although if I can combine this with the OS drive, then even better
    • Primary Storage: 4x Seagate IronWolf Pro 8TB (ST8000NT001)
      • I have to admit, I can't recall why I settled on these. 8TB seemed good for price-to-size, and I didn't want the server ones, despite them actually being cheaper, because they're apparently extremely loud. But why Pro and not non-Pro, and why this exact model... I can't recall, I just remember having a headache that afternoon TwT

I realize I left out the cooler and PSU, as I don't think they're particularly relevant here; I can deal with those myself. Price-wise, I am going by German prices and parts availability. On any of the parts listed, or anything I forgot, I would love advice on the quality of my decisions and how to improve them, thanks <3
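As a quick sanity check on the storage choice: RAIDZ1 spends one disk's worth of space on parity, so the usable capacity of the four 8 TB drives works out as:

```shell
# RAIDZ1 usable capacity: (N - 1) data disks, 1 parity disk
# (before ZFS metadata overhead and the usual TB-vs-TiB gap)
disks=4
size_tb=8
echo "$(( (disks - 1) * size_tb )) TB usable"   # prints: 24 TB usable
```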

8
submitted 1 day ago* (last edited 1 day ago) by JohnWorks@sh.itjust.works to c/selfhosted@lemmy.world

If you've been wanting scrobbling history and recommendations similar to Spotify's without having to be subbed to Spotify, you can follow this process to get your Spotify listening history imported into ListenBrainz.

ListenBrainz does have a settings page to import Spotify history, but it is not implemented yet, so this process can be used to import now. I went through it and was able to get my listening history imported, although I needed to update the script that filters out skipped songs. You'd need to update the X to however many JSON files Spotify gives you for your listening history, and then also update the start date to the first listen in your current ListenBrainz history.

#!/bin/bash

# Minimum play length (ms) to count as a listen
MIN_DURATION=30000

# Only keep listens after this date (the first listen already on ListenBrainz)
START_DATE="YYYY-MM-DDTHH:MM:SS"

# Replace X with the number of endsong JSON files in your Spotify export
for i in {0..X}; do
    input_file="parsed_endsong_$i.json"
    output_file="filtered_endsong_$i.jsonl"

    elbisaur parse "$input_file" \
        --filter "skipped!=1&&duration_ms>=$MIN_DURATION" \
        -b "$START_DATE" \
        "$output_file"
done

Or make your own script that'll work better, or maybe the one listed in the article works for you ¯\_(ツ)_/¯

9

I have a Proxmox server running two Opteron 6272 CPUs on an Asus KGPE-D16 (chosen because it was the fastest computer that supported Libreboot, although I haven't gotten around to installing it). Using normal BIOS settings, it's drawing just under 100W at idle, measured via smart plug reported in Home Assistant. With aggressive efficiency settings (PowerCap to P-state 4 and disabling CPU 2 entirely) it idles at 70W. It's a server, not a gaming PC, so it doesn't appear to have any options for underclocking or adjusting voltage.

Anybody know of any other ways (maybe software-based) to get the power draw down further?

10

Hello. I have just recently started self-hosting my media with Jellyfin... and I am LOVING it! I had been carrying around media players for decades, with everyone looking at me like an insane crank for not giving up on my hundreds of gigs of media for SaaS things like Spotify... now they're jealous! We've come full circle!

Annnyway. Obviously, I want to access the server from anywhere, and don't want to just raw-dog an open port to the internet, yikes!

There are SO MANY ways and guides and thoughts on this, I'm a bit overwhelmed and looking for your thoughts on the best way to start off... it doesn't have to be 'fort knox' and I am sure I'll adjust and pivot as I learn more... but here are the options I know of (did I miss any?):

  • Tailscale VPN connection

  • Reverse proxy with Caddy or similar (this is recommended as easy in the official Jellyfin guides and is thus my current leading contender!)

  • Docker/VM 'containerized' server with permissions/access control

What are your thoughts on the beginner-friendly-ness and ease of setup/management of these? This is exclusively for use by me and my family, so I don't need something that's easy for anyone to access with credentials... just our handful of devices.

Please don't laugh, but I'm currently hosting on a Raspberry Pi 5 with a big-ass hard drive attached (using CasaOS on headless Ubuntu Server). I know this is JANK as far as self-hosting goes, and I plan to upgrade to something like a NAS in the future, but I'm still researching and learning, and aside from shitty video transcoding, it's working fine for now... Thank you in advance for your advice, help and thoughts!
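For what it's worth, the reverse-proxy option can be sketched in a few lines of Caddyfile (the domain is a placeholder; 8096 is Jellyfin's default HTTP port, and Caddy obtains HTTPS certificates automatically for a publicly resolvable domain):

```
jellyfin.example.com {
    # Forward everything to the Jellyfin instance
    reverse_proxy localhost:8096
}
```

Note that exposing any reverse proxy still means an open port; the Tailscale option avoids that entirely at the cost of installing the client on each family device.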

11

New server has been acquired. Debian 13 has been installed.

GS308EP switches have been acquired and installed.

Now, I'm working to migrate to the new machine. Three and a half years ago, when I started futzing with Docker, I sorta followed guides and guessed, abused it trying to make it do things it wasn't designed for, and flipped switches I likely shouldn't have flipped, so the setup is more than a little shabby.

As a result, I'll likely end up redeploying more than migrating the containers.

So rather than go forward with Docker blindly, I want to reassess: should I look into Proxmox, LXC, or Podman instead of Docker, or maybe something else entirely?

Work is just about done dumping ESX for Nutanix, but both of those seem overkill for my needs.

Of course the forums for any of the solutions make their own out to be the best thing since sliced bread and the others useless, so I'm hoping to get a more nuanced answer here.

12

Some thoughts on how useful Anubis really is. Combined with comments I read elsewhere about scrapers starting to solve the challenges, I'm afraid Anubis will soon be outdated and we'll need something else.

13
submitted 3 days ago by kiol@lemmy.world to c/selfhosted@lemmy.world
14

I have a bunch of plain text recipe files on a NAS. If a family member wants to cook something, they ask me to print them a copy.

I’m looking for as simple a way as possible to put them on a local web server via a Docker image or similar.

Basically all I need is to have http://recipes.local/ show the list of files, then you can click one to view and or print it.

Don’t want logins. Don’t need the ability to edit files. Want something read-only I can set and forget while I continue to manage the content directly on the NAS.

What would you suggest?
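One minimal sketch (assuming a Caddy container; the NAS mount path is a placeholder): Caddy's built-in file server with browsing enabled gives exactly a clickable, read-only file list.

```yaml
services:
  recipes:
    image: caddy:alpine
    # --browse renders a clickable directory listing at /
    command: caddy file-server --browse --root /srv --listen :80
    ports:
      - "80:80"
    volumes:
      - /mnt/nas/recipes:/srv:ro   # read-only: files stay managed on the NAS
    restart: unless-stopped
```

Point recipes.local at the host's IP in local DNS (or /etc/hosts), and plain-text files will render inline in the browser, ready to print.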

15

So,

I've never bothered with this before, since systemd seems to work just fine.

But this year I did stop using Ubuntu for most of my hosting needs and moved to Alpine or Debian, depending on what I'm doing.

So it makes sense to optimize even more. I read up a little on why people dislike systemd. Good reasons, mainly if you're worried that it's doing too much and is too heavy.

So what are the alternatives that work with both Alpine and Debian? What are people using? Is it relatively easy to move from systemd to whatever your alternative is?

Thanks!

16

I see that when people ask about music servers, Navidrome or mpd/mopidy are frequently suggested. I haven't tried either; I'm just using Jellyfin as an all-in-one. I'm wondering: why do people choose a dedicated music server over an all-in-one like Jellyfin?

Is the extra overhead worth it?

17

I'm planning out a Proxmox box with an OPNsense VM for an upcoming build. I want to consolidate multiple little boxes into one more capable device.

I was planning on using a dual-port NIC that I would pass through to the OPNsense VM. I like the idea of the WAN interface being piped directly to the VM rather than passing through the host and being presented as a virtual device. But that means BSD has to play nice with it, and as I understand it, BSD network drivers can be temperamental and Intel's drivers are just better.

I was looking at using a cheap dual-port Intel I226-V NIC for this, but Intel's not in a great place right now, so I'd like to consider other options. Everywhere online, people scream "only use Intel NICs for this", but I find it ridiculous that in 2025 nobody else has managed to make stable drivers for their hardware in this use case.

What are your experiences with non-intel NICs in OPNsense?

18

In the middle of trying to set up Caddy as a reverse proxy for my *arr stack. All local only: no domains or access from outwith the LAN.

Wondering if anyone has done something similar and wouldn't mind sharing their Docker Compose files/Caddyfiles? Struggling to find real-world examples that don't error when I compose up.
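Not tested against your stack, but a local-only Caddyfile along these lines is a common pattern (hostnames are placeholders you'd point at the server in local DNS; the ports are the *arr defaults; the http:// prefix stops Caddy from trying to obtain TLS certificates for LAN-only names):

```
http://sonarr.lan {
    reverse_proxy sonarr:8989
}

http://radarr.lan {
    reverse_proxy radarr:7878
}

http://prowlarr.lan {
    reverse_proxy prowlarr:9696
}
```

This assumes Caddy and the *arr containers share a Docker network so the container names resolve; otherwise substitute the host's IP and published ports.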

19

Hi everyone, I've been working on my homelab for a year and a half now, and I've tested several approaches to managing NAS and selfhosted applications. My current setup is an old desktop computer that boots into Proxmox, which has two VMs:

  • TrueNAS Scale: manages storage, shares and replication.
  • Debian 12 w/ docker: for all of my selfhosted applications.

The applications connect to the TrueNAS storage via NFS. I have two identical HDDs in a mirror, another one with no failsafe (but that's fine, because the data it contains is non-critical), and an external HDD that I want to use for replication, or some other use I still haven't decided.

Now, the issue is the following. I've noticed that TrueNAS complains that the HDDs are unhealthy and has reported checksum errors. It also turns out that it can't run S.M.A.R.T. checks, because instead of using an HBA, I'm passing the entire HDDs by ID directly to the VM. I've read recently that passing virtualized disks to TrueNAS is discouraged, as data corruption can occur. And lately I was having trouble with a self-hosted Gitea instance, where data (apparently) got corrupted, and git threw errors when you tried to fetch or pull. I don't know if this is related or not.

Now the thing is, I have a very limited budget, so I'm not keen on buying a dedicated HBA just out of a hunch. Is it really needed?

I mean, I know I could run TrueNAS directly, instead of using Proxmox, but I've found TrueNAS to be a pretty crappy hypervisor (IMHO) in the past.

My main goal is to be able to manage the data that is used in selfhosted applications separately. For example, I want to be able to access Nextcloud's files, even if the docker instance is broken. But maybe this is just an irrational fear, and I should instead backup the entire docker instances and hope for the best, or maybe I'm just misunderstanding how this works.

In any case, I have some data that I want to store and reliably archive, and I don't want the Docker apps to have too much control over it. That's why I went with the current approach. It has also allowed for very granular control. But it's also a bit more cumbersome, as every time I want to self-host a new app, I need to configure datasets, permissions and the mounting of NFS shares.

Is there a simpler approach to all this? Or should I just buy an HBA and continue with things as they are? If so, which one should I buy (considering a very limited budget)?

I'm thankful for any advice you can give and for your time. Have a nice day!

20

Sorry for the dumb question, and hopefully this is relevant enough to the sub. I have my own firewall, and right now it connects to my ISP's provided home router over RJ45; their router gets a fiber hookup to their network, and it's the only ISP device in my home. If I have a firewall with a fiber port, can I take the fiber going to their device and hook it straight to my firewall, or is there a reason I need their device?

21
submitted 4 days ago* (last edited 4 days ago) by Ek-Hou-Van-Braai@piefed.social to c/selfhosted@lemmy.world

A domain I'd like to use expired a week ago. It's registered at https://www.ionos.com/.

It might be a domain other people are after, as it's something like mySurname.net

From my small amount of research, there is a 30-day cooldown period, and then another 30 days before it actually becomes available.

I'd like to have the domain. I see there are some services that charge ~80 euros to jump on it immediately when it becomes available, but that doesn't sit well with me.

The domain was only registered 2 years ago, and it's likely that nobody else is interested in it.

How do I best go about this?

22

I have a pile of parts lists for tools I'm maintaining, in PDF format, and I'm looking for a good way to take a part number, search through the collection of PDFs, and output which files contain that number. Essentially letting me match random unknown part numbers to a tool in our fleet.

I'm pretty sure the majority of them are actual text you can select and copy+paste, so searching those shouldn't be too difficult; but I do know there are at least a couple in there that are just a string of JPEGs packed into a PDF file. Those would probably need OCR, but tbh I can probably live with skipping over them altogether.

I've been thinking of spinning up an instance of paperless-ngx and stuffing them all in there so I can let it index the contents, including using OCR, then use its search feature; but that also seems a tad overkill.

I'm wondering if you fine folks have any better ideas. What do you think?
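If you end up rolling your own instead, a small sketch along these lines may be enough (assumes poppler-utils for pdftotext; pdfgrep is a one-tool alternative; the manuals/ path is hypothetical). It extracts each PDF's text once into a cache, then greps the cache:

```shell
#!/bin/sh
# Usage: ./findpart.sh PART-NUMBER
# Cache each PDF's text once, then report which PDFs mention the part.
part="$1"
mkdir -p manuals/txt
for f in manuals/*.pdf; do
    [ -e "$f" ] || continue                        # no PDFs present
    txt="manuals/txt/$(basename "${f%.pdf}").txt"
    [ -s "$txt" ] || pdftotext "$f" "$txt"         # extract once, reuse later
done
# -l lists matching files; -F treats the part number as a literal string
grep -lF "$part" manuals/txt/*.txt 2>/dev/null | sed 's|.*/||; s|\.txt$|.pdf|'
```

Image-only PDFs produce empty text files and simply never match, which lines up with the plan of skipping the OCR cases.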

23

I’ve been experiencing some perplexing and frustrating issues with my server, and need some advice from those more knowledgeable than me.

Recently I decided to upgrade my Raspberry Pi server, and I found a good deal on an HP Elite Mini 600 G9 on eBay, so I took the plunge. It’s got an Intel Core i5-12500T and came with 8 GB RAM and a 256 GB SSD. I bumped it up to 32 GB RAM and added a 4 TB SSD. It came with Windows installed, but I installed Debian.

With the basics taken care of, I got set up with my couple of Docker containers (if it matters: Caddy, Actual Budget, Immich, Prometheus, Grafana). But ever since then, any time some CPU-heavy process runs, the whole machine freezes and stays frozen (I’ve tried letting it go to see if it recovers, but it stays frozen for days), and I am forced to physically power it down. I tried to isolate it, thinking it was one of the Docker containers, but it happened with Immich, Prometheus, and Grafana individually, as well as with a Borg backup running directly on the machine. When I power it back on after one of these freezes, there are no system logs from the entire period of the freeze, so I can’t learn anything from them about the issue.

Anyone have any ideas what the issue could be or even where to look? I’m starting to think it’s a hardware problem but I’m not sure and I don’t know what my next step should be.
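Not a fix, but one way to make the next freeze leave evidence behind: journald only keeps logs across reboots when storage is persistent (on Debian that's Storage=auto with /var/log/journal present). A drop-in like this forces it (the file name is arbitrary; the directory is the standard drop-in location):

```
# /etc/systemd/journald.conf.d/persistent.conf
[Journal]
Storage=persistent
```

After restarting systemd-journald, `journalctl -b -1 -e` shows the tail of the previous boot; with a hard freeze, the last lines before the gap are the closest thing to a clue. If even that shows nothing, a hardware cause (the freshly upgraded RAM being a common suspect) becomes more likely.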

24

cross-posted from: !nostupidquestions@lemmy.world

Wplace is a freemium online game that lets anyone create pixel art on top of a map of the world. I got into it just yesterday (and started a community showcasing the stuff I found). But basically any place with a human population is already densely drawn upon, so I've kinda missed out on a lot of those. If I don't care about the entire world being able to see what I draw, and I just want to be able to collab with a couple of friends and maybe share screenshots, what would be involved in basically making my own private clone of it?

25

As of right now, I currently have a working Docker container for Caddy which can successfully get TLS certs and I am able to access my own test site with an external web browser.

What I want to do is use the same files (Dockerfile, docker-compose.yml and Caddyfile) to do the same with Podman Compose. When I run podman compose up -d, I am able to build the Caddy container, and it will also successfully get its own TLS cert.

docker-compose.yml

services:
  caddy:
    container_name: caddy
    build: .
    restart: always
    ports:
      - 80:80
      - 5050:443
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
      - /home/sxc-pi/shared/:/srv:Z
    networks:
      - reverse_proxy

volumes:
  caddy_data:
  caddy_config:

networks:
  reverse_proxy:
    external: true

While on the same device, I can use curl localhost:5050 and get the message Client sent an HTTP request to an HTTPS server., which is the same result as when using Docker. If I try to access my site through my domain name or local network IP address from an external device, the connection times out.

I didn't make any changes to my firewall or my router's port forwarding because I expect rootful Podman Compose to work similarly to Docker.

I checked iptables, and below are the differences between Docker and Podman, but I don't know networking well enough to understand what it's really saying.

iptables differences

sxc-pi:/srv/caddy$ diff ~/iptables-docker ~/iptables-podman 
--- /home/sxc-pi/iptables-docker
+++ /home/sxc-pi/iptables-podman
@@ -31,8 +31,6 @@
 
 Chain DOCKER (2 references)
 target     prot opt source               destination         
-ACCEPT     tcp  --  anywhere             172.18.0.2           tcp dpt:https
-ACCEPT     tcp  --  anywhere             172.18.0.2           tcp dpt:http
 DROP       all  --  anywhere             anywhere            
 DROP       all  --  anywhere             anywhere            
 
@@ -70,15 +68,20 @@
 Chain NETAVARK_FORWARD (1 references)
 target     prot opt source               destination         
 DROP       all  --  anywhere             anywhere             ctstate INVALID
+ACCEPT     all  --  anywhere             10.89.0.0/24         ctstate RELATED,ESTABLISHED
+ACCEPT     all  --  10.89.0.0/24         anywhere            
 
 Chain NETAVARK_INPUT (1 references)
 target     prot opt source               destination         
+ACCEPT     udp  --  10.89.0.0/24         anywhere             udp dpt:domain
+ACCEPT     tcp  --  10.89.0.0/24         anywhere             tcp dpt:domain
 
 Chain NETAVARK_ISOLATION_2 (1 references)
 target     prot opt source               destination         
 
 Chain NETAVARK_ISOLATION_3 (0 references)
 target     prot opt source               destination         
+DROP       all  --  anywhere             anywhere            
 NETAVARK_ISOLATION_2  all  --  anywhere             anywhere            
 
 Chain ufw-after-forward (1 references)

I've also rebooted after starting the Podman containers in case there were any iptables issues, but that still didn't help.

I've searched what I can but haven't gotten anything to work or get me closer to finding an answer.

I'm hoping to use rootless Podman if I can figure this out; if not, I have Docker as a fallback plan.

Any help or insight would be appreciated.


Selfhosted

50740 readers

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago