1
1

I recently had a Proxmox node I was using as a NAS fail catastrophically. Not surprising, as it was a repurposed 12-year-old desktop. I was able to salvage my data drive, but the boot drive was toast. It looks like the SATA controller went out and fried the SSD I was using as the boot drive. This system was running TurnKey FileServer as an LXC, with the media storage on a subvol on a ZFS storage pool.

My new system is based on OpenMediaVault and I'm happy with it, but I'm hitting my head against a brick wall trying to get it to mount the ZFS drive from the old system. I tried installing ZFS using the instructions here, as OMV is based on Debian, but haven't had any luck so far.
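In case it helps, the route I'm attempting looks like this (a sketch, assuming the contrib repo is enabled, since that's where Debian keeps ZFS; "oldpool" is a placeholder for whatever the old pool was called):

# install ZFS on the Debian base underneath OMV
apt install linux-headers-$(uname -r) zfs-dkms zfsutils-linux

# list pools visible on the attached disk, then import
zpool import
zpool import -f oldpool

(There is also an openmediavault-zfs plugin available through OMV-Extras, which may be the smoother route; I haven't tried it yet.)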

2
1

Hi all!

I have a nice setup with some containers (Podman rootless) and bare-metal services (anything I can install bare metal usually goes bare metal).

In the past I used Monit to keep an eye on my services and automatically restart anything that goes down for whatever reason. I stopped using Monit because it doesn't scale well in a mobile browser and it's frankly clumsy to configure.

I could go back to Monit, I guess, but I am wondering if there is anything better out there to try.

A few requirements (not necessarily mandatory, but preferable):

  • Open source (ideally true open source, not just commercial solutions with dumbed-down free versions)
  • Not limited to, or focused on, containers (no Watchtower and similar)
  • For containers, it can just support "works" or "restart"
  • For containers, if it goes beyond the minimum "works" and "restart", it must support Podman
  • Must support bare-metal services (status, start, stop — the sketch below shows the bare minimum I mean)
  • Must send email or other kinds of notifications (IM notifications are OK, but email is preferred)
  • Should additionally monitor external machines (e.g. other servers on the LAN) or generic IP addresses
  • Should detect if a web service is alive but hung
  • No need for a fancy GUI or web GUI (it's a plus, but not required)
  • No need for data reporting, graphs and such amenities. They are a plus, but 100% not required.
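For scale, the bare minimum I'd otherwise hack together myself is something like this (service name and email address are placeholders):

#!/usr/bin/env bash
# check a systemd service, restart it if down, and send a mail about it
SERVICE=myservice
if ! systemctl is-active --quiet "$SERVICE"; then
    systemctl restart "$SERVICE"
    echo "$SERVICE was down, restarted at $(date)" | mail -s "monitor: $SERVICE" admin@example.com
fi

I'm hoping for something a bit more complete than a pile of cron jobs like that.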

What do you guys use?

3
1
submitted 1 day ago* (last edited 1 day ago) by asbestos@lemmy.world to c/selfhosted@lemmy.world

As it stands, both Piped and Invidious are dead. Because of that, I almost completely stopped watching YouTube, but I'd still sometimes like to check what the people I follow have posted (I used to do that via Piped). Are there any new ways of following people without actually using Google? I'm aware of the tools that download new videos as they come out, but I'm more interested in just "subscribing", kinda like RSS.
Ideally it would work on iOS.
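(For what it's worth, I know YouTube itself still exposes a per-channel Atom feed that any RSS reader can consume:

https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID

where CHANNEL_ID is the channel's UC… identifier, so maybe the real question is which self-hosted reader handles these nicely on iOS.)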

4
1
Selfhosted Journal (lemmy.world)

So I have been self-hosting my calendar and todo list on a local server for some time now. I use Thunderbird's Tasks on my laptop and jtx Board on my phone.

I see that jtx Board has a journaling feature, but it looks like maybe it is just for notes rather than a place to write self-reflections. Is there something similar to this app in the self-hosting world, with both a mobile and a desktop component?

5
1

I am building a Proxmox server running on an SFF PC. Right now I have:

  • 1 x 250 GB Kingston A400 SATA SSD
  • 1 x 512 GB Samsung 970 Evo Plus NVMe
  • 1 x 512 GB Kingston KC3000 NVMe
  • 1 x 12 TB Seagate IronWolf re-certified disk

I plan to install Proxmox on the 250 GB Kingston disk using ext4 and use it only for Proxmox and nothing else.

I am thinking of configuring ZFS mirrored RAID on the two NVMe disks. One disk is in the motherboard's M.2 slot, and the other is connected to a PCIe slot with an adapter, as I have only one M.2 slot on the board. I plan to use this zpool for VMs and containers.
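A rough sketch of the pool I have in mind (device names are placeholders; Proxmox can also create this from its GUI):

zpool create -o ashift=12 nvmepool mirror /dev/nvme0n1 /dev/nvme1n1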

Finally, the re-certified 12 TB disk is currently going through a long smartctl test to confirm that it is usable. It is meant primarily for storing media, non-critical data, and VM snapshots, which I don't care much about. In parallel, I will most likely also copy the critical data to a cloud location as an additional way to protect my most important data.

My question: should I really be concerned about the lack of DRAM in the Kingston A400 SSD and its relatively low TBW endurance (85 TB) if I use it only to boot Proxmox? My thinking is that the wear on the drive would be negligible in that case.

  • I have the option to exchange the Proxmox boot drive for a proper SSD, like a Samsung 870 Evo (a SATA SSD using MLC NAND and having a DRAM cache). I would of course need to pay around 60% more, and I suspect this might be overkill.
  • Do you think that using a ZFS pool on the two NVMe drives will wear them out very quickly? I will have 3-4 VMs and a bunch of containers.
  • Is a slow Proxmox boot drive (SATA SSD) going to slow down the VMs and containers, given that they will run on much quicker NVMe SSDs, or won't it matter?
  • Shall I format the Seagate HDD with XFS to speed up the transfer of large files, or shall I stick with ext4?
  • What other tests shall I run to confirm that the HDD is indeed fine and I can use it? (My current checklist is sketched below.)
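For reference, the checks I'm running or planning on the 12 TB disk so far (device name is a placeholder):

# long SMART self-test, then review health, attributes and the self-test log
smartctl -t long /dev/sdX
smartctl -a /dev/sdX

# optionally a full-surface write/read pass -- destructive, wipes the disk
badblocks -wsv /dev/sdX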
6
1

Hi all,

I'm wondering if anyone has suggestions for an (ideally) FOSS app that can help me transfer a large number of files between mobile devices. The exact scenario I'm trying to solve is transferring a large number of pictures and videos from a family member's iPhone to my Android phone.

I've tried a few solutions (see the list below) but they all had some shortcoming or issue. I would ideally love something with an installable mobile app, but only because in my experience mobile web browsers tend to time out or hang when dealing with a large number of file uploads at once.

  • Filerun - Filerun worked the best in my testing and, if there are no other suggestions, I'll probably return to it, despite it not being FOSS.
  • Pingvin - Worked the next best, but would time out more frequently than Filerun. As long as I batched the upload to only a few hundred pictures at a time and kept the screen awake, it handled the upload.
  • PairDrop - Loved the simplicity of this web app and not having to send or deal with share links; however, I was unable to get it to send more than ~100 files at a time.
  • Immich - Honestly a perfect solution, but since I'm only trying to send select pictures between devices, it's way overkill. Plus, family members were uncomfortable with a solution that gave the impression it was automatically uploading ALL of their pictures to my server.

Thanks in advance for the suggestions!

7
1
submitted 2 days ago by kiol@lemmy.world to c/selfhosted@lemmy.world

More comprehensive show notes are on the Flarum forum. Enjoy this federated, self-hosted FOSS podcast about DIY and learning. I'm looking forward to expanding it to include more DIY, hardware, and other sorts of projects like cooking and music. I've also added mixing through Stereotool, run off my old Pi.

8
1

I have a complex Tailscale-based network setup that includes blocking all Google hostnames. Unfortunately, with those blocks in place, RCS on iOS doesn't work when sending photos.

I've scoured AT&T's website and the App Privacy Report on iOS (which doesn't show DNS names for the Messages and Phone apps), but I do know they switched to Google as their RCS provider at one point.

I'd like to set up a Tailscale App Connector using hostnames, but if they're using raw IP addresses I can work with those as well (subnet routing).
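For reference, my understanding of the App Connector side is an ACL entry roughly like this, per Tailscale's app connector docs; the actual Google RCS domains are exactly what I'm missing (the ones below are pure placeholders):

"nodeAttrs": [
    {
        "target": ["*"],
        "app": {
            "tailscale.com/app-connectors": [
                {
                    "name": "google-rcs",
                    "connectors": ["tag:connector"],
                    "domains": ["example.google.com"]
                }
            ]
        }
    }
]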

9
1
Mini pc arriving tomorrow (lemmy.dbzer0.com)

Greetings, so I finally got wife permission to buy a Pi Zero 2 and a Beelink S12 Pro (N100), arriving tomorrow. I already have a NAS drive for my media.

My question: what's the typical setup, and are there guides for this?

Of course I will be scouring this and other communities for info, but the immediate items I want to sort out are my Plex/Jellyfin server and RetroArch or equivalent gaming, then of course the *arr servers. I would also like to get into a reverse proxy, SearXNG, Nextcloud, and Pi-hole.

Any tips on how to make this beautiful?

OS recommendations? I currently run Manjaro on my daily driver, but I'd think Kubuntu or a KDE Fedora/Debian spin might be better for these items.

Guides you can point me to? Suggestions for more or better options? There are plenty of answers in this community and I will look at what’s posted but any assistance is appreciated.

Thank you in advance.

I'm excited to start playing with the simple things.
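For concreteness, the kind of starting point I've seen in guides is a Docker Compose file per service, something like this for Jellyfin (paths are placeholders for my NAS mount); corrections welcome if that's the wrong way to go:

services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config
      - /mnt/nas/media:/media:ro
    restart: unless-stopped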

10
1

Hey, just sharing Faridoon, which was recently released and just got reshared on the selfh.st podcast. You can publish your favourite quotes, and upvote them too. Great for communities looking to save some of their history.

11
1

I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned about those hours, considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they're not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.

I'm considering swapping to 2x 'refurbed' 12TB enterprise drives and running ZFS RAIDZ1 (effectively a mirror with two disks). Even though they'd have a decent number of hours on them, they'd be better-quality drives, and fewer disks means less chance of any one failing (I have good backups).

I don't feel like staying with my current setup is worth it past the next drive failure, so I may as well change over now before it happens?

Also, the 6 disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see the solid case in its rack in the garage), it'll be nicer. The migration I have in mind is sketched below.
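Roughly this, assuming pool and device names as placeholders (and verifying against backups before destroying the old pool):

# create the new two-disk mirror, snapshot everything, and replicate it over
zpool create newpool mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool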

12
1

Someone on Lemmy posted a phrase recently: "If you're not prepared to manage backups then you're not prepared to self host."

This seems like not only sound advice but a crucial attitude. My backup plans have been fairly sporadic as I've been entering the world of self-hosting. I'm now at a point where I have enough useful software and content that losing my hard drive would be a serious bummer. All of my most valuable content is backed up in one way or another, but it's time for me to get serious.

I'm currently running an Ubuntu server with a number of Docker containers and lots of audio, video, and documents. I'd like to be able to back up everything to a reliable cloud service. I currently have a subscription to Proton Drive, which is nice padding to have, but which I knew from the start would not really be adequate, especially since there is no native Linux Proton Drive client.

I've read good things about IDrive, S3, and Backblaze. Which one do you use? Would you recommend it? What makes your shortlist? What is the best value?
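For context, the kind of nightly job I'm picturing is restic pointed at an S3-compatible bucket, roughly like this (bucket, endpoint, and paths are placeholders):

export RESTIC_REPOSITORY=s3:s3.us-west-000.backblazeb2.com/my-backup-bucket
export RESTIC_PASSWORD=...   # plus the bucket's key ID/secret in the AWS_* vars
restic init                  # one-time repo setup
restic backup /srv/docker /srv/media
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune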

13
1
submitted 3 days ago* (last edited 3 days ago) by a_fancy_kiwi@lemmy.world to c/selfhosted@lemmy.world

tl;dr: I'd like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few self-hosted services over the internet, but I'm not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is asking too much, unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via Tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching from Plex to Jellyfin without requiring my family to learn how to use a VPN, Home Assistant voice stuff, etc.), but I'm kind of unsure what the best approach is. Hosting services on the internet has risk, and I'd like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What's the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.
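For reference, the part I do understand: once the proxy exists, the per-service config is tiny. A Caddy sketch (domains and ports are placeholders; Caddy fetches the SSL certs itself):

immich.example.com {
    reverse_proxy 127.0.0.1:2283
}

jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}

It's everything around it (where to run it, how to lock it down) that I'm unsure about.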

14
1
submitted 3 days ago* (last edited 2 days ago) by shaked_coffee@feddit.it to c/selfhosted@lemmy.world

I've made the following backup script for my Immich stack, to be run automatically every day:

# Load variables from the .env file
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
set -a
source "$SCRIPT_DIR/../.env"
set +a

# Create a dump of the database and back it up
docker exec -it immich_db pg_dumpall -c -U immich > immich/latest_db_dump.sql
rustic -r $BUCKET_NAME/immich backup immich/latest_db_dump.sql --password=$REPO_PWD

# Backup the library, uploads and profile folders from the upload volume
rustic -r $BUCKET_NAME/immich backup immich/uploads_v/library --password=$REPO_PWD
rustic -r $BUCKET_NAME/immich backup immich/uploads_v/upload --password=$REPO_PWD
rustic -r $BUCKET_NAME/immich backup immich/uploads_v/profile --password=$REPO_PWD

# Apply forget policy
rustic -r $BUCKET_NAME/immich forget $RUSTIC_FORGET_POLICY --password=$REPO_PWD

When I test it manually, everything works properly and the created SQL dump file is complete and properly backed up.

However, when the execution is triggered automatically by a cronjob (as specified in this crontab line)

"30 3 * * *    root    /home/admin/WinguRepo/scripts/docker_backupper.sh"

(the line is taken from the NixOS configuration file; that's why it also contains the user executing the operation)

it seems something breaks in the dumping process: the script completes successfully, but the SQL dump file is empty (as can be seen in the following output of rustic -r myrepo snapshots):

snapshots for (host [wingu-box], label [], paths [immich/latest_db_dump.sql])
| ID       | Time                | Host      | Label | Tags | Paths                     | Files | Dirs |      Size |
|----------|---------------------|-----------|-------|------|---------------------------|-------|------|-----------|
| 10a32a83 | 2025-01-06 20:56:48 | wingu-box |       |      | immich/latest_db_dump.sql |     1 |    2 | 264.6 MiB |
| 1174bc2e | 2025-01-07 12:50:36 | wingu-box |       |      | immich/latest_db_dump.sql |     1 |    2 | 264.6 MiB |
| 00977334 | 2025-01-08 03:31:24 | wingu-box |       |      | immich/latest_db_dump.sql |     1 |    2 |       0 B |
| 513fffa1 | 2025-01-10 03:31:25 | wingu-box |       |      | immich/latest_db_dump.sql |     1 |    2 |       0 B |
4 snapshot(s)

(the first two snapshots were triggered manually by me executing the script; the latter two were triggered automatically by the cronjob)

Any idea about what is causing this behavior?

EDIT: Solution found thanks to @farcaller@fstab.sh's comment:

You don’t need -it because you don’t run an interactive session in docker. It might be failing because you ask for a pseudoterminal in an environment where it doesn’t make sense.
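So the working line is simply the same command without those flags (no TTY exists under cron, so asking for one made docker exec fail and the redirect captured nothing):

docker exec immich_db pg_dumpall -c -U immich > immich/latest_db_dump.sql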

15
1
16
1

Hi everyone,

So I have a VPN pointing to a home server running 24/7 at 192.168.1.60.

I am using NetworkManager to import the WireGuard configuration on my client:

nmcli connection import type wireguard file home.conf

On the client, when connected to another Wi-Fi network, I couldn't ping the server address. At the time I figured that, since both networks use the same 192.168.1.x subnet, the router assumed it was a local IP. Adding the route manually on my client worked:

sudo ip route add 192.168.1.60/32 via 10.8.0.1 dev home

Later I started thinking: since I have 0.0.0.0/0 in AllowedIPs, all of my traffic should go through the VPN, correct?

But my routing table still defaults to the local Wi-Fi, not the VPN gateway:

$ ip route
default via 192.168.1.254 dev wlp4s0 proto dhcp src 192.168.1.79 metric 600
10.8.0.0/24 dev home proto kernel scope link src 10.8.0.2 metric 10
169.254.0.0/16 dev home scope link metric 1000
192.168.1.0/24 dev wlp4s0 proto kernel scope link src 192.168.1.79 metric 600

Shouldn't the default route be the 10.8.0.0 line?

Do I need to run this command every time I enable the NetworkManager profile:

sudo ip route replace default via 10.8.0.1 dev home

The output of nmcli:

$ nmcli
wlp4s0: connected to MEO-FAFD00
        "Intel 8260"
        wifi (iwlwifi), 14:AB:C5:84:50:67, hw, mtu 1500
        ip4 default, ip6 default
        inet4 192.168.1.79/24
        route4 192.168.1.0/24 metric 600
        route4 default via 192.168.1.254 metric 600
        inet6 2001:8a0:e953:b600:2b47:f53f:cfd6:1f13/64
        inet6 fe80::bd36:f271:51dd:f0b3/64
        route6 fe80::/64 metric 1024
        route6 2001:8a0:e953:b600::/64 metric 600
        route6 2001:8a0:e953:b600::/64 via fe80::ce19:a8ff:fefa:fcff metric 605
        route6 default via fe80::ce19:a8ff:fefa:fcff metric 600

lo: connected (externally) to lo
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
        inet4 127.0.0.1/8
        inet6 ::1/128

home: connected to home
        "home"
        wireguard, sw, mtu 1420
        inet4 10.8.0.2/24
        route4 default metric 10
        route4 10.8.0.0/24 metric 10
        route4 169.254.0.0/16 metric 1000

My home.conf (private and public keys removed):

[Interface]
PrivateKey = 
Address = 10.8.0.2/24
DNS = 1.1.1.1

[Peer]
PublicKey = 
PresharedKey = 
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 0
Endpoint =  MY_HOME_EXTERNAL_IP:51820
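One thing I plan to check (an assumption on my part, untested): NetworkManager has a per-profile property controlling whether a WireGuard connection claims the default route, so something like this might remove the need for the manual ip route commands:

# inspect what NM decided about default routing for the tunnel
nmcli connection show home | grep -i default

# force the tunnel to own the IPv4 default route (untested assumption)
nmcli connection modify home wireguard.ip4-auto-default-route yes
nmcli connection down home && nmcli connection up home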
17
1
submitted 4 days ago* (last edited 4 days ago) by Maroon@lemmy.world to c/selfhosted@lemmy.world

I have an old OnePlus 5T that has LineageOS installed. I don't really do anything with it and I thought it would be cool to host my first ever website (static) on it.

What I've done so far:

  1. Got the HTML file for my website.
  2. Got the CSS style sheet for that site.
  3. Purchased a domain name.

I request help/guidance with:

  1. A minimal install of Debian, nginx, Docker, and Fail2ban. (I feel I need help with the Debian installation because the rest seems easy enough.)
  2. Hosting my website from my home: e.g., whether I should consider a separate subnet or VLAN to protect my other devices when I expose ports 80 (HTTP) and 443 (HTTPS) on my router so outside clients can reach my server phone.

I know this sounds like complicating matters for something I have never done before, but any help would be greatly appreciated. I have hosted stuff at home before (Pi-hole, LibreTranslate, etc.), but I think this website project may not be as straightforward.
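For the nginx piece, my understanding is the static-site config itself is small, something like this (domain and root path are placeholders):

server {
    listen 80;
    server_name example.com;
    root /var/www/site;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

It's the Debian-on-the-phone part and the network isolation I'm less sure about.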

18
1

Hi guys!

Postiz is an open-source social media scheduling tool. After much digging, I finally got Lemmy to work with Postiz.

And, of course, it's open source! Let me know if it works for you!

And if you have suggestions for more Fediverse platforms, I am happy to hear them :)

19
1

I tried setting up the Nextcloud All-in-One container on a computer I have at home, and it was a bit of a mess to get working. Back in the day I used an Ansible script to set up Nextcloud, and that seemed much better. Any advice or pointers?

20
1

My post on hosting Tailscale got removed for Rule 3, though it is directly related to self-hosting. I've messaged three mods more than once. It's sort of a letdown for one of the biggest communities here to be like this... :/

21
1

I recently moved my files to a new ZFS pool and used the chance to properly configure my datasets.

This led me to discovering ZFS deduplication.

As most of my storage is used by my Jellyfin library (~7-8 TB), which is mostly uncompressed Blu-ray rips, I thought I might be able to save some storage using deduplication in addition to compression.

Has anyone here used that for similar files before? What was your experience with it?

I am not too worried about performance. The dataset in question rarely changes, basically only when I add more media every couple of months. I also overshot my CPU target when originally configuring my server, so there is a lot of headroom there. I have 32 GB of RAM, which is not really fully utilized either (but I also would not mind upgrading to 64 too much).

My main concern is that I am unsure it is useful. I suspect that, just because of the amount of data and the similarity in type, there would statistically be a lot of block-level duplication, but I could not find any real-world data or experiences on that.
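One thing I did find: ZFS can simulate deduplication on existing data without enabling it, which would answer exactly this question for my pool (pool name is a placeholder; I've read it can take a while and use a lot of RAM):

# simulate the dedup table and print a histogram plus an estimated dedup ratio
zdb -S poolname

Has anyone run this against a media library and care to share the ratio they got?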

22
1

Now that we know AI bots ignore robots.txt and churn through residential IP addresses to scrape websites, does anyone know of a method to block them that doesn't entail handing your website over to Cloudflare?
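For example, the obvious nginx building blocks, per-IP rate limiting plus refusing the self-identifying crawler user agents, only catch the honest ones (a sketch; zone size, rate, and the UA list are illustrative):

# in the http {} block
limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
        if ($http_user_agent ~* (GPTBot|ClaudeBot|CCBot|Bytespider)) {
            return 403;
        }
    }
}

The residential-proxy scrapers spoof normal browser UAs, which is exactly the part I don't have an answer for.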

23
1

I'm here to address some FUD and questions from people who think Plebbit won't succeed. Let's talk about why peer-to-peer is better than all those other social media platforms.

A list of reasons why P2P is better than:

  1. Mastodon / Lemmy / ActivityPub
  • Instance admins can delete user accounts and communities. Instance admins can block other instances. It's too difficult to run your own instance: you need to buy a domain name and a server, set up DDoS protection and SSL, etc.
  • No mechanism for a community owner to present a challenge for posting to their community, so it's impossible to prevent spam.
  2. Bluesky
  • Bluesky instances cannot delete user accounts and communities (as long as they are backed up somewhere else), but they can block user accounts and communities. Since running your own instance is difficult, your user account and community will be blocked most of the time and you won't be able to reach your users.
  • No mechanism for a community owner to present a challenge for posting to their community, so it's impossible to prevent spam.
  3. Nostr
  • Nostr relays cannot delete user accounts and communities (as long as they are backed up somewhere else), but they can block user accounts and communities. Since running your own relay is difficult, your user account and community will be blocked most of the time and you won't be able to reach your users.
  • No mechanism for a community owner to present a challenge for posting to their community, so it's impossible to prevent spam.
  4. Farcaster
  • Hubs cannot delete user accounts and communities (as long as they are backed up somewhere else), but they can block user accounts and communities. Since running your own hub is difficult (long sync time, lots of bandwidth/storage/RAM), your user account and community will be blocked most of the time and you won't be able to reach your users.
  • Hubs in general cannot scale infinitely, as they keep growing forever, like a blockchain.
  • You must pay $5 on Optimism to be able to post; most users don't want to pay. You can also be censored by the Optimism RPC or USDC.
  • No mechanism for a community owner to present a challenge for posting to their community, so it's impossible to prevent spam.
  5. Steemit
  • Blockchain RPCs cannot delete user accounts and communities (as long as they are backed up somewhere else), but they can block user accounts and communities. Since running your own blockchain node is difficult (long sync time, lots of bandwidth/storage/RAM), your user account and community will be blocked most of the time and you won't be able to reach your users.
  • Blockchains in general cannot scale infinitely, as they keep growing forever.
  • You must pay blockchain transaction fees to post; most users don't want to pay.
  • No mechanism for a community owner to present a challenge for posting to their community, so it's impossible to prevent spam.

Plebbit solves each problem:

  • Instances/hubs/RPCs cannot block a user account or community, because there are no instances; it's directly peer-to-peer. A community node can be run from home on consumer internet: no server, domain name, SSL, sync time, etc. It's as easy as running a BitTorrent client.
  • It can scale infinitely, because there is no historical ledger like a blockchain or hub. It's like BitTorrent: if a community no longer has any seeds, it stops existing. (This is also a downside of Plebbit, but scaling is more important; not scaling makes the system useless.)
  • Publishing costs nothing, like BitTorrent, because there is no historical ledger that each node must sync. Users seed their communities for free while they use them, like BitTorrent.
  • A community node can present a challenge to a user posting to its community (like a minimum account age, minimum karma, a captcha, a whitelist, etc.), because it's directly peer-to-peer: the community node is the instance, so it can gatekeep however it wants. (This is also a downside of Plebbit, as a community node must be online 24/7. But it's also possible to delegate running a node to an RPC/instance/hub; you just lose some censorship resistance. So it's not inferior in this regard; it's strictly superior because of the optionality.)
24
1
25
1
SMB + Docker (lemmy.world)

Is there a way to set up an SMB share or similar via Docker? I want to be able to easily turn it off and bind it to a specific folder, and I am comfortable with Docker.

Thanks!
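For anyone wondering what I mean, something along these lines: a Compose sketch around the widely used dperson/samba image (credentials, share flags, and path are placeholders, and the command flags are from that image's README as I remember them, so double-check):

services:
  samba:
    image: dperson/samba
    ports:
      - "445:445"
    volumes:
      - /path/to/share:/share
    command: '-u "user;password" -s "share;/share;yes;no;no;user"'
    restart: unless-stopped

docker compose up -d / docker compose down would give me the easy on/off I'm after.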


Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.
