
Hello all,

This is a follow-up from my previous post: Is it a good idea to purchase refurbished HDDs off Amazon?

In this post I will share my experience purchasing refurbished hard drives and upgrading my BTRFS RAID10 array by swapping out all four drives.

TL;DR: All 4 drives work fine. I was able to replace the drives in my array one at a time, using a USB enclosure for the data transfer!

1. Purchasing & Unboxing

After reading the replies to my previous post, I ended up purchasing 4x WD Ultrastar DC HC520 12TB hard drives from eBay (Germany). Delivery was pretty fast; I received the package within 2 days. The drives were very well packed by the seller, in a dedicated styrofoam tray and anti-static bags.

2. Sanity check

I connected the drives to a spare computer and spun up an Ubuntu Live USB to run a S.M.A.R.T. check and read the values. SMART checks and data are available from GNOME Disks (gnome-disk-utility) if you don't want to bother with the terminal. All four disks passed the self-check; I even ran a complete check on 2 of them overnight, and both passed without any error. More surprisingly, all four disks report Power-On Hours=N/A or 0. I don't think that means they are brand new; I suspect the values have been erased by the reseller. (screenshot: SMART data)
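
If you do use the terminal, the same checks can be run with smartctl from the smartmontools package; a minimal sketch (adjust /dev/sda to your device):

# Print health status and all SMART attributes
sudo smartctl -a /dev/sda

# Start the extended (long) self-test in the background...
sudo smartctl -t long /dev/sda

# ...then read the results once it has finished
sudo smartctl -l selftest /dev/sda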

3. Backup everything!

I selected one of the 12TB drives and installed it in an external USB3 enclosure. On my PC, I formatted the drive as BTRFS with a single partition spanning the entire capacity of the disk. I then connected the now-external drive to the NAS and transferred the entirety of my files (excluding a couple of things I definitely don't need), using rsync:

rsync -av --progress --exclude 'lost+found' --exclude 'quarantine' --exclude '.snapshots' /mnt/volume1/* /media/Backup_2024-10-12.btrfs --log-file=~/rsync_backup_20241012.log

Actually, I wanted to run the command detached, so I used the at command (not sure if this is the best method; feel free to propose alternatives):

echo "rsync -av --progress --exclude 'lost+found' --exclude 'quarantine' --exclude '.snapshots' /mnt/volume1/* /media/Backup_2024-10-12.btrfs --log-file=~/rsync_backup_20241012.log" | at 23:32

The total volume of the data is 7.6TiB; the transfer took 19 hours to complete.

4. Replacing the drives

My RAID10 array, a.k.a. volume1, consists of the disks sda, sdb, sdc and sdd, all of which are 6TB drives. My NAS has only 4x SATA ports, and all of them are occupied (volume2 is an SSD connected via USB3).

m4nas:~:% lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    1   5.5T  0 disk /mnt/volume1
sdb            8:16   1   5.5T  0 disk 
sdc            8:32   1   5.5T  0 disk 
sdd            8:48   1   5.5T  0 disk 
sde            8:64   0 111.8G  0 disk 
└─sde1         8:65   0 111.8G  0 part /mnt/volume2
sdf            8:80   0  10.9T  0 disk 
mmcblk2      179:0    0  58.2G  0 disk 
└─mmcblk2p1  179:1    0  57.6G  0 part /
mmcblk2boot0 179:32   0     4M  1 disk 
mmcblk2boot1 179:64   0     4M  1 disk 
zram0        252:0    0   1.9G  0 disk [SWAP]

According to the documentation I could find (btrfs replace - readthedocs.io; Btrfs, replace a disk - tnonline.net), the best course of action is definitely to use the built-in BTRFS replace command. From there, there are two methods I can use:

  1. Connect the new drives, one by one, via USB3 to run replace, then swap the disks into the drive bays
  2. Degraded mode: swap the disks one by one in the drive bays and rebuild the array

Method #1 seemed faster and safer to me, so I decided to try it first. If it doesn't work, I can fall back to method #2 (which I had to do for one of the disks!).

4.a. Replace the disks one-by-one via USB

(photo: NAS setup with external drive)

I installed a blank 12TB disk in my USB enclosure and connected it to the NAS, where it shows up as sdf. Now it's time to run the replace command as described here: Btrfs, Replacing a disk, Replacing a disk in a RAID array

sudo btrfs replace start 1 /dev/sdf /mnt/volume1

We can see the new disk shown as ID 0 while the replace operation takes place:

m4nas:~:% btrfs filesystem show
Label: 'volume1'  uuid: 543e5c4f-4012-4204-bf28-1e4e651ce2e8
	Total devices 4 FS bytes used 7.51TiB
	devid    0 size 5.46TiB used 3.77TiB path /dev/sdf
	devid    1 size 5.46TiB used 3.77TiB path /dev/sda
	devid    2 size 5.46TiB used 3.77TiB path /dev/sdb
	devid    3 size 5.46TiB used 3.77TiB path /dev/sdc
	devid    4 size 5.46TiB used 3.77TiB path /dev/sdd

Label: 'ssd1'  uuid: 0b28580f-4a85-4650-a989-763c53934241
	Total devices 1 FS bytes used 46.78GiB
	devid    1 size 111.76GiB used 111.76GiB path /dev/sde1

It took around 15 hours to replace the disk. Once it was done, I got this:

m4nas:~:% sudo btrfs replace status /mnt/volume1
Started on 19.Oct 12:22:03, finished on 20.Oct 03:05:48, 0 write errs, 0 uncorr. read errs
m4nas:~:% btrfs filesystem show                 
Label: 'volume1'  uuid: 543e5c4f-4012-4204-bf28-1e4e651ce2e8
	Total devices 4 FS bytes used 7.51TiB
	devid    1 size 5.46TiB used 3.77TiB path /dev/sdf
	devid    2 size 5.46TiB used 3.77TiB path /dev/sdb
	devid    3 size 5.46TiB used 3.77TiB path /dev/sdc
	devid    4 size 5.46TiB used 3.77TiB path /dev/sdd

Label: 'ssd1'  uuid: 0b28580f-4a85-4650-a989-763c53934241
	Total devices 1 FS bytes used 15.65GiB
	devid    1 size 111.76GiB used 111.76GiB path /dev/sde1

In the end, the swap from USB to SATA worked perfectly!

m4nas:~:% lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    0 111.8G  0 disk 
└─sda1         8:1    0 111.8G  0 part /mnt/volume2
sdb            8:16   1  10.9T  0 disk /mnt/volume1
sdc            8:32   1   5.5T  0 disk 
sdd            8:48   1   5.5T  0 disk 
sde            8:64   1   5.5T  0 disk 
mmcblk2      179:0    0  58.2G  0 disk 
└─mmcblk2p1  179:1    0  57.6G  0 part /
mmcblk2boot0 179:32   0     4M  1 disk 
mmcblk2boot1 179:64   0     4M  1 disk 
zram0        252:0    0   1.9G  0 disk [SWAP]
zram1        252:1    0    50M  0 disk /var/log
m4nas:~:% btrfs filesystem show
Label: 'volume1'  uuid: 543e5c4f-4012-4204-bf28-1e4e651ce2e8
	Total devices 4 FS bytes used 7.51TiB
	devid    1 size 5.46TiB used 3.77TiB path /dev/sdb
	devid    2 size 5.46TiB used 3.77TiB path /dev/sdc
	devid    3 size 5.46TiB used 3.77TiB path /dev/sdd
	devid    4 size 5.46TiB used 3.77TiB path /dev/sde

Label: 'ssd1'  uuid: 0b28580f-4a85-4650-a989-763c53934241
	Total devices 1 FS bytes used 13.36GiB
	devid    1 size 111.76GiB used 89.76GiB path /dev/sda1

Note that I haven't expanded the partition to 12TB yet; I will do this once all the disks are replaced. The replace operation has to be repeated 3 more times, taking great care each time to select the correct disk ID (2, 3 and 4) and replacement device (e.g. /dev/sdf).
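
For example, the second pass would look like this (assuming the new blank disk again shows up as /dev/sdf):

sudo btrfs replace start 2 /dev/sdf /mnt/volume1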

4.b. Issue with replacing disk 2

While replacing disk 2, a problem occurred: the replace operation stopped progressing, despite not reporting any errors. After waiting a couple of hours and confirming it was stuck, I decided to do something reckless that caused me a great deal of trouble later. To kick-start the replace operation, I unplugged the power from the USB enclosure and plugged it back in (DO NOT DO THAT!). It seemed to work, and the transfer started to progress again. But once it completed, the RAID array was broken and the NAS wouldn't boot anymore. (I will only cover the things relevant to the disk replacement and skip all the stupid things I did that made the situation worse; it took me a good 3 days to recover and get back on track...)

To mount the array in degraded mode and start the replace operation over with method #2, I had to remove both drive ID=2 (the drive being replaced) and ID=0 (the 'new' drive) from the RAID array. In the end it worked, and the 12TB drive is fully functional. I suppose the USB enclosure is not the most reliable, but the next two replacements worked just fine, like the first one.
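
Roughly, the degraded-mode recovery looked like this (a sketch from memory; /dev/sdX stands for the new disk, now in a drive bay):

# Mount the array in degraded mode from one of the surviving members
sudo mount -o degraded /dev/sdc /mnt/volume1

# Replace the missing device (devid 2) with the new disk
sudo btrfs replace start 2 /dev/sdX /mnt/volume1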

What I should have done: abort the replace operation and start over.
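
For reference, aborting would have looked like this:

sudo btrfs replace cancel /mnt/volume1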

4.c. Extend volume to complete drives

Now that all 4 drives in my RAID array are upgraded to 12TB, I extended the filesystem to use all of the available space:

sudo btrfs filesystem resize 1:max /mnt/volume1
sudo btrfs filesystem resize 2:max /mnt/volume1
sudo btrfs filesystem resize 3:max /mnt/volume1
sudo btrfs filesystem resize 4:max /mnt/volume1
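
To confirm the new capacity after the resize:

sudo btrfs filesystem usage /mnt/volume1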

5. Always keep a full backup !

Earlier, I mentioned using one of the 'new' 12TB drives as a backup of my data. Before using it in the NAS, and therefore erasing this backup, I installed 2 of the old drives in my spare computer and once again made a full copy of my NAS data using rsync over the network. This took a long while again, but I wouldn't skip this step!
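
The copy was the same rsync idea as before, just pulled over SSH from the NAS; roughly (hostname and destination path are illustrative):

rsync -av --progress m4nas:/mnt/volume1/ /mnt/old-drives-backup/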

6. Conclusion: what did I learn ?

  1. Buying and using refurbished drives was very easy, and the savings are great! I saved approximately 40% compared to the new price. Only time will tell if this was a good deal; I hope to get at least 4 more years out of these drives. That's my goal, at least...
  2. Replacing HDDs via a USB3 enclosure is possible with BTRFS; it worked 3 times out of 4! 😭
  3. Serial debug is my new best friend! I didn't detail this part in the post. Let's just say my NAS is a somewhat exotic NanoPi M4V2; I couldn't have unborked my system without a functioning UART adapter, and the one I already had on hand didn't work correctly, so I had to buy a new one. And all the things I did (blindly) to try to fix my system were pointless and wrong.

I hope this post can be useful to someone in the future, or at least was interesting to some of you !

submitted 9 hours ago by skar3@feddit.it to c/selfhosted@lemmy.world

Last night I was writing a script and it made a directory literally named "~" by accident. It being 3am, I did an rm -rf ~ without thinking and destroyed my home dir. Luckily, some of the files were mounted in Docker containers which my user didn't have permission to delete. I was able to get back to an OK state, but lost a bit of data.
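
In hindsight, the safe way to delete that directory would have been to quote the name so the shell doesn't expand the tilde:

rm -rf -- '~'   # quoted: removes only the directory literally named "~"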

I now realize I really should be making backups, because shit happens. I self-host a PyPI repository and a Docker registry, both with containers, plus some game servers in and out of containers. What would be the simplest tool to back up to Google Drive and easily restore?

submitted 22 hours ago by terraborra@lemmy.nz to c/selfhosted@lemmy.world

Google pushed their AI Overview onto my country last night, and that finally gave me the push to change search engines.

One thing I did find useful was having product prices displayed in the search result headers, but this doesn't appear to be enabled in any other engine. I used it to quickly scan between retailers, as not everything shows up in pricespy or priceme.

I deployed a SearXNG instance this morning and have heard that you can use JSON to modify result presentation. Does anyone know if it's possible to use that to display prices?

submitted 1 day ago* (last edited 1 day ago) by athes@lemmy.world to c/selfhosted@lemmy.world

Hello,

Just spent a good week setting up my home server. Time to pause, look back at what I've set up, and ask for your help/suggestions, as I'm wondering whether my configuration below is a good approach or just a uselessly convoluted one.

I have a Proxmox instance with 3 VLAN:

  • Management (192.168.1.x): used by the Proxmox host; it can access all other VLANs

  • Servarr (192.168.100.x): every arr-related service + Jellyfin (all LXC). All outbound connectivity goes via VPN. Can't access any other VLAN

  • myCloud (192.168.200.x): WIP, but basically planning to have things like Nextcloud, Immich, Paperless, etc.

The original idea was to allow external access via a Cloudflare tunnel, but I finally decided to switch back to Tailscale for "myCloud" access (as I expect to share this with fewer than 5 accounts). So:

  • myCloud now has Tailscale running on it.
  • myCloud can now access Servarr VLAN

As a consequence of choosing Tailscale, I now had to use a DNS server to resolve mydomain.com:

  • Servarr now runs Pi-hole as a DNS server, reachable across all VLANs

On top of all that, I have yet another VLAN for my Raspberry Pi running Vaultwarden, reachable only via my personal Tailscale account.

I'm open to restart things from scratch (it's fun), so let me know.

Also wondering whether using LXCs is better than Docker, especially when it comes to updates and longer-term maintenance.

submitted 1 day ago* (last edited 1 day ago) by iso@lemy.lol to c/selfhosted@lemmy.world

The price seems pretty good. I don't really know much about mini PCs. Do you think there is a better alternative?

Update: ok, not price efficient. Noted 👍


A long, long time ago I bought a domain or two, and a shared hosting plan from Dreamhost w/ unlimited bandwidth/storage. I don't have root access and can't run containers on it. It's been useful for a Piwigo instance to share scanned family photos. The problem is that the limited resources really hamper Piwigo's ability to handle the large TIF files involved in the archival scans. There are ways around this, but they all add time to a workflow that already eats into my free time enough. I'm looking at moving Piwigo to my local server, which has plenty of available resources. That leaves me with little reason to keep the Dreamhost space. So what's a decent use case for cheap, shared hosting space anymore?

To be clear, I'm not looking for suggestions to move to a cheap VPS. I've looked into them and might use one in the future, but don't need it right now. The shared hosting costs about $10.99/month at the moment. If there were a way to leverage the unlimited bandwidth/storage as an offsite backup, that would be amazing, but I'm not sure it would be a great idea to back up stuff to a webserver where the best security I can add is via an .htaccess file.


Hi folks, I know many of you are elite sysadmins running custom-built NAS solutions networked together with servers tucked into every spare closet and space in your home, which is awesome. That said, I am still newer on my self-hosted journey, and my existing knowledge comes more from running Linux as a daily-driver OS since 2005 than from actually hosting anything. For this reason, even though it's not ideologically pure, I opted for a Synology NAS for simplicity of management. This was the next step for me after dipping my toes into self-hosting by messing around with some VMs and an old laptop.

With the new DSM update, Synology removes several apps and codec support, most notably H.265. I experienced something similar on Linux, where I cannot view videos recorded on my action cam. I don't know how many of these photos and videos I have in my file system, but my NAS is local-network only and basically contains my photos, videos, ebooks, documents, etc., in separate shares containing a hierarchical folder structure.

My questions:

  1. How can I most easily search my NAS for files needing the removed codecs, so I can gauge how much this will actually affect me? I want to approach the problem in a simple way that I can understand (a rough sketch of what I'm imagining follows this list).
  2. With Linux and Synology DSM both dropping codecs, I am considering just taking the storage hit and converting to H.264 or another format. What would you recommend? I haven't re-encoded video in ages, so I'm learning from scratch, but I do have a desktop with dual 1080s that should be up to the task.
  3. I access my shares via Dolphin on KDE. When it comes to thumbnails for a remote filesystem like this, are they generated and stored on my PC, or will the PC save them to the folder on the NAS where other programs could use them? I just want to make sure I can visually browse the videos and photos on my NAS and have them show up appropriately.
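
For question 1, here's a minimal sketch of the kind of scan I'm imagining, assuming ffprobe from ffmpeg is available (path and extensions would need adjusting):

# Print every video file whose first video stream is HEVC (H.265)
find /volume1/videos -type f \( -name '*.mp4' -o -name '*.mkv' -o -name '*.mov' \) -exec sh -c 'ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 "$1" | grep -qx hevc && echo "$1"' _ {} \;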

I'm a bit frustrated and kind of favoring just moving things to a different format. I bought a Synology device for an easier experience. Then again, even if I built a custom solution, didn't Debian remove H.265 as well? I will probably do TrueNAS or whatever at some point, but I've had way too many family events in the last few years and have to take an easier path right now.

My Linux knowledge is intermediate and my self-hosting knowledge is still fairly basic.


After almost 3 years of work, I've finally managed to get this project stable enough to release an alpha version!

I'm proud to present Managarr, a TUI and CLI for managing your Servarr instances! At the moment, the alpha version only supports Radarr.

Not all features are implemented for the alpha version, like managing quality profiles or quality definitions, etc.

Here are some screenshots of the TUI:

Additionally, you can use it as a CLI for Radarr. For example, to search for a new film:

managarr radarr search-new-movie --query "star wars"

Or you can add a new movie by its TMDB ID:

managarr radarr add movie --tmdb-id 1895 --root-folder-path /nfs/movies --quality-profile-id 1

All features available in the TUI are also available via the CLI.

submitted 2 days ago* (last edited 1 day ago) by Dust0741@lemmy.world to c/selfhosted@lemmy.world

I use Crafty Controller for Minecraft. I have a server running at 192.168.50.16:25540 and want it to resolve to minecraft.example.com. I have Nginx Proxy Manager set up for my domain and can access the server from inside my network, but it'd be nice to use a domain instead.

NPM only has options for HTTP and HTTPS, so is this even possible using NPM?

EDIT: this is for internal access only; I have external access via Tailscale.
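
One thing I've read about but haven't tried: Minecraft's Java Edition resolves SRV records, so a plain DNS record might sidestep NPM entirely (illustrative zone-file syntax; the SRV target must be a hostname, not an IP):

crafty.example.com.                    300 IN A   192.168.50.16
_minecraft._tcp.minecraft.example.com. 300 IN SRV 0 5 25540 crafty.example.com.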

submitted 2 days ago by Sunny@slrpnk.net to c/selfhosted@lemmy.world

Hiya, I am looking into a few different services to better manage my finances, and among the most highly recommended is ActualBudget. ActualBudget itself is open source and private; however, to get the most out of it, you may connect it to your bank via a third-party service. Has anyone here actually done this? The service (for EU folks) is called GoCardless. This, however, is ringing many alarm bells for me...

Here is the screenshot showing the message before connecting to my bank..

Here is GoCardless's list of partners/suppliers:

https://assets.ctfassets.net/40w0m41bmydz/6Mg3PGztGEQh11N3MNRmYc/1f186cf883151ca04b9c71c23b5ee4d3/GoCardless_material_supplier_list_v2024.09.pdf

I assume there is no private alternative that lets you connect your bank to ActualBudget or another service; if there is, please let me know! Managing finances would be so much more convenient if it all synced automatically into a self-hosted service.

Let me know how you manage your finances :)

submitted 3 days ago by Sunny@slrpnk.net to c/selfhosted@lemmy.world

These small, handy-dandy devices seem to get more and more popular. Anyone here chipped in for a JetKVM yet? It looks and sounds pretty solid. Have many of you acquired a nanoKVM?

submitted 2 days ago* (last edited 1 day ago) by ntn888@lemmy.ml to c/selfhosted@lemmy.world

Hi, I have a home server (basically a NAS) currently running Debian. Its configuration is as follows:

  • Debian host running 3 VMs

  • Debian running inside each VM as a Docker host

I just manually installed KVM on the host, then Docker on each VM after creating it. I documented the process so I know how to replicate it in case I need to rebuild.

I now dream of being able to automate the rebuild process using config files. I know this is done using Ansible.

But I've now heard of Talos (a thin OS layer for Kubernetes) and I'm intrigued. I suppose I'd still need a setup for the VM host to achieve automation through config files.

What setup are you guys using?

Thank you.


Thanks for all your suggestions! I've chosen to go with plain bash scripting (given my simple setup) and keep the setup as it is. Just gotta learn bash and virsh :)
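
A rough sketch of the direction (hypothetical VM names and paths; virsh comes with libvirt-clients):

#!/usr/bin/env bash
# Rebuild sketch: register each VM from a saved XML definition, then start it
set -euo pipefail

for xml in ./vm-definitions/*.xml; do
    virsh define "$xml"              # register the VM with libvirt
done

for vm in nas-vm media-vm apps-vm; do
    virsh start "$vm"
    virsh autostart "$vm"            # also start it on host boot
done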


Hey, I've set up a Proxmox server and am running some stuff on it. Basically, I want to know how to send alert notifications from one service to another. For example: I'm running NUT in the Proxmox shell, and I want an alert in TrueNAS (and in Nextcloud, running on TrueNAS) telling the users who are actually using my cloud that the server will shut down soon. Thank you 😄
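
For what it's worth, the direction I've been looking at is NUT's NOTIFYCMD hook in upsmon.conf, which runs a script on power events; that script could then push the warning to the other services (illustrative excerpt, untested):

# /etc/nut/upsmon.conf (excerpt)
NOTIFYCMD /usr/local/bin/ups-notify.sh
NOTIFYFLAG ONBATT  SYSLOG+EXEC
NOTIFYFLAG LOWBATT SYSLOG+EXEC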

submitted 2 days ago* (last edited 2 days ago) by chandler@lemmy.world to c/selfhosted@lemmy.world

Hi self-hosters, we're building a self-hostable, MIT-licensed alternative to Klaviyo, Braze, Mailchimp, etc. You can automate email, SMS, WhatsApp, and lots of other channels.

The core functionality of the platform includes a user segmentation builder, a low-code email template editor, and a low-code drag-and-drop journey builder for creating automated messaging workflows. We also have subscription groups to manage unsubscribes.

Link to repo: https://github.com/dittofeed/dittofeed

If you need any help with deploying an instance, reach out on Discord! https://discord.gg/HajPkCG4Mm


This is a quite popular repo of scripts used by the self-hosting community, so I think it's worth sharing here. Unfortunately, it comes with saddening news about tteck's health. I wish him the best, and that he enjoys his well-deserved rest in peace.

Dear Community,

I wanted to share a personal update. I’ve recently transitioned into hospice care and, as a result, will be slowing down the development of this project. While I’m grateful for the progress we’ve made together, I recognize that I’ll be taking a step back for some rest and reflection during this time.

Thank you for your continued support, encouragement, and understanding. Your dedication to the community and this project means the world to me, and I am grateful for each of you.

Warm regards,

tteck/tteckster

submitted 4 days ago* (last edited 4 days ago) by johnnyfish@lemmy.world to c/selfhosted@lemmy.world

Hi all, I’m one of the creators of ChartDB.

We built ChartDB to simplify database design and visualization, providing a powerful, intuitive tool that's fully open source. This database diagram tool is similar to traditional ones you may already know: DBeaver, dbdiagram, drawSQL, etc.

https://github.com/chartdb/chartdb

Key Features:

  • Instant schema import with just one query.
  • AI-powered export to generate DDL scripts for easy database migration.
  • Supports multiple database types: PostgreSQL, MySQL, SQLite, MSSQL, ClickHouse and more.
  • Customizable ER diagrams to visualize your database structure.
  • Fully open-source and easy to self-host.

Tech Stack:

  • React + TypeScript
  • Vite
  • ReactFlow
  • Shadcn-ui
  • Dexie.js

I'm looking to replace a 2013 Mac Mini running Proxmox. Just curious if anyone has one of these, or has anyone heard of any negatives about them? Watched a bunch of videos, and outside of a lack of 10G Ethernet, it seems to be well received!


Yet another question about self-hosting email, but I haven't found the answer at least phrased in a way that makes sense with my question.

I've got ~15 GB of old Gmail data that I've already downloaded, and Google is on my ass about "91% full", and we know I'm not about to pay them for storage (I'll sooner spend 100 hours trying to solve it myself before I pay them $3/month).

What I want is to have the same (or close to the same) access and experience finding stuff in those old emails when they are stored on my hardware as I do when they are in my Gmail. That is, I want a website and/or app where I can search for emails from so-and-so, in some date range, with keywords. I don't actually want to send any emails from this server or receive anything to it (maybe I would want Gmail to forward to it or something, but probably I'd just do another archive batch every year).

What I've tried so far, which is sort of working: I've set up docker-mailserver on my box, and it is working and accessible. I can connect to it via Thunderbird or K-9 Mail. I also converted the big email download from Google, which was a .mbox, into maildir using mb2md (apt install mb2md on Debian was nice). This gave me a directory with ~120k individual email files.
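
For reference, the conversion was along these lines (paths illustrative):

mb2md -s ~/gmail-takeout.mbox -d ~/Maildir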

When I check this out in Thunderbird, I see all those emails, and they look like they have the right info. (As an aside: I actually only moved 1k emails into the directory that docker-mailserver has access to, just for testing, and Thunderbird only sees that 1k.) I can do some searching on those.

When I open it in K-9, by default it looks like it just pulls in 100 of them. I can pull in more, or refresh, that sort of thing. I don't normally use K-9, so I may just be missing how the functionality is supposed to work.

I also just tried connecting to the mail server with Nextcloud Mail, which works in the sense that it connects but it (1) seems like it is struggling, and (2) is putting 'today' as the date for all the emails rather than when they actually came through. I don't really want to use Nextcloud Mail here...

So I think my question is now really about search and storage. In Thunderbird, I think the way it works (I don't normally use Thunderbird much either) is that it downloads all the files locally and then searches them locally. K-9 appears to do the same, with the caveat that it doesn't look like it really wants to download 120k emails locally (even if I can).

What I think I want, though, is to have the search running on the server. I don't want to download 15GB (and another 9 from Gmail soon enough) to each client. I want it all on the server, where I just put in my search, and the server runs the query and gives me a response.

docker-mailserver has a page on setting up full-text search with Xapian, where it makes all the indices and all that. I tinkered with this and think I got it set up. This is another thing where I'd want the search to use the server rather than the client, since the server is (hopefully) optimized for some of this.
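
The manual (re)indexing step, as I understand it, goes through Dovecot's doveadm inside the container (container name and mailbox address are whatever yours are):

docker exec mailserver doveadm index -u me@example.com '*'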

Should I be using a different server for what I want here? I've poked around at different ones and am more than open to changing to something else that is more for what I need here.

For clients, should I be using Roundcube or something else? Will that actually help with this 'use the server to search' question? For mobile, is there any way to avoid downloading all the emails to the client?

Thanks for the help.


Hi. Some friends of mine are starting a business, and they want to set up a server to host a simple "contact" website, run an e-mail service (about 10 accounts for now, with the possibility of expanding to support more), and store and remotely access documents.

I'm a computer-savvy person, so they asked me for help, but I don't know much about self-hosting, so I come here asking you:

What kind of hardware do they need, and what would be best? What OS and other software are required and recommended? How should they set it up and configure it? I'm partial to FOSS, but good proprietary options are acceptable too. And lastly: what do we have to watch out for or avoid?

Also, space is a bit of an issue. I was thinking they could use something small like an Intel NUC, but I'm worried that hardware would be underpowered for their needs.

I have been googling myself, but I get overwhelmed by the amount of information and some contradictory opinions, so I'd appreciate your recommendations and guidance. I'm not asking you for a full tutorial, although I would appreciate that too, but just to be pointed in the right direction so we avoid, as much as possible, spending money and time on things they might not really need or that might not perform well.

Thanks in advance.


Hello.

I've been trying to get familiar with self-hosting. The only roadblock is that I'm unable to, because I am a university student living in student accommodation, where hosting anything is against the WiFi policy. And currently I don't even have my Raspberry Pi with me. My laptop is relatively low-specced, so I can't exactly run VMs, but I want to learn more about hosting and the services I can host. I recently signed up for a free managed Nextcloud instance because I wanted to see what it's like and whether I'd be interested in hosting my own.

I know VPSes are an option, but they can get pretty costly, especially for a student like me. Do you have any recommendations, including any cheap, reliable VPSes for a UK student to dip his toes into self-hosting? Thank you.

P.S. I know this isn't exactly self-hosting, as I'm technically reliant on third-party hardware, but it's the only option in my situation.


I'm thinking about self-hosting my own Lemmy server, and I probably have more questions than answers. But maybe some simple ones: do server owners get to set the number of days a post is retained before it's deleted, or are there defaults baked into the software package?

Can server owners restrict image sizes or the number of images that can be uploaded?

Can a server owner restrict the creation of new communities? I’m curious how granular permissions can get.

Would I be better off hosting my own instance to get some of these questions answered? 😁

Thanks in advance!

submitted 4 days ago* (last edited 4 days ago) by jivandabeast@lemmy.browntown.dev to c/selfhosted@lemmy.world

As requested by /u/funkless_eck@sh.itjust.works, this is a walkthrough of how I set up NGINX Proxy Manager with a custom domain, giving me the simplicity of DNS access to my services with the security of Tailscale restricting public access. This works great for things you want easy remote access to but don't want open to the internet in general (unRAID GUI, Portainer, Immich, Proxmox, etc.).

Prerequisites

  1. A custom domain (obviously, because that's the whole point of this tutorial)
  2. A Tailscale account with your devices linked to it

Steps

  1. On the server that you want to serve as the entry point into your network, install the NGINX Proxy Manager Docker container (you could absolutely use a different installation method, but I prefer Docker, so that's how this guide is written; a minimal docker run sketch follows after this step's notes)

    I. For this, I have a Raspberry Pi dedicated to being my network entry point. This method is probably overkill for most, but for me it works wonders, because I have multiple devices working as servers, and if one goes down I can still access the services hosted on the others.

    II. I'm not going to go super in-depth here, because there is plenty of documentation elsewhere, but you install it the same way you would install any Docker container and follow the first-time setup.
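
A minimal sketch of that installation (image name and ports are from the NGINX Proxy Manager docs; the volume paths are illustrative):

docker run -d --name npm --restart unless-stopped -p 80:80 -p 443:443 -p 81:81 -v "$(pwd)/data:/data" -v "$(pwd)/letsencrypt:/etc/letsencrypt" jc21/nginx-proxy-manager:latest

The admin UI for the first-time setup then lives on port 81.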

  2. Log into your Tailscale account and get the Tailscale IP for the entry device (ex. 100.113.123.123)

  3. Get the SSL information from NGINX Proxy Manager for your domain

    I. Navigate to "SSL Certificates" and then "Add SSL Certificate"

    II. Select "Let's Encrypt"

    III. Type in your domain/subdomain name in the first box

    IV. Enter your email address for Let's Encrypt

    V. Select "Use a DNS Challenge"

    VI. Select your DNS provider in the dropdown

    VII. From here, you're all set for now. We will continue with this later

  4. In your domain's DNS dashboard, you will need to do a few things (I use Cloudflare, but the process should be more or less the same with whatever provider you use; example records follow after these steps):

    I. Set up an A record that redirects the root of your domain (or a subdomain, depending on your configuration) to your Tailscale IP from step 2

    II. Set up a wildcard record that points back to your domain root. This is important because it redirects subdomain requests (i.e. service.example.org) to your root example.org, which in turn points to the Tailscale IP

    III. (This is going to be dependent on your provider) Generate an API key for NGINX to use for domain verification, this can easily be achieved in the Cloudflare dashboard in the API key section. The key needs to have permissions for Zone.DNS

  5. Back in NGINX Proxy Manager, drop in your API key in the text box where it asks for it (you need to replace the sample key).

  6. The hard part is done, now it's just time to add in your services!
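
For reference, the records from step 4 might look like this (illustrative values, zone-file syntax):

example.org.    300  IN  A      100.113.123.123   ; Tailscale IP of the entry device (step 2)
*.example.org.  300  IN  CNAME  example.org.      ; wildcard pointing subdomains back at the root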

Here's an example of proxying Portainer through NGINX Proxy Manager:

  1. Might be obvious, but open up NGINX Proxy Manager

  2. Navigate to Hosts -> Proxy Hosts

  3. Click "Add Proxy Host"

  4. Type in the URL that you want to use for navigating to the host; I prefer subdomains (i.e. portainer.example.org)

  5. Type in the IP address and port for the service

    I. Here's the neat part: because NGINX is running in Tailscale, you can connect both to other services in your tailnet and to other devices in your network that don't necessarily have Tailscale running on them.

    II. An example of this would be if you have two houses (yours and your friend's), with services deployed at both locations. You can have NGINX reach out through Tailscale to the other device and proxy the service through your main network without needing to set it up twice. Neat, right?

    III. Conversely, if you have a server running in your network that you cannot install Tailscale onto (for support reasons, security reasons, whatever), you can just use the internal IP for that device, as long as the device NGINX Proxy Manager is running on can access it.

  6. Navigate to the SSL tab of the window, and select your recently generated Let's Encrypt certificate

  7. And you're done

Now you can connect your phone or laptop to Tailscale and navigate to the URL you configured. You should see your service load up, with SSL, and you can access it normally. No more remembering IP addresses and port numbers! I don't personally have this use case, but this solution could also be useful for people running their homelab behind CGNAT who can't open ports easily; it would let them access any service remotely via Tailscale.

EDIT: The picture formatting is weird and I'm not really sure how else to do it. Let me know if there's a better way :)
