26
26

Well, I set up my email server through Cloudflare and managed to receive emails directly to my basement server. I could live with this and the various security threats coming in through my UniFi. But one thing is for sure: my wife won't have any of it. She's a total backwards-thinking, give-me-Windows-or-I'll-jump kind of gal.

So I found that I could run a dockerized Thunderbird instance and I thought... wow! I can just log in to it from my computer or my phone. Surely this is it! I can have emails backed up from Gmail to my server and just access my server! And you know what? It works! I can access my Gmail in my browser! It's beautiful!

But then I log in through my phone and wow, I can access my Gmail! Through my phone! Except the interface is the same as my desktop. It's literally a VNC session to the server. I can log in to it on my desktop and watch the mouse move as I move my finger on my phone! Great party trick, but the text is microscopic.

So is there another way to get an IMAP and SMTP interface to Gmail, archiving all emails on my own server? I literally don't want any of my emails to live on a Gmail server, but I want to be able to send, receive, and search emails that previously passed through Gmail but now live on my server.
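One common way to do this (a sketch only; the account names and paths below are made up, and it needs a Gmail app password): use isync/mbsync to pull everything from Gmail into a local Maildir on the server, then point an IMAP server such as Dovecot at that Maildir so any normal mail client, phone included, gets a properly sized native interface instead of a VNC view.

```
# ~/.mbsyncrc -- account name, user, and paths are hypothetical
IMAPAccount gmail
Host imap.gmail.com
User you@gmail.com
PassCmd "cat ~/.config/gmail-app-password"
SSLType IMAPS

IMAPStore gmail-remote
Account gmail

MaildirStore gmail-local
Path ~/Mail/gmail/
Inbox ~/Mail/gmail/Inbox
SubFolders Verbatim

Channel gmail
Far :gmail-remote:
Near :gmail-local:
Patterns *
Create Near
SyncState *
```

Run `mbsync gmail` from cron or a systemd timer to keep the archive current. Sending can stay on Gmail's SMTP (smtp.gmail.com) or go through your own submission server; searching works in whatever IMAP client you point at Dovecot.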

27
11

cross-posted from: https://discuss.tchncs.de/post/21001865

I just installed Piped using podman-compose, but when I open up the frontend in my browser, the trending page just shows the loading icon. The logs aren't really helping; the only error is in piped-backend:

java.net.SocketTimeoutException: timeout
	at okhttp3.internal.http2.Http2Stream$StreamTimeout.newTimeoutException(Http2Stream.kt:675)
	at okhttp3.internal.http2.Http2Stream$StreamTimeout.exitAndThrowIfTimedOut(Http2Stream.kt:684)
	at okhttp3.internal.http2.Http2Stream.takeHeaders(Http2Stream.kt:143)
	at okhttp3.internal.http2.Http2ExchangeCodec.readResponseHeaders(Http2ExchangeCodec.kt:97)
	at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:110)
	at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
	at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
	at me.kavin.piped.utils.RequestUtils.getJsonNode(RequestUtils.java:34)
	at me.kavin.piped.utils.matrix.SyncRunner.run(SyncRunner.java:97)
	at java.base/java.lang.VirtualThread.run(VirtualThread.java:329)

Would appreciate it if anyone could help me. I also wasn't sure what info to include, so please ask if there's any more info you need.

28
26
Proxmox rebuild (programming.dev)

Greetings fellow enthusiasts.

I'm going to rebuild my proxmox server and would like to have a few opinions.

First thing is I use my server as a NAS and then run VMs off that.

I have 2 x 20TB in a ZFS mirror, but I'm planning on changing that to 3 x 24TB in RAIDZ1.

I currently have a ZFS pool in Proxmox and then add that pool to OpenMediaVault.

The issue is, if my OMV install breaks and I have to create a new VM, I'm pretty sure all that data would become inaccessible to the new OMV.

I've heard of people creating an NFS share in Proxmox and then passing it through to OMV?
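That approach can be sketched like this (dataset name, subnet, and paths are made up; the Proxmox host needs nfs-kernel-server installed):

```
# On the Proxmox host: export the ZFS dataset over NFS
zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" tank/media

# Inside the OMV VM: mount the export. If the OMV VM ever breaks,
# a fresh VM can mount the same export and the data is untouched,
# since the pool never leaves the Proxmox host.
mount -t nfs 192.168.1.10:/tank/media /srv/media
```

The upside is that the pool stays managed by Proxmox, so an OMV reinstall is a non-event; the downside is that everything OMV serves goes over NFS.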

Or should I get an HBA card, pass it through to the VM, and run ZFS natively within OMV? I'd need to install the ZFS kernel module into OMV as well.

Would like to hear some opinions and tips.

29
13
submitted 6 days ago* (last edited 5 days ago) by Sandbag@lemm.ee to c/selfhosted@lemmy.world

I have a spare 3070 GPU as well as 16GB of memory, and my friend has a spare PSU; this part list has everything else I would need, plus everything I already have. Is there anything I should tweak or modify, or will this build work? I plan to use it as a headless server.

Thanks for the feedback!

https://pcpartpicker.com/list/2fJJYN

Update:

Use case: I currently run a Docker Swarm cluster with two older OptiPlexes and a Raspberry Pi. Like I said before, I have a spare PSU, GPU and memory, and would rather put it to work than sell it. I would like to add this new PC to my cluster and utilize it for my home services and also for learning. The only items I would really be buying are the case, CPU and board. I would like to run some local AI models on this PC as well.

30
5

I have my own Invidious instance, and I want all the new videos from my subscriptions to automatically get added to a playlist. Anyone know how to do this?

31
171

I assume most users here have some sort of tech/IT/software background. However, I've seen some comments from people who might not have that background (no problem with that), and I wonder: if you are self-hosting anything, how did you decide that you would like to self-host?

32
69

In a few months, I will have the space and infrastructure to join the selfhost community. I'm trying to prepare, as I know it can be challenging, but I somehow ended up with more questions than answers.

For context, I want to run a server with torrents, media (Plex, Jellyfin, or something else entirely; I didn't make a decision yet), photos (Immich, if it's stable, or something else), Rook, Paperless, Home Assistant, Frigate, AdGuard Home... Possibly lots more. Also, I will need storage: I'm planning for 3x18TB drives to begin with, but will certainly be adding more later.

My initial intention was to set up a NAS in a Silverstone CS382 (or a Jonsbo N3/N5, if they're at a reasonable price). I heard good things about Unraid and its capabilities for running Docker. On the other hand, I'm hearing good things about Proxmox or NixOS with NAS software running in a VM too, though for Unraid that seems hacky. Maybe I should run a NAS and a separate server? That'd be more costly and seems like more maintenance work with no real benefit. Maybe I should go with TrueNAS in a VM? If I don't do anything other than NAS, TrueNAS shouldn't be that hard to set up, right?

I'm also wondering whether I should go with Intel for Quick Sync, AMD, Arc graphics, or something else entirely. I've read that AV1 is getting popular; is AMD getting more support there? I will buy Intel if it's clearly the better option, but I'm team Red and would prefer AMD.

Also, could anyone with a non-technical SO tell me how they find your selfhosted things? I've read about Cloudflare Tunnels and Tailscale, which will be a breeze for me, but I've got to think about other users as well.

That's another concern for me: am I correct in thinking Tailscale and Cloudflare Tunnels are all I need to access the server remotely? I will probably set up a PiKVM or the RISC one as well; can it be exposed too? I will have a Dream Machine from Ubiquiti; anything that needs to run to access the server, I can run there. I'm not looking to set up anything more complicated like WireGuard; it's too much.

For additional context, I'm a software developer. I know my way around Docker and the command line, and I consider myself tech savvy, but I'm not looking to spend every weekend reading changelogs and doing manual updates. I want to have an upgrade path (that's why I'm not going with Synology, for example), but I also don't want to obsess over it. Money isn't much of an issue; I can spare $1-2k on the build, not including the drives.

Any feedback and suggestions appreciated :)

33
61
34
102
35
19

inspired by this post

I have a Mac mini with an infrared receiver on it. I'd love to use it as a TV PC, and ideally with an infrared remote too.

I am looking for software recommendations for this, as I've done basically no research.

What's my best option? Linux with Kodi? How would a remote connect, and which software is required for the remote to work?

Thanks!

36
24

Does anyone know of a hosting service that offers Silverblue as a possible choice for OS?

It seems to me that, for a server running only Docker services, the greatly reduced attack surface of an immutable distro presents a definite advantage.

37
64

Hi all,

I found a hobby in trying to secure my Linux server, maybe even beyond reasonable means.

Currently, my system is heavily locked down with user permissions. Every file has a group owner, and every server application has its own user. Each user will only have access to files it is explicitly added to.

My server is only accessible from LAN or VPN (though I've been interested in hosting publicly accessible stuff). I have TLS certs for most everything that can use them (albeit self-signed certs, which some people don't like), and SSH is only via SSH keys that are passphrase-protected.

What are some suggestions for things I can do to further improve my security? It doesn't have to be super useful, as this is also fun for me.

Some things in mind:

  • 2-factor auth for SSH (and maybe all shell sessions, if I can)
  • look into firejail, nsjail, etc.
  • look into access control lists
  • network namespace and vlan to prevent server applications from accessing the internal network when they don't need to
  • considering containerization, but so far I find it not worth forgoing the benefits I get from a single package manager for the entire server

Other questions:

  • Is there a way for me to be "notified" if shell access of any form is gained by someone? Or somehow block all shell access that isn't 2FA'd?
  • my system currently secures files on the device, but all applications can see all process PIDs. Do I need to protect against this?
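For the PID-visibility question, one standard Linux knob is mounting /proc with hidepid so processes can only see their own entries. A minimal /etc/fstab sketch (the "proc" group for exempt users is an assumption; create it, or drop the gid= option):

```
# Remount /proc so non-root users only see their own processes.
# gid=proc exempts members of a "proc" group (useful for monitoring
# agents) -- that group name is hypothetical.
proc  /proc  proc  defaults,hidepid=2,gid=proc  0  0
```

Worth testing carefully: on some distros, hidepid=2 confuses systemd-logind and polkit, which is exactly why the exempt group exists.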

threat model

  • attacker gains shell access
  • attacker influences server application to perform unauthorized actions
  • not in my threat model: physical access

38
95

With Chromecasts being discontinued, and the increase in ads, telemetry, etc., I'm wondering if anyone else is going back to old-school HTPCs, or if they have some other solution to do this in-house.

I think the options here are likely:

  1. Rooted streamer (e.g. Chromecast, Fire Stick)
  2. Android Box
  3. Mini PC

I'm actually most interested in experimenting with #3, a mini PC running KDE Plasma Bigscreen. Most of my self hosted apps can be run in browser windows, and a full desktop (while harder to navigate) is better than the browsers you can get on Android.

What is everyone else, especially the privacy-minded / de-Googled self-hosters, doing for their media front end?

39
29

I'm finally taking the leap from upgrading from a media drive sitting in my desktop PC to a self-build NAS. The parts are on their way and I have to figure out what to do when they actually arrive.

Current setup: Desktop PC with a single 20TB media drive (zfs, 15TB in use)

My knowledge: I use Linux as my daily driver, but I'm far from a power user. I can figure out and fix problems with online resources or the kind help of others like you.

The goal: I want to move to a small NAS (2 additional 20TB drives are on their way). The system will have 32GB of DDR5 RAM, with 1-disk parity for 40TB of usable storage.

What will I use it for:

  • Backup for Desktop PC
  • Media server (Jellyfin)
  • Arr stack
  • (other small services in the future?)

My questions:

  1. What OS should I use? The obvious answers are Unraid or TrueNAS. The 40TB of storage (1-disk parity) will likely be enough for a couple of years, so adding additional drives is not planned for some time.

  2. How can I import the data from my current drive to the NAS? I am very new to the topic and my initial searches were not that helpful. With Unraid I should just be able to set up the first two disks and import the data from the other. I am unsure how to accomplish that with TrueNAS.
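Whichever OS you pick, the import itself usually comes down to mounting the old drive (read-only, to be safe) and copying with rsync. A sketch with made-up mount points:

```
# Old disk mounted read-only at /mnt/old, new pool at /mnt/tank
# (both paths are hypothetical -- adjust to your layout).
rsync -avh --progress /mnt/old/ /mnt/tank/media/

# Optional second pass with checksums as verification: --dry-run
# prints anything that differs instead of copying it again.
rsync -avhc --dry-run /mnt/old/ /mnt/tank/media/
```

Since the old drive is already ZFS, `zfs send | zfs recv` into the new pool is another option if the target is also ZFS (e.g. TrueNAS), and it preserves snapshots.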

Some advice and tips would be great. Feel free to ask for more details if I forgot some crucial info.

Thanks for reading!

40
104
submitted 1 week ago by otter@lemmy.ca to c/selfhosted@lemmy.world

About the project

Plant-it is a self-hosted gardening companion app. Useful for keeping track of plant care, receiving notifications about when to water plants, uploading plant images, and more.

About this release:

Highlights

In this release, we've made significant improvements to both the app and server, focusing on performance, notifications, and overall user experience. One of the most notable changes is the switch from Ubuntu to Alpine as the base Docker image for the server, resulting in a much smaller image size, which should lead to faster deployments and lower resource usage. We've also introduced Gotify notifications across both the app and server, providing alerts to keep you informed. Additionally, we've addressed various small fixes and enhancements to improve stability and usability.

41
27
submitted 1 week ago* (last edited 1 week ago) by skybox@lemm.ee to c/selfhosted@lemmy.world

I'm working on starting up my first home server which I'm trying to make relatively foolproof and easily recoverable. What is some common maintenance people do to avoid dire problems, including those that accumulate over time, and what are ways to recover a server when issues pop up?

At first, I figured I'd just use Debian with some kind of snapshot system and monitor changelogs to update manually when needed. But then I started hearing that immutable distros like MicroOS and CoreOS have some benefits in terms of long-term "OS drift", security, and recovering from botched updates or conflicts? I don't even know if I'm going to install any native packages; I'm pretty certain every service I want to run has a Docker image already, so does it matter? I should also mention that I'm going to use this as a file server with SnapRAID, so I'm trying to figure out if there will be conflicts to look out for there, or with hardware acceleration for video transcoding.

42
14
submitted 1 week ago by ramenu@lemmy.ml to c/selfhosted@lemmy.world

I've heard people having problems with them for web hosting, but I'm not sure if this applies to their VPS as well.

43
22

I am setting up a Linux server (probably NixOS) where my VM disk files will be stored on top of an NTFS partition. (Yes, I know NTFS sucks, but it has to be this way.)

I am asking which guest filesystem will have the best performance for a very mixed workload. If I had access to the extra features of Btrfs or ZFS I would use them, but I have no idea how CoW interacts with NTFS; that is why I am asking here.

Also, I would like some NTFS performance-tuning pointers.
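One tuning knob worth knowing (path and size below are made up): preallocating the VM image avoids the worst NTFS fragmentation, since the file doesn't grow piecemeal as the guest writes.

```
# A fully preallocated raw image keeps the NTFS side from
# fragmenting as a qcow2 would while growing on demand.
qemu-img create -f raw -o preallocation=full /mnt/ntfs/vm/disk.img 100G
```

Inside the guest, a plain ext4 or XFS is the conservative default; running a CoW filesystem inside an image that itself sits on NTFS means two layers of allocation bookkeeping, which is part of what you're asking about.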

44
16

I am using hd-idle (see link) to spin down the one external hard drive on my RPi server. It is not used for large parts of the day and night, so it has been quite useful to set up hd-idle, which spins down the drive after an hour or so of no activity.

Now, hd-idle can generate a log file where it notes down some data, e.g. when the drive was spun down and how long it was running beforehand.

You can read the file to get an impression of how well it works, but I'd like to see the data visualised or analysed in some way: how often per day the drive was spun down over the past month, the average length of time it was running, and so on.

Searching online, I couldn't really find anything. Maybe somebody here knows more? Or what ways of recording and looking at this type of data are you using?
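Since the log is just text, a tiny script gets you most of the way. This sketch assumes the log lines look like the format hd-idle's README shows (`date: ..., time: ..., disk: ..., running: <seconds>, stopped: ...`); check it against your own file and adjust the regex if needed.

```python
import re
from collections import defaultdict

# Assumed hd-idle log line shape (verify against your log file):
# date: 2024-05-01, time: 13:37:00, disk: sda, running: 5400, stopped: 0
LINE = re.compile(
    r"date:\s*(?P<date>[\d-]+),\s*time:\s*\S+,\s*disk:\s*(?P<disk>\w+),"
    r"\s*running:\s*(?P<running>\d+)"
)

def summarize(lines):
    """Per-day spin-down count and average running time in seconds."""
    per_day = defaultdict(list)
    for line in lines:
        m = LINE.search(line)
        if m:
            per_day[m.group("date")].append(int(m.group("running")))
    return {
        day: {"spindowns": len(runs), "avg_running_s": sum(runs) / len(runs)}
        for day, runs in per_day.items()
    }
```

Feed it the open log file (`summarize(open("/var/log/hd-idle.log"))`) and print or plot the dict; from there, pulling it into a spreadsheet or matplotlib for the monthly view is trivial.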

45
26

I've been using paperless-ngx to consume mail from Outlook/Hotmail for a while now, but recently the mail server refused connections while mail was being processed. (Not sure why; consuming is working again now with no changes, and there were no errors besides 'connection refused' while retrieving that mail. Temporary outage, I guess?)

This left me with a couple of pieces of mail not imported. However, now every time the mail consume task runs, it recognizes that those pieces of mail are there but refuses to process them, with the message:

Skipping mail '421' '<email subject>' from '<sender email>', already processed.

How can I get it to recognize that those mails have NOT been processed?

46
18
submitted 1 week ago* (last edited 6 days ago) by lal309@lemmy.world to c/selfhosted@lemmy.world

Does anyone have a working Vikunja instance sending emails through Gmail? I’ve enabled the mailer options and entered the info but the test_email function times out. I’ve checked all the information and even tried different ports.

Honestly at this point it doesn’t have to be Gmail (I’m just most familiar with this workflow). I just need my Vikunja instance to send emails.

Edit: I was able to solve my issue. You can only create Gmail app passwords if you have 2FA enabled. I also had the wrong address (it's smtp.gmail.com, not smtp.google.com).
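For anyone landing here later, the working setup looks roughly like this docker-compose `environment:` fragment. The variable names are from memory of Vikunja's mailer docs, so verify them there; the password must be a Gmail app password (which requires 2FA on the account, per the edit above).

```
VIKUNJA_MAILER_ENABLED: "true"
VIKUNJA_MAILER_HOST: smtp.gmail.com
VIKUNJA_MAILER_PORT: "587"
VIKUNJA_MAILER_USERNAME: you@gmail.com
VIKUNJA_MAILER_PASSWORD: your-app-password
VIKUNJA_MAILER_FROMEMAIL: you@gmail.com
```

A timeout on test_email usually means the wrong host/port (or a blocked outbound port 25), whereas an auth error means the credentials are the problem.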

47
29
submitted 1 week ago* (last edited 1 week ago) by rambos@lemm.ee to c/selfhosted@lemmy.world

After upgrading my internet connection, I immediately noticed that my HDD tops out at 40 MB/s, bottlenecking download speed in qBittorrent. Is it possible to use an SSD as a cache drive for the 12TB HDD, so downloads get SSD speeds and files are moved to the HDD later on? If yes, does it make sense? Anyone using anything similar? Would 512GB be enough, or could I benefit from a 2TB SSD?

The HDD is just for Jellyfin (movies/shows), not in RAID; I don't need backup for that drive and can afford to risk the data, if that matters at all.

All suggestions are welcome. Thanks in advance!

EDIT: I obviously have upset some of you; that wasn't my intention, and I'm sorry about it. I love to tinker and learn new things, though I could live with much lower speeds. Please don't hate me if I didn't understand your comment or wasn't clear with my question.

The HDD being the bottleneck at 40 MB/s was a wrong assumption (found that out in the meantime). I'm still trying to figure out why the download was that slow, but I'm interested in learning about the main question anyway. I just thought I was experiencing the same issue as many people today: having faster internet than storage. Some of you provided solutions I will look into, but I need time for that, and I also have to fix whatever else I'm having an issue with.

Keep this community awesome because it is <3

48
38

I have a decent 2-bay Synology, but want to have all my Docker images/VMs running on a more powerful machine connected to the same LAN. Does it ever make sense to do this for media serving, or will involving an extra device add too much complexity vs. just serving from the NAS itself? I was hoping to have Calibre, Home Assistant, tube-type services, etc. all running off a mini PC with a Ryzen 7 and 64GB RAM instead of the NAS.

My Linux knowledge is intermediate, my networking knowledge is begintermediate, and I can generally follow documentation okay even if it's a bit above my skill level.

49
20

I've been in the process of migrating a lot of things back to Kubernetes, and I'm debating whether I should have separate private and public clusters.

Some stuff I'll keep out of kubernetes and leave in separate vms, like nextcloud/immich/etc. Basically anything I think would be more likely to have sensitive data in it.

I also have a few public-facing things like public websites, a matrix server, etc.

Right now I'm solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.

The main concern I'd have is reducing the blast radius if something gets compromised. But I also don't know if I really want to maintain multiple personal clusters. I am using Omni + Talos for Kubernetes, so it's not too difficult to maintain two clusters, but it would be less efficient as far as resources go, since some of the nodes are bare-metal servers and others are only VMs. I wouldn't be able to share a large bare-metal server anymore, unless I split it into VMs.

What are y'all's opinions on whether to keep everything in one cluster or not?
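If you stay with one cluster, NetworkPolicy can shrink the blast radius without a second control plane, assuming your CNI enforces it (Cilium and Calico do; plain Flannel does not). A minimal sketch with a hypothetical namespace name:

```yaml
# Applied to the private namespace, this blocks all ingress except
# from pods in the same namespace, so a compromised public-facing
# pod can't reach the private workloads directly.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: private-apps
spec:
  podSelector: {}      # every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # no namespaceSelector = same namespace only
```

Combined with the two ingress controllers you already run, this gets you much of the isolation of two clusters while keeping the shared bare-metal node.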

50
73
submitted 2 weeks ago* (last edited 2 weeks ago) by Fisch@discuss.tchncs.de to c/selfhosted@lemmy.world

All the public Piped instances are getting blocked by YouTube, but do small selfhosted instances, used only by a handful of users or just yourself, still work? Thinking of just selfhosting it.

On a side note, if I do it, I'd also like to install the new EFY redesign. Or is that branch too far behind?

Edit: As you can see in the replies, private instances still work. I also found the instructions for running the new EFY redesign here.


Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


founded 1 year ago