1
1

I recently joined Pixelfed and, considering there's no algorithm, hashtags are the only way to be discovered.

I hate hashtag optimization, but I also don't want to upload my image to someone else's random server before posting it to Pixelfed just to generate hashtags. Where should I look for something I can host myself, or even something that runs natively on Android/Linux, that will generate hashtags/keywords for an image?

2
1

I have a Pixel 8 and a PC with Linux Mint. How do I learn to "self-host"? Mainly for photo storage and backup. Where do I start? I know nothing, absolutely nothing.

3
1

So I’ve tried the Jellyfin Book plug-in, but since it doesn’t allow pinch-to-zoom (at least in my phone’s app), that won't work for me.

I’ve already tinkered with Kavita, so I’ll probably be OK for the first install.

My problem comes when I want to run it in the background (as a client?). At this point I’m completely lost.

4
1

So, a buddy of mine dropped off a box of 18 Wyse 3040 & 5010 thin clients. I believe they all run Windows 10 Embedded, but from some research I think I can also run a lightweight Linux OS, maybe Tiny Core. The 5010 can run SUSE Linux Desktop 11, ThinOS, or ThinOS PCoIP according to Dell.

So, the burning question I have today is: if you were gifted a box of 18 Wyse 3040 and 5010 thin clients, what would you do with them? I want something I can incorporate into my already established homelab.

Inundate me with ideas!

5
1
submitted 1 day ago* (last edited 23 hours ago) by HumanPerson@sh.itjust.works to c/selfhosted@lemmy.world

Basically the title. I want an app that can integrate with a server running Whisper and small local LLMs to place phone calls and such. I already have Ollama and the rest, so that side is done. I just need the integration.

Edit for clarity: I mean a local Siri-type thing.

6
1

Hey everyone,

We're excited to finally share the results summary of the survey we posted in this community a few months ago! A massive thank you to the n=2158 active self-hosters from communities like r/selfhosted on Reddit and c/selfhosted on Lemmy.World who participated. Your input has led to a comprehensive academic paper that investigates the core reasons why we stick with self-hosting over the long haul.

Our study examined which factors most influence the Continuance Intention (the desire to keep using) and Actual Usage of self-hosted solutions. We confirmed that self-hosting is a principle-driven and hobby-driven practice, challenging traditional models of technology adoption.

The Top 3 Positive Drivers of Continued Self-Hosting

The most significant positive predictors of your intention to continue self-hosting were all rooted in intrinsic satisfaction and personal gain, rather than just basic utility:

  1. Perceived Enjoyment (The 'Fun Factor'): The sheer joy, pleasure, and personal satisfaction of configuring, maintaining, and experimenting with your own systems is a powerful, primary motivator for long-term engagement.
  2. Perceived Autonomy (Control/Digital Sovereignty): The desire for explicit control over your data and services, and the rejection of vendor lock-in inherent in third-party cloud services, is a fundamental driver.
  3. Perceived Usefulness: The belief that your self-hosted solution efficiently delivers specific personal outcomes (e.g., operational efficiency, powerful features, and privacy) is important, but its influence was less pronounced than Enjoyment or Autonomy.

The Critical Role of Technical Skill

We found that your self-assessed technical ability, or Perceived Competence, acts as a crucial link between wanting to self-host and actually doing it. Having a high intention to keep self-hosting is only half the battle. Your confidence in your technical skill is what gives you the self-assurance to handle the necessary, demanding tasks like maintenance, security, and updates. Importantly, a certain critical threshold of knowledge is required before competence starts driving that actual, continuous usage.

Other Key Insights

  • Privacy Matters: Concerns about privacy in cloud services positively influence the decision to stick with self-hosting.
  • The 'Push' Factor: If a user reports high Trust or high Autonomy when using commercial cloud services, they are significantly less motivated to continue self-hosting. This confirms that dissatisfaction with the commercial cloud effectively "pushes" people toward decentralized alternatives.
  • Maintenance Isn't a Dealbreaker: The high effort and time required for upkeep, or Perceived Maintenance Cost, was not a statistically significant factor for giving up on self-hosting. Our intrinsic motivation is powerful enough to absorb the necessary effort.

Implications for the Self-Hosting Ecosystem

For developers and the community, these findings suggest that sustained usage depends not only on functionality but also on fostering empowerment and a great user experience. By making self-hosting more enjoyable and reinforcing the user's sense of digital sovereignty, we strengthen the intrinsic motivation that fuels this movement.

Thank you again for helping us publish this research on the future of decentralized digital solutions! This work would not have been possible without your participation.

The full open-access article "A Model of Factors Influencing Continuance Intention and Actual Usage of Self-Hosted Software Solutions": https://www.mdpi.com/2071-1050/17/22/10009

7
1

Hi folks,

My small homelab (if it even qualifies as such) currently has a separate NAS host running TrueNAS CE and an additional Proxmox VE host. I want to set up Proxmox Backup Server and ultimately feed the backups to my TrueNAS, but I'm trying to figure out the best way to do so. I know the official guide suggests a whole separate machine (so a third host), but I'd prefer not to buy more hardware and keep it running 24/7 if I can avoid it (though if it's really critical, I could probably pick up a little N150 box, but that feels like a bit much).

I am also considering virtualizing PBS on Proxmox itself, but neither option seems ideal. For LXCs, it seems that getting a stable NFS share out to the TrueNAS system means going with a privileged LXC rather than an unprivileged one (though I'd be happy to be wrong on that if folks have other experience), which of course gives the container root access to the host. Alternatively, if I go with a VM, I've heard there are sometimes recursion issues where PBS ends up attempting to back up all VMs, including the VM that contains it, which leads to instability and overall not a great time.
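
One middle-ground I've seen mentioned (an assumption on my part, not from the official PBS guide): mount the NFS export on the Proxmox host itself and bind-mount it into an unprivileged PBS container, so the LXC never needs NFS privileges of its own. Roughly (storage paths and VMID are made up):

```
# On the Proxmox host, in /etc/fstab: mount the TrueNAS export
192.168.1.10:/mnt/tank/pbs  /mnt/truenas-pbs  nfs  defaults,_netdev  0  0

# In the container config, e.g. /etc/pve/lxc/105.conf:
# bind-mount the host path into the unprivileged LXC
mp0: /mnt/truenas-pbs,mp=/mnt/datastore
```

The usual caveat is UID mapping: the unprivileged container's root writes to the export as a high-mapped UID, so the TrueNAS dataset permissions have to allow that.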

As another alternative, I suppose I could pick up an NVMe and try to run PBS as an app on TrueNAS itself (but my understanding is that PBS is snappier when backups are on the same host and then pushed out via NFS afterward)?

Before I rip too much of my hair out, I figured I'd try to crowdsource and see how more experienced folks are approaching this. Thanks very much!

8
1
Backups of Backups (lemmy.today)

Hi all, I'm just getting my feet wet in self hosting and have a plan to start with Nextcloud on a Pi 4 for photo backups, and then try other things for calendar, phone backups, media hosting, etc.

One thing I worry about is losing my data. I have heard "if it's not backed up in two locations, it's not backed up." I'm curious what all of you do for backing up the setup. Remote backup to hard drives in the garage? Pay for cloud backup and encrypt it? Just another backup site over wifi in the house?
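
For what it's worth, the "encrypt it and push it offsite" route can be as small as one cron entry; an illustrative sketch with restic (repository location, paths, and schedule are all made up):

```
# /etc/cron.d/nextcloud-backup -- illustrative only
# Nightly encrypted push of the Nextcloud data dir to a second box
# (restic encrypts client-side, so it also works for untrusted cloud targets)
30 2 * * * root restic -r sftp:backup@garage-box:/srv/restic backup /var/nextcloud/data
```

restic reads the repository password from an environment variable such as RESTIC_PASSWORD_FILE, and the same command shape works against S3-compatible cloud buckets if a garage box ever feels too close to the house.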

I'd be most afraid of losing photos if there were a house fire or something. So my initial thought was backing up to a server in my detached garage in a weather-resistant container, but I want to know what you all think. Thanks for any insight.

9
1
submitted 2 days ago* (last edited 2 days ago) by utopiah@lemmy.world to c/selfhosted@lemmy.world

I don't actually understand it, and I've listed quite a few possibilities, so at this point any idea is welcome.

10
1

Technitium DNS Server (TDNS) has gotten a new release with many awesome features: TOTP authentication, an upgraded .NET library, and many security and performance fixes.

But most important of all, it now supports clustering. A long-awaited feature, this allows Technitium to sync DNS zones and configuration across multiple nodes without needing an external orchestrator like Kubernetes or an out-of-band method to replicate the underlying data. For selfhosters, this enables resilience for many use cases, such as internal homelab ad-blocking or even self-hosting your public domains.

From a discussion with the developer and his sneak peek on Reddit, it is now known that the cluster uses a single-primary/multiple-secondary topology. Nodes communicate via good old REST API calls, transported over HTTPS for on-the-wire encryption.

To sync DNS zones (i.e. domains), the primary server provisions a "catalog" of domains, from which secondaries dynamically pick up records via a mechanism known as Zone Transfers. This feature, standardized as Catalog Zones (RFC 9432), was actually supported since the previous v13 release as groundwork for the current implementation.
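
For those unfamiliar: a catalog zone is itself just an ordinary DNS zone whose records enumerate the member zones, so it can be replicated with a plain zone transfer. A minimal sketch per RFC 9432 (names are illustrative, not Technitium's actual layout):

```
; catalog.example. -- a minimal catalog zone
catalog.example.                    0  IN  SOA  invalid. invalid. 1 3600 600 2147483646 0
catalog.example.                    0  IN  NS   invalid.
version.catalog.example.            0  IN  TXT  "2"
; each member zone is listed as a PTR record under zones.<catalog>
unique-id-1.zones.catalog.example.  0  IN  PTR  home.arpa.
unique-id-2.zones.catalog.example.  0  IN  PTR  example.net.
```

Secondaries transfer this zone like any other, then add or remove the listed member zones to match.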

As an interesting result, nodes can sync to a cluster's catalog zone, define their own zones, and even employ other catalog zones from outside the cluster. This allows setups where, for example, some domains are shared between all nodes and others only between a subset of servers.

To sync the rest of the data, such as blocklists, allowlists, and installed apps, the software simply sends incremental backups to the secondaries. The admin UI has also been revamped to improve multi-node management: it now allows logging in to other cluster nodes and collates aggregated statistics for the central Dashboard. Lastly, a secondary node can be promoted to primary in case of failure, with DNSSEC signing keys also managed internally for a seamless transition of signed zones.

More details about configuring clusters are to be provided in a blog post in the coming days. Note that this feature only covers DNS, not DHCP just yet (Technitium is also a DHCP server); DHCP clustering, along with DHCPv6 and auto-promotion rules for secondaries, is planned for upcoming major releases.

As a single-person copyleft project, the growth of this absolute gem of a piece of software has been tremendous, and it can only get better from here. I personally can't wait to try it out soon.

Disclaimer: I'm just a user, not the maintainer of the project. Information here may be updated for correctness, and you're welcome to repost this elsewhere.

11
1
submitted 2 days ago by mudkip@lemdro.id to c/selfhosted@lemmy.world

TL;DR: Stop running a Jellyfin server. MPV can directly play anything from your NAS, stream YouTube ad-free, handle literally every codec, and is infinitely customizable. It's like vim for video.

Why I ditched my Jellyfin setup

I used to run Jellyfin on my NAS. Transcoding, web interface, the works. Then I realized... why am I running a whole server stack when MPV can just directly play files from my NAS with zero setup?

What MPV Actually Is

MPV is a command-line video player that plays literally everything. But it's way more than that - it's a video engine you can build workflows around.

The Basics That Blow Minds

Direct NAS streaming (zero server needed):

mpv smb://192.168.1.100/media/movies/whatever.mkv
mpv nfs://nas.local/shows/season1/*

No transcoding. No server. No web interface overhead. Just direct file access with perfect quality and zero latency.

YouTube (and 1000+ sites) with ZERO ads:

brew install yt-dlp
mpv "https://youtube.com/watch?v..."

That's it. Ad-free YouTube in your video player with all your custom keybinds. Works with Twitch, Vimeo, Twitter, Reddit, literally hundreds of sites via yt-dlp.

Play entire directories:

mpv /Volumes/NAS/shows/BreakingBad/Season1/*

Boom. Instant binge session. Space bar skips to next episode. No library scanning, no metadata scraping, just files.

Workflows That Changed My Life

1. The "Watch Anywhere" Setup

Mount your NAS shares in Finder (or /etc/fstab for auto-mount). Now MPV treats your entire media library like local files. Add this to your shell config:

alias play="mpv"
alias tv="mpv /Volumes/NAS/shows/"
alias movies="mpv /Volumes/NAS/movies/"

2. YouTube as Your Streaming Service

alias yt="mpv"
alias ytm="mpv --no-video"  # audio only for music

Now:

  • yt "youtube-url" = instant ad-free playback
  • ytm "youtube-playlist" = whole playlists as audio
  • Keep your YouTube history/recommendations in browser, watch in MPV

3. Picture-in-Picture for Anything

Add ontop=yes to config, resize window small = instant PiP for any video source while you work. Works with live streams, security cameras, whatever.

4. The "No Plex Shares Needed" Share

Give someone SMB/NFS access to your media share. They install MPV. They can now browse and play your media library like it's local. No Plex accounts, no streaming limits, no transcoding quality loss.

5. Live Stream Monitoring

mpv http://192.168.1.50:8080/stream.m3u8

Home security cameras, baby monitors, anything streaming HLS/RTMP = instant monitoring with keybind controls.

Customization That Makes Jellyfin Look Basic

My Config (vim-style keybinds + YouTube controls)

Key bindings go in ~/.config/mpv/input.conf (options such as input-default-bindings=no belong in ~/.config/mpv/mpv.conf):

input-default-bindings=no

> add speed 0.1
< add speed -0.1
j seek -10
k cycle pause
l seek 10
LEFT seek -5
RIGHT seek 5
UP add volume 5
DOWN add volume -5
. frame-step
, frame-back-step

m cycle mute
f cycle fullscreen
s cycle sub
a cycle audio
0 seek 0 absolute-percent
1 seek 10 absolute-percent
2 seek 20 absolute-percent
3 seek 30 absolute-percent
4 seek 40 absolute-percent
5 seek 50 absolute-percent
6 seek 60 absolute-percent
7 seek 70 absolute-percent
8 seek 80 absolute-percent
9 seek 90 absolute-percent

[ add speed -0.25
] add speed 0.25
SPACE cycle pause
ESC set fullscreen no

i script-binding stats/display-stats
S screenshot video

Options, saved as ~/.config/mpv/mpv.conf:

profile=gpu-hq
scale=ewa_lanczossharp
cscale=ewa_lanczossharp
hwdec=auto-safe
vo=gpu

screenshot-format=png
screenshot-png-compression=9
screenshot-directory=~/Downloads

cache=yes
demuxer-max-bytes=150M

osd-level=1
osd-duration=2000
save-position-on-quit=yes
keep-open=yes
alang=jpn,jp,eng,en
slang=eng,en

ytdl-format=bestvideo[height<=1080]+bestaudio/best

Profiles for Different Content

[anime]
profile-desc="Anime settings"
deband=yes

[lowpower]
profile-desc="Laptop battery mode"
profile=fast
hwdec=yes

Use with: mpv --profile=anime episode.mkv

Scripts That Make It Insane

MPV supports Lua/JS scripts. Drop them in ~/.config/mpv/scripts/ and they just work.

Must-have scripts:

  1. sponsorblock - Auto-skips YouTube sponsors/intros/outros

    curl -o ~/.config/mpv/scripts/sponsorblock.lua \
      https://raw.githubusercontent.com/po5/mpv_sponsorblock/master/sponsorblock.lua
    
  2. quality-menu - Change YouTube quality on the fly

  3. autosubsync - Auto-fixes subtitle timing

  4. playlistmanager - Visual playlist editor

  5. mpv-discordRPC - Show what you're watching on Discord

Advanced Workflows

Watch Parties (Syncplay)

Install syncplay, point it at MPV, now you and friends watch your NAS content together in perfect sync. No Plex share limits, no quality loss.

Audio Streaming

ytm "youtube-playlist-url"
# or
mpv --no-video /Volumes/NAS/music/*

No GUI needed. Terminal command plays audio, you use keybinds (k=pause, j/l=skip, etc). Or just minimize and use as background music player.

For GUI: IINA (Mac) is literally just MPV with a pretty interface and uses your MPV config.

Frame-by-Frame Analysis

Built-in keybinds (. and , in my config) step forward/back frame-by-frame. Perfect for animation analysis, sports breakdown, debugging video issues.

Automated Workflows

# Watch anything in the clipboard (macOS)
mpv "$(pbpaste)"

# Random episode
mpv "$(find /Volumes/NAS/shows -name "*.mkv" | shuf -n1)"

# Continue last watched (auto position restore)
mpv /Volumes/NAS/shows/CurrentShow/*

Why This Beats Jellyfin For Me

Pros:

  • Zero server maintenance
  • No transcoding = perfect quality
  • Plays literally any codec without setup
  • Way faster (direct file access)
  • Keyboard-driven workflow
  • Works offline/online seamlessly
  • Infinitely scriptable
  • Cross-platform (Linux/Mac/Windows)

Cons:

  • No pretty web UI (I consider this a pro)
  • No user management (just use OS permissions)
  • No watch tracking (unless you script it)
  • No mobile app (VLC on phone + SMB works though)

Who This Is For

  • You're comfortable with terminal/config files
  • You want maximum quality (no transcoding ever)
  • You prefer keyboard controls
  • You value simplicity over features
  • You already have a NAS/file server
  • You want YouTube ad-free without browser extensions

Getting Started

# macOS
brew install mpv yt-dlp

# Linux
sudo apt install mpv yt-dlp

# Windows
scoop install mpv yt-dlp

Create config at:

  • Mac/Linux: ~/.config/mpv/mpv.conf
  • Windows: %APPDATA%/mpv/mpv.conf

Mount your NAS shares, point MPV at files. Done.


EDIT: Holy shit, didn't expect this response. Common questions:

Q: But I need to share with family who aren't technical.
A: IINA (Mac) or mpv.net (Windows) give them a normal GUI that uses MPV underneath. Or just... teach them? play movie.mkv isn't rocket science.

Q: What about mobile?
A: VLC on phone + SMB share to your NAS. Or just use MPV on desktop/laptop like a civilized person.

Q: No watch history tracking?
A: save-position-on-quit=yes remembers position per file. For tracking across devices, write a simple script or just... remember what you watched?

Q: This sounds like gatekeeping.
A: It's literally a config file. If you can set up Jellyfin, you can handle this.

12
1
podman quadlets on lxc (piefed.social)
submitted 3 days ago* (last edited 1 day ago) by immobile7801@piefed.social to c/selfhosted@lemmy.world

tldr: is this possible?

I'm trying to move from Docker Compose to Podman quadlets, and while I've got some of the basic differences down, I'm having an issue using quadlets in a Proxmox LXC. It works fine in a VM, so my question is: has anyone gotten quadlets to work in an LXC? And if so, how do I fix the below error?

When I try to take a working quadlet file from a VM to an LXC, I get the following error:

Failed to connect to user scope bus via local transport: No such file or directory  

I've tried researching the error and did all the troubleshooting in this url: https://linuxconfig.org/how-to-fix-failed-to-connect-to-system-scope-bus-error-in-linux

which suggests it's because systemd isn't running, but it is.

podman@podman-test:~$ ps aux | grep systemd~  
podman       349  0.0  0.2   3508  1480 pts/1    S+   19:55   0:00 grep systemd~ 

again, I'm very new to quadlets so it's very possible I'm missing something.
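
For anyone comparing notes, a rootless quadlet also needs a working user systemd session on the host it runs in; on an LXC that typically means the nesting feature enabled and loginctl enable-linger for the podman user (both assumptions about this particular setup, not a confirmed fix). The unit itself is just a file like:

```
# ~/.config/containers/systemd/hello.container -- minimal user quadlet
[Container]
Image=docker.io/library/alpine:latest
Exec=sleep infinity

[Install]
WantedBy=default.target
```

After systemctl --user daemon-reload it shows up as hello.service; the "user scope bus" error fires exactly when that --user session/bus doesn't exist.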

Thanks in advance!

ETA: I prefer lxc for the resource overhead savings.

Edit 2: running rootless Podman in a Debian LXC on Proxmox 9. I've also tried an AlmaLinux LXC.

Edit 3: not sure where to go from here. As shown, systemd doesn't appear to be running, but dbus is, and reinstalling dbus doesn't fix the issue. For now I think I'll stick with a VM until I can figure this out.

13
1
submitted 3 days ago* (last edited 2 days ago) by perishthethought@piefed.social to c/selfhosted@lemmy.world

I am moving my personal website from Netlify to a VPS. It's a plain static site generated with Hugo and served by Caddy. Netlify offered a free contact form, but now I need to provide my own solution somehow.

I'd like to self-host that too, if possible, on the VPS, but then I need to handle spam blocking, form validation, possibly a captcha, and sending the outbound email myself, and I don't have experience doing any of that. An AI chatbot suggested a Python script using Flask, but I think I would still need to do a fair amount to make that work.
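
For the validation and spam side specifically, the logic is small enough to sketch framework-free; a minimal pure-Python sketch (the field names and the honeypot trick are my assumptions, not any particular service's API):

```python
import re

# Deliberately loose email check: something@something.tld
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_submission(form: dict) -> list:
    """Return a list of validation errors; an empty list means the submission is OK."""
    errors = []
    # Honeypot: a hidden field real users leave empty; naive bots often fill it.
    if form.get("website", ""):
        errors.append("spam: honeypot filled")
    if not form.get("name", "").strip():
        errors.append("name is required")
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("invalid email")
    msg = form.get("message", "").strip()
    if not (10 <= len(msg) <= 5000):
        errors.append("message must be 10-5000 characters")
    return errors
```

The sending side would then be a separate concern (smtplib to a local or relay SMTP server); keeping validation as a plain function makes it trivial to reuse whether the endpoint ends up being Flask, a CGI script, or anything else.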

There are a number of form-handling cloud services online (such as staticforms dot xyz), but then the form submissions go through their servers. My visitors probably wouldn't care, but I would still prefer to self-host something if it's not too hard to set up.

What do you all recommend? Anyone find a clever solution to this already?

I already have a "comments" feature wired into the site. I could just stick with that, but some people seem to prefer contacting me directly instead.

EDIT: This is all done - thanks for all the great ideas, you all rock!

14
1
submitted 3 days ago* (last edited 1 day ago) by LazerDickMcCheese@sh.itjust.works to c/selfhosted@lemmy.world

cross-posted from: https://sh.itjust.works/post/49393596

I've been running Jellyfin on a Synology DS923+ for a couple of years with 'linuxserver/jellyfin:latest' with no issues until that big update recently. Suddenly it's borked: extremely slow speeds, failing to play files half the time, and stuttering even when it does play. It was time for a hardware update regardless; it was a miracle the NAS could run as many services as it did anyway.

So I built a Proxmox machine with the intent of adding hardware acceleration and transcoding (ideally I'd like to stream to a couple of old CRTs):

  • ASRock B760M PRO RS
  • Intel i5-13500
  • 2x32GB Crucial DDR5-4800
  • 1TB WD SN850X NVMe

Using the Proxmox community Jellyfin script (https://community-scripts.github.io/ProxmoxVE/scripts?id=jellyfin&category=Media+%26+Streaming) I set up an LXC and the iGPU is supposedly being utilized properly. I added an NFS mount from the NAS's media folder to the Proxmox host, then bound the mount point to the LXC. So at this point, it is accessible to clients via web browser, but I'm having a few issues:

  1. (Probably a Proxmox issue, but...) Jellyfin isn't seeing all the media. I added all the libraries and did a full scan, but maybe 10% of the media is actually available. Hopefully this is a moot point because of issue 2.

  2. My old Docker config isn't available. I made an NFS mount from the NAS's docker folder to the Proxmox host and tried to route it to the LXC as well, but the Proxmox-to-NAS mount refuses to work, so I'd need a workaround.

  3. I have no idea if my transcoding settings are right. Intel's specs for my CPU and Jellyfin's recommendations seem to conflict slightly, and between both sets of info there are still some settings that lack guidance. Basically, can someone with a computer engineering degree double-check my settings? I tried a screenshot, but Lemmy didn't appreciate it.

Hardware acceleration: Intel QuickSync (QSV)
QSV Device: /dev/dri/renderD128

  • Enabled (X): H264, HEVC, VP9, AV1, Prefer OS native DXVA or VA-API hardware decoders, Enable hardware encoding, Allow encoding in HEVC format
  • Disabled: MPEG2, VC1, VP8, HEVC 10bit, VP9 10bit, HEVC RExt 8/10bit, HEVC RExt 12bit, Enable Intel Low-Power H.264 hardware encoder, Enable Intel Low-Power HEVC hardware encoder, Allow encoding in AV1 format

Edit: forgot to include logs:

ffmpeg version 7.1.2-Jellyfin Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (Ubuntu 13.3.0-6ubuntu2~24.04)
configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.101 / 61. 19.101
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
[AVHWDeviceContext @ 0x7ab87d07ffc0] No VA display found for device /dev/dri/renderD128.
Device creation failed: -22.
Failed to set value 'vaapi=va:/dev/dri/renderD128,driver=iHD' for option 'init_hw_device': Invalid argument
Error parsing global options: Invalid argument

[WRN] The WebRootPath was not found: "/var/lib/jellyfin/wwwroot". Static files may be unavailable.
[ERR] FFmpeg exited with code 234
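
For reference, the "No VA display found" line usually means the render node isn't actually usable inside the container. The community script normally wires this up, but for comparison, iGPU device passthrough in a Proxmox LXC config typically looks like this (the VMID and group IDs are assumptions; 104/44 are Debian's render/video groups, so verify on your host):

```
# /etc/pve/lxc/100.conf -- illustrative iGPU passthrough for an unprivileged LXC
dev0: /dev/dri/renderD128,gid=104
dev1: /dev/dri/card0,gid=44
```

Inside the container, ls -l /dev/dri and vainfo (from the vainfo package) are the quick checks that the iHD driver can actually open the device.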

Edit: appreciate all the help!

15
1

My siblings and I do Secret Santa every Christmas. I would like to self-host a web app that randomly assigns a Secret Santa to each of us. Each of us should also be able to save some wishes in the app, which only that person's Secret Santa can see.

Is there something that fits this description? Thanks for your help.

16
1
17
1
submitted 3 days ago* (last edited 3 days ago) by EonNShadow@pawb.social to c/selfhosted@lemmy.world

I'd like to get a dashcam, but unfortunately my phone isn't one that has an SD card slot.

I'm going on a road trip soon and would like to pick up a dash cam for it as I've had issues in the past.

I'm looking to see if you all have any cleaner solutions (as in more automated, less fiddly post-setup) than just using an SD card reader on the phone to manually upload the data to a NAS via VPN.

Thanks in advance! I'm definitely more of a tech person than a car person so any help would be appreciated.

Edit: you all seem to be making the same point: I'm coming at this from the wrong POV. I was worried about vendor lock-in and being reliant on whatever service the vendor wants to use instead of just handling the data myself, but it seems like it shouldn't be too much of a problem. Thanks for all your answers!

18
1
Announcing IncusOS (discuss.linuxcontainers.org)
19
1

Prices are rising across Netflix, Spotify, and their peers, and more people are quietly returning to the oldest playbook of the internet: piracy. Is the golden age of streaming over?

20
1

Hiya,

Recently upgraded my server to an i5-12400 CPU, and I've been wanting to push it a bit. I've been looking to host my own LLM tasks and workloads, such as building pipelines to scan open-source projects for vulnerabilities and insecure code, to mention one of the things I want to start doing. Inspiration came from reading about the recent scans of the curl project.

Sidenote: I have no intention of swamping devs with AI bug reports; I simply want to scan projects I personally use so I'm aware of their current state and future changes before I blindly update the apps I host.

What budget-friendly GPU should I be looking for? AFAIK VRAM is quite important: the higher, the better. What other features do I need to be on the lookout for?

21
1

cross-posted from: https://lemmy.nocturnal.garden/post/344011

Found in this Reddit post. Encryption is something I miss in Komodo, I'm not satisfied with how to handle .env files in it, and it's really big for what it does. Of course I discover this the day after migrating one of the last stacks to Komodo, but I'm tempted to give this a try at some point.

Full Quote from the reddit post:


Hey all, I just felt like making a post about a project that I feel is one of the most important and genuinely game-changing pieces of software I've seen for any homelab. It's called Doco-CD.

I know that's high praise. I'm not affiliated with the project in any way, but I really want to get the word out.

Doco-CD is a Docker management system like Portainer and Komodo, but it's WAY lighter, much more flexible, and Git-focused. The main features that stand out to me:

  • Native encryption/decryption via SOPS and Age

  • Docker Swarm support

  • And runs under a single, tiny, rootless Go based container.
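
For context on the SOPS/Age bullet (my illustration of standard SOPS usage, not Doco-CD's documented setup): SOPS is typically driven by a repo-level .sops.yaml, so secrets can live encrypted in the same Git repo the tool deploys from:

```
# .sops.yaml -- illustrative; the recipient key is a placeholder
creation_rules:
  - path_regex: .*\.env$
    age: age1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
```

sops --encrypt .env then encrypts for that Age recipient, and the deploy side decrypts with the matching private key.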

I imagine many here have used Kubernetes and GitOps tools like FluxCD or ArgoCD and enjoyed the automation, but grown to dislike Kubernetes for simple container deployments. GitOps on Docker has been WAY overshadowed. Portainer puts features behind paid licenses; Komodo does much better in my opinion, but getting native decryption to work is pretty hacky, it has zero Docker Swarm support (and removed it from its roadmap), and it's a heavier deployment that requires a separate database.

Doco-CD is the closest thing we have to a true GitOps tool for Docker, and I just came across it last week. I've desperately wanted a tool like this for a while. I've since deployed a ton of stuff with it, and it's the tool I'll be managing the rest of my services with.

It seems to be primarily developed by one guy, which is in part why I want to share the project. Yet he's been VERY responsive: just a few days ago, bind mounts weren't working correctly in Docker Swarm; I made an issue on GitHub, and within hours he had released a new version fixing the problem.

If you've been desperately wanting a Docker GitOps tool that genuinely competes, feature for feature, with the Kubernetes-based GitOps tools, this is the best one out there.

I think for some the only potential con is that it has no UI (like FluxCD). Yet in some ways that can be seen as a pro.

Go check it out.

22
1

Several performance issues addressed and bug fixes applied.

Updates:

  • fix: equalizer missing referenced value
  • Fix: Album track list bug
  • fix: Add listener to enable equalizer when audioSessionId changes
  • chore(i18n): Update Spanish (es-ES) translation
  • shuffle for artists without using getTopSongs
  • Update USAGE.md with instant mix details
  • feat: sort artists by album count
  • Fix downloaded tab performance
  • fix: remove NestedScrollViews for fragment_album_page
  • fix: playlist page should not snap
  • chore: update media3 dependencies
  • fix: update MediaItems after network change
  • fix: skip mapping downloaded item

https://github.com/eddyizm/tempus/releases/tag/v4.1.3

23
1
submitted 4 days ago* (last edited 4 days ago) by Deebster@infosec.pub to c/selfhosted@lemmy.world

cross-posted from: https://infosec.pub/post/37292398

My personal domain has hundreds of aliases - one for each site I deal with. This is great for identifying the source of spam, and I retire any aliases that get spam.

haveibeenpwned.com lets me add a domain, but wants 3912 USD a year to actually tell me which addresses leaked. This is obviously an insane price for a nice-to-have.

Is there an alternative that's free or very cheap? A self-hosted tool that pulls down breach lists would be great, but I suppose those lists aren't public.

24
1
submitted 5 days ago* (last edited 3 days ago) by irmadlad@lemmy.world to c/selfhosted@lemmy.world

I'm almost embarrassed to ask this question, but it's been bugging me for years. I've read the documentation, searched online. Perhaps my search-fu is lacking.

In ntopng there is a panel called Traffic Classification. One of the classifications is 'fun'. How exactly is this classification derived, and what counts as 'fun'?

25
1
submitted 5 days ago* (last edited 5 days ago) by stratself@lemdro.id to c/selfhosted@lemmy.world

Hi all, I made a simple container to forward tailscale traffic towards a WireGuard interface, so that you can use your commercial VPN as an exit node. It's called tswg

https://github.com/stratself/tswg

Previously I also tried Gluetun + Tailscale as some guides suggest, but found it slow and the firewall too strict for direct connections. tswg doesn't do much firewalling aside from the wg-quick rules, and it uses kernel-space networking, which should improve performance. This also enables direct connections to other Tailscale nodes, so you can hook it up with DNS apps like Pi-hole/AdGuard Home.

I've mentioned this before, but now I want to promote it with an actual post. Having tested it on Podman, I'd like to know if it also works on machines behind NAT and/or within Docker. Be warned, though, that I'm a noob w.r.t. networking and can't guarantee against IP leaks or other VPN-related problems. But I'd like to improve.

Let me know your thoughts and any issues encountered, and thank you all for reading


Selfhosted

52855 readers
32 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS