1
submitted 3 days ago* (last edited 3 days ago) by gedaliyah@lemmy.world to c/selfhosted@lemmy.world

I'm having trouble automating the restic backup using systemd.

I followed the linked guide, which seems pretty straightforward. The backup works fine when I run it manually, but when I run the service and check systemctl status restic-backup.service, I get the following error: Fatal: parsing repository location failed: s3: bucket name not found

I have triple-checked the file paths, and also added PassEnvironment=AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY RESTIC_REPOSITORY RESTIC_PASSWORD_FILE B2_ACCOUNT_ID B2_ACCOUNT_KEY to the restic-backup.service file, which I saw used elsewhere. This is my first time using systemd, so I'm not sure if I am overlooking an obvious step or what.

OS: Xubuntu

restic: installed locally following these steps

backup: Backblaze B2 bucket with s3
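
One thing worth checking: PassEnvironment= only forwards variables that are already set in systemd's own (PID 1) environment, not the ones exported in your login shell, so when the unit runs, the repository/bucket variables are likely empty or only partially expanded, which would produce exactly this "bucket name not found" parse error. The usual fix is to put the variables in a root-only file and load it with EnvironmentFile=. A minimal sketch (the file paths, endpoint, and names below are placeholders, not from the guide):

    # /etc/restic/restic.env  (chmod 600, root-owned) -- placeholder values
    # RESTIC_REPOSITORY=s3:https://s3.us-west-004.backblazeb2.com/your-bucket-name
    # RESTIC_PASSWORD_FILE=/etc/restic/password
    # AWS_ACCESS_KEY_ID=your-b2-key-id
    # AWS_SECRET_ACCESS_KEY=your-b2-application-key

    # /etc/systemd/system/restic-backup.service
    [Unit]
    Description=restic backup

    [Service]
    Type=oneshot
    EnvironmentFile=/etc/restic/restic.env
    # adjust the binary path and backup paths to your install
    ExecStart=/usr/local/bin/restic backup /home

After editing, run systemctl daemon-reload, start the unit with systemctl start restic-backup.service, and read the full output with journalctl -u restic-backup.service.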

2
submitted 4 days ago by udc@lemmy.world to c/selfhosted@lemmy.world
3

I'm looking at some CWWK, Topton, and Oaknode boards online for an upcoming build. I'm throwing Proxmox and OPNsense on this. There's a Ryzen 8845HS board I'm curious about, but there are also some Intel boards I could drop a used i5-14600T CPU into that could work well too. Either way, I would have an Intel Arc GPU in the PCIe slot for media decode/encode and a Coral TPU in the E-key M.2 slot for Frigate object recognition.

But I get conflicting info online about these boards being a waste of time and money. I see things about them burning out, or having weird BIOS bugs that never get fixed. On the other hand, NAScompares seems to like these boards. Are these something I should avoid?

4

It looks like a massive update. Here are some excerpts, with more changes listed in the link above. I'm especially excited about the companion app:

"Calibre-Web Automated is extremely lucky and privileged to have such a large and vibrant community of people who support, enjoy and contribute to the project. The bulk of the new features and bugfixes this update brings were created by the best and brightest of our community and I want to celebrate that and their work here in the hope that our community only continues to grow!" - CrocodileStick

Major Changes 🚀

NEW: Split Library Support 💞

  • As promised, all CWA features are now fully compatible with Calibre-Web's Split Library Functionality
  • This enables users to store their Calibre library in a separate location from their metadata.db file
  • To configure this, in the Admin Panel, navigate to Edit Calibre Database Configuration -> Separate Book Files from Library
    • The use of Network Shares (especially NFS) with this functionality is discouraged, as they sometimes don't play well with CW & CWA's SQLite3-heavy stack. Many users use network shares without issues, but there aren't enough resources to support those who can't get it working on their own

NEW: Hardcover API Integration 💜📖

  • Hardcover is now officially available as a Metadata Provider, and using Hardcover's API, Kobo shelves & read progress can now also be synced to a user's Hardcover account!

  • The current workflow is to scrape a book by title; you can then use the resulting hardcover-id identifier to search for editions of that book by searching "hardcover-id:". Edition results are filtered to exclude audiobook editions and sorted ebook first, then physical book.

  • If a shelf in CWA is selected for Kobo sync, then when a book with id and edition identifiers is added to that shelf, it will also be added to Hardcover's Want to Read list. As the book is read on the Kobo device, progress is synced to Hardcover as well whenever it is pushed to CWA.

  • To use Hardcover as a Metadata Provider, simply provide a Hardcover API token in your docker-compose under the HARDCOVER_TOKEN environment variable (see the compose sketch after this list)

    • To enable Kobo sync, a Hardcover API Token must be provided for each user in each user's respective Profile Page
  • Thanks to demitrix! <3
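
For reference, a minimal docker-compose sketch of where that variable goes (the image tag and token value below are illustrative placeholders; check the CWA README for the exact details):

    services:
      calibre-web-automated:
        image: crocodilestick/calibre-web-automated:latest   # illustrative tag
        environment:
          - HARDCOVER_TOKEN=your-hardcover-api-token          # placeholder token
        # ...ports, volumes and the rest of your existing config stay as they are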

NEW: Greatly Improved Metadata Selection UI 🎨

  • Demitrix was really on a roll the last couple of months and also contributed some really cool functionality to the Metadata Selection UI

CWA New Metadata Fetch UI - V3.1.0

  • Much more Elegant & Readable UI, both on Mobile & on Desktop

    • Improved CSS for the Fetch Metadata interface—making it easier and clearer for you to review and select metadata sources.
  • Individually Selectable Elements

    • Say goodbye to having all of your book's metadata overwritten simply because you wanted a better-looking cover!
    • As of V3.1.0, all metadata elements can be individually updated from multiple sources, instead of the only option being to take everything from a single source!
  • Visual Quality Comparison Between the Cover Your Book Already Has and Those Available from Metadata Providers

    • Looking for a specific cover but not sure if the image file is low quality or not? As of V3.1.0, the resolution of cover images is now displayed on the bottom right corner of the preview, the background of which is colour-coded to indicate whether the available cover is of greater, lower or equal quality to the one already attached to the ebook!
  • Thanks to demitrix for their contributions to this! <3

NEW: KOReader Sync Functionality! 📚🗘

  • CWA now includes built-in KOReader syncing functionality, providing a modern alternative to traditional KOReader sync servers!
  • Universal KOReader Syncer: Works across all KOReader-compatible devices, storing sync data in a readable format for future CWA features
  • Modern Authentication: Uses RFC 7617 compliant header-based authentication instead of legacy MD5 hashing for enhanced security
  • CWA Integration: Leverages your existing CWA user accounts and permissions - no additional server setup required
  • Easy Installation: Plugin and setup instructions are available directly from your CWA instance at /kosync
  • Provided by sirwolfgang! <3

NEW: Support for the Latest Versions of Calibre, even on devices with older Kernels! 🆕🎉

  • The ABI tag is removed from the extracted libQt6* files to allow them to be used with older kernels
  • binutils is added to the Calibre-included Dockerfile so that strip can remove the ABI tag from the libQt6*.so files, letting them work with older kernels (harmless for newer kernels). These libraries appear to still contain fallbacks for any missing syscalls that Calibre might use. A .gitattributes file was also added to enforce LF checkout on .sh files (useful for those who build on Windows)
  • Thanks to these changes, CWA now has much greater compatibility with a much wider range of devices & is able to keep up to date with the latest Calibre Releases! 🎉
  • Provided by FennyFatal <3

NEW: Calibre Plugin Support (WIP) 🔌

  • Users can now install Calibre plugins such as DeDRM
  • The feature is still a work in progress, but users with existing Calibre instances can simply bind their existing Calibre plugins folder to /config/.config/calibre/plugins in their docker-compose file (a sketch follows below)
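
A minimal sketch of what that bind mount might look like in the compose file (the host-side path is a placeholder; the container path is the one named above):

    services:
      calibre-web-automated:
        volumes:
          - /path/to/your/calibre/plugins:/config/.config/calibre/plugins   # host path is an example
        # ...rest of your existing service definition unchanged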

NEW: Bulk Add Books to Shelves 📚📚📚

Contributed by netvyper, you can now select multiple books from the book list page and add them to a shelf in one go!

  • New "Add to Shelf" button in bulk actions on the book list.
  • Modal dialog lets you pick your shelf.
  • Backend checks for permissions, duplicates, and provides clear success/error feedback.

NEW: Better Docs Cometh - The Birth of the CWA Wiki 📜

  • The documentation for CWA, while enough for many, could really be better at helping as many users as possible find the answers and information they need as quickly as possible
  • Therefore, we have started work on the CWA Wiki to strive towards this goal!
  • While still very much a work in progress, submissions for pages, edits etc. are open to the community, so if you stumble across something that seems wrong, missing or outdated, please jump in and change it if you can, or let us know if you're not sure :)

Affiliated Projects 👬

  • In the spirit of community, I also wanted to give a shout-out to some really great affiliated projects made by members of our community!
  • As well as being featured here in the release, affiliated projects will now also be prominently featured on the CWA GitHub page to drive as much traffic & enthusiasm to them as possible
  • If you've had an idea for a companion project for CWA, or want to get involved in helping improve CWA and/or its affiliated projects, please just do so! We're all open-source here so you don't need anyone's permission, just go for it! :)

Calibre-Web Companion

  • Built with Flutter and using Material You, Calibre Web Companion is an unofficial companion application for Calibre Web & Calibre Web Automated that allows you to browse your book collection and download books directly on your device, providing a much more modern, mobile-friendly UX than either service can currently provide on its own

Calibre Web Companion Preview


Calibre-Web Automated Book Downloader

  • An intuitive web interface for searching and requesting book downloads, designed to work seamlessly with Calibre-Web-Automated. This project streamlines the process of downloading books and preparing them for integration into your Calibre library

Supporting the Project ❤️

If you are in a position to donate, contributions no matter how small are really appreciated & really help to keep the project going. Currently, all money that has been and will be received is going towards a Kobo device so I can finally help out with the development & testing of CWA's KoSync & Kobo-specific features :)

5

With Lidarr being not very functional due to "Unable to communicate with LidarrAPI - Lidarr API "Internal Server Error" 500 | Invalid response received from LidarrAPI | HTTP Request Timeout" (Issue #5498 · Lidarr/Lidarr), I have been thinking about getting rid of it altogether. I only started using it recently and don't like it.

What I use Lidarr for:

  • Find metadata for music
    • organize files in a consistent way based on metadata
    • obtain album art
    • create .nfo or other files
  • Identify desired music and instruct a download utility to get it (this is optional for me; I can handle it myself if needed)
  • Do the above via a web interface which can be browsed nicely

What I don't like about Lidarr:

  • The not-really-open-source nature of it, e.g. this current problem, where you are reliant on their external server to run your own home server. I feel this might be a more pervasive issue in the *arrs, but I'm not sure of all the implications
  • How unsupported it is to include work that the lidarr servers don't know about. There will never be a metadata database which includes all music. There is just too much music in the world!
  • no audiobook/podcast support

I also have Jellyfin going for the actual serving/streaming of the music. I'm not sure if it's able to fully manage the metadata and files on its own.

Lots of options in the awesome-selfhosted list.

I could use a linux desktop app if it was better than a selfhosted server.

Thoughts?

6
submitted 5 days ago by yuris@lemmy.ml to c/selfhosted@lemmy.world

I don't usually post, but thought I'd share.

I rebuilt my homelab with OpenTofu. Now my entire setup, from containers to networking, lives in a Git repo.

The best part is that new services get published automatically. I just set a flag in the code, and it builds the Caddy proxy or Cloudflare tunnel for me. No more manual config editing.
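
Not the author's actual code (the resource layout and names below are made up), but a sketch of the general pattern a per-service "publish" flag tends to follow in Terraform/OpenTofu, i.e. a boolean variable gating the reverse-proxy plumbing via count:

    variable "publish" {
      description = "Expose this service through the reverse proxy"
      type        = bool
      default     = false
    }

    variable "service_name" { type = string }
    variable "port"         { type = number }

    # Only render a Caddy site block when publish = true
    resource "local_file" "caddy_site" {
      count    = var.publish ? 1 : 0
      filename = "${path.module}/generated/${var.service_name}.caddy"
      content  = <<-EOT
        ${var.service_name}.example.com {
          reverse_proxy ${var.service_name}:${var.port}
        }
      EOT
    }

The linked repo below shows how the author actually wires this up for Caddy and Cloudflare tunnels.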

Here's my quick write-up on it: https://yuris.dev/blog/homelab-opentofu

And the code is all public if you want to see how it works: https://github.com/yurisasc/homelab

Hope this is interesting to someone. Happy to answer any questions if you have them. Curious to hear if anyone else has gone down this particular rabbit hole with IaC for their Docker stack.

7
submitted 5 days ago* (last edited 5 days ago) by ohshit604@sh.itjust.works to c/selfhosted@lemmy.world

I’ve been working on adding security headers to my reverse proxy, and so far I believe I’ve gotten most of them except for Content Security Policy. I honestly can’t find a simple way to apply a CSP to 20+ Docker applications, and I hope folks on Lemmy know the best way to go about this.

I want to note that I’ve never worked with headers in the past. I tried interpreting the Traefik documentation and the Mozilla documentation, as well as a bunch of random YT videos, but I can’t seem to get it right.

    headers:
      headers:
        customRequestHeaders:
          X-Forwarded-Proto: https
        accessControlAllowMethods:
          - GET
          - OPTIONS
          - PUT
        accessControlMaxAge: 100
        hostsProxyHeaders:
          - "X-Forwarded-Host"
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true
        forceSTSHeader: true # This is a good thing but it can be tricky. Enable after everything works.
        customFrameOptionsValue: SAMEORIGIN # https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
        contentTypeNosniff: true
        browserXssFilter: true
        contentSecurityPolicy: ""
        referrerPolicy: "same-origin"
        permissionsPolicy: "camera=(), microphone=(), geolocation=(), usb=()"
        customResponseHeaders:
          X-Robots-Tag: "none,noarchive,nosnippet,notranslate,noimageindex," # disable search engines from indexing home server
          server: "traefik" 
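
Not an answer from the thread, just a hedged note: there's no single CSP that fits 20+ different apps, since each web UI needs its own script/style/connect sources. A common approach is to leave contentSecurityPolicy empty on the shared middleware (as above) and add small per-app middlewares only for the apps you've actually audited. A sketch of what one of those could look like in the dynamic config (the middleware name and the policy itself are placeholders you'd have to tune per application):

    http:
      middlewares:
        csp-jellyfin:                      # example name
          headers:
            contentSecurityPolicy: >-
              default-src 'self';
              img-src 'self' data: blob:;
              script-src 'self' 'unsafe-inline';
              style-src 'self' 'unsafe-inline';
              connect-src 'self' wss:

You'd then attach it to a single router, e.g. with the label traefik.http.routers.jellyfin.middlewares=csp-jellyfin@file. Browsers also understand a Content-Security-Policy-Report-Only header, which lets you watch what a draft policy would block without actually breaking the app.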
8
9

Hello. Does anyone here use Zabbix to monitor their self-hosted environment? If so, what architecture do you have, and what does your deployment look like?

10
11

I have been fiddling with trying to build a Dockerfile and container for the following.

An Alpine Linux image with LFTP, cron, and OpenSSH installed, for use with my external server to sync folders.

Currently I have an Alpine Linux VM that connects to an external server using SSH keys, with a cron task running an LFTP script on a schedule.

Any help or pointing me at a container you know of is appreciated.
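
Not aware of a ready-made image for exactly this, but here's a rough sketch of a Dockerfile (the script name, schedule, and mount paths are placeholders):

    FROM alpine:3.20
    RUN apk add --no-cache lftp openssh-client

    # sync.sh would hold your lftp mirror command, e.g.
    #   lftp -e "mirror --reverse --delete /data sftp://user@remote/path; quit"
    COPY sync.sh /usr/local/bin/sync.sh
    RUN chmod +x /usr/local/bin/sync.sh \
     && echo "*/30 * * * * /usr/local/bin/sync.sh" > /etc/crontabs/root

    # run busybox cron in the foreground so the container keeps running
    CMD ["crond", "-f", "-l", "2"]

Run it with your keys and data mounted in, e.g. docker run -d -v ~/.ssh:/root/.ssh:ro -v /srv/data:/data <image>; the remote host key needs to be present in known_hosts (or handled in the lftp script) for the SSH connection to work non-interactively.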

12
submitted 6 days ago* (last edited 6 days ago) by ruffsl@programming.dev to c/selfhosted@lemmy.world

How many folks already self-host UniFi on their own hardware vs native consoles?

Related Discussion:

13
Reverse Proxy Monitoring (lemmy.nocturnal.garden)

I'm interested in how y'all check/monitor your reverse proxy logs. I run an nginx VM with ports 80 and 443 forwarded that exposes some of my services to the internet on different domains. I use nginx exporter for Prometheus, but I would like better monitoring to see what connects to my services (like my Lemmy instance).

If I were under pressure from LLM scrapers, for example, I would only notice via application and hardware metrics, and would then have to figure out what's going on.
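
One low-effort option, as a sketch: switch the access log to a structured format so a log viewer (GoAccess, Loki/Promtail, etc.) can break traffic down by vhost, client, and user agent instead of just counting requests. The format name and field selection below are arbitrary:

    # in the http{} block of nginx.conf
    log_format json_analytics escape=json
      '{"time":"$time_iso8601","remote_addr":"$remote_addr",'
      '"host":"$host","request":"$request","status":"$status",'
      '"user_agent":"$http_user_agent","bytes_sent":"$body_bytes_sent",'
      '"request_time":"$request_time"}';

    access_log /var/log/nginx/access_json.log json_analytics;

With that in place it's much easier to spot, say, one user agent hammering a single vhost, which is usually how scraper pressure shows up.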

14
submitted 1 week ago by a@91268476.xyz to c/selfhosted@lemmy.world

For people running a family chat in their #selfhosted #homelab: What is the system with the best mobile experience (both Android and iOS)? I've been using Mattermost, but my family is not super excited about it. I'm thinking of running something else instead, but I don't want to test every single platform available.

cc @selfhosted @selfhostedchat

15
submitted 1 week ago* (last edited 1 week ago) by otters_raft@lemmy.ca to c/selfhosted@lemmy.world

Maybe is pivoting to B2B financial forecasting and scenario planning and, as a company, will no longer be actively maintaining this repository. What this means:

  • This final release is a working, “as-is” version of the software
  • As a company, we will be turning 100% of our focus to the pivot, and therefore, will not be actively maintaining / accepting contributions to this repository

It had a nice UI, but it never really felt finished. There are a few other, more popular financial trackers out there; which one do you use?

16

cross-posted from: https://feddit.nl/post/39230816

How will nostr relays deal with the UK's Online Safety Act?

Would a UK netizen only be able to interact with onion nostr relays, or will nostr cease to exist in the UK without a VPN?

17

Hey everyone, just wanted to give a quick update.

After opening things up more on Plebbit/Seedit, we got hit pretty hard with spam and some NSFW content. It got out of hand fast and honestly, it's worse than we expected.

To stop that from messing everything up, we’re thinking about adding optional email or SMS verification when people sign up.

This isn’t something we wanted to do at first, but it seems necessary to protect the space and avoid getting buried in garbage.

We’re still fully open source, and we still want this to feel like a community. If you’ve got other ideas or feedback, feel free to share.

18

cross-posted from: https://infosec.pub/post/32151664

This is a generic metrics post about leveraging a spare ESP32 Meshtastic node to ingest metrics into Grafana! We've had some congestion issues due to poor config in my area, and this has helped me pinpoint which nodes are causing the biggest problems and block them at my repeater.

19

A response to Drew Lyton’s "The Future is NOT Self-Hosted"


Related Discussions:

20
submitted 1 week ago* (last edited 1 week ago) by gandalf_der_12te@discuss.tchncs.de to c/selfhosted@lemmy.world

As a follow up to this post in this community: The Future is NOT Self-Hosted

I have thought about how to set up local, community-hosted fediverse servers that respect privacy and anonymity while still guaranteeing that users joining the server are human beings.

The reasoning behind these requirements is:

  • You want anonymity to guarantee that people won't face repercussions in real life for the opinions they voice on the internet (freedom of speech).
  • You want to keep the fediverse human, i.e. make sure that bot accounts are in the minority.

This might sound like an impossible and self-contradictory set of constraints, but it is indeed possible. Here's how:

Have the local library set up a fediverse server. Once a month, there's a "crypto party" where participants throw a piece of paper with their fediverse account name into a box. The box is then closed and shaken to mix all the tokens in it. Then each one is picked out, and the library confirms that the account name is indeed connected to a human. Since humans have to be physically present to throw in a paper, it is guaranteed that no bot army can just open a hundred anonymous accounts. And because the papers are mixed, they can't be associated with a particular person either.

21

For the past several years, I’ve been updating a little self-hosting puzzle I’ve made for myself to keep me busy. I have an old Power Mac G4 Cube shell. I’ve been designing, 3D printing, and releasing the completed designs for new skeletons to replace Apple’s core with one of my own design that allows off-the-shelf parts to fit into this little case and serve my home networking needs. I technically beat Apple to the internet with the first ARM-powered Mac years ago.

I switched back to intel for the most recent incarnation of this rig, but I’m definitely cheating a little at this point to get it all to fit in there. I’m not proud of this latest version and I don’t think I even released the final version publicly. The project gave me a fun little engineering challenge but now I fear I cannot go further with it without an electrical engineering degree and several years of PCB design work under my belt. I need to make a big boy server now. It’s time to move on.

This little server has always been quiet and energy efficient. It’s been stable, and reliable. It’s just getting to the point where I cannot realistically fit all the parts I want into that shell anymore.

One of the things I want to do is declutter my equipment a bit, combining several things into one enclosure if possible. I’d like to move from my firewalla purple to something that doesn’t need the cloud at all. I’d like to be able to replace more of my cloud based things, including my home cameras, with self hosted options. The way I want to do those things means I really can’t cram them into that box anymore.

My current plan is to build a PC with a Ryzen CPU with at least 6 cores (7000 or 9000 series), install Proxmox, put OPNsense in a VM with a few cores pinned to it and PCI passthrough to an Intel dual-port NIC, and install a Coral TPU for Frigate detection. I would install an Intel GPU to handle the media decode and encode for Frigate managing my cameras and Jellyfin managing my media. I’d install Immich, qBittorrent, and i2pd, and if it’s going to be doing all that anyway, why not throw in a second AMD GPU just for rendering games in a Linux VM (with the GPU accessed via PCI passthrough) and stream those to my TV via Sunshine? Just throw it all in that one box. Build it into a rack case, put it in the basement. Make it nice and clean and tidy. All in one piece.

There are two problems with this. First, it’s going to be expensive, but that’s less of a problem as I can just do this over time, stretch the project out and add resources to it as they are needed. It doesn’t have to do everything on day 1, the old stuff still works. The other issue is power consumption. This thing is NOT going to be as efficient as my arm based firewalla purple and the N100 lattepanda Mu in my G4 cube. It will be able to do much more than those two on their own could ever hope to do, but most of the time, it’s not going to need all the muscle it has. It’s just sitting idle, and consuming more energy to do the same shit.

I thought maybe I could just skip the gaming requirement and modify something like a WTX Pro off Amazon to use an Intel Arc GPU for the transcoding and camera decoding, and that would work well enough. It would sip power, cost less, and do almost everything the other setup would, but be less versatile and more janky thanks to the modifications I would have to do for the GPU. I looked at other Ryzen embedded boards and Intel-based NAS boards, and they all had something about them that would just make them impractical to use for this. Then I saw a video on YouTube today about someone going the other direction with his homelab due to the energy expense, breaking it all up into smaller, weaker hardware tied together with 2.5G Ethernet: a little N100-based NAS, a little ARM-based this and that, all separate things connected through the network but each acting as its own independent box. Ugly, sloppy, more complex, but it used MUCH less power than one big box.

I figured maybe I could set up Proxmox to only spin up the gaming VM when I needed it, and when it’s shut down, power down the AMD GPU and maybe even disable the CPU cores I would have pinned to the gaming VM. The CPU cores probably wouldn’t save that much power though; it may even be more efficient to just leave them available for the running containers rather than collapsing those smaller container loads onto the few remaining cores and clocking them up to compensate. I wasn’t sure of the math on this. According to ChatGPT it’s kind of a wash, but the AI is really only useful if I know what I’m doing so I can correct it or question it when it says something suspicious, so that doesn’t tell me much.

I’m just in the planning stages now. I considered Intel for the CPU, but the prices are higher, the chips aren’t as good as AMD’s per dollar, and I’d get a longer life out of an AM5 socket than the Intel stuff, which changes every time a board member sneezes. Plus the AMD chips are generally lower on power consumption.

I’m kind of thinking I’m getting carried away with this, that the power draw won’t be all that major considering it’s just in my basement and not churning through heavy traffic in an enterprise environment. But I’ve always only ever built 24/7 stuff out of more efficient stuff so I’m not sure what I’m in for here. I know I can’t build an ARM server because OPNsense isn’t supported there.

I need some outside opinions. I’m drowning in options here.

22

So I’ve been trying to get into PeerTube and away from Google, but I’m having trouble finding a frontend to actually use PeerTube, or perhaps I’m missing something with how to use it. Does anyone have any recommendations for this?

23
submitted 1 week ago* (last edited 1 week ago) by Charger8232@lemmy.ml to c/selfhosted@lemmy.world

I am looking for recommendations for an open source self-hosted ~~version control system~~ source code hosting service. I found a few, but I can't decide on which one to pick:

If there's a better one than the ones I've listed here, I'd love to hear about it!

I care primarily about privacy and security, if that makes any difference.

24

Hey everyone. For a variety of reasons I’ve ended up with a paperless-ngx install that has not been upgraded for a while. It’s currently on 1.17.1, and I’ve been researching the best way to get back up to current. I’m worried about the major changes that have happened over time, and I haven’t found anything that gives me the confidence to go about the upgrade. Hoping someone here has some guidance. Cheers!
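
Not specific advice for 1.17.1, just a hedged suggestion: before attempting the jump, take a full export with paperless-ngx's built-in document_exporter, so you can rebuild on the current release from scratch if a long chain of migrations goes sideways. Assuming a docker-compose install where the webserver service has an export volume mounted, something like:

    # dump documents + metadata to the export volume
    docker compose exec -T webserver document_exporter ../export

    # copy the export and your compose/env files somewhere safe, then upgrade
    docker compose down
    docker compose pull
    docker compose up -d

    # watch the database migrations run
    docker compose logs -f webserver

The release notes between your version and current are still worth skimming for breaking changes (e.g. renamed settings), but with an export in hand the worst case is a clean reinstall plus document_importer.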

25

I made a video about copyparty, the selfhosted fileserver I've been making for the past 5 years.

The main focus of the video is the features, but it also touches upon configuration. Was hoping it would be easier to follow than the readme on github... not sure how well that went, but hey :D

This video is also available to watch on the copyparty demo server, as a high-quality AV1 file and a lower-quality h264.


Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!
