submitted 2 hours ago by t0mri@lemmy.ml to c/selfhosted@lemmy.world

cross-posted from: https://lemmy.ml/post/20536177

+-----------------+
| . local server  |
+-.---------------+
< . >
< . >
< . >
< . >
< . >
+-.-----------------------+
| . serveo/localhost.run  |
+-.-----------------------+
< . >
< . >               +----------------------+
< . >               |   .   raw data       |
< . >               | < . > encrypted data |
< . >               +----------------------+
+-.----------+
| . clients  |
+------------+

Hello,

I want to host things (Nextcloud, bin, Syncthing) myself, but I'm behind CGNAT, so I can't do it the regular way; I have to tunnel my way out. My only concern is that the raw data is readable by the SSH server (i.e. serveo/localhost.run), and I don't want anyone else's eyes on my data.

sorry for my broken english.


edit:


Please clarify something for me.

If I set up a VPN that provides encryption on my local server, can I do something like this?

+------------------+
|   . local server |
+-< . >------------+
 << . >>
 << . >>
 << . >>
 << . >>
 << . >>
+-< . >----------------------+
| < . > serveo/localhost.run |
+-< . >----------------------+
 << . >>
 << . >>               +-------------------------------------+
 << . >>               |    .   raw data                     |
 << . >>               |  < . > vpn encrypted data           |
 << . >>               | << . >> vpn encrypted data over tls |
 << . >>               +-------------------------------------+
+-< . >-------+
|   . clients |
+-------------+

Sorry, I don't know how to express this better in words.
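To answer the edited question in hedged terms: yes, the second diagram is the right idea. As long as the encrypted session (VPN or TLS) is established end-to-end between the clients and the local server, the relay only ever sees ciphertext. One way to sketch this, assuming the relay can forward raw TCP and using a hypothetical domain, is to let a reverse proxy on the local server hold the TLS certificate:

```caddyfile
# Caddyfile on the LOCAL server (sketch; cloud.example.com is a placeholder).
# TLS terminates here, so serveo/localhost.run only relays ciphertext.
cloud.example.com {
    reverse_proxy localhost:8080   # e.g. the Nextcloud container
}
```

The tunnel then forwards the TCP port carrying TLS instead of letting the relay terminate HTTPS for you; check whether your relay supports raw TCP forwarding, since the plain HTTP forwarding modes are typically decrypted at the relay's edge.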

submitted 23 hours ago by hoxbug@lemmy.world to c/selfhosted@lemmy.world

Having a bit of trouble getting hardware acceleration working on my home server. The server's CPU is an i7-10700, and it also has a discrete GPU, an RTX 2060. I was hoping to use Intel Quick Sync for the hardware acceleration, but I'm not having much luck.

From the guide on the jellyfin site https://jellyfin.org/docs/general/administration/hardware-acceleration/intel

I have gotten the render group ID using "getent group render | cut -d: -f3", though the guide mentions that on some systems the group might not be render but video or input, so I tried those group IDs as well.

When I run "docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo" I get back

libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/nvidia_drv_video.so
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/nvidia_drv_video.so
libva info: Trying to open /usr/lib/dri/nvidia_drv_video.so
libva info: Trying to open /usr/local/lib/dri/nvidia_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

I feel like I need to do something on the host system, since it's trying to use the discrete card, but I am unsure.
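On a hybrid iGPU + dGPU host, /dev/dri usually contains two render nodes, and renderD128 is not guaranteed to be the Intel one; if it maps to the RTX 2060, libva will keep hunting for an nvidia driver exactly as in the log above. A sketch of one common fix (the node number below is an assumption; verify it on your host first):

```yaml
# Check on the host which render node is the iGPU first:
#   ls -l /dev/dri/by-path/
# The Intel iGPU normally sits at PCI address 00:02.0; suppose it is renderD129.
services:
  jellyfin:
    environment:
      - LIBVA_DRIVER_NAME=iHD   # force the Intel media driver instead of nvidia
    devices:
      - /dev/dri:/dev/dri       # pass through all render nodes, then pick QSV in Jellyfin
```

If vainfo afterwards reports the iHD driver, point Jellyfin's QSV settings at the Intel node.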

This is the compose file just in case I am missing something

version: "3.8"
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: "1000:1000"
    ports:
      - 8096:8096
    group_add:
      - "989" # Change this to match your "render" host group id and remove this comment
      - "985"
      - "994"
    # network_mode: 'host'
    volumes:
      - /home/hoxbug/Docker/jellyfin/config:/config
      - /home/hoxbug/Docker/jellyfin/cache:/cache
      - /mnt/External/Movies:/Movies
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
networks:
  external:
    external: true

Thank you for the help.


I'm proud to share a major development status update for XPipe, a new connection hub that allows you to access your entire server infrastructure from your local desktop. It works on top of your installed command-line programs and does not require any setup on your remote systems. XPipe integrates with your tools, such as your favourite text/code editors, terminals, shells, command-line tools, and more.

Here is how it looks if you haven't seen it before:

Hub

Hub Alt

Browser

More terminal integrations

There is now support to use the following terminals:

  • Termius
  • MobaXterm
  • Xshell
  • SecureCRT

These work via a local SSH bridge that is managed by XPipe. That way you can keep using your existing SSH terminal solution with the added functionality of XPipe.

Pricing model updates

I received plenty of user feedback, so I changed the old pricing model to one that should capture the demand better. The old pricing model was created at a time when XPipe had no customers at all and did not reflect the actual user demand. The main changes are the addition of a homelab plan, a new monthly subscription, and changes to the one-year professional edition. All changes only apply to new orders. The community edition is also not changed.

The homelab plan is essentially a cheaper alternative to the professional plan that should include all paid features necessary to operate XPipe in a typical larger homelab environment if the community edition is not enough. If you are looking for a detailed feature comparison of what is included in which plan, you can find that information at https://xpipe.io/pricing#comparision.

The old yearly plan differed from many established pricing models and required a fair bit of reading to fully understand. I think there were more people asking clarifying questions about it than actually buying it, which is not a good sign for a pricing model. And in the end, many customers who valued ownership of a product went for the lifetime variant anyway. So the pricing model has been changed to a more traditional subscription plan with monthly/yearly options, plus the already existing lifetime plan, which stays the same. This makes it easier to understand for potential customers and hopefully easier to sell as well.

Hyper-V support

This release comes with an integration for Hyper-V. Searching for connections on a system where Hyper-V is installed should automatically add connections to your VMs. XPipe can connect to a VM via PSSession or SSH. PSSession is used by default for Windows guests if no SSH server is available on the guest. In all other cases, it will try to connect via SSH. Since Hyper-V cannot run guest commands on non-Windows systems from the outside, you have to make sure that an SSH server is already running in the VM in that case.

The Hyper-V integration is available starting from the homelab plan.

Teleport support

There is now support for adding Teleport connections that are available via tsh. You can do that by searching for available connections on any system which has tsh installed. This is a separate integration from SSH; SSH config entries for Teleport proxies do not work due to tsh limitations and are automatically filtered out. The new implementation works solely through the tsh tool.

This feature is available in the Professional plan as Teleport is typically an enterprise tool.

VNC improvements

The VNC integration has been reworked. It now supports more encrypted authentication methods, allowing it to connect to more servers. Furthermore, it is also now possible to create VNC connections without an SSH tunnel for systems that do not have SSH connectivity. You can also now send CTRL+ALT+DEL via SHIFT+CTRL+ALT+DEL.

Experimental serial connection support

There is now support to add serial connections. This is implemented by delegating the serial connection to another installed tool of your choice and opening that in a terminal session.

Note that this feature is untested, as I don't have any physical serial devices around. The plan is for this feature to evolve over time with user feedback and issue reports. It is not expected to fully work at the initial release. You can help the development of this feature by reporting any issues and testing it with the various devices you have.

TTYs and PTYs

Up until now, if you added a connection that always allocated a PTY, XPipe would complain about a missing stderr. This was usually the case with badly implemented third-party SSH wrappers and proxies. In XPipe 11, there has been a ground-up rework of the shell initialization code which allows for better handling of these cases. You can therefore now also launch such connections from the hub in a terminal. More advanced operations, such as the file browser, are still not possible for these connections though.

Scripting improvements

The scripting system has been reworked to make it more intuitive and powerful. You can now call a script from the connection hub directly for each connection. You can also now launch scripts either in the background or in a terminal if they are intended to be interactive. In the file browser, when multiple files are selected, you can now call a script with all the selected files as arguments.

Other

There have also been a lot of improvements and bug fixes across the board that are not listed here. The workflow has been streamlined, the Proxmox support has been refined, and the git sync has been made more robust.

The XPipe Python API has now been designated the official API library for interacting with XPipe. If you've ever thought about programmatically interacting with systems through XPipe, feel free to check it out.

The website now contains a few new documents that may help you convince your boss when you're thinking about deploying XPipe at your workplace. There is an executive summary for a short overview of XPipe and a security whitepaper for CISOs.

A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

Outlook

If this project sounds interesting to you, you can check it out on GitHub!

Enjoy!


I honestly can't get my head around this. I have a machine running Linux (EndeavourOS) and Docker with a few containers. Since I want all the traffic from this system to go through the VPN, do I need to set up Gluetun? I think not, but I am not 100% sure...
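For what it's worth: Gluetun routes selected containers through a VPN while the rest of the system stays on the normal connection, so if the goal is truly all traffic from the machine, a host-level WireGuard/OpenVPN client covers the containers too and Gluetun isn't needed. The per-container pattern, as a sketch with placeholder provider and credentials, looks like:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad      # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your-key>  # placeholder credential
  some-app:
    image: some/app                       # placeholder image
    network_mode: "service:gluetun"       # this container's traffic exits via the VPN
```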

submitted 2 days ago* (last edited 21 hours ago) by VitabytesDev@feddit.nl to c/selfhosted@lemmy.world

After the arrest of Pavel Durov, I wanted to move from Telegram to something end-to-end encrypted. I know Signal is pretty good, but I think it is better to have our messages on my own server.

I have already looked into XMPP, but it required SSL certs and I was not in the mood to configure them.

Do you know any other self-hosted messaging service for a group of 4-5 friends, or an easy way to configure an XMPP server? Or shall I use Signal after all (I don't really care that much about it being self-hosted; I just thought it would be more privacy friendly)?

UPDATE: I managed to set up an XMPP server using Prosody with the SSL certs. My friend and I have been testing it, and it seems to be going well.
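For anyone else landing here: the cert step with Prosody can be reduced to importing certificates that certbot already manages; `prosodyctl cert import` copies them into Prosody's own store and fixes permissions. A sketch, assuming a certbot-issued certificate in the default layout:

```shell
# Sketch: import Let's Encrypt certs into Prosody (paths assume a default certbot setup)
sudo prosodyctl --root cert import /etc/letsencrypt/live
```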


I'm setting up a self-hosted stack with a bunch of services running on a home device. I'm also tunneling all the traffic through a VPS in order to expose the services without exposing my home IP or opening ports on my local network. Currently all my traffic is HTTP, and its path looks like this:

  • Caddy proxy on remote VPS (HTTPS, :80 & :443)
  • Wireguard tunnel
  • Caddy proxy in Docker on homeserver (HTTP, :80)
  • app containers in separate isolated subnets, shared with Caddy

I want to set up qBittorrent and other torrent apps, and I want all their traffic to pass through the proxies. Proxying traffic to the WebUI is easy; there are plenty of tutorials. What I'm struggling with is proxying the torrent leeching and seeding traffic, which is the most important part, since I live in a country that's not cool with piracy.

Unless I'm misunderstanding, BitTorrent traffic is TCP or UDP, so I'd need Caddy to act as a Layer 4 proxy. There's a community-maintained plugin that should support this. How would I configure it though? Do I need both instances to listen on a new port? Or can I open a new port on the VPS only, and forward traffic to the homeserver Caddy over the same port as the HTTP traffic (:80)? Are there nuances in proxying TCP traffic that I should be aware of?
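Since raw BitTorrent traffic has no Host header or SNI to route on, it can't share the HTTP listener; the usual approach is a dedicated port opened on the VPS and forwarded over the tunnel to the same port on the home server, with qBittorrent's listening port set to match. With the community caddy-l4 plugin this is configured through Caddy's JSON config; a sketch, where port 6881 and the WireGuard peer address 10.0.0.2 are assumptions for illustration:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "torrent-tcp": {
          "listen": [":6881"],
          "routes": [
            { "handle": [ { "handler": "proxy",
                            "upstreams": [ { "dial": ["10.0.0.2:6881"] } ] } ] }
          ]
        },
        "torrent-udp": {
          "listen": ["udp/:6881"],
          "routes": [
            { "handle": [ { "handler": "proxy",
                            "upstreams": [ { "dial": ["udp/10.0.0.2:6881"] } ] } ] }
          ]
        }
      }
    }
  }
}
```

Peers will only ever see the VPS IP, which is the point; your home IP stays out of swarms entirely.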


I had changed the SSH password on something, so I had to dig through my known_hosts file, and I saw the word FUCK spelled out in there in all caps. I chuckled, but I'm sure there's an explanation.
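The likely explanation is mundane: host keys in known_hosts are stored base64-encoded, and the base64 alphabet includes every uppercase letter, so short words show up by chance in long keys. Concretely, any key whose raw bytes contain 0x15 0x40 0x8A on a three-byte boundary will spell it out:

```shell
# The three bytes 0x15 0x40 0x8a happen to base64-encode to exactly "FUCK"
printf '\x15\x40\x8a' | base64
# FUCK
```

A single key entry is hundreds of base64 characters long, so while any given 4-letter hit is rare per key, it turns up often enough across many hosts and many users.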


Found a great widget for monitoring my Raspberry Pi: RaspPi Check.

But I would like a similar app with a widget for my Proxmox server. Does anyone have any suggestions or other ideas?

Recommend a KVM or Switch (lemmy.blahaj.zone)

Hello all, I'm looking for a switch/KVM for my home setup. I've been through a few attempts and none of them have worked, for one reason or another.

I have two machines:

A Windows 11 work laptop

  • USB-C out (both USB and DisplayPort)
  • HDMI out
  • USB 2.0 out

An Ubuntu-based personal server

  • DisplayPort out
  • USB-C out (no DisplayPort)

For displays, I have a single double-wide 4K monitor.

Additionally, I have a USB-C hub that all my peripherals are connected to.


Hello all, I recently set up Jellyfin on my RPi 4 with an external HDD attached, and after a few tests I decided to move on. On eBay I found a refurbished Fujitsu mini PC with a Pentium G4560. It is way cheaper than the Lenovo ThinkCentre M720q (with a G5400T) which I saw being recommended a lot.

My question is:

how does the former's higher TDP of 54 W with a base frequency of 3.50 GHz compare to the latter's 35 W at 3.10 GHz in a real-world scenario running Jellyfin?

For now I will continue using my external HDD, because the prices for new drives are too high for me.

Low Cost Mini PCs (lowcostminipcs.com)

Thought this might be helpful as a lot of these mini PCs are hitting the used market.

submitted 4 days ago* (last edited 12 hours ago) by douglasg14b@lemmy.world to c/selfhosted@lemmy.world

Hopefully you all can help!

I've been through hundreds of threads over the last few days trying to puzzle this out, with no luck.

The problem:

  1. Caddy v2 with the ACME HTTP-01 challenge (changed from the TLS-ALPN challenge)
  2. Cloudflare DNS with proxy ON
  3. All Cloudflare HTTPS features are off
  4. This is a .co domain

Any attempt to get certificates fails with an invalid challenge response. If I try to navigate (or curl) to the challenge URL directly, I always get SSL validation errors, as if all the requests are trying to upgrade to HTTPS.

I'm kind of at my wit's end here and am running out of things to try.

If I turn the Cloudflare proxy off and go back to the TLS-ALPN challenge, everything works as expected. However, I do not wish to expose myself directly and want to use the proxy.

What should I be doing?


I have now solved this by using the Cloudflare DNS ACME challenge, with Cloudflare SSL turned back on. Everything works as expected now: external clients terminate SSL at Cloudflare, Cloudflare communicates with my proxy over HTTPS, and internal clients terminate SSL at Caddy.
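For later readers: the DNS-01 route needs a Caddy build that includes a DNS provider module, since stock Caddy ships none. A sketch of the Caddyfile side (the domain and environment variable name are placeholders; assumes a build such as `xcaddy build --with github.com/caddy-dns/cloudflare`):

```caddyfile
example.co {
    tls {
        dns cloudflare {env.CF_API_TOKEN}  # scoped token with Zone:DNS:Edit
    }
    reverse_proxy localhost:8080           # placeholder upstream
}
```

Because the challenge happens over DNS, it works with the Cloudflare proxy enabled and without ports 80/443 being reachable for validation.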

submitted 4 days ago* (last edited 3 days ago) by mrvictory1@lemmy.world to c/selfhosted@lemmy.world

Here is the past network setup:

  • Main Router (192.168.1.2) -> Ethernet Switch -> Multiple Ethernet cables connected to wall
  • Wall -> Second Router (192.168.1.1)
  • Wall -> PC

After a blackout we thought the switch was no longer working, so we replaced it with another router. The problem is the router has too few ports, so not every room gets Ethernet. The Ethernet switch works in this configuration:

  • Main Router -> Third Router (Wi-Fi disabled) -> Ethernet cable connected to wall -> Wall -> Ethernet Splitter -> PC

Under either of these configurations the PC detects a network but cannot reach 192.168.1.1, 192.168.1.2, or the WWW:
  • Main Router -> Ethernet Switch -> PC
  • Main Router -> Ethernet Switch -> Ethernet cable connected to wall -> Wall -> PC

Windows reports "Unidentified network"; Linux tries to connect for a minute, then fails. I know the PC isn't bad because other devices also fail to connect. Even if I set up a static IP I cannot reach a local IP. The 2nd router has IP address 192.168.1.1 because it refuses to use anything else; the first router is assigned a different IP so the two don't conflict.

Update: For testing I removed router 2 (the one I use as an extender / wireless AP) and set router 1's IP address to 192.168.1.1. I tried connecting Router 1 to Router 3 (with DHCP disabled) and Router 3 (used as a switch) to the PC via cables. It worked. Then I replaced Router 3 with the switch: network detected, but no Internet. So even with the simplest possible setup and one DHCP server I had no network.

My original problem was that Router 3 had too few ports, so not all rooms got Ethernet access. Router 3 sits above Router 1 and connects to the cables coming out of the wall that provide Ethernet to the rooms. I recalled that the WAN cable of Router 1 was too short to lift it up to those cables; it turns out that's not the case. So I lifted Router 1 and could connect a cable to provide Ethernet for one more room, which is what I needed. Routers 1 and 3 are held mid-air by Ethernet cables. I previously mentioned that the switch works if it is connected to a wall plug in a room, and it still works that way. Anyway, here is the final setup:


I'm a little surprised I can't find any posts asking this question, and that there doesn't seem to be a FAQ about it. Maybe "Facebook" covers too many use cases for one clean answer.

Up front, I think the answer for my case is going to be "Friendica," but I'm interested in hearing if there are any other, better options. I'm sure Mastodon and Lemmy aren't it, but there's Pixelfed and a dozen other options I'm less familiar with.

This mostly centers around my 3-y/o niece and a geographically distributed family, and the desire for Facebook-like image sharing with a timeline feed, comments, likes (positive feedback), that sort of thing. Critical, in our case, is a good iOS experience for capturing and sharing short videos and pictures; a process where the parents have to take pictures, log into a web site, create a post, attach an image from the gallery is simply too fussy, especially for the non-technical and mostly overwhelmed parents. Less important is the extended family experience, although alerts would be nice. Privacy is critical; the parents are very concerned about limiting access to the media of their daughter that is shared, so the ability to restrict viewing to logged-in members of the family is important.

FUTO Circles was almost perfect. There was some initial confusion about the difference between circles and groups, but in the end the app experience was great and it accomplished all of the goals -- until it didn't. At some point, half of the already shared media disappeared from the feeds of all of the iOS family members (although the Android user could still see all of the posts). It was a thoroughly discouraging experience, and resulted in a complete lack of faith in the ecosystem. While I believe it might be possible to self-host, by the time we decided that everyone liked it and I was about to look into self-hosting our own family server (and remove the storage restrictions, which hadn't yet been reached when it all fell apart), the iOS app bugs had cropped up and we abandoned the platform.

So here are the requirements we're looking for:

  • The ability to create private, invite-only groups/communities
  • A convenient mobile capture+share experience, which means an app
  • Reactions (emojis) & comment threads
  • Both iOS and Android support, in addition to whatever web interface is available for desktop use

and, given this community, obviously self-hostable.

I have never personally used Facebook, but my understanding is that it's a little different in that communities are really more like individual blogs with some post-level feedback mechanisms; in this way, it's more like Mastodon, where you follow individuals and can respond to their posts, albeit with a loosely-enforced character limit. And as opposed to Lemmy, which while moderated, doesn't really have a main "owner" model. I can imagine setting up a Lemmy instance and creating a community per person, but I feel as if that'd be trying to wedge a square peg into a round hole.

Pixelfed might be the answer, but from my brief encounter with it, it feels more like a photo-oriented Mastodon than a Facebook wall-style experience (it's Facebook that has "walls", right?).

So back to where I started: in my personal experience, it seems like Friendica might be the best fit, except that I don't use an iPhone and don't know if there are any decent Friendica apps that would satisfy the user experience we're looking for; honestly, I haven't particularly liked any of the Android apps, so I don't hold out much hope for iOS.

Most of the options speak ActivityPub, so maybe I should just focus on finding the right AP-based mobile client? Although, so far the best experience (until it broke) has been Circles, which is based on Matrix.

It's challenging to install and evaluate all of the options, especially when -- in my case -- to properly evaluate the software requires getting several people on each platform to try and see how they like it. I value the community's experience and opinions.

submitted 4 days ago* (last edited 4 days ago) by gedaliyah@lemmy.world to c/selfhosted@lemmy.world

I would like to use a cloud backup service on my home server. I am a complete beginner. I purchased a subscription for Proton Drive, but it looks like that just won't work. Is there a secure cloud backup that works well on Linux? Bonus points if there's a way to use proton drive. Extra bonus points, if I can set it up for automatic backups through a GUI.
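One hedged option: rclone has a Proton Drive backend (marked experimental in rclone's docs), which would let standard rclone workflows target Proton. A sketch, where the remote name and paths are placeholders:

```shell
rclone config                                   # interactively create a remote named "proton" (type: protondrive)
rclone sync /srv/backups proton:server-backup --progress
```

There are GUI frontends for rclone as well, and scheduling can be a plain cron or systemd timer job; treat the backend's experimental status as a reason to verify your restores.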

submitted 5 days ago* (last edited 4 days ago) by jawa21@lemmy.sdf.org to c/selfhosted@lemmy.world

Not exactly a self-hosting thing, but I'd like to know if anyone has experience with this service. Is it worth it? A scam? I don't know. I don't really have the hardware to truly self-host a Lemmy instance (mostly because of storage restrictions), but I'd like to know if this service, which seems cheap for what it offers, is legit.

I know that this isn't a pure self-hosting question, but I nailed a .com domain for $1/year and was wondering if it's actually worth doing this. Any insight is appreciated.

Editing to add that I'd love to do pure self-hosting here, but storage is a real issue.

submitted 5 days ago* (last edited 4 days ago) by drkt@lemmy.dbzer0.com to c/selfhosted@lemmy.world

v !!! POST-MIGRATION EDIT !!! v

I shrunk the LVM partition by 5000 MiB and just ran dd overnight. I had to shuffle my boot order around a bunch to find the one partition that would boot properly, but it all just works.


v !!! ORIGINAL POST !!! v

Hi! My Proxmox machine has 3 disks (see pic). I wish to migrate sdc to a 2 TB SSD. I have LXCs on all drives and I would really like to avoid having to restore from backups. I don't have any special configuration on my Proxmox; it's pretty clean and basic.

Is it safe to simply dd the old disk to the new one? I can't find an explicit answer to this question that doesn't also have a lot of other variables not relevant to me.

If not, what else can I do?

submitted 5 days ago* (last edited 4 days ago) by TedZanzibar@feddit.uk to c/selfhosted@lemmy.world

Quick overview of my setup: Synology NAS running a whole bunch of Docker containers and a couple of full blown VMs, and an N100 based mini PC running Ubuntu Server for those containers that benefit from hardware acceleration.

On the NAS I have a Linux Mint VM that I use for various desktoppy things, but performance via RDP or NoMachine and so on is just bad. I think it's ultimately due to the lack of acceleration, so I'd like to try running it from the mini PC instead but I'm struggling to find hypervisor options.

VirtualBox can be run headless, apparently, but the package installed via apt wants to pull in X/Wayland and the entire desktop experience. LXC looks like it might be a viable option with its web frontend, but it appears to conflict with Docker at the moment and won't run the setup.

Another option is to redo the machine with UnRaid or TrueNAS Scale but as they're designed to be full fledged NAS OSes I don't love that idea.

So what would you do? Does anyone have a similar setup with advice?

Thanks all!

Edit: Thanks for everyone's comments. I still can't get LXC to work, which is a shame because it has a nice web frontend, so I'll give KVM a go as my next option. Failing that I might well back up my Docker volumes, blat the whole thing and see what Proxmox can do.

Edit 2: Webtop looks to be exactly what I was looking for. Thanks again for everyone's help and suggestions.


Hi, I'm searching for something for manga/books.

I currently use Jellyfin, but I don't really like it (importing metadata is a very complex and mechanical process). Are there any good alternatives?


Most of my friends are in tech, and I think one of them would enjoy hosting their own services if they got into it. Currently, I do most of our hosting, from media servers to game servers, but I think the hardest part is giving people an incentive to host.

For example, maybe they saw the lights automatically come on through the use of home automation like Home Assistant or maybe they wanted to control their own music library.

I think the idea of managing your own hardware and services doesn't become enjoyable until you've already seen the outcome, such as having a resource or service available to you that you didn't have before. When I first got into self-hosting, I also had the problem of identifying what I wanted to host.

How do/did you get your friends interested in selfhosting? What services did they look into hosting themselves?

I'm not going to force someone into a hobby they aren't interested in, I'm just curious how people brought the conversation up.

Thanks.

submitted 6 days ago* (last edited 5 days ago) by Tywele@lemmy.dbzer0.com to c/selfhosted@lemmy.world

Solution: I just had to create the file

I wanted to install Pi-Hole on my server and noticed that port 53 is already in use by something.

Apparently it is in use by systemd-resolved:

~$ sudo lsof -i -P -n | grep LISTEN
[...]
systemd-r    799 systemd-resolve   18u  IPv4   7018      0t0  TCP 127.0.0.53:53 (LISTEN)
systemd-r    799 systemd-resolve   20u  IPv4   7020      0t0  TCP 127.0.0.54:53 (LISTEN)
[...]

And the solution should be to edit /etc/systemd/resolved.conf, changing #DNSStubListener=yes to DNSStubListener=no, according to this post I found. But /etc/systemd/resolved.conf doesn't exist on my server.

I've tried sudo dnf install /etc/systemd/resolved.conf, which did nothing other than tell me that systemd-resolved is already installed, of course. Rebooting also didn't work. I don't know what else I could try.

I'm running Fedora Server.

Is there another way to stop systemd-resolved from listening on port 53? If not how do I fix my missing .conf file?
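systemd-resolved falls back to compiled-in defaults when resolved.conf is absent, so the fix is simply to create the file (as the solution note at the top says), or, slightly more cleanly, a drop-in. A sketch:

```ini
; /etc/systemd/resolved.conf.d/10-no-stub.conf (create the directory if needed)
; apply with: sudo systemctl restart systemd-resolved
[Resolve]
DNSStubListener=no
```

Once the stub listener is gone, make sure /etc/resolv.conf points at a real resolver (e.g. the Pi-hole address) rather than 127.0.0.53.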

submitted 6 days ago by WbrJr@lemmy.ml to c/selfhosted@lemmy.world

So I am working on my home server. I installed Docker and use a dnsmasq container as my DNS server to resolve local IP addresses.

Laptop and server are both Linux (Ubuntu LTS 24.04).

What works:

  • 'resolvectl status' shows the IP of my DNS server
  • I can ping the IP of the DNS server (which will soon also run other things like Nextcloud)
  • I can use nslookup to resolve server.local to the correct IP address (even after changing the entry, so it's not the cache on my laptop)

what does not work:

  • I cannot ping server.local (for testing I have to stop the systemd-resolved service to run the dnsmasq server, or else there are port collisions, but that should not be the problem I guess. I am happy to hear your solution :))
  • I also cannot use SSH to log in to server.local; the IP address works

What am I missing?

Thanks a lot already! BTW: ZFS is crazy nice :D
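One likely culprit: `.local` is reserved for mDNS (RFC 6762). nslookup talks to the DNS server directly, but ping and ssh resolve names through NSS, and the usual nsswitch.conf hosts line hands `.local` names to mDNS and stops before your DNS server is ever asked:

```
# /etc/nsswitch.conf (typical default)
# mdns4_minimal answers .local itself; [NOTFOUND=return] stops the lookup
# before the "dns" module (your dnsmasq) is ever consulted.
hosts: files mdns4_minimal [NOTFOUND=return] dns
```

Two ways out: use a different internal domain in dnsmasq (e.g. `.lan` or the reserved `.home.arpa`), or change that hosts line; switching the domain is the less invasive option.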


Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago