[-] IsoKiero@sopuli.xyz 5 points 3 days ago

It depends. I've run small websites and other services on an old laptop at home. It can be done. But you need to realize the risks that come with it. If the thing I'm running for fun goes down, someone might be slightly annoyed that it isn't accessible all the time, but it doesn't harm anyone's business. If someone's livelihood depends on the thing, then the stakes are a lot higher and you need to take suitable precautions.

You could of course offload the whole hardware side to amazon/hetzner/microsoft/whoever and run your services on leased hardware, which simplifies things a lot, but you still run into problems: you need to meet more or less arbitrary specs for an email server so that Microsoft or Google even accept what you're sending, you need monitoring and staff available to keep things running all the time, you need to plan for backups and other disaster recovery, and so on. So it's "a bit" more than just 'apt install dovecot postfix apache2' on a Debian box.

[-] IsoKiero@sopuli.xyz 16 points 4 days ago

Others have already mentioned the challenges on the software/management side, but you also need to take into consideration hardware failures, power outages, network outages, acceptable downtime and so on. So, even if you could technically shoehorn all of that into a Raspberry Pi and run it on a windowsill, and I suppose it would run pretty well, you'll risk losing all of the data if someone spills some coffee on the thing.

So, if you really insist on doing this on your own hardware with your own maintenance (and want to do it properly), you'd be looking at (at least):

  • 2 servers for redundancy, preferably with a 3rd one lying around for a quick swap
  • A pretty decent UPS setup, again with multiple units for redundancy
  • Routers, network hardware, internet uplinks and everything else at least duplicated and configured correctly to keep things running
  • A separate backup solution in at least two different physical locations, so a few more servers with their network, power and other stuff taken care of
  • Monitoring and an alerting system in case of failures, with someone on call 24/7

And likely a ton of other stuff I can't think of right now. So, 10k for hardware, two physical locations and maintenance personnel available all the time. Or you can buy website hosting (or even a VPS if you like) for a few bucks a month and an email service for 10/month (give or take), and have the services running, backed up and taken care of for far longer than your own hardware's lifetime, for a lot cheaper than that hardware alone.

[-] IsoKiero@sopuli.xyz 3 points 5 days ago

I live in Europe. No unpaid overtime here, and productivity requirements are reasonable, so no reason to blame my tools for that. And even if my laptop OS broke itself completely, I'd still be productive while reinstalling it, as keeping my tools in running shape is also in my job description. So, as long as I'm not just scratching my balls and scrolling Instagram reels all day long, that's not a concern.

[-] IsoKiero@sopuli.xyz 6 points 5 days ago

I'm currently more of a generic sysadmin than a Linux admin, as I do both. But the 'other stuff' at work revolves around Teams, Office, Outlook and things like that, so I'm running Win11 with WSL and it's good enough for what I need from a workstation. There's technically a policy in place that only Windows workstations are supported, but I suppose I could run Linux (and I have a separate laptop for Linux-only stuff). In the current environment it's just not worth the hassle, specifically since I need to maintain Windows servers too.

So, I have my terminals, Firefox and whatever I need, and I also have the mandated office suite and malware protection/EDR/IDS by the book; in my mindset I'm using company tools for company jobs. If they take longer or could be more efficient or whatever, it's not my problem. I'll just browse my (personal) cellphone while the throbber spins on the screen, and I get paid to do that.

If I switched to Linux I'd need to personally keep my system up to spec, and I wouldn't have any kind of helpdesk available should I ever need one. So it's just simpler to stick with what the company provides, and if it's slow then it's not my headache; I've accepted that mindset.

[-] IsoKiero@sopuli.xyz 1 points 6 days ago

The package file, no matter if it's rpm, deb or something else, contains a few things: files for the software itself (executables, libraries, documentation, default configuration), dependencies on other packages (as in, to install software A you also need to install library B) and installation scripts for the package. There's also some metadata, info for uninstallation and things like that, but that's mostly irrelevant for the end user.

And then you need a suitable package manager: dpkg for deb packages, rpm (the program) for rpm packages and so on. That's why you mostly can't install Debian packages on Fedora or the other way around. Derivative distributions, like Kubuntu and Lubuntu, use Ubuntu packages but have a different default package selection and default configuration. Technically it would be possible to build a Kubuntu package which depends on some library version that isn't available on Lubuntu, and thus the packages wouldn't be compatible, but I'm almost certain that with those specific two that's not the case.
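
As a rough example on the Debian/Ubuntu side (the package name here is just a made-up placeholder):

# Install a local .deb with dpkg; dpkg itself doesn't resolve dependencies,
# so this can fail if library B isn't installed yet
sudo dpkg -i softwareA.deb

# Ask apt to pull in whatever dependencies are still missing
sudo apt -f install

# Or let apt handle the whole thing, dependencies included
sudo apt install ./softwareA.deb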

And then there are things like Linux Mint, which was originally based on Ubuntu, but at least at some point they had builds based on both Debian and Ubuntu, and thus different package selections. So there are a ton of nuances to this, but for the most part you can ignore them; just follow the documentation for your specific distribution and you're good to go.

[-] IsoKiero@sopuli.xyz 53 points 2 weeks ago

This is the same as complaining that my job puts a filter on my work computer that lets them know if I’m googling porn at work. You can cry big brother all you want, but I think most people are fine with the idea that the corporation I work for has a reasonable case for putting monitoring software on the computer they gave me.

European point of view: my work computer and the network in general have filters so I can't access porn, gambling, malware and other stuff on them. There's monitoring for viruses and malware, which is a pretty normal and well-understood need. BUT. It is straight up illegal for my employer to actively monitor my email content (they'll of course have filtering for incoming spam and such) or my chats on Teams/whatever, or in general to be intrusive of my privacy even at work.

There are of course mechanisms in place where they can access my email if anything work-related requires it. So in case I'm lying in a hospital or something, they are allowed to read work-related emails from my inbox, but anything personal is protected by the same laws which apply to traditional letters and other communication.

Monitoring 'every word' is just not allowed, no matter how good your intentions are. And that's a good thing.

[-] IsoKiero@sopuli.xyz 42 points 1 month ago

Medvedev found the keys to the booze cabinet again? They seem to happily forget the fact that Moscow is well within reach of multiple NATO countries by now. Obviously a ton of things would need to change before anyone with a gun is standing on Red Square, but Finland, Sweden, Estonia and Poland (among others) are quite capable of hitting the Kremlin (in theory, and in practice if needed) with fighter jets in less than 30 minutes. Additionally, their ports opening to the Gulf of Finland are within reach of both Finns and Estonians with traditional artillery, and at least we in Finland are pretty capable and accurate with our hardware.

So, even if they find some old Soviet relic that's still functional, NATO has multiple options to level multiple cities in Russia before their missile hits the ground. A nuclear attack against Ukraine would of course be a humongous tragedy with a terrible price in civilian casualties, but I'm pretty confident it would be the last thing the Russia we currently know would do as a country.

[-] IsoKiero@sopuli.xyz 41 points 10 months ago

dd. It writes to the disk at the block level and doesn't care if there's any kind of filesystem or RAID configuration in place; it just writes zeroes (or whatever you ask it to write) to the drive and that's it. Depending on how tight your tin foil hat is, you might want to write a couple of runs from /dev/zero and /dev/urandom to the disk before handing it over, but in general a single full run from /dev/zero to the device makes it pretty much impossible for any Joe Average to get anything out of it.
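
For example, something along these lines (replace /dev/sdX with the actual target drive and triple-check it, dd doesn't ask twice):

# Overwrite the whole drive with zeroes; bs just uses bigger blocks for speed
sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress

# Optional tin-foil-hat pass with pseudorandom data before the zero run
sudo dd if=/dev/urandom of=/dev/sdX bs=4M status=progress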

And if you're concerned that some three-letter agency is interested in your data, you can use DBAN, which does pretty much the same thing as dd but automates the process and (afaik) does some extra magic to erase all the data more thoroughly. But in general, if you're worried enough about that scenario, I'd suggest using an arc furnace and literally melting the drives into an exciting new alloy.

11

I'm not quite sure if electronics fit in with this community, but maybe some of you could point me in the right direction with ESPHome and an IR transmitter to control the mini-split heat pump in my garage.

The thing is the cheapest one I could find (I should've paid more, but that's another story). It's rebranded cheap Chinese crap, and while the vendor advertised that you could control it over WiFi, I didn't find any information beyond 'use SmartApp to remote control' (or whatever that software was called), which is nowhere to be found, and I don't want to let that thing onto the internet anyway.

So, IR to the rescue. I had an 'infrared remote control module' (like this) lying around, and with an Arduino Uno I could capture IR codes from the remote without issues.

But transmitting those back out seems to be a bit more challenging. I believe I've got the configuration in place, and I even attempted to control our other heat pump with the IR Remote Climate component, which should be supported out of the box.
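
Roughly what I have in the ESPHome config at the moment (from memory, so the GPIO pin and the climate platform below are just placeholders):

# IR LED hooked up to one of the NodeMCU GPIOs
remote_transmitter:
  pin: GPIO4
  carrier_duty_percent: 50%

# The other heat pump, via one of the supported IR climate platforms
climate:
  - platform: coolix
    name: "Other heat pump"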

I tried to power the IR LED straight from a NodeMCU pin (most likely a bad idea) and via an IRFZ44N MOSFET (massive overkill, but it's what I had around) from the 3.3V rail. The circuit itself seems to work, and if I replace the IR LED with a regular one it's very clear that the LED lights up when it should.

However, judging by the amount of IR light I can see through a cellphone camera, it feels like either the IR LED is faulty (very much a possibility, it's what you can expect from a 1€ kit) or I'm driving it wrong somehow.

Any ideas on what's wrong?

40
submitted 1 year ago by IsoKiero@sopuli.xyz to c/linux@lemmy.ml

I think that installation was originally 18.04, installed when it was released, so a while ago anyway, and I've been upgrading it as new versions roll out. With the latest upgrades and the snapd software it has become more and more annoying to keep the operating system happy and out of my way so I can do whatever I need to do on the computer.

Snap updates have been annoying, and they've randomly (and temporarily) broken stuff while some update process was running in the background, but as a whole reinstallation is a pain in the rear, I've just swallowed the annoyance and kept the thing running.

But today, when I had planned to spend the day on paperwork and other "administrative" things I've been pushing off due to life being busy, I booted the computer and the primary monitor was dead, the secondary had a resolution of something like 1024x768, the nvidia drivers were absent and usability in general just wasn't there.

After a couple of swear words I thought, OK, I'll fix this, I'll install all the updates and make the system happy again. But no. That's not going to happen, at least not very easily.

I'm running LUKS encryption and thus I have a separate /boot partition, 700MB of it. I don't remember if the installer recommended that or if I just threw some reasonable-sounding amount at the installer. No matter where it originally came from, it should be enough (this other Ubuntu I'm writing this on has 157MB stored on /boot). I removed older kernels, but the installer still claims that I need at least 480MB (or something like that) of free space on /boot, while the single kernel image, initrd and whatever crap goes with them consumes 280MB (or so). So apt just fails on upgrade as it can't generate a new initrd or whatever it tries to do.
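
For reference, this is roughly the kind of cleanup I went through (the exact kernel package names will of course differ per system):

# How much space is actually used and free on the boot partition
df -h /boot

# List installed kernel packages to see what could be removed
dpkg -l 'linux-image-*'

# Purge old kernels and other packages that are no longer needed
sudo apt autoremove --purge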

So I grabbed my Ventoy drive, downloaded the latest Mint ISO onto it, and instead of the productive things I had planned I'll spend a couple of hours reinstalling the whole system. It'll be quite a while before I install Ubuntu on anything.

And it's not just this one broken update; like I mentioned, I've had a lot of issues with the setup and at least the majority of them have been caused by Ubuntu and its package management. This was just the tipping point to finally leave that abusive relationship with my tool and set things up so that I can actually use the computer instead of figuring out what's broken now and what's next.

5
submitted 1 year ago* (last edited 1 year ago) by IsoKiero@sopuli.xyz to c/homeassistant@lemmy.world

Maybe this hivemind can help out with debugging a Z-Wave network. I recently installed two more devices on the network (currently up to 15), which consists of two repeaters, light switches, wall plugs, a thermostat and a couple of battery-operated motion sensors.

Before the latest addition everything worked almost smoothly; every now and then the motion sensor messages didn't go through, but it was rare enough that I didn't pay too much attention to it, as I have plenty of other stuff to do besides tinkering with the occasional hiccup in home automation.

However, for the last 48 hours (or so) the system has become unreliable enough that I need to do something about it. I tried to debug the messages a bit, but I'm not too familiar with what to look for. These messages are frequent, though, and they seem to be a symptom of the issue:

Dropping message with invalid payload

[Node 020] received S2 nonce without an active transaction, not sure what to do with it

Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.

Especially the 'invalid payload' message appears constantly in the logs. I'd guess that one of the devices is malfunctioning, but another option is that there's somehow a loop in the network (I did attempt to reconfigure the whole thing, which didn't change much) or that my RaZberry 7 Pro is faulty.

Could someone give me a hint on how to proceed and verify which of those might be the case?

Edit: I'm running Home Assistant OS on a Raspberry Pi 3.

7
submitted 1 year ago* (last edited 1 year ago) by IsoKiero@sopuli.xyz to c/homeassistant@lemmy.world

I've been trying to get a bar graph of Nordpool electricity prices, but for some reason the graph style won't change no matter how I configure it.

I'm running Home Assistant OS (or whatever it's called) on a Raspberry Pi 3:

  • Home Assistant 2023.10.1
  • Supervisor 2023.10.0
  • Operating System 10.5
  • Frontend 20231005.0 - latest

Currently my configuration for the card is like this:

type: custom:mini-graph-card
name: Pörssisähkö
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    group-by: hour
    color: '#00ff00'
    show:
      graph: bar

But no matter how I change that, the graph stays the same, and there are also other options, like line graph with/without fill, which don't work as expected. Granted, I'm not that familiar with YAML or Home Assistant itself, but this is something I'd expect to "just work", as the configuration for mini-graph-card is quite simple. It displays the correct data from the sensor, but only as a line graph.

Is this something a recent update broke, or am I doing something wrong? I can't see anything immediately wrong in any of the logs or the JavaScript console.

[-] IsoKiero@sopuli.xyz 46 points 1 year ago

I don't know what I'd pick, but something other than PDF for the task of transferring documents between multiple systems. And yes, I know, PDF has its strengths and there's a reason why it's so widely used, but that doesn't mean I have to like it.

Additionally, all proprietary formats, especially ones which have gained enough users that they're treated like a standard, or a requirement if you want to work with X.

210

cross-posted from: https://derp.foo/post/250090

There is a discussion on Hacker News, but feel free to comment here as well.

22
submitted 1 year ago* (last edited 1 year ago) by IsoKiero@sopuli.xyz to c/selfhosted@lemmy.world

This question has already come around a couple of times, but I haven't found an option which would allow multiple users and multiple OSes (Linux and Windows mostly; mobile support, both Android and iOS, would be nice at least for viewing) to conveniently share the same storage.

This has been an issue on my network for quite some time, and now that I've rebuilt my home server, installed TrueNAS in a VM and am currently organizing my collections over there with Shotwell, the question has become acute again.

digiKam seems promising for everything except organizing the actual files (which I can live with; either Shotwell or a shell script can sort them by EXIF dates), but I haven't tried it yet with Windows, and my Kubuntu desktop seems to only have a snap package of it, without support for an external SQL database.

On "editing" part it would be pretty much sufficient to tag photos/folders to contain different events, locations and stuff like that, but it would be nice to have access to actual file in case some actual editing needs to be done, but I suppose SMB-share on truenas will accomplish that close enough.

The other need-to-have feature is to manage RAW and JPG versions of the same image at least somehow. Even removing the JPGs and leaving only the RAW images would be sufficient.

And finally, I'd really like to have the actual files lying around on a network share (or somewhere) so that they're easy to back up and copy to an external Nextcloud for sharing, and in general to have more flexibility in the future in case something better comes up or my environment changes.
