IsoKiero@sopuli.xyz 8 points 3 days ago

Not that it's really relevant for the discussion, but yes. You can do that, with or without chroot.

That's obviously not the point, but we're already comparing oranges and apples with chroot and containers.

IsoKiero@sopuli.xyz 2 points 6 days ago

All of those are still standing on Firefox's shoulders, and the actual rendering engine of a browser isn't really a trivial thing to build. Sure, they're not going away, and Firefox will likely be around for quite a while too, but the world wide web as we currently know it is changing, and Google and Microsoft are a few of the bigger players pushing the change.

If you're old enough you'll remember the 'Best viewed with on ' banners, and it's not too far off from the future we'll have if the big players get their wishes. Things like Google's office suite, whatever Meta is offering, and pretty much "the internet" as your Joe Average understands it want to adopt technology where it's not possible to block ads or modify the content you're shown in any other way. It's not too far off before your online banking and other services that affect very real life start putting boundaries in place that require a certain level of 'security' from your browser, and you can bet that anything which allows content modification, like an ad blocker, won't qualify under the new standards.

In many places it's already illegal to modify or tamper with DRM-protected content in any way (does anyone remember libdvdcss?), and the plan is to bring similar (more or less) restrictions to the whole world wide web. That would split things into services like the fediverse, which allow browsers like Firefox, and 'the rest', like banking, flight/ticket/hotel/whatever booking sites and big news outlets, which only allow the 'secure' version of a browser. And that of course has very little to do with actual security; they just want control over your device and over what content is fed to you, whether you like it or not.

IsoKiero@sopuli.xyz 53 points 1 month ago

> This is the same as complaining that my job puts a filter on my work computer that lets them know if I’m googling porn at work. You can cry big brother all you want, but I think most people are fine with the idea that the corporation I work for has a reasonable case for putting monitoring software on the computer they gave me.

European point of view: my work computer and the network in general have filters so I can't access porn, gambling, malware and other such stuff. There's monitoring for viruses and malware, which is a pretty normal and well-understood need. BUT. It is straight up illegal for my employer to actively monitor the content of my email (they'll of course have filtering for incoming spam and such), my chats on Teams or whatever, or to otherwise intrude on my privacy, even at work.

There are of course mechanisms in place where they can access my email if anything work related requires it. So if I'm lying in a hospital or something, they are allowed to read work-related emails from my inbox, but anything personal is protected by the same laws which apply to traditional letters and other communication.

Monitoring 'every word' is just not allowed, no matter how good your intentions are. And that's a good thing.

IsoKiero@sopuli.xyz 36 points 1 month ago

I don't have an answer for you, but Alec over at Technology Connections made a video a few days ago related to the topic. That might not have the answer for you either, but as his videos (and there's a ton of those, even about refrigerators) are among the best on YouTube, it's worth checking out.

But as a rule of thumb, new materials and hardware are better on pretty much every metric. And if your current one doesn't work properly anymore, it most likely uses way more power than it should, as the coolant flow/insulation/something isn't in fully working condition and thus the compressor needs to run more often than on a new unit.

IsoKiero@sopuli.xyz 42 points 2 months ago

Medvedev found the keys to the booze cabinet again? They seem to happily forget that Moscow is well within reach of multiple NATO countries by now. Obviously a ton of things would need to change before anyone with a gun is standing on Red Square, but Finland, Sweden, Estonia and Poland (among others) are quite capable of hitting the Kremlin (in theory, and in practice if needed) with fighter jets in less than 30 minutes. Additionally, their ports opening onto the Gulf of Finland are within reach of both Finns and Estonians with traditional artillery, and at least we in Finland are pretty capable and accurate with our hardware.

So even if they find some old Soviet relic that's still functional, NATO has multiple options to level several Russian cities before their missile hits the ground. A nuclear attack against Ukraine would of course be a humongous tragedy with a terrible toll in civilian casualties, but I'm pretty confident it would be the last thing the Russia we currently know would do as a country.

IsoKiero@sopuli.xyz 41 points 11 months ago

dd. It writes to the disk at the block level and doesn't care if there's any kind of filesystem or RAID configuration in place; it just writes zeroes (or whatever you ask it to write) to the drive and that's it. Depending on how tight your tin foil hat is, you might want to write a couple of runs from /dev/zero and /dev/urandom to the disk before handing it over, but in general a single full run from /dev/zero makes it pretty much impossible for any Joe Average to get anything out of it.

And if you're concerned that some three-letter agency is interested in your data, you can use DBAN, which does pretty much the same as dd but automates the process and (afaik) does some extra magic to erase the data more thoroughly. But if you're worried enough about that scenario, I'd suggest using an arc furnace and literally melting the drives into an exciting new alloy.
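For anyone who wants to see the mechanics without sacrificing a drive, the single-pass zero wipe can be rehearsed on a scratch file (the file name and sizes here are arbitrary examples; on real hardware you'd point of= at the device node, e.g. /dev/sdX, after triple-checking the name):

```shell
# Rehearsal of a single-pass zero wipe on a scratch file instead of a real device.
truncate -s 1M demo.img                                         # stand-in "drive"
printf 'top secret' | dd of=demo.img conv=notrunc status=none   # plant some data
# One full pass of zeros over the whole "device"; on a real drive this would be
# something like: dd if=/dev/zero of=/dev/sdX bs=4M status=progress
dd if=/dev/zero of=demo.img bs=64K count=16 conv=notrunc,fsync status=none
```

Afterwards every byte of the file compares equal to /dev/zero, which is exactly the state a wiped drive is left in.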

IsoKiero@sopuli.xyz 36 points 1 year ago

The bare feet are a bit clickbaity in the headline. That alone doesn't mean much, but when it happens in an area where you should be wearing full protective gear, in the (supposed to be) sterile part of manufacturing, it's of course a big deal. But it would be an equally big deal if you just strolled in there in jeans and a t-shirt, wearing boots you'd stepped in dog shit with on your way to work. And even then it's not even close to being the biggest issue at a plant that constantly ignored all the safety protocols, including ignoring test results which told them that the product was faulty.


I'm not quite sure if electronics fit this community, but maybe some of you could point me in the right direction with ESPHome and an IR transmitter to control the mini-split heat pump in my garage.

The thing is the cheapest one I could find (I should've paid more, but that's another story). It's rebranded cheap Chinese crap, and while the vendor advertised that you could control it over WiFi, I didn't find any information beyond 'use SmartApp to remote control' (or whatever that software was called). That app is nowhere to be found, and I don't want to let the thing onto the internet anyway.

So, IR to the rescue. I had an 'infrared remote control module' (like this) lying around, and with an Arduino Uno I could capture IR codes from the remote without issues.

But transmitting those back out seems to be a bit more challenging. I believe I have the configuration in place, and I even attempted to control our other heat pump with the IR Remote Climate component, which should have support out of the box.

I tried to power the IR LED straight from a NodeMCU pin (most likely a bad idea) and via an IRFZ44N MOSFET (massive overkill, but it's what I had around) from the 3.3V rail. The circuit itself seems to work, and if I replace the IR LED with a regular one it's very clear that the LED lights up when it should.

However, judging by the amount of IR light I can see through a cellphone camera, it feels like either the IR LED is faulty (very much a possibility; what can you expect from a 1€ kit) or I'm driving it wrong somehow.
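For reference, the transmit side I'm describing boils down to roughly this ESPHome snippet. The GPIO number and climate platform are assumptions on my part, not a known-good config; the 50% duty cycle on the ~38 kHz carrier is the usual starting point for IR LEDs, and since an ESP8266 pin can only source around 12 mA, the transistor/MOSFET stage really is needed for decent range:

```yaml
# ESPHome sketch; pin and platform choice are assumptions, not a verified config
remote_transmitter:
  pin: GPIO4
  carrier_duty_percent: 50%   # IR receivers expect a modulated carrier, typically ~38 kHz

climate:
  - platform: coolix          # substitute whichever IR Remote Climate platform matches the unit
    name: "Garage minisplit"
```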

Any ideas on what's wrong?

40 points, submitted 1 year ago by IsoKiero@sopuli.xyz to c/linux@lemmy.ml

I think the installation was originally 18.04, and I installed it when it was released. A while ago, anyway, and I've been upgrading it as new versions rolled out. With the latest upgrade and the snapd software it has become more and more annoying to keep the operating system happy and out of my way so I can do whatever I need to do on the computer.

Snap updates have been annoying and have randomly (and temporarily) broken stuff while some update process was running in the background, but as a whole reinstallation is a pain in the rear, I've just swallowed the annoyance and kept the thing running.

But today, when I had planned to spend the day on paperwork and other "administrative" things I've been pushing off because life has been busy, I booted the computer and found the primary monitor dead, the secondary at a resolution of something like 1024x768, the nvidia drivers absent, and usability in general just not there.

After a couple of swear words I thought, OK, I'll fix this: I'll install all the updates and make the system happy again. But no. That's not going to happen, at least not very easily.

I'm running LUKS encryption and thus have a separate /boot partition, 700MB of it. I don't remember if the installer recommended that or if I just threw some reasonable-sounding amount at the installer. No matter where it originally came from, it should be enough (the other Ubuntu machine I'm writing this on has 157MB stored on /boot). I removed older kernels, but the installer still claims it needs at least 480MB (or something like that) of free space on /boot, while a single kernel image, initrd and whatever else it includes consumes 280MB (or so). So apt just fails on upgrade, as it can't generate the new initrd or whatever it's trying to do.
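For context, the dance apt forces you into looks roughly like this (the kernel version below is hypothetical, and the commands that modify the system are commented out on purpose):

```shell
# See what is actually eating the /boot partition
df -h /boot 2>/dev/null || df -h /
ls -lh /boot 2>/dev/null || true
# List installed kernel packages ('ii' = installed), then purge the old ones
# and retry the upgrade. Left commented out since they change the system:
# dpkg --list 'linux-image-*' | awk '/^ii/ {print $2}'
# sudo apt-get autoremove --purge
# sudo dpkg --purge linux-image-5.15.0-86-generic   # hypothetical old version
# sudo apt-get -f install
```

In my case even that wasn't enough, because a single current kernel plus its initrd already blew past what the upgrade tooling demanded as free space.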

So I grabbed my Ventoy drive, downloaded the latest Mint ISO onto it, and instead of doing the productive things I had planned I'll spend a couple of hours reinstalling the whole system. It'll be quite a while before I install Ubuntu on anything again.

And it's not just this one broken update; like I mentioned, I've had a lot of issues with the setup, and at least the majority of them were caused by Ubuntu and its package management. This was just the tipping point to finally leave that abusive relationship with my tool and set things up so that I can actually use the computer instead of figuring out what's broken now and next.

5 points, submitted 1 year ago* (last edited 1 year ago) by IsoKiero@sopuli.xyz to c/homeassistant@lemmy.world

Maybe this hivemind can help me debug a Z-Wave network. I recently installed two new devices on the network, which is now up to 15 devices: two repeaters, light switches, wall plugs, a thermostat and a couple of battery-operated motion sensors.

Before the latest addition everything worked almost smoothly; every now and then a motion sensor message didn't go through, but it was rare enough that I didn't pay too much attention, as I have plenty of other things to do than tinker with the occasional hiccup in home automation.

However, for the last 48 hours (or so) the system has become unreliable enough that I need to do something about it. I tried to debug the messages a bit, but I'm not too familiar with what to look for. These messages are frequent, though, and seem to be a symptom of the issue:

Dropping message with invalid payload

[Node 020] received S2 nonce without an active transaction, not sure what to do with it

Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.

Especially the 'invalid payload' message appears constantly in the logs. I'd guess that one of the devices is malfunctioning, but other options are that there's somehow a loop in the network (I did attempt to reconfigure the whole thing, which didn't change much) or that my RaZberry 7 Pro is faulty.

Could someone give me a hint on how to proceed and verify which of these is the case?

Edit: I'm running Home Assistant OS on a Raspberry Pi 3.

7 points, submitted 1 year ago* (last edited 1 year ago) by IsoKiero@sopuli.xyz to c/homeassistant@lemmy.world

I've been trying to get a bar graph of Nord Pool electricity prices, but for some reason the graph style won't change no matter how I configure it.

I'm running Home Assistant OS (or whatever it's called) on a Raspberry Pi 3:

  • Home Assistant 2023.10.1
  • Supervisor 2023.10.0
  • Operating System 10.5
  • Frontend 20231005.0 - latest

Currently my configuration for the card is like this:

type: custom:mini-graph-card
name: Pörssisähkö
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    group-by: hour
    color: '#00ff00'
    show:
      graph: bar

But no matter what I change, the graph stays the same, and other options, like a line graph with/without fill, don't work as expected either. Granted, I'm not that familiar with YAML or Home Assistant itself, but this is something I'd expect to "just work", as the configuration for mini-graph-card is quite simple. It displays the correct data from the sensor, but only as a line graph.

Is this something a recent update broke, or am I doing something wrong? I can't see anything immediately wrong in any logs or in the JavaScript console.
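For what it's worth, if I'm reading the mini-graph-card README right, group_by (with an underscore) and show are card-level options rather than per-entity ones, so the indentation may be the actual problem. A sketch of how I'd expect a working bar config to look (an untested assumption on my part):

```yaml
type: custom:mini-graph-card
name: Pörssisähkö
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    color: '#00ff00'
group_by: hour
show:
  graph: bar
```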

IsoKiero@sopuli.xyz 46 points 1 year ago

I don't know what to pick, but something other than PDF for the task of transferring documents between multiple systems. And yes, I know, PDF has its strengths and there's a reason it's so widely used, but that doesn't mean I have to like it.

Additionally, all proprietary formats, especially ones that have gained enough users that they're treated like a standard, or a requirement if you want to work with X.


cross-posted from: https://derp.foo/post/250090

There is a discussion on Hacker News, but feel free to comment here as well.

IsoKiero@sopuli.xyz 40 points 1 year ago

DNS is a quite mature technology; it's just as complex as it needs to be and not a bit more. It's a very robust system which has been a big part of the backbone of the internet as we know it for decades, and it's responsible for quite a large chunk of things working as intended, globally, for millions and billions of people all day, every day.

It's not hard to learn per se (you can explain it at a basic level to any layman in 15 minutes or so); it's just a complex system, and understanding complex systems isn't always easy or fast. Running your own DNS server/forwarder for a private /24 subnet is a rather trivial thing to do, but doing it well requires that you understand at least some of the underlying technology.

You really need to learn to walk first and build on that before you run. It's a fundamental piece of technology, and there are no shortcuts with it due to the nature of DNS services. You can throw whatever into a container by following step-by-step instructions and call it a day, but that alone doesn't give you the knowledge to understand what's going on under the hood. That's just how things are, and if I had my way, that same principle would apply to everything, especially if it's going to face the public internet.
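To put the "rather trivial" part in perspective, a caching forwarder for a private /24 is only a handful of dnsmasq lines. The interface, addresses and names below are made-up examples, not from any real setup:

```
# /etc/dnsmasq.conf -- minimal forwarder/resolver for a private /24
interface=eth0                  # listen only on the LAN side
listen-address=192.168.1.1
domain=lan                      # local names live under .lan
local=/lan/                     # never forward .lan queries upstream
server=9.9.9.9                  # upstream resolver for everything else
address=/nas.lan/192.168.1.10   # a static local record
cache-size=1000
```

Getting this running takes minutes; knowing why each line is there (and what breaks when it's wrong) is the part that takes the learning.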

22 points, submitted 1 year ago* (last edited 1 year ago) by IsoKiero@sopuli.xyz to c/selfhosted@lemmy.world

This question has come around a couple of times already, but I haven't found an option which would allow multiple users on multiple OSes (Linux and Windows mostly; mobile support, both Android and iOS, would be nice at least for viewing) to conveniently share the same storage.

This has been an issue on my network for quite some time, and now that I've rebuilt my home server, installed TrueNAS in a VM and started organizing my collections over there with Shotwell, the question has become acute again.

Digikam seems promising for everything except organizing the actual files (which I can live with; either Shotwell or a shell script can sort them by EXIF dates), but I haven't tried it yet on Windows, and my Kubuntu desktop seems to only have a snap package of it, without support for an external SQL server.

On the "editing" side it would be pretty much sufficient to tag photos/folders with different events, locations and stuff like that, but it would be nice to have access to the actual file in case some real editing needs to be done. I suppose an SMB share on TrueNAS will accomplish that closely enough.

The other need-to-have feature is to manage RAW and JPG versions of the same image at least somehow. Even removing the JPGs and leaving only the RAW images would be sufficient.

And finally, I'd really like to have the actual files lying on a network share (or somewhere similar) so that they're easy to back up or copy to an external Nextcloud for sharing, and so that in general I have more flexibility in the future in case something better comes up or my environment changes.
