Release v0.6.0 · ilya-zlobintsev/LACT
(github.com)
This is a big release, adding several major new features:
- Nvidia support! LACT now works with Nvidia GPUs for all of the core functionality (monitoring, clocks configuration, power limits and fan control). It uses the NVML library, so unlike the Nvidia control panel it doesn't rely on X11 extensions and works under Wayland.
- Multiple profiles for configuration. Currently it is not possible to switch them automatically, but they are configurable through the UI or the unix socket.
- Clocks configuration now works on AMD iGPUs (at least RDNA2). Previously the clocks information was not parsed properly due to the lack of VRAM settings on integrated GPUs.
- Zero RPM mode settings on RDNA3. Currently this requires a linux-next kernel, and the functionality is expected to land in kernel 6.13. This resolves a long-standing issue on RDNA3 where the fan was always disabled below a certain temperature, even when using a custom curve.
There are many other improvements as well, such as better-looking and more efficient plot rendering in the historical charts window (thanks to @In-line) and a Fedora COPR repository providing LACT packages (currently in testing).
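The zero-RPM behaviour described above can be pictured with a small sketch (purely illustrative, not LACT's actual code): a custom fan curve combined with a hard zero-RPM floor, below which the fan stays off regardless of what the curve says — exactly the situation the RDNA3 fix addresses.

```python
def fan_speed(temp_c, curve, zero_rpm_below=50):
    """Return fan speed (0.0-1.0) for a temperature, given a sorted
    list of (temperature, speed) curve points.

    Hypothetical model: below `zero_rpm_below` the fan is fully
    stopped, even if the curve specifies a nonzero speed there.
    """
    if temp_c < zero_rpm_below:
        return 0.0  # zero-RPM mode: fan fully stopped
    # Clamp to the ends of the curve.
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    # Linear interpolation between the two surrounding points.
    for (t0, s0), (t1, s1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

curve = [(40, 0.2), (60, 0.4), (80, 0.8), (90, 1.0)]
print(fan_speed(30, curve))  # 0.0 — below the zero-RPM threshold
print(fan_speed(70, curve))  # roughly 0.6, interpolated
```

With a forced zero-RPM mode (the pre-6.13 behaviour) the curve point at 40 °C is silently ignored; exposing the setting lets users turn that floor off.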
Nvidia showcase:
Full list of changes:
🚀 Features
- Add support for multiple settings profiles (#327)
- Show dialog when attempting to reconnect to daemon
- Include device info and stats responses in debug snapshot
- Improve plot rendering, use supersampling and do it in a background thread
- [breaking] Add initial Nvidia support (#388)
- Implement clocks control on Nvidia (#398)
- Add special case for invalid throttle mask
- Add snapshot command to CLI
- Add RDNA3 zero RPM setting (#393)
🐛 Bug Fixes
- Getting pci info in snapshot
- Retry reading p-states if the value is nonsensical
- Increase retry intervals when evaluating GPUs at start
- Make throttling flags ellipsized to avoid massively oversized window (#402)
- Deduplicate throttle status bits
- Update amdgpu-sysfs with iGPU fixes, add steam deck quirk (#407)
- Fedora spec non-default builds (#410)
🚜 Refactor
- Make info page a relm component (#404)
- Drop redundant ClockSettings structure in the ui
📚 Documentation
- Update issue template to mention common RDNA3 problems
- Fix issue template yaml
- Move description to label in issue template
⚙️ Miscellaneous Tasks
- Bump version
- Update docs, enforce minimum rust version
- Set codegen-units=1 to decrease binary size in release (#390)
- Include service log in debug snapshot
- Drop old bench feature
- Bump dependencies
- Bump version
- Remove unused Cargo features (#405)
Developer
- Automatically create release on tag push
- Trigger workflow on tag push
- Bump workflow rust version
- Add debug builds to makefile
- Skip building signed packages if signing secret is not found
- Don't run rust checks on master pushes, only PRs
I'd suspect the controller or cable first.
You say that as if it's a good thing. If your HDD is "literally dying", you want the filesystem to fail safe to make you (and applications) aware, not continue as if nothing happened. extfs doesn't fail here because it cannot even detect that something is wrong.
btrfs has its own share of bugs but, in theory, this is actually a feature.
Not any issue that you know of. For all extfs (and, by extension, you) knows, the disk/cable/controller/whatever could have mangled your most precious files and it would be none the wiser, happily passing the mangled data on to applications.
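A toy model of why a checksumming filesystem can catch this while one without data checksums cannot (a sketch, not how btrfs is actually implemented): store a CRC alongside each block at write time and verify it on every read.

```python
import zlib

class ChecksummedStore:
    """Minimal stand-in for checksum-on-read: each block carries a CRC
    computed at write time. A filesystem without data checksums has no
    way to tell a mangled block from a valid one."""

    def __init__(self):
        self.blocks = {}  # block_id -> (mutable data, crc)

    def write(self, block_id, data):
        self.blocks[block_id] = (bytearray(data), zlib.crc32(data))

    def read(self, block_id):
        data, crc = self.blocks[block_id]
        if zlib.crc32(bytes(data)) != crc:
            raise IOError(f"checksum mismatch in block {block_id}")
        return bytes(data)

store = ChecksummedStore()
store.write(0, b"most precious file contents")
# Simulate the disk/cable/controller silently flipping a byte.
store.blocks[0][0][3] ^= 0xFF
try:
    store.read(0)
except IOError as e:
    print(e)  # the corruption is detected instead of being passed along
```

Without the CRC comparison, `read` would happily return the flipped bytes — which is the extfs failure mode being described.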
You have backups of course (right?), so you might say that's not an issue — but if the filesystem's integrity is compromised, that can permeate into your backups, because the backup tool reading those files is none the wiser too; it relies on the filesystem to return the correct data. If you don't manually verify each and every file at a higher level (e.g. manual inspection or hashing) before pruning old backups, this has the potential for actual data loss.
If your hardware isn't handling the storage of data as it should, you want to know.
While the behaviour upon encountering an issue is in theory correct, btrfs is quite fragile. Hardware issues shouldn't happen, but when they do, you're in real trouble, because btrfs has no option to continue operating while the integrity of a part of it is compromised.
btrfs-restore disables btrfs' integrity checks, emulating extfs's failure mode, but it is only for extracting files from the raw disks, not for continuing to use them as a filesystem.

I don't know enough about btrfs to know whether this is feasible, but perhaps it could be made a bit more log-structured, such that old data is overwritten first. That would allow you to simply roll back the filesystem state to any of a wide range of previous generations, some of which are hopefully not corrupted. You'd then discard the newer generations and keep using the filesystem.
You'd risk losing data written since that generation, of course, but that's often the lesser evil. This isn't applicable to all kinds of corruption, since older generations can become corrupted retroactively, but I suspect it would cover a good number of cases.
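The roll-back idea above can be sketched with a toy log-structured store (purely illustrative; real filesystems are vastly more involved): commits append new generations instead of overwriting in place, so a corrupted latest generation can be discarded while older ones stay usable.

```python
class GenerationalStore:
    """Toy append-only store: each commit snapshots the whole state as
    a new generation; rollback discards everything newer."""

    def __init__(self):
        self.generations = [{}]  # index = generation number

    def commit(self, updates):
        snap = dict(self.generations[-1])
        snap.update(updates)
        self.generations.append(snap)
        return len(self.generations) - 1

    def read(self, key, generation=-1):
        return self.generations[generation][key]

    def rollback(self, generation):
        # Discard every generation newer than `generation`.
        del self.generations[generation + 1:]

fs = GenerationalStore()
g1 = fs.commit({"a.txt": b"v1"})
fs.commit({"a.txt": b"v2, written just before things went bad"})
fs.rollback(g1)          # newer data is lost, but the store stays usable
print(fs.read("a.txt"))  # b'v1'
```

The trade-off is exactly the one described: anything written after the chosen generation is gone, but the filesystem as a whole remains mountable and writable.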