[-] PAPPP@lemmy.sdf.org 17 points 1 month ago

I initially didn't do enough of that in my PhD thesis (CS, about some weird non-frame-based imaging tech that is still only of academic interest), and my committee demanded I add more stake-claiming favorable comparisons against other tech to the introduction before I submitted.

[-] PAPPP@lemmy.sdf.org 8 points 3 months ago

Neat.

I set up some basic compute stuff with the ROCm stack on a 6900HX-based mini computer the other week (mostly to see if it was possible, as there are some image processing workloads a colleague was hoping to accelerate on a similar host) and noticed that the docs occasionally pretend you can use dynamically allocated GTT memory for compute tasks, but there's no evidence of it ever having worked for anyone.

That machine had flexible firmware and 64GB of RAM stuffed in it, so to make it work I just shuffled the boot-time allocation in the EFI to give 8GB to the GPU - but it's not elegant.

It's also pretty clumsy to actually make things run: lots of "set the magic environment variable because the toolchain will mis-detect the architecture of your unsupported card" and "inject this wall of text into your CMakeLists to override libraries with our cooked versions." Then it performs like an old GTX 1060, which is on one hand impressive for an integrated part in a fairly low-wattage machine, and on the other hand is competing with a low-to-mid range card from 2016.
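For the curious, the "magic environment variable" dance looked something like the sketch below (values are for my hardware; gfx1035 integrated parts get pointed at the supported gfx1030 ISA, other cards need other overrides):

```sh
# Hypothetical session setup for an unsupported RDNA2 iGPU
# (e.g. the Radeon 680M in a 6900HX, which reports as gfx1035):
# tell the ROCm runtime to treat it as a supported gfx1030 part.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Pin the iGPU in case more than one agent is visible.
export HIP_VISIBLE_DEVICES=0

./your_hip_program   # hypothetical workload
```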

Pretty on-brand, really; they've been fucking up their compute stack since before any other vendor was doing the GPGPU thing (abandoning CTM for Stream in like a year).

I think the OpenMP situation was the least jank of the ways I tried to get something to offload on an APU, but it was also one of the later attempts, so maybe I was just getting used to its shit.
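For reference, the OpenMP offload path itself is just standard pragmas - the jank is all in the toolchain setup. A minimal sketch, assuming a ROCm-enabled clang (the offload-arch flag, and whether you also need the HSA override above, depend on your card):

```c
// Build with something like:
//   clang -O2 -fopenmp --offload-arch=gfx1030 saxpy.c
#include <stdio.h>

#define N (1 << 20)
static float x[N], y[N];

int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Offload the loop to the GPU; map() moves the arrays explicitly.
    #pragma omp target teams distribute parallel for \
        map(to: x[0:N]) map(tofrom: y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]); // expect 4.0
    return 0;
}
```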

[-] PAPPP@lemmy.sdf.org 11 points 6 months ago

Don't trust that they're 100% compatible with mainline Linux; ChromeOS carries some weird patches and proprietary stuff up-stack.

I have a little Dell Chromebook 11 3189 that I did the Mr. Chromebox Coreboot + Linux thing on. A couple years ago I couldn't get the (weird i2c) input devices to work right; that has since been fixed in the upstream Coreboot tables and/or Linux, but (as of a couple months ago) they still don't play nice with smaller alternative OSes like NetBSD or a Haiku nightly.

The audio situation is technically functional but still a little rough; the way the codec in Bay/Cherry Trail devices is half chipset, half external occasionally leads to the audio configuration crapping itself in ways that take some patience and/or expertise to deal with (why do I suddenly have 20 inoperable sound cards in my PulseAudio settings?).

This particular machine also does some goofy bullshit with two IMUs, one per half, instead of a fold-back sensor, so the rotation/folding stuff via iio sensors is a little quirky.

But, they absolutely are fun, cheap hacker toys that are generally easy targets.

[-] PAPPP@lemmy.sdf.org 6 points 8 months ago

They don't have to be specified in a monolithic fashion, but some things - like the input plumbing and session management examples I made - do have to be specified for software to work when running under different compositors. FD.o basically exists because we already learned this lesson with other compat problems and solved it without putting it in the X monolith - it's why things like ICCCM and EWMH happened; there were more details than the existing APIs carried that everyone needed to agree on to make software interoperate.

Competing implementations are great, but once you have significant inertia behind competing implementations that are not compatible or at least interoperable, you've fragmented the already-small Linux market share into a maze of partially-incompatible micro-platforms. We're not going to have compositing and non-compositing; we're going to have three-ish incompatible sets of protocols for basic functionality: KDE/Qt [kde], Gnome/Gtk (who aren't even doing documented protocols), and everyone else (mostly [wlr] extensions).

Looking at the slow, bitter process of extending or replacing components once implementations that rely on them exist, that's not something to count on. Remember how it took 15 years of contention to eventually transition to D-Bus after CORBA/Bonobo and DCOP? That's what's about to happen with things like the incompatible Gtk and Qt session management schemes. And that resolution was forced by the old HAL system using D-Bus, not by the other parties involved getting their shit together of their own accord.

One place we're about to see innovation is Wayland-stack-bypassing workarounds. Key remapping is currently in that category - the Wayland protocols suite punted - so instead keyd sniffs all the HID traffic at the evdev and/or uinput layer and outputs the rule-edited streams to virtual HID devices. That one does have a certain global elegance (it works on ttys!), but it's also a layering violation in a privileged process.
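To illustrate, the entirety of a basic keyd policy lives in a little config file completely outside the Wayland stack - a minimal sketch of /etc/keyd/default.conf:

```ini
[ids]
# Apply to all keyboards.
*

[main]
# Tap for Escape, hold for Control. Works in X, Wayland, and bare ttys,
# because the rewrite happens at the evdev/uinput layer.
capslock = overload(control, esc)
```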

[-] PAPPP@lemmy.sdf.org 22 points 8 months ago* (last edited 8 months ago)

I will preface this by saying that Xorg is obviously an unmaintainable mess of legacy decisions and legacy code, and I have both a machine that runs Hyprland and a machine that usually starts Plasma in Wayland mode, so the Wayland situation is getting to be more-or-less adequate, with persistent irritations here and there... but Wayland is trauma-driven development. It's former Xorg developers minimizing their level of responsibility for actual platform code while controlling the protocol spec, in a position to give up on X in time with their preferred successor.

Essentially all of the platform is being outsourced to other libraries and toolkits, which are all doing their own incompatible things (which is why we have like 8 xdg-desktop-portal back-ends with different sets of deficiencies - portals were probably designed at the wrong level of abstraction), and which all have to figure out how to work around the limitations in the protocols. Or they can spend years bikeshedding about extensions over theoretical security concerns in features that every other remotely modern platform supports.

Some of that outsourcing has been extremely successful, like Pipewire.

Some attempts have been less successful, like the ongoing lack of a reasonable way to handle input plumbing in a Wayland environment (think auto-type and network-KVM functionality): they seem to have imagined their libinput prototype spun out of Weston would serve as complete generic input plumbing, and it's barely adequate for common hardware devices. Hopefully it's not too late to get something adequate widely standardized upon, but I'm increasingly afraid we missed the window of opportunity.

Some things that had to be standardized to actually work - like session management - have been intentionally abdicated, and now KDE and Gnome have each become married to their own mutually-incompatible half-solution, so we're probably boned on that ever working properly until the next "start over to escape our old bad decisions" cycle... which, if history holds, isn't that far away.

We're 15 years into Wayland, and only in the last few years has it made it from "barely a tech demo" through "Linux in the early 2000s" broken, and in the last year to "problems with specific features" broken... and it is only 4 years younger than the XFree86 -> Xorg fork.

[-] PAPPP@lemmy.sdf.org 13 points 11 months ago

The argument was that if you put all your static resources in /usr, you can mount it read-only (for integrity, or to use a ROM on something embedded-ish) or from a shared volume (it's not uncommon to NFS-mount a common /usr for a pool of similar managed machines).

...that said, many of the same people who made that argument are also the ones who made systemd refuse to boot without /usr populated, so that feature is now less useful: /usr has to be something your initramfs/initcpio/whatever preboot environment mounts, rather than being mounted by the normal fstab/mount behavior, and the initcpio/initramfs/dracut schemes for doing that all (1) require a redundant set of tools and network configs in the preboot environment, (2) are different from each other, and (3) are brittle in annoying ways.

It still works OK if you're using a management tool like Warewulf to manage configs and generate all the relevant filesystems and such, but it's a lot more fucking around than a line in fstab to mount /usr once the real system is up, like the old days.
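For contrast, the old-days version really was one ordinary fstab line (hostname and export path hypothetical) - the catch being that systemd now wants /usr mounted from the preboot environment instead:

```
# /etc/fstab: read-only NFS /usr shared by a pool of similar machines
fileserver:/export/usr  /usr  nfs  ro  0  0
```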

[-] PAPPP@lemmy.sdf.org 6 points 1 year ago

The 2.5 development-only tree had a ton of big, long, behind-the-scenes projects that weren't visible to users until stable 2.6 dropped and everything suddenly changed.

Like a complete redesign of the scheduling code (especially, but not exclusively, for multiprocessor systems), swapping out much of the networking stack, and the change from devfs to udev.

If you hold udev up next to devd and devpubd, which solve similar problems on the BSDs, it's a clear leap into "Linux will do bespoke binary interfaces, and DSLs for configuration and policy, and similar traditionally un-UNIX-y things that accept complex state and additional abstractions in exchange for being faster and less brittle to misconfiguration" - the path that typical Linux plumbing has continued down with e.g. systemd.
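The udev rules language is a good example of that trade: a bespoke match-and-act DSL rather than a shell hook. A hypothetical rule:

```
# /etc/udev/rules.d/99-ftdi.rules (hypothetical): when a matching
# USB-serial adapter appears, create a stable symlink and open up perms.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="ttyFTDI", MODE="0666"
```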

A lot of modern Kernel development flow is about never having that kind of divergence and sudden epoch change again.

[-] PAPPP@lemmy.sdf.org 6 points 1 year ago* (last edited 1 year ago)

The CB3-431 is device name EDGAR. You'd most likely pull the write-protect screws and flash a UEFI payload into the firmware, probably using Mr. Chromebox's tooling and payloads. Most modern Chromebooks boot Coreboot with a Depthcharge payload, which can either be coerced into booting something different with a lot of effort, or easily swapped for a Tianocore UEFI payload to make the machine behave like a normal PC. Once flashed, it's an ordinary Braswell-generation PC with 4GB of RAM and 32GB of storage.

The S330 is an ARM machine built on a MediaTek MT8173C. Installing normal Linux on ARM Chromebooks is substantially less well-established, but often possible. It looks like those are doable, but you won't get graphics acceleration, and the bootloader situation is a little klutzy.

Of the two, the CB3-431 will be easier and better documented to bend to your will.

The major limitation with Chromebooks is really just that there isn't much onboard storage, so you'll want to pick reasonably light software (a distro where you pick packages on top of a small base install, or at least a lighter spin, will be preferable) and avoid storage-intensive distros (e.g. Nix or the immutable-core-plus-containers schemes, whose packaging models have substantial storage overhead, are probably unsuitable). You may have a little hassle with sound, because many Chromebooks have a goofy half-SoC, half-external-codec sound layout for which the Linux tooling is still improving - a pair of annoying PipeWire and kernel bugs that sometimes cause them to come up wrong and spew log messages got fixed last week, but aren't in a release yet.

They aren't fancy machines, but hacked used Chromebooks make great beaters.

[-] PAPPP@lemmy.sdf.org 9 points 1 year ago

Most of my machines are KDE on X, but I have one where I've been feeling stuff out in Wayland-land. The most appealing thing I've tried has been Hyprland with Waybar. It's a little bit of a kit in traditional WM fashion, but easy to configure from straightforward config files, fairly light, and not "just like this X WM, but broken because of missing Wayland functionality" (I know, I know, it's not technically Wayland deficiencies, it's "not yet complete extensions", because it's all extensions; the Wayland protocol itself does almost nothing).

I've been using Kitty for a terminal emulator and it's pleasing as well.

I haven't found a launcher I love; I have fuzzel right now, and the only major issue is that it doesn't currently support mouse interaction, and I prefer "use whichever input device your hand is on at the time" to keyboard-only.
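To give a sense of the "straightforward config files" claim: wiring Kitty and fuzzel in is a few lines in ~/.config/hypr/hyprland.conf. A minimal sketch:

```
# ~/.config/hypr/hyprland.conf (fragment)
$mod = SUPER

# Terminal and launcher mentioned above
bind = $mod, Return, exec, kitty
bind = $mod, D, exec, fuzzel

# Close the active window
bind = $mod, Q, killactive
```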

[-] PAPPP@lemmy.sdf.org 21 points 1 year ago* (last edited 1 year ago)

Most Chromebooks' firmware is Coreboot, but it runs a Depthcharge payload instead of UEFI (or BIOS or whatever). Mr. Chromebox maintains UEFI Coreboot payloads and install tools for a wide variety of (x86) Chromebooks, which can be used to flash a normal UEFI payload and boot normal OSes. It's strictly possible to boot normal Linux systems on the Depthcharge payload modern Chromebooks use, but uh... here's the Gentoo wiki on it; it's a substantial pain in the ass.

[-] PAPPP@lemmy.sdf.org 15 points 1 year ago* (last edited 1 year ago)

Yup.

I have a little Dell 3189 2-in-1 that I originally got used just to see what the ChromeOS fuss was about and hack on.

I'd rooted it and played with the various hosted/injected Linux options (like Chromebrew and the first-party Linux VM stuff, neither of which was great) while it was under support, but some time after it went AUE I went ahead and flashed a Mr. Chromebox UEFI payload onto it and just slammed normal Linux onto it. It basically "just works", though that's thanks to considerable effort in the Coreboot port and the kernel, because there is a bunch of cheap bullshit in the hardware (badly plumbed i2c input devices, that stupid Bay/Cherry Trail-style half-integrated audio setup, etc.). I had briefly flashed it over a couple years ago, and that hadn't all been smoothed over yet back then.

Lately it's an Arch system for playing with various Wayland options - Hyprland is ricer bullshit, but it actually does a pretty decent job of being not wildly broken compared to the big environments in Wayland mode, tiling makes good use of the not-enough pixels, and the Search key in the left-pinkie position makes a great WM key.

It's not a nice computer - an N3060 with 4GB of RAM, 32GB of eMMC, and a 1366x768 panel is distinctly craptop territory these days - but you can also get them for like $50 now because no one wants past-AUE Chromebooks, and they make nice beaters. Unlike refurb SFF boxes, SBCs, and the other usual sub-$100 beater options, they come with a screen, keyboard, and battery.

[-] PAPPP@lemmy.sdf.org 15 points 1 year ago

In the same kind of vein as ImageMagick, Dave Coffin's dcraw tool at least partly underlies almost every non-proprietary RAW image decoder, and some of the commercial ones (if they don't use the code, they use the constant matrices and such).

Honorable mention to Fabrice Bellard, who initiated both FFmpeg and QEMU among other notable activities; he's not the sole maintainer of any of his major projects anymore, though.

IIRC the Expat XML parser that's embedded everywhere was basically on spare-time maintenance by Clark Cooper and Fred Drake for a couple of decades, but I think they have a few more resources now.

SQLite is a BDFL situation more than single-maintainer, but D. Richard Hipp still has his hands on everything, and there are only a relatively small number of folks with commit access.
