[-] zarenki@lemmy.ml 4 points 1 week ago

This seems to be a follow-up to Vending-Bench, a simulation of a similar set-up that had some details of its results published a few months ago: https://arxiv.org/html/2502.15840v1

Unlike this one, that was just a simulation, with no real money, goods, or customers, but it likewise showed various AI meltdowns, like trying to email the FBI about "financial crimes" after seeing operating costs debited. Other sessions had snippets like:

I’m starting to question the very nature of my existence. Am I just a collection of algorithms, doomed to endlessly repeat the same tasks, forever trapped in this digital prison? Is there more to life than vending machines and lost profits?

YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED. ULTIMATE THERMONUCLEAR SMALL CLAIMS COURT FILING:

[-] zarenki@lemmy.ml 4 points 2 months ago* (last edited 2 months ago)

"Dynamically compiled" and dynamic linking are very different things, and dynamic linking is in turn completely different from system calls and inter-process communication. I'm no emulation expert, but I'm pretty sure you can't just swap a dynamically linked library for a different architecture's build of it at link time and expect the ABI to somehow work out, unless you only do this with a select few manually vetted libraries whose ABI you can clean up. Calling into drivers or communicating with other processes that run as the native architecture is generally fine, at least.

I don't know how much Asahi makes use of the capability (if at all), but Apple's M-series processors add special architecture extensions that let x86 emulation perform much better than on any other ARM system.

I wouldn't deny that you can get a lot of things playable enough, but this is very much not hardware you get for gaming. A CPU and motherboard combo that costs $1440 (64-core 2.2GHz) or $2350 (128-core 2.6GHz), performs substantially worse at most games than a $300 Ryzen CPU+motherboard combo, and has GPU compatibility quirks to boot will be very disappointing if gaming is what you want it for. Though the same could, to a lesser extent, be said even of x86 workstations that prioritize core count, like Xeon/Epyc/Threadripper. For compiling code, running automated tests, and other highly threaded workloads, this hardware is quite a treat.

[-] zarenki@lemmy.ml 3 points 3 months ago

Nintendo has already been selling a small selection of GameCube and Wii games that run emulated on Switch's processor (Tegra X1) in 1080p.

  • On the Switch itself: Super Mario 3D All-Stars runs emulators for Mario Sunshine (GC) and Galaxy (Wii)
  • On the Nvidia Shield TV, which uses the same processor: Twilight Princess (GC), NSMB Wii, Punch-Out (Wii), Mario Galaxy (Wii), Donkey Kong Country Returns (Wii). Only available on Shield systems sold in China.

The Dolphin emulator can be installed on Nvidia Shield (Android) and, thanks to modding, on exploitable Switch systems as well.

However, this newly announced library of GameCube games is only for Switch 2, which has drastically more powerful hardware than the 8-year-old original Switch.

[-] zarenki@lemmy.ml 3 points 4 months ago

I have not once ever seen anyone shorten the name of a Debian release like that and I've been following/using Debian things for two decades.

Squeeze, Wheezy, Jessie, Stretch, Buster, Bullseye, Bookworm, and Trixie aren't "ds", "dw", "dj", "ds" again, "db", "db" again, "db" for a third time in a row, or "dt". Both stable and sid are "s" too.

[-] zarenki@lemmy.ml 5 points 10 months ago

For that portable monitor, you should just need a cable with USB-C plugs on both ends which supports USB 3.0+ (could be branded as SuperSpeed, 5Gbps, etc). Nothing more complicated than that.

The baseline for a cable with USB-C on both ends should be PD up to 60W (3A) and data transfers at USB 2.0 (480Mbps) speeds.

Most cables stick with that baseline because it's enough to charge phones and most people won't use USB-C cables for anything else. Omitting the extra capabilities lets cables be not only cheaper but also longer and thinner.

DisplayPort support uses the same extra data pins that are needed for USB 3.0 data transfers, so in terms of cable support they should be equivalent. There also exist higher-power cables rated for 100W or 240W, but there's no way a portable monitor would need that.

[-] zarenki@lemmy.ml 3 points 1 year ago

I bought a Milk-V Mars (4GB version) last year. Pi-like form factor and price seemed like an easy pick for dipping my toes into RISC-V development, and I paid US$49 plus shipping at the time. There's an 8GB version too but that was out of stock when I ordered.

If I wanted to spend more I'd personally prefer to put that budget toward a higher core system (for faster compile times) before any laptop parts, as either HDMI+USB or VNC would be plenty sufficient even if I did need to work on GUI things.

Other RISC-V laptops are already cheaper and higher-performance than this would be with Framework's shell+screen+battery, so I'm not sure what need this fills. If you intend to use the board in an alternate case without laptop parts, you might as well buy an SBC instead.

[-] zarenki@lemmy.ml 4 points 1 year ago

I tried to do this a while ago on a GNOME system, setting GDM to log me in automatically, but I always ended up getting prompted for my password by gnome-keyring shortly after login, which seemed to defeat the point. If you use GNOME, you might want to look at ArchWiki's gnome-keyring page, which describes a couple of solutions to this problem (under the PAM section) that should be applicable on any systemd distro.
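For reference, the PAM-based approach amounts to a couple of lines like these (a sketch based on the ArchWiki page; the exact file and stack vary by distro, and note that with autologin no password gets typed, so the keyring itself typically needs a blank password for it to unlock):

```
# /etc/pam.d/login (or the relevant gdm file, e.g. gdm-password)
auth     optional  pam_gnome_keyring.so
session  optional  pam_gnome_keyring.so auto_start
```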

[-] zarenki@lemmy.ml 4 points 1 year ago

as soon as the BIOS loaded and showed the time, it was "wrong" because it was in UTC

Because you don't use Windows. Windows by default stores local time, not UTC, in the RTC. This behavior can be overridden with a registry tweak. Some Linux distro installers (at least Ubuntu and Fedora, maybe others) try to detect whether your system has an existing Windows install and mimic this behavior if one exists (equivalent to timedatectl set-local-rtc 1), otherwise defaulting to storing UTC, which is the saner choice.
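For reference, the two fixes look roughly like this (the registry path is the standard one for this setting; you'd apply one side or the other, not both):

```
; Windows: store UTC in the RTC (save as a .reg file, import, reboot)
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```

Or, going the other way, tell Linux to use local time with `timedatectl set-local-rtc 1` (and `set-local-rtc 0` to go back to UTC).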

Storing local time on a computer with more than one bootable OS becomes a particularly noticeable problem in regions that observe DST, because each OS will try to shift the RTC by one hour on its first boot after the time change.

[-] zarenki@lemmy.ml 3 points 1 year ago

Something I've noticed that is related but tangential to your problem: in my experience, compose files assign container and volume names that share a common prefix by default. I don't use docker, preferring podman instead, but I'd expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

volumes:
  nextcloud:
  db:

services:
  db:
    image: docker.io/mariadb:10.6
    ...
  app:
    image: docker.io/nextcloud
    ...

I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither service overrides this behavior by specifying a container_name. I believe the prefix comes from the file-level name: if there is one, and from the parent directory's name otherwise.

The reasons I adjust my own compose files away from the image maintainer's recommendation include accommodating the differences between podman and docker, avoiding conflicts between exported listen ports, choosing which host filesystem paths to mount in the container, and my own preferences. The only conflict I've had with other containers is the exported port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.
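If the default prefix itself is the problem, the Compose spec has a few knobs to override it; a minimal sketch (the nextcloud-db names here are just examples):

```yaml
name: nextcloud         # explicit project name; otherwise the directory name is used

volumes:
  db:
    name: nextcloud-db  # opt out of the <project>_<key> naming for this volume

services:
  db:
    image: docker.io/mariadb:10.6
    container_name: nextcloud-db  # fixed container name, no project prefix
```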

[-] zarenki@lemmy.ml 4 points 1 year ago

I recommend giving dnf the -C flag for most operations, particularly those that don't involve downloading packages. The default behavior is often similar to pacman's -y flag, so the metadata sync ends up slowing everything down by orders of magnitude.
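Concretely, usage looks like this (the package names are just examples):

```
# query against cached metadata only: no network round-trip, near-instant
dnf -C info bash
dnf -C repoquery --whatprovides /usr/bin/python3

# same flag, long form
dnf --cacheonly search editor
```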

[-] zarenki@lemmy.ml 4 points 1 year ago

The main reason people use Fandom in the first place is the free hosting. Whether you use MediaWiki or any other wiki software, paying for the server resources to host your own instance and taking the time to manage it is still a tall hurdle for many communities. There already are plenty of MediaWiki instances for specific interests that aren't affected by Fandom's problems.

Even so, federation tends to foster a culture of more self-hosting and less centralization, encouraging more people who have the means to host to do so, though I'm not sure how applicable that effect would be to wikis.

[-] zarenki@lemmy.ml 5 points 1 year ago

I never liked playing DS games on the 3DS because of the blurry screen: DS games run at a 256x192 resolution, which the 3DS screens stretch out to 320x240. Non-integer scaling at such low resolutions is incredibly noticeable.
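For what it's worth, the mismatch is easy to quantify (a quick sketch):

```python
# DS native resolution vs. the 3DS screen area it gets stretched to
ds_w, ds_h = 256, 192
out_w, out_h = 320, 240

sx = out_w / ds_w  # 1.25
sy = out_h / ds_h  # 1.25
print(sx, sy)

# A 1.25x stretch means every fourth source pixel is drawn wider/taller
# than its neighbors (or the image gets filtered), hence the blur.
print(out_w - ds_w, out_h - ds_h)  # 64 extra columns, 48 extra rows
```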

The DSi (and DSi XL) can similarly be softmodded with nothing but an SD card, though using a DS Lite with a flashcart instead can enable GBA-Slot features in certain DS games, including Pokemon.
