[-] duncesplayed@lemmy.one 56 points 8 months ago

You just don't appreciate how prestigious it is to get a degree from Example U.

[-] duncesplayed@lemmy.one 59 points 11 months ago

Holy shit. If I understand correctly, the trains were programmed to use their GPS sensors to detect if they were ever physically moved to an independent repair shop. If they detected that they were at an independent repair shop, they were programmed to lock themselves and give strange and nonsensical error codes. Typing in an unlock code at the engineer's console would allow the trains to start working normally again.

If there were a corporation-sized mirror, I don't know how NEWAG could look at itself in it.

[-] duncesplayed@lemmy.one 52 points 11 months ago* (last edited 11 months ago)

I'm going to reframe the question as "Are computers good for someone tech illiterate?"

I think the answer is "yes, if you have someone that can help you".

The problem with proprietary systems like Windows or OS X is that that "someone" is a large corporation. And, in fairness, they generally do a good job of looking after tech illiterate people. They ensure that their users don't have to worry about how to do updates, or figure out what browser they should be using, or what have you.

But (and it's a big but) they don't actually care about you. Their interest in making sure you have a good experience ends at a dollar sign. If they think what's best for you is to show you ads and spy on you, that's what they'll do. And you're in a tricky position with them because you kind of have to trust them.

So with Linux you don't have a corporation looking after you. You do have a community (like this one) to some degree, but there's a limit to how much we can help you. We're not there on your computer with you (thankfully, for your privacy's sake), so to a large degree, you are kind of on your own.

But Linux actually works very well if you have a trusted friend/partner/child/sibling/whoever who can help you out now and then. The general experience of browsing around, editing documents, editing photos, etc., works very much the same way as it does on Windows or OS X, so you will probably be able to do all that without help.

But you might not know which software is best for editing photos. Or you might need help with a specific task (like getting a printer set up), and having someone to fall back on will give you a much better experience.

[-] duncesplayed@lemmy.one 42 points 1 year ago

You're just not cloud-native enough to understand how revolutionary it is to run GNOME on Fedora.

1
submitted 1 year ago* (last edited 1 year ago) by duncesplayed@lemmy.one to c/neurodivergentlifehacks@sh.itjust.works

I'm a university professor and I often found myself getting stressed/anxious/overwhelmed by email at certain times (especially end-of-semester/final grades). The more emails that started to pile in, the more I would start to avoid them, which then started to snowball when people would send extra emails like "I sent you an email last week and haven't got a response yet...", which turned into a nasty feedback loop.

My solution was to create 10 new email folders, called "1 day", "2 days", "3 days", "4 days", "5 days", "6 days", "7 days", "done", "never" and "TIL", which I use during stressful times of the year. Within minutes of an email coming into my inbox, I move it into one of those folders. "never" is for things that don't require any attention or action by me (mostly emails from the department about upcoming events that don't interest me). "TIL" is for things that don't require an action or have a deadline, but I know I'll be referring to a lot. Those are things like contact information, room assignments, plans for things, policy updates.

The "x days" folders are for self-imposed deadlines. If I want to ensure I respond to an email within 2 days, I put it in the "2 days" folder, for example.

And the "done" folder is for when I have completed dealing with an email. This even includes emails where the matter isn't resolved, but I've replied to it, so it's in the other person's court, so to speak. When they reply, it'll pop out of "done" back into the main inbox for further categorizing, so it's no problem.

So during stressful, email-heavy times of year, I wake up to a small number of emails in my inbox. To avoid getting stressed, I don't even read them fully. I read just enough of them that I can decide if I'll respond to them (later) or not, categorize everything, and my inbox is then perfectly clean.

Then I turn my attention to the "1 day" box, which probably only has about 3 or 4 emails in it. Not so overwhelming to only look at those, and once I get started, I find I can get through them pretty quickly.

The thing I've noticed is that once I get over the initial dread of looking at my emails (which used to be caused by looking at a giant dozens-long list of them), going through them is pretty quick and smooth. The feeling of cleaning out my "1 day" inbox is a bit intoxicating/addictive, so then I'll want to peek into my "2 days" box to get a little ahead of schedule and so on. (And if I don't want to peek ahead that day, hey, no big deal)

Once I'm done with my emails, I readjust them (e.g., move all the "2 days" into "1 day", then all the "3 days" into "2 days", and so on) and completely forget about them guilt-free for the rest of day.
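That nightly "shift" step is mechanical enough to sketch in code. Here's a minimal model of it, with the folders represented as plain lists of messages (the folder names match the system above, but the message labels and the idea of scripting this at all are hypothetical; in practice you could do it by hand or against IMAP):

```python
def shift_deadline_folders(folders):
    """Cascade each "N days" folder's mail into the "N-1 days" folder.

    `folders` maps folder names ("1 day", "2 days", ..., "7 days") to
    lists of messages. Mail in "2 days" lands in "1 day", "3 days"
    lands in "2 days", and so on. "1 day" is assumed to have been
    emptied during that day's email session.
    """
    names = ["1 day"] + [f"{n} days" for n in range(2, 8)]
    # Iterate from the near end so each folder moves exactly one slot.
    for dst, src in zip(names, names[1:]):
        folders[dst].extend(folders[src])
        folders[src] = []
    return folders

# Example: two emails with self-imposed deadlines 2 and 3 days out.
folders = {name: [] for name in ["1 day"] + [f"{n} days" for n in range(2, 8)]}
folders["2 days"] = ["reply to Alice"]
folders["3 days"] = ["grade appeals"]
shift_deadline_folders(folders)
# "reply to Alice" is now due tomorrow; "grade appeals" the day after.
```

The order of the moves matters: shifting "2 days" into "1 day" before touching "3 days" guarantees nothing skips ahead by more than one day.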

Since implementing this system a year ago, I have never had an email languish for more than a couple weeks, and I don't get anxiety attacks from checking email any more.

[-] duncesplayed@lemmy.one 49 points 1 year ago

Ironically, neither GNU nor Linux has a clipboard (well, GNU Emacs probably has like 37 of them for some reason). "Primary selection" (the other clipboard that people don't tell you about) started off in X11, which of course had to be implemented by XFree86, which became Xorg, and was then copied (ha ha) by other non-X-related software like gpm, and by toolkits like GTK when using Wayland.

[-] duncesplayed@lemmy.one 52 points 1 year ago

This is the major reason why maintainers matter. Any method of software distribution that removes the maintainer is absolutely guaranteed to have malware. (Or, if you don't consider 99% of the software on the Google Play Store or the App Store to be "malware", it's at the very least hostile to and exploitative of users.) We need package maintainers.

[-] duncesplayed@lemmy.one 48 points 1 year ago

You mean Linux isn't going to have 200% market share one day? Shit, I'm starting to think my calculations may not have been totally serious.

[-] duncesplayed@lemmy.one 103 points 1 year ago

Just an FYI that at this rate it's only going to take another 115 years before Linux has 100% market share.

[-] duncesplayed@lemmy.one 73 points 1 year ago* (last edited 1 year ago)

Unfortunate title, but it's a good video and some good thoughts from both Linus and AC.

Interestingly, this video is just 2 years after Linus and Alan Cox had a bit of a blowup that caused AC to resign from the TTY subsystem. And even more interestingly, the blowup was specifically about the very topic they're discussing: not breaking userspace and keeping a consistent user experience. Linus felt AC had broken userspace unnecessarily and was too proud/stubborn to revert the change and save the user experience. AC felt Linus was trivializing how difficult "just fixing it" was going to be, threw up his hands, and resigned.

I was curious if they were still on good terms after that, and it's nice to see that they were. For newcomers to Linux, Alan Cox used to be (in the 1990s) the undisputed Riker to Linus' Picard, the #2 in command, ready to take over all of Linux at a moment's notice. As we got into the 2000s, that changed, and this video (2011) was from the middle of a chaotic time for him. In 2009 he quit Red Hat, joined Intel two years later, quit shortly after that, and just a few years ago stopped kernel development permanently.

[-] duncesplayed@lemmy.one 46 points 1 year ago* (last edited 1 year ago)

Anne Frank advertising baby clothes before discussing the horrors of the Holocaust

Wow, that is amazingly inhumane.

My first thought is that they're necessarily making characters who aren't people. A person who has lived through the Holocaust just cannot cheerfully peddle baby clothes. I don't mean that it's physically not possible because she's dead: I mean in terms of the human psyche, a person just flat-out psychologically could not do that. A young boy who succumbed to torture and murder psychologically cannot just calmly narrate it.

So obviously, yeah, it's quite a ghoulish and evil thing to take what used to be a person, a figure who has been studied and mourned because of their personhood, because we can relate to them as a person, and just completely strip them of that personhood and turn them into an inhuman object.

But then that leads me to the question of who's watching these things, and why? The article says they got quite a lot of views. Is it just for shock value? I don't quite understand.

[-] duncesplayed@lemmy.one 51 points 1 year ago

If I can try to summarize the main findings:

  1. Computer-generated (e.g., Stable Diffusion) child porn is not criminalized in Japan, and so many Japanese Mastodon servers don't remove it
  2. Porn involving real children is removed, but not immediately, as it depends on instance admins to catch it, and they have other things to do. Also, when an account is banned, the Mastodon server software is not sending out a "delete" for all of their posted material (which would signal other instances to delete it)

Problem #2 can hopefully be improved with better tooling. I don't know what you do about problem #1, though.

104
submitted 1 year ago by duncesplayed@lemmy.one to c/linux@lemmy.ml

Thomas Gleixner of Linutronix (now owned by Intel) has posted 58 patches for review into the Linux kernel, but they're only the beginning! Most of the patches are just first steps toward more major renovations of what he calls "decrapification". He says:

While working on a sane topology evaluation mechanism, which addresses the short-comings of the existing tragedy held together with duct-tape and hay-wire, I ran into the issue that quite some of this tragedy is deeply embedded in the APIC code and uses an impenetrable maze of callbacks which might or might not be correct at the point where the CPUs are registered via MPPARSE or ACPI/MADT.

So I stopped working on the topology stuff and decided to do an overhaul of the APIC code first. Cleaning up old gunk which dates back to the early SMP days, making the CPU registration halfways understandable and then going through all APIC callbacks to figure out what they actually do and whether they are required at all. There is also quite some overhead through the indirect calls and some of them are actually even pointlessly indirected twice. At some point Peter yelled static_call() at me and that's what I finally ended up implementing.

He also, at one point, (half-heartedly) argues for the removal of 32-bit x86 code entirely, arguing that it would simplify APIC code and reduce the chance for introducing bugs in the future:

Talking about those museums pieces and the related historic maze, I really have to bring up the question again, whether we should finally kill support for the museum CPUs and move on.

Ideally we remove 32bit support alltogether. I know the answer... :(

But what I really want to do is to make x86 SMP only. The amount of #ifdeffery and hacks to keep the UP support alive is amazing. And we do this just for the sake that it runs on some 25+ years old hardware for absolutely zero value. It'd be not the first architecture to go SMP=y.

Yes, we "support" Alpha, PARISC, Itanic and other oddballs too, but that's completely different. They are not getting new hardware every other day and the main impact on the kernel as a whole is mostly static. They are sometimes in the way of generalizing things in the core code. Other than that their architecture code is self contained and they can tinker on it as they see fit or let it slowly bitrot like Itanic.

But x86 is (still) alive and being extended and expanded. That means that any refactoring of common infrastructure has to take the broken hardware museum into account. It's doable, but it's not pretty and of really questionable value. I wouldn't mind if there were a bunch of museum attendants actively working on it with taste, but that's obviously wishful thinking. We are even short of people with taste who work on contemporary hardware support...

While I cursed myself at some point during this work for having merged i386/x86_64 back then, I still think that it was the correct decision at that point in time and saved us a lot of trouble. It admittedly added some trouble which we would not have now, but it avoided the insanity of having to maintain two trees with different bugs and "fixes" for the very same problems. TBH quite some of the horrors which I just removed came out of the x86/64 side. The oddballs of i386 early SMP support are a horror on their own of course.

As we made that decision more than 15 years [!] ago, it's about time to make new decisions.

Linus responded to one of the patches, saying "I'm cheering your patch series", but has obviously diplomatically not acknowledged the plea to remove 32-bit support.

87
submitted 1 year ago by duncesplayed@lemmy.one to c/linux@lemmy.ml
5
submitted 1 year ago* (last edited 1 year ago) by duncesplayed@lemmy.one to c/technology@beehaw.org

Hey all technology people!

Not my community, but I thought I'd advertise someone else's new lemmy community to see if anyone else is interested.

Head over to !bbses@lemmy.dbzer0.com for BBSes and retrocomputing.

2
submitted 1 year ago* (last edited 1 year ago) by duncesplayed@lemmy.one to c/privacyguides@lemmy.one

It feels like we have a new privacy threat that's emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as happening in waves of:

  1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
  2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we're fighting it mostly by avoiding Big Tech ("De-Googling", switching from social media to communities, etc.).
  3. Now we're in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it's all trained on our data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn't help, since they can access our posts no matter where we post them.

So for that third one...what do we do? Anything that's online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you've provided? If you do care, do you think there's any reasonable way we can fight back? Can we poison their training data somehow?

