[-] Aarkon@feddit.de 7 points 4 months ago

Had a coworker five years ago who wouldn’t let go of it. And he was really productive.

To my understanding, there are still some things it does better than IntelliJ, for instance being able to add all missing imports in one go instead of one by one.
I’ll admit though that this is a rather tiny advantage, and as I haven’t touched Java in quite a while, it may even be outdated.

[-] Aarkon@feddit.de 7 points 5 months ago

Well, that, or go to court over a movie collection. I'd phrase my statement differently, but I can see the appeal of the settlement.

[-] Aarkon@feddit.de 6 points 6 months ago

Thanks for pointing that out.

[-] Aarkon@feddit.de 4 points 6 months ago

I guess this is probably the solution to my riddle. Thanks.

34
submitted 6 months ago by Aarkon@feddit.de to c/linux@lemmy.ml

I was reading GitLab's documentation (see link) on how to write to a repository from within the CI pipeline and noticed something: the described Docker executor is able to authenticate, e.g. against the Git repository, with only a private SSH key, while being told absolutely nothing about the user name it is associated with.
If I'm correct, that would mean that, technically, I could authenticate to an SSH server without supplying my user name, as long as I use a private key?

I know that when I don't supply a user explicitly, like ssh user@server or via .ssh/config, the active environment's user name is used automatically; that's not what I'm asking about.

I'm aware that the public key contains a user name/email address string; is the same information encoded into the private key as well? If yes, I don't see the need to hand that info to an SSH call. If no, how does the SSH server know which public key it's supposed to use to challenge my ownership of the private key? It would have to iterate over all saved keys, which sounds rather inefficient to me and potentially unsafe (timing attacks etc.).
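
To illustrate what I mean by "iterate over all saved keys", here's a little Python sketch of the lookup I imagine happening on the server side. It's purely illustrative, not how OpenSSH actually implements it, and it ignores the optional options prefix an authorized_keys entry can carry:

```python
def find_matching_key(offered_type, offered_blob_b64, path="authorized_keys"):
    """Scan an authorized_keys file for an entry matching the offered key.

    Each plain entry looks like: <key-type> <base64-blob> [comment]
    The trailing comment (often user@host) plays no part in the match.
    """
    with open(path) as f:
        for line in f:
            fields = line.strip().split()
            if len(fields) < 2 or fields[0].startswith("#"):
                continue  # skip blanks and comment lines
            key_type, blob_b64 = fields[0], fields[1]
            if key_type == offered_type and blob_b64 == offered_blob_b64:
                # A key to challenge the client against -- note the
                # comment field was never consulted.
                return line.strip()
    return None
```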

I hope I'm being somewhat clear; for some reason I find it really hard to phrase this question.

[-] Aarkon@feddit.de 40 points 6 months ago

Automounts as drive V:\

[-] Aarkon@feddit.de 4 points 6 months ago
868
submitted 7 months ago by Aarkon@feddit.de to c/memes@lemmy.ml
[-] Aarkon@feddit.de 4 points 8 months ago

I’m referring to this controversy, just in case that wasn’t obvious: https://www.theregister.com/2023/04/17/rust_foundation_apologizes_trademark_policy/

[-] Aarkon@feddit.de 8 points 8 months ago

For the sake of representation, I find the logo of the Rust programming language missing here, but I have an inkling of why the creator of the meme didn’t put it in.

[-] Aarkon@feddit.de 5 points 11 months ago

A cock with a cock?

17
submitted 1 year ago by Aarkon@feddit.de to c/linux@lemmy.ml

I've got a recurring issue with all of the home servers I've ever had, and because it happened again just today, the pain is now big enough to ask about it publicly.
As of now, I'm running some Intel NUC ripoff with a JBOD attached via USB 3, running a sort-of RAID on ZFS. It's nothing that special, tbh. In the past I had several other configurations with external drives, wired via fstab to Raspberry Pis and the like. All of those shared a similar issue: I can't recall exactly when, but I figure it was mostly after updates to the kernel or Docker that the computer(s) got stuck at boot. I had to unplug the external drives just to get the respective machine up, after which varying issues occurred, such as drives not being recognized anymore.

With my current setup, I run several Docker containers whose volumes live on subdirectories/datasets under the /tank mountpoint, and when the machine boots without the drives, some of the containers create new directories at that destination, which then end up on my main drive /dev/sda.

Not only is it painful to go through the manual process with the drives, I also only have access to the machine when I'm home, which isn't all the time. It's also time-consuming, as I end up backing up data that I fear might become inconsistent along the way. Every time I see a big kernel update, I fear the computer will get stuck like that once again, so I'm reluctant to do a proper reboot.

I know that external drives are not best practice when it comes to handling "critical" data, but I don't want to run another machine just to provide network access to the disks. Any ideas where these issues stem from and how to avoid them in the future?
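
In the meantime, a stopgap I've been considering is a small guard script that refuses to bring the containers up unless /tank is a real mountpoint, so they can't silently recreate their volume directories on /dev/sda. An untested sketch, assuming the containers are managed via docker compose:

```python
#!/usr/bin/env python3
"""Guard script: only start the containers if /tank is actually mounted."""
import os
import subprocess
import sys

TANK = "/tank"

# os.path.ismount() is true only if /tank is a real mountpoint,
# not just an empty directory left behind on the root drive.
if not os.path.ismount(TANK):
    sys.exit(f"{TANK} is not mounted; refusing to start containers")

subprocess.run(["docker", "compose", "up", "-d"], check=True)
```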

[-] Aarkon@feddit.de 8 points 1 year ago* (last edited 1 year ago)

Then how is it that we so often find the skeletons of our ancestors deep in the soil?

(Don’t want to sound sour though)

[-] Aarkon@feddit.de 25 points 1 year ago

Swans are dangerous mfs. Maybe they're just trying to strangle the person in the photos.

152
submitted 1 year ago by Aarkon@feddit.de to c/memes@lemmy.ml
