[-] edinbruh@feddit.it 4 points 6 days ago

How do we know what the trees looked like? I thought they got buried and crumbled into carbon or something

[-] edinbruh@feddit.it 9 points 6 days ago* (last edited 6 days ago)

ISO/OSI is a neatly separated model mostly used in theory.

In practice, actual network stacks are often modeled after a simpler model called TCP/IP, which despite the name is not actually TCP-specific.

Here's the general description and correspondence to ISO/OSI:

  1. Host-to-network / network access layer: it's mostly the NIC and the NIC driver. It's sometimes numbered 0, because some don't consider it part of the TCP/IP stack at all, just the NIC driver. Corresponds to OSI layers 1 (Physical) and 2 (Data link).
  2. Network layer: corresponds to OSI layer 3 (Network).
  3. Transport layer: corresponds to OSI layer 4 (Transport).
  4. Application layer: everything that's part of the application rather than of the network stack. Corresponds to OSI layers 5 (Session), 6 (Presentation), and 7 (Application).

Or, you can just not care about how the actual software stack is separated, and keep using the most complete model, knowing that everyone will understand what you mean when you say "layer 2/3/4" anyway.

Plus, some could say that the TCP/IP model is equally unfit because the Linux network subsystem doesn't care about layers.
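
To make the mapping concrete, here's a minimal Python sketch (the host and payload are just placeholders) showing which layer each piece of the socket API touches:

```python
import socket

# Layers 1-2 (host-to-network): handled by the NIC and its driver, invisible here.
# Layer 3 (network): AF_INET selects IPv4.
# Layer 4 (transport): SOCK_STREAM selects TCP.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))

# Layers 5-7 (application): the payload is entirely the application's business;
# here it happens to be HTTP, but the stack below neither knows nor cares.
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(s.recv(1024))
s.close()
```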

Edit: I hope the formatting of that table isn't broken on your client, because it is on mine


12
submitted 3 weeks ago* (last edited 3 weeks ago) by edinbruh@feddit.it to c/linux@lemmy.ml

What are your experiences using game controllers with Linux? I'm especially interested in the Xbox Series S controller, because it's the one I have, but I'm also interested in other controllers. From my experience the latency is disappointing, but I have no way of proving it.

So, I primarily use this controller in Bluetooth mode using xpadneo. There's definitely noticeable latency, but in most games it's fine, and I played through a lot of games without bother... until I played Conker: Live and Reloaded. On the infamous race level, it took me like two days to pass it, and I only made some progress when I connected the cable and dropped BT. Even that was fine though, it was just one old game and just one level, and there could be a number of things to blame for that. Come Hollow Knight, as the game got harder after beating Hornet, it quickly became apparent that I couldn't get far without the cable, save for traversing the world. Still, not that bad... until I got to fight Radiance. It has been extremely frustrating: I tried it for days, and eventually I started just doing a few attempts every few days, without any improvement, finding it hard to even get to the second phase. Today I visited my parents, and in the late evening I decided to try it on a Windows computer I left here; mind you, the last time I played was more than a week ago. So, I start the game, plug the same controller in, with the same cable, and I beat Radiance on the fucking first try, with half the health bar left...

It literally happened 10 minutes ago and I'm still riled up. This doesn't make sense, this has to be latency; there is no way I got that much better just like that, it is literally impossible.

So, after all that, I need to unfuck the latency of my controller somehow... OK, it's fine in most games, but this situation is... frustrating

edit: I think it was Steam Input. The game was running natively, but I had to use Steam Input because the controller was broken otherwise. I solved it by running the game in Proton, so I wouldn't need Steam Input anymore.
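
For anyone who, like me, has "no way of proving it": here's a minimal sketch with python-evdev that logs kernel-side event timestamps, so you can at least compare button-event timing and jitter between BT and cable. It can't measure true end-to-end latency, and the device path is hypothetical (check /dev/input/by-id/ for yours):

```python
from evdev import InputDevice, ecodes  # pip install evdev

dev = InputDevice("/dev/input/event20")  # hypothetical path, adjust for your controller

last = None
for event in dev.read_loop():
    if event.type == ecodes.EV_KEY:  # button presses/releases only
        t = event.timestamp()
        delta = (t - last) * 1000 if last is not None else 0.0
        print(f"{t:.6f}  code={event.code}  value={event.value}  +{delta:.1f} ms")
        last = t
```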

19
submitted 1 month ago* (last edited 1 month ago) by edinbruh@feddit.it to c/linux@lemmy.ml

I'm trying to find a better solution to manage configuration files, both user dotfiles and system files in /etc. I'm running an Ubuntu server where I have a bunch of services with custom configurations and systemd drop-in files, but on top of that I also have some scripts and user dotfiles that I need to track.

What I'm doing right now: I have a folder full of symlinks in the admin user's home directory (poor username choice, btw), and I'm using bindfs to mount this directory inside a git repository, so that git won't see them as symlinks and will version them as regular files. The problem with doing this is that as git deletes and rewrites files, bindfs fails to track the changes and converts the symlinks into regular files.

I looked into chezmoi, but that is only meant to track user dotfiles and will refuse to add a file from /etc, unless you do some extra work. But even so, chezmoi will not track the user:group of files, so I would still have to manage that manually.

I also looked into GNU Stow, which would not complain about files from /etc or anywhere else, but it similarly will not track permissions, and I would have to manage that manually.

I see that some people are using Ansible to manage dotfiles, but at that point it would make sense to just migrate the whole server to Ansible, except I don't want to rebuild it from scratch for that. Also, it looks like a lot to learn.

Is there a better solution I'm not seeing? Maybe something using git hooks?

Edit:

I ended up using pre-commit and post-merge git hooks to launch a Python script. The script reads a YAML file where I annotate the file paths and permissions, and then copies the files between their real location and the git repository (see the sketch after the list below).

I used the sudoers file to allow the admin user to run this specific script with specific arguments as root without a password (because the git commands are run from VS Code and not manually), which is dangerous; be careful when doing that. I have taken special care to make this secure:

  • I used absolute paths for everything, to avoid letting a different pwd become a way to copy different files
  • The script itself is installed in a root-owned location, so an unprivileged user cannot edit it
  • The configuration yaml is root-owned, so an unprivileged user cannot modify which files are copied or their permissions
  • Configuration files that can grant permissions are not managed by this script (the yaml itself, /etc/passwd, /etc/group, polkit rules, the sudoers file, ...)
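
For reference, a minimal sketch of what that script could look like; all paths and the YAML schema here are made up for illustration, and the real script should do far more validation:

```python
#!/usr/bin/env python3
"""Sync tracked config files between the system and a git repo.

Hypothetical schema for the root-owned /etc/config-sync.yaml:
  files:
    - path: /etc/systemd/system/foo.service.d/override.conf
      repo: systemd/foo-override.conf
      owner: root
      group: root
      mode: "0644"
"""
import grp
import os
import pwd
import shutil
import sys

import yaml  # PyYAML

CONFIG = "/etc/config-sync.yaml"   # root-owned, per the precautions above
REPO = "/home/admin/config-repo"   # hypothetical repo location

def main(direction: str) -> None:
    with open(CONFIG) as f:
        entries = yaml.safe_load(f)["files"]
    for e in entries:
        repo_path = os.path.join(REPO, e["repo"])
        if direction == "collect":      # pre-commit hook: system -> repo
            shutil.copy2(e["path"], repo_path)
        elif direction == "deploy":     # post-merge hook: repo -> system
            shutil.copy2(repo_path, e["path"])
            os.chown(e["path"], pwd.getpwnam(e["owner"]).pw_uid,
                     grp.getgrnam(e["group"]).gr_gid)
            os.chmod(e["path"], int(e["mode"], 8))

if __name__ == "__main__":
    main(sys.argv[1])  # "collect" or "deploy"
```

The pre-commit hook then just runs something like sudo /usr/local/bin/config-sync collect (hypothetical install path), and the post-merge hook runs the deploy direction.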
[-] edinbruh@feddit.it 72 points 2 months ago

That's like... its purpose. Compilers always have a frontend and a backend. Even when the compiler is made entirely from scratch (like Java's or Go's), it is split between frontend and backend; that's just how they are made.

So it makes sense to invest in just a few highly advanced backends (LLVM, GCC, MSVC) and then just build frontends for those. Most projects choose LLVM because, unlike the others, it was purpose-built to be common ground, but it's not a rule. For example, there is an in-development Rust frontend for GCC.
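
To see how thin a frontend can be, here's a minimal sketch using llvmlite (Python bindings for building LLVM IR): a toy frontend that emits the IR for an add function, which the LLVM backend can then optimize and turn into machine code for any target:

```python
from llvmlite import ir  # pip install llvmlite

# A toy "frontend": emit LLVM IR for `int add(int a, int b) { return a + b; }`
module = ir.Module(name="toy")
i32 = ir.IntType(32)
fn = ir.Function(module, ir.FunctionType(i32, [i32, i32]), name="add")
builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
a, b = fn.args
builder.ret(builder.add(a, b))

print(module)  # textual LLVM IR; the backend takes it from here
```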

[-] edinbruh@feddit.it 59 points 4 months ago

Laser printers are the best. You sacrifice some quality on dense pictures, but gain incomparable speed and reliability. It's especially worth it if you print less often: ink dries up if you don't print every once in a while, and you end up buying new ink even though the cartridge is full, while toner just sits there indefinitely.

79
submitted 4 months ago by edinbruh@feddit.it to c/linuxmemes@lemmy.world
10

Reposting my question here to cast a wider net

155
His man.go (feddit.it)
submitted 6 months ago by edinbruh@feddit.it to c/linuxmemes@lemmy.world
[-] edinbruh@feddit.it 113 points 8 months ago

They die. Full stop.

Not even Microsoft had the strength to maintain a browser engine; that's why they moved Edge to Chromium. They gave up.

1075
submitted 1 year ago by edinbruh@feddit.it to c/memes@lemmy.world
[-] edinbruh@feddit.it 65 points 1 year ago

Tech Bros make a panopticon and call it a novel approach

[-] edinbruh@feddit.it 56 points 1 year ago

Download Firefox/ Look inside/ Still Firefox.

Download thunderbird/ Look inside/ Older Firefox.

28
submitted 1 year ago* (last edited 1 year ago) by edinbruh@feddit.it to c/linux@lemmy.ml

I'm using Sunshine for remote gaming on my Linux PC. Because I use Wayland and don't have an Nvidia GPU, I use kmsgrab for capture (under the hood, Sunshine uses ffmpeg).

I have noticed that if I switch to a tty, kmsgrab captures it as well. If it only captured after logging into my user I wouldn't be surprised, but it also captures the login screen.

I autostart it at login using my systemd user configuration (not system-wide), so it should just have my user's permission level. I get the same results if I put it in KDE's autostart section, so it's not a systemd thing.

Why does that work? Shouldn't you need special privileges to capture everything?

The installation instructions tell you to run sudo setcap -r $(readlink -f $(which sunshine)). Is this the reason why it works? What does the command do exactly?


20
submitted 2 years ago by edinbruh@feddit.it to c/linux_gaming@lemmy.ml

SOTTR can now run in Proton Experimental (it used to crash due to a missing Vulkan feature), but how does it compare to the native version?

Normally I would just use the native version, but I got the game from Epic, which doesn't provide the native build. So if I wanted to run the native version, I would have to acquire the game from other sources (keep in mind that I own the game on Epic), which is less than ideal. But I wouldn't do it if there's no advantage.

[-] edinbruh@feddit.it 60 points 2 years ago* (last edited 2 years ago)

The USB protocol was simple by design, so it could be implemented in small dumb devices like pen drives. More specifically, it used two pairs of wires: one pair for power and the other for data (four wires in total). Having a single half-duplex data line means you need some way of arbitrating who can send data at any time. The easiest way to do that is to have a single machine that decides who gets to send data (the master), and the easiest way to decide the master is to not decide at all and have the computer always be the master. This means you couldn't connect two computers together, because they would both try to be the master.

I used the past tense because you may have noticed that micro USB has 5 pins and not 4. That's because phones are computers, and they use the 5th pin to decide how to behave: if it's grounded they act as a slave (the male micro to male A cable grounds it), if it has a resistor they act as master (the OTG cable has it), and if the devices are connected with a wire on that pin (on some special micro-to-micro cables) they negotiate the connection.

When they made USB 3.0, they realized that not having the 5th wire on the USB-A connector was stupid, so they added it (alongside some extra data lines); that's why USB 3 connectors have an odd number of wires. So with USB 3 you can connect computers together, but you need a special cable that uses the negotiation wire. Also, I don't know what software you need for it to work.

USB-C is basically two USB 3.0 links in the same cable, so you can probably connect computers with that. But often the port on the device only wires up one of them, so it might not be faster. Originally they put in the pins for two connections so you could flip the connector; later they realized they could use them to get double the speed.
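
A toy model of that arbitration scheme (not real USB signaling, just the core idea that a half-duplex bus works if exactly one node is allowed to initiate):

```python
class Device:
    """A slave: only ever answers, never initiates a transfer."""
    def __init__(self, name: str):
        self.name = name

    def respond(self, token: str) -> str:
        return f"{self.name}: reply to {token!r}"

class Host:
    """The single master: the only node allowed to start a transfer."""
    def __init__(self, devices: list[Device]):
        self.devices = devices

    def poll(self) -> None:
        for dev in self.devices:  # polling means collisions are impossible
            print(dev.respond("IN token"))

Host([Device("pen drive"), Device("keyboard")]).poll()
# Two Hosts on the same bus would both try to initiate, which is exactly why
# two computers can't talk over classic USB without the negotiation pin.
```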

[-] edinbruh@feddit.it 92 points 2 years ago

AI upscaling, I think

[-] edinbruh@feddit.it 92 points 2 years ago

I am a computer scientist after all

[-] edinbruh@feddit.it 106 points 2 years ago

If I got back to 2005, I could easily make more than 10 million by the time it's 2024 again. Plus all the other perks of restarting your life

[-] edinbruh@feddit.it 78 points 2 years ago

Dude, what are you talking about? It was still here less than 15 years ago. The Nintendo Wii literally had an ATI GPU
