[-] stardreamer@lemmy.blahaj.zone 29 points 4 months ago

Harder to write compilers for RISC? I would argue that CISC is much harder to design a compiler for.

That being said, the lack of standardized vector/streaming instructions in out-of-the-box RISC-V may hurt performance, but compiler-design-wise it's much easier to write a functional compiler for RISC-V than for the nightmare that is x86.
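
A toy example of what I mean (the assembly in the comments is approximate, not actual compiler output): for a one-line read-modify-write, an x86 backend has to notice that it can fold the load, add, and store into a single instruction with a memory operand, while a RISC-V backend just emits the same simple load/compute/store pattern every time.

/* Hand-waved comparison; the assembly comments are approximate. */
void bump(int *counter, int delta) {
    *counter += delta;
    /* x86-64:  add dword ptr [rdi], esi   <- one read-modify-write
     *                                        instruction, memory operand
     * riscv64: lw  t0, 0(a0)
     *          add t0, t0, a1
     *          sw  t0, 0(a0)              <- always plain load/compute/store */
}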

[-] stardreamer@lemmy.blahaj.zone 27 points 8 months ago

Here, you dropped this:

#define ifnt(x) if (!(x))
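
And, assuming anyone actually wants to use it, a quick sketch of it in action:

#include <stdio.h>

#define ifnt(x) if (!(x))

int main(void) {
    int logged_in = 0;
    ifnt (logged_in)          /* expands to: if (!(logged_in)) */
        puts("who are you?");
    return 0;
}
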
[-] stardreamer@lemmy.blahaj.zone 22 points 8 months ago* (last edited 8 months ago)

An API is an official interface for connecting to a service, designed to make it easier for one application to interact with another. It is usually kept stable and provides only the information needed to serve each request.

A scraper is an application that extracts data from a human-readable source (e.g. a website) in order to get at another application's data. Since website designs can change frequently, scrapers can break at any time and need to be updated alongside the original application.

Reddit clients interact with an API to serve requests, but NewPipe scrapes the YouTube webpage itself. So if YouTube changes its UI tomorrow, NewPipe could very easily break. No one wants to build a bunch of stuff on top of a base that fragile. It's just way too much work for very little payoff.
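
Here's a contrived little sketch of the difference (the JSON, HTML, and field names are all made up, and real scrapers match markup in more robust ways, but the failure mode is the same):

#include <stdio.h>
#include <string.h>

int main(void) {
    /* An API hands back a documented, stable field. */
    const char *api_json = "{\"title\": \"cat video\"}";

    /* A scraper pattern-matches whatever the UI happens to emit today. */
    const char *html_v1 = "<h1 class=\"video-title\">cat video</h1>";
    const char *html_v2 = "<h1 class=\"vt-heading\">cat video</h1>"; /* redesign */

    printf("API field still there:        %s\n",
           strstr(api_json, "\"title\"") ? "yes" : "no");
    printf("scraper selector, old page:   %s\n",
           strstr(html_v1, "class=\"video-title\"") ? "yes" : "no");
    printf("scraper selector, redesigned: %s\n",
           strstr(html_v2, "class=\"video-title\"") ? "yes" : "no (client broke overnight)");
    return 0;
}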

It's like how I can enter my house through the door or the chimney. I would always take the door since it's designed for human entry. I could technically use the chimney if there were no door. But if someone lights up the fireplace, I'd be toast.

[-] stardreamer@lemmy.blahaj.zone 20 points 9 months ago* (last edited 9 months ago)

Having a good, dedicated e-reader is a hill I would die on. I want a big screen, physical buttons, a lightweight body, a multi-week battery, and an e-ink display. Reading for 8 hours on my phone makes my eyes go twitchy. And TBH it's been a pain finding something that supports all of that and has a reasonably open ecosystem.

When reading for pleasure, I'm not gonna settle for a "good enough" experience. Otherwise I'm going back to paper books.

[-] stardreamer@lemmy.blahaj.zone 39 points 10 months ago* (last edited 10 months ago)

The argument is that processing data physically "near" where it is stored (known as NDP, near-data processing; traditional architectures instead store data off-chip, away from the compute) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with an NDP design that can do complex computations much faster than before.

Personally, I'd say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS-master's level of architectural understanding to write a program in the P4 language (which doesn't allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it's worthless if most programmers on the job market can't work with it.

Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it's just way too costly. People want to buy new hardware, install it, compile their existing code, and see big numbers go up (or down, depending on which numbers).

I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) attached as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it. This way, your standard 9-to-5 programmer can keep working the way they always have and leave the fancy performance optimization to a few experts.
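
Something like this, conceptually (dma_engine_available() and dma_copy() are hypothetical placeholders here, not any real driver API):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Pretend hardware probe; a real library would query the platform. */
static bool dma_engine_available(void) {
    return false;
}

/* Stand-in for handing the copy to an on-DIMM data mover. */
static void dma_copy(void *dst, const void *src, size_t n) {
    memcpy(dst, src, n);  /* placeholder body */
}

/* What the application calls; it never knows where the copy ran. */
void smart_memcpy(void *dst, const void *src, size_t n) {
    if (dma_engine_available())
        dma_copy(dst, src, n);  /* offloaded: CPU free for other work */
    else
        memcpy(dst, src, n);    /* fallback: plain CPU copy */
}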

[-] stardreamer@lemmy.blahaj.zone 19 points 10 months ago* (last edited 10 months ago)

So let me get this straight: you want other people to work, for free, on a project that you yourself think is a hassle to maintain, while also expecting the same level of professionalism as a 9-to-5 job?

[-] stardreamer@lemmy.blahaj.zone 19 points 10 months ago

No, the 2037 problem is fixing the Y2K38 problem in 2037.

Before that there's no problem :)
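
(For anyone who hasn't met Y2K38: classic Unix timestamps count seconds since 1970 in a signed 32-bit integer, which runs out on 2038-01-19. A minimal sketch, using int32_t to stand in for a legacy 32-bit time_t:)

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t t = INT32_MAX;            /* 2038-01-19 03:14:07 UTC */
    printf("last good timestamp: %ld\n", (long)t);
    t = (int32_t)((uint32_t)t + 1);   /* one tick later; unsigned math avoids UB,
                                         and the wraparound is what legacy code sees */
    printf("one second later:    %ld\n", (long)t);  /* -2147483648 -> back to 1901 */
    return 0;
}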

[-] stardreamer@lemmy.blahaj.zone 19 points 10 months ago

A more recent example:

"Nobody needs more than 4 cores for personal use!"

[-] stardreamer@lemmy.blahaj.zone 25 points 11 months ago
  1. Attempt to plug in the USB-A device.
  2. If you succeed, end the procedure.
  3. Otherwise, destroy the reality you currently reside in. All remaining universes are the ones where you plugged in the device on the first try.

That wasn't so hard, was it?

[-] stardreamer@lemmy.blahaj.zone 25 points 11 months ago

The year is 5123. We have meticulously deciphered texts from the early 21st century, providing us with a wealth of knowledge. Yet one question still eludes us to this day:

Who the heck is Magic 8. Ball?

[-] stardreamer@lemmy.blahaj.zone 40 points 1 year ago* (last edited 1 year ago)

ELI5, or ELIAFYCSS (Explain Like I'm A First-Year CS Student): modern x86 CPUs have lots of instructions optimized for specific functionality. One family of these is "vector instructions", where a single instruction is optimized for running the same operation (e.g. matrix multiply-add) on lots of data (e.g. 32 rows or 512 rows) at once. These instructions were added gradually over time, so there are multiple "sets" of vector instructions: MMX, AVX, AVX2, AVX-512, AMX...

While the names all sound different, all of these vector instructions work in a similar way: they store internal state in hidden registers that the programmer cannot access. So to the user (application programmer or compiler designer) it looks like a simple function that does what you need without having to micromanage registers. Neat, right?
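
To make "same operation on lots of data" concrete, here's a rough AVX2 sketch that adds eight 32-bit ints in a single instruction (build with gcc -mavx2):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int c[8];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi32(va, vb);  /* eight adds in one instruction */
    _mm256_storeu_si256((__m256i *)c, vc);

    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);  /* 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}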

Well, the problem is that somewhere along the line someone found a bug: when using instructions from the AVX2/AVX-512 sets, if you combine them with a particular ordering of branch instructions (aka Jcc, basically the if/else of assembly), you get to see what's inside those hidden registers, including data left behind by other programs. Oops. So Charlie's "Up, Up, Down, Down, Left, Right, Left, Right, B, B, A, A" program using AVX/Jcc lets him see what Alice's "encrypt this zip file with this password" program is doing. Uh oh.

So, that sounds bad. But let's take a step back: how badly does this affect existing consumer devices (i.e. non-Xeon, non-Epyc CPUs)?

Well, good news: AVX-512 wasn't available on most Intel/AMD consumer CPUs until recently (13th gen/Zen 4, and Zen 4 isn't affected). So 1) your CPU most likely doesn't support it, and 2) even if your CPU supports it, most pre-compiled programs won't use it, because they would crash on everyone else's computers that don't have AVX-512. AVX-512 is a non-issue unless you're running finite element analysis programs (e.g. LS-DYNA) for fun.

AVX2 has a similar problem: while it was released in 2013, some low-end CPUs (e.g. Intel Atom) didn't get it for a long time (until this year, I think?). So most programs aren't compiled with AVX2 enabled. This means that whatever game you're running right now, you probably won't see a performance drop after patching, since your computer/program was never using the optimized vector instructions in the first place.
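
(The handful of programs that do ship wide-vector code paths usually probe the CPU at runtime instead of assuming support. Roughly like this; the builtin is real GCC/Clang, but the two "kernels" are made-up placeholders:)

#include <stdio.h>

static void encrypt_scalar(void) { puts("using the portable scalar path"); }
static void encrypt_avx512(void) { puts("using the AVX-512 fast path"); }

int main(void) {
    /* Check the running CPU, then dispatch, so the binary works everywhere. */
    if (__builtin_cpu_supports("avx512f"))
        encrypt_avx512();
    else
        encrypt_scalar();
    return 0;
}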

So, the effect on consumer devices is minimal. But what do you need to do to ensure that your PC is secure?

Three different ideas off the top of my head:

  1. BIOS update. The CPU has some low-level firmware code called microcode, which is shipped as part of the BIOS. The new patched version adds additional checks to ensure no data is leaked.

  2. Update the microcode package in Linux. Microcode can also be loaded from the OS at boot. An up-to-date version of the intel-microcode package achieves the same thing as (1).

  3. Re-compile everything without AVX2/AVX-512. If you're running something like Gentoo, you can simply tell GCC not to emit AVX2/AVX-512 regardless of whether your CPU supports it (a quick way to check what a given build enables is sketched below). As mentioned earlier, the performance loss is probably fine unless you're doing some serious math (FEA/AI/etc.) on your machine.
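
The check I mean, assuming GCC or Clang (both define these feature macros only when the corresponding instruction sets are enabled): build once with your normal CFLAGS, once with -mno-avx2 -mno-avx512f added, and compare.

#include <stdio.h>

int main(void) {
#ifdef __AVX2__
    puts("this build is allowed to emit AVX2 instructions");
#else
    puts("AVX2 is off for this build");
#endif
#ifdef __AVX512F__
    puts("this build is allowed to emit AVX-512 instructions");
#else
    puts("AVX-512 is off for this build");
#endif
    return 0;
}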

[-] stardreamer@lemmy.blahaj.zone 21 points 1 year ago

that one NetBSD user bursts into flames
