Not necessarily; you still need backups or snapshots, especially of your home directory, in case some software has a nasty bug like deleting your data.

Yup, and I am getting sick of hearing this even on Arch Linux. Like, mofo, you could literally run a snapshot or backup before upgrading; don't blame us if you're yoloing your goddamn computer. Windows has exactly the same problem too, and this is why we have backups. Christ.

On my Arch Linux install, I literally have a pacman hook that forcibly runs a backup and verifies said backup before doing a system-wide update.
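For anyone curious, here's a minimal sketch of such a hook (the file name, description, and script path are placeholders of my own, not my exact setup; the hook format itself is from alpm-hooks(5)):

```ini
# /etc/pacman.d/hooks/00-backup.hook (illustrative name)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Running and verifying backup before system upgrade...
When = PreTransaction
# Hypothetical script: runs the backup, verifies it, exits non-zero on failure.
Exec = /usr/local/bin/backup-and-verify.sh
AbortOnFail
```

With `AbortOnFail`, pacman refuses to proceed with the upgrade if the backup script exits with an error.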

That one was an old set of documentation in which some Chinese folks actually documented a lot of quirks related to the X11 protocol. I paid about $6,000 for a translator to work on translating that doc into English, and I used it to build my own GUI toolkit on Linux that I still use to this day.

[-] TheTrueLinuxDev@programming.dev 16 points 1 year ago* (last edited 1 year ago)

How it really works:

```c
mpf_t temperature;
```

If confused... it's an arbitrary-precision floating-point number type provided by libGMP, and you can find more information about mpf_t here.
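For anyone who hasn't touched GMP, here's a minimal sketch of the joke in practice (the value, precision, and file name are made up for illustration):

```c
#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpf_t temperature;
    mpf_init2(temperature, 256);      /* 256 bits of mantissa precision */
    mpf_set_d(temperature, 451.0);    /* seed it from an ordinary double */
    mpf_mul_ui(temperature, temperature, 1000000UL);  /* it keeps climbing... */
    gmp_printf("temperature = %.4Ff\n", temperature);
    mpf_clear(temperature);           /* GMP types must be freed manually */
    return 0;
}
```

Compile with `gcc demo.c -lgmp`.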

Lol, that's one way to put it. Basically language convergence; not a bad thing, to be honest.

[-] TheTrueLinuxDev@programming.dev 1 points 1 year ago* (last edited 1 year ago)

Yup, I've been writing a new shader language to replace GLSL and HLSL for Vulkan compute purposes, but I eventually switched from emitting SPIR-V IR to MLIR and use the IREE compiler, which accepts the MLIR and compiles it to any of CUDA, ROCm, SPIR-V, and so forth.

A lot of it was because of my unadulterated hatred toward our current machine learning frameworks... It's one of the projects I've been working on to outright replace PyTorch/TensorFlow and ban those two frameworks from my office forever. I got fed up not knowing exactly how much memory, computational cost, and so forth I'd need when running or training neural net models. Plus I want an easier way to split a model across lower-end GPUs that doesn't rely on Nvidia-only CUDA code. I also wanted to have SPIR-V as a fallback compute kernel, because if your GPU is too old for CUDA/ROCm, you're SOL, but if you have SPIR-V, chances are any GPU made in the last 10 years that has a Vulkan driver would likely be supported.

One of the biggest pluses with MLIR is that you are also future-proofing your code, because that code could feasibly be recompiled for new devices like neural net accelerator cards, ASICs, FPGAs, and so forth.
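To make that concrete, here's a rough sketch of the pipeline (the MLIR function is a toy of my own, and the iree-compile flags follow IREE's documented CLI, so double-check them against your installed version, since backend names have shifted across releases):

```mlir
// scale.mlir: toy func/arith-dialect function that doubles a small tensor.
func.func @scale(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %cst = arith.constant dense<2.0> : tensor<4xf32>
  %0 = arith.mulf %arg0, %cst : tensor<4xf32>
  return %0 : tensor<4xf32>
}
```

```sh
# Same MLIR, retargeted per backend (flag names assumed from IREE docs):
iree-compile --iree-hal-target-backends=vulkan-spirv scale.mlir -o scale_vulkan.vmfb
iree-compile --iree-hal-target-backends=cuda         scale.mlir -o scale_cuda.vmfb
iree-compile --iree-hal-target-backends=rocm         scale.mlir -o scale_rocm.vmfb
```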

I agree with the idea of avoiding making your own parser generator; that is precisely what I'm doing, and it's hell. I assumed that you'd probably want to pick up some understanding of how parsers differ when it comes to writing grammars. As for ease of use and requiring the least understanding, something like an Earley parser is probably the easiest. It would be slower than other parser algorithms, but it can handle ambiguous grammars, making it ideal for first-timers learning how to write a programming language.
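As a toy illustration (my own grammar, not from the thread), here is the kind of ambiguous grammar an Earley parser accepts as-is:

```
expr ::= expr "+" expr   ; "1 + 1 + 1" parses two ways:
expr ::= "1"             ;   (1 + 1) + 1  and  1 + (1 + 1)
```

An LL(1) or LALR(1) generator would report conflicts and force you to rewrite this grammar, whereas Earley just returns every valid parse, which is why it's so forgiving for beginners.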

Yep, and if open source licensing could be revoked on a whim, you can imagine the chaos that would ensue. That would be my understanding as well: the old version under the MPL license is perfectly fine to fork off; a newer version might not be, as it is under a different license. One of the reasons I like the Apache License is that it makes it explicitly clear that the grant is irrevocable, whereas the MPL operates on the assumption that it's not revocable. The most fundamental problem with the legal system in the USA is that no law is "set in stone," and leaving things to assumption is open to reinterpretation by a judge who may side against you. (Hell, Google v. Oracle on copyrighted APIs is still decided on a case-by-case basis, so take it as you will.)

Disclaimer: I am not a lawyer. I just share what I learned from the LegalEagle YouTube channel and a few other sources.

[-] TheTrueLinuxDev@programming.dev 3 points 1 year ago* (last edited 1 year ago)

I definitely recommend that you start learning about the LL(k), LALR, and perhaps even Earley parser algorithms. I am assuming you have already picked up a bit of LL(1) parsing and some basic lexing, so mastering those parser algorithms is basically the next stop for you.
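If it helps, here's a minimal sketch of the kind of LL(1)-style recursive-descent parser I'm assuming you've already seen; the grammar, names, and input are my own illustration:

```c
/* Recursive-descent parser/evaluator for:
 *   expr   -> term (('+' | '-') term)*
 *   term   -> factor (('*' | '/') factor)*
 *   factor -> NUMBER | '(' expr ')'
 * Each nonterminal becomes one function; a single character of
 * lookahead (*p) suffices, which is what makes it LL(1)-style. */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static const char *p;  /* cursor into the input string */

static double parse_expr(void);

static void skip_ws(void) { while (isspace((unsigned char)*p)) p++; }

static double parse_factor(void) {
    skip_ws();
    if (*p == '(') {
        p++;                          /* consume '(' */
        double v = parse_expr();
        skip_ws();
        if (*p != ')') { fprintf(stderr, "expected ')'\n"); exit(1); }
        p++;                          /* consume ')' */
        return v;
    }
    char *end;
    double v = strtod(p, &end);
    if (end == p) { fprintf(stderr, "expected number\n"); exit(1); }
    p = end;
    return v;
}

static double parse_term(void) {
    double v = parse_factor();
    for (;;) {
        skip_ws();
        if (*p == '*')      { p++; v *= parse_factor(); }
        else if (*p == '/') { p++; v /= parse_factor(); }
        else return v;
    }
}

static double parse_expr(void) {
    double v = parse_term();
    for (;;) {
        skip_ws();
        if (*p == '+')      { p++; v += parse_term(); }
        else if (*p == '-') { p++; v -= parse_term(); }
        else return v;
    }
}

int main(void) {
    p = "1 + 2 * (3 - 4)";
    printf("1 + 2 * (3 - 4) = %g\n", parse_expr());  /* prints -1 */
    return 0;
}
```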

Once you get the grasp of those things, you are well on your way to designing a programming language.

I would spend it on language translation basically, paying someone to translate international documentations on things that aren't documented in USA no matter where you look.
