[-] FizzyOrange@programming.dev 0 points 15 hours ago* (last edited 15 hours ago)

Right, I'm not saying it isn't simpler in terms of syntax. The point I was making is that the syntax is simpler in a way that makes it worse: it's easier for computers to read, but harder for humans.

it was only later discovered that they can be compiled down to native code.

That sounds extremely unlikely. I think you're misinterpreting this quote (which is fair enough; it's not very clear):

Steve Russell said, look, why don't I program this eval ... and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today ...

As far as I can tell Lisp was always intended to be compiled and executed. That quote is about compiling the eval() function (which was just meant to explain how Lisp is executed) into a binary and using that as an interpreter.

Also, I skimmed the paper that quote is from, and in fact Lisp was intended to be targeted by AI (in the same way that we get AI to write and execute Python to solve problems), which explains a lot. It wasn't designed for humans to write, so why bother with nice syntax; just have the machine write the AST directly!

(I expect that was only part of the motivation tbf, but still!)

[-] FizzyOrange@programming.dev 0 points 1 day ago

This comment perfectly captures why I don't like Lisp. Essentially "it's simple: this easy-to-read code transforms to this AST". Lisp basically says "we can make parsing way easier if we force programmers to write the AST directly!", which is really stupid because computers can perfectly well parse syntax that is easy for humans to read and turn it into ASTs automatically.
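
To make that concrete, here's a minimal sketch (a made-up expression type in Rust, not anything from a real compiler): the infix expression 1 + 2 * 3 needs a parser that understands operator precedence before it becomes a tree, whereas the Lisp form (+ 1 (* 2 3)) is essentially that tree written out by hand.

```rust
// Hypothetical expression AST, purely for illustration.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// A tiny evaluator over the tree.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // "1 + 2 * 3" requires a parser that knows * binds tighter than +;
    // "(+ 1 (* 2 3))" is just this structure spelled out directly.
    let expr = Expr::Add(
        Box::new(Expr::Num(1)),
        Box::new(Expr::Mul(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
    );
    println!("1 + 2 * 3 = {}", eval(&expr));
}
```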

It makes it easier to parse for computers at the cost of being much harder to parse for humans, which is really the wrong choice in most cases. (The exception is if you're DIYing your compiler, e.g. if you're teaching how to write a compiler then Lisp is a good target.)

[-] FizzyOrange@programming.dev 1 points 1 day ago

I'm not a fan of Lisps either. But I also think the Nic language is kind of awful... So maybe in comparison it's ok?

[-] FizzyOrange@programming.dev 13 points 1 day ago

Yeah it's great for little scripts. There's even a cargo script feature that's being worked on so you can compile & run them using a shebang.
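
Roughly what that looks like on nightly today (the frontmatter syntax is still unstable, so the details may well change before it ships):

```rust
#!/usr/bin/env -S cargo +nightly -Zscript
---
[dependencies]
anyhow = "1"
---

// A single-file script: cargo reads the dependency block above,
// builds it, and runs the result.
fn main() -> anyhow::Result<()> {
    println!("hello from a cargo script");
    Ok(())
}
```

You should be able to chmod +x it and run it directly, or pass the file to cargo +nightly -Zscript.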

I'd use a shell script if it is literally just a list of commands with no control logic or piping. Anything more than that and you're pointing a loaded gun at your face, and should switch to a proper language, of which Rust is a great choice.

[-] FizzyOrange@programming.dev 4 points 2 days ago

It's definitely a growing problem with Rust. I have noticed my dependency trees growing from 20-50 a few years ago to usually 200-500 now.

It's not quite as bad as NPM yet, where it can easily get into the low thousands. Also the Rust projects I have tend to have justifiably large dependencies, e.g. wasmtime or Slint. I don't think it's unreasonable to expect a whole GUI toolkit to have quite a few dependencies. I have yet to find any dependencies that I thought were ridiculous like leftpad.
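
(If you want to check your own numbers, cargo tree in the project will dump the whole graph; below is a minimal sketch doing the same count programmatically, assuming the cargo_metadata crate is added as a dependency.)

```rust
// Minimal sketch: count packages in the resolved dependency graph.
// Assumes the cargo_metadata crate is available.
use cargo_metadata::MetadataCommand;

fn main() {
    let metadata = MetadataCommand::new()
        .exec()
        .expect("failed to run `cargo metadata`");
    println!(
        "{} packages in the dependency graph",
        metadata.packages.len()
    );
}
```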

We could definitely do with better tooling to handle supply chain attacks. Maybe even a way of (voluntarily) tying crate authors to verified real identities.

But I also wouldn't worry about it too much. If you are really worried, develop in a Docker container, use a dependency cooldown, and whatever you do, don't use cryptocurrencies on your dev machine.

[-] FizzyOrange@programming.dev 9 points 2 days ago

The real cost is the ~$100m they spend a year on developing GitHub and providing it for free to most people, including free CI.

They charge 3x the cost price for runners so that they can actually make money. This change is so that they can't get undercut by alternative hosted runner providers.

I do think they could have just explained that, and it probably would have been more palatable than their "we're making it cheaper!" lie, but I guess there are also a lot of people who still think the only moral pricing is cost-plus.

[-] FizzyOrange@programming.dev 2 points 3 days ago

I'm surprised there aren't more companies offering managed Forgejo instances.

[-] FizzyOrange@programming.dev 1 points 4 days ago

The best option is to switch to C++ and use QtWidgets. You don't need to know much C++ for that; if you want to tediously micromanage strings you can still do that in the business-logic part of the program.

[-] FizzyOrange@programming.dev 60 points 2 months ago* (last edited 2 months ago)

He's right. I think it was really a mistake for RISC-V to support it at all, and any RISC-V CPU that implements it is badly designed.

This is the kind of silly stuff that just makes RISC-V look bad.

Couldn't agree more. RISC-V even allows configurable endianness (bi-endian). You can have machine mode little endian, supervisor mode big endian, and user mode little endian, and you can change that at any time. Software can flip its endianness on the fly. And don't forget that instruction fetch ignores this and is always little endian.

Btw the ISA manual did originally have a justification for having big endian, but it seems to have been removed:

We originally chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and certain legacy code bases have been built assuming big-endian processors, so we have defined big-endian and bi-endian variants of RISC-V.

This is a really bad justification. The cost of defining an optional big/bi-endian mode is not zero, even if nobody ever implements it (as far as I know they haven't). It's extra work in the specification (how does this feature interact with big endian?), in verification (does your model support big endian?), etc.

Linux should absolutely not implement this.

25
submitted 9 months ago* (last edited 9 months ago) by FizzyOrange@programming.dev to c/linux@programming.dev

Edit: rootless in this context means the remote windows appear like local windows; not in a big "desktop" window. It's nothing to do with the root account. Sorry, I didn't come up with that confusing term. If anyone can think of a better term let's use that!

This should be a simple task. I ssh to a remote server. I run a GUI command. It appears on my screen (and isn't laggy as hell).

Yet I've never found a solution that really works well in Linux. Here are some that I've tried over the years:

  • Remote X: this is just unusably slow, except maybe over a local network.
  • VNC: almost as slow as remote X and not rootless.
  • NX: IIRC this did perform well but I remember it being a pain to set up and it's proprietary.
  • Waypipe: I haven't actually tried this but based on the description it has the right UX. Unfortunately it only works with Wayland-native apps and I'm not sure about the performance. Since it's just forwarding Wayland messages, similar to X forwarding, and not e.g. using a video codec, I assume it will have similar performance issues (though maybe not as bad?).

I recently discovered wprs which sounds interesting but I haven't tried it.

Does anyone know if there is a good solution to this decades-old apparently unsolved problem?

I literally just want to ssh <server> xeyes and have xeyes (or whatever) appear on my screen, rootless, without lag, without complicated setup. Is that too much to ask?

[-] FizzyOrange@programming.dev 66 points 10 months ago* (last edited 10 months ago)

His point could be valid if C were working fine and Rust didn't fix it. But C isn't working fine, and Rust is the first actual solution we've ever had.

He's just an old man saying we can't have cars on the road because they'll scare the horses.

[-] FizzyOrange@programming.dev 68 points 1 year ago

OK, after reading the article this is bullshit. It's only because they are counting JavaScript and TypeScript separately.

[-] FizzyOrange@programming.dev 75 points 1 year ago

Actual blog post.

Great accomplishment. I think we all knew it must be like this, but it's great to see real-world results.

I think this is probably actually the most useful part of the post:

Increasing productivity: Safe Coding improves code correctness and developer productivity by shifting bug finding further left, before the code is even checked in. We see this shift showing up in important metrics such as rollback rates (emergency code revert due to an unanticipated bug). The Android team has observed that the rollback rate of Rust changes is less than half that of C++.

I think anyone writing Rust knows this but it's quite hard to convince non-Rust developers that you will write fewer bugs in general (not just memory safety bugs) with Rust than with C++. It's great to have a solid number to point to.

17

Does anyone know of a website that will show you a graph of open/closed issues and PRs for a GitHub repo? This seems like such an obvious basic feature but GitHub only has a useless "insights" page which doesn't really show you anything.

10
Dart Macros (youtu.be)

Very impressive IDE integration for Dart macros. Something to aspire to.

