submitted 2 months ago by neme@lemm.ee to c/technology@lemmy.world
[-] shaked_coffee@feddit.it 77 points 2 months ago

Anyone willing to summarize those mistakes here, for those who can't watch the video rn?

[-] transientpunk@sh.itjust.works 133 points 2 months ago* (last edited 2 months ago)

He doesn't list what the mistakes will be. He said he fears that, because hardware people aren't software people, they will repeat the same mistakes x86 made, which Arm then made again later.

He did mention that fixing those mistakes was faster for Arm than for x86, so there's hope that fixing the mistakes on RISC-V will take less time.

[-] MonkderDritte@feddit.de 27 points 2 months ago* (last edited 2 months ago)

I think it was something to do with instruction sets? Pretty sure I read something about this months ago.

[-] Hotzilla@sopuli.xyz 6 points 2 months ago

No, it was about the prediction engines that contain security vulnerabilities. The problem is that software has no control over them, because the hardware speculates ahead on its own for performance optimization.

[-] MonkderDritte@feddit.de 1 points 2 months ago

Aah, right, that.

[-] wewbull@feddit.uk 1 points 2 months ago

Prediction is a hard problem when coupled with caches. It's relatively easy to say that no speculative instruction has any effect until it's confirmed taken, if you ignore caches. However, caches need to fetch information from memory to allow an instruction to evaluate, and rewinding a cache to its previous state on a mispredict is almost impossible. Especially when you consider that the amount of time a modern processor spends executing non-speculative code is very low.

Not having predictions is consigning yourself to 1990s performance, with faster clocks.

[-] Cocodapuf@lemmy.world 4 points 2 months ago

I mean, that's all chip architectures are, so yes.

[-] SpikesOtherDog@ani.social 23 points 2 months ago* (last edited 2 months ago)

Basically, his concern is that if they don't cooperate with software engineers, the product won't be able to run AAA games.

It's more of a warning than a prediction.

[-] catloaf@lemm.ee 1 points 2 months ago
[-] SpikesOtherDog@ani.social 5 points 2 months ago

Sorry, AAA games. I was swiping on my keyboard and didn't see the mistake.

[-] Reverendender@sh.itjust.works 2 points 2 months ago
[-] sugar_in_your_tea@sh.itjust.works 10 points 2 months ago

Not OP, but consider using FUTO Keyboard. It's made by the group Louis Rossmann works with, and it has offline speech to text (no sending data to Google), swipe keyboard, and completions. It's also source-available, which isn't as good as open source, but you could examine the code and verify their claims if you wanted to.

I'm using it and, while it's not perfect, it's way better than the open source Android keyboards with swiping that I've tried.

[-] victorz@lemmy.world 8 points 2 months ago* (last edited 2 months ago)

Thanks, will try it out! I need an emoji picker though. Does it have that?

Edit: typing with it now. It had an emoji picker. ๐Ÿ‘

  1. I like the picker's grouping, actually. Body parts (hands) are closer to faces.
  2. The recent emoji section doesn't work.
  3. It doesn't have the latest emoji set, as far as I can tell.
  4. The swiping is much more sensitive than Gboard. I'm not a fan as of yet. Maybe it's still learning. Seems like it can't handle the speed as well as Gboard can.
  5. Prediction suggestions are terrible so far.
  6. I don't like that swipe delete doesn't delete whole words.

All in all, I don't think I can recommend it in its current state.

But, if you type by pressing buttons, the predictions are actually pretty good. Maybe that saves a bit of time if you're very stationary and not on the move.

[-] sugar_in_your_tea@sh.itjust.works 5 points 2 months ago

Yeah, it's very much alpha software, but it works surprisingly well for being in such an early state. I'm using it as my keyboard now, and it works well enough, but certainly not perfect.

Then again, I'm willing to deal with a lot of nonsense to avoid Google, so YMMV.

I hear the speech to text is pretty good. I haven't tried it (I hate dictation), but maybe you could give it a whirl before you give up on the keyboard; it's supposed to be its killer feature.

[-] victorz@lemmy.world 2 points 2 months ago

Yeah, I will admit that, too. Very good for alpha software. ๐Ÿ‘

[-] SpikesOtherDog@ani.social 1 points 2 months ago

I'll give it a shot. I'm using Google

[-] Geth@lemmy.dbzer0.com 1 points 2 months ago

Giving it a whirl right now. Thanks for the recommendation.

[-] SpikesOtherDog@ani.social 1 points 2 months ago

Google in this case. I'll try the alternative mentioned

[-] JayDee@lemmy.ml 11 points 2 months ago

Instruction creep maybe? Pretty sure I've also seen stuff that seems to show that Torvalds is anti-speculative-execution due to its vulnerabilities, so he could also be referring to that.

[-] Traister101@lemmy.today 4 points 2 months ago

Counterintuitive, but more instructions are usually better. They enable you (but let's be honest, the compiler) to be much more specific, which usually has positive performance implications for minimal, if any, binary size cost. Take for example SIMD, which provides hyper-specific math operations on large chunks of data. These instructions are extremely specific, but when properly utilized they bring huge performance improvements.

[-] JayDee@lemmy.ml 1 points 2 months ago

I understand some instruction expansions today are used to good effect in x86, but that there are also a sizeable number of instructions that are rarely utilized by compilers and are mostly only continuing to exist for backwards compatibility. That does not really make me think "more instructions are usually better". It makes me think "CISC ISAs are usually bloated with unused instructions".

My whole understanding is that while more specific instruction options do provide benefits, the use cases for these instructions make up a small amount of code and often sacrifice single-cycle completion. The most commonly cited benefit for RISC is that RISC can complete more work (measured as clock cycles per program over clock rate) in a shorter cycle count, and it's often argued that it does so at a lower energy cost.
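The comparison being made here is usually summarized by the classic "iron law" of processor performance:

```
time/program = (instructions/program) × (cycles/instruction) × (time/cycle)
```

RISC designs tend to lower cycles per instruction at the cost of more instructions per program; CISC does the reverse. Which side wins depends on the product of all three terms, which is why ISA arguments alone rarely settle the performance question.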

I imagine that RISC-V will introduce other standards in the future (hopefully after it's finalized the ones already waiting), hopefully with thoroughly thought out instructions that will actually find regular use.

I do see RISC-V proponents running simulated benchmarks showing RISC-V is more effective. I have not seen anything similar from x86 proponents, who usually either make general arguments or, worse, just point at the modern x86 chips that have decades of research, funding, and design behind them.

Overall, I see a lot of doubt that ISAs even matter to performance in any significant fashion, and I believe it for performance at the GHz level of speed.

[-] Cocodapuf@lemmy.world 1 points 2 months ago

This is probably correct.

this post was submitted on 12 Jul 2024
341 points (92.7% liked)