[-] jarfil@beehaw.org 1 points 5 months ago

It's a "push as much data as a baby gets to train its NN" step, which is several orders of magnitude more, and more focused, than any training dataset in existence right now.

Even with diminishing returns, it's bound to get better results.

[-] vrighter@discuss.tchncs.de 1 points 5 months ago

That's not how asymptotes work.
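
To make the asymptote point concrete, here's a minimal sketch assuming a generic saturating power-law scaling form (the function `loss` and all constants below are illustrative, not taken from any particular paper):

```python
# Assumed toy scaling law: loss(D) = l_inf + (d_c / D) ** alpha
# l_inf, d_c and alpha are made-up constants chosen only to show the shape.
def loss(data_tokens, l_inf=1.7, d_c=1e13, alpha=0.1):
    return l_inf + (d_c / data_tokens) ** alpha

for d in [1e12, 1e13, 1e14, 1e15, 1e16]:
    print(f"{d:.0e} tokens -> loss {loss(d):.3f}")

# Each 10x more data still lowers the loss, but the improvement shrinks
# and the curve never drops below l_inf: "more data keeps helping" and
# "the returns flatten toward an asymptote" can both be true at once.
```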

[-] jarfil@beehaw.org 1 points 5 months ago

That's not how watching the video or reading the paper works either.

Whatever.
