No, it's not. Maybe strictly for LLMs, but they were never the endpoint. They're more like a frontal-lobe emulator; the rest of the "brain" still needs to be built. Conceptually, intelligence is largely about interactions between context and data. We have plenty of written data. To create intelligence from that data, we'll need to expand its context into other sensory systems, which we're beginning to see in the combined LLM/video/audio models. Companies like Boston Dynamics are already working with and collecting audio/video/kinesthetic data in the spatial context.

Eventually researchers are going to realize (if they haven't already) that there are massive amounts of untapped data going unrecorded in virtual experiences. Though I'm sure some of the delivery/remote-driving companies are already contemplating how to record their telepresence data to refine their models. If capitalism doesn't implode on itself before we reach that point, the future of gig work will probably be Virtual Turks: via VR, you'll step into the body of a robot when it's faced with a difficult task, complete the task, and then that recorded experience will be used to train future models.

It's sad, because under socialism there's incredible potential for building a society where AI/robots and humanity live in symbiosis, akin to something like The Culture, but it's just gonna be another cyber-dystopia panopticon.
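To make the "recorded experience trains future models" part concrete: in ML terms that's basically behavioral cloning, i.e. supervised learning on logged (observation, human action) pairs. A minimal sketch, where every name, shape, and the synthetic data are my own illustrative assumptions, not any company's actual pipeline:

```python
# Behavioral-cloning sketch: fit a policy to imitate logged teleoperator actions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 7  # e.g. flattened sensor state -> joint commands

# Stand-in for logged telepresence sessions: observations the robot saw,
# paired with the actions the remote human operator took.
obs = torch.randn(1024, OBS_DIM)
human_actions = torch.randn(1024, ACT_DIM)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    pred = policy(obs)                                  # what the robot would do
    loss = nn.functional.mse_loss(pred, human_actions)  # penalty for deviating from the human
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, the policy handles tasks it has seen; novel failures get
# punted back to a teleoperator, logged, and folded into the next dataset.
```

That last comment is the grim loop the comment describes: every "difficult task" a human solves in VR becomes one more training example toward replacing them.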
me
They already have. A lot of robots are already being trained in simulated environments, and NVIDIA is developing frameworks to help accelerate this. It's also how things like AlphaGo were trained, with self-play (rough sketch below), and those reinforcement-learning algorithms will probably be extended to LLMs.
Also, like you said, there's a lot of still-untapped data in audio/video, and that's starting to be incorporated into the models.
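For anyone curious what "trained with self-play" means mechanically, here's a toy version: a single tabular Q-learning agent improving by playing tic-tac-toe against itself. The state encoding, reward scheme, and hyperparameters are simplified illustrations on my part, nothing like DeepMind's actual setup.

```python
# Toy self-play loop: one shared Q-table plays both sides of tic-tac-toe.
import random
from collections import defaultdict

Q = defaultdict(float)          # (board, move) -> estimated value
EPS, ALPHA, GAMMA = 0.1, 0.5, 0.9

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

def flip(b):
    # Re-encode the board from the mover's perspective so both players can
    # share one Q-table: "my" marks are always 'X'.
    return b.translate(str.maketrans('XO', 'OX'))

def choose(b):
    legal = moves(b)
    if random.random() < EPS:               # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(b, m)])

def episode():
    board = ' ' * 9
    history = []                            # (state as seen by mover, move)
    while True:
        state = board
        m = choose(state)
        history.append((state, m))
        board = board[:m] + 'X' + board[m + 1:]   # mover always plays 'X'
        w = winner(board)
        done = w is not None or not moves(board)
        board = flip(board)                 # hand the board to the opponent
        if done:
            # +1 for the winner's moves, sign-flipped and discounted each
            # ply back so the loser's moves are penalized; 0 on a draw.
            r = 1.0 if w else 0.0
            for s, mv in reversed(history):
                Q[(s, mv)] += ALPHA * (r - Q[(s, mv)])
                r = -GAMMA * r
            return

for _ in range(50_000):
    episode()
```

The key trick is the perspective flip: because both "players" share one value table, every game generates training signal for both the winning and losing side, which is what lets self-play bootstrap skill from nothing.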
Yeah, I'm familiar with a bunch of autonomous vehicles/drones being trained in simulated environments, but I'm also thinking of stuff like VRChat.
My one quibble: that's not the future of gig work, it's the present
It's been a few years since I've used MTurk, but there were very few VR-based jobs when I last used it. Has that changed?
Ah sorry, I was just being a smartass; no idea how much VR is on MTurk now. To be clear, I think you've got an accurately bleak view of the future of this stuff.
Ah, no worries. Yeah, pretty grim, and I've not even gotten into the horror of what they're gonna do with our biometric data. lol.