tl;dr: Autonomous driving normally uses a whole host of different kinds of sensors. Musk said "NO, WE WILL ONLY USE VISION CAMERA SENSORS." And that doesn't work.
Guess what? I have eyes; I can see. You know what I want an autonomous vehicle to be able to do? Receive sensory input that I can't.
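To make the "fuse sensors that see what I can't" point concrete, here's a toy sketch (Python, with made-up noise numbers, not anyone's actual stack): combining a noisy camera range estimate with a precise radar one by inverse-variance weighting beats either sensor alone.

```python
# Toy sensor-fusion sketch: inverse-variance weighting of independent
# range estimates. Numbers are illustrative, not any vendor's real specs.
from dataclasses import dataclass

@dataclass
class Measurement:
    distance_m: float   # estimated range to the object
    variance: float     # sensor noise (cameras are poor at range, radar is good)

def fuse(measurements: list[Measurement]) -> Measurement:
    """Fuse independent range estimates by inverse-variance weighting."""
    weights = [1.0 / m.variance for m in measurements]
    total = sum(weights)
    distance = sum(w * m.distance_m for w, m in zip(weights, measurements)) / total
    return Measurement(distance_m=distance, variance=1.0 / total)

camera = Measurement(distance_m=48.0, variance=25.0)  # vision: noisy range estimate
radar = Measurement(distance_m=51.5, variance=1.0)    # radar: precise range
fused = fuse([camera, radar])
print(f"fused: {fused.distance_m:.1f} m (var {fused.variance:.2f})")
# Drop radar and you're back to the high-variance camera estimate alone.
```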
What's worse is that it will be hard to reverse this decision. Tesla is a data and AI company, compiling vision and driving data from drivers around the world. If you change the sensor format or layout dramatically, the old data and the new data become hard to hybridize. You basically start from scratch, at least for the new sensors, and you fail to deliver on a promise to old customers.
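To illustrate the hybridization headache (hypothetical field names, not Tesla's actual pipeline): once newer cars log a sensor that the old fleet never had, every consumer of the data has to either discard the old logs or grow a missing-modality path.

```python
# Hypothetical sketch of the "hard to hybridize" problem. The schema and
# field names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    camera_jpeg: bytes                   # present in every fleet log, old and new
    radar_points: Optional[list] = None  # only present in logs from newer cars

def training_batch(frames: list[Frame]) -> list[Frame]:
    # Option A: require the new modality -> throws away years of
    # camera-only fleet data.
    return [f for f in frames if f.radar_points is not None]

def training_batch_masked(frames: list[Frame]) -> list[tuple[Frame, bool]]:
    # Option B: keep everything but carry a "has radar" mask, and make the
    # model tolerate absent inputs -- more engineering, and the old logs
    # never retroactively gain the new signal.
    return [(f, f.radar_points is not None) for f in frames]

frames = [Frame(camera_jpeg=b"old"),
          Frame(camera_jpeg=b"new", radar_points=[(1.0, 2.0)])]
print(len(training_batch(frames)), "of", len(frames),
      "frames usable if the new sensor is mandatory")
```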
Sounds to me like they should go full steam ahead with new sensors; they will never deliver on what they've promised with the tech they're using today.
Old customers' situation won't change, and it would only get better going forward.
I don't see why that would have to be the case if the new data is a complete superset of the old data. If all the same cameras are there, then the extra sensors, and the data they collect, can actually help train the processing of the visual-only data, right?
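Right, and one way this works in practice is cross-modal supervision: sparse depth from a new lidar labels the camera frames, training a vision-only depth network that still runs on cameras alone at inference. A minimal PyTorch-style sketch (toy model, random tensors standing in for real data):

```python
# Sketch of cross-modal supervision: lidar depth supervises a camera-only
# depth net. Assumes PyTorch; model and data are toy stand-ins.
import torch
import torch.nn as nn

depth_net = nn.Sequential(               # toy monocular depth model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(depth_net.parameters(), lr=1e-3)

image = torch.rand(1, 3, 64, 64)              # camera frame (same sensor as before)
lidar_depth = torch.rand(1, 1, 64, 64) * 80   # projected lidar returns, in meters
valid = torch.rand(1, 1, 64, 64) > 0.9        # lidar is sparse: ~10% of pixels hit

pred = depth_net(image)
loss = torch.abs(pred - lidar_depth)[valid].mean()  # L1 loss only where lidar exists
loss.backward()
opt.step()
# At inference the car runs depth_net on cameras alone, so the new-sensor
# data improves the vision-only stack instead of starting from scratch.
```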