this post was submitted on 24 Jun 2024
technology
it's just as much a statistical accident when the models correspond with reality as when they don't
I would not necessarily say that is true, and the article summarizes a philosophically interesting reason why:
Have you actually used ChatGPT? The vast majority of the time it spits out good enough info. We use it at work frequently to write the more tedious code. Ex: it's written approximately 7 trillion querySelectors for me, and as long as I hand-hold it, it will do a good job.
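For illustration, a sketch of the kind of boilerplate being described — note this is a hypothetical example, not code from the thread; the `byTestId` helper and the `data-testid` attribute convention are assumptions:

```javascript
// Hypothetical helper of the kind an LLM churns out reliably when hand-held:
// build a CSS attribute selector string for a data-testid lookup.
function byTestId(id) {
  return `[data-testid="${id}"]`;
}

// In a browser you would then pass it to the DOM API:
//   const btn = document.querySelector(byTestId("save-button"));
console.log(byTestId("save-button")); // → [data-testid="save-button"]
```

Tedious but mechanical code like this is exactly the case the commenter describes: the pattern is repetitive, so the model rarely has room to be subtly wrong.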
The biggest problem is anything involving human safety. You also have to know that you need to hand-hold it to get it to spit out something more or less exactly what you intended. But if you use it to draft a custom cover letter, it's probably gonna do a good enough job, and it's not like anyone is actually reading that shit. It's great at doing basic math that involves a lot of unit conversions for me. It sure as hell ain't the end-all be-all that every tech company seems to be pushing, but it's sure as hell not wrong 50% of the time.
For me it is wrong more than 95% of the time. I stopped using it because it was just a waste of time. I am not doing particularly difficult or esoteric programming work, and it just could not hack it at all. Often the ways it was wrong were quite subtle. And it presents wrong answers with the exact same confidence as it presents right answers.