AGI is so close, it's almost scary
(hexbear.net)
This fucking stupid quantum computer can't even solve math problems my classical von Neumann architecture computer can solve! Hahahah, this PROVES computers will never be smart. Only I am smart! The computer doesn't even possess a fraction of my knowledge of anime!!
In a rapidly deteriorating ecology, throwing $300 billion per year at this tech for turning electricity into heat does seem ill-advised, yes.
then I guess it's a good thing that in addition to producing humorous output when prompted with problems it's ill suited to solve, it can also pass graduate level examinations and diagnose disease better than a doctor.
Amazing, it can pass tests it churned through 1000 times but cannot produce a simple answer a child might stumble through. It's not cognition, it's regurgitation. You go get diagnosed at the llm-shop mate, have fun
Yeah, you're right! What use is having the entirety of medical knowledge in every language REGURGITATED, in a context-aware fashion, to someone who can't afford a doctor? After all, it's not cognition in the same way that I do it.
How many shitty doctors getting nudged towards a better outcome for real people does this tech need to demonstrate to offset its OCEAN BOILING costs, do you think?
At least 3 million.
Cite your sources, mate; AI-driven image recognition of lung issues is kind of a semi-joke in the field.
The majority of shit health outcomes is not missing an esoteric cancer on an image; it's an overworked nurse missing a bone fracture, it's not getting urea/blood analysis done in time, it's a doctor prescribing antibiotics without probiotics afterwards, it's a drug being locked up by IP in a poor country, or a drug costing too much because a Johnson acquisition spent that much money on the patent, or the nuts pricing of clinical trials. Developing a new working drug costs like $40 million; trialing it through the FDA costs $2 billion. Now you tell me how AI cutting that $40 million to $20 million will make it cheaper.
The majority of healthcare work is, you know, work. Patient care, surgery; not fucking Doctor House, MD finding the right drug. 95% of cases could be solved by an honest WebMD, congrats. Who will set your broken arm? Will AI do an MRI scan of your ACL? Maybe an X-ray? A dipshit can look at an image and say that's wrong; AI can tell you to put it in a cast and avoid lateral movements for a month, so what then?
This is so off the mark it's not worth my time.
Can't wait to pick up my prescription for hyperactivated antibiotics.
https://www.cio.com/article/3593403/patients-may-suffer-from-hallucinations-of-ai-medical-transcription-tools.html
How often do you think use of AI improves medical outcomes vs makes them worse? It's always super-effective in the advertising but when used in real life it seems to be below 50%. So we're boiling the oceans to make medical outcomes worse.
To answer your question, AI would need to demonstrate improved medical outcomes at least 50% of the time (in actual use) for me to even consider looking at it being useful.
50% is the number, yeah? I wish y'all took "no investigation, no right to speak" more seriously.
They've provided a source, indicating that they have done investigation into the issue.
The quote isn't "If you don't do the specific investigation that I want you to do and come to the same conclusion that I have, then no right to speak."
If you believe their investigation led them to an erroneous position, it is now incumbent on you to make that case and provide your supporting evidence.
Y'all are suffering because of the lack of downvotes, so you need to actually dunk on someone instead of downvoting and moving on
We need to make a ChatGPT-powered dunking bot
ChatGPT is censored, this calls for some more advanced LLMing, perhaps even a finetune based on the Hexbear comment section argument corpus. It's only ethical if we do it for the purpose of dunking on chuds/libs
LLMs are categorically not AI, they're overgrown text parsers based on predicting text. They do not store knowledge, they do not acquire knowledge, they're basically just that little bit of speech processing that your brain does to help you read and parse text better, but massively overgrown and bloated in an attempt to make that also function as a mimicry of general knowledge. That's why they hallucinate and are constantly wrong about anything that's not a rote answer from their training data: because they do not actually have any sort of thinking bits or mental model or memory, they're just predicting text based on a big text log and their prompts.
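For what it's worth, the "predicting text" mechanism being described can be sketched with a toy example. This is a hypothetical character-level bigram model, nothing remotely like a production transformer, but it shows the same next-token objective: emit the most frequent continuation seen in training, with no model of meaning anywhere:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count, for each character, which character follows it and how often.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length):
    # Greedily emit the most frequent next character, over and over.
    # No knowledge, no world model: just a frequency lookup table.
    out = start
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out += followers.most_common(1)[0][0]
    return out

model = train_bigram("the cat sat on the mat. the cat sat.")
print(generate(model, "t", 5))
```

Swap the counting for learned weights and attention over long contexts and you get the systems being argued about; whether that scaling-up amounts to "knowledge" is exactly the dispute in this thread.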
They're vaguely interesting toys, though hardly worth how ludicrously expensive they are to actually operate, and they represent a fundamentally wrong approach that's receiving an obscene amount of resources to try to make it not suck, without any real results to show for it. The sorts of math and processing involved in how they work internally have broader potential, but these narrowly focused chatbots suck and are a dead end.
These models absolutely encode knowledge in their weights. One would really be showing their lack of understanding about how these systems work to suggest otherwise.
Except they don't, definitionally. Some facts get tangled up in them and can consistently be regurgitated, but they fundamentally do not learn or model them. They no more have "knowledge" than image generating models do, even if the image generators can correctly produce specific anime characters with semi-accurate details.
"Facts get tangled up in them". lol Thanks for conceding my point.
Don't be fatuous. See my other comment here: https://hexbear.net/comment/5726976
This is exactly what I'm talking about. This is potentially the biggest technological innovation in a long time and it's going to completely sideswipe all of you because of this toxic attitude. Separate the players from the actual game.
Thankfully planners in China don't succumb to this western learned helpless routine or they'd miss out on all the potential gains of the last few decades by sleeping on the extremely obvious potential for factory automation.
Like, how can you see this kind of thing and just be like "treat printer bazinga boil the oceans waifu" etc. Is it perfect yet? No. Did it just come out in the past couple years and already obviate decades of expert systems research? Yes, it absolutely did.
Yeah, great I look forward to the western left leading the butlerian jihad. There's a reason why Luddites are synonymous with getting jack shit done, and I would expect so called materialists to make the cold calculation necessary to understand this.
It's obviously not just language models at work here, it's transformer-based architectures in general. Why do you think we can generate video and text, transcribe audio, and do a host of other things like protein discovery all dramatically better than a couple years ago? And why did this all happen around the same time? Major tech companies are currently folding up their prior machine learning efforts because they've been BTFO by this leap in tech. This is something that is absolutely happening in the ML space across the software industry.
The fact that you think I'm even talking about LLMs exclusively is such a myopic view of what's really going on. There has been an explosion of robotics breakthroughs in the past year alone because of this kind of thing, it's not just that video, look at what Unitree is doing or any of the other Chinese robotics companies that are dominating this space now.
You guys are the redditors you hate when you come at this with the same energy as a liberal about genocide in Xinjiang or something, like you've already made up your mind. Just believe what you want to believe about it. I'm done trying to educate you. Only time will tell and it's not like I get anything out of it when the goalposts move yet again.
Sorry to be a dickhead. This is just what a strong difference of opinion looks like. Everyone jumps up my ass for daring to say that "AI" is not just a grift but an actual threat (and opportunity). Like, yes, Silicon Valley are grifters, but that doesn't preclude them from cooking up useful engineering once in a while.
My whole point is that we need to abandon our immature/dirtbag analysis of this issue and get more professionalized about things or we're gonna get really rinsed in the 21st century.
Because of the way society runs, everything we do is tremendously damaging to the environment, unfortunately. The upside of that is that people who want to automate labour have a lot of carbon budget to work with, e.g. by keeping people off roads and out of offices and such. With the algorithmic and hardware efficiencies that are already slated, we may end up saving energy in the near future.
There's nothing we can really do to stop these systems from being utilized either, any more than we can ban gaming hardware (based). But it's sort of a prisoner's dilemma, like military spending.