Leak confirms OpenAI's ChatGPT will integrate MCP
(www.bleepingcomputer.com)
No, you can't. It cannot reason. It's just been fed so much existing text that it sometimes appears as if it can. That's an extremely dangerous foundation on which to build anything.
I doubt it's been fed text about "bergro", "parava", and "rortx"; this looks like basic reasoning to me:
For the sake of completeness, this is qwen3:1.7b running on ollama on a smartphone. Its reasoning is more convoluted (and slow), yet the conclusion is the same:
think
Okay, let's see. The user is asking if all rortx are parava, given that all bergro are rortx and all parava are rortx. Hmm, so first, I need to parse the statements correctly.
The first part says "If all bergro are rortx, and all parava are rortx, are all rortx parava?" So, the premises are that all bergro are rortx, and all parava are rortx. The conclusion is that all rortx are parava.
Wait, but the question is a bit confusing. Let me break it down. The premises are:
1. All bergro are rortx.
2. All parava are rortx.
Conclusion: All rortx are parava.
So, the conclusion is that every rortx is a parava. But the premises only say that some things (bergro and parava) are rortx. But there's no information about the relationship between bergro and parava. Are bergro and parava different sets? Or is there an overlap?
If bergro and parava are distinct groups, then the premises only state that bergro are rortx and parava are rortx. But that doesn't necessarily mean that rortx are parava. Because rortx could be a subset of bergro or a separate set. For example, if bergro is a subset of rortx, and parava is another subset of rortx, then rortx would have two subsets, but the conclusion would be that all rortx are parava, which would require that rortx is entirely parava. But unless there's a statement that rortx is a subset of parava, that conclusion wouldn't hold.
But wait, the premises don't say that bergro and parava are related. They just state that all bergro are rortx and all parava are rortx. So, if bergro and parava are separate, then rortx could have elements from both. So, the conclusion that all rortx are parava would require that there's no bergro in rortx. But the premises don't say that. So, unless there's an implicit assumption that bergro and parava are disjoint, which isn't stated here, the conclusion can't be drawn.
Alternatively, if bergro and parava are the same set, but that's not stated. The problem is that the premises don't establish any relationship between bergro and parava. So, the conclusion that all rortx are parava can't be logically derived from the given premises. Therefore, the answer would be no, not all rortx are parava, unless there's additional information linking bergro and parava.
The premises state that all bergro are rortx and all parava are rortx. However, there is no information linking bergro and parava. If bergro and parava are distinct groups, rortx could consist of elements from both, making it impossible to conclude that all rortx are parava. Without additional constraints, the conclusion cannot be logically derived.
Answer: No, not all rortx are parava. The premises do not establish a relationship between bergro and parava, so rortx could include elements from both groups.
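For what it's worth, the model's verdict is easy to check by hand. Here's a tiny sketch (the sets and element names are made up purely for illustration) of a situation where both premises hold but the conclusion fails:

```python
# Hypothetical toy sets, just to illustrate the syllogism; the names are invented.
bergro = {"b1"}
parava = {"p1"}
rortx = bergro | parava  # every bergro and every parava is a rortx

# Both premises hold:
assert bergro <= rortx   # all bergro are rortx
assert parava <= rortx   # all parava are rortx

# The conclusion fails: "b1" is a rortx but not a parava.
print(rortx <= parava)   # False
```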
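And if anyone wants to reproduce the run above, here's a rough sketch using the ollama Python client. It assumes the ollama server is running locally and that qwen3:1.7b has already been pulled; the prompt wording is my paraphrase of the question, and the output will vary between runs:

```python
# Rough sketch: asking qwen3:1.7b the same question through the ollama Python client.
# Assumes ollama is running and the model has been pulled (e.g. `ollama pull qwen3:1.7b`).
import ollama

prompt = (
    "If all bergro are rortx, and all parava are rortx, "
    "are all rortx parava?"
)

response = ollama.chat(
    model="qwen3:1.7b",
    messages=[{"role": "user", "content": prompt}],
)

print(response["message"]["content"])
```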
Yeah, it looks like basic reasoning but it isn't. These things are based on pattern recognition. "Assume all x are y, all z are y, are all z x?" is a known formulation ... I've seen it a fair number of times in my life.
Recent developments have added this whole "make it prompt itself about the question" phase to try and make things more accurate ... but that also only works sometimes.
AI in LLM form is just a sick joke. It's like watching a magic trick where half of people expect the magician to ACTUALLY levitate next year because ... "they're almost there!!"
Maybe I'll be proven wrong, but I don't see it...
You're not wrong, but I don't think you're 100% correct either. The human mind is able to synthesize reasoning by using a neural network: connections between neurons that add up to a profoundly complex statistical model. LLMs do the same thing, essentially, and they do it poorly in comparison. They don't have the natural optimizations we have, so they kinda suck at it now, but dismissing the capabilities they currently have entirely is probably a mistake.
I'm not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they're used, and that needs to be addressed, and I think we're only a few clever optimizations away from a threat.
I don't buy the "it's a neural network" argument. We don't really understand consciousness or thinking ... and consciousness is possibly a requirement for actual thinking.
Frankly, I don't think thinking in humans is based on anything like statistical probabilities.
You can of course apply statistics and observe patterns and mimic them, but correlation is not causation (and generally speaking, society is far too willing to accept correlation).
Maybe everything reduces to "neural networks" in the same way LLM AI models them ... but that seems like an exceptionally bold claim for humanity to make.
It makes sense that you don't buy it. LLMs are built on simplified renditions of neural structure. They're totally rudimentary.