You don’t understand what science or scientific thinking is.
Uhh.. elaborate please?
Clean with bidet, dry with tp. Also uses less tp
A consciousness is not an “output” of a human brain.
Fair enough. Obviously consciousness is more complex than that. I should have put "efferent neural actions" first in that case, with consciousness just being a side effect, something different yet composed of the same parts, an emergent phenomenon. How would you describe consciousness, though? I wish you would offer that instead of just saying "nuh uh" and calling me ChatGPT :(
I'm not sure how you got from what I wrote to the rest of your comment, though. I never mentioned humans teaching each other causal relations? I only compared the training of neural networks to evolutionary principles, where at one point we had entities that interacted with their environment in fairly simple and predictable ways (a "deterministic algorithm" if you will, as you said in another comment), and at some later point we had entities that we would call intelligent.
What I am saying is that at some point the pattern recognition "trained" by evolution (where the inputs are environmental distress/eustress and the outputs are actions favorable to the survival of the organism) grew so advanced that it became self-aware (higher pattern recognition on itself?), among other things. There was a point, though, some characteristic, self-awareness or not, at which we start calling something intelligent as opposed to unintelligent. When I asked where you draw the line, I wanted to know what characteristic(s) need to be present for you to elevate something from the status of "pattern recognition" to "intelligence".
It's tough to decide whether more primitive entities were able to form causal relationships. When they saw predators, did they know that they were going to die if they didn't run? Did they at least know something bad would happen to them? Or was it just a pre-programmed neural response that caused them to run? Most likely the latter.
Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That’s true intelligence.
From another comment, I'm not sure what you mean by "understands". It could mean having knowledge about the nature of a thing, or it could mean interpreting things in some (meaningful) way, or it could mean something completely different.
To your last point, logical thinking is possible, but of course we humans can't do it on our own. We had to develop a system for logical thinking (which we call "logic", go figure) as a framework because we are so bad at doing it ourselves. We had to develop statistical methods to determine causal relations because we are so bad at doing that on our own. So what does it mean to "understand" a thing? When you say an animal "understands" causal relations, does it actually understand them, or is it just another form of pattern recognition (which is why I mentioned Pavlov in my last comment)? When humans "understand" a thing, do we actually understand, or do we just encode it with frameworks built on pattern recognition to help guide us? A scientific model is only a model, built on trial and error. If you "understand" the model, you do not "understand" the thing that it is encoding. I know you said "to varying degrees", and this is the sticking point. Where do you draw the line?
When you want to have artificial intelligence, even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it’s just that the functionality will be very limited and pretty much appear useless. [...] You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way that humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.
I recognize that you understand the point I am trying to make. I am trying to make the same point, just from a different perspective. Your description of an "actually intelligent" artificial intelligence closely matches how sensory data is integrated in the layers of the visual cortex, perhaps on purpose. My question still stands, though. A more primitive species would integrate data in a similar, albeit slightly less complex, way: take in (visual) sensory information, integrate the data to extract easier-to-process information such as brightness, color, lines, and movement, and send it to the rest of the nervous system for further processing to eventually yield some output in the form of an action (or a thought, in our case). In the process of integrating, though, we necessarily lose information along the way for the sake of efficiency, so what we perceive does not always match what we see, as you say. Image recognition models do something similar, integrating individual pixel information using convolutions and such to see how well it matches an easier-to-process shape, then integrating it further. Maybe such a model can't reason about what it's seeing, but it can definitely see shapes and colors.
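As a side note, here is a minimal sketch of the kind of low-level integration I mean, in plain NumPy rather than any particular model or brain region: a single convolution pass that collapses raw pixel values into an easier-to-process "edge" signal. The image and kernel values are made up purely for illustration.

```python
# Minimal sketch: one convolution pass turning raw pixels into an "edge" signal.
# The tiny image and the kernel values are illustrative, not from any real model.
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over a grayscale image and sum the overlaps."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny synthetic "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
], dtype=float)

# Sobel-style kernel that responds to left-to-right brightness changes.
sobel_x = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
], dtype=float)

edges = convolve2d(image, sobel_x)
print(edges)  # large values along the dark/bright boundary, ~0 elsewhere
```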
You will notice that we are talking about intelligence, which is a remarkably complex and nuanced topic. It would do some good to sit and think deeply about it, even if you already think you understand it, instead of asserting that whoever sounds like they might disagree with you is wrong and calling them chatbots. I actually agree with you that calling modern LLMs "intelligent" is wrong. What I ask is what you think would make them intelligent. Everything else is just context so that you understand where I'm coming from.
What do you call the human brain then, if not billions of “switches” as you call them that translate inputs (senses) into an output (intelligence/consciousness/efferent neural actions)?
It’s the result of billions of years of evolutionary trial and error to create a working structure of what we would call a neural net, which is trained on data (sensory experience) as the human matures.
Even early nervous systems were basic classification systems. Food, not food. Predator, not predator. The inputs were basic olfactory sense (or a more primitive chemosense probably) and outputs were basic motor functions (turn towards or away from signal).
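To caricature that stage in code (Python here; the threshold and signal values are arbitrary, not biological measurements), the whole "classifier" is a single threshold rule mapping a sensed signal straight to a motor output:

```python
# Caricature of a primitive "food / not food" response: one chemical input,
# one arbitrary threshold, one of two motor outputs. No model of "why" exists.
def primitive_response(chemical_signal: float, threshold: float = 0.5) -> str:
    """Map a sensed signal directly to an action."""
    return "turn toward signal" if chemical_signal > threshold else "turn away"

for signal in (0.1, 0.4, 0.7, 0.9):
    print(signal, "->", primitive_response(signal))
```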
The complexity of these organic neural networks (nervous systems) increased over time, and we eventually got what we have today: human intelligence. There are arguably different types of intelligence, though, since it evolved along many different phylogenetic lines: dolphins, elephants, dogs, and octopuses have all been demonstrated to have some form of intelligence. But given the information in the previous paragraph, one can say that they are all just more and more advanced pattern recognition systems, trained by natural selection.
The question is: where do you draw the line? If an organism with a photosensitive patch of cells on top of its head darts in a random direction when it detects sudden darkness (perhaps indicating a predator flying/swimming overhead, though not necessarily with 100% certainty), would you call that intelligence? What about a rabbit, who is instinctively programmed by natural selection to run when something near it moves? What about when it differentiates between something smaller or bigger than itself?
What about you? How will you react when you see a bear in front of you? Or when you’re in your house alone and you hear something that you shouldn’t? Will your evolutionary pattern recognition activate only then and put you in fight-or-flight? Or is everything you think and do a form of pattern recognition, a bunch of electrons manipulating a hundred billion switches to convert some input into a favorable output for you, the organism? Are you intelligent? Or just the product of a 4-billion-year-old organic learning system?
Modern LLMs are somewhere in between those primitive classification systems and the intelligence of humans today. They can perform word associations in a higher-dimensional semantic space, encoding individual words as vectors, which lets the model attribute a sort of meaning to the relationship between two words. Comparing those encoding vectors in different ways yields either another word vector, which could be called an association, or a scalar (like a Euclidean or angular distance), which might encode closeness in meaning.
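To make that concrete, here is a toy illustration of those comparisons, using made-up 3-D vectors in place of an LLM's actual high-dimensional embeddings:

```python
# Toy embeddings: made-up 3-D vectors standing in for real, much higher-dimensional ones.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Scalar comparison: angular closeness in meaning (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vector comparison: adding/subtracting embeddings yields another vector,
# which can be read as an "association" (the classic king - man + woman example).
association = embeddings["king"] - embeddings["man"] + embeddings["woman"]

closest = max(embeddings, key=lambda w: cosine_similarity(association, embeddings[w]))
print(closest)  # with these toy numbers, the nearest stored vector is "queen"
```

With these hand-picked numbers the association lands exactly on "queen"; real embeddings are noisier, but the arithmetic is the same kind of comparison.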
Now, if intelligence requires understanding as you say, what degree of understanding of its environment (the ecosystem for organisms, text for an LLM; different types of intelligence, as in paragraph 4) does an entity need for you to designate it as intelligent? What associations does it need to make? Categorizations of danger/not danger and food/not food? What is the difference between that and the Pavlovian responses of a dog? And what makes humans different, aside from a more complex neural structure that allows us to integrate orders of magnitude more information, more efficiently?
Where do you draw the line?
Technically a vagina has six holes (assuming this guy is talking about the whole genitalia when he says vagina):
The urethra, where pee comes out
The vagina, where sex
Two paraurethral glands (Skene’s glands), which secrete lubricating mucus during arousal and also produce female ejaculate when squirting (it’s not piss!) - these glands are homologous to the prostate in males
Two greater vestibular glands (Bartholin glands, which are paravaginal), which also secrete lubricating fluid.
Although I would advise against putting anything in those last four (they are visible to the naked eye but still very small). Also not sure how he counted 5.
I think you are, bro. http://soulism.net/