No Stupid Questions
No such thing. Ask away!
!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- All posts must be legitimate questions. All post titles must include a question.
All posts must be legitimate questions, and all post titles must include a question. Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for the exceptions.
Rule 2- Your question subject cannot be illegal or NSFW material.
Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding META posts and joke questions.
Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.
On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.
If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.
Rule 7- You can't intentionally annoy, mock, or harass other members.
If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you have provably been vocal about your hate, then you will be banned on sight.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- The majority of bots are not allowed to participate here.
Credits
Our breathtaking icon was bestowed upon us by @Cevilia!
The greatest banner of all time: by @TheOneWithTheHair!
The AIs we have at our disposal can't invent a thing - yet - because they aren't true AIs - again: yet.
They are merely tools, and should be perceived as nothing more. It's the people who use them who may apply them to tasks that result in invention; on their own, they are closer to the Chinese Room principle than to thinking, inventive constructions.
I agree with the basic idea, but there's not some fundamental distinction between what we have now and true AI. Maybe we'll find breakthroughs that help, but the systems we're using now would work given enough computing power and training. There's nothing the human brain can do that they can't, so with enough resources they can imitate the human brain.
Making one smarter than a human wouldn't be completely trivial, but I doubt it would be all that difficult, given an AI powerful enough to imitate something smarter than a human.
Are AIs we have at our disposal able and allowed to self-improve on their own? As in: can they modify their own internal procedures and possibly reshape their own code to better themselves, thus becoming more than their creators predicted them to be?
The human brain can:
These are of course tongue-in-cheek examples of what a human brain can do, but - from the perspective of neuroscience, psychology, and a few adjacent fields of study - it is absolutely incorrect to say that AIs can do what a human brain can, because we're still not sure how our brains work, or what they are capable of.
Based on some dramatic articles we see in the news that promise us "trauma erasing pills" or "new breakthroughs in treating Alzheimer's", we may tend to believe that we know what this funny blob in our heads is capable of, and that we have but a few small secrets left to uncover, but the fact is that we can't even be sure just how much there is to discover.
Yes. That's what training is. There are systems for having them write their own training data. And ultimately, an AI that's good enough at copying a human can write any text that human can. Humans can improve AI by writing code. So can an AI. Humans can improve AI by designing new microchips. So can an AI.
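The claim that "that's what training is" can be sketched with a toy example: a model that adjusts its own parameters from data, with no human editing the weights by hand. Everything below (the `predict`/`train` names, the one-parameter linear model, the learning rate) is invented purely for illustration and is not any real AI system:

```python
# Toy illustration of "training": a model improves its own parameters
# from data, rather than a human editing its weights directly.

def predict(w, x):
    return w * x  # a one-parameter linear "model"

def train(w, data, lr=0.01, steps=200):
    for _ in range(steps):
        for x, y in data:
            error = predict(w, x) - y
            w -= lr * error * x  # gradient step on squared error
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target relationship: y = 2x
w = train(0.0, data)
print(round(w, 2))  # converges toward 2.0
```

The point of the sketch is only that the improvement loop is mechanical: given data and a feedback signal, the system updates itself, which is the sense in which training already is a form of self-improvement.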
We know they follow the laws of physics, which are Turing-complete. And we have pretty good reason to believe that their calculations aren't reliant on quantum physics.
Individual neurons are complicated, but there's no reason to believe the exact way they're complicated matters. They're complicated because they have to be self-replicating and self-repairing.
I'm not talking about building a database of data harvested from external sources. I'm not talking about the designs they make.
I'm asking whether AIs are able and allowed to modify THEIR OWN code.
Scientists are continuously baffled by the universe - a very physical thing - and the things they discover there. The point is that knowing that a thing follows certain specific laws does not give us understanding of it or mastery over it.
We do not know the full extent of what our brains are capable of. We do not even know where "the full extent" may end. Therefore we can't say that AIs are capable of doing what our brains can, even if the underlying principles seem "basic" and "straightforward".
It's like comparing a calculator to a supercomputer and claiming the former can do what the latter does, because "it's all 0s and 1s, man". 😉
Yes. They can write code. Right now they don't have a big enough context window to write anything very useful, but scale everything up enough and they could.
And my point is that neural networks don't require understanding of whatever they're trained on. The reason I brought up that human brains are turing complete is just to show that an algorithm for human-level intelligence exists. Given that, a sufficiently powerful neural network would be able to find one.
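The point that a network of simple units can compute things no single unit can - without any "understanding" - is classically demonstrated with XOR, which no single linear unit can represent. The weights below are hand-chosen for illustration (a conceptual sketch, not a trained model):

```python
# A hand-wired two-layer network computing XOR, a function no single
# linear threshold unit can compute. Illustrates how networks of
# simple units gain expressive power, with no understanding involved.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x, y):
    h_or  = step(x + y - 0.5)        # fires if at least one input is 1
    h_and = step(x + y - 1.5)        # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)  # "OR but not AND" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

Training would normally find such weights automatically; the hand-wiring here just makes it easy to verify that the structure alone, not comprehension, does the computing.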
You don't seem to understand me, or are trying very hard to not understand me.
I'll try again, but if it fails, I'll assume it's a "lead a horse to water" case.
So: can AIs write their own code? As in, "rewrite the code that is them"? Not write some small pieces of code or a small app - can they write THEIR OWN code, the code that makes them run?
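For what "rewrite the code that is you" could even mean mechanically, here is a toy sketch: a program that holds its own source as text, edits it, and runs the edited version. This is purely a hypothetical illustration (the names `SOURCE` and `rewrite` are invented); deployed AI systems are normally sandboxed and are not permitted to rewrite their own running code:

```python
# Toy sketch of self-modification: a program edits its own source
# text and executes the modified version. Hypothetical illustration
# only, not how any real AI system is deployed.

SOURCE = "COUNTER = 0\nresult = COUNTER + 1\n"

def rewrite(source):
    """Return a modified copy of the source with the counter changed."""
    return source.replace("COUNTER = 0", "COUNTER = 41", 1)

namespace = {}
exec(rewrite(SOURCE), namespace)  # run the rewritten program
print(namespace["result"])        # prints 42
```

Nothing about the mechanism is hard; the real questions in this thread are whether current systems are capable of doing it usefully at their own scale, and whether anyone would allow it.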
Your point does not address my argument.
You can't compare a thing to another thing whose workings you don't understand and whose capabilities you can't predict.
I think the term you want is Artificial general intelligence.
True AI could mean many things.