[-] Perspectivist@feddit.uk 4 points 2 days ago* (last edited 2 days ago)

Way to move the goalposts.

If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.

Now compare that to the way protein structures were solved before: years of wet lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and literally consuming tons of chemicals and water in the process. AlphaFold collapses that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.
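
Just to put rough numbers on that "electricity only" point, here's a back-of-the-envelope sketch. Every figure in it (GPU count, power draw, training time, data-center overhead, grid carbon intensity) is an assumption I picked for illustration, not a published AlphaFold number:

```python
# Back-of-the-envelope compute footprint estimate.
# All inputs are illustrative assumptions, NOT published AlphaFold figures.

gpu_power_kw = 0.4          # assumed draw per GPU (~400 W)
num_gpus = 128              # assumed training cluster size
training_hours = 24 * 11    # assumed ~11 days of training
pue = 1.2                   # assumed data-center overhead factor

energy_mwh = gpu_power_kw * num_gpus * training_hours * pue / 1000
co2_tonnes = energy_mwh * 0.4   # assumed grid intensity: ~0.4 t CO2 per MWh

print(f"~{energy_mwh:.0f} MWh, ~{co2_tonnes:.0f} t CO2")
```

Plug in your own numbers - the point is that the entire footprint reduces to a handful of multiplications, which you can't say about a wet lab.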

So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.

[-] Perspectivist@feddit.uk 8 points 2 days ago

> Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.
>
> What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?
>
> Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.

Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.

Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.

[-] Perspectivist@feddit.uk 3 points 2 days ago* (last edited 2 days ago)

> It won’t solve anything

Go tell that to AlphaFold which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy.

[-] Perspectivist@feddit.uk 5 points 5 days ago

Finland recently passed a law prohibiting children under 15 from riding electric scooters and similar vehicles. Until now, the average age of people hospitalized after accidents on these has been 12.

[-] Perspectivist@feddit.uk 5 points 1 week ago

Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
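
To make the "same foundation, different jobs" point concrete, here's a toy sketch using scikit-learn: the same MLP architecture trained once on a language-flavoured classification task and once on a property-prediction-flavoured regression task. The data is synthetic and purely illustrative:

```python
# Toy illustration: one model family (an MLP trained by gradient descent),
# two very different jobs. All data here is synthetic stand-in material.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# "Language-ish" job: classify which pattern a token sequence follows.
X_text = rng.integers(0, 2, size=(200, 8))
y_text = (X_text.sum(axis=1) > 4).astype(int)   # stand-in for a text label
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_text, y_text)

# "Drug-discovery-ish" job: predict a numeric property from feature vectors.
X_mol = rng.normal(size=(200, 8))
y_mol = X_mol @ rng.normal(size=8)              # stand-in for e.g. binding affinity
reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
reg.fit(X_mol, y_mol)

print(clf.score(X_text, y_text), reg.score(X_mol, y_mol))
```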

[-] Perspectivist@feddit.uk 5 points 1 week ago

> It’s certainly not any task, that’d be AGI.

I mean any individual task - not every task.

[-] Perspectivist@feddit.uk 9 points 1 week ago

If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass CAPTCHAs - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.

The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You're blaming the hammer for not turning screws.

[-] Perspectivist@feddit.uk 6 points 1 week ago

Consciousness - or “self-awareness” - has never been a requirement for something to qualify as artificial intelligence. It’s an important topic about AI, sure, but it’s a separate discussion entirely. You don’t need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that’s what intelligence, in this context, actually means.

[-] Perspectivist@feddit.uk 13 points 1 week ago

In computer science, the term AI at its simplest just refers to a system that performs tasks which typically require human intelligence.

That said, you’re right in the sense that when people say “AI” these days, they almost always mean generative AI - not AI in the broader sense.

[-] Perspectivist@feddit.uk 4 points 1 week ago

You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.

A calculator doesn’t qualify because it runs "fixed code" with no learning or generalization. There's no flexibility to it. It can't adapt.
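
Here's a minimal sketch of that distinction - a toy numpy example of my own, not anyone's formal definition. A calculator-style function is a fixed mapping; even the dumbest learning system infers its mapping from examples and then handles inputs it never saw:

```python
# A calculator is a fixed mapping; a learning system infers the mapping
# from examples and then generalizes to inputs it has never seen.
import numpy as np

def calculator_add(a, b):
    return a + b            # fixed code: always correct, nothing learned

# Learned version: fit weights from example (a, b) -> a + b pairs.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(100, 2))    # training examples
y = X.sum(axis=1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares "training"

print(calculator_add(3, 4))                # 7, by construction
print(np.array([123.0, -45.0]) @ w)        # ~78.0: an input it never saw
```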

152 points, submitted 1 week ago* (last edited 1 week ago) by Perspectivist@feddit.uk to c/youshouldknow@lemmy.world

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
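
A minimal sketch of what "statistical pattern machine" means - a toy bigram model that counts which word follows which in a corpus, then continues a prompt by sampling from those counts. Real LLMs use neural networks over subword tokens and incomprehensibly more data, but the training objective is the same in spirit:

```python
# Minimal "statistical pattern machine": a bigram model that continues a
# prompt with plausible next words, learned purely from co-occurrence counts.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# "Training": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_prompt(word, n=6):
    out = [word]
    for _ in range(n):
        counts = follows[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_prompt("the"))  # e.g. "the cat sat on the mat ."
```

Notice there's no fact store anywhere in there - just statistics about what tends to follow what.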

[-] Perspectivist@feddit.uk 13 points 1 week ago

I don't get her logic at all. No amount of mental gymnastics allows me to find anything sexist in that.

[-] Perspectivist@feddit.uk 3 points 1 week ago

I mostly downvote bad-faith, hostile, or just generally angry/mean comments, regardless of whether I agree with them. Basically, I'm using it against the people who are polluting the air here.
