198
submitted 1 week ago* (last edited 1 week ago) by Allah@lemm.ee to c/technology@lemmy.world

LOOK MAA I AM ON FRONT PAGE

[-] Nanook@lemm.ee 40 points 1 week ago

lol is this news? I mean we call it AI, but it's just LLMs and variants; it doesn't think.

[-] MNByChoice@midwest.social 17 points 1 week ago

The "Apple" part. CEOs only care what companies say.

[-] kadup@lemmy.world 15 points 1 week ago

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[-] homesweethomeMrL@lemmy.world 9 points 1 week ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of microsoft is just fine.

[-] dubyakay@lemmy.ca 4 points 1 week ago

Maybe they are so far behind because they jumped on the same train, failed to achieve what the claims promised, and then started digging around.

[-] Clent@lemmy.dbzer0.com 3 points 1 week ago

Yes, Apple haters can't admit or understand it, but Apple doesn't do pseudo-tech.

They may do silly things, and they may love their 100% markup, but it's all real technology.

The AI pushers of today are akin to the pushers of paranormal phenomena a century ago. These pushers want us to believe, need us to believe, so they can get us addicted and extract value from our very existence.

[-] MCasq_qsaCJ_234@lemmy.zip 3 points 1 week ago

They need to convince investors that this delay wasn't due to incompetence. That spin will only be somewhat effective as long as there isn't an innovation that makes AI markedly more effective.

If that happens, Apple shareholders will, at best, ask the company to increase investment in that area or, at worst, to restructure the company, which could also mean a change in CEO.

[-] Venator@lemmy.nz 2 points 1 week ago

Apple always arrives late to any new tech, but that doesn't mean they haven't been working on it behind the scenes for just as long...

[-] Clent@lemmy.dbzer0.com 14 points 1 week ago

Proving it matters. Science is constantly proving things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will keep believing things long after science has proven them false.

[-] JohnEdwa@sopuli.xyz 6 points 1 week ago* (last edited 1 week ago)

"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." -Pamela McCorduck
It's called the AI Effect.

As Larry Tesler puts it, "AI is whatever hasn't been done yet."

[-] kadup@lemmy.world 4 points 1 week ago

That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they're clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

[-] Grimy@lemmy.world 6 points 1 week ago

No, it shows how certain people misunderstand the meaning of the word.

You have called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

[-] homesweethomeMrL@lemmy.world 1 points 1 week ago

Strangely inconsistent + smoke & mirrors = profit!

[-] technocrit@lemmy.dbzer0.com -2 points 1 week ago* (last edited 1 week ago)

Who is "you"?

Just because some dummies supposedly think that NPCs are "AI", that doesn't make it so. I don't consider checkers to be a litmus test for "intelligence".

[-] Grimy@lemmy.world 2 points 1 week ago

"You" applies to anyone who doesn't understand what AI means. It's an umbrella term for a lot of things.

NPCs ARE AI. AI doesn't mean "human-level intelligence" and never did. Read the wiki if you need help understanding.

[-] Clent@lemmy.dbzer0.com -2 points 1 week ago

Intelligence has a very clear definition.

It requires the ability to acquire knowledge, understand knowledge, and use knowledge.

No one has been able to create a system that can understand knowledge, therefore none of it is artificial intelligence. Each generation is merely a more and more complex knowledge model. Useful in many ways, but never intelligent.

[-] 8uurg@lemmy.world 2 points 1 week ago

Wouldn't the algorithm that creates these models in the first place fit the bill? Given that it takes a bunch of text data, and manages to organize this in such a fashion that the resulting model can combine knowledge from pieces of text, I would argue so.

What is understanding knowledge anyways? Wouldn't humans not fit the bill either, given that for most of our knowledge we do not know why it is the way it is, or even had rules that were - in hindsight - incorrect?

If a model is more capable of solving a problem than an average human being, isn't it, in its own way, intelligent in some form? And, to take things to the utter extreme, wouldn't evolution itself be intelligent, given that it causes intelligent behavior to emerge, for example, viruses adapting to external threats? What about an (iterative) optimization algorithm that finds solutions no human would be able to find?

Intelligence has a very clear definition.

I would disagree: it is probably one of the hardest things to define out there, its definition has changed greatly over time, and it is core to the study of philosophy. Every time a being or thing fits a definition of intelligence, the definition is altered to exclude it, as has been done many times.

[-] Grimy@lemmy.world 1 points 1 week ago

Dog has a very clear definition, so when you call a sausage in a bun a "Hot Dog", you are actually a fool.

Smart has a very clear definition, so no, you do not have a "Smart Phone" in your pocket.

Also, that is not the definition of intelligence. But the crux of the issue is that you are making up a definition for AI that suits your needs.

[-] cyd@lemmy.world 4 points 1 week ago

By that metric, you can argue Kasparov isn't thinking during chess, either. A lot of human chess "thinking" is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn't a magic process, nor is it tightly coupled to human-like brain processes as we like to think.

[-] kadup@lemmy.world 1 points 1 week ago

By that metric, you can argue Kasparov isn’t thinking during chess

Kasparov's thinking fits pretty much all biological definitions of thinking. Which is the entire point.

[-] vala@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Yesterday I asked an LLM "how much energy is stored in a grand piano?" It responded by saying there is no energy stored in a grand piano because it doesn't have a battery.

Any reasoning human would have understood that question to be referring to the tension in the strings.

Another example is asking "does lime cause kidney stones?". It didn't assume I meant lime the mineral and went with lime the citrus fruit instead.

Once again a reasoning human would assume the question is about the mineral.

Ask these questions again in a slightly different way and you might get a correct answer, but it won't be because the LLM was thinking.

[-] xthexder@l.sw0.com 5 points 1 week ago

I'm not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I'd expect someone asking about kidney stones would also be asking about foods that are commonly consumed.

This just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but for sure today's AIs will just happily spit out the first answer that comes up. LLMs are extremely "good" at making up answers to leading questions, even if they're completely false.

[-] JohnEdwa@sopuli.xyz 1 points 1 week ago* (last edited 1 week ago)

Making up answers is kind of their entire purpose. LLMs are fundamentally just text generation algorithms: they are designed to produce text that looks like it could have been written by a human. Which they are amazing at, especially when you take into account how many paragraphs of instructions you can give them, which they tend to follow rather successfully.

The one thing they can't do is verify if what they are talking about is true as it's all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.
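As a toy illustration of that point, here is a minimal bigram text generator (nothing like a production LLM, which uses transformers over subword tokens, but the same underlying idea of sampling from learned probabilities; the corpus and names here are made up):

```python
import random
from collections import defaultdict

corpus = ("the piano has strings and the piano has keys "
          "and the keys strike the strings").split()

# Count which words follow which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Sample a plausible-looking word sequence from bigram statistics.
    There is no notion of truth anywhere in here, only frequency."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Scale the corpus up by trillions of tokens and the context window from one word to thousands, and you get the modern version; at no point does a truth-checking step appear.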

[-] postmateDumbass@lemmy.world 4 points 1 week ago

Honestly, i thought about the chemical energy in the materials constructing the piano and what energy burning it would release.

[-] xthexder@l.sw0.com 4 points 1 week ago

The tension of the strings would actually store a pretty minuscule amount of energy too. There's very little stretch in a piano wire: the force might be high, but the potential energy (the work done to tension the wire, by hand with a wrench) is low.

Compared to burning a piece of wood, which would release orders of magnitude more energy.
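A rough back-of-the-envelope version of that comparison. Every number below is a loose assumption (typical string tension, steel's Young's modulus, a guessed wood mass), so treat it as a sketch, not a measurement:

```python
# Elastic energy in one tensioned string: E = 1/2 * F * dL,
# where dL = F * L / (A * Y) for a wire of cross-section A and Young's modulus Y.
F = 700.0      # tension per string, newtons (assumed typical)
L = 1.0        # vibrating length, metres (assumed)
d = 1.0e-3     # wire diameter, metres (assumed)
Y = 200e9      # Young's modulus of steel, Pa
A = 3.14159 * (d / 2) ** 2

dL = F * L / (A * Y)               # stretch of the wire, ~4.5 mm
per_string = 0.5 * F * dL          # elastic potential energy, joules
total_strings = 230 * per_string   # a grand piano has roughly 230 strings

# Chemical energy from burning the wooden parts:
wood_mass = 150.0                  # kg of wood, a rough guess
heat_of_combustion = 16e6          # J/kg, typical for dry wood
burn_energy = wood_mass * heat_of_combustion

print(f"strings: ~{total_strings:.0f} J, burning: ~{burn_energy:.1e} J")
```

Even with generous numbers, the strings hold a few hundred joules while the wood holds gigajoules, several orders of magnitude apart.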

[-] antonim@lemmy.dbzer0.com 2 points 1 week ago

But 90% of "reasoning humans" would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.

[-] technocrit@lemmy.dbzer0.com -3 points 1 week ago* (last edited 1 week ago)

I'm going to write a program to play tic-tac-toe. If y'all don't think it's "AI", then you're just haters. Nothing will ever be good enough for y'all. You want scientific evidence of intelligence?!?! I can't even define intelligence so take that! /s

Seriously tho. This person is arguing that a checkers program is "AI". It kinda demonstrates the loooong history of this grift.
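For reference, the kind of program being described really is just exhaustive game-tree search. A minimal minimax sketch for tic-tac-toe (the board encoding and names are illustrative):

```python
# Minimal minimax player for tic-tac-toe: no learning, no "thinking",
# just exhaustive search of the game tree.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player` ('X' maximises, 'O' minimises)."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        results.append((score, m))
    return max(results) if player == "X" else min(results)

# X holds squares 0 and 3; the search finds the winning square 6.
print(minimax("X OXO    ", "X"))
```

This is the textbook sense in which game-playing programs have always been filed under "AI": a search procedure with no model of meaning at all.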

[-] JohnEdwa@sopuli.xyz 3 points 1 week ago* (last edited 1 week ago)

It is, and always has been. "Artificial intelligence" doesn't mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it's a vast field of research in computer science with many, many things under it.

[-] Endmaker@ani.social 0 points 1 week ago

ITT: people who obviously did not study computer science or AI at even an undergraduate level.

Y'all are too patient. I can't be bothered to spend the time to give people free lessons.

[-] antonim@lemmy.dbzer0.com 0 points 1 week ago

Wow, I would deeply apologise on behalf of all of us uneducated proles having opinions on stuff we're bombarded with daily through the media.

[-] Clent@lemmy.dbzer0.com -2 points 1 week ago

The computer science industry isn't the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.

[-] LandedGentry@lemmy.zip 0 points 1 week ago* (last edited 1 week ago)

Yeah that’s exactly what I took from the above comment as well.

I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.

Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.

[-] Melvin_Ferd@lemmy.world -2 points 1 week ago

This is why I say these articles are so similar to how right wing media covers issues about immigrants.

There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues; there are so many similarities. They're taking jobs. They're a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this one, where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.

Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.

[-] hansolo@lemmy.today 0 points 1 week ago

Because it's a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that trying to convince Boomers it won't kill us all is the hard part.

I'm a moderate user of LLMs for code and a skeptic of their abilities, but five years from now, when we're leveraging ML models for groundbreaking science and haven't been nuked by SkyNet, all of this will look quaint and silly.

[-] technocrit@lemmy.dbzer0.com 0 points 1 week ago* (last edited 1 week ago)

This is why I say these articles are so similar to how right wing media covers issues about immigrants.

Maybe the actual problem is people who equate computer programs with people.

Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.

You mean laws like this? jfc.

https://www.inc.com/sam-blum/trumps-budget-would-ban-states-from-regulating-ai-for-10-years-why-that-could-be-a-problem-for-everyday-americans/91198975

[-] Melvin_Ferd@lemmy.world -1 points 1 week ago* (last edited 1 week ago)

Literally what I'm talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot about this that you can't even see you're making my argument for me.

[-] antonim@lemmy.dbzer0.com 2 points 1 week ago

That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that's actually supposed to mean).

[-] Melvin_Ferd@lemmy.world -1 points 1 week ago* (last edited 1 week ago)

What isn't there to gain?

Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

Sure, it has flaws. But rejecting it outright while the right embraces it? That's beyond shortsighted; it's self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

[-] antonim@lemmy.dbzer0.com 2 points 1 week ago* (last edited 1 week ago)

I have no idea what sort of AI you've used that could do any of the stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it's completely divorced from how the stuff currently works.

Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

That's a misguided view of how art is created. Supposed "brilliant ideas" are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don't make it visual; write a story or an essay.

Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

[-] Melvin_Ferd@lemmy.world -3 points 1 week ago* (last edited 1 week ago)

Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write:

You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


  1. Straw Man Fallacy

"Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

This misrepresents the original claim:

"AI can help create a framework at the very least so they can get their ideas down."

The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


  2. False Dichotomy

"If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


  3. Hasty Generalization

"Supposed 'brilliant ideas' are a dime a dozen..."

While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


  4. Appeal to Ridicule / Ad Hominem (Light)

"...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


  5. Tu Quoque / Whataboutism (Borderline)

"For now I see no particular benefits that the right-wing has obtained by using AI either..."

This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


Summary of Fallacies Identified:

- Straw Man: Misrepresents the role of AI in creative assistance.
- False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
- Hasty Generalization: Devalues "brilliant ideas" universally.
- Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
- Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

At this point you're just arguing for argument's sake. You're not wrong or right; you're just muddying things. Saying it'll be boring comics misses the entire point. Saying it's the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to the anti-immigrant mentality. The people who buy into it will make these kinds of ignorant and shortsighted statements just to prove things that simply are not true. But they've bought into the hype and need to justify it.

[-] die444die@lemmy.world 4 points 1 week ago

Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.

[-] Melvin_Ferd@lemmy.world -1 points 1 week ago

I did, and it was because it didn't have the previous context. But it did find the fallacies as present. Logic is literally what a chat AI is doing. A human still needs to review the output, but it did what it was asked. I don't know AI programming well, but I can say that logic is algorithmic. An AI has no problem parsing an argument and finding the fallacies. It's a tool like any other.

[-] antonim@lemmy.dbzer0.com 3 points 1 week ago

That was a roundabout way of admitting you have no idea what logic is or how LLMs work. Logic works with propositions regardless of their literal meaning, LLMs operate with textual tokens irrespective of their formal logical relations. The chatbot doesn't actually do the logical operations behind the scenes, it only produces the text output that looks like the operations were done (because it was trained on a lot of existing text that reflects logical operations in its content).
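To make that contrast concrete, here is what an actual logical operation looks like: a brute-force validity check over truth assignments, which treats propositions purely formally (a sketch; the two argument forms are standard textbook examples):

```python
from itertools import product

def valid(premises, conclusion, symbols):
    """An argument is valid iff the conclusion holds in every truth
    assignment that satisfies all premises, regardless of what the
    symbols 'mean'."""
    for values in product([True, False], repeat=len(symbols)):
        env = dict(zip(symbols, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Modus ponens: P, P -> Q, therefore Q  (valid)
mp = valid(
    premises=[lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]],
    conclusion=lambda e: e["Q"],
    symbols=["P", "Q"],
)

# Affirming the consequent: Q, P -> Q, therefore P  (a fallacy)
ac = valid(
    premises=[lambda e: e["Q"], lambda e: (not e["P"]) or e["Q"]],
    conclusion=lambda e: e["P"],
    symbols=["P", "Q"],
)

print(mp, ac)  # modus ponens is valid, affirming the consequent is not
```

An LLM does nothing of this sort internally; it emits text that resembles the output of such a procedure because similar text appeared in its training data.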

[-] Melvin_Ferd@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

This is why I said I wasn't sure how AI works behind the scenes. But I do know that logic isn't difficult. Just so we don't fuck around between us: I have a CS background. I'm only saying this because I think you may have one as well, and we can save some time.

It makes sense to me that logic is something AI can parse easily. Logic, in my mind, is very easy if it can tokenize some text. Wouldn't the difficulty be whether the AI has the right context?

[-] antonim@lemmy.dbzer0.com 4 points 1 week ago

Excellent, these "fallacies" are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC "bad"), and overall just trying to look like criticism instead of being one. Completely worthless - but I sure can see why right wingers are embracing it!

It's funny how you think AI will help refine people's ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That's why I don't feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.

Saying it’ll be boring comics missed the entire point.

So what was the point exactly? I re-read that part of your comment and you're talking about "strong ideas", whatever that's supposed to be without any actual context?

Saying it is the same as google is pure ignorance of what it can do.

I did not say it's the same as Google, in fact I said it's worse than Google because it can add a hallucinated summary or reinterpretation of the source. I've tested a solid number of LLMs over time, I've seen what they produce. You can either provide examples that show that they do not hallucinate, that they have access to sources that are more reliable than what shows up on Google, or you can again avoid any specific examples, just expecting people to submit to the revolutionary tech without any questions, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds (I honestly have no idea what's supposed to be similar between these two viewpoints? I don't live in a country with particularly developed anti-immigrant stances so maybe I'm missing something here?).

The people who buy into it will get into these type of ignorant and short sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

"They’ve bought into the hype and need to justify it"? Are you sure you're talking about the anti-AI crowd here? Because that's exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.

But actually those who "buy into the hype" are the average people who just don't want to use this tech? Huh? What does that have to do with the concept of "hype"? Do you think hype is simply any trend that doesn't agree with your viewpoints?

[-] Melvin_Ferd@lemmy.world 0 points 1 week ago

Hype flows in both directions. Right now the hype from most is finding issues with ChatGPT. It did find the fallacies based on what it was asked to do. It worked as expected. You act like this is fire-and-forget. Given what this output gave me, I can easily keep working it to get better and better arguments. I can review the results, clarify, and iterate. I copied and pasted just to show an example. First, I wanted to be honest with the output and not modify it. Second, it's an effort thing. I just feel like you can't honestly tell me that having that summary within 10 seconds is not beneficial. I didn't supply my argument to the prompt, only yours. If I submitted my argument it would be better.

[-] antonim@lemmy.dbzer0.com 2 points 1 week ago

Right now the hype from most is finding issues with chatgpt

hype noun (1)

publicity

especially : promotional publicity of an extravagant or contrived kind

You're abusing the meaning of "hype" in order to make the two sides appear the same, because you do understand that "hype" really describes the pro-AI discourse much better.

It did find the fallacies based on what it was asked to do.

It didn't. Put the text of your comment back into GPT and tell it to argue why the fallacies are misidentified.

You act like this is fire and forget.

But you did fire and forget it. I don't even think you read the output yourself.

First I wanted to be honest with the output and not modify it.

Or maybe you were just lazy?

Personally I'm starting to find these copy-pasted AI responses to be insulting. It has the "let me Google that for you" sort of smugness around it. I can put in the text in ChatGPT myself and get the same shitty output, you know. If you can't be bothered to improve it, then there's absolutely no point in pasting it.

Given what this output gave me, I can easily keep working this to get better and better arguments.

That doesn't sound terribly efficient. Polishing a turd, as they say. These great successes of AI are never actually visible or demonstrated, they're always put off - the tech isn't quite there yet, but it's just around the corner, just you wait, just one more round of asking the AI to elaborate, just one more round of polishing the turd, just a bit more faith on the unbelievers' part...

I just feel like you can’t honestly tell me that within 10 seconds having that summary is not beneficial.

Oh sure I can tell you that, assuming that your argumentative goals are remotely honest and you're not just posting stupid AI-generated criticism to waste my time. You didn't even notice one banal way in which AI misinterpreted my comment (I didn't say SMBC is bad), and you'd probably just accept that misreading in your own supposed rewrite of the text. Misleading summaries that you have to spend additional time and effort double checking for these subtle or not so subtle failures are NOT beneficial.

[-] Melvin_Ferd@lemmy.world 1 points 1 week ago

Ok, let's run a test here. Let's start with understanding logic. Give me a paragraph and let's see if it can find any logical fallacies. You can provide the paragraph. The only constraint is that the context has to exist within the paragraph.

this post was submitted on 08 Jun 2025
198 points (98.1% liked)
