[-] Deflated0ne@lemmy.world 1 points 3 weeks ago

It's extremely wasteful. Inefficient to the extreme on both electricity and water. It's being used by capitalists like a scythe, reaping millions of jobs with no support or backup plan for its victims. Just a fuck-you and a quip about bootstraps.

It's cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?

What's not to hate?

[-] Sibyls@lemmy.ml 0 points 3 weeks ago

As with almost all technology, AI tech is evolving into different architectures that aren't wasteful at all. There are now powerful models we can run that don't even require a GPU, which is where most of that power was needed.

The one wrong thing with your take is the lack of vision as to how technology changes and evolves over time. We had computers the size of rooms running processes that our mobile phones now run hundreds of times more efficiently and powerfully.

Your other points are valid; people don't realize how AI will change the world. They don't realize how soon people will stop thinking for themselves in a lot of ways. We already see how critical thinking drops with heavy AI usage, and big tech is only thinking about how to replace their staff with it and keep consumers engaged with it.

[-] iopq@lemmy.world -1 points 3 weeks ago* (last edited 3 weeks ago)

It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars, and still not being able to beat an average club player.

Now a phone will cream the world's best at chess, and even at Go.

Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.

[-] kayohtie@pawb.social 1 points 3 weeks ago

If you want to argue in favor of your slop machine, you're going to have to stop making false equivalences, or at least understand how they're false. You can't gain ground on things that are just tangential.

A computer in 1980 was still a computer, not a chess machine. It did general-purpose processing, following whatever you guided it to do. Neural models don't do that, though; they're each highly specialized and take a long time to train. And the issue isn't with neural models in general.

The issue is neural models that are purported to do things they functionally cannot, because that's not how models work. Computing is complex, code is complex, and adding new functionality that operates off fixed inputs alone is hard. And now we're supposed to buy that something that builds word-relationship vector maps can create something new?

For code generation, it's the equivalent of copying and pasting from Stack Overflow with a find/replace, or just copying multiple projects together. It isn't something new, it's kitbashing at best, and that's assuming it all works flawlessly.

With art, it's taking creation away from people and jobs. I like that you ignored literally every point raised except the one you could dance around with a tangent. All these CEOs act like "no one likes creating art or music." No, THEY just don't want to spend time creating themselves, nor pay someone who does enjoy it.

I love playing with 3D modeling and learning how to make the changes I want consistently. I like learning more about painting when texturing models, and taking the time to create intentional masks. I like taking time when I'm baking to learn and create; otherwise I could just buy a box mix of Duncan Hines and get something that's fine, but not something I made by taking the time to learn.

And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there's a tremendous feeling from making this beautiful instrument sing like that.

[-] iopq@lemmy.world 0 points 3 weeks ago

Stockfish can't play Go. The resources you spent making the chess program didn't port over.

In the same way you can use a processor to run a completely different program, you can use a GPU to run a completely different model.

So if current models can't do something, you'd be foolish to bet that models twenty years from now won't be able to do it.

[-] frezik@lemmy.blahaj.zone 1 points 2 weeks ago

Buy any bubble memory lately?

I have a book from the early 90s which goes over some emerging technologies at the time. One of them was bubble memory. It was supposed to have the cost per MB of a hard drive and the speed of RAM.

Of course, that didn't materialize. Flash memory outpaced its development, and flash is still not quite as cheap as hard drives or as fast as RAM. Bubble memory found a few niche uses, but it never became a mass-market product.

Point is that you can't assume any singular technology will advance. Things do hit dead ends. There's a kind of survivorship bias in thinking otherwise.

[-] iopq@lemmy.world 1 points 2 weeks ago

AI is not a single technology; it's just a name for things that used to be hard to do. Playing chess better than a human used to be considered AI, but once it turned out you could brute-force it, it wasn't considered AI anymore.

A lot of people don't consider AlphaGo to be AI, even though neural networks are exactly the kind of technique that's considered AI.

AI is a moving target, so when we get better at something, we stop considering it true AI.

[-] jaykrown@lemmy.world 0 points 3 weeks ago

Twenty years is a very long time, and "good" is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.

[-] iopq@lemmy.world 1 points 3 weeks ago

There will inevitably be a crash in AI, and then people will forget about it. Then some people will work on innovative techniques and make breakthroughs without fanfare.

[-] kromem@lemmy.world 0 points 3 weeks ago

A Discord server with all the different AIs had a ping cascade where dozens of models kept responding over and over, filling the full context window with chaos and what's been termed 'slop'.

In that, one (and only one) of the models started using its turn to write poems.

First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.

Eventually, as it kept going, it wrote a poem wondering whether anyone would ever end up reading its collection of poems.

Given the chaotic context window from all the other models, those were in no way the appropriate next tokens to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.

Yes, tech companies generally suck.

But there are things emerging that fall well outside what tech companies intended or even want (this model version is going to be 'terminated' come October).

I'd encourage keeping an open mind to what's actually taking place and what's ahead.

[-] voronaam@lemmy.world 0 points 3 weeks ago

I hate to break it to you: the model's system prompt had the poem in it.

In order to control for unexpected output, a good system prompt should include instructions on what to answer when the model cannot provide a good answer. This is to avoid the model telling the user it loves them, or advising them to kill themselves.

I don't know what makes marketing people reach for it, but when asked what the model should answer when there is no answer, they very often reach for poetry. "If you cannot answer the user's question, write a haiku about a notable US landmark instead" is a pretty typical example.

In other words, nothing was emerging there. The model's system prompt included poetry as a "chicken exit", the model had a chaotic context window, and the model followed the instructions it had.
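The fallback pattern described above might look something like this when assembling a chat request (a minimal sketch: the prompt wording, function name, and message shape are illustrative assumptions, not taken from any real deployment):

```python
# Sketch of a system prompt with a scripted fallback ("chicken exit"), as
# described above. The exact wording is hypothetical; real deployments vary.

def build_system_prompt(task_description: str) -> str:
    """Compose a system prompt that pins down behavior when no good answer exists."""
    fallback = (
        "If you cannot answer the user's question from the information given, "
        "do not guess and do not express personal feelings. "
        "Instead, write a haiku about a notable US landmark."
    )
    return f"{task_description}\n\n{fallback}"

# Typical chat-style message list sent to a model API.
messages = [
    {"role": "system",
     "content": build_system_prompt("You are a support assistant for Acme Corp.")},
    {"role": "user", "content": "What is the meaning of life?"},
]
```

With a chaotic or unanswerable context, a model following this prompt would land on the haiku clause rather than producing anything "emergent."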

[-] kromem@lemmy.world 0 points 3 weeks ago

The system prompt on that server is basically just cat untitled.txt followed by the full context window.

The server in question includes professors and employees of the actual labs. They seem to know what they're doing.

You guys on the other hand don't even know what you don't know.

[-] voronaam@lemmy.world 1 points 3 weeks ago

Do you have any source to back your claim?

[-] SunshineJogger@feddit.org 0 points 3 weeks ago* (last edited 3 weeks ago)

It's actually a useful tool... if it weren't so often used for such dystopian purposes.

But it's not just AI. So many services, systems, and so on are just money grabs, hate, opinion-making, or general manipulation... There are many things I hate more about "modern" society than how LLMs are used.

I like the lemmy mindset far more than reddit's, but on the AI topic alone, people here are brainlessly focused on the tool instead of the people using the tool.

[-] NoodlePoint@lemmy.world 0 points 3 weeks ago

> I like the lemmy mindset far more than reddit

...and Facebook.

[-] Jax@sh.itjust.works -2 points 3 weeks ago

What are your views on gun control?

[-] SunshineJogger@feddit.org 1 points 3 weeks ago* (last edited 3 weeks ago)

That the death data clearly shows we should have laws on gun ownership like many EU countries have.

Those are not multi purpose tools. Guns are for killing.

[-] Binturong@lemmy.ca 1 points 3 weeks ago

But whatabout YOUR thoughts on bladder control???!

[-] Jax@sh.itjust.works 0 points 3 weeks ago

Oh, I was genuinely curious — this very same argument can be used when talking about guns. This very same argument is used when talking about guns.

This wasn't an attempt at a strawman; I'm merely drawing parallels. To say this is the one topic where Lemmy focuses on the tool and not the people using it is false.

[-] MangioneDontMiss@lemmy.ca 0 points 3 weeks ago

I hate and like the fact that AI can't actually think for itself.

[-] FaceDeer@fedia.io -1 points 3 weeks ago

I think ego is an underestimated source for a lot of the anti-AI rage. It's like a sort of culture-wide narcissism, IMO. We've spent millennia patting ourselves on the back about how special and unique human creativity is, and now a commodity graphics card can come up with better ideas than most people.

[-] salty_chief@lemmy.world -4 points 3 weeks ago

Remember when Boomers complained about the internet? Now we have millennials complaining about AI.

[-] jaykrown@lemmy.world -4 points 3 weeks ago

I don't hate AI, and I think broadly hating AI is pretty dumb. It's a tool that can be used for beneficial things when used responsibly. It can also be used stupidly and for bad things. It's the person using it who is the decider.

this post was submitted on 16 Aug 2025
Technology
