[-] Kolanaki@pawb.social 20 points 3 weeks ago
[-] Ghostalmedia@lemmy.world 14 points 3 weeks ago

That’s exactly what an LLM trained on Reddit would say.

[-] Kolanaki@pawb.social 10 points 3 weeks ago* (last edited 3 weeks ago)

I am an LLM

Large

Lazy

Mammal

[-] HowAbt2day@futurology.today 4 points 3 weeks ago

With Large Luscious Mammaries?

[-] joyjoy@lemmy.zip 19 points 3 weeks ago

Are you AI? You have to tell me if you're AI, it's the law.

[-] ChaoticEntropy@feddit.uk 11 points 3 weeks ago

I'm required by law to inform my neighbours that I am AI.

[-] MrLLM@ani.social 2 points 3 weeks ago

Are you AI?

[-] pHr34kY@lemmy.world 16 points 3 weeks ago* (last edited 3 weeks ago)

It would be nice if this extended to all text, images, audio and video on news websites. That's where the real damage is happening.

[-] BrianTheeBiscuiteer@lemmy.world 3 points 3 weeks ago

Actually, it seems easier (though probably not doable at the state level) to mandate that cameras and the like digitally sign any media they create. No signature or verification, no trust.

[-] CosmicTurtle0@lemmy.dbzer0.com 9 points 3 weeks ago

I get what you're going for but this would absolutely wreck privacy. And depending on how those signatures are created, someone could create a virtual camera that would sign images and then we would be back to square one.

I don't have a better idea though.

[-] howrar@lemmy.ca 2 points 3 weeks ago

Privacy concern for sure, but given that you can already tie different photos back to the same phone from lens artifacts, I don't think this is going to make things much worse than they already are.

someone could create a virtual camera that would sign images

Anyone who produces cameras can publish a list of valid keys associated with their camera. If you trust the manufacturer, then you also trust their keys. If there's no trusted source for the keys, then you don't trust the signature.
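That trust-list idea can be sketched in a few lines. This is purely illustrative: a real scheme (C2PA, for example) would use asymmetric signatures and certificate chains, whereas this toy stands in an HMAC with made-up device keys just to show the "no trusted key on file, no trust" logic.

```python
import hashlib
import hmac

# Hypothetical list of per-device keys a manufacturer has published as
# belonging to genuine cameras. Names and keys are invented for the sketch.
TRUSTED_DEVICE_KEYS = {
    "cam-0001": b"factory-provisioned-secret-0001",
    "cam-0002": b"factory-provisioned-secret-0002",
}

def sign_image(device_id: str, image_bytes: bytes) -> bytes:
    """What the camera would do at capture time."""
    key = TRUSTED_DEVICE_KEYS[device_id]
    return hmac.new(key, image_bytes, hashlib.sha256).digest()

def verify_image(device_id: str, image_bytes: bytes, signature: bytes) -> bool:
    """What a viewer would do: unknown device or bad signature -> reject."""
    key = TRUSTED_DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # device not on the manufacturer's published list
    expected = hmac.new(key, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"raw sensor data"
sig = sign_image("cam-0001", photo)
print(verify_image("cam-0001", photo, sig))         # genuine capture: True
print(verify_image("cam-0001", photo + b"x", sig))  # tampered pixels: False
print(verify_image("cam-9999", photo, sig))         # unlisted device: False
```

Note this also illustrates the "virtual camera" objection above: anything able to read a device key out of real hardware can sign whatever it wants, which is why real proposals lean on tamper-resistant hardware rather than a bare key list.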

[-] cley_faye@lemmy.world 5 points 3 weeks ago

No signature or verification, no trust

And the people that are going to check for a digital signature in the first place, THEN check that the signature emanates from a trusted key, then, eventually, check who's deciding the list of trusted keys… those people, where are they?

Because the lack of trust, validation, verification, and more generally the lack of any credibility hasn't stopped anything from spreading like a dumpster fire in a field full of dumpsters doused in gasoline. Part of my job is providing digital signature tools and creating "trusted" data (I'm not in sales, obviously), and the main issue is that nobody checks anything, even when faced with liability, even when they actually pay for an off-the-shelf solution to do so. And I'm talking about people who should care, not even the general public.

There are a lot of steps before "digitally signing everything" even get on people's radar. For now, a green checkmark anywhere is enough to convince anyone, sadly.

[-] howrar@lemmy.ca 1 points 3 weeks ago

I think there's enough people who care about this that you can just provide the data and wait for someone to do the rest.

[-] Evotech@lemmy.world 16 points 3 weeks ago
[-] skisnow@lemmy.ca 14 points 3 weeks ago

My LinkedIn feed is 80% tech bros complaining about the EU AI Act, not a single one of whom is willing to be drawn on which exact clause it is they don't like.

[-] utopiah@lemmy.world 3 points 3 weeks ago

My LinkedIn feed

Yes... it's so bad that I just never log in until I receive a DM, and even then I log in, check it, and if it's useful I warn people that I don't use LinkedIn anymore, then log out.

[-] Don_alForno@feddit.org 3 points 3 weeks ago

Oh, so just like with the GDPR, cool.

[-] Evotech@lemmy.world 3 points 3 weeks ago* (last edited 3 weeks ago)

I get it though, if you're a startup. Having to basically hire an extra person just to do AI compliance is a huge addition to the barrier to entry.

[-] skisnow@lemmy.ca 4 points 3 weeks ago* (last edited 3 weeks ago)

That’s not actually the case for most companies though. The only time you’d need a full time lawyer on it is if the thing you want to do with AI is horrifically unethical, in which case fuck your little startup.

It’s easy to comply with regulations if you’re already behaving responsibly.

[-] Don_alForno@feddit.org 3 points 3 weeks ago

That's true with many regulations. The quiet part that they're trying to avoid saying out loud is that behaving ethically and responsibly doesn't earn them money.

[-] notarobot@lemmy.zip 3 points 3 weeks ago

Did you seriously use LinkedIn? I always thought it was just narcissistic people posting about themselves, never having any real conversations, and only adding superficial replies to posts that align 100% with their views.

[-] skisnow@lemmy.ca 5 points 3 weeks ago

If I could delete it without impacting my job or career I would. Sadly they’ve effectively got a monopoly on the online professional networking industry. Cunts

[-] AceFuzzLord@lemmy.zip 11 points 3 weeks ago* (last edited 3 weeks ago)

Okay, but when can the law straight up ban companies that don't comply from operating in the state, instead of just slapping them on the wrist and telling them "no" the way a pushover parent tells their child "no"? Especially when they just ignore the law.

[-] hedge_lord@lemmy.world 9 points 3 weeks ago

I am of the firm opinion that if a machine is "speaking" to me then it must sound like a cartoon robot. No exceptions!

[-] vaultdweller013@sh.itjust.works 2 points 3 weeks ago

I propose that they must use Vocaloid voices, or that old vocoder voice that Wasteland 3 uses for the Bob-the-robot-looking guys.

[-] HeyThisIsntTheYMCA@lemmy.world 1 points 3 weeks ago

i would like my GPS to sound like Brian Blessed otherwise i want all computers to sound like Niki Yang

[-] Deceptichum@quokk.au 4 points 3 weeks ago
[-] wreckedcarzz@lemmy.world 2 points 3 weeks ago

Straight to jail

[-] metallic_substance@lemmy.world 1 points 3 weeks ago

Devil's advocate here: any human can also hallucinate. Some of them even do it as a recreational activity.

[-] MrLLM@ani.social 1 points 3 weeks ago* (last edited 3 weeks ago)

You can clearly identify when that’s happening; with LLMs, it’s often uncertain unless you’re an expert in the field or at least knowledgeable.
[-] Wilco@lemmy.zip 4 points 3 weeks ago

Ok, this is a REALLY smart law!

[-] Lucidlethargy@sh.itjust.works 3 points 3 weeks ago

As a Californian, I will do my job from here on out.

[-] Lost_My_Mind@lemmy.world 3 points 3 weeks ago

Same old: corporations will ignore the law, pay a petty fine once a year, and call it the cost of doing business.

[-] guest123456@lemmynsfw.com 2 points 3 weeks ago

Headline is kind of misleading. It requires a notice to be shown in a chat or interface that said chatbot is not a real person if it's not obvious that it's an LLM. I originally took the headline to mean that an LLM would have to tell you if it's an LLM or not itself, which is, of course, not really possible to control generally. A nice gesture if it were enforced, but it doesn't go nearly far enough.

[-] SpaceCowboy@lemmy.ca 1 points 3 weeks ago

I think it's one of those perfect-is-the-enemy-of-good situations. Going further is more complicated and requires more consideration, more analysis of consequences, etc., and that can take some time. But this is a no-brainer kind of legislation, so pass it now while working out more robust legislation to pass later.

[-] ashar@infosec.pub 2 points 3 weeks ago

I am an AI, I think. Probably.

[-] utopiah@lemmy.world 2 points 3 weeks ago

Don't devalue yourself, you're infinitely more.

[-] Attacker94@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago)

Has anyone been able to find the text of the law? The article didn't mention the penalties, and I want to know if this actually means anything.

Edit: I found a website that says the penalty follows 5000·Σ(n+k), where n is the number of days since the first infraction. This has a closed form of n² + n = y/7500, where y is the total compounded fee. That makes it cost about $1M in 11 days and $1B in a year.

reference

Yeah, this is an important point. If the penalty is too small, AI companies will just consider it a cost of doing business. Flat-rate fines only being penalties for the poor, and all that.
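The growth claimed above is easy to check numerically. This just evaluates the closed form quoted in the comment (y = 7500·(n² + n)); it doesn't verify the formula against the statute itself:

```python
# Evaluate the penalty closed form quoted above: y = 7500 * (n^2 + n),
# where n is the number of days since the first infraction.
def total_fine(days: int) -> int:
    return 7500 * (days**2 + days)

print(total_fine(11))   # 990,000 -> roughly $1M after 11 days
print(total_fine(365))  # 1,001,925,000 -> roughly $1B after a year
```

Quadratic growth like this is exactly the opposite of a flat-rate fine: the longer a company stonewalls, the faster the total climbs.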

[-] cactusfacecomics@lemmy.world 1 points 3 weeks ago

Seems reasonable to me. If you're using AI then you should be required to own up to it. If you're too embarrassed to own up to it, then maybe you shouldn't be using it.

[-] Rooster326@programming.dev 1 points 3 weeks ago

What about my if-else AI algorithm?

It's not really an LLM.

[-] eldebryn@lemmy.world 1 points 3 weeks ago

IMO if your "A*" style algorithm is used for chatbot or any kind of user interaction or content generation, it should still be explicitly declared.

That being said, there is some nuance here about A) use of Copyrighted material and B) Non-deterministic behaviour. Neither of which is (usually) a concern in more classical non-DL approaches to AI solutions.

[-] DeathByBigSad@sh.itjust.works 1 points 3 weeks ago

Fun Fact:

Did you know, that cops are required to tell you if they're a cop? It's in the constitution!

this post was submitted on 13 Oct 2025
169 points (100.0% liked)
