AI Image Rule (file.coffee)
submitted 1 year ago by sag@lemm.ee to c/196@lemmy.blahaj.zone
[-] AVincentInSpace@pawb.social 70 points 1 year ago

Seems to me it'd be pretty easy to tell. If the footage was AI generated, fingers would be appearing and disappearing.

[-] Pregnenolone@lemmy.world 34 points 1 year ago

It’s a solved problem now. Most good AI models generate correct fingers these days.

[-] riskable@programming.dev 25 points 1 year ago

Well, no, actually: AI image models still generate bad fingers constantly; it's just become easier to fix via a secondary step (e.g. img2img), or you just tell it to generate 50 images and pick the ones that don't have messed-up fingers 🤷
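
For the curious, that whole "generate a batch, cherry-pick, then a light img2img pass" workflow is only a few lines. A minimal sketch, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint (the model name, prompt, and strength are placeholders, not a recommendation):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint; any SD model works
prompt = "portrait photo of a person waving, detailed hands"

# Step 1: generate a batch and let a human cherry-pick the least mangled hands.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
candidates = txt2img(prompt, num_images_per_prompt=8).images
best = candidates[0]  # in practice: eyeball all 8 and keep the good one

# Step 2: run the keeper back through img2img at low strength to clean up details
# without changing the overall composition.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
fixed = img2img(prompt=prompt, image=best, strength=0.4).images[0]
fixed.save("fixed_hands.png")
```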

[-] harmsy@lemmy.world 5 points 1 year ago

Cries in Artbreeder credits.

I mean, probably not in 3-5 years let alone 10.

[-] AVincentInSpace@pawb.social 10 points 1 year ago

And by that time the neural nets will have figured out anatomically correct hands anyway, making this product doubly moot.

[-] Diabolo96@lemmy.dbzer0.com 63 points 1 year ago* (last edited 1 year ago)

It's scary how fast the AI sector is improving: people are still talking about something that stopped being a problem a month after launch, provided the person sharing the picture spent a bit more time than just writing the prompt and tapping "GENERATE". Not only was it not a problem even back then, you could already choose the pose of the character and all sorts of other parameters. For several months now you've been able to do the bare minimum and still get the correct number of fingers.
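
To illustrate the pose-control point: here's a minimal sketch of pose-guided generation, assuming the diffusers library with an OpenPose ControlNet (the model names and pose image are illustrative assumptions, not any specific tool from back then):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# The ControlNet conditions the image model on an OpenPose skeleton, so the
# character follows the given pose while the prompt controls everything else.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

pose = load_image("pose_skeleton.png")  # hypothetical pre-extracted OpenPose skeleton image
image = pipe("a chef juggling oranges, photorealistic", image=pose).images[0]
image.save("posed_character.png")
```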

People are underestimating AI improvement rate by a lot and big tech's gonna abuse it.

[-] kautau@lemmy.world 47 points 1 year ago* (last edited 1 year ago)

Big tech proved in 48 hours with the OpenAI fiasco that, as with every other industry, ethics are gone and money wins in today’s hyper-capitalist system. Whatever promise AI ever held for being used for good is now vastly overshadowed by its likelihood to be used to increase quarterly profits for the highest bidder, along with whatever side effects that entails.

[-] Even_Adder@lemmy.dbzer0.com 22 points 1 year ago* (last edited 1 year ago)

Luckily, AI is a public technology. That's why they're already trying their hand at regulatory capture. And they might just get it. Just like they're trying to destroy encryption. Support open source development; it's our only chance. Their AI will never work for us. John Carmack put it best.

[-] Diabolo96@lemmy.dbzer0.com 4 points 1 year ago

AM and the other AIs from the short story "I Have No Mouth, and I Must Scream" could be a reality. The deep hatred it has towards humans was never explained and could be an alignment problem. They're AGIs made to wage wars, after all.

I really recommend Robert Miles' videos. He's been uploading videos about AI safety research for 6 years, since back when the most powerful AIs were in the millions of parameters and vastly undertrained.

https://youtu.be/bJLcIBixGj8

[-] PipedLinkBot@feddit.rocks 2 points 1 year ago

Here is an alternative Piped link(s):

https://piped.video/bJLcIBixGj8

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[-] riskable@programming.dev 16 points 1 year ago

> big tech's gonna abuse it.

Actually, it's everyone that's going to abuse it. Big tech wants to be the exclusive "AI provider" for everyday people's AI needs and desires, but the reality is that the tech isn't that easy to keep secret/proprietary, because most of the innovations pushing AI forward come from individuals fooling around with the technology, and from academia. Not from big tech R&D (which lately all seems to be spent trying to improve business processes).

Big tech is spending billions on hardware and entire data centers just to do AI stuff with the expectation that it'll give them a competitive advantage but the truth is that it'll be the small companies and individuals that end up taking advantage of AI in ways that actually improve things for everyday people and/or make real money.

My guess is that they're betting on acquisitions of companies using their AI processing power 🤷. Either that or it's just wishful thinking.

[-] HiddenLayer5@lemmy.ml 7 points 1 year ago* (last edited 1 year ago)

AI scams where someone pretends to be a loved one asking for help (read: "I'm in a bad situation right now, can you send me as much money as you can?") are already rampant, and unsurprisingly they're unreasonably effective, especially on older people.

Just a reminder that the tech companies absolutely do not see the above as an issue, BTW. In fact, all they seem to do is tacitly endorse it by advertising that you can use their service to clone people and "bring them to life" virtually. They're still making money when you use the AI (not to mention they collect and retain the training data you give them, with or without the subject's consent), and it's not like it's easy for investigators to tell which AI was responsible for a particular scam campaign, so there's really no risk to their reputation at all.

I'm serious when I say this: if you have elderly or otherwise less tech-inclined family members, and especially if your voice and/or photos are publicly available online, set up some kind of password that you have to get right before they send you money, absolutely no exceptions, no matter how distressed "you" look or sound. It can be as simple as a word or phrase, or pick a specific shared memory that people outside your family don't know about that you'll always mention before asking for money.

Do this in advance, and tell them that AI can now convincingly replicate human speech and even photos and videos, and that if "you" don't know the password, they should hang up/block the account immediately and not respond further. You might even want to practice with them in case they forget.

The vast majority of these scammers are just scraping the internet for information and have no idea who either of you are, so even a simple check like this should significantly reduce the risk of scams.

[-] FrankTheHealer@lemmy.world 40 points 1 year ago

Modern problems require modern solutions

[-] Agent641@lemmy.world 21 points 1 year ago

Dystopian problems require dystopian solutions

[-] uriel238@lemmy.blahaj.zone 34 points 1 year ago

The fingers wouldn't work: in real footage they'd move like physical fake fingers rather than blending in and out the way AI-generated footage does.

This reminds me of the product image of a gun that disguises itself as a cell phone. It was never a real product, but US law enforcement uses it to justify shooting people brandishing a cell phone.

[-] akd@lemm.ee 14 points 1 year ago

IANAL, but this seems stupid to try in court, assuming the footage is under a good chain of custody.

[-] leaky_shower_thought@feddit.nl 12 points 1 year ago

criminals got to crimi

[-] andrew_bidlaw@sh.itjust.works 8 points 1 year ago
[-] PipedLinkBot@feddit.rocks 4 points 1 year ago

Here is an alternative Piped link(s):

Edward Penishands

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[-] FlyingSquid@lemmy.world 3 points 1 year ago

Saw it years ago. Surprisingly boring.

[-] andrew_bidlaw@sh.itjust.works 2 points 1 year ago

One needs talent to turn one stupid joke into something special and compelling. They lacked it. At least they were dedicated enough to make material for meme cuts. And I feel they themselves had fun filming it (:

[-] FlyingSquid@lemmy.world 2 points 1 year ago

True, but it wasn't even good porn. And that doesn't take a huge amount of talent.

[-] selokichtli@lemmy.ml 7 points 1 year ago

You mean politicians.

[-] UnkTheUnk@midwest.social 6 points 1 year ago

truth is dead

this post was submitted on 25 Nov 2023
672 points (99.9% liked)

196


Be sure to follow the rule before you head out.

Rule: You must post before you leave.
