I'm personally on the fence about this type of stuff. On one hand, yes, I 100% agree about actually keeping kids safer online (not the politicians' "Think of the kids!" type of "safety"). On the other, I don't want anyone to have to give up privacy by confirming their age with some form of verification, whether that's a picture/video of an ID with a birth date on it or an AI that will inevitably produce tons of false positives judging you, just to access a service online.
I'm 100% in the second camp. Facebook having my ID is a much bigger issue than having my kids' profile be public. I as a parent can ensure my kids' profiles are acceptable, or mark them as private myself. I can't ensure Facebook deletes my ID after verifying my identity.
Yes, kids should be safer online, and that starts at home. Educate parents and kids about how to stay safe, that's as far as it should go.
I'm also in the second camp. Plus, censoring the bad words for specific users is a few too many steps closer to "don't say gay" on the internet. Is "ass" OK but not "fuck"? Is sex talk forbidden? All mention of anatomy, including general questions about health? How about they ban anti-capitalist language too? The TikTok language phenomenon shows that users will absolutely just make do and get around communication bans, "unalive" and "le$beans" being the most popular examples. This type of censorship has already happened on other platforms, and it's all bullshit and useless.
I completely agree. I'm reading a book related to 1984, and all of the thought crime and whatnot it talks about is scarily on-point when it comes to social media censorship. For example, "sex crime" is strictly controlled, and in the same chapter that someone gets taken away for getting pregnant, the MC talks about sexual relationships she has and plans to have. Nobody can talk about love or relationships, yet everyone seems to engage in them, or at least one-night stands. In fact, the word used for "abortion" in that book is "unbirth," which is right there with the term "unalived."
Blocking out a huge part of human culture doesn't help anyone, and it doesn't actually work, because people will find a way. What can work is giving users the tools to hide stuff they don't want to see.
The obvious answer is that Facebook should not be used by anyone, ever. The model is cancer, whatever FB does of value for the user can be accomplished without a social media platform.
The choice becomes much, much harder once you listen to accounts about CSAM. Darknet Diaries has a few episodes on this. Some accounts are stomach-churning. You can see the reasoning of the people pushing for these laws.
And I agree. Education would go a long way. Much further than some ID verification.
But, see, education makes people smarter. What if people see through the lies of politicians?!
Both politicians and agencies are drooling at the thought of such laws. Because no one answers one simple aspect the people want answered. Who watches the watchers? Who are they accountable to?
Exactly.
People like easy solutions to complex problems. If you don't see the problems, it's easy to assume they don't exist, but what actually happens is that by banning things, you just push them underground, where they fester. Alcohol prohibition created the mafia, which caused far more problems than alcohol ever did, and it's still around today. Banning drugs seems to have created, or at least strengthened, the drug cartels.

I wouldn't be surprised if strict controls around CSAM actually end up harming more kids, as people who would have been casual observers get caught up in the worst of it and end up actually harming children. I'm not saying CSAM should be legal or anything like that; I'm just saying the strict censorship of anything close to it is more likely to push someone who is casually curious to go and find it. The more strictly something is controlled, the more valuable it is for the person who controls it.
In other words, it's the Streisand Effect, but for crime.
No, what we need is better education and better (not more) policing.
Anything to prevent getting my ID in a database. I would actually be OK with using an AI to verify my age by my appearance if it really came down to it and I legally had to choose some form of age verification.
How many users does IG have that are registered as under 18?
I'm 25 now, but I still always say I was born in the 80s out of habit...
It's a good step, but it won't fix things.
As someone from the '80s, I'm somehow offended.
70s checking in, whippersnapper.
Yeah, I always pick 1920 for a birthday, even though I'm 45.
Nothing can fix things because teenagers will not cooperate. If Instagram could identify all its teenage users, those users would move to a platform that couldn't. The only thing the restrictions achieve is a reduction in the market share of the platform with the restrictions.
I think it would be naive to assume they don't know this already. Not to say that you're making that argument, but I think the losses are calculated against the benefit of the appearance of care that this move affords them. Sure, these new restrictions and tools mean that some parents will be more willing to let their teens engage with the platform, but there's no way that will outweigh the active-user reduction in the targeted age range.
The real benefit is looking like they're doing stuff in a positive direction in the context of minors. I'm definitely expecting them to point at this move (and its voluntary nature) as an argument against future regulation proposals. Especially the part where they're ostensibly putting that control in parents' hands.
I still always say I was born in the 80s out of habit...
I always say 1900 out of habit... I was rejected at least once for being too old :D
Lego is serious about those 4-99 age limits, huh?
If I get offered the whole calendar, I will use the whole calendar!
are you a February 29 1900 enjoyer?
That's ageist. I maintain my God-given right to lie about being the oldest person on earth.
I’m 25 now, but I still always say I was born in the 80s out of habit…
...?
For those "special" websites
Thank god they're filtering out the bad no-no words! Finally teens won't be using naughty and scary words any longer because forbidding words that make us sad and upset is a sensible and smart thing to do! Fuck these shitty networks policing every aspect of speech with a humongous camel dick!
Also, if everything is highlighted, nothing is highlighted. Be more reasonable with your highlights.
I’m glad nearly every word in this image is highlighted so I’d know what to read.
(I’m just joshin’)
HI JUST JOSHIN I'M WOGI
They've known their network is harmful to teens for years now, so I wonder why NOW they're finally doing something about it?
'Cause the fish is starting, slowly, to suspect that it's in a net.
They are not. They're just making it look like they care, but nothing actually changes.
...as private as an Instagram account can be, anyway.
As a user of bionic reading, wtf did you do to your text
Yeah, I'm not sure. People are calling it highlighting, but it doesn't fit any reasonable pattern for manual highlighting. Is it some sort of bad automated highlighting? Or someone still learning what highlighting is even used for? Or is it just some sort of style thing?
Wait, there are teens who don't private their accounts? That's weird.
The weirder thing is teens using their real identity online at all.
They have an account their parents can see, and private accounts.
This has all happened before and it will all happen again. This is what it looks like when a social media company tries to head off an incoming regulatory push.
Only took them 14 years, lol
How are they going to identify who are teens?
Meta said it was fully expecting many teenagers would try to evade the new measures.
"The more restrictive the experience is, the stronger the theoretical incentive for a teen to try and work around the restriction," Mr Mosseri said.
In response, the company is launching and developing new tools to catch them out.
Instagram already asks for proof of age from teenage users trying to change their listed date of birth to an adult one, and has done since 2022.
Now, as a new measure, if an underage user tries to set up a new Instagram account with an adult date of birth on the same device, the platform will notice and force them to verify their age.
In a statement, the company said it was not sharing all the tools it was using, "because we don't want to give teens an instruction manual".
"So we are working on all these tools, some of them already exist … we need to improve [them] and figure out how to provide protections for those we think are lying about their age," Mr Mosseri said.
The most stubborn category of "age-liars" are underage users who lied about their age at the outset.
But Meta said it was developing AI tools to proactively detect those people by analysing user behaviour, networks and the way they interact with content.
Now, as a new measure, if an underage user tries to set up a new Instagram account with an adult date of birth on the same device, the platform will notice and force them to verify their age.
So, another reason to force users to hand over PII for "age verification" if they "suspect" (with AI, of course) that a new user is underage. Nice.