submitted 1 year ago by koper@feddit.nl to c/fediverse@lemmy.ml

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

50 comments
[-] thedarkfly@feddit.nl 8 points 1 year ago

Two solutions that I see:

  1. Mods and/or admins need to be notified when a post has a lot of upvotes from accounts on the same instance.
  2. Generalize whitelisting: require new instances to request federation before their votes count.
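The first suggestion could be sketched as a simple per-instance vote-share check. This is only an illustration of the heuristic, not anything Lemmy actually implements; the thresholds are made up:

```python
from collections import Counter

def flag_suspicious_post(voter_handles, min_votes=20, max_share=0.5):
    """Flag a post when one instance supplies an outsized share of upvotes.

    voter_handles: iterable of "user@instance" strings.
    min_votes and max_share are illustrative thresholds, not Lemmy defaults.
    Returns the suspicious instance domain, or None.
    """
    instances = Counter(handle.rsplit("@", 1)[1] for handle in voter_handles)
    total = sum(instances.values())
    if total < min_votes:
        return None  # too few votes to judge either way
    instance, count = instances.most_common(1)[0]
    if count / total > max_share:
        return instance  # candidate for a mod/admin notification
    return None
```

A mod tool could run this on each post and notify admins when it returns an instance name.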
[-] retronautickz@beehaw.org 6 points 1 year ago

Reddit had/has the same problem. It's just that federation makes it way more obvious on the threadiverse.

[-] danc4498@lemmy.world 5 points 1 year ago

Votes are just a number that determine what everybody sees. This will be manipulated by all the bad actors of this world once Lemmy becomes mainstream. Politicians, dictators, Hollywood, tech companies....

[-] menturi@lemmy.ml 5 points 1 year ago

I wonder if an instance could only allow votes by users on instances that require email verification or some other verification method. I imagine that would greatly reduce vote manipulation on that particular instance.

[-] Derproid@sh.itjust.works 4 points 1 year ago

This alone wouldn't help, because I can just set up an instance that requires email verification (or any other kind) and still automate account creation, since I can create unlimited email addresses on my own domain.

[-] lemming007@lemm.ee 5 points 1 year ago

What is the definition of a "fake account"?

[-] hawkwind@lemmy.management 6 points 1 year ago

In this context it would be an account with the sole purpose of boosting the visible popularity of a post or comment.

[-] lasagna@programming.dev 4 points 1 year ago* (last edited 1 year ago)

Wouldn't a detection system be way better? I can see a machine learning model handling this rather well: correlate each account with its upvoters across all of its posts and raise a flag when the voting pattern looks coordinated. It would be more of a mod tool, really.
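A rough sketch of the correlation idea, without any machine learning: if nearly the same set of accounts upvotes every one of an account's posts, that is already suspicious. The function names and the 0.9 cutoff are hypothetical choices for illustration:

```python
from itertools import combinations

def voter_overlap_score(posts_voters):
    """Average Jaccard similarity between the upvoter sets of one
    account's posts; near 1.0 means the same accounts vote every time.

    posts_voters: list of sets of voter IDs, one set per post.
    """
    sims = []
    for a, b in combinations(posts_voters, 2):
        union = a | b
        if union:
            sims.append(len(a & b) / len(union))
    return sum(sims) / len(sims) if sims else 0.0

def flag_account(posts_voters, threshold=0.9):
    # threshold is an illustrative cutoff a mod tool might expose
    return voter_overlap_score(posts_voters) >= threshold
```

Organic posts tend to draw partly different voters each time, so their overlap score stays well below 1.0, while a fixed bot horde scores near it.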

I have already run into a very obvious Russian troll factory account, and it really drags down the quality of the place. Freedom of speech shouldn't extend to war criminals, and I'd rather leave any clusterfuck that allows it, whether through intent or incompetence.

[-] dimspace@lemmy.world 4 points 1 year ago

Instances can just defederate with those servers

[-] Mikina@programming.dev 4 points 1 year ago

This is something that will be hard to solve. You can't really effectively discern between a large instance with a lot of real users and an instance with a lot of fake users made to look real. Any kind of protection I can think of, for example one based on user activity, can simply be faked by the bot server.

The only solution I see is to just publish the vote% or vote counts per instance, since that's what the local server knows, and let us personally ban instances we don't recognize or care about, so their votes won't count in our feed.
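The per-instance idea boils down to recomputing a post's score against a reader's personal blocklist. A minimal sketch, assuming the local server already exposes vote totals broken down by originating instance (which it does not today):

```python
def local_score(votes_by_instance, blocked_instances):
    """Recompute a post's score counting only instances the reader trusts.

    votes_by_instance: {"instance.domain": net_votes} as the local server
    sees them (hypothetical data shape, not an existing Lemmy API).
    blocked_instances: set of instance domains the reader has banned.
    """
    return sum(votes for instance, votes in votes_by_instance.items()
               if instance not in blocked_instances)
```

So a post boosted by 900 votes from an unknown instance would still rank normally in the feed of anyone who has banned that instance.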

this post was submitted on 09 Jul 2023
1851 points (97.5% liked)
