News and Discussions about Reddit
Welcome to !reddit. This is a community for all news and discussions about Reddit.
My understanding of how this works is that the left one is real accounts making real comments, at least for the most part.
Then when the link gets reposted, either by a bot or naturally, potentially depending on the title, the bots scrape the old comments and post them.
It's content farming. And Reddit is probably okay with this.
The right one is the "real" accounts. Notice how the left one is newer and all the accounts have names ending with four digits, except where they aren't copies from the right.
No, the left one is older, and most of the names in the right one contain four digits.
What's going on here?
Maybe op updated the picture?
I did, because other people complained in another comment that it was confusing to not have the older thread on the left.
Anyway, it's pretty obvious which one is which.
Thanks, I almost thought I was delusional.
I also thought you were, lmao.
Yeah, it seems they did for some reason.
The list of names at the left creeps me the fuck out.
I saw this exact same style of bot account years ago on Tumblr. They always follow the same naming scheme: one word or two words combined and then a string of 4 digits. I bet if you go to any of their profiles, you'll find like 4 comments that are all copied from old threads and a bunch of upvotes on completely random subs, possibly even all of them being on other bot accounts' posts and comments.
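The naming scheme described above (one or two words followed by a string of four digits) is easy to check for mechanically. As a rough sketch, and keeping in mind this is just an illustrative heuristic that will produce false positives on legitimate usernames:

```python
import re

# Heuristic for the pattern described above: letters (optionally with
# underscores/hyphens) ending in exactly four digits, e.g. "SilentTiger4821".
GENERATED_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z_-]*\d{4}$")

def looks_like_generated_name(username: str) -> bool:
    """Return True if a username matches the word(s)-plus-four-digits shape."""
    return bool(GENERATED_NAME_RE.fullmatch(username))
```

A pattern match alone proves nothing, of course; it would only be one weak signal among many.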
The real question is whether they're being used to fake activity on Reddit, to sway public opinion by posting this sort of political slant, or whether they'll later be used to advertise scams and this is just to make them seem legitimate.
Why not all of the above? If you have a service, you want to sell it to as many customers as possible.
Very good point.
I thought the names followed that format because that's the format reddit used for suggestions when signing up.
I think the accounts are kind of "warmed up" this way to make them harder for reddit to identify as bots when they're used for vote manipulation.
Like, a bot that just voted in /r/politics threads would be easier to identify than one which comments here and there and gets a few upvotes itself.
Reddit is going to poison LLMs sooner than I thought.
LMAO while AIs reading training data sets get stuck in infinite loops.
Reddit probably omits bot accounts when it sells its data to AI companies
I doubt Reddit is in charge of many of the existing bots on their site.
Reddit has access to its own data - they absolutely know which users are posting unique content and which users' content is a 100% copy of data that already exists elsewhere on their own platform.
I know they could be, I'm just not sure they're that competent. These bots often aren't single accounts or just copy-paste either; there's usually some effort to mix it up or change the wording slightly. Reddit's internal search function is infamously bad, but they "know" which users are unlabeled bots with some effort put behind them?
I figure it’s their absolute last priority. They might know rough bot #s, but haven’t built or don’t widely use takedown tools. There’s always an enhancement to deliver, and bots help their engagement metrics.
I know everyone here likes to circlejerk over "le Reddit so incompetent," but at the end of the day they are a (multi-)billion-dollar company, and it's willfully ignorant to infer that there isn't a single engineer at the company who knows how to measure string similarity between two comment trees (hint: `import difflib` in Python). You think in Reddit's 20-year history no one has thought of indexing comments for data science workloads? A cursory glance at their engineering blog indicates they already perform much more computationally demanding tasks on comment data for content-filtering purposes.
Analytics workflows are never run on the production database, always on read replicas, which are built asynchronously from the transaction logs so as not to affect production read/write performance.
Reddit's entire monetization strategy is collecting user data and selling it to advertisers - It's incredibly naive to think that they don't have a vested interest in identifying organic engagement
I'm sure they have, but an index doesn't have anything to do with the python library you mentioned.
Sure, either that or aggregating live streams of data, but either way it doesn't have anything to do with ElasticSearch.
It's still totally possible to sync things to ElasticSearch in a way that won't affect performance on the production servers, but I'm just saying it's not entirely trivial, especially at the scale reddit operates at, and there's a cost for those extra servers and storage to consider as well.
It's hard for us to say if that math works out.
You would think, but you could say the same about Facebook and I know from experience that they don't give a fuck about bots. If anything they actually like the bots because it looks like they have more users.
Doubt it, they are interwoven into almost any conversation with more than 70 comments.
If you have access to the entire Reddit comment corpus it's trivial to see which users are only reposting carbon copies of content that appears elsewhere on the site
It's probably not as easy as you imagine for reddit to identify and cleanse all bot content.
Of course it's not. Nor do they want to.
I think the person you're talking to thinks all bots are like the easy ones in this screenshot.
Look at the picture above - this is trivially easy. We are talking about identifying repost bots, not seeing if users pass/fail the Turing test
If 99% of a user's posts can be found elsewhere, word for word, with the same parent comment, you are looking at a repost bot
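The "99% of posts found elsewhere" rule above can be sketched as a simple check against a set of known comment texts. This is a toy illustration with invented data, not how any real system stores its corpus:

```python
def repost_fraction(user_comments: list[str], known_texts: set[str]) -> float:
    """Fraction of a user's comments that appear verbatim elsewhere."""
    if not user_comments:
        return 0.0
    hits = sum(1 for c in user_comments if c in known_texts)
    return hits / len(user_comments)

# Hypothetical corpus of comment texts already seen on the site.
corpus = {"great post", "this aged well", "underrated comment"}
suspect = ["great post", "this aged well", "underrated comment", "great post"]

if repost_fraction(suspect, corpus) >= 0.99:
    print("likely repost bot")
```

A real check would also compare the parent comment, as the thread notes, since a verbatim copy under the same parent is far stronger evidence than a common phrase reposted anywhere.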
That's easy in an isolated case like this, but the reality of the entire reddit comment base is much more complex.
The low level bots in OPs screenshot, sure, because it's identical. Not the rest.
I used to hunt bots on reddit for a hobby and give the results to Bot Defense.
Some of them use rewrites of comments with key words or phrases changed to other words or phrases from a thesaurus to avoid detection. Some of them combine elements from 2 comments to avoid detection. Some of them post generic comments like 💯. Doubtless there are some using AI rewrites of comments now.
My thought process is: if generic bots have been allowed to run so rampant that they fill entire threads, that's an indication of how bad the more sophisticated bot problem has become.
And I think @phdepressed is right, no one at Reddit is going to hunt these sophisticated bots because they inflate numbers. Part of killing API access was to kill bot detection, after all.
Reddit has way more data than you would have been exposed to via the API, though - they can look at things like the user's ASN (is it coming from a datacenter?) and whether they're using a VPN, and they track things like scroll position, cursor movements, read time before posting a comment, how long it takes to type that comment, etc.
You are conflating "don't care about bots" with "don't care about showing bot-generated content to users". If the latter increases activity and engagement, there is no reason to put a stop to it. However, when it comes to building predictive models, A/B testing, and other internal decisions, they have a vested financial interest in making sure they are focusing on organic users - how humans interact with humans and/or bots is meaningful data; how bots interact with other bots is not.
It's account farming. They make fake accounts look legitimate so they can use them to influence opinions on the site.
They also use them in groups of 3 to lure people to malicious sites and scam sites. Especially fake merchandise sites.
Basically replaying a thread to make it look like there's activity in the sub.
The left predates the right by 10 months