32
top 40 comments
[-] West_of_West@piefed.social 17 points 2 months ago

I hate the modern internet

[-] Wildmimic@anarchist.nexus 9 points 2 months ago

Yeah, but at least this post is interesting; it shows how godawful humanity as a whole is at detecting bots in the wild.

2 out of 400 bad.

[-] porous_grey_matter@lemmy.ml 4 points 2 months ago

That assumes that Reddit actually wants to ban bots. But as long as they're not too obvious, the bots are valuable to them, since they inflate the user count.

[-] GreatAlbatross@feddit.uk 1 points 2 months ago

"Bots? No, no, those are active users. They also don't use adblockers, so they're better than regular users!"

[-] West_of_West@piefed.social 2 points 2 months ago

I could be a bot eight now! How would I even know?

[-] ignotum@lemmy.world 3 points 2 months ago

You might be a bot nine too for all we know

[-] Trainguyrom@reddthat.com 2 points 2 months ago

Could simply be that only 2 have been fully banned by Reddit but most have tons of subreddit bans and/or shadowbans. On the other hand, Reddit is such a cesspit these days I wouldn't be too shocked if they just exist on Reddit shitposting slop

[-] hector@lemmy.today 0 points 2 months ago

Reddit isn't trying, though. Social media is hooked by big business interests and governments, and there is overlap there. If I can spot influence agents and mechanized trolls supported by bots, you can bet they could do better with their tools and analytics.

As we've seen for the last ten years, social media only takes down bots and influence agencies that researchers or others make impossible to ignore, and they've cut those researchers off from the information they were using to that effect. Now it's only the operations that US-government-aligned groups highlight that will get removed: alleged Iranian ones and the like, bit players.

These inauthentic accounts vastly inflate their numbers and make advertising seem more valuable. Even as they make the sites less useful and drive away real users, the assumption is that users have nowhere else to go, so why push back on governments and big business ratfucking the sites, when those actors can hurt them in myriad ways?

Not until we build a fediverse with the critical mass to take off will we see them fight for real people's use of their sites.

[-] Goodman@discuss.tchncs.de 0 points 2 months ago

How would we defend ourselves from such a bot flood though?

Let's say that we start to become competitive with one of these big tech companies user-wise. What is stopping them from destroying the fediverse with bots by sowing dissent, hate and slop?

Perhaps the answer to such a scenario would just be to splinter, defederate and sort out the bot issue with better user registration.

Happy to hear your thoughts.

[-] hector@lemmy.today 1 points 2 months ago

There are a few options. The best of those is part of a larger reform of how instances, and the general forums they interact on, could be run. Rather than moderators who simply decide on violations, bans, etc., we'd have a clear set of rules with a clear appeal process for bans and the like, culminating in a jury trial of members of that instance, and maybe a higher court to rule strictly on liability for users who endanger legality even when acquitted.

Beyond that, instances could have some sort of process, maybe even the election of qualified users, to appoint censors who would have the tools to hunt bots and influence operations; flags raised by users would be forwarded to them and to the moderators. Any enforcement action would go through that appeal process to prevent abuses of power or misapplications of the rules.

I'd say, do it like Rome did: for every elected position (and I will get to some others), elect not one but two. The two highest vote-getters each take office with the same powers. It worked for them for 500 years.

There are some other positions we could fill through elections too. Now, who qualifies to vote? We could have threads where reasoned arguments determine it, with votes from qualified people who pass captchas, perhaps. It's a chicken-and-egg problem, though: online voting breaks down if influence agents, chatbots, and bots are the ones voting. Agents could cycle through accounts and do the captchas, and LLM-based chatbots might already be able to complete them, so that might not work.

How else could we limit voting? Maybe just by requiring reasoned arguments for why someone should be on the voting list, with users who have their own positive voting record able to vote, since bots and chatbots won't accumulate much karma without being spotted by the censors, moderators, and the like.

So I got bogged down here, but to summarize: appoint two censors, selected by the community for one-year terms or whatever, who can hunt and charge accounts for removal or banning, under a clear set of rules with appeals heard by jury trials of the instance's users. Secure online trials. Maybe tests for suspected accounts.

The trickier part is building a pool of real, good-faith users who are able to vote, so that influence agents, bots, and LLMs don't ratfuck the votes, jury trials, etc. There would be ways; we could even use secure end-to-end encryption to verify real users person to person, if the person agrees. Just spitballing here.

We could also elect other positions each term to fulfill other functions of the community, all under the same clear rules, with enforcement appealable to a jury trial of real users.

Because a censor who is vetted and equipped with some analytics tools, alongside moderators and administrators as they are able, would be able to hunt down suspected bots and influence agents and have them removed. Not all of them, but a lot. Industry operations work off keywords, for instance: say "glyphosate bad" and an influence agent with bots pops up within half an hour and argues endlessly if you argue back. It's not subtle. The ones pushing for an Iran forever war as we speak are also, many of them, not subtle.

One more thing to add: a separate form of karma, earned by doing favors for the community and for others, that can be traded like favors and used to qualify people. We could have real-world versions for some social media applications, traded like credits or money, and instance-level versions, not necessarily based on votes, but earned by doing jobs for the community: acting successfully as censor, moderator, administrator, or in whatever other functions.

[-] LiveLM@lemmy.zip 16 points 2 months ago* (last edited 2 months ago)

Everyone is cooked, you are all cooked

Thanks for making the problem worse, fuck you too man.

[-] OwOarchist@pawb.social 11 points 2 months ago

And yet I get constantly shadowbanned there just for using a VPN...

I think reddit likes bots more than it likes real users.

[-] MBech@feddit.dk 4 points 2 months ago

Well why not, bots inflate their numbers more.

[-] chicken@lemmy.dbzer0.com 10 points 2 months ago* (last edited 2 months ago)

Reddit has shown through its actions that it's more interested in banning real users than bots, and wants to protect bots from being identified and called out by users, so it's not that surprising they've been able to do this.

[-] apftwb@lemmy.world 8 points 2 months ago
[-] aesthelete@lemmy.world 8 points 2 months ago* (last edited 2 months ago)

The days of having arguments with Internet strangers and knowing they aren't a bot are officially over. It's hard to tell exactly when the period ended, but it's definitely done now.

[-] psycotica0@lemmy.ca 4 points 2 months ago

I'd like to argue with you about that, but alas...

[-] demizerone@lemmy.world 4 points 2 months ago

I did that only twice and it never happened again. Arguing with people on the internet is pointless to begin with.

[-] dbtng@eviltoast.org 5 points 2 months ago

No it's not!

[-] sheogorath@lemmy.world 3 points 2 months ago

Yeah, what I do right now is just join Discord servers and argue with people in voice chat. YMMV though; I accidentally made some lifelong friends this way.

[-] angrywaffle@piefed.social 6 points 2 months ago

Unfortunately they're probably around the fediverse as well.

[-] Bristlecone@lemmy.world 6 points 2 months ago

That's true to a certain extent, but I think the fediverse isn't all that attractive to these types of people. Additionally, I think we are way better prepared to handle bot detection and mass bans, since we aren't as whorish here in the fediverse.

[-] rbos@lemmy.ca 3 points 2 months ago

The ratio of human admins to users is better too; I think that will work in our favour.

[-] bjoern_tantau@swg-empire.de 4 points 2 months ago

That's exactly what a bot would say.

[-] angrywaffle@piefed.social 6 points 2 months ago

Absolutely! That is such a fantastic, creative, and thought-provoking comment! 🚀✨

[-] ignotum@lemmy.world 2 points 2 months ago

Thank you! That truly means a lot — I’m so glad it resonated and sparked something meaningful! 🌟💡

[-] dorkynsnacks@piefed.social 2 points 2 months ago

So far there's no money to be made here. Influence and reach are also limited.

If that changes at some point, it might be the end of the Fediverse. It's far too open to bots. A spammer can not only easily create new accounts on instances, but can also run their own instances.

[-] irelephant@lemmy.dbzer0.com 1 points 2 months ago

There are a good few attempts, but the obvious ones get detected quickly.

[-] SGforce@lemmy.ca 5 points 2 months ago

Such an inefficient way to astroturf. Just copy old comments and markov-chain basic shit. Reddit has been mostly bots for years and years.

[-] ThunderComplex@lemmy.today 1 points 2 months ago

Too efficient. Why not rage-bait AND raise your neighborhood's energy prices at the same time?

[-] abs_mess@lemmy.blahaj.zone 2 points 2 months ago

Too hard. Have an LLM summarize each comment in an old comment chain so that it obliterates any meaning and buries any real engagement. (I have no evidence, but I think Reddit is scraping external sites and turning posts into comment chains.)

[-] fullsquare@awful.systems 2 points 2 months ago

According to his own claim, and he's selling his super secret methods, so he might just be making shit up.

[-] OwOarchist@pawb.social 2 points 2 months ago

How do you make a profit from producing AI slop?

You sell how-to guides to other producers of AI slop.

[-] theunknownmuncher@lemmy.world 1 points 2 months ago

Little late

[-] Gork@sopuli.xyz 1 points 2 months ago

God damn clankers

[-] luciferofastora@feddit.org 0 points 2 months ago

I sometimes wonder how prevalent bots are on Lemmy. On one hand, the barrier to entry might be lower and the effectiveness of bans harder to gauge. On the other, we're a smaller, less attractive target.

Either way, the readiness to accuse dissenters of being bots or paid actors is a symptom of the general toxicity and slop spilling all over the internet these days. A (comparatively) few people can erode fundamental assumptions and trust. Ten years ago, I would've been repulsed by the idea of dehumanising conversational opponents that way (which may have just been me being more naive), but today I can't really fault anyone.

In terms of risk assessment (value÷effort), I'm inclined to think something with the reach of Ex-Twitter or reddit would be a more lucrative target, and most people here actually are people—people I disagree with, maybe, but still a human on the other side of the screen. Given the niche appeal, the audience here may overall be more eccentric and argumentative, so it's easy to mistake genuine users for propaganda bots instead of just people with strong convictions.

But I hate that the question is a relevant one in the first place.

[-] Goodman@discuss.tchncs.de 1 points 2 months ago

We are the web. There is no web without the we.

It is ultimately humans who add value to the internet. We can make decisions, take action, and hold bank accounts; bots, for the most part, still can't. If we keep growing, there will come a time when swaying opinions, pushing advertisements or driving dissent will reach that value/effort threshold, especially with the effort term shrinking more every day.

I think that we are genuinely witnessing the end of the internet as we know it and if we want meaningful online contact to persist after this death, then we should come up with ways that communities can weather the storm.

I don't know what the solution is, but I want to talk and think about it with others that care.

On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.

  • Being a principled, conscious consumer makes you a less likely target for advertisement.
  • Avoid ragebait and clickbait, and develop a good epistemic bullshit filter along with media literacy; this makes it more difficult to lie to you or to provoke outrage.
  • Unfortunately, be selective with your trust. How old is the user account? Are the posting hours normal? Does the user come across as a genuine human being who values discussion and meaningful online contact?
  • Be authentic and genuine. I don't know how else to signify that I am real (shoutout to the þorn users).
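To make that checklist concrete, the first couple of signals (account age, posting hours) could be sketched as a toy score. All the names and thresholds here are made up for illustration; no real instance works this way:

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Minimal stand-in for the public signals a profile exposes."""
    age_days: int          # how old the account is
    post_hours: list[int]  # hour-of-day (0-23) of recent posts


def trust_heuristic(acct: Account) -> float:
    """Toy 0..1 score combining two of the checklist items above.

    Older accounts score higher, and posting spread across all 24
    hours (no sleep at all) is treated as a bot-like signal.
    """
    # Account age: saturates after roughly a year.
    age_score = min(acct.age_days / 365, 1.0)

    # Posting hours: humans cluster posts in waking hours, so
    # penalize accounts active in more than 16 distinct hours.
    distinct_hours = len(set(acct.post_hours))
    hours_score = 1.0 - max(distinct_hours - 16, 0) / 8

    return (age_score + hours_score) / 2
```

A real detector would need many more signals, and determined operators can fake all of these, so this is a thinking aid for being selective with trust, not a filter.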

I would love to hear what others think.

[-] luciferofastora@feddit.org 1 points 2 months ago

are the posting hours normal?

Hey, no judging my sleep ~~schedule~~ arbitrary times when biological necessity triumphs over all the fun things I could do while awake!


Serious reply:

On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.

On the collective level, we should do something about the mechanisms that incentivise that malicious extraction of value in the first place, but that's a whole different beast...

Being a principled conscious consumer makes you a less likely target for advertisement

Agreed, though we should also stress that "less likely" or "unlikely" doesn't mean "never", and that we're not immune to being influenced by ads. That's a point I've seen people in my social circles overlook or blatantly ignore when it's pointed out, hence my emphasising it.

media literacy

This is probably one of the most critical deficits in general. Even with the best intentions, people make mistakes, and it's critical to be aware of them and able to compensate for them.

Unfortunately, be selective with your trust.

Same as media literacy, I feel like this is a point that would apply even in a world where we're all humans arguing in good faith: Others may have a different, perhaps limited or flawed perspective, or just make mistakes — just as you yourself may overlook things or genuinely have blind spots — so we should consider whose voice we give weight in any given matter.

On the flipside, we may need to accept that our own voice might not be the ideal one to comment on something. And finally, we need to separate those issues of perspective and error from our worth as persons, so that admitting error isn't a shame, but a mark of wisdom.

Be authentic and genuine

That's the arms race we're currently running, isn't it? Developers of bots put effort into making them appear authentic—I overheard someone mention that their newest model included an extra filter to "screw up" some things people have come to consider indicators of machine-generated texts, such as these dashes that are mostly used in particular kinds of formal writing and look out of place elsewhere.

If at all, people tend to just use a hyphen instead - it's usually more convenient to type (unless you've got a typographic compulsion to go that extra step because the hyphen just looks wrong). And so the dev in question made their model use fewer dashes and replace the rest with hyphens to make the text look more authentic.

I wanted to spew when I heard that, but that's beside the point.

So basically, we'd have to constantly be running away from the bots' writing style to set ourselves apart, even as they constantly chase our style to blend in. Our best weapon would be the creative intuition to find a way of phrasing things other humans will understand but bots won't (immediately) be able to imitate.

Being creative on demand isn't exactly a viable solution, at least not individually, and coordinating on the internet is like herding lolcats, but maybe we can work together to carve out some space for humanity.

[-] Goodman@discuss.tchncs.de 1 points 2 months ago

Thanks for your comments. I agree with everything you said, especially that these traits are desirable in broader life IRL. In a way, web culture is a reflection of our own cultures, just more mixed, extreme, amplified, and with a good dose of parasociality. I desperately want people to break free of their cycles. Think, talk, discuss, empathize and form communities; use your free will for good, dammit. These are the real antidotes that will enable the cultural shift that allows us to reject the smothering of the human spirit in the current way of life.

Anyway, it is a terrible thing that there is an arms race to be authentic. This really ought to be solved on the user-registration side. And yes, saying something profound with hidden meaning through creative intuition is great; I write poems sometimes. But it's not the solution to authenticity online.

this post was submitted on 15 Feb 2026
32 points (100.0% liked)

Reddit

22916 readers
1 users here now

News and Discussions about Reddit

Welcome to !reddit. This is a community for all news and discussions about Reddit.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- No brigading.

**You may not encourage brigading any communities or subreddits in any way.**




Rule 2- No illegal or NSFW or gore content.




Rule 3- Do not seek mental, medical and professional help here.

Do not seek mental, medical or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts.

Provided it is about the community itself, you may post non-Reddit posts using the [META] tag on your post title.



Rule 7- You can't harass or disturb other members.

If you vocally harass or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here. This includes using AI responses and summaries.


founded 2 years ago
MODERATORS