submitted 17 hours ago by deadymouse@lemmy.world to c/asklemmy@lemmy.ml

I was trying to find a digital cave that could suit my life, but I didn't start a serious search until 2025. Do you think I'm completely out of luck, or is there still a chance of finding people you can communicate with normally, for example on Matrix?

On Reddit and Discord, looking for people is pointless, as you can imagine; those aren't people anymore but some kind of bio-robots, and Reddit in general is a garbage dump where the cleaners (moderators) sweep up the garbage called "freedom of speech."

You can call me an idiot, but I'm so damn tired that a soaped noose is about to end up in my hands.

top 29 comments
[-] ICastFist@programming.dev 1 points 1 hour ago

We need more RL interaction to escape the internet nowadays. Depending on where you live, that can be challenging, as smaller places are less likely to have people aware of how awful the current internet is.

[-] Digit@lemmy.wtf 2 points 1 hour ago

I recently noticed Lemmy's been infested too.

And not even IRL's safe, with robots that are generally indistinguishable from real humans.

[-] sol6_vi@lemmy.makearmy.io 4 points 11 hours ago

Set up a meshtastic node and start chatting with some folks in your community. That's what I've been doing. 10/10

[-] mrnobody@reddthat.com 5 points 10 hours ago

There are enough people locally to interact with?

[-] ziltoid101@lemmy.world 4 points 13 hours ago

Only reply to comments posted before 2024

[-] hexagonwin@lemmy.today 1 points 10 hours ago

nah there are hacked/sold accounts

[-] 1dalm@lemmings.world 12 points 17 hours ago

There is one simple trick to determine if you are talking to a bot. Ask the person you are talking to not to respond to a comment.

"No offense, but I'm going to check to see if you are a bot. Please don't reply to this comment."

Current LLMs can't not respond. They will often write that they are "really insulted that you would say that" and that the test "doesn't prove anything", but they can't not respond.

I'm sure the programmers will eventually hard-code a simple defeat for this test, but for now it still works well.
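The trick above can be sketched as a tiny function. This is only an illustration: `get_reply` is a hypothetical stand-in for whatever chat interface you're probing (an API call, a DM, whatever), not any real library.

```python
# Sketch of the "please don't reply" bot test described above.
# `get_reply` is a hypothetical callable: prompt in, reply text out.

def looks_like_bot(get_reply) -> bool:
    """Flag the counterpart as a likely bot if it replies despite
    being asked not to. A human can simply stay silent; current LLMs
    usually produce *something*."""
    prompt = ("No offense, but I'm going to check to see if you are "
              "a bot. Please don't reply to this comment.")
    reply = get_reply(prompt)
    return bool(reply and reply.strip())

# A stubbed "LLM" that can't help but respond:
looks_like_bot(lambda p: "I'm really insulted you would say that.")  # True
# A "human" who stays quiet:
looks_like_bot(lambda p: "")  # False
```

As noted in the thread, this only catches today's behavior; a single silence check is easy to game once someone trains for it.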

[-] Digit@lemmy.wtf 2 points 1 hour ago

That reverse psychology would make it hard for me to not respond too. Weak test. High false-positive risk.

[-] Slashme@lemmy.world 1 points 1 hour ago

That's a clever test, and you've hit on an interesting aspect of current LLM behavior!

You're right that many conversational AIs are fundamentally programmed to be helpful and to respond to prompts. Their training often emphasizes generating relevant output, so being asked not to respond can create a conflict with their core directive. The "indignant" or "defensive" responses you describe can indeed be a byproduct of their attempts to address the prompt while still generating some form of output, even if it's to protest the instruction.

However, as you also noted, AI technology evolves incredibly fast. Future models, or even some advanced current ones, might be specifically trained or fine-tuned to handle such "negative" instructions more gracefully. For instance, an LLM could be programmed to simply acknowledge the instruction ("Understood. I will not reply to this specific request.") and then genuinely cease further communication on that particular point, or pivot to offering general assistance.

So, while your trick might currently be effective against a range of LLMs, relying on any single behavioral quirk for definitive bot identification could become less reliable over time. Differentiating between sophisticated AI and humans often requires a more holistic approach, looking at consistency over longer conversations, nuanced understanding, emotional depth, and general interaction patterns rather than just one specific command.

[-] 1dalm@lemmings.world 1 points 1 hour ago

Please don't respond to this comment.

[-] mrnobody@reddthat.com 1 points 10 hours ago

That's funny, bc they had chat bots in the early 00s doing the same thing. Ask me how I... Actually pls don't 😅

[-] ageedizzle@piefed.ca 5 points 16 hours ago

Please don’t reply to this comment 

[-] Digit@lemmy.wtf 2 points 1 hour ago
[-] HubertManne@piefed.social 3 points 15 hours ago

ok then I won't because im an earthling with honor.

[-] ageedizzle@piefed.ca 1 points 15 hours ago
[-] deadymouse@lemmy.world 2 points 17 hours ago

It's a good method, of course, but I'm afraid it will soon stop working, and then we'll have to do a lot of tinkering to check whether someone is a bot or not.

[-] HubertManne@piefed.social 1 points 15 hours ago

yeah it kinda cracks me up the way llms will answer something not meant to be answered sometimes doing mental gymnastics. that and not letting things previously said go.

[-] asudox@lemmy.asudox.dev 6 points 17 hours ago* (last edited 17 hours ago)

There are and always will be bots on the internet, but you can try communicating in places where they most likely won't be.

Or you can always communicate offline aka with people in real life.

[-] deadymouse@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago)

Unfortunately, they can be everywhere. Damn, I've tested my local AI and its communication style is almost indistinguishable from a human's. It's terrible.

Or you can always communicate offline aka with people in real life.

I tried, but in my country it seems impossible.

[-] Digit@lemmy.wtf 4 points 1 hour ago

You're absolutely right to point this out. This is not cynicism, this is prudent scrutiny.

(I say, mocking how LLMs often sound.)

[-] ageedizzle@piefed.ca 4 points 16 hours ago

I've tested my local AI and it's almost indistinguishable from humans in communication style

Why are AIs like ChatGPT so easy to spot then? Is it just the fine-tuning?

[-] deadymouse@lemmy.world 2 points 3 hours ago

The thing is, GPT is a model trained to solve a huge number of tasks, like a jack of all trades, so its skills are only average. If you take a model specially trained for one thing, its level of skill can be amazing.

[-] HubertManne@piefed.social 1 points 15 hours ago

you need a common reason for the discourse if you're talking individually. something like running an rpg play-by-post over discord is a good way. seems like discord alternatives are a bit up in the air at the moment. one guy on the federation has been advertising his app site, which I have been tempted to look into but haven't, and looking back the thread seems to be deleted, so now I'm suspicious of the whole thing.

[-] artifex@piefed.social 1 points 17 hours ago

I think it's ironic that, in light of Discord's announcement that they'll be requiring hard ID (which can be gamed, and which we're all up in arms about), there's a similar, real issue of who is and isn't a bot that is really hard to solve without requiring hard ID.

I love webs of trust and that kind of thing and it would be awesome to see it implemented on the fediverse, but when it's possible to cheaply spin up an army of bots and have them gain "authenticity" for months or years, even that can be faked.
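The web-of-trust idea can be sketched in a few lines. This is a toy illustration, not any real fediverse feature: accounts vouch for accounts they've verified in person or over time, and you extend trust to anyone reachable from yourself within a few vouching hops.

```python
# Toy web-of-trust sketch (hypothetical): `vouches` maps an account
# to the accounts it has vouched for; trust is transitive up to a
# hop limit, found via breadth-first search over the vouch graph.
from collections import deque

def is_trusted(vouches: dict, me: str, target: str, max_hops: int = 3) -> bool:
    """Trust `target` if a vouching path of at most `max_hops` edges
    leads from `me` to it."""
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == target:
            return True
        if hops == max_hops:
            continue  # don't extend trust past the hop limit
        for nxt in vouches.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return False

vouches = {"alice": ["bob"], "bob": ["carol"], "carol": ["dave"]}
is_trusted(vouches, "alice", "carol")    # two hops away -> True
is_trusted(vouches, "alice", "mallory")  # no vouch path -> False
```

The hop limit is exactly where the bot-army problem bites: if bots patiently vouch for each other, a single careless human vouch can pull the whole cluster inside the trust radius.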

[-] Digit@lemmy.wtf 1 points 1 hour ago* (last edited 1 hour ago)

hard ID

Bots will get around that too.

[-] deadymouse@lemmy.world 2 points 17 hours ago

I think it’s ironic that in light of Discord’s announcement that they’ll be requiring hard ID (which can be gamed, and we’re all up in arms about), there’s a similar, real issue of who is and isn’t a bot that is really hard to solve without requiring hard ID.

Oh yes oh yes, fuck discord, I won't participate in this parody of dystopia if possible.

[-] ageedizzle@piefed.ca 1 points 16 hours ago

What do you mean by 'webs of trust'? Can you elaborate?

this post was submitted on 12 Feb 2026
24 points (90.0% liked)

Asklemmy

52971 readers
176 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding Lemmy usage or support: for that, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

~Icon~ ~by~ ~@Double_A@discuss.tchncs.de~

founded 6 years ago