this post was submitted on 14 Jan 2024
19 points (71.1% liked)
Gaming
I was very glad to read the last sentence. I agree fully. The easiest approach would be a report button that saves the last 60 seconds of voice, analyzes it with AI to check whether something illegal or harassing was said, and auto-kicks the person who said it.
That would not require more top-down systems.
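The mechanism described above (a rolling 60-second retention window that is only analyzed when someone files a report) could be sketched roughly like this. This is a hypothetical illustration, not any real game's API: the class and function names, and the `classify` callback standing in for the AI check, are all made up for the example.

```python
import time
from collections import deque

BUFFER_SECONDS = 60  # retention window described in the comment above


class RollingVoiceBuffer:
    """Keeps only the most recent window of audio chunks (hypothetical)."""

    def __init__(self, window=BUFFER_SECONDS):
        self.window = window
        self.chunks = deque()  # (timestamp, audio_chunk) pairs

    def push(self, chunk, now=None):
        now = time.time() if now is None else now
        self.chunks.append((now, chunk))
        self._evict(now)

    def _evict(self, now):
        # Drop anything older than the retention window, so nothing
        # outside the last `window` seconds is ever stored.
        while self.chunks and now - self.chunks[0][0] > self.window:
            self.chunks.popleft()

    def snapshot(self, now=None):
        now = time.time() if now is None else now
        self._evict(now)
        return [chunk for _, chunk in self.chunks]


def handle_report(buffer, classify, now=None):
    """On a report, analyze only the retained window; kick on a hit.

    `classify` stands in for the AI step (e.g. speech-to-text plus a
    policy model) and returns True if the clip violates the rules.
    """
    clip = buffer.snapshot(now)
    return "kick" if classify(clip) else "no_action"
```

The point of the design is that audio is never retained beyond the window and never inspected unless a report comes in, which is what distinguishes it from always-on monitoring.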
I personally lean more towards humans for moderation, since words alone don't convey the full intent and meaning. And this cuts both ways: benign words can be used to harass.
But of course, humans are expensive, and recordings of voice chat have privacy implications.
Generally, yes. But computers can handle this sort of thing very well at this point. Kicking someone for using the N-word doesn't require interpreting meaning. Just don't use it, even for educational purposes (inside a game chat, for example).
I don't think we live in the same reality. Over 30% of people in the US use voice assistants that constantly listen in on their conversations (that was just the first number I could find; I'm not from the US). Having a bot in a game voice chat store one minute of audio for one minute for reporting purposes is like 0.00001% of what is going wrong with privacy these days. Billions of people are being analyzed, manipulated and whatnot on a daily basis. A reporting tool is not even the same game, let alone in the same ballpark, in terms of privacy implications.
Yeah, AI to knock out the egregious stuff (N-bombs etc.) is perfectly reasonable. But there is still a lot of harassment that really needs a human to interpret. It's a balance.
The privacy I am thinking of is the legal side of things. Google/FB/Apple are huge companies with the resources to work through the different legal requirements for every state and country, and they can afford to just settle if anything goes wrong. A game studio cannot always do the same. As soon as you store a recording of a user's voice, even temporarily, it opens up a lot of legal risk. Developers/publishers should still do it imo, but I don't think it's something that can just be turned on without careful consideration.
Good thought. Thanks for bringing it up.
Yeah, that sounds totally reasonable and unintrusive, wtf. I don't want every word I speak in voice chat to be live-analyzed by AI to see if I committed a wrongthink.
Why not simply mute or kick someone if they're being an asshole? That has served me well in all my years using Discord or TeamSpeak.
Apart from what you're reading into my words: I said that if someone is harassing you, or talking about, let's say, the things they did with their daughter yesterday, you can report them and have a computer look into it instead of a human.
Whatever privileges you have in your own Discord, you can't kick just anyone everywhere. Normally you need privileges or a moderator to do it, and my idea was to use AI to analyze only the reported material.
I completely understand the sentiment of protecting children, but under that argument you can push the most dystopian, intrusive, overreaching legislation imaginable. It's the old balance of freedom versus safety: we can't have complete safety without giving up all freedom.
And I think constant AI-driven monitoring of everything people say in the general vicinity of a microphone is very dystopian, and that would be the eventual outcome of this.
I'm just gonna repeat myself, since this is the most common answer I get in these threads:
The vast majority of people are already being listened in on, analyzed and manipulated on a daily basis by far, far worse actors. Storing one minute of voice chat for one minute, accessible only to this hypothetical bot and only if someone reports them (with wrongful reporters facing consequences themselves), is not comparable to real privacy threats.
You don't need to repeat yourself (and you don't need to be this condescending either); I am well aware that this is already happening to some degree. That doesn't mean I have to happily concede the little privacy that is left.
You're again reading something into my words that I didn't say. Maybe try not to play the victim in every comment; it's abrasive.
It's not happening "to some degree", it's happening left, right and center. Denying that a computer would help with voice chat moderation does not help at all.
Good day.
Right back at ya, buddy. I'm not putting words in your mouth.
And no matter how many times you repeat it, my Discord call doesn't constitute a threat to public safety.