
Republicans Want You to Die (www.commondreams.org)
submitted 2 days ago (last edited 2 days ago) by return2ozma@lemmy.world to c/aboringdystopia@lemmy.world

“Since the public can’t see or hear what’s happening in federal court firsthand, we’re using cutting-edge AI tools to bring these important proceedings to life, based entirely on official transcripts,” said Law&Crime President Rachel Stockman. “This is a pivotal moment in both popular culture and justice, and our goal is to provide accurate, transparent access to what’s actually being said in that courtroom.”


Al Jazeera Arabic is reporting that Hassan Eslaih has been killed in an Israeli air raid on the burns department at the Nasser Medical Complex in Khan Younis.

Eslaih was wounded last month in an Israeli attack on a media tent outside the hospital. At least two people were also killed in that attack.

submitted 4 days ago (last edited 4 days ago) by monkeyman69@lemmynsfw.com to c/aboringdystopia@lemmy.world

A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn’t really seem like an ideal product. I don’t think that’s what the developers of these LLMs set out to make when they created them, either. However, I’ve seen this behavior to some extent in every LLM I’ve interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting to me that Walt Disney invented the Matterhorn (the actual mountain) for Disneyland. Now, that is roughly what people have been calling a “hallucination” in LLMs, but what pushes this particular case across the boundary into what I would call “con-behavior” is that the model would not admit it was wrong when confronted, and used confident language to try to convince me it was right. Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and I’m sure other developers) have been training their LLMs to be more “agreeable” and to acquiesce to the user more often. That doesn’t eliminate the con-behavior, however. I’d like to show you another example of it that is much more problematic.

submitted 1 week ago (last edited 1 week ago) by givesomefucks@lemmy.world to c/aboringdystopia@lemmy.world

https://github.com/LemmyNet/lemmy/issues/5613

There are two "full time" Lemmy developers, and one of them just added a recurring dialog box that pops up regardless of instance, asking for money to be sent to his and another lemmy.ml admin's personal accounts as a salary.

submitted 1 week ago (last edited 1 week ago) by IndustryStandard@lemmy.world to c/aboringdystopia@lemmy.world

A famous television producer in Israel has come under intense scrutiny following the uncovering of a series of inflammatory social media posts in which he called for a "Holocaust" against the people of Gaza.

Elad Barashi, who has worked in the Israeli entertainment industry for several years, sparked outrage after posting on X: "Good morning, let there be a Shoa (Holocaust) in Gaza."

In another post, he wrote, "I can't understand the people here in the State of Israel who don't want to fill Gaza with gas showers... or train cars... and finish this story! Let there be a Holocaust in Gaza."


cross-posted from: https://beehaw.org/post/19834534

archive.is link

Less than a year after marrying a man she had met at the beginning of the Covid-19 pandemic, Kat felt tension mounting between them. It was the second marriage for both after marriages of 15-plus years and having kids, and they had pledged to go into it “completely level-headedly,” Kat says, connecting on the need for “facts and rationality” in their domestic balance. But by 2022, her husband “was using AI to compose texts to me and analyze our relationship,” the 41-year-old mom and education nonprofit worker tells Rolling Stone. Previously, he had used AI models for an expensive coding camp that he had suddenly quit without explanation — then it seemed he was on his phone all the time, asking his AI bot “philosophical questions,” trying to train it “to help him get to ‘the truth,’” Kat recalls. His obsession steadily eroded their communication as a couple.

When Kat and her husband separated in August 2023, she entirely blocked him apart from email correspondence. She knew, however, that he was posting strange and troubling content on social media: People kept reaching out about it, asking if he was in the throes of mental crisis. She finally got him to meet her at a courthouse this past February, where he shared “a conspiracy theory about soap on our foods” but wouldn’t say more, as he felt he was being watched. They went to a Chipotle, where he demanded that she turn off her phone, again due to surveillance concerns. Kat’s ex told her that he’d “determined that statistically speaking, he is the luckiest man on Earth,” that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler,” and that he had learned of profound secrets “so mind-blowing I couldn’t even imagine them.” He was telling her all this, he explained, because although they were getting divorced, he still cared for her.

“In his mind, he’s an anomaly,” Kat says. “That in turn means he’s got to be here for some reason. He’s special and he can save the world.” After that disturbing lunch, she cut off contact with her ex. “The whole thing feels like Black Mirror,” she says. “He was always into sci-fi, and there are times I wondered if he’s viewing it through that lens.”

Kat was both “horrified” and “relieved” to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

What they all seemed to share was a complete disconnection from reality.


A Boring Dystopia

12150 readers

Pictures, videos, and articles showing just how boring it is to live in a dystopic society, or one with signs of becoming dystopic.

Rules (Subject to Change)

--Be a Decent Human Being

--Posting news articles: include the source name and exact title from article in your post title

--If a picture is just a screenshot of an article, link the article

--If a video's content isn't clear from its title, write a short summary so people know what it's about.

--Posts must have something to do with the topic

--Zero tolerance for Racism/Sexism/Ableism/etc.

--No NSFW content

--Abide by the rules of lemmy.world

founded 2 years ago