
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is both proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them with the same frequency, and because other instances may have less strict sign-up policies than Beehaw, which makes playing whack-a-mole more difficult.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally, that's why I love them), but do take the time to read them if you haven't. If you can't/won't, or just need a reminder, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to fellow humans.
submitted 1 hour ago by hperrin@lemmy.ca to c/technology@beehaw.org

A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn’t really seem like an ideal product. I don’t think that’s what the developers of these LLMs set out to make when they created them, either. However, I’ve seen this behavior to some extent in every LLM I’ve interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting to me that Walt Disney invented the Matterhorn (the actual mountain) for Disneyland.

Now, this is along the lines of what people have been calling “hallucinations” in LLMs, but the fact that it would not admit it was wrong when confronted, and used confident language to try to convince me it was right, is what pushes that particular case across the boundary into what I would call “con-behavior.” Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and I’m sure other developers) have been training their LLMs to be more “agreeable” and to acquiesce to the user more often. That doesn’t eliminate the con-behavior, though. I’d like to show you another example of this con-behavior that is much more problematic.

submitted 2 hours ago by alyaza@beehaw.org to c/technology@beehaw.org

It is likely that there will never be a site like 4chan again—which is, likely, a very good thing. But it had also essentially already succeeded at its core project: chewing up the world and spitting it back out in its own image. Everything—from X to Facebook to YouTube—now sort of feels like 4chan. Which makes you wonder why it even needed to still exist.

"The novelty of a website devoted to shock and gore, and the rebelliousness inherent in it, dies when your opinions become the official policy of the world's five or so richest people and the government of the United States," the Onion CEO and former extremism reporter Ben Collins tells WIRED. “Like any ostensibly nihilist cultural phenomenon, it inherently dies if that phenomenon itself becomes The Man.”

My first experience with the more toxic side of the site came several years after my LOLcat all-nighter, when I was in college. I was a big Tumblr user—all my friends were on there—and for about a year or so, our corner of the platform felt like an extension of the house parties we would throw. That cozy vibe came crashing down the summer going into my senior year, when I got doxed. Someone made a “hate blog” for me and posted my phone number on 4chan; it was one of the first times I felt the dark presence of an anonymous stranger’s digital ire.

They played a prank that was popular on the site at the time, writing in a thread that my phone number was for a GameStop store that had a copy of the ultra-rare video game Battletoads. I received no fewer than 250 phone calls over the next 48 hours asking if I had a copy of the game.


Collins, like me, closely followed 4chan's rise in the 2010s from internet backwater to unofficial propaganda organ of the Trump administration. As he sees it, once Elon Musk bought Twitter in 2022 there was really no point to 4chan anymore. Why hide behind anonymity if a billionaire lets you post the same kind of extremist content under your real name and even pays you for it?

4chan’s “user base just moved into a bigger ballpark and started immediately impacting American life and policy," Collins says. "Twitter became 4chan, then the 4chanified Twitter became the United States government. Its usefulness as an ammo dump in the culture war was diminished when they were saying things you would now hear every day on Twitter, then six months later out of the mouths of an administration official."

But understanding how 4chan went from the home of cat memes to a true internet bogeyman requires an understanding of how the site actually worked. Its features were often overlooked amid all the conversations about the site's political influence, but I'd argue they were equally, if not more, important.

submitted 2 hours ago by corbin@infosec.pub to c/technology@beehaw.org
submitted 1 day ago* (last edited 1 day ago) by alyaza@beehaw.org to c/technology@beehaw.org

archive.is link

Six hundred and forty-two people are watching when Emily tugs off her sleep mask to begin day No. 1,137 of broadcasting every hour of her life.

They watch as she draws on eyeliner and opens an energy drink for breakfast. They watch as she slumps behind a desk littered with rainbow confetti, balancing her phone on the jumbo bottle of Advil she uses for persistent migraines. They watch as she shuffles into the bathroom, the only corner of her apartment not on camera. A viewer types: “where is emily?” It’s the only quiet moment she’ll get all day.

On the live-streaming service Twitch, one of the world’s most popular platforms, Emily is a legendary figure. For three years, she has ceaselessly broadcast her life — every birthday and holiday, every sickness and sleepless night, almost all of it alone.

Her commitment has made her a model for success in the new internet economy, where authenticity and endurance are highly prized. It’s also made her a good amount of money: $5.99 a month from each of thousands of subscribers, plus donations and tips — minus Twitch’s 30-to-40 percent cut.

But to get there, Emily, who agreed to be interviewed on the condition that her last name be withheld due to concerns of harassment, has devoted herself to a solitary life of almost constant stimulation. For three years, she has taken no sick days, gone on no vacations, declined every wedding invitation, had no sex.

She has broadcast and self-narrated a thousand days of sleeping, driving and crying, lugging her camera backpack through the grocery store, talking through a screen to strangers she’ll never meet. Her goal is to buy a house and get married by the age of 30, but she’s 28 and says she’s too busy to have a boyfriend. Her last date was seven years ago.


Though some Twitch stars are millionaires, most scramble to get by, buffeted by the vagaries of audience attention. Emily’s paid-subscription count, which peaked last year at 22,000, has since slumped to around 6,000, dropping her base income to about $5,000 a month, according to estimates from the analytics firm Streams Charts.

She declined to share her total earnings, and Twitch discourages its “Partners” from disclosing the terms of their streaming contracts. “You can have the best month of your life on Twitch, and you can have the worst,” she said.

Sometimes Emily dreads waking up and clocking into the reality show that is her life. She knows staring at screens all night is unhealthy, and when she feels too depressed to stream, she’ll stay in bed for hours while her viewers watch.

But she worries that taking a break would be “career suicide,” as she called it. Some viewers already complain that she showers too long, sleeps in too late, doesn’t have enough fun. So many “are expecting more all the time,” she said. “I’m like: What more do you want?”


Anyone firing employees because they thought that AI would do their jobs in 2025 should be fired. It really doesn’t take much research to see that AI isn’t at the place where it’s replacing people – yet. And business managers – particularly in small and mid-sized companies – who think it is had better think again.

At best, generative AI platforms are providing a more enhanced version of search, so that instead of sifting through dozens of websites, lists and articles to figure out how to choose a great hotel in Costa Rica, fix a broken microwave oven or translate a phrase from Mandarin to English, we simply ask our chatbot a question and it provides the best answer it finds. These platforms are getting better and more accurate and are indeed useful tools for many of us.

But these chatbots are nowhere near replacing our employees.

It's somewhat akin to claiming that now that we have hammers, carpenters aren't needed.

submitted 2 days ago by solo@slrpnk.net to c/technology@beehaw.org
submitted 2 days ago by chobeat@lemmy.ml to c/technology@beehaw.org
submitted 5 days ago by Toes@ani.social to c/technology@beehaw.org
submitted 6 days ago by alyaza@beehaw.org to c/technology@beehaw.org

A recent tell-all book by former Facebook insider Sarah Wynn-Williams, titled "Careless People," is blowing the lid on the sheer depravity of the social media giant's targeting machine. Wynn-Williams worked at Facebook — which subsequently changed its name to Meta a few years back — from 2011 to 2017, eventually rising to the role of public policy director.

As early as 2017, Wynn-Williams writes, Facebook was exploring ways to expand its ad targeting abilities to thirteen-to-seventeen-year-olds across Facebook and Instagram — a decidedly vulnerable group, often in the throes of adolescent image and social crises.

Though Facebook's ad algorithms are notoriously opaque, in 2017 The Australian alleged that the company had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure."

The social media company likewise tracked when adolescent girls deleted selfies, "so it can serve a beauty ad to them at that moment," according to Wynn-Williams. Other examples of Facebook's ad lechery are said to include the targeting of young mothers based on their emotional state, as well as emotional indexes mapped to racial groups, like a "Hispanic and African American Feeling Fantastic Over-index."


This should give pause for thought to all those promoting Firefox forks, too.


Not to be confused with the University of Oregon, OSU set up its Open Source Lab in 2003. Since then, it's done a great deal to help multiple FOSS projects. As Linux.com reported in 2006, it gave critical help to Gentoo and Drupal, along with providing one of the first hosting sites for the fledgling Mozilla Foundation.

As the Drupal team reported, the OSU OSL was serving 10 TB of data per month for them – in 2012. Seven years later, LWN reported on a talk by Albertson at SCALE 17x, saying that the "role of the lab is to be a neutral hosting facility and to foster relationships between FOSS projects and companies."

submitted 6 days ago by alyaza@beehaw.org to c/technology@beehaw.org

archive.is link

Less than a year after marrying a man she had met at the beginning of the Covid-19 pandemic, Kat felt tension mounting between them. It was the second marriage for both after marriages of 15-plus years and having kids, and they had pledged to go into it “completely level-headedly,” Kat says, connecting on the need for “facts and rationality” in their domestic balance. But by 2022, her husband “was using AI to compose texts to me and analyze our relationship,” the 41-year-old mom and education nonprofit worker tells Rolling Stone. Previously, he had used AI models for an expensive coding camp that he had suddenly quit without explanation — then it seemed he was on his phone all the time, asking his AI bot “philosophical questions,” trying to train it “to help him get to ‘the truth,’” Kat recalls. His obsession steadily eroded their communication as a couple.

When Kat and her husband separated in August 2023, she entirely blocked him apart from email correspondence. She knew, however, that he was posting strange and troubling content on social media: People kept reaching out about it, asking if he was in the throes of mental crisis. She finally got him to meet her at a courthouse this past February, where he shared “a conspiracy theory about soap on our foods” but wouldn’t say more, as he felt he was being watched. They went to a Chipotle, where he demanded that she turn off her phone, again due to surveillance concerns. Kat’s ex told her that he’d “determined that statistically speaking, he is the luckiest man on Earth,” that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler,” and that he had learned of profound secrets “so mind-blowing I couldn’t even imagine them.” He was telling her all this, he explained, because although they were getting divorced, he still cared for her.

“In his mind, he’s an anomaly,” Kat says. “That in turn means he’s got to be here for some reason. He’s special and he can save the world.” After that disturbing lunch, she cut off contact with her ex. “The whole thing feels like Black Mirror,” she says. “He was always into sci-fi, and there are times I wondered if he’s viewing it through that lens.”

Kat was both “horrified” and “relieved” to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

What they all seemed to share was a complete disconnection from reality.


Scraps? Pretty sure they're just delaying the official announcement. Most of their operations are now run with money as the motivation.


Genuinely seems a decent product


And people still buy Apple products?


crossposted from: https://scribe.disroot.org/post/2656499

Archived link

Here is the original report by SentinelOne.

Cybersecurity company SentinelOne has revealed that a China-nexus threat cluster dubbed PurpleHaze conducted reconnaissance attempts against its infrastructure and some of its high-value customers.

"We first became aware of this threat cluster during a 2024 intrusion conducted against an organization previously providing hardware logistics services for SentinelOne employees," security researchers Tom Hegel, Aleksandar Milenkoski, and Jim Walter said in an analysis published Monday.

PurpleHaze is assessed to be a hacking crew with loose ties to another state-sponsored group known as APT15, which is also tracked as Flea, Nylon Typhoon (formerly Nickel), Playful Taurus, Royal APT, and Vixen Panda.

The adversarial collective has also been observed targeting an unnamed South Asian government-supporting entity in October 2024, employing an operational relay box (ORB) network and a Windows backdoor dubbed GoReShell.

...

submitted 1 week ago by alyaza@beehaw.org to c/technology@beehaw.org

The DOJ wants to bar Google from paying to be the default search engine in third-party browsers including Firefox, among a long list of other proposals including a forced sale of Google’s own Chrome browser and requiring it to syndicate search results to rivals. The court has already ruled that Google has an illegal monopoly in search, partly thanks to exclusionary deals that make it the default engine on browsers and phones, depriving rivals of places to distribute their search engines and scale up. But while Firefox — whose CFO is testifying as Google presents its defense — competes directly with Chrome, it warns that losing the lucrative default payments from Google could threaten its existence.

Firefox makes up about 90 percent of Mozilla’s revenue, according to Muhlheim, the finance chief for the organization’s for-profit arm — which in turn helps fund the nonprofit Mozilla Foundation. About 85 percent of that revenue comes from its deal with Google, he added.
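Taken together, those two figures imply how much of Mozilla's overall revenue hangs on a single contract. A quick back-of-the-envelope sketch, using only the percentages quoted in the testimony:

```python
# Back-of-the-envelope estimate from the figures quoted above:
# Firefox accounts for ~90% of Mozilla's revenue, and ~85% of
# Firefox's revenue comes from the Google search deal.
firefox_share_of_mozilla = 0.90
google_share_of_firefox = 0.85

# Implied share of Mozilla's TOTAL revenue tied to the Google deal
google_share_of_total = firefox_share_of_mozilla * google_share_of_firefox
print(f"{google_share_of_total:.1%}")  # prints "76.5%"
```

In other words, roughly three-quarters of Mozilla's revenue traces back to one deal, which is why Muhlheim frames losing it as a potential "downward spiral."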

Losing that revenue all at once would mean Mozilla would have to make “significant cuts across the company,” Muhlheim testified, and warned of a “downward spiral” that could happen if the company had to scale back product engineering investments in Firefox, making it less attractive to users. That kind of spiral, he said, could “put Firefox out of business.” That could also mean less money for nonprofit efforts like open source web tools and an assessment of how AI can help fight climate change.


Technology

38639 readers

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago