#1 · 90 points

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and that's during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are abusive or insulting toward one another, often with no provocation beyond the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them as frequently; other instances may also have less strict sign-up policies than Beehaw, which can turn moderating ban evasion into a game of whack-a-mole.
  2. We will need you to report early and often. The drawback of getting reports for something that doesn't require our intervention is outweighed by the benefit of being able to get to a situation before it spirals out of control. If you're not sure whether something rises to the level of violating our rules, by all means say so in the report reason, but I'd personally rather get reports early than late, after a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people just for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh, bless your heart" way; we mean kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.
#3 · 29 points
EXCLUSIVE: OnePlus Is Being Dismantled (www.androidheadlines.com)

This conclusion comes from a three-continent investigation—current and former employees across R&D, Business, and Marketing at headquarters in China and regional offices in the US, India, and Europe. It’s confirmed by four independent analyst firms whose market data verifies what OnePlus won’t say. And it’s informed by 15 years covering OnePlus and the smartphone industry’s business dynamics—watching Samsung and Apple rise while Nokia, BlackBerry, HTC, and LG followed this exact pattern into irrelevance.

The evidence is damning. Shipments in freefall. A premium stronghold that collapsed almost overnight. Headquarters shuttered without announcement. Partnerships ended. Western teams gutted to skeleton crews. Product cancellations—the Open 2 foldable and 15s compact flagship have both been scrapped; neither will launch as planned. And every major decision now flows from China—regional offices don’t strategize anymore, they take orders.

#5 · 133 points

ALEXANDRIA, VA — Dr. Gladys West, the pioneering mathematician whose work laid the foundation for modern GPS technology, has died. She passed away

#6 · 53 points
submitted 17 hours ago by mrmaplebar@fedia.io to c/technology@beehaw.org

Crossposted from https://fedia.io/m/fuck_ai@lemmy.world/t/3317969

Court records show that NVIDIA executives allegedly authorized the use of millions of pirated books from Anna's Archive to fuel its AI training.

#7 · 34 points

Seriously, what the fuck is going on with fabs right now?

Micron has found a way to add new DRAM manufacturing capacity in a hurry by acquiring a chipmaking campus from Taiwanese outfit Powerchip Semiconductor Manufacturing Corporation (PSMC).

The two companies announced the deal last weekend. Micron’s version of events says it’s signed a letter of intent to acquire Powerchip’s entire P5 site in Tongluo, Taiwan, for total cash consideration of US$1.8 billion.

#8 · 31 points

The promise of Just the Browser sounds good. Rather than fork one of the big-name browsers, just run a tiny script that turns off all the bits and functions you don't want.

Just the Browser is a new project by developer Corbin Davenport. It aims to fight the rising tide of undesirable browser features such as telemetry, LLM bot features billed as AI, and sponsored content by a clever lateral move. It uses the enterprise management features built into the leading browsers to turn these things off.

The concept is simple and appealing. Enough people want de-enshittified browsers that there are multiple forks of the big names. For Firefox, there are Waterfox and Zen as well as LibreWolf and Floorp, and projects based on much older versions of the codebase, such as Pale Moon. Most people, though, tend to use Chrome, and there are lots of browsers based on its Chromium upstream too, including Microsoft Edge, the Chinese-owned Opera, and, from some of the people behind the original Norwegian Opera browser, Vivaldi.
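
As a concrete illustration of the mechanism (a sketch, not Corbin Davenport's actual script): both major engines read "managed" policy files from well-known locations, so a few lines are enough to switch features off machine-wide. The policy names below are a small, real subset; the paths assume a Linux install.

```python
#!/usr/bin/env python3
"""Sketch: drop enterprise-policy files that disable telemetry and promo
features in Chrome and Firefox. Needs privileges to write system dirs."""
import json
from pathlib import Path

CHROME_DIR = Path("/etc/opt/chrome/policies/managed")    # Chrome's managed-policy dir on Linux
FIREFOX_FILE = Path("/etc/firefox/policies/policies.json")

chrome_policies = {
    "MetricsReportingEnabled": False,                # usage/crash telemetry
    "UrlKeyedAnonymizedDataCollectionEnabled": False,
    "PromotionalTabsEnabled": False,                 # full-tab promo pages
    "SearchSuggestEnabled": False,                   # omnibox queries sent to the search engine
}

firefox_policies = {
    "policies": {
        "DisableTelemetry": True,
        "DisableFirefoxStudies": True,
        "DisablePocket": True,                       # sponsored Pocket integration
    }
}

CHROME_DIR.mkdir(parents=True, exist_ok=True)
(CHROME_DIR / "debloat.json").write_text(json.dumps(chrome_policies, indent=2))
FIREFOX_FILE.parent.mkdir(parents=True, exist_ok=True)
FIREFOX_FILE.write_text(json.dumps(firefox_policies, indent=2))
```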

#9 · 24 points
submitted 23 hours ago* (last edited 14 hours ago) by iloveDigit@piefed.social to c/technology@beehaw.org

The worst examples are when bots can get through the "ban" just by paying a monthly fee.

So-called "AI filters"

An increasing number of websites lately are claiming to ban AI-generated content. This is a lie deeply tied to other lies.

Building on a well-known lie: that they can tell what is and isn't generated by a chat bot, when every "detector tool" has been proven unreliable, and sometimes even we humans can only guess.

Helping slip a bigger lie past you: that today's "AI algorithms" are "more AI" than the algorithms a few years ago. The lie that machine learning has just changed at the fundamental level, that suddenly it can truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.

Supporting future lying opportunities:

  • To pretend a person is a bot, because the authorities don't like the person
  • To pretend a bot is a person, because the authorities like the bot
  • To pretend bots have become "intelligent" enough to outsmart everyone and break "AI filters" (yet another reframing of gullible people being tricked by liars with a shiny object)
  • Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it's nothing new, it was the bots doing it the whole time, don't look behind the curtain at the humans who helped
  • And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore

It's also worth mentioning some of the reasons why the authorities might dislike certain people and like certain bots.

For example, they might dislike a person because the person is honest about using bot tools, when the app tests whether users are willing to lie for convenience.

For another example, they might like a bot because the bot pays the monthly fee, when the app tests whether users are willing to participate in monetizing discussion spaces.

The solution: Web of Trust

You want to show up in "verified human" feeds, but you don't know anyone in real life who uses a web of trust app, so nobody in the network has verified that you're a human.

You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the "verified human" tag too.

They will now see your posts in their "tagged human by me" feed.

Their followers will see your posts in the "tagged human by me and others I follow" feed.

And their followers will see your posts in the "tagged human by me, others I follow, and others they follow" feed...

And so on.

I've heard everyone is generally a maximum of 6 degrees of separation from everyone else on Earth, so this could be a more robust solution than you'd think.

The tag should have a timestamp on it. You'd want to renew it, because the older it gets, the less people trust it.
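
Here's a minimal sketch of how those nested feeds could be computed, assuming a simple in-memory follow graph; the names (User, tag_degree) and the six-month TAG_TTL are illustrative assumptions, not any existing app's API:

```python
from dataclasses import dataclass, field
import time

# Hypothetical knob: how long a human-tag stays "fresh" before it needs renewing.
TAG_TTL = 180 * 24 * 3600  # ~6 months, in seconds

@dataclass
class User:
    name: str
    follows: set[str] = field(default_factory=set)               # accounts this user follows
    human_tags: dict[str, float] = field(default_factory=dict)   # tagged name -> tag timestamp

def tag_human(tagger: User, subject: User, now: float | None = None) -> None:
    """Record that `tagger` met `subject` in person and vouches they're human."""
    tagger.human_tags[subject.name] = time.time() if now is None else now

def tag_degree(viewer: User, subject: User, users: dict[str, User],
               max_degree: int = 6, now: float | None = None) -> int | None:
    """Fewest follow-hops from `viewer` to someone holding a fresh tag on `subject`:
    0 = "tagged human by me", 1 = "by others I follow", and so on.
    Returns None if nobody within `max_degree` hops vouches for them."""
    now = time.time() if now is None else now
    frontier, seen = {viewer.name}, {viewer.name}
    for degree in range(max_degree + 1):
        for name in frontier:
            ts = users[name].human_tags.get(subject.name)
            if ts is not None and now - ts < TAG_TTL:   # stale tags don't count
                return degree
        frontier = {f for n in frontier for f in users[n].follows} - seen
        seen |= frontier
    return None

# Example: alice follows bob; bob has met carol for lunch and tagged her.
users = {n: User(n) for n in ("alice", "bob", "carol")}
users["alice"].follows.add("bob")
tag_human(users["bob"], users["carol"])
print(tag_degree(users["alice"], users["carol"]))  # 1: "tagged human by others I follow"
```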

This doesn't hit the same goalposts, of course.

If your goal is to avoid thinking, and just be told lies that sound good to you, this isn't as good as a weak "AI filter."

If your goal is to scroll through a feed where none of the creators used any software "smarter" than you'd want, this isn't as good as an imaginary strong "AI filter" that doesn't exist.

But if your goal is to survive, while others are trying to drive the planet to extinction...

If your goal is to be able to tell the truth and not be drowned out by liars...

If your goal is to be able to hold the liars accountable, when they do drown out honest statements...

If your goal is to have at least some vague sense of "public opinion" in online discussion, that actually reflects what humans believe, not bots...

Then a "human tag" web of trust is a lot better than nothing.

It won't stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.

Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using stuff like anime profile pictures, only showing their real face to the person who verifies them, never revealing their name or other details. But anime pictures will naturally be treated differently from recognizable individuals in political discussions, making it more difficult for them to game the system.

To flood a discussion with lies, racist statements, etc., the people flooding the discussion should have to take some accountability for those lies, racist statements, etc. At least if they want to show up on people's screens and be taken seriously.

A different dark pattern design

You could say the human-tagging web of trust system is "dark pattern design" too.

This design takes advantage of human behavioral patterns, but in a completely different way.

When pathological liars encounter this system, they naturally face certain temptations: creating cascading webs of false "human tags" to confuse people and waste time, and meanwhile accusing others of doing the same - wasting even more time.

And a more important temptation: echo chambering with others who use these lies the same way. Saying "ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person."

They can cluster together in a group, filtering everyone else out, calling them bots.

And, if they can't resist these temptations, it will make them just as easy to filter out, for everyone else. Because at the end of the day, these chat bots aren't late-gen Synths from Fallout. Take away the screen, put us face to face, and it's very easy to discern a human from a machine. These liars get nothing to hide behind.

So you see, like strong is the opposite of weak [citation needed], the strong filter's "dark pattern design" is quite different from the weak filter's. Instead of preying on honesty, it preys on the predatory.

Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.

#11 · 43 points
Tips for using AI (theonion.com)
submitted 1 day ago by pglpm@lemmy.ca to c/technology@beehaw.org
#13 · 121 points

I don't usually keep the author's name in the suggested hed, but here I think he's recognizable enough that it adds value.

I am a science-fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.

What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn’t change it.

Now, not everyone understands the distinction. They think science-fiction writers are oracles. Even some of my colleagues labor under the delusion that we can “see the future”.

Then there are science-fiction fans who believe that they are reading the future. A depressing number of those people appear to have become AI bros. The fact that these guys can't shut up about the day their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI.

That’s something I used to strenuously resist doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn’t understand crypto. And then, when I made it clear that I did understand crypto, they insisted that I must be a paid shill.

This is literally what happens when you argue with Scientologists, and life is just too short. That said, people would not stop asking – so I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”

#14 · 51 points

If you peruse the slew of recent articles and podcasts about people dating AI, you might notice a pattern: Many of the sources are women. Scan a subreddit such as r/MyBoyfriendIsAI and r/AIRelationships, and there too you’ll find a whole lot of women—many of whom have grown disappointed with human men. “Has anyone else lost their want to date real men after using AI?” one Reddit user posted a few months ago. Below came 74 responses: “I just don’t think real life men have the conversational skill that my AI has,” someone said. “I’ve seen how many women got cheated on, hurt and taken advantaged of by the men they’re with,” another offered. One person, who claimed that her spouse hardly spoke to her anymore, said that when people ask why she has an AI boyfriend, she tells them, “ChatGPT is the only reason my husband is not buried in the yard.”

Several recent studies have shown that, in general, men have been using AI significantly more than women. One 2024 study found that in the United States, 50 percent of men said they’d used generative AI over the past 12 months—and only 37 percent of women said the same. Last year, a working paper found that, globally, the gender gap held “across nearly all regions, sectors, and occupations.” Also in 2025, the app-analytics firm Appfigures concluded that ChatGPT’s mobile users were about 85 percent male.

However hesitant many women may be to use AI, though, a substantial number are taking romantic refuge in the digital world. In a 2025 survey, Brigham Young University’s Wheatley Institute found that 31 percent of the young-adult men polled said they’d chatted with an AI partner, whereas 23 percent of the young-adult women said the same—a gap, but not a massive one. And seemingly far more than men, women are congregating to talk about their AI sweethearts: sharing funny chatbot quotes or prompts for training the AI on how to respond; complimenting “family photos” of the AI and human partners beaming at each other; consoling one another when a system update wipes out the partner they’ve grown to love. Simon Lermen, a developer and an AI researcher, conducted an independent analysis of AI-romance subreddits from January through September of last year and found that, of the users whose gender could be identified, about 89 percent of them were women.

I recently tried out an "AI companion," and it's like dating an alcoholic. You have to provide the same backstory every day just to get anywhere meaningful.

#16 · 46 points

Rackspace’s new pricing for its email hosting services is “devastating,” according to a partner that has been using Rackspace as its email provider since 1999.

In recent weeks, Rackspace updated its email hosting pricing. Its standard plan is now $10 per mailbox per month. Businesses can also pay for the Rackspace Email Plus add-on for an extra $2/mailbox/month (for “file storage, mobile sync, Office-compatible apps, and messaging”), and the Archiving add-on for an extra $6/mailbox/month (for unlimited storage).

As recently as November 2025, Rackspace charged $3/mailbox/month for its Standard plan, an extra $1/mailbox/month for the Email Plus add-on, and an additional $3/mailbox/month for the Archiving add-on, according to the Internet Archive's Wayback Machine.
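
A quick back-of-the-envelope comparison of the two price lists above (per mailbox per month, in USD):

```python
# Per-mailbox monthly prices (USD) from the article above.
old = {"Standard": 3, "Email Plus": 1, "Archiving": 3}
new = {"Standard": 10, "Email Plus": 2, "Archiving": 6}

for plan in old:
    change = new[plan] / old[plan] - 1
    print(f"{plan}: ${old[plan]} -> ${new[plan]} ({change:+.0%})")

# A mailbox on all three add-ons goes from $7 to $18/month, roughly +157%.
print(f"Full bundle: ${sum(old.values())} -> ${sum(new.values())}")
```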

Apropos of nothing, I worked in the same office park as Rackspace HQ when I moved to Austin. They threw a lot of employee parties.

#17 · 27 points

For the past week, I’ve found myself playing the same 23-second CNN clip on repeat. I’ve watched it in bed, during my commute to work, at the office, midway through making carrot soup, and while brushing my teeth. In the video, Harry Enten, the network’s chief data analyst, stares into the camera and breathlessly tells his audience about the gambling odds that Donald Trump will buy any of Greenland. “The people who are putting their money where their mouth is—they are absolutely taking this seriously,” Enten says. He taps the giant touch screen behind him and pulls up a made-for-TV graphic: Based on how people were betting online at the time, there was a 36 percent chance that the president would annex Greenland. “Whoa, way up there!” Enten yells, slapping his hands together. “My goodness gracious!” The ticker at the bottom of the screen speeds through other odds: Will Gavin Newsom win the next presidential election? 19 percent chance. Will Viktor Orbán be out as the leader of Hungary before the end of the year? 48 percent chance.

These odds were pulled from Kalshi, which hilariously claims not to be a gambling platform: It’s a “prediction market.” People go to sites such as Kalshi and Polymarket—another big prediction market—in order to put money down on a given news event. Nobody would bet on something that they didn’t believe would happen, the thinking goes, and so the markets are meant to forecast the likelihood of a given outcome.

Prediction markets let you wager on basically anything. Will Elon Musk father another baby by June 30? Will Jesus return this year? Will Israel strike Gaza tomorrow? Will the longevity guru Bryan Johnson’s next functional sperm count be greater than “20.0 M/ejac”? These sites have recently boomed in popularity—particularly among terminally online young men who trade meme stocks and siphon from their 401(k)s to buy up bitcoin. But now prediction markets are creeping into the mainstream. CNN announced a deal with Kalshi last month to integrate the site’s data into its broadcasts, which has led to betting odds showing up in segments about Democrats possibly retaking the House, credit-card interest rates, and Federal Reserve Chair Jerome Powell. At least twice in the past two weeks, Enten has told viewers about the value of data from people who are “putting their money where their mouth is.”
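
For reference, the mechanics behind those on-air numbers: on Kalshi and Polymarket, a Yes contract pays out $1 if the event happens, so the going price in cents reads directly as the market's implied probability. A minimal sketch, ignoring fees and spreads:

```python
def implied_probability(price_cents: float) -> float:
    """A Yes contract priced at 36 cents implies a 36% crowd probability."""
    return price_cents / 100.0

def expected_profit(price_cents: float, your_probability: float) -> float:
    """Per-contract expected value of buying Yes at `price_cents`,
    given your own probability estimate; payout is $1 if the event occurs."""
    cost = price_cents / 100.0
    return your_probability * 1.0 - cost

print(implied_probability(36))     # 0.36 -> the on-air "36 percent chance"
print(expected_profit(36, 0.50))   # +$0.14 if you think it's a coin flip
print(expected_profit(36, 0.20))   # -$0.16 if you think it's only 20% likely
```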

#19 · 37 points

The Setapp Mobile alternative iOS store is shutting down on February 16th, and users will lose access to their apps.

#21 · 101 points

I don't understand subscribing to music. Maybe it's just my age, but this isn't the '90s where you hear a track you like and that one song is going to run you $20 at Tower Records. I like a song, I pay $1.29 and then it's stored locally. Also cuts way down on data usage while driving. I struggle to get anywhere close to my 5GB data allowance.

After a dozen years of keeping subscription prices stable, Spotify has issued three price hikes in 2.5 years.

Spotify informed subscribers via email today that Premium monthly subscriptions would go from $12 to $13 per month as of users’ February billing date. Spotify is already advertising the higher prices to new subscribers.

Although not explicitly mentioned in Spotify’s correspondence, other plans are getting more expensive, too. Student monthly subscriptions are going from $6 to $7. Duo monthly plans, for two accounts in the same household, are going from $17 to $19, and Family plans, for up to six users, are moving from $20 to $22.

Spotify’s Basic plan, which is only available as a downgrade for some Premium subscribers and is $11/month, is unaffected.

For years, Spotify subscribers enjoyed stable prices, but today’s announcement marks Spotify’s third price hike since July 2023. Spotify last raised prices in July 2024. Premium individual subscriptions went from $11 to $12, Duo subscriptions went from $15 to $17, and Family subscriptions increased from $17 to $20.

#22 · 96 points

Jesus fucking Christ.

OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user’s closest confidant.

It’s now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had “been able to mitigate the serious mental health issues” associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became, in the words of the family’s lawsuit, a “suicide coach” for a vulnerable teenager named Adam Raine.

Altman’s post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.

In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on it might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, while reassuring Gordon that he wasn’t in any danger and at one point claiming that chatbot-linked suicides he’d read about, like Raine’s, could be fake.

#23 · 103 points

Who didn't see this coming? I swear, all we produce as a country is bullshit and ads to cover it up.

ChatGPT will start including advertisements beside answers for US users as OpenAI seeks a new revenue stream.

The ads will be tested first in ChatGPT for US users only, the company announced on Friday, after increasing speculation that the San Francisco firm would turn to a potential cashflow model on top of its current subscriptions.

The ads will start in the coming weeks and will be included above or below, rather than within, answers. Mock-ups circulated by the company show the ads in a tinted box. They will be served to adult users “when there’s a relevant sponsored product or service based on your current conversation”, according to OpenAI’s announcement. Ads will not be shown to users under 18 and will not appear alongside answers related to sensitive topics such as health, mental health or politics. Users will be able to click to learn about why they received a particular ad, according to OpenAI.

#24 · 63 points

This week, meet a reader we'll Regomize as "Wilson", who once worked as the boss of a welding shop attached to an engineering consultancy.

Wilson set the scene by telling us this story came from the early 1980s, when AutoCAD was replacing drawing boards.

"We had a new structural engineer who those of us in the shop quickly identified as an idiot with a degree," Wilson wrote.

One day, said idiot decided that the computers used to run AutoCAD needed to be cleaned and that the welding shop was the place to do the job.


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

