
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons why I think this conversation is needed.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-busiest community I moderate (Politics, and that's during an election season whose last month included a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are abusive or insulting toward one another, often without any provocation beyond the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but because we aren't able to vet users from other instances, we don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which makes the resulting whack-a-mole harder to play.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefit of being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason; I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.

It's no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion's share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it's not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we're probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. "What we found is that we didn't need to put any algorithms in, we didn't need to massage the model," Törnberg told Ars. "It just came out of the baseline model, all of these dynamics."
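
For readers curious what "agent-based modeling with LLM personas" looks like in practice, here is a minimal sketch of the general technique. It is not the authors' code: the Agent fields, the llm_complete() stub, and the posting protocol are illustrative assumptions.

```python
# Minimal sketch of LLM-driven agent-based modeling of a social feed.
# NOT the paper's implementation: persona fields, the llm_complete()
# stub, and the action format are illustrative assumptions.
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str                      # e.g. interests, political leaning
    feed: list = field(default_factory=list)

def llm_complete(prompt: str) -> str:
    # Stand-in for a real chat-completion call; swap in your LLM client.
    return "POST: placeholder text written without a real model"

def simulation_step(agents: list, posts: list) -> None:
    """One round: every agent reads a plain, unranked feed and reacts."""
    for agent in agents:
        # Baseline model: no recommendation algorithm, just a random
        # handful of recent posts.
        agent.feed = random.sample(posts, k=min(5, len(posts)))
        prompt = (
            f"You are {agent.name}, {agent.persona}.\n"
            "Here are some recent posts:\n"
            + "\n".join(f"- {p['author']}: {p['text']}" for p in agent.feed)
            + "\nReply with either POST: <something new> or REPOST: <author>."
        )
        action = llm_complete(prompt)
        if action.startswith("POST:"):
            posts.append({"author": agent.name, "text": action[5:].strip()})
        # REPOST handling, follower graphs, and the metrics the paper tracks
        # (echo chambers, attention inequality) are omitted for brevity.

# Example usage with two toy personas and a seed post.
agents = [Agent("a", "a cheerful gadget enthusiast"),
          Agent("b", "a privacy-focused skeptic")]
posts = [{"author": "seed", "text": "New phone announced today."}]
simulation_step(agents, posts)
```

The point of the baseline setup is exactly what Törnberg describes: even with no ranking algorithm at all, the personas' own choices about what to post and repost are enough to produce the familiar dynamics.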


We already live in a world where pretty much every public act - online or in the real world - leaves a mark in a database somewhere. But how far back does that record extend? I recently learned that it goes back further than I'd ever seriously imagined.

On my recent tour of the United States (making it through immigration checks in record time, thanks to facial recognition), I caught that bug, the same one that brought the world to a halt half a decade ago. But I caught it early, so I knew that I could probably get some treatment.

That led to a quick trip to an 'Urgent Care' - the frontline medical center for most Americans. At the check-in counter, the nurse asked to see some ID, so I handed over my Australian driver's license. The nurse looked at the license and typed some of its info into a computer, then looked up at me and asked: "Are you the same Mark Pesce who lived at...?" and proceeded to recite an address I resided at more than half a century ago.

Dumbstruck, I said, "Yes...? And how did you know that? I haven't lived there in nearly 50 years. I've never been in here before - I've barely ever been in this town before. Where did that come from?"

"Oh," they replied. "We share our patient data records with Massachusetts General Hospital. It's probably from them?"

I remembered having a bit of minor surgery as an 11-year-old, conducted at that facility. 51 years ago. That's the only time I'd ever been a patient at Massachusetts General Hospital.

Good thing we're paying for all these data centers!


I've been familiar with the concept, but this is by far the best behind-the-scenes explanation I've seen.


The job market is queasy, and since you're reading this, you need to upgrade your CV. It's going to require some work to game the poorly trained AIs now doing so much of the heavy lifting. I know you don't want to, but it's best to think of this as dealing with a buggy lump of undocumented code, because, frankly, that's what stands between you and your next job.

A big reason for that bias in so many AIs is that they are trained on the way things are, not on the more diverse way we'd like them to be. And since they are just expensively trained statistics, your new CV needs to give them the words most commonly associated with the job you want, not merely the correct ones.

That's going to take some research and a rewrite to get your CV looking like the ones the AI was trained to match. You need to add synonyms and dependencies, because the AIs lack any model of how we actually do IT; they only see correlations between words. One would hope a network engineer knows how to configure routers, but if you just say Cisco, the AI won't give it as much weight as when you say both. Nor can you assume it will work out that you actually did anything to the router, database, or code, so you need to say explicitly what you did.

Fortunately, your CV does not have to be easy to read aloud, so there is mileage in including the longer versions of the names of the more relevant tools you've mastered. Awful phrases like "configured Fortinet FortiGate firewall" are helpful if you say them once, as is using all three F-words separately elsewhere. This works well for the old-fashioned, simple buzzword matching still widely used.
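
To make the "say both" advice concrete, here is a toy sketch of the kind of bag-of-words overlap scoring an old-fashioned screener might use. The job ad, the two CV snippets, and the scoring rule are invented for the example; this is not how any particular vendor's system works.

```python
# Toy bag-of-words keyword scoring: the job ad, CV snippets, and scoring
# rule are invented for illustration, not any real vendor's system.
import re
from collections import Counter

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def keyword_score(cv: str, job_ad: str) -> int:
    """Count job-ad words that also appear in the CV (with multiplicity)."""
    cv_words, ad_words = tokens(cv), tokens(job_ad)
    return sum(min(cv_words[w], count) for w, count in ad_words.items())

job_ad = "Network engineer to configure Cisco routers and FortiGate firewalls"

terse = "Cisco. Fortinet."
explicit = ("Configured Cisco routers and Fortinet FortiGate firewalls; "
            "wrote and tested router ACLs and firewall policies")

print(keyword_score(terse, job_ad))     # low: only 'cisco' overlaps
print(keyword_score(explicit, job_ad))  # higher: the screener sees more
                                        # overlap, though only the wording changed
```

The scorer has no idea what you did to the router; it only counts word overlap, which is why spelling everything out scores better.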

This is all so fucked.


Like many teachers at every level of education, I have spent the past two years trying to wrap my head around the question of generative AI in my English classroom. To my thinking, this is a question that ought to concern all people who like to read and write, not just teachers and their students. Today’s English students are tomorrow’s writers and readers of literature. If you enjoy thoughtful, consequential, human-generated writing—or hope for your own human writing to be read by a wide human audience—you should want young people to learn to read and write. College is not the only place where this can happen, of course, but large public universities like UVA, where I teach, are institutions that reliably turn tax dollars into new readers and writers, among other public services. I see it happen all the time.

There are valid reasons why college students in particular might prefer that AI do their writing for them: most students are overcommitted; college is expensive, so they need good grades for a good return on their investment; and AI is everywhere, including the post-college workforce. There are also reasons I consider less valid (detailed in a despairing essay that went viral recently), which amount to opportunistic laziness: if you can get away with using AI, why not?

It was this line of thinking that led me to conduct an experiment in my English classroom. I attempted the experiment in four sections of my class during the 2024-2025 academic year, with a total of 72 student writers. Rather than taking an "abstinence-only" approach to AI, I decided to put the central, existential question to them directly: was it still necessary or valuable to learn to write? The choice would be theirs. We would look at the evidence, and at the end of the semester, they would decide by vote whether AI could replace me.

What could go wrong?


In the weeks that followed, I had my students complete a series of writing assignments with and without AI, so that we could compare the results.

My students liked to hate on AI, and tended toward food-based metaphors in their critiques: AI prose was generally “flavorless” or “bland” compared to human writing. They began to notice its tendency to hallucinate quotes and sources, as well as its telltale signs, such as the weird prevalence of em-dashes, which my students never use, and sentences that always include exactly three examples. These tics quickly became running jokes, which made class fun: flexing their powers of discernment proved to be a form of entertainment. Without realizing it, my students had become close readers.

During these conversations, my students expressed views that reaffirmed their initial survey choices, finding that AI wasn’t great for first drafts, but potentially useful in the pre- or post-writing stages of brainstorming and editing. I don’t want to overplay the significance of an experiment with only 72 subjects, but my sense of the current AI discourse is that my students’ views reflect broader assumptions about when AI is and isn’t ethical or effective.

It’s increasingly uncontroversial to use AI to brainstorm, and to affirm that you are doing so: just last week, the hosts of the New York Times’s tech podcast spoke enthusiastically about using AI to brainstorm for the podcast itself, including coming up with interview questions and summarizing and analyzing long documents, though of course you have to double-check AI’s work. One host compares AI chatbots to “a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time.”


Wplace is a desktop app that takes its cue from Reddit's r/place, a sporadic experiment where users placed pixels on a small blank canvas every few minutes. On Wplace, anyone can sign up to add coloured pixels to a world map – each user able to place one every 30 seconds. By internet standards one pixel every 30 seconds is glacial, and that is part of what makes it so powerful. In just a few weeks since its launch, tens, if not hundreds, of thousands of drawings have appeared.

Scrolling to my corner of Scotland, I found portraits of beloved pets, anime favourites, pride flags, football crests. In Kyiv, a giant Hatsune Miku dominates the sprawl alongside a remembrance garden where a user asked others to leave hand-drawn flowers. Some pixels started movements. At one point there was just a single wooden ship flying a Brazilian flag off Portugal. Soon, a fleet appeared, a tongue-in-cheek invasion.

Across the diversity and chaos of the Wplace world map, nothing else feels like Gaza. In most cities, the art is made by those who live there. Palestinians do not have this opportunity: physical infrastructure is destroyed while people are murdered. Their voices, culture, and experiences are erased in real time. So, others show up for them, transforming the space on the map into a living mosaic of grief and care.

No algorithm, no leaders, but on Wplace, collective actions emerge organically. A movement stays visible only because people choose to maintain it, adding pixels, repairing any damage caused by others drawing over it. In that sense it works like any protest camp or memorial in the physical world: it survives only if people tend it. And here, those people are scattered across continents, bound not by geography but by a shared refusal to let what they care about disappear from view.


Using supervised fine-tuning (SFT) to introduce even a small amount of relevant data to the training set can often lead to strong improvements in this kind of "out of domain" model performance. But the researchers say that this kind of "patch" for various logical tasks "should not be mistaken for achieving true generalization. ... Relying on SFT to fix every [out of domain] failure is an unsustainable and reactive strategy that fails to address the core issue: the model’s lack of abstract reasoning capability."
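
For readers unfamiliar with the term, the "patch" described above just means continuing to train the model on labelled examples of the task it fails at. Below is a bare-bones sketch of that idea, assuming a Hugging Face-style causal language model and tokenizer; the example pairs and hyperparameters are placeholders, not details from the paper.

```python
# Bare-bones sketch of supervised fine-tuning (SFT) as an out-of-domain
# "patch". Assumes a Hugging Face-style causal LM and tokenizer; the
# example pairs and hyperparameters are placeholders, not from the paper.
import torch
from torch.utils.data import DataLoader

def sft_patch(model, tokenizer, ood_pairs, epochs=1, lr=1e-5, device="cpu"):
    """ood_pairs: list of (prompt, target) strings for the failing task."""
    model.train().to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(ood_pairs, batch_size=1, shuffle=True)
    for _ in range(epochs):
        for prompt, target in loader:
            # Standard causal-LM objective over prompt + target, so the
            # model learns to reproduce the target pattern.
            enc = tokenizer(prompt[0] + target[0], return_tensors="pt").to(device)
            loss = model(**enc, labels=enc["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```

The researchers' point is that this kind of targeted top-up fixes the specific failure it was trained on while leaving the underlying generalization gap in place.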

Rather than showing the capability for generalized logical inference, these chain-of-thought models are "a sophisticated form of structured pattern matching" that "degrades significantly" when pushed even slightly outside of its training distribution, the researchers write. Further, the ability of these models to generate "fluent nonsense" creates "a false aura of dependability" that does not stand up to a careful audit.

As such, the researchers warn heavily against "equating [chain-of-thought]-style output with human thinking" especially in "high-stakes domains like medicine, finance, or legal analysis." Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond "surface-level pattern recognition to exhibit deeper inferential competence," they write.


Honestly not sure what to say except INSANITY!!!!

Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him. Though extremely thirsty, the man was paranoid about accepting the water that the hospital offered him, telling doctors that he had begun distilling his own water at home and that he was on an extremely restrictive vegetarian diet. He did not mention the sodium bromide or the ChatGPT discussions.

AI Is A Money Trap (www.wheresyoured.at)

As always with Zitron, grab a beverage before settling in.


The article title is very clickbaity, but I found the actual discussion of why this will happen, and how it can be stopped, to be thoughtful.


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
