[-] theluddite@lemmy.ml 7 points 1 day ago

I agree that it's like Apple and Google (I don't know much about Steam) in that those are obvious price-gouging monopolists.

[-] theluddite@lemmy.ml 24 points 1 day ago* (last edited 1 day ago)

The difference is that, unlike craigslist, OnlyFans takes a massive 20% cut of all revenue. For comparison, Patreon takes a little more than 5%. Purely from a labor perspective, that's outrageous, so I do think that it's fair to demand that they at least do more to justify it, which ought to include protecting the people that actually do the work.

There's also what's to me the bigger problem: OnlyFans obviously didn't invent online sex work, but it did radically reshape it. They are responsible for mainstreaming this patreon-style, girl-next-door porn actress that people expect to interact with on a parasocial level. Those are features that OnlyFans purposefully put in to maximize their own profit, but they seem particularly ripe for the kind of nauseating small-scale abuse that the article discusses in depth. Suddenly, if an abusive partner wants to trap and control someone, there's a mainstream, streamlined path to making that profitable. Again, OnlyFans didn't create that, in the same way that Uber didn't create paying some random person with a car for a ride to the airport, but they did reshape it, systematize it, mainstream it, and profit handsomely off it. Craigslist was just a place to put classifieds, but OnlyFans is a platform that governs every detail of these relationships between creators and fans, down to the font of their DMs. If the way that they've built the platform makes this kind of abuse easier, that's a huge problem.

I agree with you that this article doesn't do a good job articulating any of this, though.

6
submitted 2 weeks ago by theluddite@lemmy.ml to c/technology@lemmy.ml

#HashtagActivism is a robust and thorough defense of its namesake practice. It argues that Twitter disintermediated public discourse, analyzing networks of user interactions in that context, but its analysis overlooks that Twitter is actually a heavy-handed intermediary. It imposes strict requirements on content, like a character limit, and controls who sees what and in what context. Reintroducing Twitter as the medium and reinterpreting the analysis exposes serious flaws. Similarly, their defense of hashtag activism relies almost exclusively on Twitter engagement data, but offers no theory of change stemming from that engagement. By reexamining their evidence, I argue that hashtag activism is not just ineffective, but its institutional dynamics are structurally conservative and inherently anti-democratic.

[-] theluddite@lemmy.ml 118 points 4 months ago* (last edited 4 months ago)

Investment giant Goldman Sachs published a research paper

Goldman Sachs researchers also say that

It's not a research paper; it's a report. They're not researchers; they're analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word "research" for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI "research" that's just them poking at their own product but dressed up in a science-lookin' paper leads to an avalanche of free press from lazy credulous morons gorging themselves on the hype. I've written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM does compared to doctors, only for the press to uncritically repeat (and embellish on) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would've noticed that it's actually junk science.

[-] theluddite@lemmy.ml 208 points 8 months ago* (last edited 8 months ago)

I cannot handle the fucking irony of that article being on nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that's right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge exorbitant rents to access it. It's all upside. Because they're the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.

[-] theluddite@lemmy.ml 100 points 9 months ago* (last edited 9 months ago)

This has been ramping up for years. The first time that I was asked to do "homework" for an interview was probably in 2014 or so. Since then, it's gone from "make a quick prototype" to assignments that clearly take several full work days. The last time I job hunted, I'd politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.

My experience with these interactions is not that they're looking for the most qualified applicants, but that they're filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. It's the kind of employees that they need to keep their internal corporate identity of being the good guys as tech goes from being universally beloved to generally reviled by society in general.

42

It's so slow that I had time to take my phone out and take this video after I typed all the letters. How is this even possible?

[-] theluddite@lemmy.ml 161 points 10 months ago

AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

The media needs to stop falling for this. This is a "pre-print," aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do "research" on their own product. It doesn't matter what the conclusion is, whether it's very cool and going to save us or very scary and we should all be afraid, so long as its attention grabbing.

If the media wants to report on it, fine, but don't legitimize it by pretending that it's "researchers" when it's the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.

[-] theluddite@lemmy.ml 170 points 10 months ago

You can tell that technology is advancing rapidly because now you can type short-form text on the internet and everybody can read it. Truly innovative stuff.

14
submitted 10 months ago by theluddite@lemmy.ml to c/technology@hexbear.net
51
submitted 10 months ago by theluddite@lemmy.ml to c/fuck_cars@lemmy.ml
[-] theluddite@lemmy.ml 149 points 11 months ago* (last edited 11 months ago)

Gen Zers are increasingly looking for ways to prioritize quality of life over financial achievement at all costs. The TikTok trend of “soft life”—and its financial counterpart “soft saving”—is a stark departure from their millennial predecessors’ financial habits, which were rooted in toxic hustle culture and the “Girlboss” era.

"Soft savings" is, to my understanding, the opposite of savings -- it's about investing resources into making yourself happy now versus forever growing your savings for some future good time. It sounds ridiculous because they're hitting on good critiques of capitailsm, but using the language of capitalism itself.

I think this really bolsters my argument that the self-diagnosis trend might be better understood as young people being critical of society, but their education system completely failed them. Since they lack access to critical, social, and political theory, they don't have a vocabulary to express their critiques, so they've used the things we have taught them, like the language of mental health, to sorta make up their own critical theory. When mental health experts are super concerned and talk about how all these teens' self-diagnoses are "wrong," they're missing the point. It's a new theory using existing building blocks.

[-] theluddite@lemmy.ml 133 points 11 months ago

This is bad science at a very fundamental level.

Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management.

I've written about basically this before, but what this study actually did is that the researchers collapsed an extremely complex human situation into generating some text, and then reinterpreted the LLM's generated text as the LLM having taken an action in the real world, which is a ridiculous thing to do, because we know how LLMs work. They have no will. They are not AIs. It doesn't obtain tips or act upon them -- it generates text based on previous text. That's it. There's no need to put a black box around it and treat it like it's human while at the same time condensing human tasks into a game that LLMs can play and then pretending like those two things can reasonably coexist as concepts.

To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

Part of being a good scientist is studying things that mean something. There's no formula for that. You can do a rigorous and very serious experiment figuring out how may cotton balls the average person can shove up their ass. As far as I know, you'd be the first person to study that, but it's a stupid thing to study.

21
35
submitted 1 year ago by theluddite@lemmy.ml to c/antiwork@lemmy.ml
16
submitted 1 year ago by theluddite@lemmy.ml to c/technology@lemmy.ml
[-] theluddite@lemmy.ml 104 points 1 year ago* (last edited 1 year ago)

I'm becoming increasingly skeptical of the "destroying our mental health" framework that we've become obsessed with as a society. "Mental health" is so all-encompassing in its breadth (It's basically our entire subjective experience with the world) but at the same time, it's actually quite limiting in the solutions it implies, as if there's specific ailments or exercises or medications.

We're miserable because our world is bad. The mental health crisis is probably better understood as all of us being sad as we collectively and simultaneously burn the world and fill it with trash, seemingly on purpose, and we're not even having fun. The mental health framework, by converting our anger, loneliness, grief, and sadness into medicalized pathologies, stops us from understanding these feelings as valid and actionable. It leads us to seek clinical or technical fixes, like whether we should limit smart phones or whatever.

Maybe smart phones are bad for our mental health, but I think reducing our entire experience with the world into mental health is the worst thing for our mental health.

[-] theluddite@lemmy.ml 140 points 1 year ago

"I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well"

For how long will we be forced to endure different versions of the same article?

The study said 86.66% of the generated software systems were "executed flawlessly."

Like I said yesterday, in a post celebrating how ChatGPT can do medical questions with less than 80% accuracy, that is trash. A company with absolute shit code still has virtually all of it "execute flawlessly." Whether or not code executes it not the bar by which we judge it.

Even if it were to hit 100%, which it does not, there's so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.

LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

97
submitted 1 year ago by theluddite@lemmy.ml to c/technology@lemmy.ml
12
submitted 1 year ago by theluddite@lemmy.ml to c/technology@lemmy.ml
7

Because technology is not progress, and progress is not necessarily technological. The community is currently almost entirely links to theluddite.org, but we welcome all relevant discussions.

Per FAQ, various link formats:

/c/luddite@lemmy.ml

!luddite@lemmy.ml

[-] theluddite@lemmy.ml 118 points 1 year ago* (last edited 1 year ago)

I know this is just a meme, but I'm going to take the opportunity to talk about something I think is super interesting. Physicists didn't build the bomb (edit: nor were they particularly responsible for its design).

David Kaiser, an MIT professor who is both a physicist and a historian (aka the coolest guy possible) has done extensive research on this, and his work is particularly interesting because he has the expertise in all the relevant fields do dig through the archives.

It’s been a long time since I’ve read him, but he concludes that the physics was widely known outside of secret government operations, and the fundamental challenges to building an atomic bomb are engineering challenges – things like refining uranium or whatever. In other words, knowing that atoms have energy inside them which will be released if it is split was widely known, and it’s a very, very, very long path engineering project from there to a bomb.

This cultural understanding that physicists working for the Manhattan project built the bomb is actually precisely because the engineering effort was so big and so difficult, but the physics was already so widely known internationally, that the government didn’t redact the physics part of the story. In other words, because people only read about physicists’ contributions to the bomb, and the government kept secret everything about the much larger engineering and manufacturing effort, we are left with this impression that a handful of basic scientists were the main, driving force in its creation.

2
submitted 1 year ago by theluddite@lemmy.ml to c/technology@lemmy.ml
1
submitted 1 year ago by theluddite@lemmy.ml to c/fuck_cars@lemmy.ml
view more: next ›

theluddite

joined 1 year ago