76
19

cross-posted from: https://feddit.uk/post/31342443

During Apple’s late-90s struggles with profitability, it made a few overtures toward licensing its software to other computer manufacturers, while at the same time trying to modernize its operating…

77
47

I know you can’t fully de-Google... at least not without losing access to parts of the internet, your work life, or the people you care about. But this checklist isn’t about purity; it's about giving regular people a way to push back without needing a tech background or an off-grid cabin.

It’s a week-by-week guide to easing out of the Google ecosystem:

  • Replacing Gmail, Maps, Drive, YouTube, etc.
  • Backing up data safely
  • Making privacy decisions that work for you
  • Optional “fallback” steps for folks who can’t go all-in

This was built as a shareable tool for anyone who's feeling digitally exhausted or sick of feeding surveillance capitalism. It’s free and printable—no signup, no tracking. It's a gift from an angry little dog/semi-retired journalist who cares.

PNGs are below, and the PDF (an automatic download, with working links) is here. I'll build a post around it one of these days, but I wanted to get it out to the world asap!

I'd love thoughts on turning this into a free course/challenge, maybe with a Signal group.

78
22

Since Linux does not seem to work on my laptop (I have spent hours trying to find a fully functioning distro; one doesn't exist yet for Snapdragon), I am curious what I can do to keep Windows from tracking all of my shit.

I know that's impossible, but minimizing it is desirable for me, at least. I don't want Copilot, I don't want advertisement pop-ups, and I fucking hate bloatware, as we all do.

So pls let me know what you do to minimize this shit: apps, things I should uninstall, settings I should change. If I have step-by-step instructions, I'm not afraid to use the command prompt, since I got a bit of experience with that while trying to install Linux.
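For example, from poking around I've gathered that some of this can be scripted. Here's a rough sketch (Python, using the built-in winreg module); the registry paths are ones debloat guides commonly cite, but I can't vouch for them, so please correct me if they're wrong:

```python
# Rough sketch: two registry tweaks commonly suggested in Windows debloat
# guides, written with Python's built-in winreg module. Verify each key
# yourself before running; the HKLM change needs an elevated prompt.
import winreg

def set_dword(root, path, name, value):
    # Create the key if it doesn't exist, then write a REG_DWORD value.
    with winreg.CreateKeyEx(root, path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

# Turn off the per-user advertising ID (no elevation needed).
set_dword(winreg.HKEY_CURRENT_USER,
          r"Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo",
          "Enabled", 0)

# Ask for minimal diagnostic data. Note: Home/Pro editions reportedly treat
# 0 as 1 (i.e. "Required" telemetry is the floor); only Enterprise honors 0.
set_dword(winreg.HKEY_LOCAL_MACHINE,
          r"SOFTWARE\Policies\Microsoft\Windows\DataCollection",
          "AllowTelemetry", 0)
```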

Thank you all in advance for helping this luddite

79
53

cross-posted from: https://rss.ponder.cat/post/209373

This is a combo piece: the first half, written by law student Elizabeth Grossman, gives her take on the FTC's recent moral panic about the internet, and the second part is additional commentary and notes from her professor, Jess Miers.

The FTC is fanning the flames of a moral panic. On June 4, 2025, the Commission held a workshop called “The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families.” I attended virtually from the second panel until the end of the day. Panelists discussed how the FTC could “help” parents, age verification as the “future,” and “what can be done outside of Washington DC.” But the workshop’s true goal was to reduce the Internet to only content approved by the Christian Right, regardless of the Constitution—or the citizens of the United States.

Claim #1: The FTC Should Prevent Minors From Using App Stores and Support Age Verification Laws

FTC panelists argued that because minors lack the legal capacity to contract, app stores must obtain parental consent before allowing them to create accounts or access services. That, in turn, requires age verification to determine who is eligible. This contractual framing isn’t new—but it attempts to sidestep a well-established constitutional concern: that mandatory age verification can burden access to lawful speech. In Brown v. Entertainment Merchants Association, the Supreme Court reaffirmed minors’ rights to access protected content, while Reno v. ACLU struck down ID requirements that chilled adult access to speech. Today, state-level attempts to mandate age verification across the Internet have repeatedly failed on First Amendment grounds.

But by recasting the issue as a matter of contract formation rather than speech, proponents seek to sidestep those constitutional questions. This is the same argument at the heart of Free Speech Coalition v. Paxton, a case the FTC appears to be watching closely. FTC staff repeatedly described a ruling in favor of Texas as a “good ruling,” while suggesting a decision siding with the Free Speech Coalition would run “against” the agency’s interests. The case challenges Texas’ H.B. 1181, which mandates age verification for adult content sites.

The FTC now insists that age verification isn’t about restricting access to content, but about ensuring platforms only contract with legal adults. But this rationale collapses under scrutiny. Minors can enter into contracts—the legal question is whether and when they can disaffirm them. The broader fallacy about minors’ contractual incapacity aside, courts have repeatedly rejected similar logic. Most recently, NetChoice v. Yost reaffirmed that age verification mandates can still violate the First Amendment, no matter how creatively they’re framed. In other words, there is no contract law exception to the First Amendment.

Claim #2: Chatbots Are Dangerous To Minors

The panel’s concerns over minors using chatbots to access adult content felt like a reboot of the violent video game panic. Jake Denton, Chief Technology Officer of the FTC, delivered an unsubstantiated tirade about an Elsa-themed chatbot allegedly engaging in sexual conversations with children, but offered no evidence to support the claim. In practice, inappropriate outputs from chatbots like those on Character.AI generally occur only when users—minors or adults—intentionally steer the conversation in that direction. Even then, the platform enforces clear usage policies and deploys guardrails to keep bots within fictional contexts and prevent unintended interactions.

Yes, teens will test boundaries, as they always have, but that doesn’t eliminate their constitutional rights. As the Supreme Court held in Brown v. Entertainment Merchants Association, minors have a protected right to access legal expressive content. Then, it was video games. Today, it’s chatbots.

FTC Commissioner Melissa Holyoak adopted a more cautious tone, suggesting further study before regulation. But even then, the agency failed to offer meaningful evidence that chatbots pose widespread or novel harm to justify sweeping intervention.

Claim #3: Pornography is Not Protected Speech

Several panelists called for pornography to be stripped of First Amendment protection and for online pornography providers to be denied Section 230 immunity. Joseph Kohm, of Family Policy Alliance, in particular, delivered a barrage of inflammatory claims, including: “No one can tell me with any seriousness that the Founders had pornography in mind […] those cases were wrongly decided. We can chip away […] it is harmful.” He added that “right-minded people have been looking for pushback against the influence of technology and pornography,” and went so far as to accuse unnamed “elites” of wanting children to access pornography, without offering a shred of evidence.

Of course, pornography predates the Constitution, and the Founders drafted the First Amendment to forbid the government from regulating speech, not just the speech it finds moral or comfortable. Courts have consistently held that pornography, including online adult content, is protected expression under the First Amendment. Whether panelists find that inconvenient or not, it is not the FTC’s role to re-litigate settled constitutional precedent, much less redraw the boundaries of our most fundamental rights.

During the final panel, Dr. Mehan said that pornography “is nothing to do with the glorious right of speech and we have to get the slowest of us, i.e. judges to see it as well.” He manages to disrespect a profession he is not a part of and misunderstand the law in one fell swoop. He also said “boys are lustful” because of pornography and “girls are vain” because of social media. Blatant misogyny aside, it’s absurd to blame pornography and social media for “lust” and “vanity”—after all, Shakespeare was writing about both long before XXX videos and Instagram—and even if it weren’t, teenage lust is not a problem for the government to solve.

Panelist Terry Schilling from the American Principles Project—known for his vehemently anti-LGBT positions—called for stripping Section 230 protections from pornography sites that fail to implement age verification. As discussed, the proposal not only contradicts longstanding First Amendment precedent but also reveals a fundamental misunderstanding of what Section 230 does and whom it protects.

Claim #4: The Internet Is Bad For Minors

FTC Commissioner Mark Meador compared Big Tech to Big Tobacco and said that letting children on the Internet is like dropping them off in the red-light district. “This is not what Congress envisioned,” he said, “when enacting Section 230.” Commissioner Melissa Holyoak similarly blamed social media for the rise in depression and anxiety diagnoses in minors. Yet, as numerous studies on social media and mental health have consistently demonstrated, this rise stems from a complex mix of factors—not social media.

Bizarrely, Dr. Mehan claimed that “Powerpoints are ruining the humanities.” And he compared online or text communication to home invasion: if his daughter were talking on the phone to a boy at 11 o’clock at night, he said, that boy would be invading his home.

This alarmist narrative ignores both the many benefits of Internet access for minors and the real harms of cutting them off. For young people, especially LGBTQ youth in unsupportive environments or those with niche interests, online spaces can be essential sources of community, affirmation, and safety. Just as importantly, not all parents share the same values or concerns as the government (or Dr. Mehan). It is the role of parents, not the government, to decide when and how their children engage with the Internet.

In the same vein, the Court in NetChoice v. Uthmeier rejected the idea that minors are just “mere people-in-waiting,” affirming their full participation in democracy as “citizens-in-training.” The ruling makes clear that social media access is a constitutional right, and attempts to strip minors of First Amendment protections are nothing more than censorship disguised as “safety.”

Conclusion

The rhetoric at this event mirrored the early pages of Project 2025, pushing for the outright criminalization of pornography and a fundamental rewrite of Section 230. Speakers wrapped their agenda in the familiar slogan of “protecting the kids,” bringing up big right-wing talking points like transgender youth in sports and harping on good old family values—all while advocating for sweeping government control over the Internet.

This movement is not about safety. It is about power. It seeks to dictate who can speak, what information is accessible, and whose identities are deemed acceptable online. The push for broad government oversight and censorship undercuts constitutional protections not just for adults, but for minors seeking autonomy in digital spaces. These policies could strip LGBTQ youth in restrictive households of the only communities where they feel safe, understood, and free to exist as themselves.

This campaign is insidious. If successful, it won’t just reshape the Internet. It will undermine free speech, strip digital anonymity and force every American to comply with a singular, state-approved version of “family values.”

The First Amendment exists to prevent exactly this kind of authoritarian overreach. The FTC should remember that.

Elizabeth Grossman is a first-year law student in the Intellectual Property program at the University of Akron School of Law, with the goal of working in tech policy.

Prof. Jess Miers’ Comments

Elizabeth’s summary makes it painfully clear: this wasn’t a serious workshop run by credible experts in technology law or policy. The title alone, “How Big Tech Firms Exploit Children and Hurt Families,” telegraphed the FTC’s predetermined stance and signaled a disinterest in genuine academic inquiry. More tellingly, the invocation of “families” serves as a dog whistle, gesturing toward the narrow, heteronormative ideals typically championed by the religious Right: white, patriarchal, Christian, and straight. The FTC may not say the quiet part out loud, but it doesn’t have to.

Worse still, most of the invited speakers weren’t experts in the topics they were pontificating on. At best, they’re activists. At worst, they’re ideologues—people with deeply partisan agendas who have no business advising a federal agency, let alone shaping national tech policy.

Just a few additional observations from me.

Chair Ferguson opened by claiming the Internet was a “fundamentally different place” 25 years ago, reminiscing about AOL Instant Messenger, Myspace Tom, and using a family computer his parents could monitor. The implication: the Internet was safer back then, and parents had more control. As someone who also grew up in that era, I can’t relate.

I, too, had a family computer in the living room and tech-savvy parents. It didn’t stop me from stumbling into adult AOL chatrooms, graphic porn, or violent videos, often unintentionally. I remember the pings of AIM just as vividly as the cyberbullying on Myspace and anonymous cruelty on Formspring. Parental controls were flimsy, easy to bypass, and rarely effective. My parents tried, but the tools of the time simply weren’t up to the task. The battle over my Internet use was constant, and my experience was hardly unique.

Still, even then, the Internet offered real value, especially for a queer kid who moved often and struggled to make “IRL” friends. But it also forced me to grow up fast in ways today’s youth are better shielded from. Parents now have far more effective tools to manage what their kids see and who they interact with. And online services have a robust toolbox for handling harmful content, not just because advertisers demand it, but thanks to Section 230, a uniquely forward-thinking law that encourages cleanup efforts. It built safety into the system before “trust and safety” became a buzzword. Contrary to Mark Meador’s baseless claims, that result was precisely its authors’ intent.

A more serious conversation would focus on what we’ve learned and how the FTC can build on that progress to support a safer Internet for everyone, rather than undermining it.

That aside, what baffles me most about these “protect the kids” conversations, which almost always turn out to be about restricting adults’ access to disfavored content, is how the supposed solution is more surveillance of children. The very services the FTC loves to criticize are being told to collect more sensitive information about minors—biometrics, ID verification, detailed behavioral tracking—to keep them “safe.” But as Eric Goldman and many other scholars, who were notably absent from the workshop, have extensively documented, there is no current method of age verification that doesn’t come at the expense of privacy, security, and anonymity for both youth and adults.

A discussion that ignores these documented harms, that fails to engage with the actual expert consensus around digital safety and privacy, is not a serious discussion about protecting kids.

Which is why I find it especially troubling that groups positioning themselves as privacy champions are treating this workshop as credible. In particular, IAPP’s suggestion that the FTC laid the groundwork for “improving” youth safety online is deeply disappointing. Even setting aside the numerous privacy issues associated with age verification, does the IAPP really believe that a digital ecosystem shaped by the ideological goals of these panelists will be an improvement for kids, especially those most in need of support? For queer youth, for kids in intolerant households, for those seeking information about reproductive health or gender-affirming care?

This workshop made the FTC’s agenda unmistakable. They’re not pursuing a safer Internet for kids. As Elizabeth said, the FTC is pushing a Christian nationalist vision of the web, built on censorship and surveillance, with children as the excuse and the collateral.

Just as the playbook commands.

Jess Miers is an Assistant Professor of Law at the University of Akron School of Law.


From Techdirt via this RSS feed

81
7

This is, of course, assuming that I actually manage to find a way to connect it to a computer, which isn't exactly straightforward, but I've still got a running plan. Connecting it to the Internet doesn't exactly seem easy, either.

87
21
submitted 2 weeks ago* (last edited 2 weeks ago) by moondog@hexbear.net to c/technology@hexbear.net

Hey gamers. I'm planning on hosting a website with some copyrighted content on it (mostly just archiving a bunch of art). What web hosting services do people here like to use? Ideally one that won't give a fuck that I host copyrighted content. I looked at Ultahost a bit and they seem fine, but Reddit users seem to say it sucks.

88
16

cross-posted from: https://lemmygrad.ml/post/8207089

Jan-nano is a model fine-tuned with DAPO on Qwen3-4B. Jan-nano comes with some unique capabilities:

  • It can perform deep research (with the right prompting)
  • It picks up relevant information effectively from search results
  • It uses tools efficiently

The model was evaluated using SimpleQA, a relatively straightforward benchmark to test whether the model can find and extract the right answers.

Jan-nano outperforms DeepSeek-671B on this metric by using an agentic, tool-driven approach. A 4B model obviously has its limitations, but it's interesting to see how far these things can be pushed. Jan-nano can serve as your self-hosted Perplexity alternative on a budget.

You can find the model at: https://huggingface.co/Menlo/Jan-nano

And a gguf is available at: https://huggingface.co/Menlo/Jan-nano-gguf
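If you want to try it locally, here's roughly what that looks like with llama-cpp-python (the gguf filename and settings below are placeholders; grab the actual file from the link above and check the model card for recommended sampling):

```python
# Minimal sketch: chatting with the Jan-nano gguf via llama-cpp-python.
# The filename and parameters here are placeholders; see the model card.
from llama_cpp import Llama

llm = Llama(
    model_path="jan-nano-4b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,  # context window; adjust for your RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "From these search results, which year was X founded? ..."}],
    max_tokens=512,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```

The deep-research behavior comes from wiring the model up to a search tool and the right prompting, per the post above; on its own it's just a chat model.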

93
10

This brings me to the debate over training AI and copyright. A lot of creative workers are justifiably angry and afraid that the AI companies want to destroy creative jobs. The CTO of OpenAI literally just said that onstage: "Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place":

Many of these workers are accordingly cheering on the entertainment industry's lawsuits over AI training. In these lawsuits, companies like the New York Times and Getty Images claim that the steps associated with training an AI model infringe copyright. This isn't a great copyright theory based on current copyright precedents, and if the suits succeed, they'll narrow fair use in ways that will impact all kinds of socially beneficial activities, like scraping the web to make the Internet Archive's Wayback Machine:

...

Here's the problem: establishing that AI training requires a copyright license will not stop AI from being used to erode the wages and working conditions of creative workers. The companies suing over AI training are also notorious exploiters of creative workers, union-busters and wage-stealers. They don't want to get rid of generative AI; they just want to get paid for the content used to create it. Their use-case for gen AI is the same as OpenAI's CTO's use-case: get rid of creative jobs and pay less for creative labor.

This isn't hypothetical. Remember last summer's actor strike? The sticking point was that the studios wanted to pay actors a single fee to scan their bodies and faces, and then use those scans instead of hiring those actors, forever, without ever paying them again. Does it matter to an actor whether the AI that replaces you at Warner, Sony, Universal, Disney or Paramount (yes, three of the Big Five studios are also the Big Three labels!) was made by OpenAI without paying the studios for the training material, or whether OpenAI paid a license fee that the studios kept?

This is true across the board. The Big Five publishers categorically refuse to include contractual language promising not to train an LLM with the books they acquire from writers. The game studios require all their voice actors to start every recording session with an on-tape assignment of the training rights to the session:

And now, with total predictability, Universal – the largest music company in the world – has announced that it will start training voice-clones with the music in its catalog:

It would be really great if someone would do a study on artists' views on generative models and copyright law that also took into account the kind of work they do and their class position. I say "the kind of work they do" because doujinshi circles have an interest in weakening intellectual property, unlike other freelance artists, although I'm not sure whether this is reflected in reality...

97
13

Our platform at work (hosted on GKE) is down, and we have some execs looking to get “official” confirmation before we announce anything. Thankfully, Google’s status pages insist everything’s fine while all the GKE admin pages and command-line tools are timing out.


Downdetector seems to suggest it’s an Internet-wide event, but everything’s working fine for me. Anyone have any insight?
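In the meantime, I'm trying to get something closer to "official" by polling Google Cloud's public incidents feed directly instead of trusting the dashboard summary. Rough sketch (the field names are my guesses at the feed's schema; adjust if it has changed):

```python
# Rough sketch: list open incidents from Google Cloud's public status feed.
# Field names ("begin", "end", "external_desc") are assumptions about the
# feed's schema and may change.
import requests

resp = requests.get("https://status.cloud.google.com/incidents.json", timeout=10)
resp.raise_for_status()

for incident in resp.json():
    if not incident.get("end"):  # no end timestamp means still open
        print(incident.get("begin"), incident.get("external_desc"))
```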

Also, Hexbear seems unaffected. Whoever does ops for the site, congrats on betting on the right horse, apparently.
