submitted 4 weeks ago by Vincent@feddit.nl to c/firefox@lemmy.ml
[-] chirospasm@lemmy.ml 18 points 4 weeks ago* (last edited 4 weeks ago)

Although this has been heavily downvoted, the author has a point: what do private, safe AI experiences in software mean for the common browser user? How does a company that was founded as an 'alternative' to a crummy default browser take the same approach? And for those who do and will use the tech indiscriminately, what's next for them?

Just as cookie/site separation eventually became a default setting in FF, or the ability to force a more secure private DNS, what could Mozilla consider on its own to prevent abuse, slop, LLM sycophancy / deception, undesired training on user data, tracking, and more? All the stuff we know is bad, but that nobody seems to be addressing all too well. The big AI companies certainly don't seem to be.

Rather than advocate for Not AI, how do we address it better for those who'll simply hit up one of these big AI company websites like they would social media or Amazon?

Is it anonymous tokenization systems that prevent a big AI company from knowing who a user is, a kind of 'privacy pass'? Is it text re-obfuscation at the browser level that garbles user input so that patterns can't emerge? Is it even a straightforward warning to users about data hygiene?
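
To make the 'privacy pass' idea slightly more concrete, here's a toy sketch of the blind-token trick underneath it. Purely illustrative: real Privacy Pass is built on VOPRFs rather than raw RSA, and everything below is hand-rolled for demonstration, not for actual use.

```python
# Toy Chaum-style blind signature: a server vouches for a token without
# ever seeing it, so redeeming the token later can't be linked to the user.
# Illustrative only -- real Privacy Pass uses VOPRFs, not raw RSA like this.
import hashlib
import secrets

from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
n = key.public_key().public_numbers().n
e = key.public_key().public_numbers().e
d = key.private_numbers().d

# Client: make a random token and blind its hash with a random factor r
# (a random r < n is coprime to n with overwhelming probability).
token = secrets.token_bytes(32)
h = int.from_bytes(hashlib.sha256(token).digest(), "big")
r = secrets.randbelow(n - 2) + 2
blinded = (h * pow(r, e, n)) % n

# Server: signs the blinded value; it learns nothing about `token`.
blind_sig = pow(blinded, d, n)

# Client: strips the blinding factor, leaving a valid signature on h.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone holding the public key can verify the unblinded signature.
assert pow(sig, e, n) == h
print("token signed without the signer ever seeing it")
```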

The above is silly, and speculative, and mostly for conversation. But: maybe there's something here for your everyday browser user. And maybe we ought to consider how we help them.

[-] Manjushri@piefed.social 36 points 4 weeks ago

Because AI is a massive waste of resources that has yet to prove (to me at least) that it can provide any real benefit to humanity that couldn't be better provided by other, less resource-intensive means. Advocating for 'common' AI use is absurd given the amount of energy and other resources consumed by that usage, especially in the face of a looming climate crisis being exacerbated by excesses like this.

LLMs may have valid uses (I doubt it, but they may). Using them to make memes and generate answers of questionable veracity to questions that would be better resolved with a Google search is just dumb.

[-] Routhinator@startrek.website 21 points 4 weeks ago* (last edited 4 weeks ago)

This. It burns too much electricity, wastes too much water, and is wrong 70% of the time. Even if it's private and offline, the problems with it go waaaaay beyond that.

[-] Tenderizer78@lemmy.ml 4 points 3 weeks ago

These concerns about water and electricity are overblown (at the global level; locally it's still a concern). Though with that said, if AI-generated video takes off, then it 100% will be a disaster.

[-] eestileib@lemmy.blahaj.zone 8 points 3 weeks ago

The datacenter industry has planned build-outs that will require 13-15% of the power in America, even after they add their own filthy new generation capacity.

That's insane.

[-] Tenderizer78@lemmy.ml 2 points 3 weeks ago

The American power grid is so screwed.

I'm still in total despair over Trump killing that offshore wind farm that was near completion. It's like he's going out of his way to crush our hope for the future.

[-] vinceman@lemmy.blahaj.zone 1 points 3 weeks ago

I thought you just said the concerns were overblown?

[-] Tenderizer78@lemmy.ml 1 points 3 weeks ago* (last edited 3 weeks ago)

The American power grid is screwed because of Trump and AI image/video, not chatbots.

[-] vinceman@lemmy.blahaj.zone 1 points 3 weeks ago

You'll have to explain how chatbots run by the same company are fundamentally different. Because imo you're completely wrong.

[-] Tenderizer78@lemmy.ml 1 points 3 weeks ago

[image: a graph comparing the resource cost of image generation vs. text generation]

[-] vinceman@lemmy.blahaj.zone 1 points 3 weeks ago

Source? Also, it seems like anything image and above is too much. Again, it's the same companies doing both, is it not?

[-] Tenderizer78@lemmy.ml 1 points 3 weeks ago* (last edited 3 weeks ago)

I got a random graph from the internet; it's mainly for illustrative purposes. It's a pretty widely known fact that image generation is more resource-intensive than text generation.

Here's a news article that mentions it for example: https://www.youtube.com/watch?v=_mJLTOs5i44

[-] vinceman@lemmy.blahaj.zone 1 points 3 weeks ago

I notice you keep ignoring my one point. Is it not the same companies doing image generation as well as all the rest?

[-] Tenderizer78@lemmy.ml 1 points 3 weeks ago

Because it's beside the point. As things currently stand, GenAI isn't as bad as people make it out to be, and unless video (specifically) takes off, it won't be.

[-] selokichtli@lemmy.ml 3 points 3 weeks ago

No, it is. That's because Alphabet, Meta, and every internet giant are pushing it to become mainstream, not because it currently is. The people who just Google something number in the billions; they use this crap several times a day, without any concern about environmental impacts, all while some of the enshittification of useful services is there to keep up with the AI losses. That's neither desirable nor responsible.

[-] vinceman@lemmy.blahaj.zone 1 points 3 weeks ago

Oh come on. UNLESS VIDEO TAKES OFF???? I don't have social media, but for fuck's sake, open your eyes to what is popular. It's taking off; there is no unless. And it absolutely is not beside the point: like the other guy said, every time I use Google it fucking uses AI, every single Google search.

[-] PostaL@lemmy.world 1 points 4 weeks ago

Yeah... Only that the Google search is another LLM hit...

[-] SmokeyDope@piefed.social 10 points 4 weeks ago* (last edited 4 weeks ago)

Hi, hope you don't mind me giving my two cents.

Local models are at their most useful in daily life when they scrape data from a reliable factual database or from the internet and then present and discuss that data with you through natural-language conversation.

Think about searching for things on the internet nowadays. Every search provider stuffs ads into the top results and intentionally obfuscates the links you're looking for, especially if it's a no-no term like pirate torrent sites.

Local LLMs can act as an advanced, generalized RSS reader: they can automatically fetch articles and sources, send STEM queries to the Wolfram Alpha LLM API and retrieve answers, fetch the weather directly from the OpenWeather API, retrieve definitions and meanings from a local dictionary, pull Wikipedia article pages from a local Kiwix server, and search arXiv directly for prior research. One of Claude's big selling points is its research-mode tool call, which scrapes hundreds of sites to collect up-to-date data on the thing you're researching and presents its findings in a neat, structured way with cited sources. It does in minutes what would traditionally take a human hours or days of manual googling.
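
For the homelab crowd, that kind of pipeline is mostly a few dozen lines of glue code. Here's a minimal, illustrative sketch: it assumes you have an OpenWeatherMap API key and a local model served by Ollama on its default port, and the model name and city are placeholders, not part of any real product.

```python
# Minimal tool-call pipeline sketch: fetch live data from a real API,
# then hand it to a local LLM to present in natural language.
import requests

OWM_KEY = "YOUR_API_KEY"  # placeholder: supply your own OpenWeatherMap key

def fetch_weather(city: str) -> dict:
    """Pull current conditions from the OpenWeatherMap REST API."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": OWM_KEY, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def ask_local_llm(prompt: str) -> str:
    """Send a prompt to a local model served by Ollama (default port 11434)."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

weather = fetch_weather("Amsterdam")
print(ask_local_llm(f"Summarize this weather report in one sentence: {weather}"))
```

The same pattern generalizes: swap the fetch function for a Kiwix, arXiv, or Wolfram Alpha query, and the model just narrates whatever structured data you hand it.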

There are genuine uses for LLMs if you're a nerdy computer-homelab type of person who is familiar with databases and data handling and can code up/integrate some basic API pipelines. The main challenge is selling these kinds of functions in an easy-to-understand, easy-to-use way to the tech-illiterate, who already think badly of LLMs and the like because of generative slop. A positive future for LLMs integrated into Firefox would be something trained to fetch from your favorite sources and sift out the crap based on your preferences/keywords. More sites would have APIs for direct scraping, and the key-adding process would be a one-click button.
