top 14 comments
[-] Mordikan@kbin.earth 3 points 17 hours ago

This is the definition of a zero effort post.

You don't want to put forth the effort to bug hunt, you want an AI agent to bug hunt for you. You don't want to learn to even setup the agent, you want other people to explain step by step how to do that for you.

I'm assuming you aren't even going to review it before it submits. Honestly though, how would you review it, given you don't know anything about the topic it's submitting on?

[-] artyom@piefed.social 38 points 1 day ago* (last edited 1 day ago)

Dear God, please don't. FF does not want your AI slop bug reports. You people are ruining open source.

[-] Hexarei@beehaw.org 3 points 1 day ago

Especially from a 7b model

[-] org@lemmy.org 15 points 1 day ago

Pretty sure if you have to ask how to do it, you’re not qualified to do it.

[-] utopiah@lemmy.ml 2 points 23 hours ago

This makes me genuinely curious, who thought that would be a good idea?

It feels like a lot of "contributions" to open source are suddenly fueled by AI hype. Is it some LinkedIn/TikTok "trick" being amplified, the idea that one will somehow land a very well paid job at a BigTech company by racking up contributions on popular projects?

Where does this trend actually come from?

Did anybody doing so ever bother checking contribution guidelines to see which tasks should actually be prioritized and if so with which tools?

This seems like a recurring pattern so it's not a random idea someone had.

[-] Hexarei@beehaw.org 3 points 1 day ago

run a local LLM like Claude!

Look inside

"Run ollama"

Ollama will almost always be slower than running vLLM or llama.cpp; nobody should be suggesting it for anything agentic. On most consumer hardware, the availability of llama.cpp's --cpu-moe flag alone is absurdly good and worth the effort of familiarizing yourself with llama.cpp instead of Ollama.
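
For reference, a llama-server invocation using that flag might look something like this. The model filename, GPU layer count, and port here are placeholders, not values from this thread, so check `llama-server --help` for your build:

```sh
# Sketch: serve a MoE model with llama.cpp's llama-server, keeping the
# sparse expert weights in system RAM (--cpu-moe) while offloading the
# rest of the layers to the GPU (-ngl). Paths/ports are placeholders.
llama-server \
  -m ./models/some-moe-model-Q4_K_M.gguf \
  -ngl 99 \
  --cpu-moe \
  --port 8080
```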

[-] ctrl_alt_esc@lemmy.ml 1 points 20 hours ago

I have used Ollama so far and it's indeed quite slow. Can you recommend a good guide for setting up llama.cpp (on Linux)? I have Ollama running in a Docker container with Open WebUI; that kind of setup would be ideal.

[-] Hexarei@beehaw.org 2 points 20 hours ago

I just run the llama-swap docker container with a config file mounted, set to listen for config changes so I don't have to restart it to add new models. I don't have a guide besides the README for llama-swap.
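
For anyone curious, the llama-swap config is just a YAML file mapping model names to the llama-server command that serves them. This is a sketch from memory, with placeholder model names and paths, so verify the exact syntax (including the ${PORT} macro) against the llama-swap README:

```yaml
# Sketch of a llama-swap config.yaml; names and paths are placeholders.
models:
  "qwen3-8b":
    cmd: >
      llama-server --port ${PORT}
      -m /models/qwen3-8b-Q4_K_M.gguf -ngl 99
  "llama3.1-8b":
    cmd: >
      llama-server --port ${PORT}
      -m /models/llama3.1-8b-Q4_K_M.gguf -ngl 99
```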

[-] TachyonTele@piefed.social 4 points 1 day ago

[-] hendrik@palaver.p3x.de 5 points 1 day ago* (last edited 1 day ago)

Did you forget the body text? Or is this some bug? It looks like a question here, but like an AI-fabricated tutorial in the original version of this cross-post.

[-] ZWQbpkzl@hexbear.net 2 points 1 day ago

You'll have to be more specific about how Anthropic is debugging Firefox. There are many possible setups. In general though, you'll need:

  • an LLM model file
  • some OpenAI-compatible server, e.g. LM Studio, llama.cpp, or Ollama
  • some sort of client to that server; there's a myriad of options here. OpenCode is the most like Claude, but there are also more modular, programmatic clients, which might suit a long-term task
  • the Firefox source code and/or an MCP server via some plugin
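
To make the "OpenAI-compatible server" point concrete: every server in that list accepts the same request shape at POST /v1/chat/completions, so any client is ultimately just JSON over HTTP. A minimal sketch of that request body (the model name and prompt are placeholders):

```python
import json

# Sketch: the JSON body that llama-server, LM Studio, and Ollama all
# accept at POST /v1/chat/completions. Model name is a placeholder.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("local-model", "Explain this Firefox crash log")
print(json.dumps(body, indent=2))
```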

You'll also need to know which models your hardware can run. "Smarter" models require more RAM. Models can run on both CPUs and GPUs, but they run way faster on the GPU, if they fit in VRAM.
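
A rough back-of-envelope for the "does it fit in VRAM" question: quantized weights take about parameters × bits-per-weight ÷ 8 bytes. This deliberately ignores KV cache and runtime overhead, so pad the estimate in practice:

```python
def approx_weight_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough size of the quantized weights alone, in GB.
    Ignores KV cache and runtime overhead (a deliberate simplification)."""
    return params_billions * bits_per_weight / 8

# A 7B model at ~4 bits per weight needs about 3.5 GB for weights:
print(round(approx_weight_size_gb(7, 4), 1))  # prints 3.5
```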

[-] etchinghillside@reddthat.com 1 points 1 day ago* (last edited 15 hours ago)

Props for putting something together and not burying it in a 20 minute YouTube video.

My mind initially went to OpenCode. I'm not familiar with lite-cc; any reason you opted for that? Is it just kinder on smaller local models?

[-] PumpkinDrama@reddthat.com 2 points 16 hours ago

I ended up using OpenCode, very useful thanks!

[-] hendrik@palaver.p3x.de 2 points 1 day ago* (last edited 1 day ago)

Judging by the GitHub repo, it's the very basic cousin, written (vibe-coded) in Python. It doesn't do planning or anything; it just prefaces your command with a system prompt telling your model it's a coding assistant, and gives it tool access to read and write files and execute commands.

And it seems no human uses it; there are no interactions like bug reports, PRs, or people who star the repo.

this post was submitted on 23 Mar 2026
-34 points (16.0% liked)

Linux
