[-] brucethemoose@lemmy.world 2 points 19 hours ago

They might be hit indirectly through China.

[-] brucethemoose@lemmy.world 11 points 20 hours ago* (last edited 20 hours ago)

Aside from fossil fuels, they mostly only trade with China, right? They're kinda "isolated" like Best Korea.

[-] brucethemoose@lemmy.world 1 points 20 hours ago* (last edited 20 hours ago)

Hear me out:

  • 6-core CCD. Clocked real slow, but with 3D cache, like the 5600X3D.

  • The slightly cut 32 CU GPU. Clocked real slow.

  • 32GB of that LPDDR5X. Or 24GB, if that config is possible?

  • …OLED? I feel like there’s a much better selection of tablet screens to borrow now. If not, use whatever SKU the switch does.

I can dream, can’t I? But modern laptop GPUs/CPUs are absurdly efficient if you underclock them a little.

[-] brucethemoose@lemmy.world 2 points 23 hours ago* (last edited 23 hours ago)

Heh, Ampere is from 2020. The Steam Deck's Van Gogh RDNA2 chip is only slightly newer.

If Valve ever stuffs Strix Halo into a more premium Steam Deck, it would be like an entire console-generation jump.

[-] brucethemoose@lemmy.world 1 points 23 hours ago* (last edited 23 hours ago)

They paused elections, and Zelensky has some extraordinary wartime authority. He could try to consolidate power after the war, but he almost certainly won’t.

They aren’t really a dictatorship, of course; I’m just trying to illustrate that dictators can and have chosen to give up power… And it’s unfortunate that more don’t.

[-] brucethemoose@lemmy.world 22 points 23 hours ago* (last edited 23 hours ago)

Yeah, 45% tariffs on Japan and 32% on Taiwan are going to make pricing brutal in the US. I wouldn’t be surprised if the $450 console and $80 Mario Kart prices effectively go up.

Was there anything about the SoC it uses? What architecture is it?

[-] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

"No dictator has ever chosen to give up power."

Taiwan's did: https://en.wikipedia.org/wiki/History_of_Taiwan_(1945%E2%80%93present)

It was actually neat. The "hybrid" between dictatorial state funding and a democratic business environment all but established TSMC.

Ukraine is sort of a dictatorship now, but Zelensky will probably give up power.

But, uh, we don't talk about them here I guess.

[-] brucethemoose@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

"lightly armored ground targets or drones"

In that case, can’t we do better?

Mount flak cannons.

Or for ground targets, full auto grenade launchers. Or mortars?

Better yet, let’s just mass-produce the AC-130, with more variety. And more flak cannons.

[-] brucethemoose@lemmy.world 6 points 3 days ago* (last edited 3 days ago)

They could just station them in Canada like the US stations theirs in NATO countries. You know, in case Russia invades Canada cough cough.

[-] brucethemoose@lemmy.world 17 points 4 days ago* (last edited 4 days ago)

What if the UK or France "lent" a few of theirs? They have plenty.

[-] brucethemoose@lemmy.world 2 points 5 days ago

...Is that not inciting violence?

-2

53% of Americans approve of Trump so far, according to a newly released CBS News/YouGov poll conducted Feb. 5 to 7, while 47% disapproved.

A large majority, 70%, said he was doing what he promised in the campaign, per the poll that was released on Sunday.

Yes, but: 66% said he was not focusing enough on lowering prices, a key campaign trail promise that propelled Trump to the White House.

44% of Republicans said Musk and DOGE should have "some" influence, while just 13% of Democrats agreed.

1
submitted 2 months ago* (last edited 2 months ago) by brucethemoose@lemmy.world to c/politics@lemmy.world

Here's the Meta formula:

  • Put a Trump friend on your board (Ultimate Fighting Championship CEO Dana White).
  • Promote a prominent Republican as your chief global affairs officer (Joel Kaplan, succeeding liberal-friendly Nick Clegg, president of global affairs).
  • Align your philosophy with Trump's on a big-ticket public issue (free speech over fact-checking).
  • Announce your philosophical change on Fox News, hoping Trump is watching. In this case, he was. "Meta, Facebook, I think they've come a long way," Trump said at a Mar-a-Lago news conference, adding of Kaplan's appearance on the "Fox and Friends" curvy couch: "The man was very impressive."
  • Take a big public stand on a favorite issue for Trump and MAGA (rolling back DEI programs).
  • Amplify that stand in an interview with Fox News Digital. (Kaplan again!)
  • Go on Joe Rogan's podcast and blast President Biden for censorship.
18
submitted 3 months ago* (last edited 3 months ago) by brucethemoose@lemmy.world to c/politics@lemmy.world

Reality check: Trump pledged to end the program in 2016.

Called it. When push comes to shove, Trump is always going to side with the ultra-rich.

11
submitted 3 months ago* (last edited 3 months ago) by brucethemoose@lemmy.world to c/politics@lemmy.world

Trump, who has remained silent thus far on the schism, faces a quickly deepening conflict between his richest and most powerful advisors on one hand, and the people who swept him to office on the other.

All this is stupid. But I know one thing:

Trump is a billionaire.

And I predict his followers are going to learn who he’ll side with when push comes to shove.

Also, Bannon’s take is interesting:

Bannon tells Axios he helped kick off the debate with a now-viral Gettr post earlier this month calling out a lack of support for the Black and Hispanic communities in Big Tech.

3
8
submitted 3 months ago* (last edited 3 months ago) by brucethemoose@lemmy.world to c/leopardsatemyface@lemmy.world

I think the title explains it all… Even right wing influencers can have their faces eaten. And Twitter views are literally their livelihood.

Trump's conspiracy-minded ally Laura Loomer, New York Young Republican Club president Gavin Wax and InfoWars host Owen Shroyer all said their verification badges disappeared after they criticized Musk's support for H1B visas, railed against Indian culture and attacked Ramaswamy, Musk's DOGE co-chair.

54
submitted 3 months ago* (last edited 3 months ago) by brucethemoose@lemmy.world to c/technology@lemmy.world

Maybe even 32GB if they use newer ICs.

More explanation (and my source of the tip): https://www.pcgamer.com/hardware/graphics-cards/shipping-document-suggests-that-a-24-gb-version-of-intels-arc-b580-graphics-card-could-be-heading-to-market-though-not-for-gaming/

Would be awesome if true, and if it's affordable. Screw Nvidia (and, inexplicably, AMD) for their VRAM gouging.

[-] brucethemoose@lemmy.world 330 points 5 months ago* (last edited 5 months ago)

As a fervent AI enthusiast, I disagree.

...I'd say it's 97% hype and marketing.

It's crazy how much FUD is flying around, and it legitimately buries good open research. It's also crazy what these giant corporations are explicitly saying they're going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a 'podcast bro' is spot on, and I'd add "manipulative vampire" to that.

Talk to any long-time resident of LocalLLaMA and similar "local" AI communities who actually digs into this stuff, and you'll find immense skepticism, not the crypto-like AI bros you find on LinkedIn, Twitter and such, who blot everything out.

323
submitted 5 months ago* (last edited 5 months ago) by brucethemoose@lemmy.world to c/selfhosted@lemmy.world

I see a lot of talk of Ollama here, which I personally don't like because:

  • The quantizations they use tend to be suboptimal

  • It abstracts away llama.cpp in a way that, frankly, leaves a lot of performance and quality on the table.

  • It abstracts away things that you should really know for hosting LLMs.

  • I don't like some things about the devs. I won't rant, but I especially don't like the hints that they're cooking up something commercial.

So, here's a quick guide to get away from Ollama.

  • First step is to pick your OS. Windows is fine, but if you're setting up something new, Linux is best. I favor CachyOS in particular for its great Python performance. If you use Windows, be sure to enable hardware-accelerated GPU scheduling and disable the shared memory fallback.

  • Ensure the latest version of CUDA (or ROCm, if using AMD) is installed. Linux is great for this, as many distros package them for you.

  • Install Python 3.11.x or 3.12.x (or at least whatever your distro supports), and git. If on Linux, also install your distro's "build tools" package.
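If it helps, here's a quick sanity check that the toolchain is actually in place before going further (assuming an Nvidia card; swap in rocminfo for AMD):

    nvidia-smi          # does the driver see your GPU?
    nvcc --version      # is the CUDA toolkit installed?
    python3 --version   # ideally 3.11.x or 3.12.x
    git --version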

Now for actually installing the runtime. There are a great number of inference engines supporting different quantizations; forgive the Reddit link, but see: https://old.reddit.com/r/LocalLLaMA/comments/1fg3jgr/a_large_table_of_inference_engines_and_supported/

As far as I am concerned, three matter to "home" hosters on consumer GPUs:

  • Exllama (and by extension TabbyAPI), as a very fast, very memory efficient "GPU only" runtime, supports AMD via ROCm and Nvidia via CUDA: https://github.com/theroyallab/tabbyAPI

  • Aphrodite Engine. While not strictly as VRAM efficient, it's much faster with parallel API calls, reasonably efficient at very short context, and supports just about every quantization under the sun, plus more exotic models than exllama. AMD/Nvidia only: https://github.com/PygmalionAI/Aphrodite-engine

  • This fork of kobold.cpp, which supports more fine-grained KV cache quantization (we will get to that). It supports CPU offloading and, I think, Apple Metal: https://github.com/Nexesenex/croco.cpp

Now, there are also reasons I don't like llama.cpp, but one of the big ones is that sometimes its model implementations have... quality-degrading issues, or odd bugs. Hence I would generally recommend TabbyAPI if you have enough VRAM to avoid offloading to CPU and can figure out how to set it up. So:
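For reference, the basic TabbyAPI setup looks roughly like this. It's a sketch from memory, so check the repo's README/wiki for the current steps:

    git clone https://github.com/theroyallab/tabbyAPI
    cd tabbyAPI
    cp config_sample.yml config.yml   # edit this: model dir, max_seq_len, cache_mode, etc.
    ./start.sh                        # sets up a venv and pulls dependencies on first run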

This can go wrong; if anyone gets stuck, I can help with that.

  • Next, figure out how much VRAM you have.

  • Figure out how much "context" you want, aka how much text the LLM can ingest. If a model has a context length of, say, "8K", that means it can support 8K tokens as input, or less than 8K words. Not all tokenizers are the same; some, like Qwen 2.5's, can fit nearly a word per token, while others are more in the ballpark of half a word per token or less.

  • Keep in mind that the actual context length of many models is an outright lie, see: https://github.com/hsiehjackson/RULER

  • Exllama has a feature called "KV cache quantization" that can dramatically shrink the VRAM the "context" of an LLM takes up. Unlike llama.cpp's, its Q4 cache is basically lossless, and on a model like Command-R, an 80K+ context can take up less than 4GB! It's essential to enable Q4 or Q6 cache to squeeze as much LLM as you can into your GPU.

  • With that in mind, you can search huggingface for your desired model. Since we are using tabbyAPI, we want to search for "exl2" quantizations: https://huggingface.co/models?sort=modified&search=exl2

  • There are all sorts of finetunes... and a lot of straight-up garbage. But I will post some general recommendations based on total VRAM:

  • 4GB: A very small quantization of Qwen 2.5 7B. Or maybe Llama 3B.

  • 6GB: IMO llama 3.1 8B is best here. There are many finetunes of this depending on what you want (horny chat, tool usage, math, whatever). For coding, I would recommend Qwen 7B coder instead: https://huggingface.co/models?sort=trending&search=qwen+7b+exl2

  • 8GB-12GB: Qwen 2.5 14B is king! Unlike its 7B counterpart, I find the 14B version of the model incredible for its size, and it will squeeze into this VRAM pool (albeit with very short context/tight quantization for the 8GB cards). I would recommend trying Arcee's new distillation in particular: https://huggingface.co/bartowski/SuperNova-Medius-exl2

  • 16GB: Mistral 22B, Mistral Coder 22B, and very tight quantizations of Qwen 2.5 32B are possible. Honorable mention goes to InternLM 2.5 20B, which is alright even at 128K context.

  • 20GB-24GB: Command-R 2024 35B is excellent for "in context" work, like asking questions about long documents, continuing long stories, anything involving working "with" the text you feed to an LLM rather than pulling from its internal knowledge pool. It's also quite good at longer contexts, out to 64K-80K more or less, all of which fits in 24GB. Otherwise, stick to Qwen 2.5 32B, which still has a very respectable 32K native context and a rather mediocre 64K "extended" context via YaRN: https://huggingface.co/DrNicefellow/Qwen2.5-32B-Instruct-4.25bpw-exl2

  • 32GB: same as 24GB, just with a higher-bpw quantization. But this is also the threshold where lower-bpw quantizations of Qwen 2.5 72B (at short context) start to make sense.

  • 48GB: Llama 3.1 70B (for longer context) or Qwen 2.5 72B (for 32K context or less)
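The rough rule of thumb behind all these recommendations: weight size in GB ≈ parameters (in billions) × bpw / 8, plus a few GB for the context cache. For example:

    awk 'BEGIN { print 72 * 4.25 / 8 }'   # ~38GB of weights: why 72B wants a 48GB pool
    awk 'BEGIN { print 32 * 4.25 / 8 }'   # ~17GB: why 32B fits a 24GB card with context to spare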

Again, browse huggingface and pick an exl2 quantization that will cleanly fill your VRAM pool plus the amount of context you want to specify in TabbyAPI. Many quantizers, such as bartowski, will list how much space they take up, but you can also just look at the file sizes.

  • Now... you have to download the model. Bartowski has instructions here, but I prefer to use this nifty standalone tool instead: https://github.com/bodaay/HuggingFaceModelDownloader (there's also a huggingface-cli sketch after this list).

  • Put it in your TabbyAPI models folder, and follow the documentation on the wiki.

  • There are a lot of options. Some to keep in mind are chunk_size (higher than 2048 will process long contexts faster but take up lots of VRAM; less will save a little VRAM), cache_mode (use Q4 for long context, Q6/Q8 for short context if you have room), max_seq_len (this is your context length), tensor_parallel (for faster inference with 2 identical GPUs), and max_batch_size (parallel processing if you have multiple users hitting the tabbyAPI server, but more VRAM usage).

  • Now... pick your frontend. The tabbyAPI wiki has a good compilation of community projects, but Open Web UI is very popular right now: https://github.com/open-webui/open-webui I personally use exui: https://github.com/turboderp/exui

  • And be careful with your sampling settings when using LLMs. Different models behave differently, but one of the most common mistakes people make is using "old" sampling parameters for new models. In general, keep temperature very low (<0.1, or even zero) and rep penalty low (1.01?) unless you need long, creative responses. If available in your UI, enable DRY sampling to tamp down repetition without "dumbing down" the model with too much temperature or repetition penalty. Always use a MinP of 0.05 or higher and disable other samplers. This is especially important for Chinese models like Qwen, as MinP cuts out "wrong language" answers from the response. (There's an example request with these settings after this list.)

  • Now, once this is all set up and running, I'd recommend throttling your GPU, as it simply doesn't need its full core speed to maximize its inference speed while generating. For my 3090, I use something like sudo nvidia-smi -pl 290, which throttles it down from 420W to 290W.
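Two quick sketches, as promised above. First, if you'd rather skip the standalone downloader, huggingface-cli can grab an exl2 quantization directly. Note that bartowski stores each bpw as a separate branch, and the branch names vary per repo, so check the model page first:

    pip install "huggingface_hub[cli]"
    huggingface-cli download bartowski/SuperNova-Medius-exl2 \
        --revision 6_5 --local-dir models/SuperNova-Medius-exl2-6_5

Second, here's what a request with the sampling settings above looks like against TabbyAPI's OpenAI-style completions endpoint. The port, auth header, and exact parameter names are from memory, so verify them against the tabbyAPI wiki:

    curl http://localhost:5000/v1/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $TABBY_API_KEY" \
      -d '{
        "prompt": "Summarize this document: ...",
        "max_tokens": 512,
        "temperature": 0.1,
        "min_p": 0.05,
        "repetition_penalty": 1.01
      }'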

Sorry for the wall of text! I can keep going, discussing kobold.cpp/llama.cpp, Aphrodite, exotic quantization and other niches like that if anyone is interested.

2
submitted 6 months ago* (last edited 6 months ago) by brucethemoose@lemmy.world to c/localllama@sh.itjust.works

https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e

Qwen 2.5 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B just came out, with some variants in some sizes just for math or coding, and base models too.

All Apache licensed, all 128K context, and the 128K seems legit (unlike Mistral).

And it's pretty sick, with a tokenizer that's more efficient than Mistral's or Cohere's, and benchmark scores even better than Llama 3.1 or Mistral at similar sizes, especially on newer metrics like MMLU-Pro and GPQA.

I am running the 32B locally, and it seems super smart!

As long as the benchmarks aren't straight up lies/trained, this is massive, and just made a whole bunch of models obsolete.

Get usable quants here:

GGUF: https://huggingface.co/bartowski?search_models=qwen2.5

EXL2: https://huggingface.co/models?sort=modified&search=exl2+qwen2.5

65
submitted 7 months ago* (last edited 7 months ago) by brucethemoose@lemmy.world to c/asklemmy@lemmy.world

Obviously there's not a lot of love for OpenAI and other corporate API generative AI here, but how does the community feel about self hosted models? Especially stuff like the Linux Foundation's Open Model Initiative?

I feel like a lot of people just don't know there are Apache/CC-BY-NC licensed "AI" they can run on sane desktops, right now, that are incredible. I'm thinking of the most recent Command-R, specifically. I can run it on one GPU, and it blows expensive API models away, and it's mine to use.

And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open source and legally licensed datasets, cheap training... and OpenAI and such want to shut down all of this because it breaks their monopoly, where they can just outspend everyone scaling, stealing data, and destroying the planet. And it's actually a threat to them.

Again, I feel like corporate social media vs. the fediverse is a good analogy, where one is kinda destroying the planet and the other, while still niche, problematic and a WIP, kills a lot of the downsides.

29

Senior U.S., Qatari, Egyptian and Israeli officials will meet on Thursday under intense pressure to reach a breakthrough on the Gaza hostage and ceasefire deal.

The heads of the Israeli security and intelligence services told Netanyahu at the meeting on Wednesday that time is running out to reach a deal and emphasized that delay and insistence on certain positions in the negotiations could cost the lives of hostages, a senior Israeli official said.

85
submitted 8 months ago by brucethemoose@lemmy.world to c/news@lemmy.world