[-] projectmoon@lemm.ee 1 points 3 days ago

OpenWebUI is connected to TabbyAPI's OpenAI endpoint. I will try reducing the temperature and see if that makes it more accurate.
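
Something along these lines is what I have in mind (just a rough sketch using the openai TypeScript SDK against a local OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders for my setup):

```typescript
import OpenAI from "openai";

// Placeholder URL/key/model for a local OpenAI-compatible endpoint (e.g. TabbyAPI).
const client = new OpenAI({
  baseURL: "http://localhost:5000/v1",
  apiKey: "local-placeholder-key",
});

async function ask(prompt: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: "Qwen2.5-14B-exl2", // placeholder model name
    messages: [{ role: "user", content: prompt }],
    // Lower temperature = less random sampling, hopefully fewer off-the-rails tokens.
    temperature: 0.3,
  });
  return completion.choices[0].message.content;
}

ask("Test prompt").then(console.log);
```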

[-] projectmoon@lemm.ee 1 points 3 days ago

Context was set to anywhere between 8k and 16k. It was responding in English properly, and then about halfway to three-quarters of the way through a response, it would start outputting tokens in either a foreign language (Russian/Chinese in the case of Qwen 2.5) or things that don't make sense (random code snippets, improperly formatted text). Sometimes the text was repeating as well. But I thought that might have been a template problem, because it seemed to be answering the question twice.

Otherwise, all settings are the defaults.

[-] projectmoon@lemm.ee 1 points 3 days ago

I tried it with both Qwen 14b and Llama 3.1. Both were exl2 quants produced by bartowski.

[-] projectmoon@lemm.ee 3 points 3 days ago

Perplexica works. It can understand ollama and custom OpenAI providers.

[-] projectmoon@lemm.ee 1 points 3 days ago

Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I'm using exl2 models off of HuggingFace, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?

[-] projectmoon@lemm.ee 48 points 3 weeks ago* (last edited 3 weeks ago)

They basically want free labor.

[-] projectmoon@lemm.ee 53 points 3 months ago

Depends on the continuity and who's writing it, but often yes. He was notably portrayed this way in the Justice League cartoon.

1

Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB VRAM. I can run 7-13b models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7b Q6 at around 3 tokens/second.

I want to get to a point where I can run Mixtral 8x7b at Q4 quant at an acceptable speed (5+ tokens/sec). Right now I can run the Mixtral Q3 quant at about 2 to 3 tokens per second. The Q4 quant takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.

What's the easiest/cheapest way to get my system to be able to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU?

As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. Not sure if ollama can split a model across GPUs from different vendors yet, but I know that capability is in llama.cpp now.

Thanks for any pointers!

[-] projectmoon@lemm.ee 41 points 8 months ago

The fork was originally created because upstream NewPipe elected not to include SponsorBlock functionality.

[-] projectmoon@lemm.ee 29 points 9 months ago

Depends on the language. There is no explicit typing in JavaScript, for example. That's why TypeScript was invented.
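
To give a quick, purely illustrative example of the difference: plain JavaScript has no parameter type annotations, so a call like add("2", 3) just produces "23" at runtime, whereas the annotated TypeScript version below rejects it at compile time.

```typescript
// TypeScript adds explicit types that plain JavaScript lacks.
function add(a: number, b: number): number {
  return a + b;
}

const ok = add(2, 3);   // 5
// add("2", 3);         // compile error: string is not assignable to number
```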

5
submitted 1 year ago by projectmoon@lemm.ee to c/meta@lemm.ee

Not sure if this has been asked before or not. I tried searching and couldn't find anything. I have an issue where any pictures from startrek.website do not show up on the homepage. It seems to only affect startrek.website. Going to the link directly loads the image just fine. Is this something wrong with lemm.ee?

[-] projectmoon@lemm.ee 53 points 1 year ago

I think "complex" refers to the various dark patterns used by Windows and Mac/iOS to scare and/or force users that know nothing of computers into using the default browsers.

[-] projectmoon@lemm.ee 29 points 1 year ago

You should probably add what license the icon will be under, if it's submitted to the project. Creative Commons? GPL?

[-] projectmoon@lemm.ee 28 points 1 year ago

Am I missing something? Or is the link to this tool not actually present in the post? I only see a screenshot.

