
Do I need industry-grade GPUs, or can I scrape by with decent TPS on a consumer-level GPU?

[-] Sylovik@lemmy.world 4 points 12 hours ago

In the case of LLMs, you should look at AirLLM. I suppose there are no convenient integrations with local chat tools yet, but an issue has already been opened at Ollama.
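For context, AirLLM's core trick is running a large model with limited VRAM by loading one transformer layer at a time instead of the whole model. Here is a toy sketch of that idea in plain Python (this is not AirLLM's actual API; the scalar "weights" stand in for real transformer blocks):

```python
# Toy illustration of layer-by-layer inference (the idea behind AirLLM):
# keep only one layer's weights resident at a time instead of the whole model.
# Each "layer" here is just a scalar multiplier standing in for a real block.

N_LAYERS = 4

# Pretend each layer's weights live on disk, keyed by layer index.
layer_store = {i: 0.5 + 0.1 * i for i in range(N_LAYERS)}

def load_layer(i):
    """Stand-in for reading one layer's weights from disk into VRAM."""
    return layer_store[i]

def run_layerwise(x):
    """Apply layers sequentially, holding only one in memory at a time."""
    for i in range(N_LAYERS):
        w = load_layer(i)   # load just this layer
        x = x * w           # apply it to the running activation
        del w               # free it before loading the next layer
    return x

print(run_layerwise(1.0))  # ≈ 0.5 * 0.6 * 0.7 * 0.8 = 0.168
```

The trade-off is exactly what the thread implies: you get large models on consumer VRAM, but disk I/O per layer makes tokens-per-second much slower than keeping the whole model resident.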

[-] muntedcrocodile@lemm.ee 1 points 12 hours ago

That looks like exactly the sort of thing I want. Is there any existing solution to get it to behave like an Ollama instance? (I have a bunch of services pointed at an Ollama instance running in Docker.)

[-] Sylovik@lemmy.world 2 points 4 hours ago

You may try Harbor. The description claims to provide an OpenAI-compatible API.
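If the endpoint really is OpenAI-compatible, then services that currently talk to Ollama's OpenAI-style API should only need the base URL changed. A minimal sketch of building such a request with the standard library — the base URL and model name are placeholders, not values from Harbor's docs:

```python
import json

# Sketch of calling an OpenAI-compatible /chat/completions endpoint, such as
# the one Harbor's description claims to expose. BASE_URL and MODEL are
# assumptions for illustration; check your own deployment for real values.

BASE_URL = "http://localhost:11434/v1"  # Ollama's default port, as an example
MODEL = "llama3"                        # placeholder model name

def build_chat_request(prompt):
    """Build the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("Hello!")
print(json.dumps(body, indent=2))

# To actually send it (requires a running server):
#   import urllib.request
#   req = urllib.request.Request(
#       f"{BASE_URL}/chat/completions",
#       data=json.dumps(body).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```

Because the request shape is the same, pointing existing Ollama-facing services at a different OpenAI-compatible backend is usually just a URL (and possibly API-key) change.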

this post was submitted on 10 Jan 2025
11 points (92.3% liked)

LocalLLaMA


Community for discussing LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.
