[-] Captain_Stupid@lemmy.world 10 points 5 days ago

But I thought the jar always breaks...

[-] Captain_Stupid@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

The smallest models that I run on my PC take about 6-8 GB of VRAM and would be very slow if I ran them purely on my CPU. So it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.

If you still want to use self-hosted AI from your phone, self-host the model on your PC:

  • Install Ollama and OpenWebUI in Docker containers (guides can be found on the internet, and there is a rough sketch after this list)
  • Make sure they use your GPU (some AMD cards require an HSA override flag to work)
  • Make sure the Docker containers are secure (blocking the port for communication from outside your network should work fine as long as you only use the AI model at home)
  • Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM and Phi-4 if you have more VRAM or enough RAM)
  • Type the IP address and port into the browser on your phone.
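
A minimal sketch of the first three steps, assuming an AMD card with ROCm and Docker already installed. The HSA_OVERRIDE_GFX_VERSION value is just an example and depends on your specific card; NVIDIA setups use different flags and a different image.

    # a shared Docker network so OpenWebUI can reach Ollama by container name,
    # without publishing the Ollama port to the rest of your network at all
    docker network create ai

    # Ollama with ROCm; the HSA override is only needed on some AMD cards
    # and the right value is card-specific (10.3.0 is only an example)
    docker run -d --name ollama --network ai \
      --device /dev/kfd --device /dev/dri \
      -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
      -v ollama:/root/.ollama \
      ollama/ollama:rocm

    # OpenWebUI on port 3000, reachable from devices inside your network;
    # as long as you don't forward this port on your router, it stays local
    docker run -d --name open-webui --network ai \
      -e OLLAMA_BASE_URL=http://ollama:11434 \
      -v open-webui:/app/backend/data \
      -p 3000:8080 \
      ghcr.io/open-webui/open-webui:main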

You can now use self-hosted AI from your phone as long as it can reach your PC over the network.
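
A rough example of the last two steps; the model tags and the LAN address are placeholders, so swap in whatever Ollama lists and whatever IP your PC actually has:

    # pull an open-weight model into the Ollama container
    docker exec -it ollama ollama pull llama3.1:8b   # fits in roughly 8 GB of VRAM
    docker exec -it ollama ollama pull phi4          # larger, needs more VRAM or RAM

    # then open OpenWebUI in the phone's browser at the PC's LAN address, e.g.
    #   http://192.168.1.50:3000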

[-] Captain_Stupid@lemmy.world 4 points 1 week ago

If you use AI for therapy, at least self-host it and keep in mind that its goal is not to help you but to have a conversation that satisfies you. You are basically talking to a yes-man.

Ollama with OpenWebUI is relatively easy to install, and you can even use something like edge-tts to give it a voice.
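
For the voice part, a small sketch of the edge-tts command line, assuming you just want to hear a sample before pointing OpenWebUI's audio settings at a TTS engine (the voice name is only an example):

    # install and try edge-tts on its own first
    pip install edge-tts

    # list the available voices, then generate a sample clip
    edge-tts --list-voices
    edge-tts --voice en-US-AriaNeural \
      --text "Hello from your self-hosted assistant" \
      --write-media hello.mp3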

[-] Captain_Stupid@lemmy.world 1 points 2 weeks ago

Inflatable buttplugs to lower your buoyancy, this will allow you to escape to France.

[-] Captain_Stupid@lemmy.world 3 points 1 month ago

What did the little one do to deserve to go straight to hell?
