submitted 4 months ago* (last edited 4 months ago) by Fingerthief@infosec.pub to c/selfhosted@lemmy.world

I've been building MinimalChat for a while now, and based on the feedback I've received, it's in a pretty decent place for general use. I figured I'd share it here for anyone who might be interested!

Quick Features Overview:

  • Mobile PWA Support: Install the site like a normal app on any device.
  • Any OpenAI-Formatted API Support: Works with LM Studio, OpenRouter, and any other endpoint that follows the OpenAI response format.
  • Local Storage: All data is stored locally in the browser, so setup is minimal. With Docker, you just set a port and go (see the sample run command after the wiki link below).
  • Experimental Conversational Mode (GPT models only, for now)
  • Basic File Upload and Storage Support: Files are stored locally in the browser.
  • Vision Support with Maintained Context
  • Regen/Edit Previous User Messages
  • Swap Models Anytime: Maintain conversational context while switching models.
  • Set/Save System Prompts: Set a system prompt for the conversation; saved prompts go into a list so you can switch between them easily.

The idea is to make it essentially foolproof to deploy and set up while staying full-featured and aesthetically pleasing. No additional databases or servers are needed; everything is contained and managed locally inside the web app itself.

It's another chat client in a sea of clients, but in my opinion it's unique in its own ways. Enjoy! Feedback is always appreciated!

Self-Hosting Wiki Section: https://github.com/fingerthief/minimal-chat/wiki/Self-Hosting-With-Docker
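
For a concrete starting point, a single-container launch might look like the sketch below. The image name, tag, and internal port are assumptions for illustration; the wiki above has the actual values.

```
# Hypothetical example; image name and container port are assumptions.
docker run -d \
  --name minimal-chat \
  --restart unless-stopped \
  -p 3000:8080 \
  fingerthief/minimal-chat:latest
# Then browse to http://localhost:3000 and start chatting.
```

Since all state lives in the browser's local storage, the container itself is stateless: no volumes or databases to manage.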

[-] Fingerthief@infosec.pub 15 points 4 months ago* (last edited 4 months ago)

Local models are indeed already supported! In fact, any API (local or otherwise) that uses the OpenAI response format (the de facto standard) will work.

So you can use something like LM Studio to host a model locally and connect to it via the local API it spins up.
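
As a rough sketch, the request shape looks like this. The endpoint assumes LM Studio's default local server on port 1234, and the model name is a placeholder; other OpenAI-formatted providers just need a different base URL and API key.

```typescript
// Minimal sketch: a chat completion request against an OpenAI-formatted API.
// The base URL assumes LM Studio's default local server; adjust it (and add
// an Authorization header) for OpenRouter or any other compatible endpoint.
async function chat(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; LM Studio serves the loaded model
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  // OpenAI response format: first choice holds the assistant's reply.
  return data.choices[0].message.content;
}

chat("Hello!").then(console.log);
```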

If you want to get crazy... fully local browser models are also supported, currently in Chrome and Edge. The app downloads the selected model in full and runs it on your GPU via WebGPU, letting you chat entirely in the browser. It's more experimental and takes real hardware power, since you're fully hosting a model in the browser itself.
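
On the browser-support point: whether this mode is available comes down to WebGPU. A quick feature check might look like the following (a generic sketch, not MinimalChat's actual code; in TypeScript it needs the @webgpu/types definitions):

```typescript
// Generic sketch: detect WebGPU before offering an in-browser model.
// navigator.gpu is the standard WebGPU entry point and is currently
// enabled by default only in Chromium-based browsers (Chrome, Edge).
async function webGpuAvailable(): Promise<boolean> {
  if (!("gpu" in navigator)) return false; // API not exposed at all
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null; // null means no usable GPU adapter
}

webGpuAvailable().then((ok) =>
  console.log(ok ? "WebGPU ready for local models" : "WebGPU unavailable"),
);
```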
