[-] L_Acacia@lemmy.one 3 points 2 days ago

Is there a way to download content from the community workshop using Steam's download_depot?

[-] L_Acacia@lemmy.one 9 points 2 days ago

Are you on Windows or Linux? If you managed to find the DLC files, you can most likely (not 100% sure it works with delisted content) use CreamAPI to make Steam think you own them. On Windows I've used CreamInstaller, a handy GUI that does it for you. I seem to recall doing it on my father's computer running Ubuntu, but I don't remember exactly how.

[-] L_Acacia@lemmy.one 9 points 6 days ago* (last edited 6 days ago)

Scrubbles's comment outlined what would likely be the best workflow. Having done something similar myself, here are my recommendations:

In my opinion, the best way to do STT with Whisper is Whisper Writer; I use it to write most of my messages and texts.

For the LLM part, I recommend Koboldcpp. It's built on top of llama.cpp and has a simple GUI that saves you from hunting down the name of each poorly documented llama.cpp launch flag (the CLI is still available if you prefer). Plus, it offers more sampling options.

If you want a chat frontend for the text generated by the LLM, SillyTavern is a great choice. Despite its poor naming and branding, it's the most feature-rich and extensible frontend. They even have an official extension to integrate TTS.

For the TTS backend, I recommend Alltalk_tts. It provides multiple model options (xttsv2, coqui, T5, ...) and has an okay UI if you need it. It also offers a unified API across the different models. If you pick SillyTavern, it can be accessed through their TTS extension. As for the models, T5 will give you the best quality but is more resource-hungry; Xtts and coqui will give you decent results and are easier to run.

There are also STS models emerging, like GLM4-V, but I still haven't tried them, so I can't judge the quality.
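To show how the LLM piece of this pipeline gets wired up: Koboldcpp serves a KoboldAI-compatible HTTP API (by default on port 5001, with a `/api/v1/generate` endpoint). A minimal sketch of building a request for it — the prompt text and sampling values here are just placeholders:

```python
import json

# Koboldcpp's default local endpoint (assumes default port 5001).
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

def build_request(prompt: str, max_length: int = 200,
                  temperature: float = 0.7) -> str:
    """Build the JSON body for Koboldcpp's /api/v1/generate endpoint."""
    payload = {
        "prompt": prompt,          # text the model continues
        "max_length": max_length,  # tokens to generate
        "temperature": temperature # sampling temperature
    }
    return json.dumps(payload)

body = build_request("User: Hello!\nAssistant:")
# POST `body` to KOBOLD_URL with any HTTP client; the generated text
# comes back under results[0]["text"] in the JSON response.
```

A frontend like SillyTavern does essentially this for you, then hands the returned text to the TTS extension.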

[-] L_Acacia@lemmy.one 13 points 10 months ago

I put Zorin on my parents' computer 2 years ago. While it's a great distro, its Windows app support is just marketing: it's an out-of-date Wine version with an unmaintained launcher, worse than tinkering with Wine yourself.

[-] L_Acacia@lemmy.one 7 points 11 months ago

It is already here; half of the article thumbnails are already AI-generated.

[-] L_Acacia@lemmy.one 24 points 11 months ago

You are easier to track with AdNauseam.

[-] L_Acacia@lemmy.one 7 points 1 year ago

Windows does not run well on ARM, which can be a turnoff for some.

[-] L_Acacia@lemmy.one 9 points 1 year ago* (last edited 1 year ago)

Llama models tuned for conversation are pretty good at it. ChatGPT also was, before getting nerfed a million times.

[-] L_Acacia@lemmy.one 15 points 1 year ago

JPEG XL support is being tested in Firefox Nightly.

[-] L_Acacia@lemmy.one 8 points 1 year ago

https://tiz-cycling-live.io/livestream.php

Be sure to use an adblocker. Sometimes the stream gets taken down and you have to wait a minute or two for them to repost one.

[-] L_Acacia@lemmy.one 6 points 1 year ago* (last edited 1 year ago)

The best way to run a Llama model locally is using Text generation web UI. The model will most likely be quantized to 4/5-bit GGML / GPTQ today, which makes it possible to run on a "normal" computer.

Phind might make it accessible on their website soon, but it doesn't seem to be the case yet.

EDIT: Quantized versions are available thanks to TheBloke.
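Why 4/5-bit quantization makes a "normal" computer enough: the weight file shrinks roughly in proportion to bits per weight. A back-of-envelope sketch (the 4.5 bits/weight figure is an assumption approximating mixed 4/5-bit quant formats, not an exact spec):

```python
def quantized_size_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-file size in GiB for a model at a given precision."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A 13B model: fp16 vs ~4.5-bit quantized (assumed average for 4/5-bit quants)
fp16_gib = quantized_size_gib(13, 16)   # roughly 24 GiB, needs a big GPU
q4_gib = quantized_size_gib(13, 4.5)    # under 8 GiB, fits consumer RAM/VRAM
```

So the quantized file is around a quarter of the fp16 size, which is the whole reason it runs on ordinary hardware.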

[-] L_Acacia@lemmy.one 18 points 1 year ago

This is because LibreWolf reports itself as Firefox for privacy, and Vivaldi does the same thing with Chrome. There is no Vivaldi string in their user agent.


L_Acacia

joined 1 year ago