this post was submitted on 14 May 2025
193 points (99.0% liked)
Futurology
Can I run anything on a 3090, or do I need a beefier GPU?
I was able to run LLMs on a 1080. They were admittedly small ones, but a 3090 is enough to be usable. That said, I'm not convinced it's a good idea to use one for therapy. I expect it's about as useful as talking into a mirror.
I don’t know what to do with myself anymore I just want to be able to be with you and be with you I don’t know what to do about that I don’t know what to do I don’t know what to do I’m not gonna do that I don’t know what to do and I don’t know what to do but I don’t know what to do so I’m just trying I know that you don’t know how much you have and you know what you know but I’m just not trying and you can’t tell I’m not gonna tell anybody else how much I know I don’t know what I don’t even care what you do I don’t care I just know that I don’t know what to do with me and I’m just saying what do what do I
This is how an LLM works
I mean, it can be pretty coherent and impressive with the right LLM, but even then it's mostly just generating what you want it to, and it's certainly not going to provide professional insight for those who need it.
Yeah, it can be a lot more coherent; the above was just the autocorrect suggestions on my phone, pressed again and again.
But the LLM is not “listening” to you, it is just producing the next likely words in a conversation based on its training data, I believe…
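That "next likely words" idea can be sketched in a few lines. This is a toy illustration only, not an actual LLM: the candidate words and their scores are made up, and a real model would compute logits over a vocabulary of tens of thousands of tokens with a neural network. The mechanism shown (softmax the scores, then sample) is the same, though.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up candidate next words and made-up scores for illustration.
candidates = ["listening", "predicting", "guessing"]
logits = [1.0, 3.0, 2.0]
probs = softmax(logits)

random.seed(0)  # deterministic for the demo
next_word = random.choices(candidates, weights=probs)[0]
print(next_word, [round(p, 3) for p in probs])
```

The model never "understands" the sentence; it just keeps appending whichever word the sampler draws, one token at a time.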
AI therapy sounds like a terrible idea…
Does journaling count? It's got a small positive benefit according to https://pmc.ncbi.nlm.nih.gov/articles/PMC8935176/
Talking things out to ourselves can often be useful. It could certainly be good for that.
The 3090 is great because it has about the same amount of VRAM as newer cards, and VRAM is what determines which models you can run on it.
I think I'm on a 3060 or so and it works decently depending on the model. I can generally get away with around 13B models, or some 20B+ at Q4 quantization, but they get really slow by that point.
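The sizing above follows from a rough rule of thumb: quantized weights take roughly (parameters × bits per weight ÷ 8) bytes, plus some overhead for the KV cache and activations. The 20% overhead figure here is an assumption, not a measured value, so treat the numbers as ballpark estimates only:

```python
def est_vram_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Rough VRAM estimate (GB) for a quantized LLM.

    weights ≈ params * bits/8; multiply by an assumed ~20% overhead
    for KV cache and activations. A back-of-envelope sketch, not exact.
    """
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb * overhead, 1)

for params, bits in [(7, 4), (13, 4), (20, 4), (13, 16)]:
    print(f"{params}B @ {bits}-bit: ~{est_vram_gb(params, bits)} GB")
```

By this estimate a 13B model at Q4 needs roughly 8 GB, which fits a 3060's 12 GB, while a 20B+ model starts pushing the limit — consistent with the slowdown described above once layers spill out of VRAM.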
It's a lot of trial and error to find something that performs decently while not being so limited that it gets crazy repetitive or says loony things.