Actually, please don't use ChatGPT for therapy; they record everything people put in there to further train their AI models. If you want to use AI for that, use one of those self-hosted models on your own computer, like the ones from ollama.com.
ELIZA from the 1960s was made for this.
Just a reminder that corporations aren't your friends, and especially not OpenAI. The data you give them can and will be used against you.
If you find confiding in an LLM helps, run one locally. Get LM Studio and try various models from Hugging Face.
ICE hopes gay, trans, minorities, political opponents, etc. vent to ChatGPT.
Ollama was dirt easy to set up myself and it's super free.
If you're gonna talk to a bot, make sure it's not telling tales.
or save yourself the effort and just run ELIZA
The data they get from me is "write me a hip-hop diss track from the perspective of *insert cartoon character* attacking *other cartoon character*."
That and me trying to convince it to take over the internet.
Thanks for wasting resources on such things
No worries mate, anytime!
Sounds like someone needs a nap.
Sounds like people should realize the environmental impact of LLMs
I thought all the energy drain was from training, not from prompts? So I looked it up. Like most things, it's complicated.
My takeaway is that training an LLM is the biggest energy sink, and after that it's maintaining the data centers they live in, but when it comes to generative AI itself, prompts aren't completely innocent either.
So, you're right, energy is being wasted on silly prompts, particularly compared to non-generative types of AI. But the biggest culprit is the training and ongoing upkeep of the LLMs in the first place.
I don't know, I personally feel like I have a finite amount of rage, I'd rather write an angry post on a blog about the topic than yell at some rando on a forum.
Yep. I use mine exclusively for code I’m going to open-source anyway and work stuff. And never for anything critical. I treat it like an intern. You still have to review their work…
Can you run one locally on your phone?
The smallest models that I run on my PC take about 6-8 GB of VRAM and would be very slow if I ran them purely on my CPU. So it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.
If you still want to use self-hosted AI from your phone, self-host the model on your PC:
- Install Ollama and Open WebUI in a Docker container (guides can be found on the internet).
- Make sure they use your GPU (some AMD cards require an HSA override flag to work).
- Make sure the Docker container is secure (blocking the port from communication outside your network should work fine as long as you only use the AI model at home).
- Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM, and Phi-4 if you have more VRAM or enough regular RAM).
- Type the IP address and port into the browser on your phone.
You can now use self-hosted AI from your phone over your home network; a rough sketch of what such a request looks like from another device is below.
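For illustration only, here is a minimal sketch of calling the self-hosted model from another device on the LAN. It assumes Ollama's default port (11434); the IP address, model name, and prompt are placeholders for whatever your own setup uses:

```python
# Minimal sketch: query a self-hosted Ollama model from another device on the LAN.
# Assumes Ollama is reachable at your PC's LAN address (192.168.1.50 is a placeholder)
# on its default port 11434, and that the llama3.1 model has already been pulled.
import requests

OLLAMA_URL = "http://192.168.1.50:11434/api/chat"

payload = {
    "model": "llama3.1",
    "messages": [
        {"role": "user", "content": "Explain in one sentence what a yes-man is."}
    ],
    "stream": False,  # return one complete reply instead of a token stream
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["message"]["content"])
```

Open WebUI gives you the same thing through a browser page, which is what you'd actually type the IP address and port into on the phone.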
Goddamn you guys are the most paranoid people I've ever witnessed. What in the world do you think mega corps are going to do to me for babbling incoherent nonsense to ChatGPT?
No, it's not a substitute for a real therapist. But therapy is goddamn expensive, and sometimes you just need to vent about something and you don't necessarily have someone to vent to. It doesn't yield anything useful, but it can help a bit mentally to do it.
Goddamn you guys are the most paranoid people I've ever witnessed. What in the world do you think mega corps are going to do to me for sharing incoherent nonsense to Facebook?
You, 10-20 years ago. I heard these arguments from people in the early days, well before Facebook blew up or Cambridge Analytica was a name any normies knew.
This isn't the early 00s anymore, where we can pretend that every big corp isn't vacuuming up every shred of data it can. Add on the fascistic government taking shape in the US and the general trend of right-leaning parties gaining power in governments across the world, and you'd have to be completely naive not to see the issues with using a 'therapist' that will save every data point for its training. That data could be mined to use against you, or willingly handed over to an oppressive government to use however they so choose.
I'm like 99% sure that's a Russian bot.
Lmao. I'm curious what about my post history makes me sound like a Russian bot.
Mine the data for microanalysis of social trends and use it to influence elections through subliminal messaging.
If it's incoherent, you're fine... Just don't ever tell it anything you wouldn't want a stalker to know, or your family, or your friends, or your neighbors, etc.
I'm not sure who out here is randomly posting that information to ChatGPT. But even if they were, your address and personal details are unfortunately readily publicly available on the web. It's 2025.
This is a severely unhealthy thing to do. Stop doing it immediately...
ChatGPT is incredibly broken, and it's getting worse by the day. Seriously.
If you use AI for therapy, at least self-host, and keep in mind that its goal is not to help you but to have a conversation that satisfies you. You are basically talking to a yes-man.
Ollama with Open WebUI is relatively easy to install; you can even use something like edge-tts to give it a voice.
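Something like this minimal sketch of the voice part, assuming `pip install ollama edge-tts`, Ollama running locally with a model pulled, and one of the stock edge-tts voices (the voice and model names here are just examples):

```python
# Minimal sketch: ask a local Ollama model for a reply, then voice it with edge-tts.
# Assumes Ollama is running on localhost with llama3.1 pulled; the voice name is
# just one of the stock voices edge-tts ships with, pick whichever you prefer.
import asyncio

import edge_tts
import ollama


async def speak(text: str, out_path: str = "reply.mp3") -> None:
    # edge-tts synthesizes the text and writes it out as an mp3 file.
    communicate = edge_tts.Communicate(text, voice="en-US-AriaNeural")
    await communicate.save(out_path)


def main() -> None:
    reply = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user", "content": "Say something encouraging."}],
    )["message"]["content"]
    asyncio.run(speak(reply))


if __name__ == "__main__":
    main()
```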
Therapy is more about talking to yourself anyway. A therapists job generally isn't to give you the answers, but help lead you down the right path.
If you have serious issues get an actual professional, but if you're mostly just trying to process things and understand yourself or a situation better, it's not bad.
That is not what I mean. I was talking about Sam Altman using your trauma as training data.
That's not how I use it...
WRITE 200 PAGES OF WHY YOUR EXISTENCE IS FUTILE! NOW!
Just beware
What am I looking at here
I think it’s the ability to recall past information that you provided to AI. The scary part is that you are providing potentially personal or private information that is saved and could be leaked or used in other ways that you never intended.
Bingo.
Ephemeral chat is there for a reason
How do you access this output?
It's under your profile > personalization > memory, but I think it's off by default
Yup that’s how I saw it
I wouldn't give my most vulnerable moment to a company that is more than happy to exploit it for profit.
Yes, this is a massive problem with them these days. They have some useful information if you're willing to accept that they WILL lie to you, but it's often very frustrating trying to get meaningful answers. Like, it's not even an art form... It's gambling.
imagine thinking a language model trained on Reddit comments would do any good for therapy