I've only used DuckDuckGo's implementations of GPT and Claude. I haven't really found a use case yet. I don't trust it enough to use it for queries related to things I don't understand (gaps in my knowledge) and would rather solve these problems or learn these skills through existing sources of information that I know have had at least some level of human refinement/vetting. Personally I enjoy the challenge of problem solving in life, particularly when the solution involves learning a new skill that I can utilise again in the future. I find it interesting that AI is advertised as being able to maximise our capabilities as humans, because it appears to be used for the complete opposite in most cases. People want to use their brains less and just be spoonfed the answers.
I've only used deep-learning AI for DeepL translations and for a text annotation project I did for uni.
So far, there have been two interesting uses I've seen for ChatGPT.
One is that I've used it to help me write regular expressions on the rare occasions I need them for my job.
The other is kind of cool but also kind of troubling. I've come across a couple of therapy-style chatbots that are essentially just reading off a list of "here's what to do for XYZ".
I've tested them a bit, and I've found I'm 1) concerned about who gets access to the information shared, 2) worried about if/when these kinds of bots will be used to manipulate people in a negative way, and 3) wary of the possibility of a bot replying in a bad way that could make an issue worse for someone.
Overall, I like the idea of them. I find it hard to process information when it's coming directly from myself, or to accept compassion from myself. Funnily enough, these chatbots actually work really well in that respect.
In some cases, I've had better discussions than I have had with actual therapists, which is funny but also sad.
So while there are some troubling possibilities, I think there are a lot of positives that I've seen from my time with it.
I use it to see the answers to problems on my physics homework when I can't figure them out myself. It works far better than forums, which are mostly paywalled these days.
I'm a bit disappointed in the practical uses, but I still get some value out of AI. I sometimes use ChatGPT to tweak existing SQL scripts at work, and as a troubleshooting assistant. I also use a tool called Ultimate Vocal Remover to extract stems from songs, mainly to make myself instrumentals to practice singing over. Those are really the only things I do regularly, despite trying different self-hosted AI tools. Most are cool but not very useful.
I've been making a small album of music with Suno, out of lyrics I wrote and a consistent general style/genre. It's pretty fun.
As a musician with experience recording albums, even when the songs come out basic, I can always re-record them myself and make them less generic.
For occasional chatting with Bing and translation help with DeepL.
Dank memes
I've been using ChatGPT in conjunction with search engines just to find things I need. For instance, I did an April Fools presentation for a work meeting and needed humorous real-life legal stories, so the AI was able to provide suggestions.
I also use it for simple tasks, like organizing info into a table.
Mainly, though, my reason for using it is that, since I work in tech, I'm going to need to know how to use it well, and the best way to do that is being hands-on.
- General-purpose LLMs are starting to replace everyday queries I used to pose to Google. Perplexity can be quite good for this.
- Copilot as enhanced autocomplete when writing code. A particularly good use case is writing tests: with a few test cases already written, a sufficiently descriptive test name will most often generate a well-written test case (see the sketch after this list).
- LLMs for lazy generation of SQL queries can sometimes be quite nice.
- Writing assistance for things I find myself struggling to get written by myself. A writing unblocking tool, if you will.
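As a rough illustration of that test-writing workflow (a minimal sketch only: `slugify` is a hypothetical function under test, written here in pytest style, and the final body is the kind of completion Copilot tends to suggest, not guaranteed output):

```python
# Hypothetical function under test.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# A couple of hand-written tests give the model context...
def test_slugify_lowercases_words():
    assert slugify("Hello World") == "hello-world"

def test_slugify_handles_single_word():
    assert slugify("Hello") == "hello"

# ...so that typing only a descriptive name for the next test is often enough
# for Copilot to autocomplete a plausible body like the assertion below.
def test_slugify_collapses_multiple_spaces():
    assert slugify("Hello   World") == "hello-world"
```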
It's reducing the effort and time I have to put into some things, and I appreciate that. It's far from perfect, but it doesn't have to be perfect to be useful.
I tried using ChatGPT for making cover letters a while ago, but the results were terrible; I do better just writing them myself.
I'm using Claude (subbed) to help me do qualitative coding and summarizing within a very niche academic framework. I was encouraged to try it by an LLM researcher and frankly I'm happy with the results. I am using it as a tool to assist my work, not replace it, and I'm trying to balance the bias and insights of the tool with my own position as a researcher.
On that note, if anyone has any insights or suggestions to improve prompts, tools, or check myself while I tinker, please, tell me.
Some of my common uses are:
- Asking extremely niche scientific questions: I don't depend on these answers, but the answer usually contains the specific terminology I can then search to find the answers I was looking for. I have learned a lot about the properties of metals and alloys this way, and about what the planet could look like with different compositions.
- Re-phrasing things: At work, when I'm drained and out of patience, I can tell that what I'm writing in my emails is not really appropriate, so I have GPT re-phrase it. GPT's version is typically unusable, of course, but it kicks my brain in the direction of re-phrasing my email myself.
- Brainstorming: The program has endless patience for my random story-related questions and gives me instant stupid or cliché answers. This is great for me, because part of my creative process since I was a kid has been seeing something in media that was less than satisfying and my brain flying into all the ways I could have done it better. I ask the program for its opinion on my story question, say "no idiot, instead:", and what comes after is the idea I was looking for from my own mind. Sometimes by total chance it has a good suggestion, and I can work with that too.
Fun uses which are less common:
- Comedy use: I once had it generating tweets from Karl Marx about smoking weed every day. The program mixed Marxist philosophy and language with contemporary party music to endlessly amusing results. Having historical figures with plenty of reference material from their writings opining on various silly things is very funny to me, especially when the program makes obvious mistakes.
- Language manipulation: If some philosophical text which was written to be deliberately impenetrable is getting too annoying to read, the program is decent at translating it. If I plug in a block of text written by Immanuel Kant and have the program re-write it in the style of Mark Twain, the material instantly becomes significantly easier to understand. Re-writing it in the style of stereotypical Gen Z is hilarious.
Nothing but having it write stories (not shared or used for anything, just for fun). That, and coming up with names for things, since I struggle with that.
Almost nothing. I sometimes use it to rephrase a question or answer. I refuse to become dependent on AI or contribute to it more than I already unwittingly have.