this post was submitted on 13 Sep 2023
266 points (98.5% liked)
Asklemmy
It's not that I hate it, but ChatGPT sucks.
There was this uber-hype around it, then we started using it ... and it just makes so many errors that it literally generates more work. We scrapped it after less than a week. It's modern snake oil.
Bard is the same. I asked it questions about two of my favourite bands, which I know a lot about. It omitted facts and invented things that were not true!
We used it for code generation, but we ended up spending more time fixing and debugging the generated code than it would have taken us to just write it ourselves. It also introduces the most annoying type of bugs. Once it misspelled a property name, but only at one point in the code; it got the name right everywhere else.
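That bug class is nasty precisely because a one-off typo can fail silently. A minimal, hypothetical Python sketch of the pattern (the record and field names here are made up for illustration):

```python
# Hypothetical illustration of the bug class described above: a record is
# accessed by key in many places, and exactly one call site misspells the key.
user = {"username": "alice", "email": "alice@example.com"}

def display_name(record):
    # Correct spelling: works as expected everywhere else in the code.
    return record["username"]

def notify(record):
    # Misspelled key ("usernmae") at just this one call site.
    # .get() hides the typo by silently returning the fallback value,
    # so nothing crashes -- the output is just quietly wrong.
    return f"Sent mail to {record.get('usernmae', 'unknown')} <{record['email']}>"

print(display_name(user))  # alice
print(notify(user))        # Sent mail to unknown <alice@example.com>
```

A plain `record["usernmae"]` would at least raise a `KeyError`; the silent variant is the kind of generated bug that only shows up when you read the output carefully.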
That's why, in the case of a GPT model, you would feed it your own data using something like LlamaIndex (which retrieves relevant context for the model rather than retraining it). I don't know if there's an API available for Bard, though.
You're wrong to assume that the free models we have at our disposal are the best possible implementations of these LLMs.
What? I have the opposite experience.
I'm a tabletop roleplaying game master, and it has helped me immensely with translations, formatting text, compiling and keeping track of my players' character backgrounds, and even coming up with plots and scenes that are suited to each player.
What did you use it for? It helps me a lot with coding, scripting, translations, terminology... Sometimes it makes mistakes, but other times it produces working code that accomplishes exactly what I asked for.
In any case, ChatGPT is just a demo that uses the GPT-3.5 Turbo model. Many people are being misled into assuming that the ChatGPT research preview is all the model has to offer. You can also try the improved GPT-4 model, but it's not free.
If you really want to get its full potential, you need a custom implementation in Python that works against the API and does things like fine-tuning the model, using embeddings, feeding it custom data, or giving it access to tools with LangChain.
Of course, that's not easy to do, but don't assume the ChatGPT web/app shows the GPT models' full potential.
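For the curious, "working against the API" doesn't have to mean a big framework. Here's a minimal sketch using only the standard library, targeting OpenAI's chat completions endpoint; the model name, system prompt, and temperature are placeholder choices, not recommendations:

```python
import json
import os
import urllib.request

# OpenAI's chat completions endpoint (the same API ChatGPT is built on).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_message, system_prompt="You are a helpful assistant.",
                  model="gpt-4", temperature=0.2):
    """Assemble the JSON body for a chat completion request.

    The system prompt is where a custom implementation differs from the
    ChatGPT web app: you control the instructions, not a generic preset.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

def ask(user_message):
    """Send the request; requires the OPENAI_API_KEY environment variable."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_message)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Embeddings, retrieval, and tool use (what LlamaIndex and LangChain package up) are layers on top of requests shaped like this one.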
I have a feeling this one's mostly operator error.
Or you vastly overestimated what it could do.
Once we found the issues, it was actually quite easy to tell the AI to fix them. But at that point you're debugging generated code to improve your input to the code generator ... and it was just faster to write the code by hand.
And yes, there was a vast overestimation of what it can do, especially by some managers who used to be coders and thought it would compensate for their lack of recent practical experience. It didn't ... I had to fix the result.
My point is that it's not just for coding; if you think that's the only use case, then sure, I get why you'd think it was shitty.
I've used it a bit for general knowledge things and fun facts, and on more than a couple of occasions it just made shit up.
I'm sure it has some uses; I see a lot of AI-generated porn in my "all" feed ... I just haven't found one for myself or my work.
Interesting. I'm working as a network engineer, and my current job is overhauling an old TV broadcast facility. There are a lot of odd solutions, like off-brand switches, a lack of documentation, etc.
AI has been absolutely critical. It doesn't do the work for me, but like any good tool it amplifies my ability to do work by cutting out the middleman of sifting through pages of Spiceworks and Stack Overflow articles trying to figure out what command a ten-year-old Avaya switch needs to accomplish whatever task I require of it.
Is it always correct? No. That's why the engineer behind the screen exists. It does usually get me a workable answer more quickly than just having to look it up myself, though. Between my knowledge of terminal CLI commands and the AI, I've been able to get a lot done.
Hell, I had it walk me through the process of setting up automated backups; it even suggested the TFTP server I used to do it. Shit's been working great.
Even our service desk has been able to use it to help with more advanced problems by telling it the issue and describing what has already been done.
Idk why no one else sees the value, I'm over here like Captain Picard solving problems by talking to the LCARS system.
I do see the potential value, and I'm happy it worked out for you. But don't end up like the lawyers who used ChatGPT like a search engine, and it just made up fictional cases that they cited in an actual court.
Yeah, that happened.
You only end up like those morons by trusting AI to be perfect. I don't trust AI to even be "good", let alone perfect.
If you're willing to just throw your job into an LLM and hope for the best you deserve to get fired.