I find it useful in a lot of ways. I think people try to over-apply it, though. For example, as a software engineer, I would absolutely not trust AI to write an entire app. However, it's really good at generating "grunt work" code: API requests, unit tests, etc. Things that are well-trodden, but change depending on the context.
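To make that concrete, here's a rough sketch of the kind of grunt-work code I mean. The endpoint, helper name, and test are all invented for illustration, not from any real project:

```python
# A thin API client plus a unit test for it: boilerplate that's tedious to
# write by hand but easy to describe. Endpoint and field names are made up.
import unittest
from unittest.mock import MagicMock, patch

import requests


def fetch_user(user_id: int, base_url: str = "https://api.example.com") -> dict:
    """Fetch a user record and raise on HTTP errors."""
    response = requests.get(f"{base_url}/users/{user_id}", timeout=10)
    response.raise_for_status()
    return response.json()


class FetchUserTest(unittest.TestCase):
    @patch("requests.get")
    def test_returns_parsed_json(self, mock_get):
        # Fake a successful response so the test doesn't hit the network.
        mock_get.return_value = MagicMock(
            status_code=200, json=lambda: {"id": 1, "name": "Ada"}
        )
        self.assertEqual(fetch_user(1)["name"], "Ada")


if __name__ == "__main__":
    unittest.main()
```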
I also find they're pretty good at explaining and summarizing information. The chat interface is especially useful in this regard because I can ask follow-up questions to drill down into something I don't quite understand, which isn't possible with a Wikipedia article, for example. For important information, you should obviously check other sources, but you should do that regardless of whether the writer is a human or a machine.
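The follow-up part mostly comes down to the conversation history being sent back each turn, so every new question is answered with the earlier context in view. A minimal sketch of that idea, with `ask_model` standing in for whatever chat backend you actually call:

```python
# Keep the whole conversation and resend it on every turn; that's what lets
# a follow-up like "why?" make sense. `ask_model` is a placeholder, not a
# real API.
from typing import Callable

Message = dict[str, str]


def chat_session(ask_model: Callable[[list[Message]], str]):
    history: list[Message] = []

    def ask(question: str) -> str:
        history.append({"role": "user", "content": question})
        answer = ask_model(history)          # model sees everything so far
        history.append({"role": "assistant", "content": answer})
        return answer

    return ask


# Usage (hypothetical backend):
# ask = chat_session(my_llm_call)
# ask("Explain TCP slow start.")
# ask("Why does it double the window instead of adding to it?")
```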
Basically, it's good at what it's for: taking a massive compendium of existing information and applying it to the context you give it. It's not a problem-solving engine or an artificial being.
I feel like it won't be AI until we figure out how to point it back at itself, have it review its own answers, and then be 'happy' when its answers are right. Not necessarily because the user gives it a good score, but because it recognizes that an answer it gave was actually used, or a prediction it made proved true (if I answer this way, the user is likely to ask this as its next question, etc.), and it starts changing its behaviour and asking itself questions to get better at that.
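Roughly the loop I'm imagining, sketched out; `generate` and `adjust` are placeholders for whatever the model and its update mechanism would actually be, not any real training API:

```python
# Speculative sketch: answer, predict the user's next question, then score
# yourself once the real next question arrives and nudge behaviour from that.
def self_review_step(generate, adjust, question, actual_next_question):
    answer, predicted_follow_up = generate(question)
    reward = 1.0 if predicted_follow_up == actual_next_question else 0.0
    adjust(question, answer, reward)  # "happiness" signal fed back to itself
    return answer
```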