this post was submitted on 01 Apr 2025
Asklemmy
Regarding your specific example, there are pretty good reasons not to use AI if there's an adequate alternative, so I can absolutely understand people arguing against that.
AI is resource intensive and thus bad for the environment. Results usually aren't deterministic, so the behavior is no longer reproducible. If there is a defined algorithm to solve the issue in a correct way, AI will be less accurate. If you use cloud services, you may run into privacy issues.
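The reproducibility point can be made concrete with a toy sketch (not a real LLM, just an assumed stand-in for next-token sampling): with temperature 0 you always take the argmax, so every run gives the same result; with temperature above 0 you sample, so reruns can differ.

```python
import math
import random

def sample_next(weights, temperature, rng):
    """Pick an index from `weights`. Temperature 0 is greedy (argmax,
    fully deterministic); temperature > 0 samples from a softmax."""
    if temperature == 0:
        return max(range(len(weights)), key=lambda i: weights[i])
    scaled = [math.exp(w / temperature) for w in weights]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    return rng.choices(range(len(weights)), weights=probs)[0]

weights = [2.0, 1.0, 0.5]

# Greedy decoding: ten independent runs all land on the same index.
greedy_runs = {sample_next(weights, 0, random.Random()) for _ in range(10)}
print(greedy_runs)  # {0} - reproducible

# Sampling at high temperature: reruns are free to disagree.
sampled_runs = {sample_next(weights, 2.0, random.Random()) for _ in range(50)}
print(sampled_runs)  # typically more than one index
```

The same contrast shows up in real inference APIs, where a temperature setting (and sometimes a seed) controls how deterministic the output is.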
Not saying there aren't any use cases for LLMs or other forms of AI. But just applying it everywhere 'cause it's fancy is not a good idea.
In general, I appreciate if people question my work or come up with proposals for improvement as long as it's polite and the person is at least qualified to some degree. However, that does not mean that I change my mind immediately and follow their advice.
Yeah, if you have a better way of doing something with no drawbacks, you should do that; I'll just say that out of pure reason.
Thinking about deterministic results: I can imagine flawed code that deterministically gives a wrong result for 1 out of thousands of potential inputs, and you can decide that this one wrong answer is either (a) not a big enough flaw to fix (the code is good enough) or (b) not worth fixing because it's rare (too much effort). How that applies to LLMs: you can look at what the LLM outputs and determine whether its execution is good enough or not.
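That idea can be sketched in a few lines (the function and its single flawed input are hypothetical, just to illustrate the point): because the flaw is deterministic, the same input always reproduces it, so you can measure exactly how often it matters and then decide whether it is "good enough".

```python
def flawed_double(x):
    """Deterministically doubles x, except for one deliberately wrong input."""
    if x == 4096:          # the single flawed case (made up for illustration)
        return x * 2 + 1   # off by one
    return x * 2

# Count how many of 10,000 inputs are handled incorrectly.
wrong = sum(1 for x in range(10_000) if flawed_double(x) != x * 2)
error_rate = wrong / 10_000
print(wrong, error_rate)  # 1 0.0001
```

An error rate of 0.01% might be judged acceptable or not, but with a deterministic flaw that judgment is at least based on a number you can reproduce on demand.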
Using a lot of resources at the cost of the environment is more of a values thing. Cyanobacteria didn't care about poisoning the environment with oxygen. Ironically, I don't think the electric grid should be restructured for AI, since I don't think AI is doing anything important enough to warrant changing the electrical grid.
I would care if someone was rude or unqualified on an issue. I'd want to know why something I did was wrong, either technically or morally, or if there's a better way of doing it and why it's better.
Would you? Your tone reads as fairly rude in this post, and your qualifications seem quite lacking if you don't even comprehend the dire environmental impact and obvious drawbacks of the vast majority of contemporary AI big compute. For that matter, most LLM outputs are not deterministic, especially with certain configurations, e.g. high temperature, so I don't even follow your contrived example here. Consider that cyanobacteria are unaware of their environmental impact; humans are not so ignorant, unless they choose to be.
I fucked up, I meant I wouldn't care if someone is rude or unqualified. Also, forming attacks on me based on things I said is hilarious. I don't even bother defending myself to people like you, mostly because you don't want to hear me out.