LLM is just a slow way to do things that have better ways to do them.
Or to have an expensive autocorrect do your thinking.
Upvoted. It’s utterly useless.
Clearly you haven't worked with one.
It's great for getting detailed references on code, or finding sources for info that would take a LOT longer otherwise.
or finding sources for info that would take a LOT longer otherwise.
Maybe. It adds to the list of sources you have to check, but I've found I still have to verify manually that each one is actually on topic rather than only tangentially related to what I'm writing about. But that's fair enough, because otherwise it'd be like cheating, having whole essays written for you.
It's great for getting detailed references on code
I know it's perhaps unreasonable to ask, but if you can share examples/anecdotes of this I'd like to see them, to better understand how people are utilising LLMs.
That’s some funny shit.
Skill issue. I'm better at retrieving and then actioning real and pertinent information than you and an AI combined, guaranteed.
Not who you're responding to, but I used one extensively in a recent work project. It was a matter of necessity, as I didn't know how to word my question in the technical terms specific to the product, and it was something that was just perfect for search engines to go "I think you actually mean this completely different thing". There was also a looming deadline.
Being able to search using natural language, especially when you know conceptually what you're looking for but not the product- or system-specific technical term, is useful.
Being able to get disparate information that is related to your issue but spread across multiple pages of documentation in one spot is good too.
But detailed references on code? Reliable sources?
I have extensive technical background. I had a middling amount of background in the systems of this project, but no experience with the specific aspects this project touched. I had to double check every answer it gave me due to how critical what I was working on was.
Every single response I got had a significant error, oversight, or massive concealed footgun. Some were resolved by further prompting. Most were resolved by me using my own knowledge to work from what it gave me back to things I could search on my own, and then find ways to non-destructively confirm the information or poke around in it myself.
Maybe I didn't prompt it right. Maybe the LLM I used wasn't the best choice for my needs.
But I find the attitude of singing praises without massive fucking warnings and caveats to be highly dangerous.
Great response.
It’s great until you realize it’s led you down the garden path and the stuff it’s telling you about doesn’t exist.
It’s horrendously untrustworthy.
LLMs are by far the best way to retrieve information (that doesn't need to be correct).
What is the point in retrieving information if it isn’t correct?
Specifically, NEED. Very few things in our day-to-day life NEED to be correct. Typically it's good to be correct, but our use cases can handle being wrong, because they're either low stakes or we'll be diving deeper into the topic as we narrow down our information search.
We do these kinds of searches all the time. Every time you ask an average person a question, you're performing one of these searches. Every time something pops into your head and you want a quick answer, you're performing one of these searches. When you search for information online, you're generally performing one of these kinds of searches.
An example: say I want to know a few of the popular Python libs for interacting with Atlassian. It gives me a list of some libs and links, and I can go check them out.
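That "go check them out" step is also where hallucinated library names get caught. As a minimal sketch (the function names here are my own; the PyPI JSON API endpoint is real), one could verify each suggested package actually exists before spending time reading about it:

```python
from urllib.error import HTTPError
from urllib.request import urlopen

def pypi_url(package: str) -> str:
    # PyPI's JSON API endpoint for a single project.
    return f"https://pypi.org/pypi/{package}/json"

def exists_on_pypi(package: str) -> bool:
    """Cheap sanity check against hallucinated library suggestions:
    a real package name resolves; a made-up one returns 404."""
    try:
        with urlopen(pypi_url(package)) as resp:
            return resp.status == 200
    except HTTPError:
        return False
```

Thirty seconds of this kind of checking is the "diving deeper" step the low-stakes search relies on.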
I would much rather be directed to correct information than be told information that may not be correct. Bad information causes me to waste my time and money.
I disagree with your overall opinion for various reasons: relying on AI erodes research and critical thinking skills; Copilot is dangerously unreliable in the majority of use cases; it is invasive to the point that it's creating a user backlash; there are many serious social/emotional issues surfacing because of AI over-use; etc., etc.
But I respect that you have shared a genuinely unpopular opinion here (in the right community). And you put your arguments forward in a well-worded and coherent way. So kudos.
ETA: I don't think it's appropriate to personally insult OP because of this post, as a small number of people here are doing. C'mon people, look at the community we are in and don't resort to such insults. This is a great topic for legit discussion.
I agree that LLMs can erode critical thinking skills and can be unreliable, but I think they ARE fit for purpose for the majority of searches and queries people have day to day, and people are figuring out what is and isn't a question for an LLM. Like, I'm not going to ask an LLM how to configure some piece of software; I'll go to the docs and read them, because I need it configured correctly. I wouldn't ask an LLM if I can eat this weird mushroom, because I might die if it's wrong.
But I would ask an LLM what tech I can use to get X result and then look through the summaries of each suggestion. I would ask for a report or document template to be generated, because I'm proofreading the document anyway. I would ask for help automating a task. I would ask for help writing the random low-effort slop posts I have to do for office stuff, like marketing emails, event announcements, etc.
My reasoning for this post is that even though I don't like Copilot currently, I can see that at its core it's a good feature, and with the right polish it could be a great improvement for users. A big gripe I have is that marketers have way overpromised what assistants like Copilot can do. When I speak with other people, I can see they've already been leaning heavily on natural-language queries for over a decade now, and having this built into the OS would be a huge quality-of-life improvement and would expand what the tool can do.

People were already outsourcing their thinking to Google years ago, so I can't pearl-clutch over doing the same with ChatGPT. We can put our heads in the sand (like most people in this thread) and pretend people aren't using these LLMs for information, but the reality is that they are, and we need to accept it and be involved in building the software that people want.
I'm a Linux user, and I think it would be very useful to be able to click one button and say "Set a calendar event for the 25th, my dad's birthday, and set a reminder a week earlier" and have it set that. It works on mobile just fine.
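The deterministic half of that request is the easy part; the natural-language parsing is where the assistant earns its keep. As a sketch (stdlib only, the function name and the split of responsibilities are my own assumptions), once a title and date have been extracted, emitting a standard iCalendar (RFC 5545) event with a one-week-early reminder is only a few lines:

```python
from datetime import date

def birthday_event_ics(title: str, day: date) -> str:
    """Build a minimal iCalendar event: all-day, repeats yearly,
    with a display reminder one week before. A real assistant
    would parse the spoken request into `title` and `day` first."""
    stamp = day.strftime("%Y%m%d")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{title}",
        f"DTSTART;VALUE=DATE:{stamp}",
        "RRULE:FREQ=YEARLY",           # birthdays repeat every year
        "BEGIN:VALARM",
        "ACTION:DISPLAY",
        f"DESCRIPTION:Reminder: {title}",
        "TRIGGER:-P7D",                # fire one week before the event
        "END:VALARM",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
```

The resulting text can be imported by any desktop calendar app that understands .ics files, which is most of them.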
Well that’s definitely an unpopular opinion
I will supplement it by saying Gemini is actually a decent Google replacement for mundane searches
But copilot and ChatGPT are complete garbage
That's fair, and Gemini may be better, but I don't think the difference in quality is make-or-break conceptually. They both fill the purpose well enough for me to see that the potential is there, even if Gemini would have been a better choice.
I don't know why people are downvoting this. It really is an unpopular opinion.
retrieve information (that doesn't need to be correct).
Perhaps I'm just one of "the olds" who doesn't get modern technology, and that's why I'm having a really difficult time imagining why I, or anyone, would ever spend time looking something up when factual correctness is optional to begin with.
Yeah, if I don't care about correctness, I can just make it up myself.
Unless you think people always come away from Google with the right answer, I don't see the 1:1.
If you NEED the right answer, you should go to a trusted source, same as if you were using Google. If you're just looking for an answer, then usually blogspam articles, Reddit, or AI will all be good enough to return something satisfying. AI is just a faster way of searching a question on Google and clicking the top result.
a faster way of searching a question on Google and clicking the top result.
No, it isn't. The "I'm feeling lucky" button is.
No it's not. Firstly, 99% of people have no idea what that button is.
Secondly, opening a web browser, going to Google, typing in your question, pressing "I'm Feeling Lucky", and then searching through the webpage is way slower than hitting the Copilot button, typing your question, and getting a quick direct answer.
Then write yourself a desktop plugin, an icon, an input box, anything, to take you to the first Google search result. What the fuck does this have to do with LLMs? How is this justified to use gallons of water, gigawatts of electricity, and PBs of stolen training data?
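For what it's worth, the non-LLM version of that shortcut really is a few lines. A sketch in Python (the `btnI` parameter is Google's real "I'm Feeling Lucky" switch, though Google may interpose a redirect-notice page; the function names are mine):

```python
import webbrowser
from urllib.parse import urlencode

def lucky_url(query: str) -> str:
    # btnI=1 requests Google's "I'm Feeling Lucky" redirect straight
    # to the first result. Google sometimes shows a confirmation page
    # instead, so treat this as a convenience, not a guarantee.
    return "https://www.google.com/search?" + urlencode({"q": query, "btnI": "1"})

def open_first_result(query: str) -> None:
    # Hand the URL off to the desktop's default browser.
    webbrowser.open(lucky_url(query))
```

Bind `open_first_result` to a hotkey or launcher entry and you have the "copilot button" workflow without any model involved.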

Upvoted for absolutely horrendous take.
Keshee out in the wild? Upvote!
upvoting because this is a good unpopular opinion.
unfortunately, microsoft is about ten years too early to the party, like they always are. what they offer isn't very reliable either, in my opinion
No it's not
No it won't. Not even because it's shit, but because it's Microsoft's, and they sure as fuck aren't gonna put it in any OS other than Windows.
It has not been added to every phone and isn't even 10 fucking years old. It's barely existed for 2.
Are you just using "Copilot," a specific brand of generative AI assistant, generically, the way a parent would call all video games "Nintendo"?
It has not been added to every phone and isn't even 10 fucking years old. It's barely existed for 2.
It might be that they are conflating GenAI with services such as Google Assistant or Siri. Though I personally find that Google Assistant is/was more useful than their GenAI implementation (Gemini).
LLMs are by far the best way to retrieve information (that doesn't need to be correct)
We already have proof that this is a popular feature for users, since it's been integrated into every mobile phone for the past 10 years.
you've obviously retrieved your thinking in the best possible way, by far.
Sometimes I wonder if people come here and derive ludicrous drivel for the explicit purpose of posting here.
This is one of those times.
It's a slow day at work, I won't lie.
since it's been integrated into every mobile phone for the past 10 years.
We just found the person who writes job announcements asking for more years of experience with a technology than the technology has been in existence.
Do you not think assistants have been in phones for 10 years, or are you saying they're different specifically because they didn't use LLMs back then?
So many downvotes, I didn't think so many people would agree!
Honestly, if you think it's a good feature, I challenge you to make a few short YouTube tutorials demonstrating how to use its helpful features. That would help spread awareness and convince people with evidence.
I have heard a lot of (what I think is) hot air from Microsoft's head AI guy about how much it streamlines professional life, and I would love for that to actually be true, but I can't help feeling I would already know about these wondrous features if that were in fact the case. Because people would be gushing about them.
We already have proof that this is a popular feature for users, since it's been integrated into every mobile phone for the past 10 years
That seems like a good argument. People went crazy over Siri and such.
LLMs are by far the best way to retrieve information (that doesn't need to be correct).
I'm not sure when people ever need to retrieve information that doesn't need to be correct, in a professional context. But thanks for being honest, I guess.
I think you're getting confused by the marketing. The marketing makes it out to be this useful thing that will do your work for you, which it can't. It doesn't have features; it's just an LLM. You ask a question, it returns an answer; it's really not much more than that. The productivity increase comes from people getting fast answers to their questions and quick templates for written work.
I'm finding it funny how many people disagree with the line about information retrieval. We get a ton of untrustworthy information all the time; we know there's a chance of it being wrong, and we weigh the consequences against the extra effort it would take to verify. If I'm about to stake my career on a fact, I'm not going to rely on ChatGPT, but if I need to see some popular UI frameworks, then ChatGPT is fine. If it's wrong, that's fine; there's nothing riding on it, I just move on and check the next one.
We get a ton of untrustworthy information all the time and we know there is a chance of it being wrong and we weigh the consequences vs the extra effort it will take to verify
I think we have an easier job determining right away if humans are lying about something, and humans generally own up to being unsure about things. On the other hand, AI seems to be designed with the intention of sounding infallible, as it doesn't even give an estimate of how sure it is that its information is correct.
If a human in an organisation lies/says incorrect things a lot, they get fired.
If I'm about to stake my career on a fact I'm not going to rely on ChatGPT but if I need to see some popular UI frameworks then ChatGPT is fine. If it's wrong that's fine, there is nothing riding on it, I just move on and check the next one.
So it sounds like AI is only really useful for your ~~line~~ wider area of work, that being anything programming-focused, and therefore you're thinking of a very specific type of information to fetch: templates to build off of. I hope you can see why it was bad to generalise in your initial response; someone working with historical or political facts, a structural engineer working on bridges, or a teacher can't rely on GPT to get them the info they work with.
I think we have an easier job determining right away if humans are lying about something
I think we're equally bad at detecting lying from humans and from AI. The people getting fooled by AI answers would click the top result on Google and get fooled there as well, so I don't see it as a massive decrease in info quality, even though I can admit it is a decrease.
So it sounds like AI is only really useful for your ~~line~~ wider area of work
There are plenty of jobs where it will be more or less useful, but my claim is that it's still generally useful for the majority of professions/people as a quick way to retrieve info via natural-language querying. The results are mostly accurate and can include sources if you ask. That's good enough for most people and most questions. Sure, if you need to dig through docs or reference the exact paper, then you can search Google and get it yourself.
I don't think teachers are a good example; teachers use it all the time and seem to really like it. Idk what structural engineers do, so I can't really comment. But even if the engineer has no use for it, I still say it's a good feature, because not every feature of an operating system has to be used by 100% of people. Windows screen reader is a good feature but I don't use it. The share button is a good feature even if I don't use it. CarPlay is a good feature even if I don't use it, etc.
It doesn’t retrieve information; it’s a text prediction based on what “might” sound like a correct response. It makes shit up and doesn’t increase productivity. That has literally been proven countless times.
Here’s the latest example: https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-only-make-their-jobs-harder/
And that’s not even touching on the environmental impact, the oligarchs pushing propaganda into these clankers, or them generating porn. See Grok.
every phone
Not every one. I have GrapheneOS on my phone and Linux on the computer, like some nerds here. And there hasn't been any assistant popping up on mine...
Also, I don't think whether that's going to be built into any major operating system is much of an opinion. That's more a fact 😉
But yeah, whether that's useful... I've read all kinds of opinions on that. I think we need more factual data on user efficiency. I'm positive we'll get some more studies on that.
I am so confused as to how the majority of people view this community.
This post right now shows -12 vote count. So does that mean this is a popular opinion..? Or do the majority of people not know how this works?
Because users can smell a shill post a mile away
It could be traffic from outside this community, so they haven't read the rules in the sidebar. Or maybe people just hate Copilot so much it's a subconscious reaction lol.
Copilot is horrible trash. I've tried to use it, but holy crap it's frustrating. Why, when I have a spreadsheet open and click the Copilot button inside the spreadsheet and ask it to do specific things on that spreadsheet, does it tell me to upload it? IT'S LITERALLY OPEN IN THE PROGRAM I'M TYPING INTO.
It's useless garbage and I won't use it anymore. It slows down all work; I HAVE to review what it does... why? Why not just do it myself and better myself by learning things, instead of relying on failing software and still having to review and fix its outputs?
No thanks.
I think that might be a bug. I tested it just now by opening a spreadsheet, pressing the Copilot button, and asking something about the sheet, and it returned the answer correctly and didn't ask for it to be uploaded.
Tell it to do something to the spreadsheet; it won't. Regardless, we've already cancelled the sub to Copilot. It's a waste of time and money, and costs are going up while the use cases are just... terrible, since you still need to review everything it puts out because it can't be trusted.
Heck, even Microsoft basically says don't use it, right in their terms:
Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.
Don't rely on it, it's a risk, not intended for real work. It's a toy... and a useless one at that.
I agree for Excel; I think Copilot in spreadsheets is the wrong tool for the job. It's just a bad use of the tech. I also think the price of an enterprise sub is way too expensive for what you get.