submitted 4 days ago by rain_lover@lemmy.ml to c/asklemmy@lemmy.ml

I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they "put together" that are clearly ChatGPT generated, with no shame. They tell us that if we aren't doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

50 comments
[-] Godnroc@lemmy.world 9 points 3 days ago

So far it's a glorified search engine, which it is mildly competent at. It just speeds up collecting the information I would have gathered anyway, and then I can get to sorting useful from useless faster.

That said, I've seen emails from people that were written with AI and it instantly makes me less likely to take them seriously. Just tell me what the end goal is and we can discuss how to best get there, instead of regurgitating some slop that wouldn't get us there in the first place!

[-] UnspecificGravity@piefed.social 16 points 4 days ago

The most technically illiterate leaders are pushing the hell out of using it for things that don't make sense, while the workers who know what they are doing are finding some limited utility.

Our biggest concern is that people are going to be using it for the wrong stuff and failing to account for the errors and limitations.

[-] muxika@lemmy.world 9 points 3 days ago* (last edited 3 days ago)

I feel like giving AI our information on a regular basis is just training AI to do our jobs.

I'm a teacher and we're constantly encouraged to use Copilot for creating questions, feedback, writing samples, etc.

You can use AI to grade papers. That sure as shit shouldn't happen.

[-] sunbeam60@feddit.uk 3 points 3 days ago

I sat next to a teacher the other day who did just that. Every single paper was graded by an AI. It actually shocked me.

[-] Crotaro@beehaw.org 2 points 2 days ago* (last edited 2 days ago)

Disclaimer: I only started working at this company about three weeks ago, so this info may not be as accurate as I currently think it is.

I work in quality management and recently asked my boss what the current stance on AI is, since he mentioned quite early that he and his colleagues sometimes use ChatGPT and Copilot in conjunction to write up some text for process descriptions or info pages. They use it in research tasks, or, for example, to summarize large documents like government regulations, and they very often use it to rephrase texts when they can't think of a good way to word something. From his explanation, the company consensus seems to be that everyone has access to Copilot via our computers and if someone has, for example, a Kagi or Gemini or whatever subscription, we are absolutely allowed and encouraged to utilize it to its full potential.

The only rules seem to be to never blindly trust the AI output and to not feed it company-sensitive information (or that of our suppliers/customers).

[-] PonyOfWar@pawb.social 16 points 4 days ago* (last edited 4 days ago)

My company has 2 CEOs. One of them doesn't ever really talk about AI. The other one is personally obsessed with the topic. His picture on Teams is AI generated and every other day, he posts some random AI tutorial or news into work channels. His client presentations are mostly written by ChatGPT. But luckily, nothing is being forced on us developers. The company is very hands-off in general (some may say disorganized) and we can pretty much use any tools and methods we choose, as long as we deliver good results. I personally use AI only occasionally, mostly for quick prototyping in languages and frameworks I'm unfamiliar with. None of the other devs are very enthusiastic about AI either.

[-] clay_pidgin@sh.itjust.works 12 points 4 days ago

Our devs are implementing some ML for anomaly detection, which seems promising.

There's also an LLM with MCP etc. that is writing the pull requests and some documentation at least, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don't mind. Stuff like "make a graph of usage on weekdays" and it includes 6 days some weeks. They generated a monthly report for themselves, and it made up every scrap of data, and the customer missed the little note at the bottom where the damn thing said "I can regenerate this report with actual data if it is made available to me".

[-] ragas@lemmy.ml 4 points 3 days ago

As someone who has done various kinds of anomaly detections, it always seems promising until it hits real world data and real world use cases.

There are some widely recognised papers in this field, just about this issue.

[-] clay_pidgin@sh.itjust.works 3 points 3 days ago

Once an anomaly is defined, I usually find it easier to build a regular alert for it. I guess the ML or LLM would be most useful to me in finding problems that I wasn't looking for.
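
That split can be sketched with a toy example (Python, with made-up latency numbers): once the anomaly is defined, a fixed rule is enough, while a simple z-score scan is the kind of thing that surfaces problems you weren't explicitly looking for.

```python
import statistics

def rule_alert(latency_ms: float, threshold: float = 500.0) -> bool:
    """Known failure mode: a plain threshold check is all you need."""
    return latency_ms > threshold

def zscore_outliers(samples: list[float], cutoff: float = 2.0) -> list[float]:
    """Unknown failure modes: flag points far from the sample mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) / stdev > cutoff]

latencies = [120.0, 130.0, 125.0, 118.0, 122.0, 127.0, 900.0]
print(rule_alert(900.0))           # True
print(zscore_outliers(latencies))  # [900.0]
```

The threshold, cutoff, and data here are invented; real ML-based detectors are more elaborate, but the division of labor is the same.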

[-] lichtmetzger@discuss.tchncs.de 4 points 3 days ago

I work for a small advertising agency as a web developer. I'd say it's mixed. The writing team is pissed about AI, because of the SEO-optimized slop garbage that is ruining enjoyable articles on the internet. The video team enjoys it, because it's really easy to generate good (enough) looking VFX with it. I use it rarely, mostly for mundane tasks and boilerplate code. I enjoy using my actual brain to solve coding problems.

Customers don't have a fucking clue, of course. If we told them that they need AI for some stupid reason, they would probably believe us.

The boss is letting us decide and not forcing anything upon us. If we believe our work is done better with it, we can go for it, but we don't have to. Good boss.

[-] Foofighter@discuss.tchncs.de 3 points 3 days ago

They just hopped onto the bandwagon pushing for Copilot and SharePoint. Just in time, as some states are switching to open source.

[-] sideponcho69@piefed.social 13 points 4 days ago

I can only speak for my use of it in software development. I work with a large, relatively complex CRUD system so take the following as you will, but we have Claude integrated with MCPs and agent skills and it's honestly been phenomenal.

Initially we were told to "just use it" (Copilot at the time). We essentially used it as an enhanced Google search. It wasn't great. It never had enough context, and as such the logic it produced would not make sense, but it was handy for fixing bugs.

The advent of MCPs and agent skills really brings it to another level. It has far more context. It can pull tickets from Jira, read the requirements, propose a plan and then implement it once you've approved it. You can talk it through, ask it to explain some of the decisions it made and alter the plan as it's implemented. It's not perfect, but what it can achieve when you have MCPs, skills, and md files all set up is crazy.

The push for this was from non-tech management who are most definitely driven by hype/FOMO. So much so they actually updated our contracts to include AI use. In our case, it paid off. I think it's a night and day difference between using base Copilot to ask questions vs using it with context sources.

[-] rain_lover@lemmy.ml 11 points 4 days ago

What happens when Anthropic triples their prices and your company is totally dependent on them for any development work? You can't just stop using it, because no in-house developers, if there are even any left, will understand the codebase.

[-] sideponcho69@piefed.social 8 points 4 days ago* (last edited 4 days ago)

To the same point as lepinkainen, we are fully responsible for the code we commit. We are expected to understand what we've committed as if we wrote it ourselves. We treat it as a speed booster. The fact that Claude does a good job at maintaining the same structure as the rest of the codebase makes it no different than trying to understand changes made by a co-worker.

On your topic of dependency, the same point as above applies. If AI support were to drop tomorrow, we would be slower, but the work would get done all the same.

I do agree with you though. I can tell we are getting more relaxed with the changes Claude makes and putting more blind trust in it. I'm curious as to how we will be in a year's time.

As a disclaimer, I'm just a developer, I've no attachment to my company. This is just my take on the subject.

[-] Diddlydee@feddit.uk 13 points 4 days ago

It has absolutely no involvement anywhere, which is good.

[-] Lettuceeatlettuce@lemmy.ml 10 points 4 days ago

I work in IT, many of the managers are pushing it. Nothing draconian, there are a few true believers, but the general vibe is like everybody is trying to push it because they feel like they'll be judged if they don't push it.

Two of my coworkers are true believers in the slop; one of them is constantly saying he's been "consulting with ChatGPT" like it's an oracle or something. Ironically, he's the least productive member of the team. It takes him days to do stuff that takes us a few hours.

[-] teawrecks@sopuli.xyz 10 points 4 days ago

I'm in software. The company gives us access and broadly states they'd like people to find uses for it, but no mandates. People on my team occasionally find uses for it, but we understand what it is, what it can do, and what it would need to be able to do for it to be useful. And usually it's not.

If I thought someone had sent me an email written with AI, I would ask them politely but firmly to never waste my time like that again. I find using AI for writing email highly disrespectful. If I worked at a company making a habit out of that, I would leave.

[-] morgan_423@lemmy.world 3 points 3 days ago

I use Excel at work, not in a traditional accounting sense, but my company uses it as an interface with one of our systems I frequently work with.

Rather than tediously search the main Excel sheets that get fed into that system for all of the data fields I have to fill in, I made separate Excel tools that consolidate all of that data, then use macros to put the data into the correct fields on the main sheets for me.

Occasionally I'll have to add new functionality to that sheet, so I'll ask AI to write the macro code that does what I need it to do.

Saves me from having to learn obscure VBA programming to perform a function that I do during .0001% of my work time, but that's about the extent of it. For now.
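
The consolidation step could look something like this, sketched here in Python rather than VBA, with invented column names and data:

```python
import csv
import io

# Hypothetical export from the main sheet; columns are made up.
SOURCE = """\
item_id,site,qty
A1,north,10
A2,south,4
A1,south,6
"""

def consolidate(text: str) -> dict[str, int]:
    """Sum quantities per item so one consolidated value can be
    written back into the correct field on the main sheet."""
    totals: dict[str, int] = {}
    for row in csv.DictReader(io.StringIO(text)):
        totals[row["item_id"]] = totals.get(row["item_id"], 0) + int(row["qty"])
    return totals

print(consolidate(SOURCE))  # {'A1': 16, 'A2': 4}
```

The VBA the commenter describes would do the same grouping against live worksheet ranges; this is just the shape of the logic.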

Of course most of what I do is white collar computer work, so I'm expecting that my current job likely has a two-year-or-less countdown on it before they decide to use AI to replace me.

[-] prettygorgeous@aussie.zone 7 points 3 days ago

I vibe code from time to time because people sometimes demand quick results in an unachievable timeline. In saying that, I may use a LLM to generate the base code that provides a basic solution to what is needed and then I go over the code and review/refactor it line by line. Sometimes if time is severely pressed and the code is waaaay off a bare minimum, I'll have the LLM revise the code to solve some of the problem, and then I review, adjust, amend where needed.

I treat AI as a tool and (frustrating and annoying) companion in my work, but ultimately I review and adjust and amend (and sometimes refactor) everything. It's kind of similar to reading code samples from websites, copying them if you can use them, and refactoring them for your app, except tailored a bit more to what you already need.

By the same token, I also prefer to do it all myself if I can, so if I'm not pressed for time, or I know it's something that I can do quickly, I'll do it myself.

[-] BannedVoice@lemmy.zip 11 points 4 days ago

The organization I work for uses it, but they're taking a very cautious approach: we are instructed to double- and triple-check everything AI-generated and to only use specific tools they approve for work-related matters, so as not to train LLMs on company data, and they're slowly rolling AI out in specific areas before it's more widely adopted.

[-] KingGordon@lemmy.world 8 points 4 days ago

Double and triple checking everything takes longer than just doing the work.

[-] Tyrq@lemmy.dbzer0.com 11 points 4 days ago

Just use it to generate the kind of work he does, so that you can prove his own worthlessness.

[-] NomenCumLitteris@lemmy.ml 6 points 3 days ago

My subordinate is quite proud of the code AI produces based off his prompts. I don't use AI personally, but it is surely a tool. I don't know why one would be proud of work they didn't do and can't explain, though. I have to manage the AI use to a "keep it simple" level. Use AI if there is a use case, not just because it is there to be used...

[-] GissaMittJobb@lemmy.ml 9 points 4 days ago

We get encouraged to try out AI tools for various purposes to see where we can find value out of them, if any. There are some use-cases where the tech makes sense when wielded correctly, and in those cases I make use of it. In other cases, I don't.

So far, I suspect we may be striking a decent balance. I have, however, noticed a concerning trend of people copy-pasting unfiltered slop as a response to various scenarios, which is obviously not helpful.

[-] thatradomguy@lemmy.world 5 points 3 days ago

Dumbass senior contract person and program managers are all for using Copilot, and I've caught several people using ChatGPT as a search engine, or at least that's what they tell me they think it is.

[-] HobbitFoot@thelemmy.club 7 points 3 days ago

Some people are using it for work purposes when there isn't a major policy on it.

You can tell because the work is shit.

[-] VinesNFluff@pawb.social 5 points 3 days ago* (last edited 3 days ago)

Surprisingly reasonable?

I was terrified that entering the corporate world would mean being surrounded by people who are obsessed with AI.

Instead like... The higher-ups seem to be bullish on it and how much money it'll make them (... And I don't mind because we get bonuses if the corp does well), but even they talk about how "if you just let AI do the job for you, you'll turn in bad quality work" and "AI just gets you started, don't rely on it"

We use some machine learning stuff in places, and we have a local chatbot model for searching through internal regulations. I've used Copilot to get some raw ideas which I cooked up into something decent later.

It's been a'ight.

[-] some_kind_of_guy@lemmy.world 4 points 3 days ago* (last edited 3 days ago)

This is the way. I honestly don't care how the execs think about ai or if they use it themselves, but don't force its usage on me. I've been touching computers since before some of them were born. For me it's just one extra tool that gets pulled out in very specific scenarios and used for a very short amount of time.

It's like the electric start on my snowblower - you don't technically need it, and it won't do the work for you, (so don't expect it to) but at the right time it can be extremely nice to have.

[-] buttwater@hexbear.net 8 points 4 days ago

The incompetent coworkers who usually ask me to do things for them (data entry etc.) are asking AI first. One of the boomer managers is regularly printing out Google's AI search result summaries as a basis for research and to write emails for him, which is impressively irresponsible. No top-down expectation to use it, which is nice.

[-] hamid@crazypeople.online 2 points 3 days ago

I'm a consultant, so I'm doing a lot of different things day to day. We use it to track meetings with the Copilot facilitator and for meeting recaps and next steps. It is pretty helpful in that regard and often matches the tasks I write for myself during the meeting.

I also have to support a wide range of different systems and I can't be an expert in all of them, so it is helpful for generating short scripts and workflows, whether it is PowerShell one day, bash the next, Exchange management, etc. I do know PowerShell and bash scripting decently well, and the scripts often need to be fixed, but it is good at generating templates and starter scripts I flesh out as the need arises. At this point I've collected many of the useful ones I need in my repos and reuse them pretty often.

Lastly one of the companies I consult for uses machine learning to design medical implants and design and test novel materials and designs. That is pretty cool and I don't think they could do some of the stuff they're doing without machine learning. While still AI, it isn't really GPT style generative AI though, not sure if that is what you're asking.

[-] Pipster@lemmy.blahaj.zone 9 points 4 days ago

Intolerable

[-] chronicledmonocle@lemmy.world 1 points 2 days ago

My company added an AI chatbot to our web site, but beyond that we generally are anti-AI.

[-] Appoxo@lemmy.dbzer0.com 2 points 3 days ago

The order is:
Use whatever tool that is not malicious and doesn't attack customer data.

Most use (IMO) way too much AI. The first result (the Google AI answer) is trusted and done.
No research done beyond that.

I purposefully blocked the AI answer in uBlock. I don't want any of that.
Besides that, I use it on occasion to look for a word or to reword my search query if I don't find or know what I am looking for.
Very useful for the "What was the name of X again? It does Y and Z" queries.
Also for PowerShell scripting, because it can give me examples of using it.

But every answer is double- and triple-checked for accuracy.
Seen too much news about made-up answers.

At home I usually only use it for bash scripting because I can't be bothered to learn that.

[-] Unquote0270@programming.dev 8 points 4 days ago

Not quite that extreme where I am, but it is being thrust into any kind of strategy scenario with absolutely nothing to back it up. They are desperate to incorporate it.

[-] Passerby6497@lemmy.world 8 points 4 days ago

I use ChatGPT when I care to, and while I was given a subscription by work, I'm not actively encouraged to use it. I really only use it for researching problems that Google search is too SEO-poisoned to help me with, or for debugging scripts. Past that it doesn't have much professional use for me, given how much time I spend validating output and insulting the AI for hallucinations and for just generally being terrible at moderate tasks.

Basic data interpretation can be pretty great though. I've had it find a couple problems I missed after having it parse log files.
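
For comparison, once you know what the problem looks like, that kind of log triage can be made deterministic; a sketch with fabricated log lines:

```python
import re
from collections import Counter

# Hypothetical log excerpt, loosely syslog-shaped.
LOG = """\
2025-12-10 04:12:01 INFO  backup job started
2025-12-10 04:12:09 ERROR disk quota exceeded on /var/backups
2025-12-10 04:12:09 ERROR disk quota exceeded on /var/backups
2025-12-10 04:13:44 WARN  retrying upload (attempt 2)
2025-12-10 04:15:02 INFO  backup job finished
"""

def summarize_problems(text: str) -> Counter:
    """Count distinct ERROR/WARN messages so repeats stand out."""
    pattern = re.compile(r"^\S+ \S+ (ERROR|WARN)\s+(.*)$", re.MULTILINE)
    return Counter(f"{lvl}: {msg}" for lvl, msg in pattern.findall(text))

for line, count in summarize_problems(LOG).most_common():
    print(count, line)
```

An LLM is handy for the first pass over unfamiliar logs; a fixed scan like this is what you'd keep once the pattern is understood.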

[-] golden_zealot@lemmy.ml 7 points 4 days ago

The owner of the software company I work at openly said to a room full of multiple clients that he believed AI is a bubble and is going to fail, but nonetheless let them know the business would be adding an optional AI feature to one aspect of the software product for those who want it. And even at that, it's not an LLM or anything; it's intended to speed up the re-creation of specific types of diagrams based on an input of the original diagrams.

There is no requirement or suggestion to use AI as an employee at my company, personal preference for how each person works is generally respected and everything goes through a few layers of review regardless. All the management cares about is that the work gets done somehow.

There's one dev who uses it for 1 or 2 things on rare occasions, no one else ever uses it.

[-] TubularTittyFrog@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

It doesn't exist, but I work for a company that does real work. It doesn't bullshit.

this post was submitted on 12 Dec 2025
176 points (99.4% liked)
