174 points · submitted 3 days ago by rain_lover@lemmy.ml to c/asklemmy@lemmy.ml

I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they "put together" that are clearly ChatGPT-generated, with no shame. They tell us that if we aren't doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

[-] ZWQbpkzl@hexbear.net 4 points 3 days ago

Fortunately it's not that bad here, but the people who are using it do get praised, while being a massive burden on everyone who has to review the code or, worse, the documents.

I did see one clever use of it that basically replaces most front-end devs with AI:

  • client asks the LLM a question about our data
  • the LLM generates code to query our data and display charts
  • client sees the results

It all runs on just mistral:8b. Very flexible compared to having front-end devs constantly iterate on a UI monstrosity meant to serve every single client's needs. Of course, this assumes the AI is writing the query correctly.
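
Rough sketch of the flow in Python, assuming an Ollama-hosted model and a read-only SQLite connection; the table, prompt, and endpoint here are made up for illustration, not our actual stack:

```python
import sqlite3
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama server
MODEL = "mistral"  # stand-in for whatever model tag is actually deployed


def question_to_sql(question: str) -> str:
    """Ask the LLM to turn a client question into a single SELECT statement."""
    prompt = (
        "You write SQLite queries. The table sales(region TEXT, month TEXT, revenue REAL) exists.\n"
        f"Return only one SELECT statement answering: {question}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip().rstrip(";")


def run_readonly(sql: str, db_path: str = "analytics.db"):
    """Execute the generated SQL against a read-only connection."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Refusing to run a non-SELECT statement")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)  # read-only open
    try:
        return conn.execute(sql).fetchall()  # rows go off to the charting layer
    finally:
        conn.close()


if __name__ == "__main__":
    sql = question_to_sql("Which region had the highest revenue last month?")
    print(sql)
    print(run_readonly(sql))
```

The charting layer just sits on top of whatever rows come back; the point is that the model only ever emits a query, it never touches the data directly.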

[-] rain_lover@lemmy.ml 3 points 3 days ago

So the LLM can run arbitrary code against your database? Or your clients can? Both sound scary as hell!

[-] lib1@hexbear.net 3 points 3 days ago

I can’t imagine the nightmare of trying to reproduce “incorrect data” when they just send you the prompt instead of the query.

[-] ZWQbpkzl@hexbear.net 2 points 2 days ago

That could be fixed by simply logging the prompt and code executed. Maybe also give each prompt/response a reference ID and demand that in tickets. The nightmare would be actually reading the code the AI generated.
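
A minimal sketch of what I mean, assuming a JSONL audit log (the file name and fields are just placeholders):

```python
import json
import time
import uuid
from pathlib import Path

LOG_FILE = Path("llm_audit.jsonl")  # hypothetical audit log location


def log_generation(prompt: str, generated_code: str) -> str:
    """Append the prompt and the code that actually ran, keyed by a short reference ID."""
    ref_id = uuid.uuid4().hex[:8]
    record = {
        "ref_id": ref_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "generated_code": generated_code,
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return ref_id  # surface this to the client so it ends up in the ticket
```

Then a ticket that says "ref a1b2c3d4 showed weird numbers" points you straight at the exact prompt and query.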

[-] ZWQbpkzl@hexbear.net 1 points 2 days ago

You're being silly. Clients can only prompt the AI, and the AI has restricted read-only permissions on the database. Slap on an execution timeout to cover the case where the AI writes an expensive query.
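
For illustration, roughly how that lockdown could look if the backend were Postgres accessed via psycopg2 (the role name, credentials, and the 5-second limit are assumptions, not our real config):

```python
import psycopg2

# Connect as a dedicated role that only has SELECT on the reporting schema,
# and cap how long any single statement may run. Names here are made up.
conn = psycopg2.connect(
    dbname="analytics",
    user="llm_readonly",                  # role granted SELECT only, no DML/DDL
    password="...",
    host="localhost",
    options="-c statement_timeout=5000",  # kill anything running longer than 5s
)
conn.set_session(readonly=True)           # belt and braces: read-only transactions


def run_generated_query(sql: str):
    """Run the LLM's SQL under the restricted session and hand back the rows."""
    with conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchall()
```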

The real concern is the AI getting a query subtly wrong and giving the client bad info. That gets "covered" by some flimsy disclaimer.

[-] Tabitha@hexbear.net 2 points 3 days ago

We're doing the same thing, but with Claude.

[-] ZWQbpkzl@hexbear.net 1 points 2 days ago

Self-hosted Claude? I think they're self-hosting Mistral.
