top 12 comments
[-] quetzaldilla@lemmy.world 8 points 17 hours ago

I quit my job in public accounting for many reasons, but the primary one was the forceful adoption of LLMs to replace associates.

I told the dimwits at the top that it was a mistake, because LLMs are incompetent even when the information fed to them is perfect, and that was rarely the case in practice.

Our ultra wealthy clients were notorious for giving us the most incomplete and asinine information, and it often took someone with decades of experience to decipher what the fuck their personal assistants were even talking about.

They went ahead anyway because of the high cost of wages, of course, and I made my exit because I did not wish to be complicit in such a monumental mistake.

Lmfao, the LLM they laid associates off for and paid half a million dollars for made up fake ledger accounts when the books didn't reconcile, and none of the dumbasses left noticed in time because they hadn't done associate-level work in decades.

It also lied all the time, even when you asked it not to.

The damage was done and the biggest clients started leaving, so they begged us all to come back but I got obsessed with baking bread and I ain't about to neglect my sourdough starters to help a group of people who would lose a battle of wits against yeast.

[-] SheeEttin@lemmy.zip 14 points 1 day ago* (last edited 1 day ago)

I would not trust a text generator to do math, no. It's wholly the wrong tool for the job. Nor do I trust them to be up to date and compliant with tax code. And I really don't trust them to take legal responsibility for their output.

[-] vermaterc@lemmy.ml 1 points 23 hours ago

State-of-the-art LLM agents do not perform calculations, they call external tools to do that.
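To make that concrete, here is a minimal sketch of tool calling, assuming an OpenAI-style chat-completions API; the `reconcile_balance` tool and its parameters are made up for illustration. The model only decides when and how to call the tool, while the arithmetic itself runs in ordinary code.

```python
# Sketch: an agent hands arithmetic to an external tool instead of "doing math" itself.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "reconcile_balance",  # hypothetical tool, not a real library call
        "description": "Sum debits and credits and return the difference.",
        "parameters": {
            "type": "object",
            "properties": {
                "debits": {"type": "array", "items": {"type": "number"}},
                "credits": {"type": "array", "items": {"type": "number"}},
            },
            "required": ["debits", "credits"],
        },
    },
}]

def reconcile_balance(debits, credits):
    # The actual calculation happens here, deterministically.
    return sum(debits) - sum(credits)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Do these reconcile? Debits 120.50 and 79.50, credits 200.00"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
print(reconcile_balance(**args))  # 0.0 means the entries balance
```

Of course, the model still has to pick the right tool and pass the right numbers, which is exactly the weak link the reply below points out.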

[-] audaxdreik@pawb.social 4 points 22 hours ago

You're describing neurosymbolic AI, a combination of neural network (LLM) models with symbolic tools. Gary Marcus wrote an excellent article on it recently that I recommend giving a read, How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI.

The primary issue I see here is that you're still relying on the LLM to reasonably understand and invoke the external tools. It needs to parse the data and understand what's important in order to feed it into those tools, and as has been stated many times, LLMs do not truly "understand" anything; they infer things statistically. I still do not trust them to be statistically accurate and perform without error.

[-] lordnikon@lemmy.world 4 points 22 hours ago

The only good thing I trust an LLM to do is build an official-looking document from the 5 or 6 bullet points I give it. Even then I'm going to proofread it a few times just to make sure it didn't add anything stupid.

[-] geneva_convenience@lemmy.ml 3 points 20 hours ago

Cooking the books AI style

Compounding tasks like accounting, where operations sound easy but create a chain of counter-entries and balances that need to be organized by account, are none of AI's business until it can prove that multiple sequential steps stay above 99% accuracy and the checksum of the accounts is balanced.

Sequential steps compound the error: at 99% accuracy per operation, a chain of six is only about 94% accurate overall, and a chain of ten is already down around 90%. Put differently, roughly one closing in ten comes out wrong somewhere along the line.
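A quick back-of-the-envelope sketch of that decay, assuming each step is independently 99% accurate (the step counts are only illustrative):

```python
# Probability that every step in a chain comes out right,
# assuming each step is independently 99% accurate.
def chain_accuracy(per_step: float, steps: int) -> float:
    return per_step ** steps

for steps in (6, 10, 50, 300):
    print(f"{steps:>3} steps: {chain_accuracy(0.99, steps):.1%} chance of a clean run")
# 6 steps ~94%, 10 ~90%, 50 ~61%, 300 ~5% -- and a month-end close has far more entries than that.
```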

How many different entries does a company post over a month-end close?

Now this wouldn't be an issue if it could balance the consolidated statements and find where we are missing entries or have misallocations. That, sir, is why we pay someone with experience.

[-] Treczoks@lemmy.world 2 points 23 hours ago

Right after a story about how AI-developed software deleted the main database and lied about it. Really builds trust and confidence in AI-based projects.

[-] JumpyWombat@lemmy.ml 1 points 1 day ago

I do not believe that LLMs will ever be able to replace humans in tasks designed for humans. The reason is that human tasks require tacit knowledge (=job experience) and that stuff is not written down in training material.

However, we will start to have tasks for LLMs pretty soon. It was already observed that LLMs work better on stuff produced by other LLMs.

[-] vermaterc@lemmy.ml 2 points 23 hours ago

To be fair, not all of an LLM's knowledge comes from training material. The other way is to provide context along with the instructions.

I can imagine someone someday developing a decent way for LLMs to write down their mistakes in a database, plus some clever way to recall the most relevant memories when needed.
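A minimal sketch of that idea: log mistakes to a store, then recall the most relevant ones for a new task and prepend them to the prompt. Real systems would use embeddings and a vector database; naive word-overlap similarity and the example entries here are just stand-ins to keep it self-contained.

```python
# Sketch: a "mistake journal" with crude relevance-based recall.
from collections import Counter
import math

memory: list[str] = []  # past mistakes, one note per entry

def remember(mistake: str) -> None:
    memory.append(mistake)

def _similarity(a: str, b: str) -> float:
    # Bag-of-words cosine similarity; a real system would use embeddings.
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def recall(task: str, top_k: int = 2) -> list[str]:
    return sorted(memory, key=lambda m: _similarity(task, m), reverse=True)[:top_k]

remember("Invented a ledger account when the trial balance did not reconcile.")
remember("Used last year's tax rate for the current filing.")
print(recall("reconcile the trial balance for July"))
# -> the reconciliation mistake surfaces first; prepend it to the next prompt.
```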

[-] JumpyWombat@lemmy.ml 1 points 23 hours ago

You sort of described RAG. It can improve alignment, but the training is hard to overcome.

See Grok, which bounces from "woke" results to "full nazi" without hitting the midpoint desired by Musk.

[-] yogthos@lemmy.ml 1 points 23 hours ago

There are already existing approaches tackling this problem: https://github.com/MemTensor/MemOS

this post was submitted on 22 Jul 2025
17 points (84.0% liked)

Technology
