Block ditches 4,000 staff, because AI can do their jobs
(www.theregister.com)
Can we start placing bets on when we find out that the "AI tools" they're using are just sweat shop workers in Bangladesh processing invoices?
Eh, I know this is the anti-AI instance, but reading and interpreting things like that is something you can reliably get AI to do correctly 90% of the time.
Until you realize that in the other 10% it missed a decimal point.
Again, that's what the 6000 remaining employees would be for.
Because 90% accuracy is acceptable for financial institutions ...
I've got an idea. If 90% of AI's output is accurate, just have humans review the 10% that will be inaccurate.
(Yes I am an AI expert, how did you know)
Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?
That's easy. You just get a second AI to ask the first AI whether its responses were accurate or not.
(/s)
This is unironically what I've seen people try to do, except they assume the second AI is correct.
Unrelated, but this is how GANs work to some extent. GANs train during the back-and-forth though, while LLMs do not.
That's basically how thinking models work too, isn't it? And probably the new GPT-5 router, which everybody hates...
Not exactly. Thinking models just generate extra tokens into the context window to steer the model closer to your target. GANs have two models that compete against each other, each training the other, with the goal of one (or both) of those models improving over time.
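Since GANs came up: here's a toy 1-D sketch of that two-models-competing idea, with hand-rolled gradients. Everything here (the data distribution, hyperparameters, variable names) is illustrative, not any real framework's API; real GANs use neural networks, not two linear models.

```python
# Toy "GAN": a generator and a discriminator train against each other.
# The generator learns to produce samples near the "real" data mean,
# purely by trying to fool the discriminator. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0  # "real" data clusters around 4.0

# Generator: fake = w_g * z + b_g, turns noise z into a sample.
w_g, b_g = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w_d * x + b_d), probability x is real.
w_d, b_d = 0.0, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for step in range(2000):
    z = rng.normal()                      # noise input
    fake = w_g * z + b_g                  # generator output
    real = REAL_MEAN + 0.1 * rng.normal() # sample of "real" data

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w_d * x + b_d)
        grad = p - label                  # d(logistic loss)/d(logit)
        w_d -= lr * grad * x
        b_d -= lr * grad

    # Generator step: push d(fake) -> 1, i.e. fool the discriminator.
    p = sigmoid(w_d * fake + b_d)
    grad = (p - 1.0) * w_d                # chain rule through d(.)
    w_g -= lr * grad * z
    b_g -= lr * grad

# After training, the generator's samples should sit near REAL_MEAN,
# even though it never saw the real data directly.
fakes = [w_g * rng.normal() + b_g for _ in range(1000)]
print(round(float(np.mean(fakes)), 1))
```

The point of the sketch is the structural one from the comment: both models update during the back-and-forth, whereas an LLM answering "was that accurate?" about another LLM's output trains nothing.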
No, it's really not. Thus the 6000 remaining employees.
(Assuming this is a significant part of their business)
Hell, the USPS has been using machine learning (yes, a kind of AI, but not the kind they're implying) for years to do that kind of thing.
Kind of
They've had several address resolution centers around the country, where reviewers look at mail and figure out its address. They don't physically handle the mail; it's an image on a screen.
Iirc they've been doing it this way since the 70s
No? For everything they can, they just use OCR and send it on its way, often without a human ever having seen it. If the handwriting is bad enough that the machine can't figure it out, that's where the human reviewers come in.
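That split (machine reads most of the mail, humans only get the images the machine isn't sure about) boils down to a confidence-threshold router. A minimal sketch, assuming a made-up OCR result type and a made-up threshold; nothing here is the USPS's actual system:

```python
# Sketch of confidence-based routing: OCR results above a threshold go
# straight through; low-confidence reads get escalated to a human review
# queue. The OcrResult values are canned stand-ins for an OCR engine.
from dataclasses import dataclass

@dataclass
class OcrResult:
    text: str
    confidence: float  # 0.0-1.0, as reported by the (hypothetical) OCR engine

CONFIDENCE_THRESHOLD = 0.9  # below this, a human looks at the image

def route(result: OcrResult) -> str:
    """Decide whether a scanned address is sorted automatically or reviewed."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return "auto"   # machine-sorted; no human ever sees it
    return "human"      # image goes to a reviewer's screen

reads = [
    OcrResult("123 MAIN ST SPRINGFIELD IL 62701", 0.98),
    OcrResult("1?3 M4IN ST", 0.41),  # bad handwriting, machine unsure
]
decisions = [route(r) for r in reads]
print(decisions)  # ['auto', 'human']
```

The design point is that the threshold, not a person, decides who sees what: raise it and humans review more mail, lower it and more marginal reads sail through unseen.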