submitted 1 year ago* (last edited 1 year ago) by bahmanm@lemmy.ml to c/technology@lemmy.ml

It's not the first time a language/tool has been lost to the annals of the job market, e.g. VB6 or FoxPro. Previously, though, all such cases happened gradually, giving most people enough time to adapt to the changes.

I wonder what it's going to be like this time, now that the machine (with the help of humans, of course) can accomplish an otherwise risky, multi-month corporate project much faster. What happens to all those COBOL developer jobs?

Pray share your thoughts, especially if you're a COBOL professional and have more context around the implications of this announcement 🙏

[-] FoxBJK@midwest.social 24 points 1 year ago

Converting ancient code to a more modern language seems like a great use for AI, in all honesty. There aren't a lot of COBOL devs out there, but once it's Java, the number of coders available to fix/improve whatever ChatGPT spits out jumps exponentially!

The fact that you say that tells me that you don’t know very much about software engineering. This whole thing is a terrible idea, and has the potential to introduce tons of incredibly subtle bugs and security flaws. ML + LLM is not ready to be used for stuff like this at the moment in anything outside of an experimental context. Engineers are generally - and with very good reason - deeply wary of “too much magic” and this stuff falls squarely into that category.

[-] FoxBJK@midwest.social -5 points 1 year ago

All of that is mentioned in the article. Given how much it cost last time a company tried to convert from COBOL, don't be surprised when you see more businesses opt for this cheaper path. Even if it only converts half of the codebase, that's still a huge improvement.

Doing this manually is a tall order...

[-] sugar_in_your_tea@sh.itjust.works 11 points 1 year ago

And doing it manually is probably cheaper in the long run, especially considering that COBOL tends to power some very mission-critical tasks, like financial systems.

The process should be:

  1. set up a way to have part of your codebase in your new language
  2. write tests for the code you're about to port
  3. port the code
  4. go to 2 until it's done

If you already have a robust test suite, step 2 becomes much easier.

We're doing this process for a simpler task, going from Flow (JavaScript with types) to TypeScript, but I did larger transitions from JavaScript to Go and from Ruby to Python using the same strategy, and I've seen lots of success stories with other changes (e.g. C to Rust).

If AI is involved, I would personally use it only for step 2, because writing tests is tedious and usually pretty easy to review. However, I would never use it for both steps 2 and 3 because of the risk of introducing subtle bugs. LLMs don't understand the code; they merely spot patterns, and that's absolutely not what you want.
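
For illustration, here's roughly what step 2 could look like on a COBOL-to-Java port: a characterization test that pins the legacy behavior down before anything is rewritten. This is only a sketch assuming JUnit 5; `InterestCalculator` and `monthlyInterest` are hypothetical stand-ins for whatever the legacy routine actually does, and the expected values would be captured by running the real COBOL system.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;
import org.junit.jupiter.api.Test;

// Characterization tests: record what the legacy system actually does
// (quirks included), so the ported code is checked against observed
// behavior rather than against what anyone thinks it should do.
class InterestCalculatorTest {

    @Test
    void matchesLegacyOutputForTypicalBalance() {
        // Expected value recorded from the existing COBOL program:
        // 1000.00 * 0.0450 / 12 = 3.75
        BigDecimal result = InterestCalculator.monthlyInterest(
                new BigDecimal("1000.00"), new BigDecimal("0.0450"));
        assertEquals(new BigDecimal("3.75"), result);
    }

    @Test
    void matchesLegacyOutputForZeroBalance() {
        // Edge case: the legacy program returns zero rather than failing.
        BigDecimal result = InterestCalculator.monthlyInterest(
                BigDecimal.ZERO, new BigDecimal("0.0450"));
        assertEquals(new BigDecimal("0.00"), result);
    }
}

// Placeholder for the ported routine; in step 3 the real logic would be
// ported here, and the tests above decide whether the port is faithful.
class InterestCalculator {
    static BigDecimal monthlyInterest(BigDecimal balance, BigDecimal annualRate) {
        return balance.multiply(annualRate)
                .divide(new BigDecimal("12"), 2, RoundingMode.HALF_UP);
    }
}
```

Tests like these are also exactly the kind of code that's tedious to write but easy to review, which is why step 2 is the one place I'd consider letting an LLM help.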

[-] gravitas_deficiency@sh.itjust.works 8 points 1 year ago* (last edited 1 year ago)

Yeah, I read the article.

They’re MASSIVELY handwaving a lot of detail away. Moreover, they’re taking the “we’ll fix it in post” approach by suggesting “we can just run an armful of security analysis software on the code after the system spits something out”. While that’s a great sentiment, you (and everyone considering this approach) need to consider that complex systems are pretty much NEVER perfect. There WILL be misses. Add to this the fact that a ton of the organizations that still use COBOL are banks, which are generally considered fairly critical to the day-to-day operation of our society, and you can see why I am incredibly skeptical of this whole line of thinking.

I’m sure the IBM engineers who made the thing are extremely good at what they do, but at the same time, I have a lot less faith in the organizations that will actually employ the system. In fact, I wouldn’t be terribly shocked to find that banks would assign an inappropriately junior engineer to the task - perhaps even an intern - because “it’s as simple as invoking a processing pipeline”. This puts a truly hilarious amount of trust into what’s effectively a black box.

Additionally, for a good engineer, learning any given programming language isn’t actually that hard. And if these transition efforts are done in what I would consider to be the right way, you’d also have a team of engineers who know both the input and output languages such that they can go over (at the very, very least) critical and logically complex areas of the code to ensure accuracy. But since this is all about saving money, I’d bet that step simply won’t be done.

[-] IHeartBadCode@kbin.social 7 points 1 year ago

For those who have never worked on legacy systems: anyone who suggests “we’ll fix it in post” is asking you to do something that just CANNOT happen.

With the systems I code for, if something breaks, we’re going to court over it. There’s no “oh no, let’s patch it real quick”; it’s your ass that’s going to be cross-examined on why the eff your system just wrote thousands of legal contracts that cannot be upheld as valid.

Yeah, any article that suggests that “fix it in post” shit, especially the one linked here, should be considered trash. Whoever wrote it has no remote idea how deep in shit you can be if you start getting wild hairs up your ass about changing out parts of a critical system.

And that’s precisely the point I’m making. The systems we’re talking about here are almost exclusively banking systems. If you don’t think there will be some Fucking Huge Lawsuits over any and all serious bugs introduced by this - and there will be bugs introduced by this - you straight up do not understand what it’s like to develop software for mission-critical applications.

[-] PuppyOSAndCoffee@lemmy.ml 1 points 1 year ago

Trusting IBM engineers? Perhaps… Sales/marketing? Oooh, now I am skeptical.

[-] Kerfuffle@sh.itjust.works 4 points 1 year ago

Even if it only converts half of the codebase, that’s still a huge improvement.

The problem is it'll convert 100% of the code base, but (you hope) only 50% of it will actually be correct. Which 50%? That's left as an exercise for the reader. There's no human, no plan, and not necessarily any logic to how it was converted, so it can be very difficult to understand code like that, and you can't ask the person who wrote it why stuff is a certain way.

Understanding large, complex codebases one didn't write is a difficult task even under pretty ideal conditions.

[-] PuppyOSAndCoffee@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

First, odds are only half the code is actually used, and in that half, 20% has bugs that the system design obscures. It’s that 20% that tends to take the lion’s share of the modernization effort.

It wasn’t a bug then, even though it was already there, but it is a bug now.

[-] FoxBJK@midwest.social 0 points 1 year ago

The problem is it’ll convert 100% of the code base

Please go read the article. They specifically say they aren't doing this.

[-] Kerfuffle@sh.itjust.works 3 points 1 year ago

I was speaking generally. In other words, the LLM will convert 100% of what you tell it to, but only part of the result will be correct. That's the problem.

[-] FoxBJK@midwest.social 0 points 1 year ago

And in this case they're not doing that:

“IBM built the Code Assistant for IBM Z to be able to mix and match COBOL and Java services,” Puri said. “If the ‘understand’ and ‘refactor’ capabilities of the system recommend that a given sub-service of the application needs to stay in COBOL, it’ll be kept that way, and the other sub-services will be transformed into Java.”

So you might feed it your COBOL code and find it only converts 40%.
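
To make the "mix and match" idea concrete, a partially converted application might end up with some sub-services in plain Java and others still in COBOL behind an adapter. This is purely a hypothetical sketch: none of these names come from the article, and the bridge mechanism (JNI, a message queue, a mainframe connector, whatever) is deliberately left abstract, since the article doesn't describe IBM's actual plumbing.

```java
// One business-facing interface, two implementations: one delegating to a
// COBOL program the tool recommended keeping, one fully converted to Java.
public interface AccountService {
    long balanceInCents(String accountId);
}

// Hypothetical abstraction over however the COBOL side is actually reached.
interface LegacyBridge {
    long call(String program, String... args);
}

// Sub-service kept in COBOL: a thin Java adapter invokes the legacy program.
class CobolAccountService implements AccountService {
    private final LegacyBridge bridge;

    CobolAccountService(LegacyBridge bridge) {
        this.bridge = bridge;
    }

    @Override
    public long balanceInCents(String accountId) {
        return bridge.call("ACCTBAL", accountId); // runs the COBOL program
    }
}

// Sub-service the tool transformed into plain Java.
class JavaAccountService implements AccountService {
    @Override
    public long balanceInCents(String accountId) {
        // ...converted business logic would live here...
        return 0L;
    }
}
```

Callers only see `AccountService`, so individual sub-services can move from the COBOL side to the Java side one at a time.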

[-] Kerfuffle@sh.itjust.works 3 points 1 year ago

So you might feed it your COBOL code and find it only converts 40%.

I'm afraid you're completely missing my point.

The system gives you a recommendation, and that recommendation has a 50% chance of being correct.

Let's say the system recommends converting 40% of the code base.

The system converts that 40% of the code base, and 50% of the converted result is correct. That means 20% of your total codebase is now correct in the new language, another 20% is subtly wrong, and you don't know which parts are which.

The 50% is a number picked out of thin air. The point is that what you end up with has a good chance of being incorrect, and all the problems I mentioned originally apply.

[-] FoxBJK@midwest.social 1 points 1 year ago

One would hope that IBM is selling a product with a higher success rate than a coin flip, but the real question is long-term project cost. Given the example of a $700 million project, how much does the AI need to convert successfully before it pays for itself? If we end up with 20% of the original project successfully done by AI, that's massive savings.

The software's only going to get better, and in spite of how lucrative a COBOL career is, we don't exactly see a sharp increase in COBOL devs coming out of schools. Either we start coming up with viable ways to move on from this language, or we admit it's too essential to ever be forgotten and mandate that every CompSci student learn it before graduating.

[-] Kerfuffle@sh.itjust.works 2 points 1 year ago

One would hope that IBM’s selling a product that has a higher success rate than a coinflip

Again, my point really doesn't have anything to do with specific percentages. The point is that if some percentage of it is broken, you aren't going to know exactly which parts. Sure, some problems might be obvious, but some might be very rare edge cases.

If 99% of my program works, the remaining 1% might be enough to not only make the program useless but actively harmful.

Evaluating which parts are broken is also not easy. I mean, if there were already someone who understood the whole system intimately, an expert, then you wouldn't really need to rely on AI to port it.

Anyway, I'm not saying it's impossible, or necessarily not going to be worth it. Just that it is not an easy thing to make successful as an overall benefit. Also, issues like "some 1 in 100,000 edge case didn't get handled correctly" are very hard to quantify, since you don't really know about those problems in advance; they aren't apparent, and the effects can be subtle and occur much later.

Kind of like burning petroleum. Free energy, sounds great! Just as long as you don't count all the side effects of extracting, refining, and burning it.

[-] Bene7rddso@feddit.de 1 points 1 year ago

A random outcome isn't flipping a coin; it's rolling dice.

[-] HellAwaits@lemm.ee 8 points 1 year ago

Is ChatGPT magic to people? ChatGPT should never be used this way, because the potential for critical errors is astronomically high. IBM doesn't know what it's doing.

[-] socsa@lemmy.ml 3 points 1 year ago

I'm more alarmed at the conversation in this thread about migrating these COBOL apps to Java. Maybe I am the one who is out of touch, but what the actual fuck? Is it just because of the large Java hiring pool? If you are effectively starting from scratch, why in the ever-loving fuck would you pick Java?

[-] NightAuthor@beehaw.org 4 points 1 year ago

Java is the new COBOL; all the enterprises love it.

[-] LeylaLove@hexbear.net 2 points 1 year ago

This is what I'm thinking. Even the few people I know IRL who learned COBOL back in their starting days say it's a giant pain in the ass as a language. It's not like it's really gonna cost all that much compared to paying labor to rewrite it from the base, even if they don't end up using the output. Sure, correcting bad code can take a lot of time to do manually, but important code being in COBOL is a ticking time bomb; they gotta do something.

Counterpoint: if it ain’t broke, don’t fix it.

[-] FaceDeer@kbin.social -1 points 1 year ago* (last edited 1 year ago)

Counter-counterpoint: the longer you let it sit, the more obsolete the language becomes and the harder it becomes to fix it when something does break.

This is essentially preventative maintenance.

Counter^3 point: a system that was thoroughly engineered and tested a long time ago, and that still fulfills all the technical requirements it must meet, will simply not spontaneously break.

Analogously: this would be like using an ML + LLM to rewrite the entire Linux kernel in Rust. While an (arguably) admirable goal, doing that in one fell swoop would be categorically rejected by the Linux community, to the extent that if some group of people somehow unilaterally just merged that work, the rest of the Linux kernel dev community would almost certainly trigger a fork of the entire kernel, with the vast majority of the community using the forked version as the new source of truth.

This is not preventative maintenance. This is fixing something that’s not broken, that has moreover worked reliably, performantly (enough), and correctly for literal decades. You do not let a black box rewrite your whole codebase in another language and then expect everything to magically work.
