VoterFrog@kbin.social 7 points 8 months ago

Yes, what I'm saying is that lower costs for software, which AI will help with, will make software more competitive against human production labor. The standard assumption is that if software companies can reduce the cost of producing software, they'll start firing programmers, but the entire history of software engineering has shown us that that's not true as long as the lower cost opens up new economic opportunities for software users, thus increasing demand.

That pattern stops only when there are no economic opportunities to be unlocked. The only way I think that happens is when automation has become so prevalent that further advancement has minimal impact. I don't think we're there yet. Labor costs are still huge and automation is still relatively primitive.

VoterFrog@kbin.social 13 points 8 months ago

One thing that is somewhat unique about software engineering is that a large part of it is, and always has been, dedicated to making itself more efficient. Programming languages, protocols, frameworks, and services have all made programmers thousands of times more efficient than the people who used to punch holes into cards to program the computer.

Nothing has infinite demand, clearly, but the question is more whether or not we're anywhere near the peak, such that more efficiency will result in an overall decrease in employment. So far, the answer has been no. The industry has only grown as it's become more efficient.

I still think the answer is no. There's far more of our lives and the way people do business that can be automated as the cost of doing so is reduced. I don't think we're close to any kind of maximum saturation of tech.

VoterFrog@kbin.social 3 points 9 months ago

Did Starfield only cost 4x as much to make as HiFi? I doubt it. I'd bet the marketing budget of Starfield alone dwarfed the lifetime cost of HiFi. I agree that "bombed" is maybe too harsh, but the problem the article is talking about is ROI. As the I continues to balloon, the R needs to keep up, and it isn't.

VoterFrog@kbin.social 5 points 9 months ago

Yeah, there's definitely some overlap. Lots of dark UX is used for enshittification, but sometimes enshittification is just plain, brazen bad UX for the sake of making money, with a hint of "Yeah, it's bad. What are you going to do about it?"

On the other hand, enshittification is part of a cycle that starts with a service growing dominant, at least in part by providing a great experience, only to tear that experience down when it gets in the way of making money. Dark UX isn't always part of that cycle. Plenty of services of all sizes use these patterns right from the start. It's not really accurate to call it "enshittification" when it was always just shit.

VoterFrog@kbin.social 4 points 9 months ago

You don't see the difference between distributing someone else's content against their will and using their content for statistical analysis? There's a pretty clear difference between the two, especially as fair use is concerned.

VoterFrog@kbin.social 4 points 9 months ago

I think that undersells most of the compelling open source libraries, though. The one-line or one-function open source libraries could be starved, I guess. But entire frameworks are open source. We're not at the point yet where AI can develop software on that scale.

VoterFrog@kbin.social 12 points 9 months ago

Why do you think AI will starve open source?

VoterFrog@kbin.social 4 points 10 months ago

The above is pretty misleading. A typical Java program can be turned into a Kotlin program with few changes, that's true. But Kotlin code, particularly when written using Kotlin best practices, bears very little resemblance to Java code. If you learn Kotlin first, you'll find some of that knowledge transfers to Java, but plenty won't, and you'll have to learn the Java way of doing things too. Still, as a dev, knowing more languages never hurts. I'd still recommend proceeding with Kotlin.
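To make that concrete, here's a minimal, hypothetical sketch (the names and the task are purely illustrative) of the same small job written in a Java-ish style versus idiomatic Kotlin. Both compile and run as Kotlin, but only the second looks like what you'd find in a Kotlin codebase:

```kotlin
// A plain value holder. In Java this would be a class with fields, a
// constructor, getters, equals/hashCode/toString. In Kotlin it's one line.
data class User(val name: String, val age: Int?)

// Java-style: explicit loop, mutable accumulator, manual null check.
fun namesOfAdultsJavaStyle(users: List<User>): List<String> {
    val result = mutableListOf<String>()
    for (user in users) {
        val age = user.age
        if (age != null && age >= 18) {
            result.add(user.name)
        }
    }
    return result
}

// Idiomatic Kotlin: expression body, chained stdlib calls, elvis operator
// to default a null age to 0.
fun namesOfAdults(users: List<User>) =
    users.filter { (it.age ?: 0) >= 18 }.map { it.name }

fun main() {
    val users = listOf(User("Ada", 36), User("Kim", 12), User("Lee", null))
    println(namesOfAdultsJavaStyle(users)) // [Ada]
    println(namesOfAdults(users))          // [Ada]
}
```

Both functions do the same thing, which is the point: the mechanical translation works fine, but the habits (immutability by default, expression-oriented code, null safety in the type system) are what you actually have to learn, and they don't transfer back to Java one-for-one.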

VoterFrog@kbin.social 3 points 1 year ago

The logger analogy is a misunderstanding of what people with a degree in CS do. Most become software engineers. They're not loggers, they're architects who occasionally have to cut their own logs.

They've spent decades reducing the amount of time they have to spend logging, only to be continually outpaced by the growth of demand from businesses and the complexity of the end product. I don't think we've reached a peak there yet; if anything, the capabilities of AI are opening up even more demand for even more software.

But, ultimately, coding is only a fraction of the job and any halfway decent CS program teaches programming as a means to practice computer science and software engineering. Even when an AI gets to the point that it can produce solid code from the English language, it has a ways to go before replacing a software engineer.

One thing that's for sure: tons of business owners will get richer and pay fewer workers. I think we're going to have to face a reckoning as we reach the limits of what capitalism can sustain. But it's also unpredictable because AI opens up new opportunities for everyone else as well.

VoterFrog@kbin.social 6 points 1 year ago

they want to leverage a mind-like thing (either a human brain or a trained AI) that has internalized a ton of content that it can use to generate new content from, but they don't ever want to pay them or treat them like a living being.

That's anybody, really. Everything you've ever accomplished has depended upon the insights and knowledge of countless other people who never saw a dime from you for it. That's part of living in a society and it's a crucial part of how it advances.

Or maybe it’s simply a false equivalence we all need to accept. Maybe creativity can exist independent from a conscious brain, or maybe it’s just a vulnerability in human consciousness to look at these stochastic arrangements of data and say “that looks inspired”.

I think that most of the value we get from creativity isn't from the mechanics of creating something. And I think that by removing the mechanical barrier, we unlock that value much more widely across humanity. Art is a form of communication. Will we ever feel the same connection when that communication comes wholesale from an AI? I don't know. But we're certainly not there yet.

VoterFrog@kbin.social 3 points 1 year ago* (last edited 1 year ago)

I'd like to chime in to point out that the vast majority of employed artists aren't making anything as creative as cover art for a hobbyist board game. If they're lucky, they're doing illustrations for Barbie Monopoly or working on some other uncreative cash grab. More likely, they're doing incredibly bland corporate graphic design. And if you ask me, the less of humanity's time we dedicate to bullshit like that, the better.

Professionals will spend more of their time concerned with higher order functions like composition and direction. More indies and small businesses will be empowered to create things without the added expense. And consumers will be able to afford more stuff with higher quality visuals.

VoterFrog@kbin.social 16 points 1 year ago

Frankly, it's an absurd question. Has Polygon obtained consent from all of the artists whose works its own human artists have used as inspiration or reference? Of course not. To claim that any use of an image, whether to train a model or to inspire a human artist, is stealing is to warp the definition of the word beyond recognition. Copyright doesn't give you exclusive ownership over broad thematic elements of your work because, if it did, there'd be no such thing as an art trend.

Then what's the studio having its name dragged through the mud for? For using a computer to speed up development? Is that a standard that Polygon wants to live up to as well?
