[-] theluddite@lemmy.ml 19 points 7 months ago* (last edited 7 months ago)

All these studies always do the same thing.

Researchers reduced [the task] to producing a plausible corpus of text, and then published the not-so-shocking results that the thing that is good at generating plausible text did a good job generating plausible text.

From the OP, buried deep in the methodology:

Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.

Yet here's their conclusion:

The advancement from GPT-3.5 to GPT-4 marks a critical milestone in which LLMs achieved physician-level performance. These findings underscore the potential maturity of LLM technology, urging the medical community to explore its widespread applications.

It's literally always the same. They reduce a task such that ChatGPT can do it, then report in the headline that it can do it, with the caveats buried way later in the text.

[-] theluddite@lemmy.ml 18 points 9 months ago

This is nowhere near the worst on a technical level, but it was my first big fuck-up. Some 12+ years ago, I was pretty junior at a very big company that you've all heard of. We had a feature coming out that I had developed almost entirely by myself, from conception to prototype to production, and it was getting coverage in some relatively well-known trade magazine or blog or something (I don't remember) that was coming out the next Monday. But that week, I introduced a bug in the data pipeline code such that, instead of adding the day's data, it removed some small amount of data (I don't remember the details). No one noticed that the feature was losing its data all week because it still worked (mostly) fine, but by Monday, when the article came out, it looked like it would work, except that when you pressed the thing, nothing happened. It was thankfully pretty easy to fix, but I went from being congratulated to yelled at so fast.
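
To illustrate, here's a purely hypothetical sketch in Python. The real code and its details are long gone, so every name below is made up, but the bug was this flavor: a daily merge that quietly becomes a daily subtraction.

```python
# Hypothetical reconstruction of the class of bug described above.
# The job is supposed to merge the day's records into the stored
# dataset, but a set *difference* on the keys drops every record
# that also showed up today, so the dataset slowly shrinks.

stored = {"a": 1, "b": 2, "c": 3}   # data accumulated so far
todays = {"b": 2, "d": 4}           # today's incoming records

def update_buggy(stored: dict, todays: dict) -> dict:
    # Bug: keeps only keys NOT seen today, silently removing data.
    return {k: stored[k] for k in stored.keys() - todays.keys()}

def update_fixed(stored: dict, todays: dict) -> dict:
    # Intended behavior: add the day's data to what we already have.
    return {**stored, **todays}

print(update_buggy(stored, todays))  # e.g. {'a': 1, 'c': 3} -- quietly losing data
print(update_fixed(stored, todays))  # {'a': 1, 'b': 2, 'c': 3, 'd': 4}
```

Like in my real fuck-up, both versions "work" day to day; the buggy one just bleeds data until someone actually uses the feature.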

[-] theluddite@lemmy.ml 17 points 10 months ago

Whenever one of these stories comes up, there's always a lot of discussion about whether these suits are reasonable or fair, or whether it's really legally the companies' fault, and so on. If that's your inclination, I propose that you consider it from the other side: big companies use every tool in their arsenal to get what they want, regardless of whether it's right or fair or good. If we want to take them on, we have to do the same. We call it a justice system, but in reality it's just a fight over who gets to wield the state's monopoly on violence to coerce other people into doing what they want; any notions of justice or fairness are window dressing. That's how power actually works. It doesn't care about good faith vs. bad faith arguments, and we can't limit ourselves to using our institutions only within their veneer of rule of law when taking on powerful, exclusively self-interested, and completely antisocial institutions with no such scruples.

[-] theluddite@lemmy.ml 18 points 11 months ago

The Nordic countries are also on Earth, which we are destroying, and some of their wealth comes directly from that destruction. Norway is the 5th-largest oil exporter and the 3rd-largest natural gas exporter, making its happiness the result of good social policy that offsets capitalist inequality, funded directly by destroying the Earth and fueling capitalism elsewhere.

Even setting the climate aside (a ridiculous thing to do, really), the Nordic model isn't possible to replicate sustainably elsewhere on Earth on capitalism's own terms, because we can't make every country a net exporter of the most desired commodities, for obvious reasons, or the beneficiary of complex historical circumstances, like neutrality during WW2 (Sweden) or being a long-time colonial power (Denmark).

Put another way, there is no Nordic model available for Bangladesh, whose workers work six days a week in factories to make the cheap clothing that happy Norwegians wear. Norways need Bangladeshes to keep their standard of living.

In a previous job, I spent a good amount of time in a Bangladeshi garment factory. The specific factory in which I worked had been on strike a few years prior, requesting a raise to dozens of dollars per month. That's not a typo -- per month! The police fired into their picket line, killing and wounding hundreds. This fall, Bangladeshi garment workers went on strike again, demanding a tripling of the minimum wage from its current ~75 USD per month.

The urban poverty that makes my life possible, so far away, out of sight and out of mind, is an absolute fucking disgrace. We should talk about it daily. When they go on strike, as those garment workers are now, every single westerner ought to strike in solidarity, even if motivated by nothing but shame. Instead, we don't even know that it's happening, at least in the anglosphere.

I've since become convinced that there's only one path to a just and verdant world -- international solidarity. Communists and anarchists have filled libraries with ideas for what that might look like. I've read some tiny sliver of that corpus. If you actually want to know why some of us want capitalism defeated (beyond the anecdote I just relayed), or if you're curious how much better some of us think the world could be, I'd be happy to point you towards books that spoke to me.

[-] theluddite@lemmy.ml 20 points 1 year ago* (last edited 1 year ago)

You can't use an LLM this way in the real world. It's not possible to make an LLM trade stocks by itself. Real human beings need to be involved. Stock brokers have to do mandatory regulatory trainings, and get licenses and fill out forms, and incorporate businesses, and get insurance, and do a bunch of human shit. There is no code you could write that would get ChatGPT liability insurance. All that is just the stock trading -- we haven't even discussed how an LLM would receive insider trading tips on its own. How would that even happen?

If you were to do this in the real world, you'd need a human being to set up a ton of stuff. That person is responsible for making sure it follows the rules, just like they are for any other computer system.

On top of that, you don't need to do this research to understand that you should not let LLMs make decisions like this. You wouldn't even let low-level employees make decisions like this! Like I said, we know how LLMs work, and that's enough. For example, you don't need to run an experiment to decide if flipping coins is a good way to determine whether or not you should give someone healthcare, because the coin-flipping mechanism is well understood, and the mechanism by which it works is not suitable for healthcare decisions. LLMs are more complicated than coin flips, but we still understand the underlying mechanism well enough to know that this isn't a proper use for them.

[-] theluddite@lemmy.ml 19 points 1 year ago

Copyright is broken, but that's not an argument to let these companies do whatever they want. They're functionally arguing that copyright should remain broken but also they should be exempt. That's the worst of both worlds.

[-] theluddite@lemmy.ml 19 points 1 year ago

1 fast 1 furious

[-] theluddite@lemmy.ml 18 points 1 year ago

I had Max Tegmark as a professor when I was an undergrad. I loved him. He is a great physicist and educator, so it pains me greatly to say that he has gone off the deep end with his effective altruism stuff. His work through the Future of Life Institute should not be taken seriously. For anyone interested, I responded to Tegmark's concerns about AI and Effective Altruism in general on The Luddite when they first got a lot of media attention earlier this year.

I argue that EA is an unserious and self-serving philosophy, and the concern about AI is best understood as a bad faith and self-aggrandizing justification for capitalist control of technology. You can see that here. Other commenters are noting his opposition to open sourcing "dangerous technologies." This is the inevitable conclusion of a philosophy that, as discussed in the linked post, reifies existing power structures to decide how to do the most good within them. EA necessarily excludes radical change by focusing on measurable outcomes. It's a fundamentally conservative and patronizing philosophy, so it's no surprise when its conclusions end up agreeing with the people in charge.

[-] theluddite@lemmy.ml 18 points 1 year ago

I've submitted apps to both stores many times.

I hesitate to use the word "rigorous," but Apple's process is certainly more involved, though I'd say it's also bureaucratic and even arbitrary. Their primary concern is clearly maintaining their tight control over their users' phones, which is an extremely lucrative monopoly. The Google Play process, by comparison, is definitely lighter, though I don't know if I'd be comfortable saying it's less well vetted.

Philosophically, relying on either half of the duopoly to screen the software we use for safety is ultimately a bad system, especially since they created this problem. Until very recently, the internet existed on websites. They are pushing us to use mobile apps because it is more lucrative for them. Apple takes up to a 30% cut of every single transaction that happens in any iPhone app. They don't even allow non-WebKit browser engines on iOS, meaning that the iPhone's Chrome, Firefox, etc. are actually different from Android's. They do this specifically to hamstring mobile browser development.

They've managed to align the incentives here by offering tech companies more advertising revenue through the mobile platform. Basically, if you make a mobile app, Apple takes a huge cut each time your users pay you, but in exchange, you get to spy on your users more, meaning more lucrative advertising.

[-] theluddite@lemmy.ml 19 points 1 year ago

Designing Freedom, by Stafford Beer

I've been a software engineer for 15 years. In that time, across all the jobs I've had, I've never once worked on anything that actually made people's lives better, nor have I ever heard anyone else in tech really dive into any sort of meaningful philosophical interrogation of what digital technology is for and how we should use it. I made a few cool websites or whatever, but surely there's more we can do with code. Digital technology is so obviously useful, yet we use it mostly to surveil everyone to better serve them ads.

Then I found cybernetics, through the work of Beer and others. It's that ontological grounding that tech is missing. It's the path we didn't take, choosing instead to follow the California ideology of startups and venture capital and so on that's now hegemonic and indistinguishable from digital technology itself.

Even Beer's harshest critic is surely forced to admit that he had a hell of a vision, whereas most modern tech is completely rudderless.

