Because there are no rich, connected socialite interest groups who frequent the same country clubs as the payment processor C-suites and who care about Nazis. In fact, the same people pushing fash politics at Substack probably go to the same country club.
Perhaps it’s a matter of the left hand not knowing what the right hand is doing. Like the front page and the subscription page are fundamentally separate: the algorithm sees the video doing well, while the subscription page has it shadow banned, and that shadow ban doesn’t transfer over.
A lot of artists will practice anatomy by drawing people nude, largely because it’s hard to get a good understanding of anatomy by only drawing people with clothes on.
If you wanted to put some examples of bare human anatomy in odd positions into the training data to expand the range the model is capable of, well, there aren’t many larger corpora of that than porn.
Also, even if they don’t want it to make explicit content, they probably want it to make “suggestive” or “appealing” content, and they just assume they can guardrail it away from making actually explicit content. That’s probably pretty short sighted, though, given how weak guardrails really are.
I worry that if they do gerrymander, it won’t just be for the sake of beating Republicans, but for preventing primary wins by progressive candidates.
It’s insane to me that people are actually trying to get these LLMs to do things like this, let alone outside of an experimental setting. Like, it’s a non-starter at a fundamental conceptual level.
It reminds me of an experiment where they had a few LLMs try to run simulated vending machines.
It was pretty clear from the results that none of the LLMs were capable of consistently performing basic tasks. They routinely introduced irrelevant or incorrect information that derailed things: ordering nonexistent products, assuming capabilities they were never given, and generally failing to properly recall information or keep the values they were given consistent. Some of the failures were quite spectacular, ranging from one insisting it had gone bankrupt and trying to sell the vending machine, to another threatening to nuke suppliers and trying to contact the FBI.
Exactly, they’re just probabilistic models. LLMs are just outputting something that statistically could be what comes next. But that statistical process doesn’t capture any real meaning or conceptualization, just vague associations of when words are likely to show up, and what order they’re likely to show up in.
What people call hallucinations are just the system’s functional capability diverging from their expectation of what it is doing: expecting it to think and understand, when all it is doing is outputting a statistically likely continuation.
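To make “statistically likely continuation” concrete, here’s a deliberately crude toy sketch: a bigram model that only counts which word tends to follow which, then samples from those counts. Real LLMs use neural networks over tokens and capture far richer correlations, but the basic move is the same, and so is the failure mode: the output is plausible-looking word order, not understanding. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy corpus -- the model will only ever "know" which word followed which here.
corpus = "the machine sold a soda the machine sold a snack the machine broke".split()

# Count successors for each word: a bare-bones stand-in for next-token statistics.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(word, n=5, seed=0):
    """Extend `word` by repeatedly sampling a statistically plausible next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # no observed continuation -- a real model never stops like this
        out.append(rng.choice(options))
    return " ".join(out)

print(continue_text("the"))
```

Every output is locally fluent (each adjacent pair occurred in the corpus), yet the model has no concept of what a machine or a soda is, which is the point being made above.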
A lot of supermarkets like Kroger are particularly bad about pricing. They will stock stuff that barely anyone buys, lose money because the case goes bad long before it sells out, and waste space on super obscure goods, necessitating a larger floor plan. Then they take the cost of that and spread it over all the items that move regularly, pushing up prices for everyone.
Why do they do this? Because it helps kill competition. If they didn’t have the obscure item, the one customer in a thousand who wants it might go to a second store, and they might buy some of the quickly moving items there as well. By incentivizing shoppers to buy everything at one store, they’re able to kill off smaller competitors that can’t afford to take losses or are unwilling to stock superfluous items.
Aldi is a fairly good example of a store that doesn’t do this. They tend to avoid stocking products that won’t move quickly. Keeping the inventory, and thus the floor plan, small saves them money and prevents them from having to spread costs over staple goods. That model is much more common in Europe, but in the US, particularly in suburbs where density is super low, it’s easier for the big stock-everything stores to absorb all the local demand and thus push out smaller, more affordable stores.
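The cost-spreading above is easy to put in back-of-the-envelope terms. All of these numbers are made up purely for illustration:

```python
# Hypothetical weekly figures for one store -- invented for illustration only.
slow_mover_loss = 1200.0   # losses from obscure stock that spoils before selling
extra_floor_cost = 800.0   # rent/labor for the shelf space those items occupy
staple_units_sold = 40000  # weekly volume of fast-moving staple items

# Spread the slow-mover costs evenly across every staple sold.
surcharge_per_unit = (slow_mover_loss + extra_floor_cost) / staple_units_sold
print(f"${surcharge_per_unit:.2f} added to every staple item")
```

A few cents per item looks trivial, but it’s exactly the margin a low-inventory store like Aldi doesn’t have to charge.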
Even if the stores themselves end up having issues, or if most people choose to keep going to private stores, the presence of a public option with competitive pricing will anchor prices at the other stores.
Specifically, I could see it undermining any attempt to implement software-based price fixing among the private grocery stores, where they all use a common piece of software to “recommend prices,” with the software set up to increase prices at an even rate across all the clients so that none of them are undercutting each other. It should be illegal, but since technically no one at the companies is communicating about it, it falls into a legal grey area. I haven’t heard about grocery stores doing this yet, but it’s been well documented in everything from real estate and renting to frozen potatoes.
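The dynamic described above can be sketched in a few lines. This is a purely hypothetical toy, not any real vendor’s software; the function name, uplift rate, and store names are all invented:

```python
def recommend_prices(client_prices, uplift=0.03):
    """One vendor serving many competing stores: each store reports its
    current price and gets back the same recommendation, nudged upward.
    No store ever communicates with another directly."""
    anchor = max(client_prices.values())           # anchor on the highest price
    recommendation = round(anchor * (1 + uplift), 2)
    return {store: recommendation for store in client_prices}

current = {"store_a": 4.99, "store_b": 4.79, "store_c": 5.09}
print(recommend_prices(current))
# every client is told the same, higher price, so no one undercuts anyone
```

The stores never talk to each other, yet the outcome is identical to an explicit agreement, which is exactly why this lands in a legal grey area, and why a public option that ignores the recommendation would break the scheme.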
I really don’t get why payment processors care. Like, I really doubt it’s a morality thing for them, so where’s the financial incentive?
No it didn’t. OpenAI is just pushing deceptively worded press releases out to try and convince people that their programs are more capable than they actually are.
The first “AI”-branded products have hit the market and haven’t sold well with either consumers or enterprise clients. So the tech companies that have gone all in on, or are entirely based in, this hype cycle are trying to stretch it out a bit longer.
“See, ink cartridges can be vectors for viruses because they have chips in them.”
“Why does a container of ink have chips in it?”
“To make sure you don’t use third-party ink cartridges.”
Sounds like Putin wants to have his Baltic dessert before he’s even finished his Ukraine.