[-] Voroxpete@sh.itjust.works 2 points 6 days ago

Yeah, that's it exactly. It doesn't really feel genuine or meaningful, even though I'm sure a lot of people do mean it earnestly. It just sort of feels like a checklist.

[-] Voroxpete@sh.itjust.works 198 points 9 months ago

Let's be clear about something; climate scientists almost universally agree that there is no such thing as "winning" or "losing" the fight against climate change (Suzuki, for the record, is a zoologist, not a climate scientist). This isn't a game, there's no referee, and no one gets a trophy at the end.

The battle against climate change is about mitigating harm. The worse we do, the more harm there will be. But there is never a point where it is "too late". The car is going to crash, but the sooner you hit the brakes, the less damaging the impact will be. Everything we do to push the needle will save lives. There is never a point where we get to throw up our hands and succumb to the comforting fantasy that it's "too late" to change anything.

I have a lot of respect for Suzuki, and I don't blame him for feeling defeated with everything that's happening, but spreading this kind of message is dangerous, damaging, and flies entirely in the face of the science.

[-] Voroxpete@sh.itjust.works 272 points 1 year ago

Fuck this noise. The only classes that matter are the people who are rich enough to own Disneyland, and everyone else. Quibbling over whose shit sandwich is bigger is just dividing ourselves for their benefit.

[-] Voroxpete@sh.itjust.works 194 points 1 year ago* (last edited 1 year ago)

Trump said Monday that “vast amounts of fentanyl got poured into our country” largely through Mexico and from China, and he encouraged car manufacturers to build plants in the U.S. to avoid the upcoming tariffs.

If the media actually did their jobs, they would add the context that the US seized all of 20kg of fentanyl at the Canadian border last year. This claim is, according to the US government's own numbers, absolute bullshit.

They would also note that in order to enact these tariffs without involving Congress he has to invoke a "national emergency", which is the only reason why he's suddenly all about stopping fentanyl.

[-] Voroxpete@sh.itjust.works 225 points 1 year ago

If only there was some kind of method for safely removing infectious diseases from milk.

[-] Voroxpete@sh.itjust.works 162 points 2 years ago

Incredible way to openly admit that your policy agenda is for sale to the highest bidder.

[-] Voroxpete@sh.itjust.works 313 points 2 years ago

Thank you, I am fucking sick of people passing this comic around in relation to the Crowdstrike failure. Crowdstrike is a $90bn corporation, they're not some little guy doing a thankless task. They had all the resources and expertise required to avoid this happening, they just didn't give a shit. They want to move fast and break things, and that's exactly what they did.

[-] Voroxpete@sh.itjust.works 198 points 2 years ago

We not only have to stop ignoring the problem, we need to be absolutely clear about what the problem is.

LLMs don't hallucinate wrong answers. They hallucinate all answers. Some of those answers will happen to be right.

If this sounds like nitpicking or quibbling over verbiage, it's not. This is really, really important to understand. LLMs exist within a hallucinatory false reality. They do not have any comprehension of the truth or untruth of what they are saying, and this means that when they say things that are true, they do not understand why those things are true.

That is the part that's crucial to understand. A really simple test of this problem is to ask ChatGPT to back up an answer with sources. It fundamentally cannot do it, because it has no ability to actually comprehend and correlate factual information in that way. This means, for example, that AI is incapable of assessing the potential veracity of the information it gives you. A human can say "That's a little outside of my area of expertise," but an LLM cannot. It can only be coded with hard blocks that, in response to certain keywords, stop it from answering and insert a stock response.

This distinction, that AI is always hallucinating, is important because of stuff like this:

But notice how Reid said there was a balance? That’s because a lot of AI researchers don’t actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. **Just as no person is 100 percent right all the time, neither are these computers.**

That is some fucking toxic shit right there. Treating the fallibility of LLMs as analogous to the fallibility of humans is a huge, huge false equivalence. Humans can be wrong, but we're wrong in ways that allow us the capacity to grow and learn. Even when we are wrong about things, we can often learn from how we are wrong. There's a structure to how humans learn and process information that allows us to interrogate our failures and adjust for them.

When an LLM is wrong, we just have to force it to keep rolling the dice until it's right. It cannot explain its reasoning. It cannot provide proof of work. I work in a field where I often have to direct the efforts of people who know more about specific subjects than I do, and part of how you do that is you get people to explain their reasoning, and you go back and forth testing propositions and arguments with them. You say "I want this, what are the specific challenges involved in doing it?" They tell you it's really hard, you ask them why. They break things down for you, and together you find solutions. With an LLM, if you ask it why something works the way it does, it will commit to the bit and proceed to hallucinate false facts and false premises to support its false answer, because it's not operating in the same reality you are, nor does it have any conception of reality in the first place.

[-] Voroxpete@sh.itjust.works 175 points 2 years ago

Literally the opposite of this is true. Not having kids is one of the single best things you can do for the planet.

(Still want to raise a child? Adopt! There are so many kids out there looking for good homes and people who will love and care for them)

[-] Voroxpete@sh.itjust.works 230 points 2 years ago* (last edited 2 years ago)

Jesus Christ, this is getting insane. When did it become unacceptable to say that genocide is bad?

[-] Voroxpete@sh.itjust.works 203 points 2 years ago

"Should workers be subjected to pointless and dehumanizing drudgery that serves no practical purpose? Find out what this panel of five overpaid CEOs think, after the break."

Voroxpete

joined 2 years ago