Hope in humanity? Kamala do what Kamala do, she is brat after all; I expect nothing good. What I do need is for the media to pay some kind of attention to what her policies might actually look like
No, he's not an oracle, just a well-centered (in his analysis), very smart and observant person. If he offers an opinion on something I don't know a lot about, I consider it plausible by default. So, since I know very little compared to Matt about the facts surrounding the JFK assassination, the fact that he holds or has held this opinion lends credence to the idea that the shooting yesterday was part of a conspiracy similar to the one he was imagining.
So if we find out that the shooter was some kind of expert marksman who had a perfect opportunity and should have made the shot, I would be going hmmmm; likewise if the shooter appeared to hold some extremely complicated ideology à la Oswald.
Confession: I tried watching Andor (drunk) and couldn't make it; it seemed kinda boring, although I'm sure I wasn't fully present... try again?
I'm not sure how long a panic attack lasts... if you are willing to wait a half hour or so, gabapentinoids would probably be a good alternative to benzos; doctors have yet to catch on to how addictive they are and give them out like candy. The RC benzos are still extremely good despite the DEA's efforts; brominated and fluorinated ones are very much available in the US. If that's too scary, I do believe you'll get a script if you're persistent enough and willing to shop around for doctors. It may be a long-haul project, but you could also get lucky.
I know it doesn't seem like it, but not all of them are the same; society isn't as polarized as it claims to be, it's an illusion! Okay, cards on the table: I've only convinced one lib not to vote Joe, and they were family, and they were already using the g word about Gaza... still, I fancy myself a lib whisperer and am therefore hyper-qualified to give advice. So, what you need to do is become them a bit first: rotate your talent for disagreeableness into finely honed debate skills, squint your eyes and imagine the eggs benedict and mimosas they shove at you at the crack of 11am are harmless pizza and whiskey. Now you're catching on, good job
If you're claustrophobic: The Descent might land you in the hospital. It's also really funny
Every argument that refers to stochastic parrots is terrible. First off, people are stochastic, animals are stochastic, any sufficiently advanced AI is going to be stochastic, that part does no work. The real meat is in the parrot, parrots produce very dumb language that is mostly rote memorization, maybe a smidge of basic pattern matching thrown in, with little understanding of what they're saying. Are LLMs like this? No.
Idk if I can really argue with people who think LLMs are so stupid as to be comparable to a bird. I actually think they can be a bit clever, even exhibiting rare sparks of creativity, but this is just, like, my opinion after interacting with them a lot; other people have a different impression, and I really think this is pretty subjective. I'll grant that even the best of them can be really dumb sometimes, and I really don't think it matters, as this technology is in its infancy: unless we think they are necessarily dumb for some reason, we will just have to wait to see how smart they become.

So we're down to the rote memorization / basic pattern matching part. I've seen various arguments here. There's pointing and waving at examples of LLMs seemingly using the wrong patterns or regurgitating something almost verbatim from the internet, even though there are also many examples of them not obviously doing this. Then there's claiming that because the loss function merely incentivizes the system to predict the next token, it therefore can't produce anything intelligent, but this just doesn't follow. The "loss function" for humans merely incentivizes us to produce more offspring; just because it doesn't directly incentivize intelligence doesn't mean it won't produce intelligence as a side effect. And I'm sure there are more arguments, all of them flawed..
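To make the loss-function point concrete, here's a toy sketch of what the next-token objective actually is. The function name and numbers are mine, just for illustration: the objective is plain cross-entropy on the true next token, and notice that nothing in it mentions understanding or intelligence one way or the other.

```python
import math

def next_token_loss(logits, target_index):
    """Cross-entropy loss for a single next-token prediction.

    logits: raw scores the model assigns to each vocabulary token.
    target_index: index of the token that actually came next.
    Training just minimizes this, summed over the corpus; whether
    that produces intelligence as a side effect is a separate question.
    """
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    p_target = exps[target_index] / sum(exps)
    return -math.log(p_target)

# Toy 4-token vocabulary; the model is fairly confident in token 2,
# so the loss is much lower than a uniform guess would give.
loss = next_token_loss([0.1, 0.2, 2.0, -1.0], 2)
uniform_loss = next_token_loss([0.0, 0.0, 0.0, 0.0], 2)
```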
..because the idea that LLMs are just big lookup tables with some basic pattern matching thrown in is, while plausible, demonstrably false. The internals of these models are really, really hard to interrogate, but it can be done if you know what you're looking for. I think the clearest example of this is in models trained on games of chess/Othello. People have pointed out that some versions of ChatGPT are kind of okay at chess but fail hard if weird moves are made in the opening, making illegal moves and not understanding what pieces are on the board, suggesting that they are just memorizing common moves and extracting basic patterns from a huge number of game histories. Probably this is to some extent true for ChatGPT 3.x, but version 4 does quite a bit better, and LLMs specifically trained to mimic human games do better still, playing generally reasonably no matter what their opponent does. It could still technically be that they're somehow pattern matching... better... but actually no, this question has been directly resolved. Even quite tiny LLMs trained on board game moves develop the ability to, at the very least, faithfully represent the board state: you can just look inside at the activations the right way and see what piece is on each square. This result has been improved upon and also replicated with chess. What are they doing with that board state, how are they using it? Unknown, but if you're building an accurate model of something not directly accessible to you using incidental data, you're not just pattern matching. That's just one example, and it's never been proven, to my knowledge, that ChatGPT and the like do something like this, but it shows that it's possible and does sometimes happen under natural conditions.
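For the curious, the "look inside at the activations" trick is a linear probe: a tiny classifier trained to read a property (like a square's occupancy) straight out of hidden states. Here's a self-contained toy version with synthetic "activations" I made up; the real experiments probe actual transformer activations, not fabricated vectors, so this only illustrates the method, not the result.

```python
import math
import random

random.seed(0)
DIM = 16  # pretend hidden-state dimensionality

# Hypothetical setup: "square occupied" is encoded along one direction
# in activation space, plus noise. This linearity is the assumption the
# probe tests; in the real papers it's an empirical finding.
direction = [random.gauss(0, 1) for _ in range(DIM)]

def fake_activation(occupied):
    sign = 1.0 if occupied else -1.0
    return [sign * d + random.gauss(0, 0.5) for d in direction]

labels = [random.random() < 0.5 for _ in range(400)]
data = [(fake_activation(occ), occ) for occ in labels]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train the probe: plain logistic regression via gradient descent.
w = [0.0] * DIM
b = 0.0
lr = 0.1
for _ in range(50):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - (1.0 if y else 0.0)
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# If the state really is linearly encoded, the probe reads it out
# almost perfectly; if it weren't there, accuracy would sit near chance.
correct = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == y
    for x, y in data
)
accuracy = correct / len(data)
```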
Also, it would be kind of weird if a ~1 trillion parameter model was not at the very least taking advantage of something accessible to a 150 million parameter one, I'd expect it to be doing that plus a lot more clever and unexpected stuff.
Neither depression nor anxiety is reliably treated by an SSRI, or by any medication, or by any known medical treatment for that matter. Come to think of it, SSRIs are far better at causing sexual dysfunction than they are at treating depression: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6007725/
Eliezer has some horrible opinions and is possibly the most arrogant person alive, but he also wants to nationalize Nvidia and force them to stop turning out GPUs
I wasn't familiar, what a cool fish, congrats on being a tequila grandparent!
Also, holy shit you weren't kidding:
"Zoogoneticus tequila is endemic to the Ameca River basin in west-central Mexico. Its current distribution is restricted to a single spring pool in Teuchitlán, only 4 metres (13 ft) in diameter, where a population consisting of less than 50 adult fish live"
I have dwarf barbs in one of my tanks for the exact same reason. Unfortunately, despite supposedly having some of the smallest mouths, they are nimble af, and suspiciously this is the only tank in which the shrimp have not taken off. I never caught them in the act, but it was the same with my betta: when it passed (fuck dropsy) the population exploded. FWIW, in my limited experience the detritus worm and copepod community has always calmed down to what I consider an acceptable level with time and without predators; I'm not sure why.
The model was trained on self-play; it's unclear exactly how, whether via regular chain-of-thought reasoning or some kind of MCTS scheme. It no longer relies only on ideas from internet data; that's just where it started from. It can learn from mistakes it made during training, from making lucky guesses, etc. Now it's way better at solving math problems, programming, and writing comedy. At what point do we call what it's doing reasoning? Just, like, never, because it's a computer? Or do you object to the transformer architecture specifically? What?