You're describing like 95% of the entire USA though, and 80% of the entire world. I am also prone to judging humanity as not so great, but we literally need them for the revolution. Well maybe not the initial part, but we'll need most of them to get with the program in short order.
Still the same answer, well, even more so: that's a low dose. And unless you know someone who gets it wholesale, the strain names really don't mean much in terms of potency; there's lots of variability, and it depends on the brand. Anyway, enjoy, it's a good mood booster, and I don't see any downside except the potential for addiction (lower than alcohol, but still a factor) and escalating daily use from chasing a high you won't be able to achieve without an actual break.
The voice chat feature they show in the demo isn't rolled out yet
Digital release next month apparently, and there will immediately be torrents https://www.inverse.com/entertainment/dune-2-digital-release-date
That's a perfectly reasonable position. The question of how complex a human brain is compared with the largest NNs is hard to answer, but I think we can agree it's a big gap. I happen to think we'll get to AGI before we get to human brain complexity, parameter-wise, but we'll probably also need at least a couple of architectural paradigms on top of transformers to compose one. Regardless, we don't need to achieve AGI or even approach it for these things to become a lot more dangerous, and we have seen nothing but accelerating capability gains for more than a decade. I'm very strongly of the opinion that this trend will continue for at least another decade; there are just so many promising but unexplored avenues for progress. The lowest of the low-hanging fruit has been, while lacking in nutrients, so delicious that we haven't bothered to do much climbing.
Huh? A human brain is a complex-as-fuck persistent feedback system.
Every time-limited feedback system is entirely equivalent to a feed-forward system, similar to how you can unroll a for loop.
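To make the unrolling point concrete, here's a toy sketch (my own illustrative code, not anything from a real model): a loop that applies the same step T times computes exactly the same thing as T copies of that step stacked feed-forward, one after another.

```python
# Hypothetical toy "feedback" system: the same step applied T times to a state.
def feedback(step, state, T):
    for _ in range(T):
        state = step(state)
    return state

# The unrolled, feed-forward equivalent for T = 3: three copies of the step
# composed in sequence, no loop and no feedback anywhere.
def unrolled(step, state):
    state = step(state)  # t = 1
    state = step(state)  # t = 2
    state = step(state)  # t = 3
    return state

step = lambda x: 2 * x + 1
assert feedback(step, 1, 3) == unrolled(step, 1)  # both give 15
```

Same trade-off as loop unrolling in a compiler: the unrolled version is bigger (one copy of the step per time step) but computes the identical function, which is why a time-limited recurrent system doesn't do anything a sufficiently deep feed-forward one can't.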
No see this is where we're disagreeing.... It is doing string manipulation which sometimes looks like maths.
String manipulation and computation are equivalent. Do you think that not just LLMs but computers themselves cannot, in principle, do what a brain does?
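The equivalence I mean can be shown with a trivial toy example (illustrative only, nothing from this thread): addition carried out purely as string rewriting, with no arithmetic anywhere in sight.

```python
# Unary addition done entirely by manipulating strings: "111" represents 3,
# "11" represents 2, and deleting the "+" separator *is* the addition.
def add_unary(a: str, b: str) -> str:
    return (a + "+" + b).replace("+", "")

assert add_unary("111", "11") == "11111"  # 3 + 2 = 5, by pure rewriting
```

This is the Turing-machine point in miniature: any computation can be phrased as rules for rewriting symbol strings, so "it's just string manipulation" doesn't by itself rule anything out.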
..you may as well say human reasoning is a side effect of quark bonding...
No, because that has nothing to do with the issue at hand: humans and LLMs and rocks all have that in common. What humans and LLMs do have in common is that they are the result of an optimization process and do things that weren't specifically optimized for as side effects. LLMs probably don't understand anything, but it would certainly help them to predict the next token if they did understand, so describing them as only token predictors doesn't help us with the question of whether they have understanding.
...but that is not evidence that it's doing the same task...
Again, I am not trying to argue that LLMs are like people, or that they are intelligent, or that they understand; I am not trying to give evidence of any of this. I'm trying to show that this reasoning (LLMs merely predict a distribution of next tokens -> therefore LLMs don't understand anything and can't do certain things) is completely invalid.
Not the weights, the activations: these depend on the input and change every time you evaluate the model. They are not fed back into the next iteration, as is done in an RNN, so information doesn't persist for very long, but it is very much persisted and chewed upon by the various layers as it propagates through the network.
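A minimal sketch of the distinction, with made-up weights and a toy tanh "layer" (my own illustration, not any real architecture): the weights stay fixed across calls, while the activations are recomputed from each input, processed layer by layer, and then discarded, unlike an RNN's hidden state, which would be carried over to the next call.

```python
import math

# Fixed weights: set once by training, identical across evaluations
# (illustrative values only).
W = [0.5, -1.0, 2.0]

def forward(x):
    # Activations: intermediate values that depend on this particular input.
    # Each "layer" chews on the previous layer's output as x propagates
    # through; nothing here is saved for the next call to forward().
    activations = []
    for w in W:
        x = math.tanh(w * x)  # one toy layer
        activations.append(x)
    return x, activations

y1, acts1 = forward(0.3)
y2, acts2 = forward(0.9)
# Different inputs produce different activations; W is unchanged throughout.
```

An RNN, by contrast, would feed something like `acts1[-1]` back in as part of the next call's input, which is exactly the persistence a plain feed-forward pass lacks.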
I am not trying to claim that the current crop of LLMs understand in the sense that a human does; I agree they do not. But nothing you have said actually justifies that conclusion or places any constraints on the abilities of future LLMs. If you ask a human to read a joke and then immediately shoot them in the head before it's been integrated into their long-term memory, they may or may not have understood the joke.
This is just a restatement of the second example argument I gave: trying to assert something about the internals of a model (that it doesn't understand) based on the fact that it was optimized to predict the next token.
I don't think anyone disputes that they work for some people, but only around 50% will receive any benefit whatsoever, and that's being generous, compared to around 25% for a placebo; a huge number of people are taking these for no reason except to reduce their doctor's liability should they harm themselves. And being technically "medicated", they are much less likely to shop around for something that works better for them or might supplement the SSRI, which is almost always the correct treatment strategy.
CombineZP looks like it's just for this
AI will stop advancing, it's been a whole 9 months and clearly there are no more impressive breakthroughs coming, we should only worry about what's already possible with the tech we have
Very well, I'll take that as a sort of compliment lol.
So I guess I'll start where I always do: do you think a machine, in principle, has the capability to be intelligent and/or creative? If not, I really don't have any counter, though I suppose I'd be curious as to why. Like, I admit it's possible there's something non-physical or non-mechanistic driving our bodies that's unknown to science. I find that very few hold this hard-line opinion though, so I'm assuming you're also in that category...
So if that's correct, what is it about the current paradigm of machine learning that you think is missing? Is it embodiment, is it the simplicity of artificial neurons compared to biological ones, something specific about the transformer architecture, a combination of these, or something else I haven't thought of?
And hopefully it goes without saying: I don't think o1-preview is a human-level AGI. I merely believe that we're getting there quite soon and without too many new architectural innovations, possibly just one or two, and none of them will be particularly groundbreaking; it's fairly obvious what the next couple of steps will be, just as it was obvious three years ago that MCTS + LLM was the next step.