[-] BigMuffin69@awful.systems 24 points 4 months ago* (last edited 4 months ago)

Smh, why do I feel like I understand the theology of their dumb cult better than its own adherents? If you believe that one day AI will foom into a 10 trillion IQ super being, then it makes no difference at all whether your ai safety researcher has 200 IQ or spends their days eating rocks like the average LW user.

[-] BigMuffin69@awful.systems 23 points 5 months ago* (last edited 5 months ago)

This is literally the dumbest shit I've read all week and it's been a pretty dumb week. I'm afraid I have to diagnose Roko with having the brain scamblies. There is no cure.

[-] BigMuffin69@awful.systems 18 points 5 months ago

Ah, I see TWG made the rookie mistake of thinking they could endear themselves to internet bigots by carrying water for them. ^Also, fuck this nazi infested shithole. Absolute eye bleach.

[-] BigMuffin69@awful.systems 21 points 5 months ago

Wishful thinking on my part to think their sexism/eugenics posting was based on ignorance instead of deliberately being massive piles of shit. Don't let them know Iceland has the highest number of GMs per capita or else we'll get a 10,000 page essay about how proximity to volcanoes gives +20 IQ points.

[-] BigMuffin69@awful.systems 24 points 5 months ago* (last edited 5 months ago)

my honest reaction:

Edit: Judit Polgár for ref, if anyone wants to learn about one of the greatest of all time. Her dad claimed he was doing a nature/nurture experiment to prove that anyone could be great if trained to master a skill from a young age, so he taught his 3 daughters chess. Judit achieved the rank of number 8 in the world OVERALL and beat multiple World Champions, including Kasparov, over her career.

idk its almost like if more girls were encouraged to play chess and felt welcome in the community these apparent skill differences might disappear

[-] BigMuffin69@awful.systems 22 points 5 months ago* (last edited 5 months ago)

Holy fuck David, you really are living rent free in this SOB's head.

[-] BigMuffin69@awful.systems 27 points 5 months ago

How many rounds of training does it take before AlphaGo realizes the optimal strategy is to simply eat its opponent?

[-] BigMuffin69@awful.systems 21 points 6 months ago* (last edited 6 months ago)

Reasoning: There is not a well-known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.

Wow, what are the odds! The exact same transformer paradigm that OAI co-opted from Google is also the key to solving 'system 2' reasoning, metacognition, recursive self-improvement, and the symbol grounding problem! All they need is a couple trillion more dollars of VC investment, a couple of goat sacrifices here and there, and AGI will just fall out. They definitely aren't tossing cash into a bottomless money pit chasing a dead-end architecture!

... right?

21

Folks in the field of AI like to make predictions for AGI. I have thoughts, and I’ve always wanted to write them down. Let’s do that.

Since this isn’t something I’ve touched on in the past, I’ll start by doing my best to define what I mean by “general intelligence”: a generally intelligent entity is one that achieves a special synthesis of three things:

1. A way of interacting with and observing a complex environment. Typically this means embodiment: the ability to perceive and interact with the natural world.
2. A robust world model covering the environment. This is the mechanism which allows an entity to perform quick inference with reasonable accuracy. World models in humans are generally referred to as "intuition", "fast thinking" or "system 1 thinking".
3. A mechanism for performing deep introspection on arbitrary topics. This is thought of in many different ways: it is "reasoning", "slow thinking" or "system 2 thinking".

If you have these three things, you can build a generally intelligent agent. Here's how:

First, you seed your agent with one or more objectives. Then:

1. Have the agent use system 2 thinking, in conjunction with its world model, to start ideating ways to optimize for its objectives.
2. It picks the best idea and builds a plan.
3. It uses this plan to take an action on the world.
4. It observes the result of this action and compares it with the expectation it had based on its world model.
5. It might update its world model here with the new knowledge gained.
6. It uses system 2 thinking to make alterations to the plan (or idea).
7. Rinse and repeat.

My definition for general intelligence is an agent that can coherently execute the above cycle repeatedly over long periods of time, thereby being able to attempt to optimize any objective.

The capacity to actually achieve arbitrary objectives is not a requirement. Some objectives are simply too hard. Adaptability and coherence are the key: can the agent use what it knows to synthesize a plan, and is it able to continuously act towards a single objective over long time periods?
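The cycle described above can be sketched in a few lines of code. This is a toy illustration only: the `ToyWorldModel` class, the `reason` function, and the environment are hypothetical stand-ins for the "system 1" / "system 2" components the post describes, not a real implementation.

```python
# Toy sketch of the objective -> ideate -> plan -> act -> observe -> update cycle.
# All names here are hypothetical stand-ins, not a real agent.

class ToyWorldModel:
    """'System 1': a single learned estimate of what an action yields."""
    def __init__(self):
        self.expected_gain = 1.0

    def predict(self, action):
        return action * self.expected_gain

    def update(self, action, observed):
        # Compare expectation vs. result and nudge the model toward reality.
        if action:
            self.expected_gain += 0.5 * (observed / action - self.expected_gain)


def reason(world_model, candidate_actions, objective):
    """'System 2': deliberately pick the action predicted to best serve the objective."""
    return max(candidate_actions, key=lambda a: objective(world_model.predict(a)))


def run_agent(env, objective, steps=10):
    model = ToyWorldModel()
    for _ in range(steps):
        action = reason(model, candidate_actions=[1, 2, 3], objective=objective)  # ideate + plan
        observed = env(action)          # take an action on the world
        model.update(action, observed)  # update the world model with what was observed
    return model


# Toy environment: every unit of action actually yields 2.0 units of reward.
model = run_agent(env=lambda a: 2.0 * a, objective=lambda r: r, steps=20)
print(round(model.expected_gain, 2))  # converges toward the true value, 2.0
```

Note that everything hard about the proposal lives inside the two stubs: the post's whole bet is that transformers can fill in `ToyWorldModel` and `reason` for the real world.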

So with that out of the way – where do I think we are on the path to building a general intelligence?

World Models We’re already building world models with autoregressive transformers, particularly of the “omnimodel” variety. How robust they are is up for debate. There’s good news, though: in my experience, scale improves robustness and humanity is currently pouring capital into scaling autoregressive models. So we can expect robustness to improve.

With that said, I suspect the world models we have right now are sufficient to build a generally intelligent agent.

Side note: I also suspect that robustness can be further improved via the interaction of system 2 thinking and observing the real world. This is a paradigm we haven’t really seen in AI yet, but happens all the time in living things. It’s a very important mechanism for improving robustness.

When LLM skeptics like Yann say we haven’t yet achieved the intelligence of a cat – this is the point that they are missing. Yes, LLMs still lack some basic knowledge that every cat has, but they could learn that knowledge – given the ability to self-improve in this way. And such self-improvement is doable with transformers and the right ingredients.

Reasoning

There is not a well-known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.

Embodiment

Embodiment is something we're still figuring out in AI, but it's another area where I'm quite optimistic about near-term advancements. There is a convergence currently happening between the field of robotics and LLMs that is hard to ignore.

Robots are becoming extremely capable – able to respond to very abstract commands like “move forward”, “get up”, “kick ball”, “reach for object”, etc. For example, see what Figure is up to or the recently released Unitree H1.

On the opposite end of the spectrum, large Omnimodels give us a way to map arbitrary sensory inputs into commands which can be sent to these sophisticated robotics systems.

I’ve been spending a lot of time lately walking around outside talking to GPT-4o while letting it observe the world through my smartphone camera. I like asking it questions to test its knowledge of the physical world. It’s far from perfect, but it is surprisingly capable. We’re close to being able to deploy systems which can commit coherent strings of actions on the environment and observe (and understand) the results. I suspect we’re going to see some really impressive progress in the next 1-2 years here.

This is the field of AI I am personally most excited about, and I plan to spend most of my time working on it over the coming years.

TL;DR

In summary: we've basically solved building world models, are 2-3 years out on system 2 thinking, and 1-2 years out on embodiment. The latter two can be done concurrently. Once all of the ingredients have been built, we need to integrate them together and build the cycling algorithm I described above. I'd give that another 1-2 years.

So my current estimate is 3-5 years for AGI. I'm leaning towards 3 for something that looks an awful lot like a generally intelligent, embodied agent (which I would personally call an AGI). Then a few more years to refine it to the point that we can convince the Gary Marcuses of the world.

Really excited to see how this ages. 🙂

32
submitted 6 months ago* (last edited 6 months ago) by BigMuffin69@awful.systems to c/sneerclub@awful.systems

[-] BigMuffin69@awful.systems 27 points 6 months ago

David, please I was trying to have a nice day.

[-] BigMuffin69@awful.systems 21 points 6 months ago* (last edited 6 months ago)

No, they never address this. And as someone who works on large-scale optimization problems for a living, I do think it's difficult for the public to understand that, no, a 10,000 IQ super machine will not be able to just "solve these problems" in a nanosecond like Yud thinks. And it's not like the super machine will just avoid having to solve them. No. NP-hard problems are fucking everywhere. (Fun fact: for many problems of interest, even approximating the solution to a given accuracy is NP-hard, so heuristics don't even help.)

I've often found myself frustrated that more computer scientists who should know better simply do not address this point. If verifying solutions is exponentially easier than coming up with them for many difficult problems (all signs point to yes), and if a super intelligent entity actually did exist (I mean, does a SAT solver count as a super intelligent entity?), it would probably be EASY to control, since it would have to spend eons and massive amounts of energy coming up with its WORLD_DOMINATION_PLAN.exe. But you wouldn't be able to hide a supercomputer doing this massive calculation, and someone running the machine, seeing it output TURN ALL HUMANS INTO PAPER CLIPS, would say, 'ah, we are missing a constraint here, it thinks this optimization problem is unbounded' <- this happens literally all the time in practice. Not the world domination part, but a poorly defined optimization problem that is unbounded. But again, it's easy to check that the solution is nonsense.
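The verify-versus-solve asymmetry this rests on is easy to demonstrate with boolean satisfiability, the canonical NP-complete problem. The toy example below (mine, not the commenter's) checks a candidate assignment in time linear in the formula size, while the solver may have to grind through all 2^n assignments:

```python
from itertools import product

# Toy illustration of the gap between *finding* and *checking* a solution.
# A formula is a list of clauses; a clause is a list of literals, where
# literal  k  means variable k is True and  -k  means variable k is False.

def verify(formula, assignment):
    """Checking a candidate: linear scan, every clause needs one true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_solve(formula, n_vars):
    """Finding a solution the naive way: up to 2**n_vars candidates to try."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(formula, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
solution = brute_force_solve(formula, n_vars=3)
print(solution is not None and verify(formula, solution))  # True
```

Real SAT solvers do far better than brute force on typical instances, but unless P = NP there is no general escape from the exponential, IQ points or no; meanwhile the cheap `verify` step is exactly the "someone running the machine checks the output" safeguard described above.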

I know Francois Chollet (THE GOAT) has talked about how there are no unending exponentials and how the faster the growth, the faster you hit constraints IRL (running out of data, running out of chips, running out of energy, etc.), and I've definitely heard professional shitposter Pedro Domingos explicitly discuss how NP-hardness strongly implies EA/LW-type thinking is straight-up fantasy, but it's a short list of people who I can think of off the top of my head who have discussed this.

Edit: bizarrely, one person I didn't mention who has gone down this line of thinking is Ilya Sutskever; however, he has come to some frankly... uh... strange conclusions -> the only way to explain the successful performance of ML is to conclude that these models are Kolmogorov minimizers, i.e., by optimizing for loss over a training set, you are doing compression, which done optimally means solving an undecidable problem. Nice theory. Definitely not motivated by bad sci-fi mysticism imbued with pure distilled hopium. But from my arm-chair psychologist POV, it seems he implicitly acknowledges that for his fantasy to come true, he needs to escape the limitations of Turing Machines, so he has to somehow shoehorn a method for hypercomputation into Turing Machines. Smh, this is the kind of behavior reserved for aging physicists, amirite lads? Yet in 2023, it seemed like the whole world was succumbing to this gaslighting. He was giving this lecture to auditoriums filled with tech bros, shilling this line of thinking to thunderous applause. I have olde CS prof friends who were like, don't we literally have mountains of evidence this is straight-up crazy talk? Like, you can train an ANN to perform addition, and if you can look me straight in the eyes and say the absolute mess of weights that results looks anything like a Kolmogorov minimizer, then I know you are trying to sell me a bag of shit.

160
[-] BigMuffin69@awful.systems 48 points 8 months ago

It's true. ChatGPT is slightly sentient in the same way a field of wheat is slightly pasta.

51
submitted 8 months ago* (last edited 8 months ago) by BigMuffin69@awful.systems to c/sneerclub@awful.systems

Then: Google fired Blake Lemoine for saying AIs are sentient

Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient

That makes 2 of the 3 most cited scientists:

  • Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
  • Yoshua Bengio (#2) has not opined on this to my knowledge? Anyone know?

Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.

ALL 3 switched from working on AI capabilities to AI safety.

Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.

Anyway, brace yourselves… the Overton Window on AI sentience/consciousness/self-awareness is about to blow open.

17

BigMuffin69

joined 11 months ago