[-] BrickedKeyboard@awful.systems 3 points 1 year ago

Primary myoblasts double on average every 4 days! So if given infinite nutrients, and you started with 1 gram of meat, it would take .... 369 days to equal the mass of earth!
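A quick sanity check on that arithmetic (a minimal sketch; the only assumption beyond the 4-day doubling time is Earth's mass ≈ 5.97 × 10^27 g):

```python
import math

doubling_time_days = 4      # claimed doubling time for primary myoblasts
start_mass_g = 1.0          # 1 gram of starter culture
earth_mass_g = 5.97e27      # Earth's mass in grams (assumed value)

# Number of doublings needed so that start_mass_g * 2**n reaches earth_mass_g
doublings = math.log2(earth_mass_g / start_mass_g)
print(f"{doublings:.1f} doublings -> about {doublings * doubling_time_days:.0f} days")
# ~92.3 doublings -> about 369 days
```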

[-] BrickedKeyboard@awful.systems 1 points 1 year ago

Real talk: a real doll with the brain of a calculator would be a substantial product improvement.

[-] BrickedKeyboard@awful.systems 0 points 1 year ago

Sure, but they were 4 function calculators a few months ago. The rate of progress seems insane.

[-] BrickedKeyboard@awful.systems 0 points 1 year ago

> My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

That's right. Eliezer's LSD vision of the future where a smart enough AI just figures it all out with no new data is false.

However, you could...build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula. For example, you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this in those domains; this is just extending it.
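A minimal sketch of what I mean by a procedural formula for picking experiments - the parameters and values here are made up purely for illustration:

```python
import itertools
import random

# Hypothetical wing-design parameter sweep; a real search space would be far larger.
sweep = {
    "span_m":    [8, 10, 12, 14],
    "chord_m":   [0.8, 1.0, 1.2],
    "sweep_deg": [0, 10, 20, 30],
    "airfoil":   ["NACA2412", "NACA4415", "NACA23012"],
}

# Enumerate every combination, then dispatch each one to a robot or simulator to test.
candidates = [dict(zip(sweep, combo)) for combo in itertools.product(*sweep.values())]
random.shuffle(candidates)  # or rank candidates with a surrogate model first

for design in candidates[:3]:  # pretend these get queued on robotic test rigs
    print("queue experiment:", design)
```

The same shape of loop works for molecule screening: swap the wing parameters for candidate molecules and the test rig for an assay robot.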

[-] BrickedKeyboard@awful.systems 0 points 1 year ago

> I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?

I don't believe GPT-n is ready for direct robotics control at a human level because it was never trained on it, and you need a modified transformer architecture for that; see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment, with some results: https://github.com/GT-RIPL/Awesome-LLM-Robotics .

In addition, to tinker with LLMs at this scale you need to be GPU-rich, or have funding of about $250-500 million. My employer does, but I'm a cog in the machine. https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini

What I do think is that the underlying technology that made GPT-4 possible can be made to drive robots at a human level on some tasks, though as I noted I think it may take until 2040 to get good. That technology is mostly just lots of data, neural networks, and a mountain of GPUs.

Oh, and RSI (recursive self-improvement). That's the wildcard. This is where you automate AI research itself, including developing models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

[-] BrickedKeyboard@awful.systems -1 points 1 year ago* (last edited 1 year ago)

1, 2: since you claim you can't measure this even as a thought experiment, there's nothing to discuss.

3. I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, and load the next set of trucks, which go to parts assembly plants where robots unload the trucks, feed the materials into CNC machines, mill the parts, inspect and pack the output, and load more trucks... culminating in robots assembling new robots.

It is totally fine if some human labor hours are still required; this still cheapens robots by a lot.

4. This is deeply coupled to (3). If you have cheap robots, and an AI system can control a robot well enough to do a task as well as a human, obviously it's cheaper to have robots do the task in most situations.

Regarding (3): the specific mechanism would be an AI system that works like this:

Millions of hours of video of human workers doing tasks in the above domains, plus all the video accessible to the AI company -> tokenized, compressed descriptions of the human actions -> an LLM-like model. The LLM-like model is thus predicting "what would a human do". You then need a model to translate that "what" to robotic hardware that is built differently from humans, and this is called the "foundation model": you use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice, to improve on the foundation model.
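Here is a toy sketch of that two-stage shape (imitation from "human" demonstrations, then reinforcement-style fine-tuning in a simulator) on a trivial 1-D reach task. None of this is the actual architecture; it just shows how cloning plus simulated practice fit together:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(gain: float, steps: int = 20) -> float:
    """Tiny 'robot' simulator: move toward a target, return negative final error."""
    x, target = 0.0, 1.0
    for _ in range(steps):
        x += np.clip(gain * (target - x), -0.2, 0.2)  # actuator is limited per step
    return -abs(target - x)

# Stage 1: imitation. "Human" demos map observed error -> action; fit a linear policy
# (a stand-in for the LLM-like model predicting "what would a human do").
errors = rng.uniform(-1, 1, 1000)
demo_actions = 0.9 * errors + rng.normal(0, 0.05, 1000)          # noisy human demonstrations
gain = float(np.sum(errors * demo_actions) / np.sum(errors**2))  # least-squares fit

# Stage 2: reinforcement-style fine-tuning. Perturb the policy in simulation and
# keep changes that score better (a stand-in for RL on real or simulated robots).
best = simulate(gain)
for _ in range(200):
    candidate = gain + rng.normal(0, 0.05)
    score = simulate(candidate)
    if score > best:
        gain, best = candidate, score

print(f"imitated-then-tuned gain: {gain:.2f}, final error: {-best:.4f}")
```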

Long story short, all these tech bro terms add up to robotic generality - the model will be able to control a robot to do every easy or medium difficulty task, the same way it can solve every easy or medium homework problem. This is what lets you automate (3), because you don't need to do a lot of engineering work for a robot to do a million different jobs.

Multiple startups and DeepMind are working on this.

[-] BrickedKeyboard@awful.systems 5 points 1 year ago* (last edited 1 year ago)

I'm trying to find the Twitter post where someone deepfakes Eliezer's voice into saying: full speed ahead on AI development, we need embodied catgirls pronto.

[-] BrickedKeyboard@awful.systems 0 points 1 year ago* (last edited 1 year ago)

> academic AI researchers have passed him by.

Just to be pedantic, it wasn't academic AI researchers. The current era of AI began here: https://www.npr.org/2012/06/26/155792609/a-massive-google-network-learns-to-identify

Since 2012, academic AI researchers have not had the compute hardware to contribute to frontier AI research, except for some who worked at corporate giants (mostly DeepMind) and then went back into academia.

They are getting more hardware now, but the hardware required to stay relevant and to develop a capability that commercial models don't already have keeps increasing. Table stakes are now something like 10,000 H100s, or about $250-500 million in hardware (rough arithmetic below).

https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini
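The rough arithmetic behind that figure (the per-GPU price is my own assumption, roughly $25-40k per H100, before networking, hosts, and datacenter costs):

```python
h100_count = 10_000
unit_cost_low, unit_cost_high = 25_000, 40_000  # assumed per-H100 price in USD

low, high = h100_count * unit_cost_low, h100_count * unit_cost_high
print(f"GPUs alone: ${low/1e6:.0f}M - ${high/1e6:.0f}M")
# ~$250M - $400M for the GPUs, which is how you land in the $250-500M range once the rest of the cluster is included
```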

I am not sure MIRI tried any meaningful computational experiments. They came up with unrunnable algorithms that theoretically might work but would need nearly infinite compute.

[-] BrickedKeyboard@awful.systems -1 points 1 year ago

Having trouble with quotes here:

**I do not find likely that 25% of currently existing occupations are going to be effectively automated in this decade and I don’t think generative machine learning models like LLMs or stable diffusion are going to be the sole major driver of that automation.**

1. I meant 25% of the tasks, not 25% of the jobs - so some combination where AI systems do 90% of some jobs and 10% of others. I was also implicitly weighting by labor hours: if 10% of all the labor hours worked by US citizens are driving, and AI can drive, that would be 10% automation (see the sketch below). Does this change anything in your response?
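As a concrete version of that labor-hour weighting (the task shares and automatable fractions below are made-up numbers, just to show the calculation):

```python
# (task, share of total labor hours, fraction of that task AI can do) - illustrative numbers only
tasks = [
    ("driving",         0.10, 1.0),
    ("writing code",    0.03, 0.5),
    ("answering email", 0.05, 0.4),
    ("everything else", 0.82, 0.1),
]

automated_share = sum(share * frac for _, share, frac in tasks)
print(f"labor-hour-weighted automation: {automated_share:.0%}")  # ~22% with these made-up numbers
```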

**No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless foodless always motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (Where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs’ backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)**

2. I didn't mean "Skynet". I meant AI systems. ChatGPT and all the other LLMs are an AI system. So is Midjourney with ControlNet. So: humans want things. They want robots to make the things. They order robots to make more robots (initially using a lot of human factory workers to kick it off). Eventually robots get really cheap, making the things humans want cheaper, and that's where you get the limited form of Singularity I mentioned.

At all points humans are ordering all these robots and using all the things the robots make. An AI system is many parts: device drivers, hardware, cloud services, many neural networks, simulators, and so on. One thing that might slow it all down is the enormous list of IP needed to make even one robot work: all the owners of all the software packages will still demand a cut, even if the robot hardware is being built by factories staffed almost entirely by robots.

**I just think the threat model of autonomous robot factories making superhuman android workers and replicas of itself at an exponential rate is pure science fiction.**

3. So again, that's a detail I didn't give. Obviously there are many kinds of robotic hardware, specialized for whatever task they do, and the only reason to make a robot humanoid is if it's a sexbot or otherwise used as a "face" for humans. None of the hardware has to be superhuman, though obviously industrial robot arms have greater lifting capacity than humans. Just to give a sense of what the real stuff would look like: most robots will be in no way superhuman, in that they will lack sensors where they don't need them, won't be armored, won't even have onboard batteries or compute hardware, will miss entire modalities of human sense, cannot replicate themselves, and so on. It's just hardware that does a task, made in a factory, and it takes many factories with these machines in them to make all the parts used.

think:

[-] BrickedKeyboard@awful.systems -1 points 1 year ago* (last edited 1 year ago)

It would be lesswrongness.

Just to split where the gap is:

  1. lesswrongers think powerful AGI systems that can act on their own against humans will soon exist, and will be able to escape to the internet.
  2. I work in AI and think powerful general AI systems (not necessarily the same thing as AGI) will exist soon, but if built well will be unable to act against humans without orders, and unable to escape or do many of the other things lesswrongers claim.
  3. You believe AGI of any flavor is a very long way away, beyond your remaining lifespan?

[-] BrickedKeyboard@awful.systems 2 points 1 year ago* (last edited 1 year ago)

Hi David. The reason I dropped by was that the whole concept of claiming to know the distant future with so much certainty seemed like a deep flaw, and I have noticed lesswrong itself is full of nothing but "cultist" AI doomers. Everyone kinda parrots a narrow range of conclusions, mainly that imminent AGI will kill everyone, and this, ironically, doesn't seem very rational...

I actually work on the architecture for current production AI systems, and whenever I mention approaches that do work fine and suggest we could control more powerful AI the same way, I get downvoted. So I was trying to differentiate between:

A. This is a club of smart people, even smarter than the lesswrongers who can't see the flaws in their own arguments!

B. This is a club of, well... the reason I called it "boomers" was that I felt the current news and AI papers make each of the questions I asked a reasonable, conservative outcome. For example, posters here are answering (1) with "no, it won't do 25% of the jobs." That wasn't the question; it was 25% of the tasks. Since Copilot already writes about 25% of my code, and GPT-4 helps me with emails to my boss, from my perspective this is reasonable. The rest of the questions build on (1).

14 points | submitted 1 year ago* (last edited 1 year ago) by BrickedKeyboard@awful.systems to c/sneerclub@awful.systems

First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses - that somehow "rationalists" are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they need to commit whatever violent act is needed to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying "fuck em". This could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and it seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can't solve - robotics, continuous learning, module reuse, the things needed to reach a general level of capability and for AI to do many but not all human jobs - are near-future problems. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, then since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia Ray Kurzweil describes, but a fuckton of robots.

So I was wondering what the people here generally think. There are "boomer" forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect $300k to edit JavaScript and drive Teslas*.

I also have noticed that the whole rationalist schtick of "what is your probability" seems like asking for "joint probabilities", aka smoke a joint and give a probability.

Here's my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains humans work in?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robots, do you believe a form of Singularity will happen? By this I mean hard exponential growth in the number of robots, scaling past all industry on earth today by at least one order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition, where most of the human jobs we have now are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given these will be robot-controlling, advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*"epistemic status": I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas..

