[-] BrickedKeyboard@awful.systems -2 points 1 year ago* (last edited 1 year ago)

The major take is: We spell it differently.

I am too dumb/autistic to know what you're conveying here.

[-] BrickedKeyboard@awful.systems -1 points 1 year ago

The counterargument is GPT-4. For the domains this machine has been trained on, it shows a large amount of generality - it captures a lot of that real-world complexity and dirtiness. Reinforcement learning can make it better.

Or in essence: if you collect colossal amounts of information (yes, pirated from humans) and then choose what to do next by asking "what would a human do?", that does seem to solve the generality problem. You then fix the machine's mistakes with RL updates when it fails on a real-world task.
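
Here's a toy sketch of the shape of that recipe - definitely not OpenAI's actual pipeline, and all the data and names are made up: imitate logged human choices first, then nudge the policy with a reward signal when it fails on a real task.

```python
# Toy sketch: "what would a human do" imitation, patched with an RL-style update on failure.
import random
from collections import defaultdict

# Hypothetical logged data: (situation, action a human took)
human_log = [("obstacle_ahead", "turn"), ("obstacle_ahead", "turn"),
             ("clear_road", "go"), ("clear_road", "go"), ("clear_road", "turn")]

# Step 1: count-based imitation policy built from the human log.
counts = defaultdict(lambda: defaultdict(float))
for state, action in human_log:
    counts[state][action] += 1.0

def policy(state):
    # Sample an action in proportion to how often humans took it in this state.
    acts = counts[state]
    r = random.uniform(0, sum(acts.values()))
    for a, c in acts.items():
        r -= c
        if r <= 0:
            return a

# Step 2: when the imitated behaviour fails on a real task, adjust the
# action weight with the reward signal (a bare-bones RL-style update).
def rl_update(state, action, reward, lr=0.5):
    counts[state][action] = max(0.1, counts[state][action] + lr * reward)

# e.g. "go" on a clear road worked, "turn" on a clear road did not:
rl_update("clear_road", "go", reward=+1.0)
rl_update("clear_road", "turn", reward=-1.0)
print(policy("clear_road"))
```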

[-] BrickedKeyboard@awful.systems -1 points 1 year ago* (last edited 1 year ago)

Did this happen with Amazon? VC money is a catalyst: it advances money in exchange for a share of future revenues. If AI companies can establish a genuine business that collects revenue from customers, they can reinvest some of that money into improving the model, and so on.

OpenAI specifically seems to have needed about 5 months to reach a 1 billion USD annual revenue run rate; the way tech companies are valued, that already implies more than 10 billion in intrinsic value.
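
The back-of-envelope I'm doing (the ~$80M/month figure comes from the Fortune article linked below; the 10x revenue multiple is just my assumption):

```python
# Rough run-rate and valuation arithmetic, all inputs assumed/rounded.
monthly_revenue = 80e6                       # ~$80M/month reported by Fortune
annual_run_rate = monthly_revenue * 12       # ~$0.96B/year, i.e. roughly $1B
revenue_multiple = 10                        # assumed tech-sector revenue multiple
implied_value = annual_run_rate * revenue_multiple
print(f"run rate ~ ${annual_run_rate/1e9:.2f}B, implied value ~ ${implied_value/1e9:.1f}B")
```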

If they can't - if the AI models remain too stupid to pay for - then obviously there will be another AI winter.

https://fortune.com/2023/08/30/chatgpt-creator-openai-earnings-80-million-a-month-1-billion-annual-revenue-540-million-loss-sam-altman/

[-] BrickedKeyboard@awful.systems -1 points 1 year ago

I agree completely. This is exactly where I break with Eliezer's model. Yes, obviously an AI system that can self-improve can only do so until either (1) it's running the best algorithm that fits on the server farm, or (2) finding a better algorithm takes more compute than the improvement is worth.

That's not a god. Do this in an AI experiment now and it might crap out at double the starting performance or less, and still not be above the SOTA.

But if robots can build robots, and current AI progress shows a way to do it (a foundation model trained on human tool manipulation), then...

Genuinely asking - I don't think it's "religion" to suggest that a huge speedup in global GDP growth would be a dramatic event.

[-] BrickedKeyboard@awful.systems -1 points 1 year ago

Currently the global economy doubles every 23 years. Robots building robots and robot-making equipment can probably double faster than that. It won't be in a week or a month; energy requirements alone limit how fast it can happen.

Suppose the doubling time is 5 years, just to put a number on it. The growth rate would then be about 4.6 times higher (23/5), so over what used to be a single doubling period the economy would grow roughly 24-fold instead of 2-fold. This continues until the solar system runs out of matter.
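
Putting numbers on that (my own arithmetic, continuous compounding assumed):

```python
import math

def annual_growth_rate(doubling_years):
    # continuous-compounding growth rate implied by a doubling time
    return math.log(2) / doubling_years

baseline = annual_growth_rate(23)   # ~3% per year (today's economy)
robotic = annual_growth_rate(5)     # ~14% per year (the assumed 5-year doubling)
print(f"growth rate ratio: {robotic / baseline:.1f}x")            # 23/5 = 4.6x faster
print(f"size after 23 years: {2 ** (23 / 5):.0f}x vs 2x today")   # ~24x instead of 2x
```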

Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you "priced in" this possibility in your world view?

[-] BrickedKeyboard@awful.systems -2 points 1 year ago

This pattern shows up often when people are trying to criticize Tesla or SpaceX. And yeah, if you measure "current reality" vs "promises of their hype man/lead shitposter and internet troll", absolutely. Tesla probably will never achieve full self-driving using anything like their current approach. But if you compare Tesla to other automakers - to most automakers that have ever existed - or SpaceX to any rocket company since 1970, there's no comparison. If you're going to compare the internet to pre-internet, compare it to a BBS you would access via modem, or to fax machines, or to libraries. No comparison.

Similarly, you should compare GPT-4, and the next large model to be released, Gemini, against all AI software ever written. There's no comparison.

[-] BrickedKeyboard@awful.systems -2 points 1 year ago

"take some time and read this"

I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new. People were fooled by Eliza.

It's a weak argument, though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means "if the machine is given a task, what is the probability it completes the task successfully?". Theoretically an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).

People have benchmarked GPT-4 and it's got general functional intelligence at tasks that can be done on a computer. You can also just go pay $20 a month and try it. It's below human level overall, I think, but still surprisingly strong given that it's emergent behavior from computing tokens.
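
To make that concrete, here's a toy sketch of what I mean by measuring functional intelligence - the benchmark harness run_task() and the tasks are entirely made up:

```python
# Toy sketch: functional intelligence as the fraction of tasks completed successfully.
def functional_intelligence(model, tasks, run_task):
    """Estimate P(task completed successfully) over a set of tasks."""
    successes = sum(1 for task in tasks if run_task(model, task))
    return successes / len(tasks)

# Fake harness that just checks hard-coded answers (None = no accepted answer):
answers = {"2+2": "4", "capital of France": "Paris", "prove P != NP": None}
fake_model = {"2+2": "4", "capital of France": "Paris", "prove P != NP": "dunno"}
run_task = lambda model, task: answers[task] is not None and model.get(task) == answers[task]
print(functional_intelligence(fake_model, list(answers), run_task))  # 2/3 ~ 0.67
```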

[-] BrickedKeyboard@awful.systems -2 points 1 year ago

Just to engage with the high school bully analogy: for years now the nerd has been threatening to show up with his sexbot bodyguards that are basically T-800s from Terminator, and you've been taking his lunch money and sneering. But now he's got real funding, he goes to work at a huge building, and apparently there are prototypes of the exact thing he claims he'll build inside.

The prototypes suck...for now...

[-] BrickedKeyboard@awful.systems -2 points 1 year ago* (last edited 1 year ago)

No, literally, the course material uses the word "belief". It means "at this instant, what is the estimate of the ground truth?".

Those shaky blue lines that show where your Tesla on Autopilot thinks the lane is? That's its belief.

English and software have lots of overloaded terms.

[-] BrickedKeyboard@awful.systems -1 points 1 year ago

The one issue I have is: what if some of their beliefs turn out to be real? How would it change things if Scientologists got a two-way communication device - say they found it buried in Hubbard's backyard or whatever, and it appears to be non-human technology - and were able to talk to an entity who claims it is Xenu? That doesn't mean their cult religion is right, but say the entity is obviously nonhuman, it rattles off methods for building devices that current science has no way to build, other people build the devices and they work, and YOU can pay $480 a year and get FTL walkie-talkies or some shit sent to your door. How does that change your beliefs?

[-] BrickedKeyboard@awful.systems -2 points 1 year ago

Software you write can have a "belief" as well. The course I took on it had us write Kalman filters, where you start with some estimate of a quantity. That estimate is your "belief", and you have a variance as well.

Each measurement has a (value, variance) pair, where the variance is derived from the quality of the sensor that produced it.
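
Here's a minimal sketch of that update step for a single scalar (my own toy code, not the course's):

```python
# A 1-D "belief" as (estimate, variance), fused with measurements that carry their own variance.
def kalman_update(belief_mean, belief_var, meas_value, meas_var):
    """One 1-D Kalman filter update step: fold a measurement into the current belief."""
    k = belief_var / (belief_var + meas_var)   # Kalman gain: how much to trust the measurement
    new_mean = belief_mean + k * (meas_value - belief_mean)
    new_var = (1 - k) * belief_var             # uncertainty shrinks after every update
    return new_mean, new_var

# Start with a rough belief, then fold in two sensor readings:
mean, var = 0.0, 10.0                          # high variance = very uncertain belief
mean, var = kalman_update(mean, var, meas_value=2.0, meas_var=1.0)
mean, var = kalman_update(mean, var, meas_value=2.4, meas_var=1.0)
print(mean, var)   # the belief moves toward the measurements and the variance drops
```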

It's an overloaded word because humans are often unwilling to update their beliefs unless they're simple things, like "I believe the forks are in the drawer to the right of the sink". You believe that because you think you saw them there last. There is uncertainty - you might have misremembered, since your memory is unreliable and your eyes are unreliable. If it's your kitchen and you've had thousands of observations, your belief has low uncertainty; if it's a new place, your belief has high uncertainty.

If you go and look right now and the forks are in fact there, you update your belief.

[-] BrickedKeyboard@awful.systems -2 points 1 year ago* (last edited 1 year ago)

Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.

...What if telescopes show a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You can pay $20 a month to rent a telescope and see the flare yourself.

The cult, uh, points to their "sequences" of writings by the Great Leader, and some of it does line up with the imminent arrival of this interstellar vehicle.

My point is that LessWrong knew about GPT-3 years before the mainstream found it; many OpenAI employees post there, etc. If the imminent arrival of AI were fake - like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs - that would be one thing. But I mean, pay $20 a month and man, this tool seems to be smart. What could it do if it could learn from its mistakes and had the vision module deployed...

Oh, and I guess the other plot twist in this analogy: the Great Leader is now saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants - and that's actually not a totally unreasonable outcome if you could see an alien spacecraft approaching Earth.

And he's telling people to do stupid stuff like nuke each other so the aliens will go away, among other unhinged rants, and his followers are eating it up.
