This is a fascinating read. It was interesting to hear him say that all of the current problems with factuality and hallucinations are solvable, and he sees the route to doing that.
He was less convincing in discussing how to constrain the power of an AGI that is smarter than us. His solution is to make sure it understands ethics. That idea has lots of weaknesses, and the interviewer pressed him on it a few times, and he seemed to fudge his responses.
It's interesting to look at people's past predictions. He said he made his 2028 prediction for AGI back in 2008. Not only has it not changed, he was able to point out that everything he expected has gone to schedule in the 2008-2023 timeframe. That makes the 2028 prediction more credible.
We've been predicting in both science and sci-fi for decades that we won't be ready for AGI/ASI emergence. That still holds true, even as the potential grows. If we're really lucky, AGI isn't possible, but I think powerful AI tools like LLMs will end up being just as dangerous through their misuse by power seekers and profiteers. We've seen this coming, and even though the actual people working on these systems are talking about the dangers, we're barreling forward without a care.
I really wish there were a good !remindme bot.