That's not Sam Altman saying that LLMs will achieve AGI. LLMs are large language models. OpenAI is continuing to develop LLMs (like GPT-4o), but they're also working on frameworks that use LLMs (like o1). Those frameworks may achieve AGI, but not the LLMs themselves. That's a very important distinction, because LLMs are reaching performance parity with each other, so we are likely approaching a plateau for LLMs given the existing training data and techniques. There are still optimizations left for LLMs, like increasing context window sizes, etc.
When has Sam Altman said LLMs will reach AGI? Can you provide a primary source?
I'm developing some human-centric LLM frameworks at work. Every API request to OpenAI is currently subsidized by venture capital, so I do worry about what the industry will look like once there's a big price adjustment. Locally run models are pretty decent now and the pace is still moving forward, especially with regard to context window sizes, so as long as I keep the frameworks model-agnostic it might not be a big impact.
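Roughly what I mean by model-agnostic, as a minimal sketch (the class and function names are hypothetical; it assumes the official openai Python SDK and, for the local case, any server that happens to expose an OpenAI-compatible endpoint):

```python
# Minimal sketch of a model-agnostic chat interface (hypothetical names).
from abc import ABC, abstractmethod


class ChatBackend(ABC):
    """Anything the framework talks to: hosted API or local model."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIBackend(ChatBackend):
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI  # pip install openai
        self.client = OpenAI()     # reads OPENAI_API_KEY from the environment
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class LocalBackend(ChatBackend):
    """Talks to a locally hosted model behind an OpenAI-compatible server."""

    def __init__(self, base_url: str = "http://localhost:8080/v1", model: str = "local-model"):
        from openai import OpenAI
        # Many local servers accept any API key; adjust if yours checks it.
        self.client = OpenAI(base_url=base_url, api_key="not-needed")
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


def summarize(backend: ChatBackend, text: str) -> str:
    # The framework only depends on ChatBackend, so swapping providers
    # (or going fully local after a price adjustment) is a one-line change.
    return backend.complete(f"Summarize this in one sentence:\n{text}")
```

The framework code only ever sees ChatBackend, so moving from the hosted API to a local model later is a swap of the backend object, not a rewrite.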
I don't think anyone in the industry thought LLMs were going to reach AGI. But LLMs will be useful as part of an AGI framework. That's the current focus in the industry.
It's not a binary question, but a good opener would be to ask the panel what AI tools they use. That would also help set the audience's expectations for the level of follow-up questions.
Hosting locally has become even more important given the vulnerability of submarine cables.
I was lucky enough to get to one of those drive-thru vaccination clinics in Kingston. They were great, so well organized. I hope this gets sorted out quickly for Dr Ma.
If women had on average 1.5 children, it would still take about 1,000 years before the population was reduced to just a million humans (roughly the number of humans around 10,000 BCE). We're not going anywhere soon.
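Back-of-envelope version of that arithmetic (the 8 billion starting point and ~30-year generation length are my assumptions):

```python
import math

# A total fertility rate of 1.5 means each generation is roughly
# 1.5 / 2 = 0.75 times the size of the previous one.
start = 8_000_000_000      # current population, roughly
target = 1_000_000         # population around 10,000 BCE, roughly
shrink = 1.5 / 2

generations = math.log(target / start) / math.log(shrink)
years = generations * 30   # assuming ~30 years per generation

print(round(generations), "generations, about", round(years), "years")
# -> roughly 31 generations, on the order of 900-1,000 years
```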
Most Americans and Russians are great people. Unfortunately even great people can fall for propaganda.
Google has become an awful company. I'm in the process of degoogling, but it's not easy given all the monopolies they've created.
I'm a Demodex folliculorum and I'm currently dating a Demodex brevis so I'm somewhat of an expert. Our host is pretty gross and rarely showers which has made the real estate in this area really expensive. We've been trying to move to another host but the opportunity hasn't come up yet. Anyway, to answer your question, we have scuba gear.
I'm not defending Sam Altman or the AI hype. A framework that uses an LLM isn't an LLM and doesn't have the same limitations. So the (accurate) media coverage saying LLMs may have reached a plateau doesn't mean we won't see continued performance gains from frameworks that use LLMs. OpenAI's o1 is an example. o1 isn't an LLM; it's a framework that works around some of the deficiencies of LLMs with other techniques. That's why it doesn't give you an immediate streamed response when you use it: it's not just an LLM.
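To be clear, o1's internals aren't public, so this is just a toy illustration of the general pattern I mean by "a framework around an LLM": extra model calls and checks happen before anything is returned, which is why there's no immediate token stream. The model and function names here are placeholders.

```python
# Toy sketch of a framework wrapping an LLM, not a description of how o1
# actually works. The wrapper makes several model calls (plan, critique,
# final answer) before returning anything to the user.
from openai import OpenAI  # assumes the official openai SDK; any backend would do

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def framework_answer(question: str) -> str:
    # Step 1: have the model produce a hidden plan / working-out.
    plan = ask(f"Think step by step about how to answer this, but don't answer yet:\n{question}")

    # Step 2: have the model critique its own plan.
    critique = ask(f"Question: {question}\nPlan: {plan}\nList any flaws in this plan.")

    # Step 3: only now produce the final, user-visible answer.
    return ask(
        f"Question: {question}\nPlan: {plan}\nCritique: {critique}\n"
        "Give a concise final answer."
    )
```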