Sure, I have no problem with that.
I'm not one of them, but incredibly, that's probably about 5,000,000x more people than the ones who give a fuck what the average person on here thinks.
I think without anything akin to extrapolation, we just need to wait and see what the future holds. In my view, most people are almost certainly going to be hit upside the head in the not-too-distant future. Many people haven't even considered what a world might be like where pretty much all the jobs people are doing now are easily automated. It's almost like, instead of considering this, they're just clinging to the idea that the 100-meter wave hanging above us couldn't possibly crash down.
I think having it give direct quotes and specific sources would help your experience quite a bit. I absolutely agree that if you just use the simplest forms of current LLMs and the "hello world" agent setups, there are hallucination issues and such, but a lot of this is no longer an issue once you get deeper into it. It's just a matter of time until the stuff most people can easily use has this baked in; none of it is impossible. I mean, I pretty much always have my agents tell me exactly where they got all their information from. The exception is when I have them writing code, because there the proof is in the results.
Anybody who gets so triggered and defensive when someone points out how disgusting factory farms are doesn't have a diet that they are proud of. Whether your cognitive dissonance allows you to acknowledge that or not is a different story.
Yeah, you may be able to get all the way to a playable game if you use that prompt in a well set up AutoGen app. I would be interested to see if you give it a shot, so please share if you do. It's such a cool time to be alive for "idea" people!
Yeah, you are definitely onto something there. If you are interested in checking out the current state of this, it is called "AutoGen". You can think of it like a committee of voices inside the bot's head. It takes longer to get stuff out, but it is much higher quality.
It is basically a group chat of bots working together on a common goal, but each with its own special abilities (internet access, APIs, code-running ability...), its own focus, concerns, etc. It can be used to make anything; most projects right now seem to be focused on application development, but there is no reason it can't be stories, movie scripts, research papers, whatever. For example, you can have a main author; an editor that's fine-tuned on some editing guidelines/books; a few different fact checkers with access to the internet or datasets of research papers (or whatever reference materials), who are required to list sources for anything the author says (if no source can be found, the fact checkers tell the author, who must revise what they've written); and whatever other agents you can dream up. People are using designers, marketers, CEOs... Then you plug in some API keys, maybe give them a token limit, and let them run wild.
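To make the author/fact-checker loop concrete, here's a minimal plain-Python sketch of the pattern. To be clear, this is not the actual AutoGen API — the `author` and `fact_checker` functions are hypothetical stand-ins for LLM agent calls, just to show the revise-until-sourced control flow:

```python
# Toy sketch of the multi-agent revision loop described above.
# NOT the real AutoGen API: author() and fact_checker() are
# hypothetical stand-ins for LLM-backed agents.

def author(task, feedback=None):
    """Stand-in 'author' agent: revises and cites when given feedback."""
    if feedback:
        return {"text": f"{task} [revised]", "sources": ["paper-123"]}
    return {"text": task, "sources": []}  # first draft, no sources yet

def fact_checker(draft):
    """Stand-in fact-checking agent: rejects any draft without sources."""
    if not draft["sources"]:
        return "Please cite a source for every claim."
    return None  # no objections

def run_group_chat(task, max_rounds=5):
    """Loop the agents until the fact checker has no objections."""
    draft = author(task)
    for _ in range(max_rounds):
        feedback = fact_checker(draft)
        if feedback is None:
            return draft  # accepted
        draft = author(task, feedback=feedback)
    return draft

result = run_group_chat("LLMs can pass the bar exam")
```

In a real AutoGen setup each of those functions would be its own agent with its own system prompt and tools, and a group-chat manager would decide who speaks next, but the back-and-forth shape is the same.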
A super early version of this idea was ChatDev. If you don't want to go down the whole rabbit hole and just want a quick glimpse, skip ahead to 4:25, where ChatDev shows an animated visual representation of what is happening. These days AutoGen is where it's at, though; the same guy has a bunch of videos on it if you are looking to go a bit deeper.
Yeah, to be clear, I'm not arguing that current LLMs are as creative and intelligent as people.
I am saying that even before babies get human language input, they still get input from people just to be made in the first place: the baby's algorithm that produces that spark is modeled on previous humans via the human data that is DNA. Future intelligent AIs will also be made from data that humans produce. Even our current LLMs are not purely human language input; they also have an algorithm doing something with that data, which gives them the (albeit relatively weak) "intelligent spark" they had before they got all that human language input.
Chatbots are not new; they date back to around 1965. Objectively, GPT-4 is more creative than the chatbots of 1965. The two are not equally able to create. This is an ongoing change: in the future, AI will be more creative than today's most creative AIs. AI will most likely continue on its trajectory, and some day, if we don't all get destroyed, it will eventually be more intelligent and creative than humans.
I would love to hear a rebuttal to this that doesn't just base its argument on the fact that AI needs human language input. A baby and its spark are not impressively intelligent. What makes a baby intelligent is its initial algorithm plus the fact that it gets human language data. Requiring that AI do what the baby does, but without the human language data that babies get, makes no sense to me as a requirement.
Is this how you see human intelligence? Is human intelligence made without the input of other humans? I understand that even babies have some sort of spark before they learn anything from other people, but don't they have the human DNA input from their human parents? Why should the requirement for AI intelligence be "no human input" when even human intelligence seemingly requires human input to be made?
Sorry, lots of questions, just food for thought I suppose.
Really easy way to get it set up:
https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/
You need to clone the ComfyUI Git repo first.
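Roughly, the setup looks something like this (commands based on the ComfyUI README; where exactly you drop the SD Turbo checkpoint is covered on the linked examples page):

```shell
# Clone the ComfyUI repo and install its Python dependencies
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt

# Put the SD Turbo checkpoint into models/checkpoints/
# (see the linked examples page for the workflow), then start the UI
python main.py
```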
Thanks! That's so cool