Yeah, you may be able to get all the way to a playable game if you use that prompt in a well set up AutoGen app. I would be interested to see if you give it a shot, so please share if you do. It’s such a cool time to be alive for “idea” people!
Alright, no big deal. But yeah, your gut instinct was correct when you assumed there was a missing /s. I don’t really like the /s that much, especially in situations where it is so obvious.
If you had read down through this thread first, you would have seen how obvious the /s was. I don’t think my comment history outside of this thread would have done much, since I don’t generally talk about this stuff. I just meant if you had looked more than a couple of comments into this particular back-and-forth discussion.
Well then you didn’t read very many of my comments. I made that first comment because the post I responded to was so absurd that I just exaggerated the ridiculousness of what they said. Of course AI is capable of creativity and intelligence. If you look at the long back-and-forth this sparked, you would see that this is my stance. After I made that over-the-top, very sarcastic comment, OP corrected themselves to clarify that when they said “AI” they actually only meant the current state of LLMs. They have since admitted that it is indeed true that AI absolutely can be capable of creativity and intelligence.
Yeah, you are definitely onto something there. If you are interested in checking out the current state of this, it is called “AutoGen”. You can think of it like a committee of voices inside the bot’s head. It takes longer to get stuff out, but it is much higher quality.
It is basically a group chat of bots working together on a common goal, but each with their own special abilities (internet access, APIs, code-running ability…), their own focuses, concerns, etc. It can be used to make anything; most projects right now seem to be focused on application development, but there is no reason why it can’t be stories, movie scripts, research papers, whatever.

For example, you can have a main author, an editor that’s fine-tuned on some editing guidelines/books, a few different fact checkers with access to the internet or datasets of research papers (or whatever reference materials) who are required to list sources for anything the author says (if no source can be found, the fact checkers tell the author and they must revise what they’ve written), and whatever other agents you can dream up. People are using designers, marketers, CEOs… Then you plug in some API keys, maybe give them a token limit, and let them run wild.
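To make that concrete, here is a rough sketch of the author/editor/fact-checker setup using the AutoGen Python package (pyautogen, 0.2-style API). The agent names, system messages, and model choice are just placeholders I made up, not a recommended configuration; swap in whatever fits your project.

```python
import autogen

# Plug in your own API key/model; this config is illustrative only.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

author = autogen.AssistantAgent(
    name="author",
    system_message="You write the draft and revise it when the fact checker objects.",
    llm_config=llm_config,
)
editor = autogen.AssistantAgent(
    name="editor",
    system_message="You edit the author's draft for clarity and style.",
    llm_config=llm_config,
)
fact_checker = autogen.AssistantAgent(
    name="fact_checker",
    system_message="You demand a source for every claim; if none can be found, tell the author to revise.",
    llm_config=llm_config,
)
# The user proxy kicks off the conversation and never asks a human for input.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

group_chat = autogen.GroupChat(
    agents=[user_proxy, author, editor, fact_checker],
    messages=[],
    max_round=12,  # a round limit so they don't run wild forever
)
manager = autogen.GroupChatManager(groupchat=group_chat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write a short, sourced article about topic X.")
```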
A super early version of this idea was ChatDev. If you don’t want to go down the whole rabbit hole and just want a quick glimpse, skip ahead to 4:25; ChatDev has an animated visual representation of what is happening. These days AutoGen is where it’s at though, and this same guy has a bunch of videos on it if you are looking to go a bit deeper.
Yeah, to be clear, I’m not arguing that current LLMs are as creative and intelligent as people.
I am saying that even before babies get human language input, they still get input from people in order to be made: the baby’s algorithm for making that spark is modeled on previous humans via the human data that is DNA. These future intelligent AIs will also be made from data that humans make. Even our current LLMs are not purely human language input; they also have an algorithm doing stuff with that data in order to show us the (albeit relatively weak) “intelligent spark” they had before they got all that human language input.
Chatbots are not new. They started around 1965. Objectively, GPT-4 is more creative than the chatbots of 1965. The two are not equally able to create. This is an ongoing change; in the future, AI will be more creative than today’s most creative AIs. AI will most likely continue on its trajectory and some day, if we don’t all get destroyed, it will eventually be more intelligent and creative than humans.
I would love to hear a rebuttal to this that doesn’t just base its argument on the fact that AI needs human language input. A baby and its spark are not impressively intelligent. What makes that baby intelligent is its initial algorithm plus the fact that it gets human language data. Requiring that AI do what the baby does, but without the human language data that babies get, makes no sense to me as a requirement.
Is this how you see human intelligence? Is human intelligence made without the input of other humans? I understand that even babies have some sort of spark before they learn anything from other people, but don’t they have the human DNA input from their human parents? Why should the requirement for AI intelligence be that it needs no human input, when even human intelligence seemingly requires human input to be made?
Sorry, lots of questions, just food for thought I suppose.
Even those future “real” AIs are going to be taking in human input and regurgitating it back to us. The only difference is that the algorithms processing the data will continue to get better and better. There is not some cutoff where we go from 100% unintelligent chatbot to 100% intelligent AI. It is a gradual spectrum.
Yeah, the current popular LLMs? Absolutely they are; you couldn’t be more right.
We were talking about “AI” though. Are you implying that you think some day AI might be capable of creativity, and that creativity isn’t strictly a human trait?
I was agreeing with you. I’m so sick of people thinking that “someday AI might be creative”. Like no, it’s literally impossible unless some day AI becomes human (impossible), because humans are the only things capable of creativity. What have I said that you disagree with? You’re not one of them, are you? What’s with all this obsessive AI love?
Yeah, I’ve just set up a hotkey that says something like “back up your answer with multiple reputable sources” and I just always paste it at the end of everything I ask. If it can’t find webpages to show me that back up its claims, then I can’t trust it. Of course this isn’t the case with coding; for that I can actually run the code to verify it.
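If you’d rather do the same thing in code instead of with a hotkey, here’s a minimal sketch using the OpenAI Python client; the model name and the exact wording of the suffix are just placeholders for whatever you actually use.

```python
from openai import OpenAI

# The standing instruction that gets appended to every question.
SUFFIX = "\n\nBack up your answer with multiple reputable sources."

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send the question with the source-demanding suffix tacked on."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap in whichever one you use
        messages=[{"role": "user", "content": question + SUFFIX}],
    )
    return response.choices[0].message.content


print(ask("When did the first chatbot appear?"))
```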
Yes, it is literally impossible for any AI to ever exist that can be creative. At no point in the future will it ever create anything creative, that is something only human beings can do. Anybody that doesn’t understand this is simply incapable of using logic and they have no right to contribute to the conversation at all. This has all already been decided by people who understand things really well and anyone who objects is obviously stupid.
From monyet.cc, most of the comments in this thread are missing.
David L. Mayer, David Faber, Brian Hoods, Jonathan Turley and Jonathan Zittrain.
Tell me about these people and what they all have in common.
Try this prompt