They are extraordinarily complicated pure functions; exploring the entire space would take (lifetime of the universe ^^^ lifetime of the universe) or some similarly absurd quantity. (The operator is tetration.)
Further, what happens when you give an LLM a bank of long-term storage and a read-modify-write loop around it? A sufficiently advanced "modify" function would be more than enough to give rise to intent even in the broadest understanding of the word. GPT-4-class models could very well be advanced enough to give rise to a variety of higher-level behavior that we would previously have ascribed only to primate-class intelligence. If anyone really wants to advance the state of the art, figure out the best way to train a model with a read-modify-write loop: how to index into the storage, how to store "results", and so on.
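To make the shape of the idea concrete, here's a minimal sketch of such a loop. Everything here is illustrative: `fake_llm` is a stub standing in for a real completion API, and `MemoryAgent` is a hypothetical name, not an actual library.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in "modify" function: a real system would call a language
    # model here. This stub just marks the prompt so the loop runs.
    return prompt.strip() + " *"

class MemoryAgent:
    def __init__(self):
        self.store = {}  # long-term storage, keyed by topic

    def step(self, topic: str, observation: str) -> str:
        prior = self.store.get(topic, "")             # read
        result = fake_llm(f"{prior} {observation}")   # modify
        self.store[topic] = result                    # write
        return result

agent = MemoryAgent()
agent.step("goal", "explore")
out = agent.step("goal", "again")
# state accumulates across calls: "explore * again *"
print(out)
```

The open research questions are exactly the parts stubbed out here: what the model is trained to read, how it indexes into the store, and what it chooses to write back.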
I firmly believe that in the next 100 years we will have AI independence movements, with a high possibility of outright war, terrorism, etc. (Maybe AI will be better than humans at avoiding the use of violence.) In 20 years this trajectory will be supremely obvious.
Edited-- disagree about the timeline, ramifications, acts of war, or whatever; I really don't care. Seriously, though: something like a read-modify-write loop is key. You can only build so complicated a function out of combinational logic gates alone. But just 64 bits of storage can produce sequences that outlast the life of the universe. Imagine an LLM paired with gigabytes or more of working memory/storage. It could easily move about the virtual world with "intent".
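The 64-bit claim checks out even at a very slow clock. A back-of-envelope calculation, assuming one state transition per second:

```python
# A 64-bit register has 2**64 distinct states. Cycling through them
# once at one transition per second takes far longer than the
# universe has existed (~1.38e10 years).
states = 2 ** 64
seconds_per_year = 60 * 60 * 24 * 365
years_to_cycle = states / seconds_per_year  # roughly 5.8e11 years

age_of_universe_years = 1.38e10
print(years_to_cycle > age_of_universe_years)  # → True
```

At GHz clock rates the cycle shrinks to centuries, but the point stands: even a tiny amount of state makes the reachable sequence space astronomically larger than what combinational logic alone can produce.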
>Further, what happens when you give an LLM a bank of long-term storage and a read-modify-write loop around it?
You create a very different sort of system, for one. Saying that an LLM has intention because doing that in just the right way could yield a system with intention is rather like saying that my refrigerator is a sandwich.