When I talk about "formal language use" I am referring to the term as used in the paper https://arxiv.org/pdf/2301.06627.pdf. Why is this a huge step towards AGI? Because a system with general intelligence will be one that has mastered language use, both formal and functional as the paper defines them.

Interestingly, the very limitations of current LLMs, such as hallucinations and poor logical reasoning, can be addressed by LLMs themselves through a process known as autoformalization (https://arxiv.org/pdf/2205.12615.pdf). The authors teach an LLM to translate natural language into a "formal" language (essentially a computer program). They translate into Isabelle, a language used for verifying mathematical proofs.

What would this enable? Imagine you give an LLM a math problem and ask it to solve it. If you have an agent that can tell whether the LLM's solution is correct, you can use this setup to train the LLM via reinforcement learning. Autoformalization acts as that agent: the LLM's natural-language solution is translated into Isabelle and checked by the Isabelle proof assistant. If the solution verifies, the LLM receives a positive reward; if it is wrong, a negative one. And who does the translation? An LLM itself!

How is this connected to AGI? It is a way to induce reasoning in language models. Pretty much any real-world problem (barring some, thanks to the incompleteness theorems) can be stated from a set of axioms, and its solution can then be proved in a mathematical sense. This would let LLMs master functional language use as well, and would make them far more grounded.
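To make the loop concrete, here is a minimal sketch of the verify-then-reward setup described above. Every function here is a hypothetical stand-in of my own: a real system would call an LLM for `solve()` and `autoformalize()`, and invoke the actual Isabelle proof checker for `verify()`; the toy arithmetic check is just a placeholder.

```python
def solve(problem: str) -> str:
    """Stand-in for the LLM producing a natural-language solution."""
    return "1 + 1 = 2" if problem == "add one and one" else "no idea"


def autoformalize(solution: str) -> str:
    """Stand-in for the LLM translating natural language into a formal
    statement (in the paper, the target language is Isabelle)."""
    return solution.replace(" ", "")  # toy 'formalization'


def verify(formal_statement: str) -> bool:
    """Stand-in for the Isabelle proof checker: here, a toy check
    that only handles simple integer equations."""
    try:
        lhs, rhs = formal_statement.split("=")
        return eval(lhs) == int(rhs)  # toy check; real systems never eval
    except Exception:
        return False


def reward(problem: str) -> int:
    """+1 if the formalized solution verifies, -1 otherwise --
    the reinforcement signal the comment describes."""
    formal = autoformalize(solve(problem))
    return 1 if verify(formal) else -1
```

The point of the sketch is the shape of the pipeline, not the stubs: the verifier gives a ground-truth reward without any human labeling, which is what makes the RL setup attractive.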
The beauty of LLMs is that they bridge the gap between natural language and formal computer programs. This, together with their few-shot learning capabilities, shows that LLMs are indeed a huge leap towards AGI.
Particular_Number_68 OP t1_jabcuie wrote
Reply to comment by LeCodex in The naivety of arguments on both sides of the AGI debate is quite frustrating to look at by Particular_Number_68