comefromspace t1_j1zm1fv wrote

I am aware of some of the philosophy of language, but I prefer to look at the neuroscientific findings instead. Language is a human construct that doesn't really exist in nature - communication does, which in humans is the exchange of mental states between brains. The structure of language follows from abstracting the physical world into compact communicable units, and syntax is a very important byproduct of this process. I am more interested in seeing how the hierarchical structure of language arises in computational models like LLMs, which are open to empirical investigation. Most folk-linguistic theories are conjecture supported only by circumstantial evidence.
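To give a concrete sense of what "open to empirical investigation" means, here is a minimal sketch of one common probe, assuming the HuggingFace `transformers` library and the `bert-base-uncased` checkpoint (both my choices, not anything specific to GPT): test whether a masked language model tracks hierarchical agreement rather than linear word order.

```python
# Sketch: probe whether a masked LM respects hierarchical (not merely
# linear) structure, via long-distance subject-verb agreement.
# Assumes: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The verb must agree with the head noun "keys", not with the linearly
# closer distractor "cabinet" -- a hierarchical dependency.
sentence = "The keys to the cabinet [MASK] on the table."

for result in fill(sentence, targets=["are", "is"]):
    print(result["token_str"], round(result["score"], 4))
# If "are" outscores "is", the model is tracking the hierarchical
# subject rather than the nearest noun.
```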

−4

comefromspace t1_j1z64ux wrote

Language is syntax, and LLMs excel at it. I think it is interesting to note that GPT improved with learning programming, because programming languages follow exact syntactic rules, which are rules of symbol manipulation. But it seems those rules also transfer well to ordinary language, which is much fuzzier and more ambiguous. Transformers do seem to be exceptional at capturing syntactic relationships without necessarily knowing what it is they are talking about (that is, abstractly). And math is all about manipulating abstract entities.
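As a small illustration of how exact those rules are (a sketch using Python's standard `ast` module, nothing GPT-specific), a parser turns a flat string of symbols into the hierarchical tree that the syntax defines:

```python
# Programming-language syntax is a set of exact symbol-manipulation
# rules that map flat text to a hierarchical tree.
import ast  # standard library; indent= requires Python 3.9+

tree = ast.parse("y = f(x) + 2 * x")
print(ast.dump(tree, indent=2))
# The assignment, the call f(x), and the product 2 * x each become
# nested nodes -- structure recovered purely from syntactic rules.
```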

I think symbol manipulation is something that transformers will continue to excel at. After all, it's not that difficult - Mathematica does it. The model may not understand the consequences of its inventions, but it will definitely be able to come up with proofs, models, theorems, physical laws, etc. If the next GPT is multimodal, it might be able to reason about its sensory inputs as well.
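In the same spirit, here is a sketch of that kind of rule-based symbol manipulation using `sympy` (an open-source stand-in for Mathematica's symbolic engine; my choice of library, not the commenter's):

```python
# Rule-based symbol manipulation in the Mathematica style, via sympy.
# Assumes: pip install sympy
import sympy as sp

x = sp.symbols("x")
expr = sp.sin(x) * sp.exp(x)

print(sp.diff(expr, x))       # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(expr, x))  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
print(sp.solve(x**2 - 2, x))  # [-sqrt(2), sqrt(2)]
# Every step is a syntactic rewrite applied to an expression tree;
# no notion of what "sine" or "area" means is involved.
```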

−2