comefromspace t1_j23hsyi wrote
Reply to comment by madnessandmachines in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
It is a conjecture that can be tested, however, starting with artificial networks. I don't think it's folk theory, because it's not mainstream at all.
comefromspace t1_j1zm1fv wrote
Reply to comment by madnessandmachines in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
I am aware of some of the philosophy of language, but I prefer to look at the neuroscientific findings instead. Language is a human construct that doesn't really exist in nature; communication does, and in humans it is the exchange of mental states between brains. The structure of language follows from abstracting the physical world into compact, communicable units, and syntax is a very important byproduct of this process. I am more interested in seeing how the hierarchical structure of language arises in computational models like LLMs, which are open to empirical investigation. Most folk linguistic theories are conjecture supported only by circumstantial evidence.
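As a rough illustration of what "open to empirical investigation" can look like in practice, here is a minimal sketch (assuming the Hugging Face transformers library and the public gpt2 checkpoint; the example sentence is arbitrary) that pulls per-layer attention maps out of a small LLM, the kind of raw material probing studies use when looking for hierarchical structure:

```python
# Minimal sketch: inspect attention patterns in a small LLM.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

sentence = "The cat that the dog chased ran away."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
for layer_idx, layer_attn in enumerate(outputs.attentions):
    # Average over heads to get one token-to-token attention map per layer.
    avg_attn = layer_attn[0].mean(dim=0)
    print(f"layer {layer_idx}: attention map shape {tuple(avg_attn.shape)}")
```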
comefromspace t1_j1z64ux wrote
Reply to comment by seventyducks in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
Language is syntax, and LLMs excel at it. I think it is interesting to note that GPT improved after learning programming, because programming languages follow exact syntactic rules, which are rules of symbol manipulation. But those rules also seem to work well when applied to ordinary language, which is much fuzzier and more ambiguous. Transformers seem to be exceptional at capturing syntactic relationships without necessarily knowing what it is they are talking about (so, abstractly). And math is all about manipulating abstract entities.
I think symbol manipulation is something that transformers will continue to excel at. After all, it's not that difficult; Mathematica does it. The model may not understand the consequences of its inventions, but it will definitely be able to come up with proofs, models, theorems, physical laws, etc. If the next GPT is multimodal, it might be able to reason about its sensory inputs as well.
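For a concrete sense of the rule-based symbol manipulation the comment alludes to (the kind of thing Mathematica does), here is a minimal sketch using SymPy in Python; the particular expressions are arbitrary examples:

```python
# Minimal sketch of rule-based symbol manipulation, the kind of thing
# computer algebra systems like Mathematica (here, SymPy) do exactly.
import sympy as sp

x, y = sp.symbols("x y")

expr = (x + y) ** 3
expanded = sp.expand(expr)                    # apply binomial expansion rules
derivative = sp.diff(expanded, x)             # apply differentiation rules term by term
solutions = sp.solve(sp.Eq(x**2 - 4, 0), x)   # solve symbolically, not numerically

print(expanded)    # x**3 + 3*x**2*y + 3*x*y**2 + y**3
print(derivative)  # 3*x**2 + 6*x*y + 3*y**2
print(solutions)   # [-2, 2]
```

The point of the sketch is that these transformations are purely syntactic rewrites over abstract symbols, which is the sense in which the comment argues transformers can keep improving at them.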
comefromspace t1_j1ytlcw wrote
Reply to [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
I don't know, but it seems like LLMs will get there faster as soon as they become multimodal. Language is already symbol manipulation.
comefromspace t1_j1ps1i3 wrote
Reply to What will cheap available AI-generated images lead to? Video? Media? Entertainment? by Hall_Pitiful
Emojis, mostly.
Of course people will be making movies, but they will be called 'videos', and from time to time a 13-year-old kid will become famous for making a good one.
It's like what happened with blogs and ebooks: anyone can be a director now.
comefromspace t1_j2dgqhz wrote
Reply to comment by Glycerine in An Open-Source Version of ChatGPT is Coming [News] by lambolifeofficial
> The moment I knew it was out of reach to process wikipedia:
On a GTX 1650.