MinaKovacs t1_jbz2gqw wrote
Reply to comment by hebekec256 in [D] Is anyone trying to just brute force intelligence with enormous model sizes and existing SOTA architectures? Are there technical limitations stopping us? by hebekec256
I think the math clearly doesn't work out; otherwise, Google would have monetized it already. ChatGPT is not profitable or practical for search. Hardware costs, power consumption, and latency are already at their limits. It will take something revolutionary, beyond binary computing, to make ML anything more than expensive algorithmic pattern recognition.
MinaKovacs t1_jbyzv1v wrote
Reply to [D] Is anyone trying to just brute force intelligence with enormous model sizes and existing SOTA architectures? Are there technical limitations stopping us? by hebekec256
A binary computer is nothing more than an abacus. It doesn't matter how much you scale up an abacus; it will never achieve anything even remotely like "intelligence."
MinaKovacs t1_j9vbv7v wrote
Reply to [D] Got invited to an ML final interview - have zero statistics/math background by [deleted]
It depends on the job. Not all ML jobs involve building new low-level tools. You might find it is 80% dataset classification and 20% Python code customization, with little or no statistics background required.
MinaKovacs t1_j9s04eh wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
It's just matrix multiplication and derivatives. The only real advance in machine learning over the last 20 years is scale. Nvidia was very clever and made a math processor that can do matrix multiplication 100x faster than general-purpose CPUs. As a result, the $1B data center required to train something like GPT-3 now costs only $100M. It's still just a text bot.
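To make the "matrix multiplication and derivatives" point concrete, here is a minimal NumPy sketch of what a single learned layer reduces to; the names and sizes are made up for illustration, not taken from any real model. A GPU's contribution is essentially doing the `X @ W` step in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))   # batch of 4 inputs with 3 features
W = rng.standard_normal((3, 2))   # the weight matrix being learned
y = rng.standard_normal((4, 2))   # target outputs

for step in range(100):
    pred = X @ W                  # forward pass: one matrix multiplication
    err = pred - y
    grad = X.T @ err / len(X)     # backward pass: gradient of squared error w.r.t. W (up to a constant)
    W -= 0.1 * grad               # gradient descent update

print(float((err ** 2).mean()))  # loss shrinks toward the least-squares fit
```

Everything beyond this toy example is, per the comment above, mostly a matter of scaling up those same two operations.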
MinaKovacs t1_j9rfnej wrote
Reply to comment by sticky_symbols in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
True, but it doesn't matter; it is still just algorithmic. There is no "intelligence" of any kind yet. We are not even remotely close to anything resembling actual brain function.
MinaKovacs t1_j9ref87 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
We are so far away from anything you can really call "AI" that it is not on my mind at all. What we have today is simply algorithmic pattern recognition, and it is actually really disappointing. The scale of ChatGPT is impressive, but the performance is not. Many, many thousands of man-hours were needed to manually tag the training datasets. The only place "AI" exists is in the marketing department.
MinaKovacs t1_j2f756m wrote
The Chinese economy still depends heavily on exports to the US, which are only going to decline. With the recession next year and the continued migration of manufacturing out of China, any investments there will be like throwing money into a burning dumpster.
MinaKovacs t1_jbzso7m wrote
Reply to comment by MurlocXYZ in [D] Is anyone trying to just brute force intelligence with enormous model sizes and existing SOTA architectures? Are there technical limitations stopping us? by hebekec256
One of the few things we know for certain about the human brain is that it is nothing like a binary computer. Ask any neuroscientist and they will tell you we still have no idea how the brain works. The brain operates at a quantum level, manifested in mechanical, chemical, and electromagnetic characteristics, all at the same time. It is not a ball of transistors.