Submitted by fingin t3_zb6f72 in singularity
Humans have an advantage over AI in the form of a priori knowledge (some in-built pattern-recognition faculties), as well as dispositions or reactions to sensory and emotional signals. Evolving beings are always using samples to transform themselves. An AI needs to have some implicit knowledge before it can actually "learn" in any sense that a human can. Soon enough, it will outperform most of us in language and image generation tasks. At that point, we should be concerned with using it rather than dismissing it in favour of human performers. Curious to hear your thoughts.
Superschlenz t1_iypsktf wrote
>Humans have an advantage over AI in the form of a priori knowledge
... and AIs have an advantage over humans in the form of perfect mind copying. Once there exists a single AI that has learned the mind, regardless of how long the training took, it's no longer necessary for the other AIs to learn sample-efficiently from raw data again and again when they could just make a 1:1 copy of the first AI's mind. Instead of thinking about how to apply Bayesian optimization to high-dimensional data, which would give you the theoretically best possible sample efficiency, you'd better think about how to infiltrate the first AI's developer team with spies in order to steal their work.
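The "perfect mind copying" point can be sketched concretely: once one model has paid the training cost, its learned parameters can simply be duplicated rather than relearned. This is a minimal sketch, assuming the trained "mind" is representable as a parameter structure; the `train` function and the toy weight values below are hypothetical stand-ins, not any real system.

```python
import copy

def train():
    """Hypothetical stand-in for an expensive, sample-hungry training run."""
    # Imagine this consumes enormous amounts of raw data and compute.
    return {"layer1": [0.42, -1.3], "layer2": [2.7]}  # learned weights

# The first AI pays the full training cost once...
first_ai = train()

# ...and every later AI gets an exact 1:1 copy of its "mind" for free,
# with no need to learn sample-efficiently from raw data again.
second_ai = copy.deepcopy(first_ai)

assert second_ai == first_ai      # identical parameters, hence identical behaviour
assert second_ai is not first_ai  # yet a fully independent copy
```

The asymmetry the comment describes falls out of this: the copy step is cheap and exact, while the training step is not, so sample efficiency only matters for whoever trains the first instance.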