Thanks, I actually read this today. He and Richard Ngo are the two names I've come across for researchers who have thought deeply about alignment and whose views are grounded in the literature.
SchmidhuberDidIt (OP) wrote:
Reply to comment by arg_max in "[D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes?" by SchmidhuberDidIt
What about current architectures makes you think they won’t continue to improve with scale and multimodality, given a good tokenization scheme? Is it the context length? What about models like S4 or RWKV?
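To make concrete why S4/RWKV seem relevant to the context-length worry: they replace quadratic-cost attention with a recurrence that carries a fixed-size state across tokens, so compute grows linearly with sequence length. Here's a toy sketch of an RWKV-style weighted key-value recurrence (NumPy, with illustrative names and shapes; a simplification, not the actual RWKV kernel):

```python
import numpy as np

def wkv_recurrence(keys, values, decay):
    # Toy RWKV-style recurrence: instead of attending over all past
    # tokens (O(T^2)), keep a running exp-weighted (sum, normalizer)
    # state of fixed size, so the whole pass is O(T).
    # keys, values: (T, d) arrays; decay: per-channel factor in (0, 1).
    # Names and shapes are illustrative, not the real RWKV kernel.
    T, d = keys.shape
    num = np.zeros(d)  # running weighted sum of values
    den = np.zeros(d)  # running sum of weights
    out = np.empty((T, d))
    for t in range(T):
        w = np.exp(keys[t])           # this token's unnormalized weight
        num = decay * num + w * values[t]
        den = decay * den + w
        out[t] = num / (den + 1e-8)   # weighted average over the past
    return out

# Per-step state is constant no matter how long the context gets.
rng = np.random.default_rng(0)
y = wkv_recurrence(rng.normal(size=(1024, 8)),
                   rng.normal(size=(1024, 8)),
                   decay=np.full(8, 0.99))
print(y.shape)  # (1024, 8)
```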