Submitted by singularpanda t3_1060gfk in MachineLearning
Recently, ChatGPT has become one of the hottest tools in NLP. I have tried it and it gives me impressive results. I believe it will benefit many people and represent a significant advance in our lives. Unfortunately, as an NLP researcher working on text generation, I now feel that everything I have done seems meaningless. I also don't know what I can do next, since ChatGPT is already strong enough to solve most of the problems I previously worked on in text generation. Research on ChatGPT itself also seems impossible, as I believe it will not be an open-source project. Research on other NLP tasks also seems challenging, since prompting ChatGPT can solve most of them. Any suggestions or comments are welcome.
f_max t1_j3e2s3m wrote
I work at one of the big techs doing research on this. Frankly, LLMs will be the leading edge of the field for the next 2 years imo. Join one of the big techs and get access to tens of thousands of dollars of compute per week to train some LLMs. Or, in academia, there's lots of work to be done characterizing inference-time capabilities, understanding bias and failure modes, running smaller-scale experiments with architecture, etc.
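To give a concrete sense of what that kind of small-scale, inference-time work can look like, here's a minimal sketch (my own illustration, not anything specific from this thread) that probes an open checkpoint with a prompt and inspects several sampled completions for failure modes. It uses the Hugging Face `transformers` library and `gpt2` only because that runs on a laptop; any open LLM you have access to would slot in the same way.

```python
# Minimal inference-time probe of an open language model.
# Assumptions: Hugging Face transformers is installed and "gpt2" stands in
# for whatever open checkpoint you actually want to study.
from transformers import pipeline

# Small open model so the experiment fits on modest academic hardware.
generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of Australia is"

# Draw a few sampled continuations so you can look at variance across samples,
# not just a single greedy output.
outputs = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=3,
    do_sample=True,
)

for i, out in enumerate(outputs):
    # Manually (or programmatically) check each continuation for factual,
    # bias, or formatting failures -- the raw material for a failure-mode study.
    print(f"sample {i}: {out['generated_text']}")
```

Scaling this kind of loop over many prompts and models is cheap enough for a university lab, which is exactly why characterization and bias/failure-mode work remains wide open even when the biggest models are closed.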