BadassGhost t1_j55rxme wrote
Reply to comment by currentscurrents in [D] is it time to investigate retrieval language models? by hapliniste
I think the biggest reason to use retrieval is that it addresses the two biggest problems:
- Hallucination
- Long-term memory
Make the retrieval database MUCH smaller than RETRO's, and constrain it to reputable sources (textbooks, nonfiction books, scientific papers, and Wikipedia). You could either skip the textbooks/books or make deals with publishers. Then add to the dataset (or keep a second dataset of) everything the model sees in a given context in production. For example, add all user chat history to the dataset for ChatGPT.
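A minimal sketch of what that small, growable datastore could look like, assuming sentence-transformers for the embeddings (the model name and the `add_documents` helper are my own illustration, not anything from the RETRO paper or OpenAI):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Small, curated corpus: textbooks, papers, Wikipedia extracts, etc.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

documents: list[str] = []          # raw text chunks
embeddings = np.empty((0, 384))    # one row per chunk (384 dims for MiniLM)

def add_documents(chunks: list[str]) -> None:
    """Append new chunks (e.g. a user's chat history) to the retrieval store."""
    global embeddings
    vecs = encoder.encode(chunks, normalize_embeddings=True)
    documents.extend(chunks)
    embeddings = np.vstack([embeddings, vecs])

# Seed with vetted sources, then keep growing it in production.
add_documents(["Wikipedia: Retrieval-augmented generation ...",
               "Accounting textbook, ch. 3: accrual basis ..."])
add_documents(["User asked about Q3 invoices on 2023-01-19 ..."])
```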
You could use cross-attention as in RETRO (maybe with some RLHF like ChatGPT), or just engineer some prompt manipulation based on embedding similarity.
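For the non-cross-attention route, "prompt manipulation" is basically nearest-neighbour search plus string concatenation. A rough sketch of that idea, continuing the store above (the prompt template and `top_k` are arbitrary choices of mine):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "RETRO conditions a transformer on retrieved neighbours via chunked cross-attention.",
    "Accrual accounting records revenue when it is earned, not when cash is received.",
]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most similar passages and prepend them to the user question."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    sims = doc_vecs @ q_vec              # cosine similarity (vectors are unit-norm)
    best = np.argsort(-sims)[:top_k]     # indices of the top-k passages
    context = "\n".join(docs[i] for i in best)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# Feed the returned string to whatever LM you like.
print(build_prompt("When does accrual accounting recognise revenue?"))
```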
You could imagine ChatGPT variants that have specialized knowledge that you can pay for. Maybe an Accounting ChatGPT has accounting textbooks and documents in its retrieval dataset, and accounting companies pay a premium for it.