ClayStep t1_izfv6fi wrote
Reply to comment by MetaAI_Official in [D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything! by MetaAI_Official
Ah this was my misunderstanding then - I did not realize the language model was conditioned on intent (it makes perfect sense that it is). Thanks for the clarification!
ClayStep t1_izbsdur wrote
Reply to [D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything! by MetaAI_Official
I was at your NeurIPS talk. I noted that the language model (conditioned on the world state) was able to suggest moves to a human player, which the human player found to be good moves.
Could the same model be used to suggest moves for the agent? What are the limitations?
ClayStep t1_j4s2dfp wrote
Reply to [D] Is it possible to update random forest parameters with new data instead of retraining on all data? by monkeysingmonkeynew
Hackiest solution I can think of:
Just add new trees trained on the new data to the existing forest, and weight the trees by how recent their training data is... (assuming we care more about the new data)

(probably a terrible idea)
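For what it's worth, the "add new trees without retraining the old ones" part of this idea can be sketched with scikit-learn's `warm_start` option, which grows a fitted forest with extra trees trained only on whatever data you pass to the next `fit` call. The tree weighting isn't supported natively (sklearn averages trees uniformly), so this sketch does the weighted average manually at prediction time; the data, weights, and tree counts here are all made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical "old" and "new" batches of data
X_old, y_old = rng.normal(size=(200, 3)), rng.normal(size=200)
X_new, y_new = rng.normal(size=(50, 3)), rng.normal(size=50)

forest = RandomForestRegressor(n_estimators=100, warm_start=True, random_state=0)
forest.fit(X_old, y_old)    # 100 trees trained on the old data

forest.n_estimators += 25   # request 25 additional trees
forest.fit(X_new, y_new)    # with warm_start=True, only the new trees are fit,
                            # and they see ONLY the new data

# Manual weighted vote: upweight the 25 newest trees (weight 2 vs 1, arbitrary)
weights = np.array([1.0] * 100 + [2.0] * 25)
per_tree = np.stack([t.predict(X_new) for t in forest.estimators_])
y_pred = (weights[:, None] * per_tree).sum(axis=0) / weights.sum()
```

Note the caveat the original comment hints at: the new trees never see the old data, so this drifts away from a forest trained on the full dataset; it's a cheap approximation, not an equivalent model.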