Submitted by Cogwheel t3_113448t in MachineLearning
In brains, the neural networks are transformed by the act of "inference". Neurons that have recently fired are more likely to fire again given the same input. Individual neural pathways can be created or destroyed based on the behavior of neurons around them. This leads me (through various leaps of logic and "faith") to suspect that some amount of mutability over time is required for an AI to exhibit sentience.
So far, all of the ML models I've seen distinctly separate training from inference. Every model we put into production is a fixed snapshot of the most recent round of training. ChatGPT, for instance, is the exact same frozen model being incrementally fed both your prompts and its own previous output. That does create a sort of feedback loop, but in my mind it is not actually "experiencing" the conversation with you.
So I'm wondering if there are any serious attempts in the works to create an AI that is able to transform itself dynamically. E.g., having some kind of reinforcement-learning module built into inference, so that each new inference fundamentally (rather than superficially) incorporates past experience into future predictions.
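To make the idea concrete, here is a toy sketch (not any real system; the class and method names are made up) of what "learning during inference" could look like: a single logistic unit whose weights are nudged toward the observed outcome on every prediction, so the act of inference itself permanently mutates the model.

```python
import math

class OnlinePredictor:
    """Hypothetical toy model whose weights change on every inference call."""

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def infer_and_learn(self, x, outcome):
        # Forward pass: standard logistic prediction.
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        p = 1.0 / (1.0 + math.exp(-z))
        # Online update: inference and training happen in the same step.
        err = outcome - p
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * err
        return p

model = OnlinePredictor(n_features=2)
# Repeated exposure to the same input shifts future predictions,
# loosely analogous to "neurons that fired recently fire more easily".
first = model.infer_and_learn([1.0, 0.0], outcome=1.0)
later = first
for _ in range(20):
    later = model.infer_and_learn([1.0, 0.0], outcome=1.0)
```

After repeated exposure, `later` is much closer to 1.0 than `first` was: the model now "expects" that input. Real research directions in this vein go by names like online learning, continual learning, and test-time adaptation.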
PredictorX1 t1_j8ntphr wrote
Some modeling algorithms (naive Bayes, and local models like k-nearest neighbor or kernel regression) can be updated immediately as new examples arrive. In some sense, they can interleave recall and training almost in real time.