Funny thing, there are actually attempts at modeling uncertainty in deep learning (MCMC, Bayesian neural networks), but they are rarely used because they are either quite inaccurate or converge very slowly. The problem is essentially that learning algorithms cannot properly integrate over the uncertainty distributions, so only an approximation can be trained, which is often pretty slow.
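One of the cheapest such approximations is Monte Carlo dropout: leave dropout on at inference and treat the spread across stochastic forward passes as a crude uncertainty estimate. A minimal NumPy sketch (the toy network and weights here are made up for illustration, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny 1-hidden-layer MLP (illustrative only).
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout left ON at inference."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p     # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)           # inverted-dropout scaling
    return h @ W2 + b2

def mc_predict(x, n_samples=200):
    """Average many stochastic passes; the sample std is the
    (approximate, often poorly calibrated) uncertainty estimate."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5]])
mean, std = mc_predict(x)
```

This is exactly the kind of approximation the post is complaining about: it's fast, but the resulting "uncertainty" is a heuristic, not a proper posterior.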
if they worked well they’d be killer for RL. RL is insanely unstable when the distribution shifts as the policy starts exploring different parts of the state space. you’d think there’d be some clean approach to learning P(Xs|Ys) that can handle continuous shift of the Ys distribution in the training data, but there doesn’t seem to be. just replay buffers and other kludges.
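For concreteness, the standard kludge looks something like this: a FIFO buffer that mixes transitions from old and new policies, so each training batch is sampled from a blurred average of the shifting state distribution rather than only the latest one. A minimal sketch (class and method names are my own, not from any particular RL library):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal FIFO replay buffer: decorrelates samples and smooths over
    the policy-induced distribution shift by replaying old transitions."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions age out

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling mixes data from old and new policies,
        # which stabilizes training but only papers over the shift.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for i in range(150):
    buf.push(i, 0, 1.0, i + 1, False)
batch = buf.sample(32)
```

Note the tension the post points at: the buffer deliberately trains on stale off-policy data, which is precisely the kind of thing a principled model of the shifting distribution would make unnecessary.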