• uuldika@lemmy.ml

    if they existed they’d be killer for RL. RL is insanely unstable when the distribution shifts as the policy starts exploring different parts of the state space. you’d think there’d be some clean approach to learning P(Xs|Ys) that can handle continuous shift of the Ys distribution in the training data, but there doesn’t seem to be. just replay buffers and other kludges.
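
the replay-buffer kludge is basically just a bounded FIFO of transitions sampled uniformly, so each gradient step sees a mix of data from older and newer policies instead of only the current (shifting) state distribution. a minimal sketch, assuming a standard off-policy setup (class and method names here are illustrative, not from any particular library):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity FIFO of transitions. Uniform sampling mixes data
    gathered under old and new policies, which smooths (but doesn't solve)
    the non-stationarity of the training distribution."""

    def __init__(self, capacity):
        # deque with maxlen silently evicts the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform sample over everything stored, regardless of which
        # policy generated it -- this is the "kludge" part
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(50):
    buf.push(t, 0, 1.0, t + 1, False)  # dummy transitions
batch = buf.sample(8)
```

the point being: nothing here models the shift in the Ys distribution at all, it just averages over it and hopes the averaging is enough.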