Our goal is to derive the operator for the Kolmogorov Backward Equation. This is used by Song et al., 2021 and Song and Ermon, 2019 to show that the distribution induced by the learned reverse process matches the ideal one. It is also used in a series of other papers [1][3][4][7][2]. I was first led to this body of work by Karras et al., 2022.
Let $u_t(x) = \int f(y)\, p_t(y \mid x)\, dy$, where $f$ is some benign test function. In plain words, $u_t(x)$ is the expected value of $f$ after a time-$t$ rollout starting at $x$. We want to find the operator $\mathcal{L}$ for the PDE
$$\partial_t u_t(x) = \mathcal{L}_x u_t(x).$$
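For orientation, here is the well-known answer in the SDE case (a standard result, quoted here as a reference point rather than derived): for $dx = \mu(x)\,dt + \sigma(x)\,dW_t$, the backward operator, also called the generator, is
$$\mathcal{L}_x = \mu(x)\,\partial_x + \tfrac{1}{2}\,\sigma^2(x)\,\partial_{xx}.$$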
Since $u_t$ is an integral over the dummy variable $y$, we can move both operators $\partial_t$ and $\mathcal{L}$ inside the integral, and obtain a PDE on the transition kernel $p_t(y \mid x)$:
$$\partial_t\, p_t(y \mid x) = \mathcal{L}_x\, p_t(y \mid x).$$
Note that when $t+s$ is split, $t$ goes to the 2nd segment $y \to z$ and $s$ is assigned to the 1st segment $x \to y$. Otherwise we won't be able to form an expression for $u_t(\cdot)$, because $f$ is integrated over $z$. This gives these steps a "backward-going" flavor, i.e., we have $u_t(y)$ and then extend backward by $s$ to form $u_{t+s}(x)$.
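Written out, the splitting referred to above is just the Chapman-Kolmogorov identity applied under the test-function integral:
$$u_{t+s}(x) = \int f(z)\, p_{t+s}(z \mid x)\, dz = \iint f(z)\, p_t(z \mid y)\, p_s(y \mid x)\, dy\, dz = \int u_t(y)\, p_s(y \mid x)\, dy.$$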
For an alternative proof, directly apply Itô v2 to $u_t(x)$: since $u_t(x)$ is a conditional expectation, it is a martingale along the process, so the terms on $dt$ must be 0. This way you get the PDE. This is much quicker.
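Here is a sketch of that martingale argument in the SDE setting above. Let $g(x, t) = \mathbb{E}[f(X_T) \mid X_t = x] = u_{T-t}(x)$, so that $M_t = g(X_t, t)$ is a martingale. Itô gives
$$dM_t = \Big(\partial_t g + \mu\,\partial_x g + \tfrac{1}{2}\,\sigma^2\,\partial_{xx} g\Big)\,dt + \sigma\,\partial_x g\, dW_t,$$
and the martingale property forces the $dt$ coefficient to zero: $\partial_t g + \mathcal{L}_x g = 0$. Substituting $g(x, t) = u_{T-t}(x)$ flips the sign of the time derivative and recovers $\partial_\tau u_\tau(x) = \mathcal{L}_x u_\tau(x)$ with $\tau = T - t$.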
2. Kolmogorov Forward / Backward Operator Duality
To relate the forward and backward PDEs, we consider a test function $f$ integrated over the end-of-time marginal distribution $p_T$. We further break $p_T$ into integrals of the transition kernels $p_{T-t}(\cdot \mid \cdot)$ and $p_t(\cdot \mid \cdot)$,
This is the Kolmogorov Forward / Fokker-Planck equation. Eqn. 14 is true for both continuous and discrete Markov processes.
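To make the duality concrete, here is the computation in one line: the quantity $\int u_{T-t}(y)\, p_t(y \mid x)\, dy = \mathbb{E}[f(X_T) \mid X_0 = x]$ does not depend on $t$, so differentiating under the integral,
$$0 = \partial_t \int u_{T-t}(y)\, p_t(y \mid x)\, dy = \int \big({-\mathcal{L}_y u_{T-t}(y)}\big)\, p_t(y \mid x)\, dy + \int u_{T-t}(y)\, \partial_t p_t(y \mid x)\, dy,$$
i.e. $\langle u, \partial_t p_t \rangle = \langle \mathcal{L} u, p_t \rangle = \langle u, \mathcal{L}^* p_t \rangle$. Since this holds for a rich enough family of test functions, $\partial_t p_t = \mathcal{L}^* p_t$: the forward equation is driven by the adjoint of the backward operator.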
[Adjoints of operators] To write down the expression for the adjoint, we use the following integration-by-parts identities:
$$\begin{aligned}
\langle h(x)\,\partial_x a,\; b \rangle &= \int_x h(x)\,\partial_x a(x) \cdot b(x)\, dx = \int_x \partial_x a(x) \cdot \big[h(x)\, b(x)\big]\, dx \\
&= \int_x a(x) \cdot \Big({-\partial_x \big[h(x)\, b(x)\big]}\Big)\, dx = \big\langle a,\; -\partial_x [h(x) \cdot b] \big\rangle, \\
\langle l(x)\,\partial_{xx} a,\; b \rangle &= \int_x l(x)\,\partial_{xx} a(x) \cdot b(x)\, dx = \int_x \partial_{xx} a(x) \cdot \big[l(x)\, b(x)\big]\, dx \\
&= \int_x a(x) \cdot \partial_{xx} \big[l(x)\, b(x)\big]\, dx = \big\langle a,\; \partial_{xx} [l(x) \cdot b] \big\rangle \quad \text{(integrate by parts twice)},
\end{aligned}$$
where the boundary terms vanish because the test functions are benign (e.g., they decay at infinity).
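Applying these identities to the generator from Section 1, with $h = \mu$ and $l = \frac{1}{2}\sigma^2$, spells out the adjoint:
$$\mathcal{L} = \mu(x)\,\partial_x + \tfrac{1}{2}\,\sigma^2(x)\,\partial_{xx} \quad\Longrightarrow\quad \mathcal{L}^* p = -\partial_x\big[\mu(x)\, p\big] + \tfrac{1}{2}\,\partial_{xx}\big[\sigma^2(x)\, p\big],$$
which is exactly the right-hand side of the Fokker-Planck equation.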
The probability density $p(x, t)$ induced by a deterministic ODE $\frac{dx}{dt} = v(x, t)$ is described by the (noiseless) Fokker-Planck equation, i.e. the continuity equation. Expand it using the product rule for divergence:
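Concretely, the expansion gives
$$\partial_t p = -\nabla \cdot (p\, v) = -p\,(\nabla \cdot v) - v \cdot \nabla p,$$
so along a trajectory $x(t)$ of the ODE,
$$\frac{d}{dt} \log p\big(x(t), t\big) = \frac{\partial_t p + v \cdot \nabla p}{p} = -\nabla \cdot v\big(x(t), t\big).$$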
For a neural ODE, e.g. a diffusion model at inference time, to compute the density at a data point $x$, we integrate the divergence of the velocity term over time. Evaluating the divergence, i.e. the trace of the Jacobian of a neural-net forward pass, is costly because materializing the full Jacobian is expensive. FFJORD (https://arxiv.org/abs/1810.01367) proposes the Hutchinson trace estimator
$$\operatorname{tr}\Big(\frac{\partial v_\theta(x, t)}{\partial x}\Big) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\Big[\epsilon^\top \frac{\partial v_\theta(x, t)}{\partial x}\, \epsilon\Big],$$
because a vector-Jacobian product is very manageable.
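A minimal sketch of the estimator in PyTorch, assuming a hypothetical velocity network `v_fn` that maps `(batch, dim)` inputs and a time `t` to `(batch, dim)` velocities (the name and signature are mine, not FFJORD's actual code):

```python
import torch

def hutchinson_divergence(v_fn, x, t, n_samples=1):
    """Unbiased estimate of tr(dv/dx) at x via Hutchinson's estimator.

    Each sample costs one vector-Jacobian product (a single backward
    pass), instead of `dim` passes to materialize the full Jacobian.
    """
    x = x.detach().requires_grad_(True)
    v = v_fn(x, t)
    estimate = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_samples):
        eps = torch.randn_like(x)  # Gaussian probe; Rademacher also works
        # eps^T (dv/dx) via reverse-mode autodiff -- no full Jacobian formed
        vjp = torch.autograd.grad(v, x, grad_outputs=eps, retain_graph=True)[0]
        estimate += (vjp * eps).sum(dim=-1)  # eps^T (dv/dx) eps, per sample
    return estimate / n_samples
```

Averaging a few probes reduces the variance, but even `n_samples=1` is unbiased, which is all the ODE log-density integral requires.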
This recipe was first popularized in the ML community by the Neural ODE paper (https://arxiv.org/abs/1806.07366). Their derivation in Appendix A.2 is essentially what I wrote here.
I have learned from a friend of a friend working in fluid dynamics that they call this the "material derivative" (https://en.wikipedia.org/wiki/Material_derivative). So it's not exactly a new result.
4. Discrete Diffusion Training Objective Interpretation
Recall the training objective of continuous diffusion,
$$\min_D \; \mathbb{E}_{x_0, x_i} \big\| D(x_i) - x_0 \big\|^2,$$
and that the optimal prediction by the denoiser $D(x_i)$ is the conditional mean $\mathbb{E}[x_0 \mid x_i]$. The minimum loss is thus $\operatorname{Var}(x_0 \mid x_i)$.
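This "regressing to the mean" follows from completing the square, written out here under the convention that $\operatorname{Var}$ denotes the total variance $\mathbb{E}\,\|x_0 - \mathbb{E}[x_0 \mid x_i]\|^2$:
$$\mathbb{E}_{x_0, x_i} \big\| D(x_i) - x_0 \big\|^2 = \mathbb{E}_{x_i} \big\| D(x_i) - \mathbb{E}[x_0 \mid x_i] \big\|^2 + \mathbb{E}_{x_i}\big[\operatorname{Var}(x_0 \mid x_i)\big],$$
where the cross term vanishes because $x_0 - \mathbb{E}[x_0 \mid x_i]$ has zero conditional mean. The first term is driven to zero at the optimum; the second is the irreducible floor.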
We now want to do the same for discrete diffusion, and establish the counterpart of "regressing to the mean". The steps below are actually fully general (if we replace the $\sum$ with $\int$). The spirit is to look for the minimum by completing the KL, analogous to completing the square.
Consider a single term in the ELBO interpretation of the diffusion training objective. For a given $x_i$, we are optimizing $q(x_{i-1} \mid x_i)$ to minimize
$$\mathbb{E}_{x_0 \sim p(x_0 \mid x_i)}\, \mathrm{KL}\big\{ p(x_{i-1} \mid x_i, x_0) \,\big\|\, q(x_{i-1} \mid x_i) \big\}.$$
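Completing the KL, written out, splits this into a term independent of $q$ and a term we can drive to zero:
$$\mathbb{E}_{x_0 \sim p(x_0 \mid x_i)}\, \mathrm{KL}\big\{ p(x_{i-1} \mid x_i, x_0) \,\big\|\, q(x_{i-1} \mid x_i) \big\} = \underbrace{\mathbb{E}_{x_0 \sim p(x_0 \mid x_i)}\, \mathrm{KL}\big\{ p(x_{i-1} \mid x_i, x_0) \,\big\|\, p(x_{i-1} \mid x_i) \big\}}_{\text{independent of } q} \;+\; \mathrm{KL}\big\{ p(x_{i-1} \mid x_i) \,\big\|\, q(x_{i-1} \mid x_i) \big\}.$$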
So the loss is minimized when $q(x_{i-1} \mid x_i)$ becomes $p(x_{i-1} \mid x_i)$, and the minimum loss is the mutual information $\mathrm{MI}(X_0; X_{i-1} \mid X_i)$ conditioned at $X_i$.
Note that I am using a non-standard notation for mutual information under conditioning. The standard definition of conditional mutual information, $I(X; Y \mid Z) := \mathbb{E}_Z\big[\mathrm{KL}\{P(X, Y \mid Z) \,\|\, P(X \mid Z)\, P(Y \mid Z)\}\big]$, includes the outer expectation over $Z$; here I drop it and evaluate at the given $X_i$.
My confusion: under continuous diffusion, $q(x_{i-1} \mid x_i)$ is parameterized as a Gaussian, while the true $p(x_{i-1} \mid x_i)$, as written above, is a mixture of Gaussians. Therefore $\mathrm{KL}\{p(x_{i-1} \mid x_i) \,\|\, q(x_{i-1} \mid x_i)\}$ can't reach zero. So somehow in continuous diffusion this objective can't be perfectly optimized?
References
Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Adv. Neural Inform. Process. Syst. 2022.
Diederik P Kingma and Jimmy Ba. Adam: a method for stochastic optimization. In Int. Conf. Learn. Represent. 2015.
Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In Adv. Neural Inform. Process. Syst. 2021.
Tzu-Mao Li, Michal Lukáč, Michaël Gharbi, and Jonathan Ragan-Kelley. Differentiable vector graphics rasterization for editing and learning. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 39(6):193:1–193:15, 2020.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Int. Conf. Mach. Learn. 2021.
Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Adv. Neural Inform. Process. Syst. 2019.
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In Int. Conf. Learn. Represent. 2021.
Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score Jacobian chaining: lifting pretrained 2D diffusion models for 3D generation. In IEEE Conf. Comput. Vis. Pattern Recog. 2023.
Last updated on 2024-04-03. Design inspired by distill.