The validation bound says that we need only about 5000 samples to verify a model to within 1% error, and the count does not grow with the training set or the model. That feels a bit paradoxical in the big-data regime.
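To see where a number like that can come from, here is a back-of-envelope sketch assuming a Hoeffding-style bound on a [0, 1]-bounded loss (my reading, not necessarily the exact bound meant here; the constants depend on the confidence level and on how tight a bound one uses):

```python
import math

def hoeffding_n(eps: float, delta: float) -> int:
    """Samples needed so the empirical mean of a [0, 1]-bounded loss is within
    `eps` of the true mean with probability at least 1 - delta.
    Hoeffding: P(|mean_hat - mean| >= eps) <= 2 * exp(-2 * n * eps**2)."""
    return math.ceil(math.log(2 / delta) / (2 * eps**2))

print(hoeffding_n(eps=0.02, delta=0.05))  # -> 4612: ~5000 samples buy ~2% at 95%
```

The striking part is what is absent from the formula: n depends only on the target error and confidence, not on the training-set size or the model, which is the source of the paradox.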
The denoiser parameterizations are somewhat confusing: several of them are equivalent, or differ only in minor ways. We stick to one set of notations to compare them apples-to-apples.
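For concreteness, a minimal sketch of the standard conversions under one common convention, x_t = alpha * x0 + sigma * eps (with alpha^2 + sigma^2 = 1 assumed for v-prediction); the function names are mine:

```python
# Assumed forward process: x_t = alpha * x0 + sigma * eps, eps ~ N(0, I);
# for v-prediction we also assume variance preservation: alpha**2 + sigma**2 == 1.

def x0_from_eps(x_t, eps_hat, alpha, sigma):
    # eps-prediction -> x0-prediction: invert the forward process
    return (x_t - sigma * eps_hat) / alpha

def score_from_eps(eps_hat, sigma):
    # eps-prediction -> score: grad_x log p_t(x_t) = -eps / sigma
    return -eps_hat / sigma

def eps_from_v(x_t, v_hat, alpha, sigma):
    # v-prediction (v = alpha * eps - sigma * x0) -> eps-prediction
    return sigma * x_t + alpha * v_hat

def x0_from_v(x_t, v_hat, alpha, sigma):
    # v-prediction -> x0-prediction
    return alpha * x_t - sigma * v_hat
```

Every conversion is an affine function of x_t, which is why the parameterizations look different on paper but describe the same denoiser.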
Diffusion models are making waves. A core concept is the conditional mean, which for an empirical data distribution reduces to a weighted nearest-neighbor average.
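A small sketch of that equivalence: when the data distribution is the empirical measure on a training set, the optimal denoiser E[x0 | x_t] works out, by Bayes' rule, to a softmax-weighted average over the training points (the function name is mine):

```python
import numpy as np

def empirical_denoiser(x_t, data, alpha, sigma):
    """E[x0 | x_t] when x0 is drawn uniformly from the rows of `data` and
    x_t = alpha * x0 + sigma * eps. Bayes gives softmax weights over the
    training points: w_i proportional to exp(-||x_t - alpha*x_i||^2 / (2*sigma**2))."""
    d2 = np.sum((x_t - alpha * data) ** 2, axis=1)  # squared distance to each point
    logw = -d2 / (2 * sigma**2)
    w = np.exp(logw - logw.max())                   # numerically stable softmax
    return (w / w.sum()) @ data                     # weighted average of the x0's
```

As sigma shrinks the softmax sharpens and the denoiser returns the single nearest training point; as sigma grows it returns the dataset mean. That interpolation is the "weighted nearest neighbor" picture.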
In 2D homogeneous coordinates, the cross product of two lines gives their point of intersection, and the cross product of two points gives the line through them. Here we generalize this to 3D and beyond.
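A quick sketch of the 2D statement; beyond 2D the natural generalization trades the cross product for wedge products and Plücker coordinates:

```python
import numpy as np

def meet(l1, l2):
    """Intersection of two 2D lines given in homogeneous form (a, b, c),
    i.e. a*x + b*y + c = 0; returns the point as homogeneous (x, y, w)."""
    return np.cross(l1, l2)

def join(p1, p2):
    """Line through two 2D points given in homogeneous form (x, y, 1)."""
    return np.cross(p1, p2)

# The lines x = 1 and y = 2 meet at the point (1, 2):
p = meet(np.array([1, 0, -1]), np.array([0, 1, -2]))
print(p[:2] / p[2])  # -> [1. 2.]
```

Note the duality: the same operation serves as both meet and join, because points and lines in the projective plane are both 3-vectors up to scale.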
Textbook definitions of forms and determinants are convenient for analysis. For computation, I am trying out some more algorithmic mnemonics.
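As one example of what I mean by an algorithmic mnemonic (my pick here, not necessarily the ones in the note): the determinant as the signed product of pivots from Gaussian elimination, where row swaps flip the sign and row additions change nothing:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination: the product of the pivots,
    with a sign flip for every row swap."""
    A = np.array(A, dtype=float)
    n, sign, d = A.shape[0], 1.0, 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting
        if A[p, k] == 0:
            return 0.0                               # singular matrix
        if p != k:
            A[[k, p]] = A[[p, k]]                    # row swap flips the sign
            sign = -sign
        d *= A[k, k]
        A[k+1:] -= np.outer(A[k+1:, k] / A[k, k], A[k])  # eliminate below the pivot
    return sign * d

print(det_by_elimination([[2, 1], [1, 3]]))  # -> 5.0
```

This view makes the multilinearity and alternating-sign axioms feel operational: they are exactly the rules for how row operations affect the answer.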
The class this semester is using Julia. Some thoughts on its trade-offs.
Last updated on 2024-04-03.