References
[1] Nathanaël Berestycki. Introduction to the Gaussian free field and Liouville quantum gravity. Lecture notes, 2015.
[2] Maury Bramson, Jian Ding, and Ofer Zeitouni. Convergence in law of the maximum of the two-dimensional discrete Gaussian free field. Communications on Pure and Applied Mathematics, 69(1):62–123, 2016.
[3] Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, and William Chan. WaveGrad: Estimating gradients for waveform generation. In International Conference on Learning Representations, 2021.
[4] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems, volume 34, 2021.
[5] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
[6] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020.
[7] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models, 2022.
[8] Alexia Jolicoeur-Martineau, Rémi Piché-Taillefer, Rémi Tachet des Combes, and Ioannis Mitliagkas. Adversarial score matching and improved sampling for image generation. In International Conference on Learning Representations, 2021.
[9] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A versatile diffusion model for audio synthesis. In International Conference on Learning Representations, 2021.
[10] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[11] Eliya Nachmani, Robin San Roman, and Lior Wolf. Denoising diffusion gamma models, 2021.
[12] Severi Rissanen, Markus Heinonen, and Arno Solin. Generative modelling with inverse heat dissipation, 2022.
[13] Saeed Saremi and Aapo Hyvärinen. Neural empirical Bayes. Journal of Machine Learning Research, 20:1–23, 2019.
[14] Scott Sheffield. Gaussian free fields for mathematicians. Probability Theory and Related Fields, 139(3):521–541, 2007.
[15] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
[16] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, 2019.
[17] Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. In Advances in Neural Information Processing Systems, 2020.
[18] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
[19] Vikram Voleti, Alexia Jolicoeur-Martineau, and Christopher Pal. MCVD: Masked conditional video diffusion for prediction, generation, and interpolation. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[20] Wendelin Werner and Ellen Powell. Lecture notes on the Gaussian free field. arXiv preprint arXiv:2004.04720, 2020.
[21] Yilun Xu, Ziming Liu, Max Tegmark, and Tommi Jaakkola. Poisson flow generative models, 2022.
[22] Ruihan Yang, Prakhar Srivastava, and Stephan Mandt. Diffusion probabilistic modeling for video generation. arXiv preprint arXiv:2203.09481, 2022.