Neural implicit representations

Neural nets as coordinate mappings

January 21, 2021 — April 26, 2023

dynamical systems
functional analysis
machine learning
neural nets

A cute hack for generative (only generative?) neural nets. Unlike other architectures, here we allow the output to depend upon image coordinates, rather than only upon some presumed-invariant latent factors. I am not quite sure what the rationale is for “implicit” being the term here; there is nothing especially implicit about that.

So the terminology is not self-explanatory. At least it is distinctive, right?

Haha, no. Implicit representation networks are not the only neural networks using “implicit” in a confusing way. See also “implicit layers,” which allow an optimisation problem to be solved inside a neural net, which is not the same thing, and which is also not, to my mind, particularly implicit.

The Awesome Implicit Representations list explains the idea:

Implicit Neural Representations (sometimes also referred to as coordinate-based representations) are a novel way to parameterize signals of all kinds. Conventional signal representations are usually discrete — for instance, images are discrete grids of pixels, audio signals are discrete samples of amplitudes, and 3D shapes are usually parameterized as grids of voxels, point clouds, or meshes. In contrast, Implicit Neural Representations parameterize a signal as a continuous function that maps the domain of the signal (i.e., a coordinate, such as a pixel coordinate for an image) to whatever is at that coordinate (for an image, an R,G,B color). Of course, these functions are usually not analytically tractable — it is impossible to “write down” the function that parameterizes a natural image as a mathematical formula. Implicit Neural Representations thus approximate that function via a neural network.

Implicit Neural Representations have several benefits: First, they are not coupled to spatial resolution any more, the way, for instance, an image is coupled to the number of pixels. This is because they are continuous functions! Thus, the memory required to parameterize the signal is independent of spatial resolution, and only scales with the complexity of the underlying signal. Another corollary of this is that implicit representations have “infinite resolution” — they can be sampled at arbitrary spatial resolutions.

This is immediately useful for a number of applications, such as super-resolution, or in parameterising signals in 3D and higher dimensions, where memory requirements grow intractably fast with spatial resolution.

However, in the future, the key promise of implicit neural representations lies in algorithms that directly operate in the space of these representations.

At first glance, this is a cute hack (does reality plausibly ever know its own coordinates with reference to some grid system?), but maybe it is more useful than that description implies. As an audio guy, I cannot help but notice the similarity to what synthesizer designers do all the time when designing new instruments, which are terrible representations of “real” instruments but are deft at making novel sounds by ingenious parameterisations. There is a short speculative essay to write on that, with a side journey into signal sampling.
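To make the basic recipe concrete, here is a minimal sketch, assuming PyTorch, of fitting a coordinate MLP to a single image; the architecture, hyperparameters, and names are illustrative placeholders, not from any particular paper.

```python
import torch
import torch.nn as nn

# Represent one RGB image as a function f(x, y) -> (r, g, b),
# fit by plain regression on pixel coordinates.
H, W = 64, 64
target = torch.rand(H, W, 3)  # stand-in for a real image in [0, 1]

# A coordinate grid over [-1, 1]^2, one row per pixel.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # (H*W, 2)
pixels = target.reshape(-1, 3)                         # (H*W, 3)

# The small MLP *is* the representation of the image.
f = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),
)

opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = ((f(coords) - pixels) ** 2).mean()
    loss.backward()
    opt.step()

# "Infinite resolution": query the same function on a 4x finer grid.
ys2, xs2 = torch.meshgrid(
    torch.linspace(-1, 1, 4 * H), torch.linspace(-1, 1, 4 * W), indexing="ij"
)
fine = f(torch.stack([xs2, ys2], dim=-1).reshape(-1, 2)).reshape(4 * H, 4 * W, 3)
```

Note that a plain ReLU MLP like this one is biased towards smooth, low-frequency fits, which is one reason the position-encoding tricks below matter.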

1 Neural Signed Distance Functions

A fruitful sub-field: define a shape as a level set (usually the zero level set) of a neural function mapping coordinates to signed distance from the surface. Very popular for 3D data.

There are some techniques that look useful in there. I am curious about Atzmon et al. (2019); Atzmon and Lipman (2020); Chabra et al. (2020); Gropp et al. (2020); Huang, Bai, and Kolter (2021); Ma et al. (2021); Ortiz et al. (2022); Park et al. (2019); Takikawa et al. (2021); Zhu et al. (2022).
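For intuition, a toy sketch, assuming PyTorch, not taken from any of the papers above: a network maps a 3D point to a scalar signed distance, supervised here against the analytic SDF of a unit sphere so the example is self-contained; the shape is then the zero level set.

```python
import torch
import torch.nn as nn

# Toy neural SDF: f(x, y, z) -> signed distance to the surface.
sdf = nn.Sequential(
    nn.Linear(3, 256), nn.Softplus(beta=100),
    nn.Linear(256, 256), nn.Softplus(beta=100),
    nn.Linear(256, 1),
)

opt = torch.optim.Adam(sdf.parameters(), lr=1e-3)
for step in range(2000):
    pts = torch.empty(1024, 3).uniform_(-1.5, 1.5)
    gt = pts.norm(dim=-1, keepdim=True) - 1.0  # exact SDF of the unit sphere
    loss = ((sdf(pts) - gt) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The surface is {p : sdf(p) = 0}; extract a mesh by querying a dense
# grid of points and running marching cubes over the predicted distances.
```

In practice the supervision comes from point clouds or scans rather than an analytic target, and several of the cited papers (e.g. Gropp et al. 2020) add an eikonal penalty pushing the gradient norm towards 1, a property of true distance functions.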

2 Neural rendering

NeRFs etc. See neural rendering.

3 In generative art

hardmaru’s Compositional Pattern-Producing Networks are apparently a classic type of implicit-representation NN (Stanley 2007). See generative art with NNs for more of that.
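A toy CPPN-flavoured sketch, assuming NumPy, with the depth, widths, activations, and the radius input all illustrative choices rather than anything canonical: even an untrained random coordinate network produces structured images.

```python
import numpy as np

# Toy CPPN: an untrained random coordinate network. Pattern structure
# comes entirely from the architecture and activation functions.
rng = np.random.default_rng(0)
H = W = 256
ys, xs = np.mgrid[-1:1:H * 1j, -1:1:W * 1j]
r = np.sqrt(xs ** 2 + ys ** 2)  # radius input encourages radial structure
inputs = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)  # (H*W, 3)

h = inputs
for _ in range(4):
    w = rng.normal(scale=1.5, size=(h.shape[1], 16))
    h = np.tanh(h @ w)  # swap in sin/abs/gaussian here for other textures

w_out = rng.normal(size=(h.shape[1], 1))
image = 1.0 / (1.0 + np.exp(-(h @ w_out)))  # sigmoid to [0, 1]
image = image.reshape(H, W)                 # a greyscale pattern
```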

4 In PDEs

Implicit representations are the main trick that makes PINNs go: the unknown PDE solution is parameterised as a network mapping space-time coordinates to field values, so the PDE residual can be evaluated at arbitrary points by automatic differentiation.
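A sketch of the mechanics, assuming PyTorch, with the 1D heat equation as an illustrative stand-in:

```python
import torch
import torch.nn as nn

# Toy PINN pieces for the heat equation u_t = u_xx on coordinates (x, t).
u = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    out = u(xt)
    grads = torch.autograd.grad(out.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx  # zero wherever the network satisfies the PDE

xt = torch.rand(512, 2)                # random collocation points in (x, t)
loss = (pde_residual(xt) ** 2).mean()  # plus boundary/initial terms in practice
loss.backward()
```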

5 Position encoding

See position encoding.
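One flavour is worth sketching here (a sketch only; the fixed matrix B, its scale, and the layer sizes are all tunable illustrative choices): random Fourier features lift a coordinate v to [sin(2πBv), cos(2πBv)] before the MLP, which lets the network fit high frequencies that a raw-coordinate ReLU net would smooth over.

```python
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Lift coordinates v to [sin(2*pi*Bv), cos(2*pi*Bv)] with fixed random B."""
    def __init__(self, in_dim=2, n_features=128, scale=10.0):
        super().__init__()
        # scale sets the bandwidth: larger -> finer detail, more noise risk.
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, v):
        proj = 2 * math.pi * (v @ self.B)
        return torch.cat([proj.sin(), proj.cos()], dim=-1)

encode = FourierFeatures()
mlp = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 3))
rgb = mlp(encode(torch.rand(8, 2)))  # coordinates in, colours out
```

Periodic activations (Sitzmann et al. 2020) are a closely related alternative.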

6 References

Atzmon, Haim, Yariv, et al. 2019. “Controlling Neural Level Sets.” In.
Atzmon, and Lipman. 2020. “SAL: Sign Agnostic Learning of Shapes from Raw Data.” In.
Bautista, Guo, Abnar, et al. 2022. “GAUDI: A Neural Architect for Immersive 3D Scene Generation.” In.
Chabra, Lenssen, Ilg, et al. 2020. “Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction.” In.
Chen, and Zhang. 2018. “Learning Implicit Fields for Generative Shape Modeling.”
De Luigi, Cardace, Spezialetti, et al. 2023. “Deep Learning on Implicit Neural Representations of Shapes.” In.
Dockhorn, Vahdat, and Kreis. 2022. “GENIE: Higher-Order Denoising Diffusion Solvers.” In.
Du, Collins, Tenenbaum, et al. 2021. “Learning Signal-Agnostic Manifolds of Neural Fields.” In Advances in Neural Information Processing Systems.
Dupont, Kim, Eslami, et al. 2022. “From Data to Functa: Your Data Point Is a Function and You Can Treat It Like One.” In Proceedings of the 39th International Conference on Machine Learning.
Gropp, Yariv, Haim, et al. 2020. “Implicit Geometric Regularization for Learning Shapes.” In.
Huang, Bai, and Kolter. 2021. “(Implicit)^2: Implicit Layers for Implicit Representations.” In.
Lim, Kovachki, Baptista, et al. 2023. “Score-Based Diffusion Models in Function Space.”
Luo, Du, Tarr, et al. 2021. “Learning Neural Acoustic Fields.” In.
Ma, Han, Liu, et al. 2021. “Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces.” In.
Mescheder, Oechsle, Niemeyer, et al. 2018. “Occupancy Networks: Learning 3D Reconstruction in Function Space.”
Mildenhall, Srinivasan, Tancik, et al. 2020. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.” arXiv:2003.08934 [cs].
Navon, Shamsian, Achituve, et al. 2023. “Equivariant Architectures for Learning in Deep Weight Spaces.”
Or-El, Luo, Shan, et al. 2022. “StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Ortiz, Clegg, Dong, et al. 2022. “iSDF: Real-Time Neural Signed Distance Fields for Robot Perception.” In.
Park, Florence, Straub, et al. 2019. “DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Sitzmann, Martel, Bergman, et al. 2020. “Implicit Neural Representations with Periodic Activation Functions.” arXiv:2006.09661 [cs, eess].
Sitzmann, Zollhoefer, and Wetzstein. 2019. “Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations.” Advances in Neural Information Processing Systems.
Stanley. 2007. “Compositional Pattern Producing Networks: A Novel Abstraction of Development.” Genetic Programming and Evolvable Machines.
Takikawa, Litalien, Yin, et al. 2021. “Neural Geometric Level of Detail: Real-Time Rendering with Implicit 3D Shapes.”
Tewari, Fried, Thies, et al. 2020. “State of the Art on Neural Rendering.” Computer Graphics Forum.
Tsuchida, Ong, and Sejdinovic. 2023. “Squared Neural Families: A New Class of Tractable Density Models.”
Xie, Takikawa, Saito, et al. 2022. “Neural Fields in Visual Computing and Beyond.” Computer Graphics Forum.
Xu, Wang, Jiang, et al. 2022. “Signal Processing for Implicit Neural Representations.” In.
Yariv, Gu, Kasten, et al. 2021. “Volume Rendering of Neural Implicit Surfaces.” In.
Zhu, Peng, Larsson, et al. 2022. “NICE-SLAM: Neural Implicit Scalable Encoding for SLAM.”
Zhuang, Abnar, Gu, et al. 2022. “Diffusion Probabilistic Fields.” In.