What is it about?
Imagine you're standing at the North Pole. Your friend is somewhere else on the Earth, and you'd like to tell them what direction you are facing. You find yourself at a loss for words. North, south, east, and west no longer make sense, since you are at the pole. Maybe you can pick a point on the horizon and measure the angle between your orientation and that point. But how do you correctly relate your orientation to your friend's frame of reference? This is a well-known problem that arises when working with curved surfaces (like the Earth), and fortunately there are well-known solutions too. This paper adapts a technique from deep learning so that such solutions can be applied inside a neural network that operates on surfaces.
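The core idea behind such solutions can be sketched in a few lines. As a hypothetical illustration (the names and the complex-number representation here are assumptions, not the paper's actual code): if a feature at a point is stored as a complex number in that point's local coordinate frame, and a neighbouring point's frame is rotated by some angle, the feature can be carried over simply by rotating it by that angle. Its magnitude never depends on which frame was chosen, which is the sense in which the feature is rotation-equivariant.

```python
import cmath

def transport(feature: complex, theta: float) -> complex:
    """Express a tangent-frame feature in a frame rotated by angle theta.

    Rotating the complex number by theta re-expresses the same geometric
    quantity in the new frame; its magnitude is unchanged.
    """
    return feature * cmath.exp(1j * theta)

f = 2 + 1j                      # feature expressed in frame A
g = transport(f, cmath.pi / 2)  # the same feature expressed in frame B
print(abs(f), abs(g))           # the magnitudes agree: the frame choice drops out
```

Because the feature transforms predictably under a change of frame, a network can compare features at different points of the surface without ever fixing a global notion of "north".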
Why is it important?
Any method that applies deep learning techniques to surfaces must either deal with this same problem or use networks that are limited in what they can 'see' on the surface. This method gives a fundamental solution to the problem while allowing the network to see more information on the surface. Because of its fundamental nature, this result matters for any application of deep learning to surfaces, for example medical imaging, automated driving, and computer-aided design.
Read the Original
This page is a summary of: CNNs on surfaces using rotation-equivariant features, ACM Transactions on Graphics, August 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3386569.3392437.