What is it about?
Neural networks are at the core of many modern graphics applications for visual appearance. However, most networks are still trained "in isolation", without re-using knowledge gathered during previous training runs. We use meta-learning to embed this knowledge into the training process, which enables faster network training and inference at the same quality, using fewer data points.
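The core idea of meta-learning, learning an initialization that adapts quickly to new tasks, can be illustrated with a minimal sketch. The following toy uses a generic first-order scheme (in the style of Reptile), not the paper's actual method; the linear regression tasks, learning rates, and variable names are all illustrative assumptions:

```python
import numpy as np

# Illustrative first-order meta-learning toy (Reptile-style), NOT the
# paper's method. Each "task" is fitting a slope w in y = w * x; tasks
# differ in their true slope. Meta-training finds an initialization w0
# from which a few gradient steps suffice on a new, unseen task.

rng = np.random.default_rng(0)

def adapt(w, slope, inner_steps=5, lr=0.5):
    """Run a few SGD steps on one task, starting from weight w."""
    for _ in range(inner_steps):
        x = rng.uniform(-1.0, 1.0, 20)
        y = slope * x
        grad = np.mean(2.0 * (w * x - y) * x)  # d/dw of the MSE loss
        w -= lr * grad
    return w

# Meta-training: nudge the shared init toward each task's adapted weight.
w0, meta_lr = 0.0, 0.1
for _ in range(500):
    slope = rng.uniform(2.0, 4.0)       # sample a training task
    w_adapted = adapt(w0, slope)
    w0 += meta_lr * (w_adapted - w0)    # Reptile-style outer update

# A new task now needs only a couple of gradient steps from w0,
# whereas training from scratch (w = 0) would need many more.
w_new = adapt(w0, slope=3.5, inner_steps=2)
```

After meta-training, `w0` sits near the center of the task distribution, so adaptation to a fresh task converges in far fewer steps than training from zero, which is the "faster training with fewer data points" effect described above.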
Why is it important?
With the number of trained neural networks growing every day, accelerating their training and inference not only improves user convenience, but can also measurably reduce energy consumption and the CO2 emissions it entails. Moreover, our method reduces the amount of data needed to train a network, making it better suited to client-server architectures.
Read the Original
This page is a summary of: Metappearance, ACM Transactions on Graphics, November 2022, ACM (Association for Computing Machinery), DOI: 10.1145/3550454.3555458.