A Learnable Radial Basis Positional Embedding for Coordinate-MLPs

Authors

  • Sameera Ramasinghe, Amazon
  • Simon Lucey, University of Adelaide

DOI:

https://doi.org/10.1609/aaai.v37i2.25307

Keywords:

CV: Representation Learning for Vision, CV: 3D Computer Vision

Abstract

We propose a novel method to enhance the performance of coordinate-MLPs (also referred to as neural fields) by learning instance-specific positional embeddings. Naively optimizing the positional embedding parameters end-to-end along with the network weights, however, leads to poor generalization. Instead, we develop a generic framework for learning positional embeddings based on classic graph-Laplacian regularization, which implicitly balances the trade-off between memorization and generalization. This framework is then used to propose a novel positional embedding scheme whose hyperparameters are learned per coordinate (i.e., instance) to deliver optimal performance. We show that the proposed embedding achieves better performance and higher stability than the well-established random Fourier features (RFF). Further, we demonstrate that the proposed embedding scheme yields stable gradients, enabling seamless integration into deep architectures as an intermediate layer.
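To make the abstract concrete, below is a minimal sketch of a Gaussian radial-basis positional embedding with learnable centers and bandwidths feeding a small coordinate-MLP. This is an illustration only, not the authors' released code: the names (RBFEmbedding, CoordinateMLP, num_centers, log_sigma) are ours, PyTorch is assumed, and the paper's graph-Laplacian regularizer and per-coordinate hyperparameter learning are omitted.

import torch
import torch.nn as nn

class RBFEmbedding(nn.Module):
    """Gaussian RBF embedding: phi_k(x) = exp(-||x - c_k||^2 / (2 sigma_k^2))."""
    def __init__(self, in_dim: int, num_centers: int = 256):
        super().__init__()
        # Learnable centers c_k and per-center log-bandwidths (log keeps sigma > 0).
        self.centers = nn.Parameter(torch.rand(num_centers, in_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_centers))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> squared distances to all centers: (batch, num_centers)
        d2 = torch.cdist(x, self.centers).pow(2)
        sigma2 = torch.exp(self.log_sigma).pow(2)
        return torch.exp(-d2 / (2.0 * sigma2))

class CoordinateMLP(nn.Module):
    """Small MLP that maps embedded coordinates to signal values (e.g., RGB)."""
    def __init__(self, in_dim: int = 2, out_dim: int = 3,
                 hidden: int = 256, num_centers: int = 256):
        super().__init__()
        self.embed = RBFEmbedding(in_dim, num_centers)
        self.net = nn.Sequential(
            nn.Linear(num_centers, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(self.embed(x))

# Usage: fit an image f: [0,1]^2 -> RGB from pixel coordinates.
model = CoordinateMLP()
coords = torch.rand(1024, 2)   # batch of 2-D coordinates
rgb = model(coords)            # predicted colors, shape (1024, 3)

Because the centers and bandwidths are ordinary parameters, they receive gradients alongside the network weights; the paper's contribution is precisely the regularization that keeps this joint optimization from overfitting.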

Published

2023-06-26

How to Cite

Ramasinghe, S., & Lucey, S. (2023). A Learnable Radial Basis Positional Embedding for Coordinate-MLPs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2137-2145. https://doi.org/10.1609/aaai.v37i2.25307

Section

AAAI Technical Track on Computer Vision II