Split-Layer: Enhancing Implicit Neural Representation by Maximizing the Dimensionality of Feature Space

Authors

  • Zhicheng Cai, Nanjing University
  • Hao Zhu, Nanjing University
  • Linsen Chen, Nanjing University
  • Qiu Shen, Nanjing University
  • Xun Cao, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v40i4.37243

Abstract

Implicit neural representation (INR) models signals as continuous functions using neural networks, offering efficient and differentiable optimization for inverse problems across diverse disciplines. However, the representational capacity of INR—defined by the range of functions the neural network can characterize—is inherently limited by the low-dimensional feature space in conventional multilayer perceptron (MLP) architectures. While widening the MLP can linearly increase feature space dimensionality, it also leads to a quadratic growth in computational and memory costs. To address this limitation, we propose the split-layer, a novel reformulation of MLP construction. The split-layer divides each layer into multiple parallel branches and integrates their outputs via Hadamard product, effectively constructing a high-degree polynomial space. This approach significantly enhances INR’s representational capacity by expanding the feature space dimensionality without incurring prohibitive computational overhead. Extensive experiments demonstrate that the split-layer substantially improves INR performance, surpassing existing methods across multiple tasks, including 2D image fitting, 2D CT reconstruction, 3D shape representation, and 5D novel view synthesis.
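The abstract describes the split-layer as dividing a layer into parallel branches whose outputs are fused by a Hadamard (elementwise) product, so that k branches yield degree-k polynomial features. The paper's exact formulation is not given on this page; the NumPy sketch below is only an illustration of that idea, and all names (`split_layer`, the branch count, the feature width) are assumptions, not the authors' implementation.

```python
import numpy as np

def split_layer(x, weights, biases):
    """Illustrative split-layer (assumed form, based on the abstract):
    each parallel branch applies its own affine map to x, and the
    branch outputs are combined by elementwise (Hadamard) product,
    producing degree-k polynomial features for k branches."""
    out = np.ones((x.shape[0], weights[0].shape[1]))
    for W, b in zip(weights, biases):
        out *= x @ W + b  # fuse branches multiplicatively
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))                        # batch of 2-D coordinates
Ws = [rng.normal(size=(2, 8)) for _ in range(3)]   # 3 parallel branches
bs = [rng.normal(size=(8,)) for _ in range(3)]
feat = split_layer(x, Ws, bs)                      # degree-3 polynomial features
print(feat.shape)                                  # (4, 8)
```

Note how three branches of width 8 cost roughly the same as three ordinary linear maps, while the product spans a cubic polynomial space; a single plain layer of the same cost would only give linear features, which matches the abstract's claim of expanding feature-space dimensionality without quadratic overhead.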

Published

2026-03-14

How to Cite

Cai, Z., Zhu, H., Chen, L., Shen, Q., & Cao, X. (2026). Split-Layer: Enhancing Implicit Neural Representation by Maximizing the Dimensionality of Feature Space. Proceedings of the AAAI Conference on Artificial Intelligence, 40(4), 2561-2570. https://doi.org/10.1609/aaai.v40i4.37243

Section

AAAI Technical Track on Computer Vision I