S-INF: Towards Realistic Indoor Scene Synthesis via Scene Implicit Neural Field

Authors

  • Zixi Liang, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Guowei Xu, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Haifeng Wu, School of Computer Science and Engineering, University of Electronic Science and Technology of China
  • Ye Huang, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Wen Li, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China; School of Computer Science and Engineering, University of Electronic Science and Technology of China
  • Lixin Duan, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China; Sichuan Provincial Key Laboratory for Human Disease Gene Study and the Center for Medical Genetics, Department of Laboratory Medicine, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital, UESTC

DOI:

https://doi.org/10.1609/aaai.v39i5.32549

Abstract

Learning-based methods have become increasingly popular in 3D indoor scene synthesis (ISS), showing superior performance over traditional optimization-based approaches. These learning-based methods typically model distributions on simple yet explicit scene representations using generative models. However, because such oversimplified explicit representations overlook detailed information, and because these methods lack guidance from multimodal relationships within the scene, most learning-based methods struggle to generate indoor scenes with realistic object arrangements and styles. In this paper, we introduce a new method, Scene Implicit Neural Field (S-INF), for indoor scene synthesis, which aims to learn meaningful representations of multimodal relationships to enhance the realism of indoor scene synthesis. S-INF assumes that the scene layout is often related to detailed object information. It disentangles the multimodal relationships into scene layout relationships and detailed object relationships, fusing them later through implicit neural fields (INFs). By learning specialized scene layout relationships and projecting them into S-INF, we achieve realistic generation of scene layouts. Additionally, S-INF captures dense and detailed object relationships through differentiable rendering, ensuring stylistic consistency across objects. Through extensive experiments on the benchmark 3D-FRONT dataset, we demonstrate that our method consistently achieves state-of-the-art performance under different types of ISS.
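The abstract's core idea is a disentangle-then-fuse architecture: one branch encodes scene layout relationships, another encodes detailed object relationships, and an implicit neural field fuses both codes with a 3D query point. The paper's actual networks and training are not reproduced here; the following is a minimal NumPy sketch of that structure only, with every dimension, function name, and weight initialization being an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    # Illustrative random-weight MLP; a real model would be trained.
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    # tanh activations on hidden layers, linear output layer
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

# Two disentangled branches (dimensions are placeholders):
layout_enc = mlp([16, 32, 8])         # scene layout relationships -> code
object_enc = mlp([64, 32, 8])         # detailed object relationships -> code
field      = mlp([3 + 8 + 8, 64, 1])  # implicit field: (xyz, codes) -> scalar

def query_scene_field(xyz, layout_feat, object_feat):
    """Fuse both relationship codes with a 3D query point."""
    z_layout = forward(layout_enc, layout_feat)
    z_object = forward(object_enc, object_feat)
    inp = np.concatenate([xyz, z_layout, z_object])
    return forward(field, inp)  # e.g. an occupancy or feature logit

occ = query_scene_field(np.zeros(3),
                        rng.standard_normal(16),
                        rng.standard_normal(64))
print(occ.shape)  # (1,)
```

The sketch shows only the data flow implied by the abstract (separate relationship encoders whose codes condition a coordinate-based field); the paper's differentiable-rendering supervision for stylistic consistency is not modeled here.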

Published

2025-04-11

How to Cite

Liang, Z., Xu, G., Wu, H., Huang, Y., Li, W., & Duan, L. (2025). S-INF: Towards Realistic Indoor Scene Synthesis via Scene Implicit Neural Field. Proceedings of the AAAI Conference on Artificial Intelligence, 39(5), 5173-5181. https://doi.org/10.1609/aaai.v39i5.32549

Section

AAAI Technical Track on Computer Vision IV