NeuSEditor: From Multi-View Images to Text-Guided Neural Surface Edits

Abstract

Implicit surface representations are valued for their compactness and continuity, but they pose significant challenges for editing. To address these challenges, we introduce NeuSEditor, a novel method for text-guided editing of neural implicit surfaces derived from multi-view images. NeuSEditor employs an identity-preserving architecture that efficiently separates the scene into foreground and background, enabling precise modifications without altering the scene's inherent properties. Our geometry-aware distillation loss significantly enhances rendering and geometric quality. Moreover, our method simplifies the editing workflow by eliminating the need for continuous dataset updates and source prompting. NeuSEditor outperforms recent state-of-the-art methods such as PDS and Instruct-NeRF2NeRF, delivering superior rendering and geometric quality.
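The paper's exact loss formulation is not reproduced on this page. Purely as a hypothetical illustration of what a geometry-aware distillation objective can look like, the sketch below pairs an appearance term (pulling the edited render toward a text-guided diffusion target) with a normal-consistency term that preserves the original surface outside the edited region. Every name here (geometry_aware_distillation_loss, edit_mask, lambda_geo, and so on) is an assumption for illustration, not NeuSEditor's actual API.

# Hypothetical sketch, not NeuSEditor's real implementation: a distillation
# loss made "geometry-aware" by penalizing normal drift outside the edit.
import torch
import torch.nn.functional as F

def geometry_aware_distillation_loss(
    edited_rgb: torch.Tensor,        # (B, 3, H, W) render of the edited scene
    diffusion_target: torch.Tensor,  # (B, 3, H, W) target from a text-guided diffusion model
    edited_normals: torch.Tensor,    # (B, 3, H, W) surface normals of the edited scene
    source_normals: torch.Tensor,    # (B, 3, H, W) surface normals of the original scene
    edit_mask: torch.Tensor,         # (B, 1, H, W) 1 inside the edited foreground region
    lambda_geo: float = 0.1,         # assumed weighting of the geometry term
) -> torch.Tensor:
    # Appearance term: match the diffusion model's text-guided target.
    appearance = F.mse_loss(edited_rgb, diffusion_target)
    # Geometry term: outside the edit mask, keep normals consistent with the
    # original surface so the scene's identity is preserved.
    keep = 1.0 - edit_mask
    geometry = (keep * (edited_normals - source_normals).pow(2)).mean()
    return appearance + lambda_geo * geometry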

Conceptual Overview

Step 1: Understand the scene's 3D identity by analyzing the input multi-view images, decomposing the scene into foreground and background (decompose and conquer)

Step 2: Modify the foreground elements according to the edit prompt

Step 3: Compose the edited foreground with the preserved background and render the complete model (a minimal code sketch of this pipeline follows below)
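To make the three steps concrete, here is a minimal, hypothetical sketch of the decompose-edit-compose pipeline. The class and function names (ComposedScene, edit_foreground, text_guided_loss, and so on) are illustrative placeholders, assuming a NeuS-style differentiable renderer; they do not reflect NeuSEditor's actual code.

# Hypothetical sketch of the decompose-edit-compose pipeline above.
import torch

def sample_training_rays(n: int = 1024) -> torch.Tensor:
    # Placeholder ray batch (3D origin + 3D direction per row); a real
    # pipeline samples rays from the training cameras.
    return torch.randn(n, 6)

def text_guided_loss(rgb: torch.Tensor, prompt: str) -> torch.Tensor:
    # Placeholder: a real implementation would score the render against a
    # text-to-image diffusion model conditioned on `prompt`.
    return rgb.pow(2).mean()

class ComposedScene(torch.nn.Module):
    # Step 1: keep separate foreground and background fields so the edit can
    # touch the foreground while the background identity is preserved.
    def __init__(self, foreground: torch.nn.Module, background: torch.nn.Module):
        super().__init__()
        self.foreground = foreground  # e.g. a NeuS-style SDF + color network
        self.background = background  # e.g. a separate background field

    def render(self, rays: torch.Tensor) -> torch.Tensor:
        # Step 3: composite the foreground over the background per ray.
        fg_rgb, fg_alpha = self.foreground(rays)  # (N, 3), (N, 1)
        bg_rgb = self.background(rays)            # (N, 3)
        return fg_alpha * fg_rgb + (1.0 - fg_alpha) * bg_rgb

def edit_foreground(scene: ComposedScene, prompt: str, steps: int = 1000) -> None:
    # Step 2: optimize only the foreground under the text-guided loss; the
    # background parameters are never updated.
    optimizer = torch.optim.Adam(scene.foreground.parameters(), lr=1e-4)
    for _ in range(steps):
        rgb = scene.render(sample_training_rays())
        loss = text_guided_loss(rgb, prompt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Toy stand-in fields so the sketch runs end to end.
class ToyForeground(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(6, 4)
    def forward(self, rays: torch.Tensor):
        out = torch.sigmoid(self.net(rays))
        return out[:, :3], out[:, 3:]  # rgb, alpha

class ToyBackground(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(6, 3)
    def forward(self, rays: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(rays))

scene = ComposedScene(ToyForeground(), ToyBackground())
edit_foreground(scene, "put him into a suit", steps=10)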

Optimization track

The video below shows the optimization trajectory for the IN2N-data person scene with the edit prompt "put him into a suit".

Qualitative results


The following videos show our method's results (both the learned identity and the edit) on various scenes.
Check out the user survey here to compare our method with other state-of-the-art methods.


DTU-Scan24



DTU-Scan65



DTU-Scan83 and DTU-Scan105



DTU-Scan106 and DTU-Scan110



Blender (NeRF synthetic)



IN2N-data bear scene



IN2N-data person scene



Background editing

(Leftmost renders are for reference)

Our Extracted Background

Our Full Model

IN2N

PDS (NeRF)

PDS (Splat)

BibTeX

@InProceedings{...,
  author    = {Author#1 and Author#2 and Author#3},
  title     = {NeuSEditor: From Multi-View Images to Text-Guided Neural Surface Edits},
  booktitle = {},
  year      = {2025},
}