LLaNA is the first NeRF-language assistant, capable of performing new tasks such as NeRF captioning and NeRF QA.

NeRF Captioning

NeRF QA

Abstract

We present LLaNA, the first general-purpose NeRF-language assistant capable of performing new tasks such as NeRF captioning and Q&A.

Multimodal Large Language Models (MLLMs) have demonstrated an excellent understanding of images and 3D data. However, both modalities have shortcomings in holistically capturing the appearance and geometry of objects. Meanwhile, Neural Radiance Fields (NeRFs), which encode information within the weights of a simple Multi-Layer Perceptron (MLP), have emerged as an increasingly widespread modality that simultaneously encodes the geometry and appearance of objects. This work investigates the feasibility and effectiveness of ingesting NeRFs into MLLMs.

Notably, our method directly processes the weights of the NeRF’s MLP to extract information about the represented objects without the need to render images or materialize 3D data structures. Moreover, we build a dataset of NeRFs with text annotations for various NeRF-language tasks with no human intervention. Based on this dataset, we develop a benchmark to evaluate the NeRF understanding capability of our method. Results show that processing NeRF weights performs favourably against extracting 2D or 3D representations from NeRFs.

LLaNA Architecture

In this work, we explore how a NeRF assistant can be realized by processing the NeRF weights directly. For this reason, we adopt nf2vec as our meta-encoder: it takes as input the weights of a NeRF and yields a global embedding that distills the content of the input NeRF. Then, we build LLaNA by leveraging a pre-trained LLM with a Transformer backbone, LLaMA 2 in our experiments, and injecting the NeRF modality into its embedding input space. We employ a trainable linear projection layer, φ, to project the embedding of the input NeRF computed by the meta-encoder into the LLaMA 2 embedding space.
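As a minimal sketch, the projection amounts to a single trainable linear layer mapping the nf2vec embedding into the LLM token space. The PyTorch snippet below illustrates this idea; the class name NerfProjector and the embedding dimensions are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class NerfProjector(nn.Module):
    """Projects a global nf2vec embedding into the LLaMA 2 embedding space."""

    def __init__(self, nerf_embed_dim: int = 1024, llm_embed_dim: int = 4096):
        super().__init__()
        # phi: the trainable linear projection described above
        self.phi = nn.Linear(nerf_embed_dim, llm_embed_dim)

    def forward(self, nerf_embedding: torch.Tensor) -> torch.Tensor:
        # nerf_embedding: (batch, nerf_embed_dim), computed by the
        # meta-encoder directly from the NeRF's MLP weights
        return self.phi(nerf_embedding)

The projected NeRF embedding can then be prepended to the word embeddings of the prompt before the sequence is fed to the LLM, e.g. (hypothetical tensor shapes):

nerf_token = projector(nerf_embedding).unsqueeze(1)          # (B, 1, llm_dim)
inputs_embeds = torch.cat([nerf_token, word_embeds], dim=1)  # (B, 1+T, llm_dim)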

LLaNA is trained in two stages: in the first, we train the projector network φ to align the NeRF and word embedding spaces while keeping the LLM weights frozen; in the second, we optimize both the projector and the LLM to help the model understand and reason about NeRF data.
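The sketch below, again illustrative rather than the released code, shows how the two stages differ only in which parameters are trainable; the AdamW learning rates are placeholder assumptions.

import torch

def configure_stage(llm, projector, stage: int):
    # Stage 1: align NeRF and word embedding spaces -> train phi only,
    # keeping the LLM frozen. Stage 2: optimize projector and LLM jointly.
    for p in llm.parameters():
        p.requires_grad = (stage == 2)
    for p in projector.parameters():
        p.requires_grad = True
    trainable = [p for p in list(llm.parameters()) + list(projector.parameters())
                 if p.requires_grad]
    # Learning rates are illustrative, not the paper's settings
    return torch.optim.AdamW(trainable, lr=2e-3 if stage == 1 else 2e-5)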

Architecture of LLaNA

ShapeNeRF-Text Dataset

ShapeNeRF-Text is a NeRF-language benchmark based on ShapeNet, providing conversations about 40K NeRFs. Following the structure defined in PointLLM, each object is paired with a brief description, a detailed description, three single-round QAs, and one multi-round QA. The automatic annotation pipeline relies on multi-view captioning and text generation, leveraging the LLaVA and LLaMA models.
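The per-object annotations can be pictured as a record like the following; the field names and sample text are hypothetical and only mirror the structure described above, not the released schema.

sample = {
    "object_id": "shapenet/02691156/xyz",  # hypothetical identifier
    "brief_description": "A small propeller airplane.",
    "detailed_description": "A single-engine propeller airplane with ...",
    "single_round_qa": [  # three single-round QA pairs per object
        {"Q": "What is the main color of the object?", "A": "White."},
        # ... two more pairs
    ],
    "multi_round_qa": [  # one multi-round conversation per object
        {"Q": "What is this object?", "A": "An airplane."},
        {"Q": "Does it have a propeller?", "A": "Yes, on the nose."},
    ],
}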

Related Works

Other recent works have explored the use of LLMs to reason about the 3D world.

PointLLM and GPT4Point achieve 3D-language understanding by leveraging colored point clouds as the input data representation. LLM-Grounder performs open-vocabulary 3D visual grounding on top of OpenScene and LERF, relying on multi-view images and point clouds as input. In contrast, LLaNA considers NeRFs as the only input modality.

BibTeX

@misc{amaduzzi2024llana,
      title={LLaNA: Large Language and NeRF Assistant}, 
      author={Andrea Amaduzzi and Pierluigi Zama Ramirez and Giuseppe Lisanti and Samuele Salti and Luigi Di Stefano},
      year={2024},
      eprint={2406.11840},
      archivePrefix={arXiv}
}