Know3D: Prompting 3D Generation with Knowledge from Vision-Language Models

1Peking University,  2Math Magic,  3The Hong Kong University of Science and Technology,  4Dalian University of Technology
*Equal contribution.   Corresponding authors.

We introduce Know3D, a novel framework that incorporates knowledge from multimodal large language models into the 3D generative process, enabling language-controllable generation of unseen regions of 3D assets.

Abstract

Recent advances in 3D generation have improved the fidelity and geometric detail of synthesized 3D assets. However, due to the inherent ambiguity of single-view observations and the lack of robust global structural priors caused by limited 3D training data, the unseen regions generated by existing models are often stochastic and hard to control; they may fail to align with user intent or produce implausible geometry. In this paper, we propose Know3D, a novel framework that incorporates rich knowledge from multimodal large language models into the 3D generative process via latent hidden-state injection, enabling language-controllable generation of the back view of 3D assets. We build on a VLM-diffusion model, in which the VLM provides semantic understanding and guidance, while the diffusion model acts as a bridge that transfers semantic knowledge from the VLM to the 3D generation model. In this way, we bridge the gap between abstract textual instructions and the geometric reconstruction of unobserved regions, transforming traditionally stochastic back-view hallucination into a semantically controllable process and pointing to a promising direction for future 3D generation models.

Method

Know3D pipeline

We propose Know3D, a novel knowledge-guided 3D generation framework that uses vision-language models (VLMs) to achieve controllable generation of unobserved regions. Instead of directly integrating high-level VLM features into the 3D generation pipeline, our method employs a diffusion model as an intermediate bridge to convert semantic knowledge into structural priors in the image space.
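The hidden-state injection described above can be sketched in a few lines. This is a minimal, dependency-free illustration, not the released implementation: all names (`cross_attention`, `inject_vlm_knowledge`) and the residual-injection design are assumptions, and the toy dimensions stand in for real VLM hidden states and diffusion latents. The idea shown is that each diffusion latent token attends over the VLM's hidden states, so semantic guidance enters the diffusion bridge before it produces image-space priors for the 3D generator.

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values, dim):
    """Each query (a diffusion latent token) attends over the VLM hidden states."""
    scale = 1.0 / math.sqrt(dim)
    out = []
    for q in queries:
        scores = [scale * sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        w = softmax(scores)
        out.append([sum(wj * v[d] for wj, v in zip(w, values))
                    for d in range(dim)])
    return out

def inject_vlm_knowledge(latents, vlm_hidden, dim):
    """Condition diffusion latents on VLM hidden states via a residual
    cross-attention injection (illustrative design, not the paper's exact one)."""
    ctx = cross_attention(latents, vlm_hidden, vlm_hidden, dim)
    return [[l + c for l, c in zip(lat, cvec)]
            for lat, cvec in zip(latents, ctx)]

# Toy example: 3 latent tokens, 5 VLM hidden states, feature dim 4.
random.seed(0)
dim = 4
latents = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(3)]
vlm_hidden = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(5)]
conditioned = inject_vlm_knowledge(latents, vlm_hidden, dim)
print(len(conditioned), len(conditioned[0]))  # → 3 4
```

In a real system the attention would run inside the diffusion U-Net or DiT blocks with learned query/key/value projections; the sketch keeps only the data flow, which is the part the pipeline figure conveys.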

Comparison

More Results

BibTeX

@article{chen2025know3d,
  title={Know3D: Prompting 3D Generation with Knowledge from Vision-Language Models},
  author={Wenyue Chen and Wenjue Chen and Peng Li and Qinghe Wang and Xu Jia and Heliang Zheng and Rongfei Jia and Yuan Liu and Ronggang Wang},
  year={2026},
  journal={arXiv preprint arXiv:2603.22782}
}