Meta AI Unveils SAM3D Model for Top-Notch 3D Reconstruction Using a Single Image
2025-11-20
Author: Editor

As reported by AI-focused outlets, Meta AI has launched the newest addition to its Segment Anything series: SAM3D. The model ships with two sets of weights: SAM3D Objects, tailored to reconstructing general objects and scenes, and SAM3D Body, geared toward portrait modeling. Both generate 3D assets with consistent texture, material, and geometry from a single 2D photograph, and according to Meta they outperform solutions based on NeRF (Neural Radiance Fields) and Gaussian Splatting.

The model uses a combined 'spatial location-semantic' encoding scheme designed to preserve physical accuracy, making it directly applicable in fields such as AR/VR (Augmented Reality/Virtual Reality), robotics, and film and television post-production.

Official tests report strong results: SAM3D Objects reduces Chamfer Distance by 28% and improves normal consistency by 19% on public datasets, while SAM3D Body beats the best prior single-image method by 14% on the MPJPE (Mean Per Joint Position Error) metric of the AGORA-3D benchmark.
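For readers unfamiliar with these metrics, the sketch below shows common textbook definitions of Chamfer Distance and MPJPE for point clouds and joint sets; these are illustrative formulations, not Meta's evaluation code, and the brute-force nearest-neighbor search is for clarity rather than speed.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds of shape (N, 3) and (M, 3).

    For each point in one cloud, take the squared distance to its nearest
    neighbor in the other cloud; average both directions and sum them.
    """
    # Pairwise squared L2 distances via broadcasting, shape (N, M)
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Per Joint Position Error: average Euclidean distance between
    predicted and ground-truth joints, both of shape (J, 3)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy check: a cloud compared with itself has zero error under both metrics
cloud = np.random.default_rng(0).random((100, 3))
print(chamfer_distance(cloud, cloud))  # → 0.0
print(mpjpe(cloud, cloud))             # → 0.0
```

Lower is better for both metrics, which is why the reported reductions (28% in Chamfer Distance, 14% in MPJPE) indicate improvements.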

The model is already integrated into the creative tools of Quest 3 and Horizon Worlds, and developers can access the API (priced at $0.02 per model) through the Edits and Vibes applications. A real-time mobile inference SDK is slated for release in the first quarter of 2026. Meta has also open-sourced the weights, inference code, and evaluation benchmarks, and introduced a 'View in Room' feature on Facebook Marketplace.