Since the rise of Gen-AI/3D, the importance of data has become increasingly obvious. The quality and variety of data are crucial for training models and maintaining model performance. This is especially true in 3D, where the goal is a level of quality that can slot into AAA and virtual production workflows.
Today, I'm excited to share more about our work and vision at M-XR, a journey that began six years ago...
When I met Elliott Round in 2018, we were obsessed with creating realistic 3D assets and procedural workflows. We believed, and still do, that proceduralism is the key to scalable, realistic virtual world building.
However, the 3D data needed to execute procedural workflows and train generative models simply does not exist. Traditional 3D assets are either hand-crafted by artists, and therefore too perfect, or captured through scanning techniques that cannot produce accurate, engine-ready data.
The core issue? Material data. Shaders need this data to render assets accurately in any engine, and the more precise the data, the better the render. The bottom line is that these features are exactly what models need to learn from, and they do not exist within current datasets.
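To make "precise material data" concrete, here is a minimal sketch of one piece of the standard metallic-roughness PBR model (standard shading practice, not M-XR's own pipeline): the specular reflectance at normal incidence, F0, that a shader derives from a captured base color and metallic value. Even a small error in the measured metallic value shifts the entire specular response of the surface.

```python
def f0_from_base_color(base_color, metallic, dielectric_f0=0.04):
    """Specular reflectance at normal incidence (F0) in the standard
    metallic-roughness PBR model.

    Dielectrics reflect roughly 4% of light regardless of base color,
    while metals tint their reflections with the base color, so F0 is
    a per-channel blend between the two, driven by the metallic value.
    """
    return tuple(
        dielectric_f0 * (1.0 - metallic) + channel * metallic
        for channel in base_color
    )

# A dielectric (metallic = 0) ignores base color entirely:
print(f0_from_base_color((0.8, 0.2, 0.2), 0.0))  # (0.04, 0.04, 0.04)

# A metal (metallic = 1) reflects its base color:
print(f0_from_base_color((1.0, 0.77, 0.34), 1.0))  # (1.0, 0.77, 0.34)
```

The point is that a renderer never sees "the material" directly, only parameters like these; if the captured values are wrong, every engine downstream renders the asset wrong.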
That's why we created a method to capture real-world data with unmatched accuracy and compatibility with any engine. This breakthrough has created a flywheel effect, generating rich, detailed 3D data that was previously unattainable. For the first time ever, we can capture all the intricate features of reality, providing generative models with the data they need to reconstruct our world accurately and convincingly.
Because our data is measured from reality, it holds deep context about the relationships between materials and surfaces: how they should look, feel, and impact their environment. It details how different assets and materials age or weather over time, and provides additional data points, such as material classes, for use by physics or sound engines.
Now, our plan is to integrate our technology with every capture device and engine in the world, enabling studios and creators to automate their scanning pipelines, create hyper-realistic 3D assets faster than ever, and acquire 3D data that will supercharge and sustain the accuracy and quality of generative and procedural 3D workflows.
This matters because generative models are beginning to see diminishing returns due to a lack of quality data and structure.
Here are some of the anticipated benefits of the method:
Capture is possible with a smartphone, a DSLR camera, or a custom rig.
Automated material and texture workflows, at any resolution and LOD.
Data modelled specifically for any shader or engine, enabling interoperability.
Material classification, enabling automatic segmentation of assets.
Additional and advanced material maps such as metallic and anisotropic.
Assets stored in our RAW .MXR format can be re-sampled at any time, removing the need to recapture assets when a shader is updated.
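As an illustration of what re-sampling stored material data for a different shader can look like (a simplified heuristic in the spirit of well-known specular-glossiness to metallic-roughness conversions, not the actual .MXR pipeline), consider projecting one stored description into another shading model:

```python
DIELECTRIC_F0 = 0.04  # typical normal-incidence reflectance of non-metals

def to_metallic_roughness(diffuse, specular, glossiness):
    """Re-derive metallic-roughness parameters from a stored
    specular-glossiness description (simplified heuristic).

    Real conversions (e.g. for glTF) solve this per texel; this sketch
    only shows that one rich source can target multiple shading models.
    """
    roughness = 1.0 - glossiness
    # Heuristic: bright specular with a near-black diffuse means metal.
    if max(specular) > 0.5 and max(diffuse) < 0.1:
        return specular, 1.0, roughness  # base_color, metallic, roughness
    return diffuse, 0.0, roughness

# Painted plastic: dielectric, keeps its diffuse color as base color.
print(to_metallic_roughness((0.5, 0.2, 0.1), (DIELECTRIC_F0,) * 3, 0.5))
```

The design point is that when the stored source is richer than any one shader's parameterization, updating or swapping a shader only requires re-projecting the data, not recapturing the asset.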
Thank you to all our amazing team, partners and customers who continue to dig deep and support us in our mission to enable studios, teams and artists to create 3D content they care about.