Helm.ai, a leading AI software provider, has introduced WorldGen-1, a pioneering multi-sensor generative AI foundation model designed to simulate the entire autonomous vehicle stack. This advanced model synthesizes highly realistic sensor and perception data, predicts the behavior of vehicles and pedestrians, and streamlines the development and validation of autonomous driving systems.
Key Highlights:
- WorldGen-1 synthesizes sensor and perception data across multiple modalities.
- Trained on thousands of hours of diverse driving data for comprehensive simulation.
- Generates high-fidelity multi-sensor labeled data to address challenging corner cases.
- Extrapolates from real camera data to multiple other modalities, enhancing dataset richness.
- Predicts behaviors of pedestrians, vehicles, and the ego-vehicle, aiding in scenario planning.
- Facilitates advanced multi-agent planning and prediction for autonomous driving.
Helm.ai’s WorldGen-1 leverages innovative generative DNN architectures and Deep Teaching, an efficient unsupervised training technology, to cover every layer of the autonomous driving stack, including vision, perception, lidar, and odometry. The model simultaneously generates realistic surround-view camera imagery, semantic segmentation at the perception layer, lidar front-view and bird’s-eye-view representations, and the ego-vehicle path in physical coordinates. By producing consistent data across the entire AV stack, WorldGen-1 accurately replicates real-world scenarios from the perspective of the self-driving vehicle.
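Helm.ai has not published an interface for WorldGen-1, but the data contract implied above can be made concrete. The Python sketch below is purely illustrative: every class name, field, and tensor shape is an assumption, and random tensors stand in for the model's outputs. It shows what one jointly consistent multi-sensor sample, spanning cameras, perception, lidar, and the ego path, might look like.

```python
# Illustrative only: Helm.ai has not published a WorldGen-1 API. All names,
# fields, and shapes here are hypothetical; random tensors stand in for the
# generative model so the data contract is runnable.
from dataclasses import dataclass
import numpy as np

@dataclass
class WorldGenSample:
    """One simulated frame, consistent across the AV stack."""
    surround_cameras: np.ndarray  # (n_cams, H, W, 3) RGB, e.g. 6 surround views
    semantic_seg: np.ndarray      # (n_cams, H, W) per-pixel class ids (perception layer)
    lidar_front: np.ndarray       # (64, 1024) front-view range image, metres
    lidar_bev: np.ndarray         # (200, 200) bird's-eye-view occupancy grid
    ego_path: np.ndarray          # (T, 2) future ego positions in physical (x, y) metres

def sample_frame(rng: np.random.Generator,
                 n_cams: int = 6, h: int = 256, w: int = 256) -> WorldGenSample:
    """Stand-in for one draw from the generative model."""
    return WorldGenSample(
        surround_cameras=rng.random((n_cams, h, w, 3), dtype=np.float32),
        semantic_seg=rng.integers(0, 20, size=(n_cams, h, w)),
        lidar_front=rng.random((64, 1024), dtype=np.float32) * 100.0,
        lidar_bev=(rng.random((200, 200)) > 0.95).astype(np.float32),
        ego_path=np.cumsum(rng.normal(0.0, 0.5, size=(50, 2)), axis=0),
    )

if __name__ == "__main__":
    frame = sample_frame(np.random.default_rng(0))
    print("cameras:", frame.surround_cameras.shape, "ego path:", frame.ego_path.shape)
```

The point of the sketch is the consistency requirement: every modality in a sample describes the same underlying scene, which is what distinguishes full-stack simulation from generating each sensor stream independently.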
Moreover, WorldGen-1 can extrapolate from real camera data to create multi-sensor datasets, enhancing the richness of camera-only datasets and reducing data collection costs. This capability augments existing datasets, yielding a more comprehensive training corpus for autonomous driving systems.
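To make the extrapolation idea concrete, here is a hypothetical sketch of a conditional generator that takes real camera footage and fills in the missing modalities. The function name, output keys, and the cheap stand-in heuristics are all assumptions for illustration, not Helm.ai's method.

```python
# Hypothetical sketch of camera-to-multi-sensor extrapolation: a conditional
# generator that takes real camera frames and fills in the remaining
# modalities. Placeholder heuristics stand in for the learned model.
import numpy as np

def extrapolate_modalities(camera_frames: np.ndarray,
                           rng: np.random.Generator) -> dict[str, np.ndarray]:
    """camera_frames: (T, H, W, 3) real RGB video.
    Returns synthetic labels/sensors conditioned on the cameras."""
    t, h, w, _ = camera_frames.shape
    # A learned model would predict these jointly from the images; here we
    # only illustrate the output contract with cheap stand-ins.
    brightness = camera_frames.mean(axis=-1)           # (T, H, W)
    semantic_seg = (brightness * 19).astype(np.int64)  # fake class ids 0..18
    lidar_bev = (rng.random((t, 200, 200)) > 0.95).astype(np.float32)
    return {"semantic_seg": semantic_seg, "lidar_bev": lidar_bev}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.random((8, 128, 128, 3), dtype=np.float32)  # pretend camera-only log
    out = extrapolate_modalities(video, rng)
    print({k: v.shape for k, v in out.items()})
```

In this framing, a fleet's existing camera-only logs become inputs rather than finished datasets: the generator upgrades them to the full sensor suite without re-driving the routes with lidar-equipped vehicles.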
Beyond simulation and extrapolation, WorldGen-1 can predict behaviors of pedestrians, vehicles, and the ego-vehicle based on observed input sequences, generating realistic temporal sequences up to minutes in length. This enables the creation of a wide range of potential scenarios, including rare corner cases, and supports advanced multi-agent planning and prediction.
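The prediction capability amounts to rolling a scene forward from an observed history. The sketch below illustrates that loop with a deliberately naive constant-velocity-plus-noise step standing in for the learned dynamics; all names and parameters are hypothetical.

```python
# Hedged sketch of the behaviour-prediction loop: given an observed history,
# roll agents (pedestrians, vehicles, ego) forward autoregressively to produce
# a long temporal sequence. The constant-velocity-plus-noise update is a
# placeholder for the learned dynamics; all names are hypothetical.
import numpy as np

def rollout(history: np.ndarray, steps: int, dt: float,
            rng: np.random.Generator) -> np.ndarray:
    """history: (T_obs, n_agents, 2) observed (x, y) positions.
    Returns (steps, n_agents, 2) predicted future positions."""
    pos = history[-1]
    vel = (history[-1] - history[-2]) / dt  # velocity estimate from last two frames
    future = np.empty((steps, *pos.shape))
    for t in range(steps):
        # A learned model would condition each step on the full scene; the
        # placeholder just perturbs a constant-velocity prediction.
        vel = vel + rng.normal(0.0, 0.1, size=vel.shape) * dt
        pos = pos + vel * dt
        future[t] = pos
    return future

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hist = np.cumsum(rng.normal(0, 0.5, size=(10, 5, 2)), axis=0)  # 10 frames, 5 agents
    # 10 Hz for two minutes = 1200 steps, matching "sequences up to minutes in length"
    pred = rollout(hist, steps=1200, dt=0.1, rng=rng)
    print(pred.shape)  # (1200, 5, 2)
```

Sampling many such rollouts from the same history is what yields a distribution over futures, including the rare corner cases the announcement highlights.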
“Combining innovation in generative AI architectures with our Deep Teaching technology yields a highly scalable and capital-efficient form of generative AI. With WorldGen-1, we’re taking another step towards closing the sim-to-real gap for autonomous driving, which is the key to streamlining and unifying the development and validation of high-end ADAS and L4 systems. We’re providing automakers with a tool to accelerate development, improve safety, and dramatically reduce the gap between simulation and real-world testing,” said Helm.ai’s CEO and Co-Founder, Vladislav Voroninski.
“Generating data from WorldGen-1 is like creating a vast collection of diverse digital siblings of real-world driving environments at the level of richness of the full AV sensor stack, replete with smart agents that think and predict like humans, enabling us to tackle the most complex challenges in autonomous driving,” added Voroninski.
About Helm.ai:
Founded in 2016 and headquartered in Redwood City, CA, Helm.ai is developing the next generation of AI software for high-end ADAS, Level 4 autonomous driving, and robotic automation. For more information on the company, including its products, SDK, and career opportunities, visit Helm.ai or find Helm.ai on LinkedIn.