
Lightwheel-Platform Enterprise

Make Simulation Successful

Lightwheel Platform Enterprise delivers an end-to-end sim-to-real pipeline
and a comprehensive data factory for building Physical AI models.

LW-BenchHub Training Framework

LW-BenchHub Training Framework is designed for robotics research and development teams looking to accelerate their
work with comprehensive simulation capabilities built on IsaacLab, with upcoming Newton solver integration.

Who It’s For

Smaller Engineering Teams

Emerging robotics teams transitioning into simulation-based development who need a complete, ready-to-use framework without building infrastructure from scratch
Research labs and academic groups seeking to adopt IsaacSim for manipulation, locomotion, or teleoperation research with minimal setup overhead
Startups and small teams that want to leverage advanced simulation features (GPU parallelization, photorealism, diverse benchmarks) without dedicating resources to custom tooling

Larger Engineering Teams

Established robotics teams looking to enhance their existing workflows with specialized features like sim-to-real tools, domain randomization, or multi-robot teleoperation
Organizations standardizing on IsaacSim that need a proven framework with extensive RL infrastructure, baseline algorithms, and benchmark tasks

Feature Comparison


See What's Next

LW-BenchHub is built on IsaacLab and will support Newton solver capabilities as they become available.
Full Newton-IsaacLab integration is currently in development by NVIDIA, with an initial update expected in December 2024 and comprehensive integration planned for March 2025.
Once these updates are released, LW-BenchHub will incorporate Newton's advanced physics solving capabilities.
LW-BenchHub lowers the barrier to entry for sophisticated robot learning while providing the depth and flexibility that experienced teams require.
Whether you're prototyping your first reinforcement learning policy or scaling to thousands of parallel simulations, LW-BenchHub provides the comprehensive toolkit to get started quickly and grow with your needs.

End-to-End Data Collection Pipeline

End-to-end data collection and generation capabilities

Simulation environments

Ego-centric data collection

Data generation and augmentation


Simulation environments

Collect data from Isaac Sim and MuJoCo

What you need

Enhanced dataset diversity and quality without manual collection overhead.

Our solution

Collect comprehensive, physics-accurate trajectories from Isaac Sim and MuJoCo that capture the full state of robot interactions:

Rich sensory data

RGB/depth visuals, proprioceptive feedback, and tactile information

Physical parameters

Kinematic states (positions, velocities, accelerations) and contact dynamics (forces, torques, collision geometry)

Multiple data collection modalities

Teleoperation in simulation: human-guided demonstrations with full physics fidelity
Reinforcement learning in simulation: autonomous policy exploration and optimization (see the recording sketch below)

Ego-centric data collection

Gather data using physical robots and objects

What you need

Authentic real-world data to complement simulation and validate your models.

Our solution

Gather data using physical robots and objects to capture real-world dynamics, edge cases, and environmental variations.

Bridge the sim-to-real gap with teleoperation demonstrations and sensor recordings from actual deployment scenarios.

Seamless Integration

Our system is hardware-agnostic and supports a wide range of commercial and specialized devices:

Next-generation research glasses: Meta Aria Gen 2 (featuring state-of-the-art sensor suites with RGB, SLAM cameras, eye tracking, spatial audio, PPG sensors, contact microphones, and on-device machine perception)
Consumer AR/VR headsets: Meta Quest, Apple Vision Pro, Pico, and other XR platforms
Smart glasses: Meta Ray-Ban, and other ego-centric eyewear solutions
Action cameras: GoPro, Insta360, custom body-mounted cameras
Industrial capture systems: High-speed cameras, depth sensors, multi-camera rigs

This flexibility allows us to optimize for your specific use case—whether prioritizing portability, sensor density, image quality, field of view, or real-time on-device processing capabilities.

Our data is delivered synchronized, calibrated, and formatted for immediate integration into training workflows:

Time-aligned sensor streams with hardware-synchronized timestamps
Calibrated camera intrinsics and extrinsics
Pre-processed SLAM trajectories and 3D reconstructions
Compatible with modern imitation learning frameworks (RLDS, LeRobot, etc.), as in the loading sketch below
Scalable formats optimized for large-scale model training
Optional annotation and labeling services
Comprehensive Data Capture

Beyond visual data, our system integrates comprehensive sensor configurations tailored to your requirements:

Full-body motion capture: Hybrid marker-based and markerless tracking systems (OptiTrack Duplex Mode compatible) for precision kinematics
Proprioceptive sensors: IMUs, accelerometers, gyroscopes, magnetometers, barometers
Contact and haptic data: Force/torque sensors, pressure sensors, tactile arrays, grip force measurement
Biometric sensors: PPG for heart rate, EMG, joint angle encoders
Audio capture: Spatial microphones, contact microphones for voice isolation
Environmental sensing: GNSS positioning, depth sensing
Custom sensor integration: Modular architecture accommodates any sensor modality your application demands


We deploy both tracking paradigms to provide comprehensive spatial understanding:

Inside-out: Person-worn cameras and headsets with real-time SLAM, hand tracking, and eye tracking capturing environmental interactions and manipulation dynamics
Outside-in: External optical motion capture arrays for sub-millimeter 6-DoF pose estimation and third-person scene reconstruction

This dual approach delivers the visual grounding, contact dynamics, and kinematic ground truth that modern VLA architectures and world models require. Our hybrid tracking solutions enable simultaneous markerless and marker-based capture, streaming both motion data and video in real time.

Proven Value

Ego-centric data naturally aligns with how robots perceive and interact with their environment. Our collection methodology captures:

Human demonstrations from the operator's viewpoint, directly transferable to robot embodiments
Contact-rich manipulation dynamics often missed by external observation alone
Dense visuomotor correspondences critical for action prediction and policy learning
Temporal dynamics of task execution with precisely synchronized multi-modal streams
Natural language context and environmental audio for grounded language understanding
Fine-grained hand-object interactions with combined eye tracking and full-body kinematics

Our collection infrastructure leverages the same technology powering breakthrough research in egocentric AI and robotics, including datasets like Ego-Exo4D that have become foundational tools across computer vision and robotics communities.

Data generation and augmentation

Leverage advanced generative models and world models (Cosmos, MimicGen, and others) to enhance
data quality, quantity, and physical accuracy—customizable based on your requirements

What you need

Enhanced dataset diversity and quality without manual collection overhead.

Our solution

Leverage advanced generative models and world models (Cosmos, MimicGen, and others) to enhance data quality, quantity, and physical accuracy—customizable based on your requirements.

Augment existing datasets with procedurally generated variations, synthetic scenarios, and domain randomization to improve model robustness and generalization.

SimReady Asset Library

Production-ready assets validated for robotics simulation

Objects

Environments

Supported Tasks

Rigid objects

Everyday items with validated geometry, mass, and inertial properties

Articulated objects

Deformable objects

Every asset is validated for physical accuracy (see the USD sketch after this list):

Precise geometry matching real-world dimensions
Validated mass and inertia tensors
Calibrated contact dynamics (friction coefficients, restitution, contact stiffness)
Material properties tuned for realistic interaction

Optimized Robot Models

Access our curated library of the most commonly used robot platforms

Commonly used robot models and robotic hands on the market

Pre-configured models of popular platforms with validated kinematics, dynamics, and control characteristics that match real hardware behavior.

Dexterous robotic hands, fine-tuned for a minimal sim-to-real gap

Specialized Dexterous Hand models with carefully calibrated contact dynamics, friction parameters, and actuator models—validated against real-world performance to ensure learning transfers seamlessly from simulation to physical robots.

Production-tested configurations ready for immediate use

Models that have been tested and refined through real-world deployment cycles, with optimized parameters for physics accuracy, rendering fidelity, and computational efficiency.


Benchmarking & RL Policy Evaluation API

Know exactly where you stand. Whether you're publishing research, optimizing for production deployment, or
validating a new learning algorithm, our benchmarking infrastructure gives you the credibility and insights you need.

How Does Your Approach Compare to State-of-the-Art?

Training robots for specific behaviors is just the first step — you need to know how your RL policies stack up against established benchmarks and competing approaches. Are you meeting industry standards? Outperforming baseline methods? Our automated evaluation framework gives you the answers.

Compare Against State-of-the-Art Benchmarks

Measure your policy's performance against established standards across manipulation, locomotion, and multi-robot tasks. Validate your approach against published results from leading research labs and production systems.

Automated RL Policy Evaluation API

Stop manually running evaluations — our API automatically benchmarks your trained policies across standardized test suites. Receive comprehensive performance metrics, success rates, and video recordings for detailed analysis. Track improvements across training iterations and identify performance bottlenecks.

What You Get

Objective performance metrics against recognized benchmarks

Video recordings for qualitative analysis and debugging

Automated evaluation workflows that integrate into your training pipeline

Zero-shot VLA Model Evaluation

Models evaluated: GO-1, pi05_base, GR00T-N1.5-3B, OpenVLA-7B, Octo-Base-1.5

Example tasks and zero-shot success rates:

"Pull the blue cable": 0%
"Unplug the blue cable": 0%


Ready to Get Started?

The Lightwheel Enterprise Package brings together all the tools, assets, and services
you need to accelerate your robotics development from simulation to reality.

LW-BenchHub Training Framework
Data Collection
SimReady Asset Library
Optimized Robot Models
Benchmarking & Policy Evaluation