Advanced driver assistance systems (ADAS) continue to evolve, yet perception under real-world conditions remains a major challenge. In fog or glare, on wet roads, or when pedestrians are hard to see, conventional sensors such as RGB cameras and lidar detect color and shape, but not what objects are made of.
Hyperspectral imaging adds a new dimension to perception.
It measures the spectral signature of every pixel, enabling AI systems to classify objects based on material composition — not just appearance. This supports more robust, explainable perception in critical scenarios.
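As a concrete illustration of what per-pixel spectral classification can look like, the sketch below compares each pixel's spectrum against a small set of reference material signatures using the spectral angle mapper, a standard hyperspectral technique. The band count, the reference spectra, and all names here are illustrative assumptions, not part of HSI-Drive or any specific product.

```python
# Minimal sketch: per-pixel material classification with the spectral
# angle mapper (SAM). References and band count are placeholders.
import numpy as np

def classify_by_spectral_angle(cube: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Assign each pixel to the reference spectrum with the smallest spectral angle.

    cube:       (H, W, B) hyperspectral image with B spectral bands
    references: (C, B) one mean spectrum per material class
    returns:    (H, W) integer class map
    """
    pixels = cube.reshape(-1, cube.shape[-1])                            # (H*W, B)
    # Normalize so the angle depends on spectral shape, not brightness.
    pixels = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
    refs = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
    cos_angles = np.clip(pixels @ refs.T, -1.0, 1.0)                     # (H*W, C)
    labels = np.argmin(np.arccos(cos_angles), axis=1)
    return labels.reshape(cube.shape[:2])

# Toy usage: a 4x4 scene with 25 bands and 3 hypothetical materials.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 25)).astype(np.float32)
references = rng.random((3, 25)).astype(np.float32)
print(classify_by_spectral_angle(cube, references))
```

Because the spectra are normalized before comparison, the classification responds to a material's spectral shape rather than to scene brightness, which is one reason material-based labeling can stay stable under changing illumination.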
HSI-Drive is a scientifically validated reference dataset for material-based scene interpretation in driver assistance systems. It was developed by the Digital Electronics Design Group (GDED) at the University of the Basque Country (UPV/EHU).

HSI-Drive is not labeled by appearance: each pixel is annotated according to material composition. Classes such as asphalt, water, metal, glass, and vegetation were defined by their unique spectral signatures, captured using a 25-band snapshot NIR sensor with a Fabry-Pérot filter mosaic. Manual annotation was carried out conservatively, covering only regions with clear spectral reflectance. This ensures high spectral purity within each class and makes the dataset an ideal foundation for machine learning models that go beyond visual appearance.
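For readers working with similar snapshot mosaic sensors, the following sketch shows how a raw frame from a 5×5 Fabry-Pérot filter mosaic can be rearranged into a 25-band cube. The assumed band layout, one band per position within each 5×5 macro-pixel, is an illustration only; the actual filter arrangement depends on the sensor and is specified by the vendor.

```python
# Minimal sketch of demosaicing a 5x5 filter mosaic into a 25-band cube.
# The layout (band index given by row % 5 and col % 5) is an assumption
# for illustration; consult the sensor documentation for the real one.
import numpy as np

def demosaic_5x5(raw: np.ndarray) -> np.ndarray:
    """Rearrange a mosaic raw frame (H, W) into a cube (H//5, W//5, 25)."""
    h, w = raw.shape
    assert h % 5 == 0 and w % 5 == 0, "frame must tile into 5x5 macro-pixels"
    # Each 5x5 macro-pixel holds one sample of each of the 25 bands.
    cube = raw.reshape(h // 5, 5, w // 5, 5).transpose(0, 2, 1, 3)
    return cube.reshape(h // 5, w // 5, 25)

raw = np.arange(10 * 10, dtype=np.uint16).reshape(10, 10)  # toy 10x10 frame
print(demosaic_5x5(raw).shape)  # (2, 2, 25)
```

The trade-off of snapshot mosaic sensors is visible in the shapes: spatial resolution drops by the mosaic factor in each dimension in exchange for capturing all 25 bands in a single exposure.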
In all tested conditions, hyperspectral imaging enabled material-based scene segmentation that appearance alone could not provide. The result is a high-quality reference dataset with pixel-level material labeling, enabling the training and validation of AI models that require trustworthy ground truth. This connection between expert annotation and algorithmic interpretation is essential for advancing explainable, robust perception AI in real-world conditions.
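To make the link between pixel-level labels and model training concrete, here is a minimal sketch that gathers annotated pixels from a labeled cube and fits a per-pixel classifier. The label convention (0 = unlabeled), the random-forest choice, and the toy data are assumptions for illustration and do not reflect the models evaluated by the dataset authors.

```python
# Minimal sketch: training a per-pixel material classifier from a labeled
# cube. Label convention and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def collect_labeled_pixels(cube: np.ndarray, labels: np.ndarray):
    """Flatten a cube (H, W, B) and label map (H, W), keeping only
    annotated pixels (conservative annotation: 0 means unlabeled)."""
    X = cube.reshape(-1, cube.shape[-1])
    y = labels.reshape(-1)
    mask = y > 0
    return X[mask], y[mask]

# Toy data standing in for one annotated frame.
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 25)).astype(np.float32)
labels = rng.integers(0, 4, size=(8, 8))   # 0 = unlabeled, 1..3 = materials

X, y = collect_labeled_pixels(cube, labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(cube.reshape(-1, 25)).reshape(8, 8)
print(pred.shape)  # per-pixel class map for the whole frame
```

Restricting training to clearly annotated pixels mirrors the dataset's conservative labeling strategy: the classifier learns from spectrally pure examples, while predictions can still be produced for every pixel in the frame.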
Watch how hyperspectral data enables material-based scene segmentation, even in low light and on wet roads.
Photonfocus brings spectral imaging from research into application — with proven technology, integration-ready platforms, and deep machine vision expertise.