Hyperspectral Imaging for the Next Generation of Advanced Driver Assistance Systems

 

Seeing what other sensors miss: Material-based perception for ADAS


Advanced driver assistance systems (ADAS) continue to evolve, yet perception under real-world conditions remains a major challenge. Fog, glare, wet roads, and hard-to-see pedestrians all expose the same limitation: conventional sensors such as RGB cameras and lidar detect color and shape, but not what objects are made of.

Hyperspectral imaging adds a new dimension to perception.
It measures the spectral signature of every pixel, enabling AI systems to classify objects based on material composition — not just appearance. This supports more robust, explainable perception in critical scenarios.
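
As a first intuition, the sketch below classifies every pixel of a hyperspectral cube by comparing its spectrum against a small library of reference signatures using the spectral angle mapper, a standard baseline for this kind of material matching. The cube and the reference spectra are random stand-ins, not real HSI-Drive data.

```python
# Minimal sketch: per-pixel material classification with the spectral angle
# mapper (SAM). Assumes a reflectance cube of shape (H, W, B) and one
# reference spectrum per material class (all values here are mock data).
import numpy as np

def spectral_angle_map(cube: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Return per-pixel class indices by smallest spectral angle.

    cube:       (H, W, B) reflectance cube
    references: (C, B) one reference spectrum per material class
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)                                    # (H*W, B)
    # Normalize so the dot product equals the cosine of the spectral angle.
    px = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
    rf = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
    cos = np.clip(px @ rf.T, -1.0, 1.0)                             # (H*W, C)
    labels = np.argmin(np.arccos(cos), axis=1)                      # smallest angle wins
    return labels.reshape(h, w)

# Toy usage with random data standing in for a 25-band NIR cube.
cube = np.random.rand(128, 256, 25).astype(np.float32)
refs = np.random.rand(4, 25).astype(np.float32)   # e.g. asphalt, water, metal, vegetation
label_map = spectral_angle_map(cube, refs)
print(label_map.shape)                            # (128, 256)
```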

 

What is HSI-Drive?

 

HSI-Drive is a scientifically validated reference dataset for material-based scene interpretation in driver assistance systems. It was developed by the Digital Electronics Design Group (GDED) at the University of the Basque Country (UPV/EHU).

The dataset includes hundreds of annotated hyperspectral images and video sequences from real driving scenarios in Spain. All recordings were captured using a Photonfocus hyperspectral camera with a 25-band imec NIR snapshot sensor.

Photonfocus provided the imaging platform, while annotation and dataset curation were carried out independently by UPV/EHU researchers.

 

 

What the data reveals – Turning spectra into meaning


HSI-Drive is not labeled by appearance — each pixel is annotated based on material composition. Classes like asphalt, water, metal, glass, or vegetation were defined by their unique spectral signatures, captured using a 25-band snapshot NIR sensor with a Fabry-Pérot filter mosaic. Manual annotation was carried out conservatively, focusing only on regions with clear spectral reflectance. This ensures high spectral purity within each class and makes the dataset an ideal foundation for machine learning models that go beyond visual appearance.
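
To illustrate what such annotations make possible, here is a minimal sketch of how a per-class mean spectral signature could be extracted from an annotated cube. The array shapes follow the 25-band format described above; the data and the label convention (0 = unlabeled) are illustrative assumptions, not the dataset's actual file format.

```python
# Sketch: deriving the mean spectral signature of each annotated material
# class. Assumes `cube` is an (H, W, 25) reflectance cube and `mask` an
# (H, W) integer label image where 0 marks unlabeled pixels.
import numpy as np

def class_signatures(cube: np.ndarray, mask: np.ndarray) -> dict:
    signatures = {}
    for cls in np.unique(mask):
        if cls == 0:                                    # skip unlabeled background
            continue
        signatures[int(cls)] = cube[mask == cls].mean(axis=0)   # (25,) mean spectrum
    return signatures

# Toy usage with mock data.
cube = np.random.rand(64, 64, 25)
mask = np.random.randint(0, 5, size=(64, 64))
for cls, sig in class_signatures(cube, mask).items():
    print(cls, sig[:3])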


In all tested conditions, hyperspectral imaging enabled:

 

  • Reliable discrimination of water, asphalt, and shadows
  • Separation of reflective materials like glass, metal, and painted surfaces
  • Identification of vegetation, sky, concrete, and road markings
  • Material-based classification even in low-contrast or adverse scenarios


The result: a high-quality reference dataset with pixel-level material labeling — enabling the training and validation of AI models that require trustworthy ground truth. This connection between expert annotation and algorithmic interpretation is essential for advancing explainable, robust perception AI in real-world conditions.
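
As a sketch of how such pixel-level ground truth feeds model training, the following uses scikit-learn's RandomForestClassifier as a stand-in for any per-pixel model. The arrays are mock data; loading real HSI-Drive files would replace the two random arrays.

```python
# Sketch: training and validating a per-pixel material classifier on
# labeled hyperspectral data (25 spectral features per pixel).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Flatten cube + label mask into per-pixel samples; keep annotated pixels only.
cube = np.random.rand(64, 64, 25)
mask = np.random.randint(0, 5, size=(64, 64))       # 0 = unlabeled
X = cube.reshape(-1, 25)[mask.ravel() > 0]
y = mask.ravel()[mask.ravel() > 0]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```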

Watch how hyperspectral data enables material-based scene segmentation — even in low light and wet road conditions:

  • Low lighting and wet weather
  • Rainy weather with droplets on the camera lens

The technology behind it: MV4 camera and NIR snapshot sensor


The imaging system used to create the HSI-Drive dataset is based on a combination of proven optical engineering and spectral innovation. The MV4-D2048x1088-C01-HS02-GT is the industrial successor to the camera platform used in HSI-Drive, offering the same spectral core in a more robust and integration-ready design.

It features a 25-band imec NIR snapshot sensor with a 5×5 Fabry-Pérot filter mosaic, capturing spectral data in the 665–975 nm range in a single exposure — no scanning, no moving parts.
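
The mosaic layout implies a simple unpacking step: each 5×5 tile carries one sample of every band, so a band image is obtained by subsampling the raw frame at the matching row/column offset. The sketch below shows this in NumPy; the linear wavelength spacing is only an approximation, as the true band centers come from the sensor's calibration data.

```python
# Sketch: unpacking a 5x5 Fabry-Perot mosaic frame into a 25-band cube.
# Band (r, c) lives at rows r, r+5, r+10, ... and columns c, c+5, c+10, ...
import numpy as np

def demosaic_5x5(raw: np.ndarray) -> np.ndarray:
    """raw: (H, W) mosaic frame, H and W divisible by 5 -> (H//5, W//5, 25)."""
    bands = [raw[r::5, c::5] for r in range(5) for c in range(5)]
    return np.stack(bands, axis=-1)

# Mock full-resolution frame; crop to a multiple of the 5x5 pattern first.
raw = np.random.randint(0, 4095, size=(1088, 2048), dtype=np.uint16)
raw = raw[: raw.shape[0] // 5 * 5, : raw.shape[1] // 5 * 5]
cube = demosaic_5x5(raw)

# Approximate band centers in nm; real values come from sensor calibration.
wavelengths = np.linspace(665, 975, 25)
print(cube.shape, wavelengths[:3])
```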

Key features of the MV4:

  • Snapshot capture of all spectral bands
  • No moving parts – robust and vibration-resistant
  • Industrial housing for mobile or lab use
  • GigE Vision & GenICam compliant (see the acquisition sketch below)
  • SDK for spectral calibration and data export
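
Because the camera is GenICam compliant, it can be driven from any GenTL consumer. The sketch below uses the open-source harvesters package (API as of harvesters 1.4); the .cti producer path and the device index are placeholders, and the Photonfocus SDK offers an alternative route.

```python
# Sketch: grabbing one frame from a GigE Vision / GenICam camera with the
# open-source `harvesters` package. Paths and indices are illustrative.
import numpy as np
from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/gentl_producer.cti')   # GenTL producer (placeholder path)
h.update()

ia = h.create(0)                            # first camera found
ia.start()
with ia.fetch() as buf:                     # buffer is valid only inside this scope
    comp = buf.payload.components[0]
    raw = np.array(comp.data, copy=True).reshape(comp.height, comp.width)
print(raw.shape)                            # full-resolution mosaic frame

ia.stop()
ia.destroy()
h.reset()
```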

 

What it means for ADAS developers


Urban traffic scenarios — with pedestrians, varied road materials, and changing visibility — push perception systems to their limits. In such scenes, conventional sensors may struggle to distinguish between wet asphalt, road markings, and moving people under uneven lighting.

Hyperspectral imaging adds a new layer of scene understanding by identifying what objects are made of — not just how they appear. This enables pixel-level recognition of relevant elements such as pedestrians, vehicles, road signs, and lane markings, even in visually ambiguous conditions.

Whether you're training AI models, validating perception stacks, or analyzing edge cases, spectral data delivers tangible benefits:
 

  • Material-based scene interpretation
  • More robust object detection in low-contrast conditions
  • Better segmentation and explainability for perception AI
  • Support for sensor fusion, simulation, and annotation pipelines
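
As a final sketch, spectral cubes drop into standard segmentation pipelines simply by widening the input layer: the toy fully convolutional network below consumes all 25 bands directly. The architecture and the class count are illustrative assumptions, not a reference model for HSI-Drive.

```python
# Sketch: a deliberately small fully convolutional network taking a 25-band
# cube as input and emitting per-pixel class logits.
import torch
import torch.nn as nn

class SpectralSegNet(nn.Module):
    def __init__(self, bands: int = 25, classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, classes, kernel_size=1),   # per-pixel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (N, 25, H, W)
        return self.net(x)

model = SpectralSegNet()
cube = torch.rand(1, 25, 216, 409)          # one demosaiced frame, channels first
logits = model(cube)
print(logits.shape)                         # (1, 10, 216, 409)
```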
 
 

From research to real-world deployment


Photonfocus brings spectral imaging from research into application — with proven technology, integration-ready platforms, and deep machine vision expertise.

 
  • Speak with our modular camera experts
  • Request technical documentation
  • Get integration support
     


See more – because vehicles need to understand more than they see.