Comparing Different Deep Learning Architectures as Vision-Based Multi-Label Classifiers for Identification of Multiple Distresses on Asphalt Pavement

Aline Calheiros Espindola, Mujib Rahman, Senthan Mathavan, Ernesto Ferreira Nobre Júnior

Research output: Contribution to journal › Article › peer-review

Abstract

Distress measurement is essential in pavement management. Image-based distress identification is increasingly becoming an integral part of traffic-speed network-level road condition surveys. Such surveys provide an aggregated summary of condition across the whole network and therefore do not require the exact location of each distress within the lane. In this context, multi-label classification (MLC), based on convolutional neural networks (CNN), is proposed as a potential solution for distress identification from a network-level right-of-way (ROW) video survey. MLC has the advantage of low computing resource consumption, because it is built on lightweight classification networks. In this work, the developed MLC models used three different CNN architectures (VGG16, ResNet-34, and ResNet-50) to detect potholes, cracks, patches, and bleeding. The best model obtained 97% average accuracy with an F1-score of 93% in distress identification despite the variability in imaging hardware. This makes it possible to generalize the classification algorithm, enabling versatile applications and its incorporation into network-level pavement management systems. The model has good potential for fast and accurate distress identification from a video survey, avoiding the need for expensive sensors such as laser scanners.
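The key difference between the multi-label setup described here and ordinary single-label classification is the output layer: instead of a softmax over mutually exclusive classes, the network emits one independent sigmoid score per distress type, so a single image can be flagged with several distresses at once. A minimal sketch of that decision rule follows; the label names and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

```python
import math

# The four distress classes considered in the paper; the ordering here is illustrative.
LABELS = ["pothole", "crack", "patch", "bleeding"]

def sigmoid(x: float) -> float:
    """Map a raw logit to an independent per-class probability."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Multi-label decision: each class is accepted independently of the
    others, so an image can carry zero, one, or several distress labels."""
    return [name for name, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

# Example: strong crack and patch evidence, weak pothole/bleeding evidence.
print(predict_labels([-2.0, 3.1, 0.7, -1.5]))  # → ['crack', 'patch']
```

In a full model, the `logits` would come from the final fully connected layer of a backbone such as VGG16 or ResNet-50, trained with a per-class binary cross-entropy loss rather than categorical cross-entropy.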
Original language: English
Journal: Transportation Research Record
Publication status: E-pub ahead of print, 28 Oct 2022

Keywords

  • Mechanical Engineering
  • Civil and Structural Engineering
