Mining Domain Knowledge from Unstructured Multi-Modal Data for Smart Bridge Infrastructure Management

Author:
Li, Tianshu, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Harris, Devin, EN-Eng Sys and Environment, University of Virginia

The management of the bridge system comprises a pipeline of condition data collection (i.e., inspections), condition assessment, and deterioration prediction. Visual inspections are performed periodically, submitting ratings of bridge conditions to the National Bridge Inventory (NBI) along with element-level rating data to support deterioration prediction. While this experience-driven inspection and assessment practice has been in place for decades, today's U.S. bridge infrastructure system faces critical preservation challenges: a sheer volume of aging and deteriorating bridges coupled with limited funding and resources. Improving the efficiency of bridge infrastructure management in the face of these challenges calls for integrating automation concepts throughout the process of inspection, condition assessment, and deterioration prediction.

The current experience-driven condition rating process requires extensive effort in training and quality control to ensure the consistency of the assigned ratings. Additionally, bridge conditions are recorded as rating scores, which discard local condition details and forfeit the opportunity to support well-informed maintenance decisions. This lack of detail in the currently available bridge condition database limits the performance of data-driven models that extract knowledge from past experience to guide future maintenance decision-making.

Meanwhile, the bridge infrastructure system is historically rich not only in structured tabular data such as the NBI, but also in unstructured descriptive data such as inspection reports and maintenance records. The inspection reports generated through current infrastructure management practices serve only as records of activities, leaving condition details and domain expertise buried in the reports and unexploited for further analysis. To that end, this study identifies visual and textual data from bridge inspection reports as an untapped source of bridge condition information and mines domain knowledge from a large number of historical inspection reports for automatic condition rating and information extraction.

First, to improve the accuracy and consistency of bridge condition rating, a data-driven automatic condition rating model is proposed that maps natural language descriptions from bridge inspection reports to quantitative condition ratings. A highly interpretable hierarchical attention network, employing word- and sentence-level recurrent neural network encoders with an attention mechanism, was developed to fully exploit the semantics and context of the heterogeneous textual data in the inspection reports. The proposed system was developed using a large collection of inspection reports drawn from the Virginia Department of Transportation database. The developed model outperformed a variety of baseline systems on accuracy and mean error metrics, while a diagnostic investigation of error cases revealed a number of inconsistency issues in the input data. Visualization of the resulting attention patterns provided interpretable insights into how local descriptions map to global condition ratings, and can also assist rating assignment by highlighting important indicators that may have been overlooked. The application of the proposed system to improving the consistency of bridge condition assessment was demonstrated via two use cases: automated rating recommendation, which produces a rating for a given inspection narrative, and data-driven quality control, which screens inspector-assigned ratings against the corresponding narrative descriptions. The quality control application was examined against a series of assumed rating scenarios to illustrate how the proposed framework can reliably detect inconsistent ratings. The proposed framework can serve as a supportive tool for rating recommendation, quality control, and error case analysis, proactively increasing the statewide and nationwide consistency of condition rating practices.
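The core of the attention mechanism described above is softmax-weighted pooling: each sentence encoding receives a normalized weight reflecting its relevance, and the weighted average forms the report-level representation used for rating. A minimal stdlib-only sketch of that pooling step, under simplifying assumptions (the function name `attention_pool` and the toy vectors and scores are illustrative, not taken from the dissertation):

```python
import math

def attention_pool(vectors, scores):
    """Softmax-normalize relevance scores and return the weighted average
    of the input vectors together with the attention weights.

    `vectors` are fixed-size sentence encodings (plain lists here);
    `scores` are unnormalized relevance scores, one per sentence."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(vectors[0])
    pooled = [sum(w * v[i] for w, v in zip(weights, vectors))
              for i in range(dim)]
    return pooled, weights

# Three toy "sentence encodings"; the second is scored as most relevant.
sents = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
pooled, weights = attention_pool(sents, [0.1, 2.0, 0.1])
# The highest-scoring sentence dominates the pooled representation.
```

In the full model the scores would come from a learned attention layer over RNN hidden states; the normalized weights themselves are what the attention visualizations display, which is the source of the model's interpretability.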

Next, to fully exploit the multi-modal data in bridge inspection reports, a deep learning-based fusion approach is proposed for automated bridge condition rating using both the visual and the textual data in the reports. Because each inspection report contains a collection of images and a sequence of sentences documenting local bridge conditions, the proposed fusion approach constructs visual and textual representations from the images and sentences separately, then adopts a sequence encoder followed by an attention mechanism to fuse the multi-modal representations for condition rating. Although image-based defect recognition and condition assessment models have been studied extensively in the literature, results from this study show that the visual modality alone did not yield satisfactory condition rating performance. Condition rating using textual data from the inspection reports significantly outperformed the visual modality, and the proposed fusion approach introduced further improvements over the uni-modal baselines. This study further investigated the uncertainty of rating predictions under random disturbances introduced by data augmentation and a dropout training strategy. The uncertainty analysis showed that 95% of the rating predictions for the testing data vary within 0.535, and referring uncertain predictions to human investigation can further improve rating performance. The proposed model can be used to process bridge condition data collected under current visual inspection practices to improve rating consistency, and the discussion points to potential improvements in future inspection data collection that can further facilitate automated condition assessment.
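The human-referral step can be sketched as follows: sample a stochastic predictor several times per input, measure the spread of the sampled ratings, and refer inputs whose spread exceeds a threshold to an inspector instead of auto-rating them. This is a stdlib-only illustration, not the dissertation's model: `noisy_predict` is a random stand-in for a dropout-perturbed rater, and the threshold simply reuses the 0.535 figure reported above.

```python
import random

def prediction_spread(predict, inputs, n_samples=30):
    """Monte-Carlo-style uncertainty: call a stochastic predictor
    n_samples times per input and report the spread (max - min)
    of the sampled rating predictions."""
    spreads = []
    for x in inputs:
        samples = [predict(x) for _ in range(n_samples)]
        spreads.append(max(samples) - min(samples))
    return spreads

def flag_for_review(spreads, threshold):
    """Indices whose prediction spread exceeds the threshold are
    referred to human investigation instead of being auto-rated."""
    return [i for i, s in enumerate(spreads) if s > threshold]

def noisy_predict(x):
    """Stand-in stochastic rater: items above rating 5 are given a
    much noisier prediction than items below it."""
    return x + random.gauss(0, 0.5 if x > 5 else 0.05)

random.seed(0)
spreads = prediction_spread(noisy_predict, [4.0, 7.0])
review = flag_for_review(spreads, threshold=0.535)  # only the noisy item
```

The design choice is a simple triage: predictions that are stable under random disturbance are accepted automatically, while unstable ones consume the limited human-review budget.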

Lastly, an information extraction framework is developed to extract bridge conditions from inspection reports at a high level of detail. A natural language processing approach formalizes the condition extraction problem by modeling inspection narratives as combinations of words representing defects, their severity, and their location, and by formulating a sequence labeling task that accounts for the context of each word. The proposed framework employs a deep learning approach and incorporates context-aware components, including a bi-directional Long Short-Term Memory (LSTM) neural network architecture and a Conditional Random Field (CRF) classifier, to account for the context of words when assigning labels. Dependency-based word embeddings were also used to represent the raw text while incorporating both semantic and contextual information. The sequence labeling model was trained on bridge inspection reports collected from the Virginia Department of Transportation bridge inspection database and achieved an F1 score of 94.12% during testing. The proposed model also demonstrated improvements over baseline sequence labeling models, and was further used to demonstrate the capability of detecting condition changes relative to previous inspection records. Results of this study show that the proposed method can be used to extract and create a condition information database that can further assist in developing data-driven bridge management and condition forecasting models, as well as automated bridge inspection systems.
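The CRF layer scores entire tag sequences rather than individual tokens, and the best sequence is recovered with Viterbi decoding. A minimal stdlib-only sketch of that decoding step, using a toy BIO tagset for defect mentions (all scores are illustrative; a real decoder would use learned emission and transition scores, plus start/end transitions):

```python
def viterbi(emissions, transitions, tags):
    """Find the best tag sequence under per-token emission scores and
    tag-to-tag transition scores, as a CRF decoder would.

    emissions:   list of {tag: score} dicts, one per token
    transitions: {(prev_tag, tag): score}, missing pairs score 0"""
    score = {t: emissions[0][t] for t in tags}  # best score per tag so far
    back = []                                   # backpointers per position
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for t in tags:
            prev, s = max(
                ((p, score[p] + transitions.get((p, t), 0.0)) for p in tags),
                key=lambda x: x[1])
            new_score[t] = s + em[t]
            ptr[t] = prev
        score, back = new_score, back + [ptr]
    best = max(score, key=score.get)            # trace back the best path
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = ["O", "B-DEF", "I-DEF"]
trans = {("O", "I-DEF"): -10.0}  # forbid I-DEF without a preceding defect tag
ems = [{"O": 1.0, "B-DEF": 0.2, "I-DEF": 0.9},   # e.g. "minor"
       {"O": 0.1, "B-DEF": 0.8, "I-DEF": 0.6},   # e.g. "section"
       {"O": 0.1, "B-DEF": 0.3, "I-DEF": 0.7}]   # e.g. "loss"
best_path = viterbi(ems, trans, tags)  # -> ['O', 'B-DEF', 'I-DEF']
```

Note how the transition penalty overrides the high I-DEF emission on the first token: context, not the token score alone, decides the label, which is exactly the property the CRF layer contributes on top of the BiLSTM encoder.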

This dissertation is a collection of three manuscripts that describe the aforementioned research. Through the presented research outcomes, this dissertation highlights the value of unstructured bridge inspection documentation in supporting automated condition assessment and information extraction for a smart bridge infrastructure system.

PhD (Doctor of Philosophy)
Bridge Inspection Reports, Natural Language Processing, Multi-Modal Fusion, Smart Bridge Infrastructure Management
All rights reserved (no additional license for public reuse)
Issued Date: