Development of Freeway Travel Time Reliability Prediction Methods

Author:
Zhang, Xiaoxiao, Civil Engineering - School of Engineering and Applied Science, University of Virginia
Advisors:
Fontaine, Michael, EN-Eng Sys and Environment, University of Virginia
Smith, Brian, EN-Eng Sys and Environment, University of Virginia
Abstract:

While average travel times are widely used in traffic operations and planning, they represent only typical conditions rather than the full picture of user experiences. Unexpected disruptions, including incidents, inclement weather, special events, and work zones, cause deviations from average travel times that are equally if not more important to roadway users, especially those who care about on-time arrival. The concept of travel time reliability was developed to quantify this variability in travel times and has become a critical aspect of evaluating transportation networks.

With the MAP-21 System Performance Measure requirements, state departments of transportation (DOTs) are responsible not only for reporting travel time reliability but also for setting targets and demonstrating progress toward those targets. However, current targets are set based on either historical trend lines or the rates of change of other congestion measures; prediction models that account for changes in the underlying impact factors do not exist. To know how to improve travel time reliability, set realistic performance targets, and accurately estimate the benefits of transportation infrastructure investments, state DOTs need to advance reliability analysis, especially in understanding the impact of key factors on reliability.

Motivated by these needs, this dissertation proposes models that estimate quantiles (the 50th, 80th, and 90th) of travel time distributions in order to quantify the factors affecting travel time reliability and to develop predictions for selected reliability measures: Level of Travel Time Reliability (LOTTR) and the 90th percentile travel time. Data for the independent variables were collected from both traditional publicly maintained sources and emerging crowdsourced sources, and their impacts on interpreting and predicting reliability were compared. Models built on crowdsourced data produced unstable results from which patterns were difficult to draw, mainly because of data quality issues such as unbalanced spatial density, duplicate reporting, and inconsistent event classification arising from individual observer bias.
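For context, the LOTTR definition below comes from the FHWA MAP-21 performance rule rather than from the abstract itself: it is the ratio of longer travel times to the normal travel time on a reporting segment, with a segment considered reliable when the ratio stays below 1.5 across all required time periods:

$$\mathrm{LOTTR} = \frac{TT_{80\text{th percentile}}}{TT_{50\text{th percentile}}}$$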

To explore suitable statistical methods for reliability modeling, linear quantile mixed models (LQMM) were built using publicly maintained data sources. The predictions were compared against one of the currently used methods, fitted trend lines. The results indicated accuracy improvements, quantified by mean absolute percent error (MAPE), of: 1) freeway segments: 82%, 74%, and 68% for the 50th, 80th, and 90th percentiles, respectively; and 2) interchange segments: 15%, 12%, and 5% for the same percentiles. To further improve prediction accuracy, generalized random forests (GRF) were applied using the same set of input variables as the LQMM models. Comparing MAPE values, GRF improved prediction accuracy over LQMM by 42% (freeway) and 65% (interchange) for the 50th percentile, and by 12% (freeway) and 29% (interchange) for the 80th percentile. However, accuracy decreased for the 90th percentile, by 20% and 9% for freeway and interchange segments, respectively. GRF models also offered interpretation of impact factors through variable importance rankings, although, unlike LQMM, these rankings cannot indicate the direction of an impact (e.g., whether the target variable increases or decreases as an independent variable increases).
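As a minimal sketch of the per-quantile modeling and MAPE scoring described above (not the dissertation's code: the data, column names, and coefficients are hypothetical, and the random effects of a full LQMM are omitted), a Python version using statsmodels' QuantReg might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical segment-level data: travel time plus candidate impact factors.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "vc_ratio": rng.uniform(0.2, 1.2, n),    # demand/capacity proxy
    "incidents": rng.poisson(0.5, n),        # incident count
    "precip": rng.exponential(0.1, n),       # precipitation depth
})
df["travel_time"] = (
    5 + 4 * df["vc_ratio"] + 1.5 * df["incidents"]
    + 2 * df["precip"] + rng.gumbel(0, 1, n)  # right-skewed noise
)

X = sm.add_constant(df[["vc_ratio", "incidents", "precip"]])
y = df["travel_time"]

def mape(actual, predicted):
    """Mean absolute percent error, the accuracy measure used above."""
    return 100 * np.mean(np.abs((actual - predicted) / actual))

# Fit one linear quantile model per target percentile (50th, 80th, 90th)
# and report in-sample MAPE for illustration.
for q in (0.50, 0.80, 0.90):
    fit = sm.QuantReg(y, X).fit(q=q)
    print(f"q={q:.2f}  MAPE={mape(y, fit.predict(X)):.1f}%")
```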

The two resulting models, LQMM and GRF, were used to evaluate a series of projects selected by the Virginia Department of Transportation (VDOT) to illustrate their application. Before-and-after studies on the selected projects showed that, in most cases, LQMM better captured changes in the 90th percentile while GRF better captured changes in LOTTR. GRF models were better at capturing reliability changes caused by non-recurrent events, such as incidents or work zones. GRF models could also reflect the impact of variables that had been removed from the LQMM models due to insignificance, such as the Safety Service Patrol (SSP) factor in one of the case studies.
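To make the interpretability point concrete, the stand-in sketch below prints a variable importance ranking for the same hypothetical data as the previous sketch. It uses scikit-learn's plain random forest rather than GRF itself (which is typically fit with the dedicated R grf package), and, as noted above, the ranking conveys magnitude only, not the direction of each impact:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Same hypothetical data-generating process as the earlier sketch.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "vc_ratio": rng.uniform(0.2, 1.2, n),
    "incidents": rng.poisson(0.5, n),
    "precip": rng.exponential(0.1, n),
})
df["travel_time"] = (5 + 4 * df["vc_ratio"] + 1.5 * df["incidents"]
                     + 2 * df["precip"] + rng.gumbel(0, 1, n))

# A plain random forest stands in for GRF here: unlike GRF it predicts
# conditional means rather than conditional quantiles, but it illustrates
# how a variable importance ranking surfaces influential factors.
features = ["vc_ratio", "incidents", "precip"]
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(df[features], df["travel_time"])

for name, score in sorted(zip(features, rf.feature_importances_),
                          key=lambda kv: -kv[1]):
    print(f"{name:>10}: {score:.3f}")  # ranking only; no sign/direction
```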

This dissertation is the first study to develop reliability models that consider a wide range of impact factors (e.g., demand/capacity, incidents, weather, work zones, geometric features, and SSP). The models were constructed and validated using data from all interstate highways in Virginia at the Traffic Message Channel (TMC) segment level, allowing them to predict reliability changes caused by multiple impact factors simultaneously and to be applied at the statewide network level. The methodology estimates the relationship between quantiles of the travel time distribution and its impact factors, and is flexible enough to support inference of various reliability measures. The estimation techniques used in this study, linear quantile mixed models and generalized random forests, had not been applied to reliability analysis in previous studies. The prediction accuracy of the proposed models improved significantly over the currently used trend line method.

Degree:
PHD (Doctor of Philosophy)
Keywords:
Travel Time Reliability, Distribution Clustering, Quantile Regression
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2021/04/28