Machine-learning models used to direct the journeys of Google Maps users have been retrained to adapt to changing traffic conditions during the coronavirus outbreak.
“Since the start of the COVID-19 pandemic, traffic patterns around the globe have shifted dramatically,” said Johann Lau, product manager at Google Maps.
“We saw up to a 50 per cent decrease in worldwide traffic when lockdowns started in early 2020. Since then, parts of the world have reopened gradually, while others maintain restrictions. To account for this sudden change, we’ve recently updated our models to become more agile.”
Relying on old data describing common traffic patterns is no longer feasible, Google said. AI engineers have had to retrain the models on more current data, and ETA estimates are now based on traffic conditions around the world recorded over the last two to four weeks.
“Our ETA predictions already have a very high accuracy bar – in fact, we see that our predictions have been consistently accurate for over 97 per cent of trips,” Lau said.
To improve accuracy even further, The Chocolate Factory teamed up with Alphabet stablemate biz DeepMind to implement a system based on a graph neural network architecture. The model uses a mixture of live traffic conditions and older traffic patterns gleaned from previous data.
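Neither Google nor DeepMind has published the exact weighting scheme, but the idea of mixing live conditions with historical patterns can be sketched as follows. The function name, the blend weights, and the route data are all illustrative assumptions; a plausible heuristic is to trust live data for segments the driver will reach soon and lean on historical averages for segments further along the route:

```python
# Hypothetical sketch of blending live and historical speeds into an ETA.
# The weighting scheme and all numbers are illustrative, not Google's.

def eta_minutes(segments, live_weight_near=0.9, live_weight_far=0.2):
    """segments: list of (length_km, live_kmh, historical_kmh), in travel order.

    Live data gets high weight for the first segments (reached soon) and
    fades toward historical averages for segments further along the route.
    """
    total_hours = 0.0
    n = len(segments)
    for i, (length, live, hist) in enumerate(segments):
        frac = i / max(n - 1, 1)  # 0.0 at the start of the route, 1.0 at the end
        w = live_weight_near + (live_weight_far - live_weight_near) * frac
        speed = w * live + (1 - w) * hist
        total_hours += length / speed
    return total_hours * 60

route = [(2.0, 20.0, 40.0),   # congested right now, usually free-flowing
         (5.0, 60.0, 50.0),
         (3.0, 30.0, 45.0)]
print(round(eta_minutes(route), 1))  # → 15.1
```

The fade from live to historical data reflects the simple observation that current congestion says a lot about the next few minutes of a trip, and much less about conditions half an hour away.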
“Our model treats the local road network as a graph, where each route segment corresponds to a node and edges exist between segments that are consecutive on the same road or connected through an intersection,” DeepMind explained. “In a graph neural network, a message passing algorithm is executed where the messages and their effect on edge and node states are learned by neural networks.”
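A single message-passing round of the kind DeepMind describes can be sketched in a few lines of NumPy. Everything here is an illustrative toy, not the production model: the graph is tiny, the state dimension is arbitrary, and the weight matrices would normally be learned rather than random:

```python
import numpy as np

# Toy sketch of one message-passing step over a road graph.
# Each road segment is a node; a directed edge links consecutive
# segments or segments joined at an intersection.

rng = np.random.default_rng(0)

# Five road segments, each with a 4-dim state (speed, density, etc.)
node_states = rng.normal(size=(5, 4))

# Directed edges: 0 -> 1 -> 2 along one road, a junction 3 -> 1, and 2 -> 4
edges = [(0, 1), (1, 2), (3, 1), (2, 4)]

# In a trained model these weights are learned; random here for illustration
W_msg = rng.normal(size=(4, 4))   # maps a sender's state to a message
W_upd = rng.normal(size=(8, 4))   # combines own state with received messages

def message_passing_step(states, edges):
    """One round: each node aggregates messages from its in-neighbours."""
    messages = np.zeros_like(states)
    for src, dst in edges:
        messages[dst] += np.tanh(states[src] @ W_msg)
    # Update each node from its own state plus what it received
    combined = np.concatenate([states, messages], axis=1)
    return np.tanh(combined @ W_upd)

new_states = message_passing_step(node_states, edges)
print(new_states.shape)  # (5, 4)
```

Stacking several such rounds lets information about a jam propagate along the graph, so a segment's predicted travel time can reflect congestion several intersections away.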
The structure of road networks varies from place to place: quieter, rural areas tend to produce smaller graphs with fewer nodes, whereas busy city intersections yield larger graphs with more nodes. Training the neural networks on graphs of such varying shape and size is therefore tricky.
“A big challenge for a production machine learning system that is often overlooked in the academic setting involves the large variability that can exist across multiple training runs of the same model,” said DeepMind.
The trick to stabilizing the model was a reinforcement-style method that lets it work out the best learning rate as it trains. If live traffic conditions, or the recent weeks of data it trains on, change dramatically over time, the model does better with a higher learning rate: it can adapt faster, say, when a city starts lifting lockdown measures and the roads get busier. A lower learning rate suits areas that don't experience as much change.
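The adaptation idea can be illustrated with a deliberately simple heuristic: raise the learning rate while the loss keeps falling, and cut it sharply when the loss jumps (as it would when traffic patterns shift). This is only a sketch in the spirit of the article, not DeepMind's actual method, and the function names and constants are assumptions:

```python
# Illustrative sketch of adapting the learning rate during training.
# A simple grow/shrink heuristic, NOT DeepMind's production technique.

def train_with_adaptive_lr(grad_fn, w, lr=0.1, steps=50,
                           grow=1.1, shrink=0.5):
    """Gradient descent that raises lr while loss falls, cuts it on a rise."""
    prev_loss = float("inf")
    for _ in range(steps):
        loss, grad = grad_fn(w)
        if loss < prev_loss:
            lr *= grow      # conditions stable: learn faster
        else:
            lr *= shrink    # loss jumped (e.g. traffic regime shifted): back off
        w = w - lr * grad
        prev_loss = loss
    return w, lr

# Toy objective: fit w to minimise (w - 3)^2
def quad(w):
    return (w - 3.0) ** 2, 2.0 * (w - 3.0)

w_final, lr_final = train_with_adaptive_lr(quad, w=0.0)
print(round(w_final, 2))  # → 3.0
```

The point of the toy is the shape of the behaviour, not the numbers: when the objective is changing smoothly the step size creeps up, and a sudden shift knocks it back down, which mirrors the stability trade-off the Google team describes.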
“By automatically adapting the learning rate while training, our model not only achieved higher quality than before, it also learned to decrease the learning rate automatically. This led to more stable results, enabling us to use our novel architecture in production,” Google concluded. ®