As vehicles start incorporating more sensors and interfacing with wireless networks, particularly as they move toward fully automated operation, an “ocean of data” will result, said Barclays analyst Brian Johnson (not the one from AC/DC) in a note last Wednesday. Managing all this data will pose engineering challenges, but it will also open up new opportunities.
Johnson says that an autonomous vehicle loaded with cameras, LiDAR, and other sensors could theoretically generate up to 100 gigabytes of data per second. “Assuming the entire US fleet of vehicles (260mn vehicles) has a similar data generation, it would create an ocean of data,” he says. “To put it in context, one hour’s worth of raw data across the entire US fleet would be ~5,800 exabytes in size.”
One exabyte is equal to one million terabytes. Fifty-eight hundred exabytes would likely be around 100 million times the amount of data contained in all books ever written (assuming Columbia Journalism Review had the math right in a 2008 article).
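For the curious, the unit conversions above are easy to check. The short Python sketch below runs the quoted figures (a 100 GB/s per-vehicle rate and a 260-million-vehicle fleet, both assumptions from Johnson’s note, using decimal prefixes: 1 GB = 10^9 bytes, 1 EB = 10^18 bytes). Notably, a full hour at the peak 100 GB/s rate across the whole fleet would come to roughly 94,000 exabytes, so the ~5,800 EB figure presumably reflects a much lower average data rate:

```python
# Back-of-the-envelope check of the fleet-data figures quoted above.
# Decimal (SI) units assumed: 1 GB = 1e9 bytes, 1 TB = 1e12, 1 EB = 1e18.
GB, TB, EB = 1e9, 1e12, 1e18

rate_bytes_per_s = 100 * GB      # quoted per-vehicle peak rate
fleet_size = 260e6               # quoted US vehicle fleet
seconds_per_hour = 3600

per_vehicle_hour = rate_bytes_per_s * seconds_per_hour
fleet_hour = per_vehicle_hour * fleet_size

print(f"{per_vehicle_hour / TB:,.0f} TB per vehicle per hour")  # ~360 TB
print(f"{fleet_hour / EB:,.0f} EB per fleet-hour at peak rate") # ~93,600 EB
print(f"{5800 * EB / TB:,.0f} TB in 5,800 exabytes")            # 5.8 billion TB
```

In other words, even the analyst’s ~5,800 EB estimate is a small fraction of what continuous peak-rate generation would produce.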
Johnson says that all this data will create challenges for management, storage, and analysis. At the same time, he says it will present new opportunities to provide data-driven, vehicle-related services like predicting vehicle maintenance needs, flagging warranty issues, and enabling self-driving cars.
And like everyone else, Ford is developing ways to put all that data to work. The automaker is already considered a leader in the autonomous-vehicle space, and is looking at other opportunities like crowdsourcing maps of local potholes, tracking and analyzing the routes of fleet vehicles, and even combining vehicle-generated data with information from other sources.
We don’t yet know all the myriad ways in which Ford might end up leveraging big data to improve transportation – but frankly, neither does Ford. The company’s “Smart Mobility” plan is still very much in the “throw things at the wall and see what sticks” phase. We’ll all just have to stay tuned.