We’re mapping and tracking the world around us with data. From the smartphone in your pocket using a combination of Bluetooth and web or cellular connectivity to alert you of Covid-19 (Coronavirus) exposure risks, to the more sophisticated use of geospatial information capture techniques used to build topographical maps of the Earth, we are creating a data map of the planet to build intelligence services, some of which will be used by specialists… and some of which may be used by everyone.
But geospatial location data has a fence around it (pun not intended), in that it predominantly delivers what it sets out to do in the first place, i.e. record, track, deliver and analyze information about a geographical space. What it doesn’t inherently do well is add time-related information, so that we can track the exact moment changes occur.
Yes, there is ‘time-stamped data’ as a format, but this is more prevalently used when collecting behavioral data (for example, user actions on a website) to record a representation of actions over time. That’s not quite the same as being able to augment geospatial data relating to the world around us with a sophisticated level of time awareness.
Why do we need time-based geospatial technology?
But why bother trying to create time-based geospatial technology in the first place – and, moreover, why not just stick a time-stamped data tag onto geospatial data and use that method?
Bruno Fernandez-Ruiz is co-founder and CTO of Nexar, a dashcam technology company headquartered in Tel Aviv. As the maker of an in-car dashcam platform for crowd-sourced vision data and software applications, the Nexar team has clear opinions on how and why we need time-based geospatial data, as opposed to geospatial data with a ‘plain old’ time stamp. Clarifying the difference, he explains that, “It’s not just a timestamp for an object that you put on the map. This is about having a time range where you verify that an object was located somewhere in the world between a start and an end time. A timestamp is not sufficient, you need lineage across the time dimension.”
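The distinction Fernandez-Ruiz draws can be sketched in code. The following is a minimal, hypothetical illustration (the record fields and names are assumptions, not Nexar’s actual data model): a plain timestamp pins an observation to one instant, while a time-ranged record asserts an object’s presence across a verified interval.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TimestampedObservation:
    # Plain time-stamped record: one instant, no duration or lineage.
    object_id: str
    lat: float
    lon: float
    seen_at: datetime

@dataclass
class TimeRangedObservation:
    # Time-ranged record: the object is verified to have been at this
    # location for the whole interval between valid_from and valid_to.
    object_id: str
    lat: float
    lon: float
    valid_from: datetime
    valid_to: datetime

    def present_at(self, moment: datetime) -> bool:
        """True if the object was verifiably present at `moment`."""
        return self.valid_from <= moment <= self.valid_to

# Illustrative usage: a work zone observed throughout a working day.
obs = TimeRangedObservation(
    "work-zone-17", 41.8781, -87.6298,
    datetime(2022, 3, 1, 8, 0, tzinfo=timezone.utc),
    datetime(2022, 3, 1, 17, 0, tzinfo=timezone.utc),
)
print(obs.present_at(datetime(2022, 3, 1, 12, 0, tzinfo=timezone.utc)))  # True
```

The interval form is what allows “lineage across the time dimension”: any query against a moment inside the range can be answered, not just queries at the single instant a snapshot was taken.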
Nexar is a company that creates a visual index of the world by using dashcams (or, in the future, when they eventually become ubiquitous, built-in car cameras). Its dashcam imagery is a novel data set that enables new applications: it can detect work zones where people are working and moving, it can be used to track changes that happen inside stores and other retail establishments, and it can be (and already has been) used to create a Google Street View-like website, which Nexar calls Nexar Streets (Chicago served as the test case). All imagery is sourced from the public domain and all Personally Identifiable Information (PII) is removed before delivery.
Dear GIS, excuse me, but you have ‘issues’
Nexar’s vision technology can locate images within eight meters, using a hybrid coarse-to-fine approach that leverages visual and GPS location cues. It is based on a deep learning model to identify a driver’s accurate location using Nexar’s massive archive of anonymized images, solving location issues in dense ‘urban canyon’ big cities.
In short, the technology looks at past images of the area and, using AI, infers where the current image is coming from.
“GIS is no longer just about layers of data of things on the map, but also evidence of those things on the map. We see mapping companies looking at images of road signs, obstructions etc. to verify maps, logistics companies looking at the curb to understand curbside behavior throughout the day etc. Yet when we think of evidence, being able to see images of the same area taken at different times becomes important,” said Fernandez-Ruiz.
Creating a location memory
When Nexar adds the element of time to location plus evidence, it adds a new layer to the geospatial dataset: memory. If we create a ‘memory’ of what the area was like before (in the morning the curb is free, there is no congestion on a given street, there are many parking spots) and what it is like now, then this new layer of data can give real-time insight into cities and roads and serve as a geospatial layer for a new breed of apps.
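One way to picture this memory layer is as a per-location time series of observed states that can be queried “as of” any moment. The sketch below is purely illustrative (the location keys, state labels, and class are assumptions, not Nexar’s implementation), but it shows how before/after comparisons fall out of keeping history rather than only the latest snapshot.

```python
from bisect import bisect_right
from datetime import datetime, timezone

class LocationMemory:
    """Toy 'memory' layer: per-location history of observed states."""

    def __init__(self):
        self._history = {}  # location -> sorted list of (time, state)

    def record(self, location: str, when: datetime, state: str):
        self._history.setdefault(location, []).append((when, state))
        self._history[location].sort()

    def state_at(self, location: str, when: datetime):
        """Most recent observed state at or before `when`, if any."""
        entries = self._history.get(location, [])
        # chr(0x10FFFF) sorts after any state label, so observations
        # made exactly at `when` are included.
        i = bisect_right(entries, (when, chr(0x10FFFF)))
        return entries[i - 1][1] if i else None

# Illustrative usage: a curbside spot observed in the morning and later.
mem = LocationMemory()
t = lambda h: datetime(2022, 3, 1, h, tzinfo=timezone.utc)
mem.record("elm-st-curb", t(7), "free")
mem.record("elm-st-curb", t(9), "occupied")
print(mem.state_at("elm-st-curb", t(8)))   # free
print(mem.state_at("elm-st-curb", t(10)))  # occupied
```

An app built on such a layer can answer not only “is the curb free now?” but “was it free at 8 a.m., and when did that change?”, which is exactly the kind of insight a snapshot-only map cannot provide.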
The underlying technology involves using dashcams to take images of the real world and then locating them accurately on a map using AI. Given that Geographic Information Systems (GIS) data has issues with accurate localization (i.e. the exact pinpointing and identification of objects at a granular and exact level, such as whether a parking spot is free, whether a tree has fallen across a road, or whether there is heavy rain on a section of highway), vision-based localization complements Global Positioning System (GPS) technology at low cost. It’s worth noting that this advantage may be temporary, as next-generation GPS-RTK (Real Time Kinematic) technology is on the way that will significantly improve Global Navigation Satellite System (GNSS) localization.
Fernandez-Ruiz reminds us that cars are getting smarter and will soon incorporate an increasing set of cameras; cars use on-board sensors much in the way people use vision to navigate. Even today, and especially with car cameras, these sensors create huge data streams that can complement single-car, line-of-sight vision.
“These data streams can augment the understanding of the road beyond line-of-sight and allow us to make better decisions. It’s just like our sense of hearing adds to what we see on the road. What if we could collect that big data, overcome the compute connectivity barrier and deliver a shared and low latency vision of the road, to the area where it matters most, using edge-compute [as in the Internet of Things],” said Fernandez-Ruiz.
He says that when his firm’s technology adds time to create a memory of an area at a given moment, this can be used to notify Autonomous Vehicles about the road around them (including everything that isn’t line-of-sight), support road safety efforts trialed by various Departments of Transportation, and even enable better driving applications for today’s drivers.
“In this [above] case, GPS doesn’t work well – since accessing this shared memory of the road around you won’t work well for such driver applications. To make the world algorithmically addressable, Nexar is driving a new IETF standard that would reflect the number of dynamic conditions and variables affecting the road. To do this, we divide the road into 1m squared hexagonal ‘road-tiles’ that limit complexity and the number of variables at play. Tiles have dimensions of computational context that GPS coordinates do not, which helps coalesce and correlate data. Each individual tile is defined by a 64-bit mask with 16 fields of enumeration for condition, speed, events, signs, density, platooning, schedules and impact,” said Fernandez-Ruiz.
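The arithmetic behind that tile encoding works out neatly: 16 fields of 4 bits each fill exactly 64 bits. The sketch below shows one plausible way to pack and unpack such a mask. The field order, the 4-bit width, and the reserved slots are all assumptions for illustration; only eight field names are given in the quote, and the actual layout in Nexar’s proposed standard is not described here.

```python
# Hypothetical 64-bit road-tile mask: 16 four-bit enumeration fields
# (16 x 4 = 64 bits). Eight field names come from the quote above; the
# remaining eight are assumed reserved in this sketch.
FIELDS = [
    "condition", "speed", "events", "signs",
    "density", "platooning", "schedules", "impact",
] + [f"reserved_{i}" for i in range(8)]

def pack(values: dict) -> int:
    """Pack up to 16 four-bit field values into one 64-bit mask."""
    mask = 0
    for i, name in enumerate(FIELDS):
        v = values.get(name, 0)
        assert 0 <= v < 16, f"{name} must fit in 4 bits"
        mask |= v << (4 * i)  # each field occupies its own 4-bit slot
    return mask

def unpack(mask: int) -> dict:
    """Recover all 16 field values from a 64-bit mask."""
    return {name: (mask >> (4 * i)) & 0xF for i, name in enumerate(FIELDS)}

# Illustrative usage with made-up enumeration values.
tile = pack({"condition": 3, "speed": 7, "density": 12})
decoded = unpack(tile)
print(decoded["speed"], decoded["density"])  # 7 12
```

The design point Fernandez-Ruiz makes holds regardless of the exact layout: a fixed-width mask per tile keeps every tile’s state the same small size, which makes millions of tiles cheap to store, transmit and compare.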
This still-nascent but increasingly proven technology is developing fast. Real-time, time-based geospatial awareness could be used extensively not just in autonomous vehicles, but in every area of life that people pass through on their daily journeys, some of which will be on foot.
Software application developers will want to see this type of functionality tightly packaged, well supported, properly secured and presented in an easy-to-integrate way if we are all going to get this kind of spacetime software in the palm of our hands. We’ll still get traffic congestion, gridlocks and holdups for the foreseeable future, but one day we may even be able to automate ourselves out of those jams as well.