How Maps Are Making Us Smarter
Written by Sanchit Agarwal   
Saturday, 08 July 2017

A 2.684Mb PDF of this article as it appeared in the magazine—complete with images—is available by clicking HERE

The evolution of aerial maps is, in many respects, a story of the development of technology. It's also a narrative about human ambition: maps have always been important aids in making key decisions about commercial, military, and imperial goals. Once rare artifacts, they are now increasingly vital and available tools in the hands of multitudes: a true democratization of information.

While the earliest Babylonian clay tablets of 4,500 years ago crudely sketched out property boundaries, the Ancient Greeks applied mathematical principles to maps in order to help expand trade routes in the Mediterranean. During the Age of Discovery, explorers of the New World and regions beyond used increasingly sophisticated nautical and celestial techniques, and eventually marine chronometers to determine longitude, to create ever more detailed and accurate maps. Aerial photos from hot air balloons in the 19th century, then from airplanes and satellites in the 20th, sharpened accuracy and helped expand the uses and availability of maps.

In the last two decades, advances in software, algorithms, artificial intelligence, and deep learning have completely changed the landscape. Google and Microsoft have played an instrumental role in educating the masses, and GPS-enabled smartphones have only accelerated the trend. Maps have graduated from being mere tools for visualization, directions, and reconnaissance to full analytical platforms that empower consumers to make better, faster decisions in ways once unimaginable. Now everyone can use maps not simply for directions, but to solve a multitude of everyday riddles as well: tracking down the next taxi, deciding where to eat, monitoring traffic, finding places to meet up with friends.

Most of the mapping content consumed today is two-dimensional, generated from low- to mid-resolution nadir-view (straight-down) imagery. With the democratization of mapping products and services and the general trend toward multidimensional experiences (3-D, 4-D, and beyond), there is an implicit need to increase the number of dimensions and perspectives in mapping content and services as well. Oblique imagery (perspective, or bird's-eye, views) and 3-D maps are a natural evolution. Increasing the dimensions of mapping content will dramatically increase the power of these analytical platforms, resulting in more effective decision making.

Nearmap's upcoming introduction of 3-D and oblique imagery will provide a true bird's-eye perspective and unleash entirely new uses for such imagery. Given the consistency and currency of the maps Nearmap publishes, the volatile nature of the rapidly changing world around us, and the incredible ability of multi-perspective, high-resolution imagery to capture those changes, time becomes a very compelling dimension for many use cases. Given these temporal aspects, it is fair to characterize 3-D map products as 4-D. This important fourth dimension allows users to perform change analysis and, in so doing, transforms the decision-making process.

Another way to look at the evolution of maps is how people use them in everyday life. Let's take three examples of use cases from today, tomorrow, and the more distant future.

The insurance market
A policyholder files a claim after some major storm damage to his roof. Normally, it costs the insurance company $200-$300 to send an inspector or claims adjuster to the home, to say nothing of the risks in having someone climb up a ladder and perform an inspection. Then comes a costly investigation to make sure the claim is legitimate and the homeowner isn't exaggerating the extent of storm-caused destruction.

With high-resolution aerial photography, updated more frequently than most satellite images, the entire process changes radically. If the insurance inspector has access to the latest images, including 3-D map content accessible in the cloud, he can see the damage without leaving his office. With proper photogrammetric tools, he can measure the affected area at his desk. And with historical photos taken, say, a couple of months apart, he can compare before and after shots of the roof. If the damage is real, he can come up with an estimate of repair costs on the spot; if photos suggest a fake claim, he can take the necessary actions.
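The before-and-after comparison described above can be sketched as a simple image-differencing exercise. This is only an illustrative toy, not Nearmap's method: the function name, change threshold, and ground sampling distance (GSD, the ground width one pixel covers) are all assumptions.

```python
import numpy as np

# Illustrative sketch: estimate storm-damaged roof area by differencing
# two co-registered grayscale aerial captures of the same roof.
# The threshold and GSD values below are hypothetical.

def damaged_area_m2(before, after, gsd_m=0.075, threshold=40):
    """Return the area (m^2) whose pixel intensity changed by more
    than `threshold` between the two captures."""
    diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
    changed_pixels = int(np.count_nonzero(diff > threshold))
    return changed_pixels * gsd_m * gsd_m  # each pixel covers gsd^2 m^2

# Toy example: a 100x100 roof patch where a 20x20 region changed.
before = np.full((100, 100), 120, dtype=np.uint8)
after = before.copy()
after[10:30, 10:30] = 30  # simulated missing shingles
print(damaged_area_m2(before, after))  # -> 2.25
```

In practice a real workflow would orthorectify and align the two captures first; the estimate is only as good as that registration.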

Drone deliveries and self-driving cars
We've all read about these technologies and how they're being tested on a limited basis in select cities. When Amazon sends you your most recent purchase hours after your order is submitted, it has to solve a few practical problems before your package shows up. For a drone to make a safe landing in your back yard or at your front door, it can't rely on two-dimensional map content alone. There are trees, bushes, and neighboring homes to consider; there might be construction equipment around. Only the most up-to-date 3-D elevation data, along with real-time information, will allow a completely safe delivery.
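One way to picture what "up-to-date 3-D elevation data" buys the drone: scan a height grid (a digital surface model) for cells whose surroundings are flat and obstacle-free. This is a hypothetical sketch; the grid sizes, clearance window, and tolerance are all invented for illustration.

```python
import numpy as np

# Illustrative sketch: pick candidate landing cells from a digital
# surface model (DSM). A cell qualifies if every neighbor within a
# clearance window sits at nearly the same height (flat, no obstacles).

def find_landing_cells(dsm, clearance=1, max_relief_m=0.15):
    """Return (row, col) cells whose surrounding window is flat enough."""
    rows, cols = dsm.shape
    safe = []
    for r in range(clearance, rows - clearance):
        for c in range(clearance, cols - clearance):
            window = dsm[r - clearance:r + clearance + 1,
                         c - clearance:c + clearance + 1]
            if window.max() - window.min() <= max_relief_m:
                safe.append((r, c))
    return safe

# Toy back yard: flat lawn at 0 m with one 2 m tree at cell (1, 1).
yard = np.zeros((6, 6))
yard[1, 1] = 2.0
cells = find_landing_cells(yard)  # every flat cell clear of the tree
```

A real system would fuse this static model with live sensing, since the elevation data can never capture the dog that just wandered into the yard.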

Autonomous cars can't operate on just 2-D content, either. They must constantly generate and consume high-fidelity mapping content in order to navigate safely in a complex, urban environment. Lidar units, which use lasers to assemble a 360-degree view of the environment, along with radar sensors, collect data that are processed by machine-learning algorithms that help guide self-driving cars and avoid accidents. But high-resolution aerial imagery can also help by providing a recent map of everything that is stationary: curbs, traffic lights, buildings, construction sites, and the like. With such technology on board, the cars' sensors need track only objects in motion: people crossing the street, other cars in traffic, and so on. That should help reduce the cost of sensors and of the cars.
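The "track only objects in motion" idea above amounts to subtracting a prior static map from live detections. The sketch below is a deliberately simplified illustration, not any vendor's pipeline; the map format and detection records are assumptions.

```python
# Illustrative sketch: filter live sensor detections against a prior
# static map built from aerial imagery, so only moving objects remain
# for the car's trackers. Coordinates here are toy grid cells.

# Static map: (x, y) cells of known fixed objects, e.g. a curb segment
# and a traffic signal, derived offline from recent aerial imagery.
static_map = {(12, 40), (13, 40), (55, 8), (55, 9)}

def moving_objects(detections, static_map):
    """Keep only detections that do not match a known static location."""
    return [d for d in detections if (d["x"], d["y"]) not in static_map]

frame = [
    {"id": "curb_hit", "x": 12, "y": 40},   # matches static map -> ignored
    {"id": "walker", "x": 30, "y": 22},     # unmatched -> track it
]
movers = moving_objects(frame, static_map)
print([d["id"] for d in movers])  # -> ['walker']
```

A real system would match with spatial tolerance and confidence scores rather than exact cell equality, but the division of labor is the same: the aerial map handles the permanent world, the sensors handle the moving one.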

The smart campus of tomorrow
Google Maps already gives us near real-time, if graphically crude, updates of traffic snarls. Thanks to deep learning and neural networks, multi-perspective and multi-dimensional maps will one day capture not just 3-D imagery, but also accommodate time, the fourth dimension.

I've chosen the hypothetical example of a university campus by way of illustration. Imagine being able to turn the process of registering for classes, now so fraught with uncertainty and anxiety, into a fun and predictable exercise.

It starts with a geo-scheduled map, updated daily, built just for you and based on your class schedule, perhaps shown as pins on a three-dimensional map of the campus. As the hours roll by, you're sent a live rendering of each class on a 3-D model of the building where it's taught. Based on that schedule and live information, the system picks out the bus you need to take from your dorm, or shows you the shortest walking route between your first class and the next, using real-time information about the traffic density along each path. Want to visit a professor during office hours? Type in her name and you get the schedule, as well as a detailed map showing just where her office is. What about your downtime, when you're done with class? Relying on your interests and class schedule, the platform sends you targeted information about activities you might enjoy: volleyball, documentary movies, good cheap eateries, or a party that evening. And, of course, you get security alerts and suggested action plans in potential 911 situations.
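The crowd-aware routing imagined above is, at heart, a shortest-path search where each segment's length is scaled by live density. Here is a minimal sketch using Dijkstra's algorithm; the campus graph, distances, and density multipliers are all invented for illustration.

```python
import heapq

# Illustrative sketch: shortest walking route between two campus spots,
# where each segment's base length (meters) is multiplied by a live
# crowd-density factor. Graph and densities are hypothetical.

def shortest_route(graph, density, start, goal):
    """Dijkstra over edges weighted by length * density multiplier."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length in graph.get(node, []):
            if nxt not in seen:
                mult = density.get((node, nxt), 1.0)
                heapq.heappush(queue, (cost + length * mult, nxt, path + [nxt]))
    return float("inf"), []

campus = {
    "Dorm": [("Quad", 200), ("Library", 350)],
    "Quad": [("Science Hall", 150)],
    "Library": [("Science Hall", 100)],
}
# Between classes the quad is packed, tripling its effective cost.
crowds = {("Dorm", "Quad"): 3.0}
cost, route = shortest_route(campus, crowds, "Dorm", "Science Hall")
print(cost, route)  # -> 450.0 ['Dorm', 'Library', 'Science Hall']
```

With empty crowds the route would cut across the quad (200 + 150 = 350 m); the live density feed is what flips the recommendation to the longer but faster path past the library.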

Exciting new applications are coming every day. And they're all helping us make better decisions about our lives. From their earliest incarnations, maps have been helpful, if limited, guides. Now that they're packed with so much information, they're as indispensable as drawing breath.

Sanchit Agarwal is Vice President Field Operations for Nearmap USA, which provides high-resolution aerial imagery to businesses and governments.
