Urban designers and planners frequently consult 3D imagery from Google Earth and other online mapping services to get a sense of existing site conditions. If you’ve used Google Earth or Apple Maps recently, you’ll notice that the 3D imagery is increasingly realistic. How are they doing this, and how could this technology augment the planning and design profession in the future? Current initiatives to digitize the globe mirror the great 19th-century race to complete the map of the world. Tech firms are now creating an ever more detailed virtual planet, where every corner of the globe is instantly available to anyone who wants to scrutinize it.
It used to be that 3D imagery in Google Earth was populated manually. Individuals contributed 3D models, building by building, to the Google Earth database. This workflow was incredibly time-consuming and resulted in inconsistencies in modeling techniques—a situation ripe for automation. We are now seeing accurate and detailed 3D cities, complete with terrain, buildings, and landscape, generated largely by automated processes. Using algorithms and 3D photogrammetry, these new maps stitch together aerial photographs taken at 45-degree angles from the four cardinal directions to create a three-dimensional model of entire cities.
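At the heart of photogrammetry is triangulation: the same feature is identified in two (or more) photographs taken from different positions, and its 3D location is recovered from where it appears in each image. The sketch below, using made-up camera parameters, shows the classic linear (DLT) triangulation step that real pipelines repeat for millions of matched features across thousands of aerial photos.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its
    image coordinates x1, x2 in two views with camera matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null space of A, found via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Two synthetic cameras: one at the origin, one offset 2 units sideways.
P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.0]])])
P2 = np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])

point = np.array([1.0, 2.0, 5.0, 1.0])     # ground-truth 3D point
x1 = (P1 @ point); x1 = x1[:2] / x1[2]     # where it lands in view 1
x2 = (P2 @ point); x2 = x2[:2] / x2[2]     # where it lands in view 2

recovered = triangulate(P1, P2, x1, x2)
print(recovered)  # ~ [1. 2. 5.]
```

Production systems add automatic feature matching, bundle adjustment to refine thousands of camera positions at once, and dense meshing, but the geometric core is this same two-view intersection.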
Google introduced these images on its blog in 2010 and has since been working to cover the globe with 3D photogrammetric imagery. If you’re interested in delving further into the photogrammetry process, check out how the software is being used for archaeological documentation. This digitization process allows archaeologists to study sensitive or far-flung buildings and artifacts. It’s also enabling designers to understand distant cities and project sites with virtual visits through online mapping services. Additionally, the PhotoModeler software provides a great overview of the process of creating 3D meshes from multiple photographs on its website.
The technology used to create this virtual mirror of the physical world will be coming soon to smartphones. Google’s Project Tango uses computer vision on a smartphone to create 3D imagery. Apple’s iPhone 7 is rumored to have dual cameras that would aid in computer vision. Smartphones and camera-enabled drones will enable us to continually record our surroundings in three dimensions. The Pix4D software used for 3D mapping and surveying is already available for drones.
As algorithms are perfected and imagery resolution increases, the maps we pull up on our computers and smartphones will get closer to rendering reality “in silico.” Google’s pedestrian-scale images, accessed through “Street View” mode, already give us high-resolution, eye-level views of streets around the world. Here again, advances in computer algorithms have the potential to make this imagery extremely lifelike. Google’s DeepStereo project uses algorithms to create the “missing” frames within a series of still images, resulting in relatively seamless movement from one frame to the next.
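DeepStereo uses a learned neural network that reasons about scene depth, but the basic idea of synthesizing in-between frames can be illustrated with the simplest possible baseline: per-pixel linear blending of two stills. The sketch below is a toy stand-in, not Google’s method.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_between):
    """Synthesize n_between intermediate frames between two stills
    by per-pixel linear blending (a naive baseline, for illustration)."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)              # blend weight, 0 < t < 1
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

a = np.zeros((2, 2))       # dark frame
b = np.ones((2, 2)) * 8.0  # bright frame
mid = interpolate_frames(a, b, 3)
print([f[0, 0] for f in mid])  # [2.0, 4.0, 6.0]
```

Simple blending produces ghosting whenever objects move between frames; view-synthesis approaches like DeepStereo avoid this by estimating where each piece of the scene sits in depth before rendering the new viewpoint.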
And as the quality of this digital imagery improves, so will our ability to interact with it, as evidenced by early efforts to connect Google Maps to virtual reality devices.
Tools like these could help planners and designers accurately assess on-the-ground conditions where their projects are sited. Applications of these techniques that benefit planners and designers are starting to crop up. The new service Aero3Dpro is, as the company’s website states, “working with businesses to find new ways to use 3D information to solve real-life problems.” Their 3D models can be used to render highly detailed sites, and have also been integrated into interactive 3D GIS software such as TerraExplorer. Planners and landscape architects can use this software to conduct slope and viewshed analyses.
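Both analyses operate on a digital elevation model (DEM), a grid of ground elevations. Slope comes from the elevation gradient at each cell; a viewshed repeats a line-of-sight test from an observer cell to every other cell. The sketch below uses a made-up 10-meter cell size and toy elevations to show both operations in miniature.

```python
import numpy as np

CELL = 10.0  # cell size in meters (assumed for this example)

def slope_degrees(dem, cell=CELL):
    """Slope of each DEM cell, in degrees, from elevation gradients."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def line_of_sight(dem, a, b, eye=1.7):
    """True if cell b is visible from an observer standing at cell a.
    A full viewshed runs this test from a to every cell in the grid."""
    (r0, c0), (r1, c1) = a, b
    n = max(abs(r1 - r0), abs(c1 - c0))
    z0 = dem[r0, c0] + eye  # observer's eye elevation
    for i in range(1, n + 1):
        t = i / n
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        sight_z = z0 + t * (dem[r1, c1] - z0)  # sight-line elevation here
        if i < n and dem[r, c] > sight_z:
            return False  # intervening terrain blocks the view
    return True

dem = np.array([
    [10.0, 10.0, 10.0, 10.0],
    [10.0, 10.0, 30.0, 10.0],   # a ridge in the middle of the row
    [10.0, 10.0, 10.0, 10.0],
])
print(slope_degrees(dem).round(1))
print(line_of_sight(dem, (1, 0), (1, 3)))  # ridge blocks the view: False
print(line_of_sight(dem, (0, 0), (0, 3)))  # flat ground: True
```

GIS packages such as TerraExplorer wrap the same logic with real coordinate systems, earth curvature corrections, and efficient sweep algorithms, but the underlying geometry is no more than this.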
Detailed, algorithmically rendered 3D imagery will benefit the building, design, and construction industry by helping to speed up and better inform the planning phase. And this technology is just one of an explosion of futuristic technologies that have the potential to transform many industries across the US. The race to map the globe with sensors in phones, cars, and drones could enable revolutions that reshape the world we live in by reshaping how we look at and understand it. Much of this detailed information will reside in the cloud, providing enormous public data sets that will help protect the planet’s biodiversity and enable driverless cars to navigate complex geographies. Combined with virtual reality, these advances will, in time, democratize travel and could disrupt the conventions of space and location. In the future, we will traverse vast dimensions in an instant, giving new, virtual meaning to the saying, “No matter where you go, there you are.”