3D-Maps-Minus-3D allows you to browse one of the major online satellite 3D maps with all of the 3D information removed. What's left is texture maps: two-dimensional images used by computer software to add color information to a 3D model. We know these texture maps from seeing them wrapped around the 3D models of buildings, roads, skyscrapers, trees, hills, and topography. Here they are presented as is, as flat 2D images, as they exist before they are parsed by the computer to produce the illusion of 3D space.
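The core mechanism, a renderer looking up a pixel in a flat 2D image to color a point on a 3D surface, can be sketched in a few lines. This is a minimal illustration, not the map provider's actual code; the function name and the tiny 2x2 stand-in texture are assumptions for the example.

```python
# Minimal sketch of texture sampling: each point on a 3D surface carries
# (u, v) coordinates into a flat 2D image, and the renderer reads its
# color from that image. Nearest-neighbor lookup, the simplest scheme.

def sample_texture(texture, u, v):
    """Look up a color in a 2D texture.

    texture: list of rows of (r, g, b) pixels; u, v in [0, 1].
    """
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)   # column in the flat image
    y = min(int(v * height), height - 1) # row in the flat image
    return texture[y][x]

# 2x2 stand-in texture: red, green on top; blue, white below.
tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
sample_texture(tex, 0.1, 0.1)  # color near the upper-left corner
```

The texture maps in the exhibition are exactly the `texture` argument here: the flat image a browser consults, thousands of times per frame, to paint the 3D model.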
The major online satellite 3D map is the result of a 3D-mapping apparatus. GPS units, 3D scanners, and cameras mounted on automobiles, airplanes, and satellites capture photographs and data. These are collected, parsed, and packaged by software and computers. The data and images are sent across the network to millions of browsers, which reassemble them into a navigable 3D representation of the planet. The apparatus is as automated as possible, but human input remains: people operate the vehicles and write and troubleshoot the software and hardware.
There are many blank spots on this map. Not every surface on the planet has been parsed and mapped. Only certain regions, mostly cities and hotspots of human activity, are captured by the apparatus. The map's blank spots highlight the mapping apparatus's priorities.
The ubiquity of networked 3D maps has given us a new perspective: the view from above, looking down at the ground from the sky. This perspective has traditionally been associated with a god-like view, or more recently with governments' and militaries' ability to surveil from above through aerial reconnaissance and drones. These views are the point of view of power, and they are now being democratized for capital gain.
The mapping apparatus that gives us access to the view-from-above perspective also contains within it a multi-perspective view from within. This is the view we see in the texture maps. These texture maps are flattened, fragmented and exploded photographs. There are no singular vanishing points, no ground or horizon in these maps, and thus no hierarchies of information. Rooftops, facades, roads, and buildings are all collapsed onto the same plane. They collapse multiple points of view and times into a single picture plane.
These images imply a particular vision of the city: a fragmented, disorienting whole, only ever experienced partially and from within. These images are kaleidoscopic shards. Each of the cities captured by the apparatus is a distinct, unique collage of color, angles, and forms. These images inform us about the places captured in ways completely different from perspectival images.
The mapping apparatus is almost fully automated. We typically see the input and the output, but the data-capturing devices and processing algorithms also produce "intermediary" images: images used to make other images (the output). These texture maps are one part of the data used to produce 3D spaces in our internet browsers. But the texture maps themselves are not yet ready for consumption; they are in a data limbo, necessary but usually hidden.
These texture maps are a new kind of acheiropoieta: images not made by human hand. Historical examples of acheiropoieta, such as the Shroud of Turin or the Veil of Veronica, were held to be of divine origin, discovered fully formed rather than created by humans. Scientific images are supposed to be acheiropoieta as well: their objectivity comes from the fact that they are not made by humans.1 These new acheiropoieta are not made by human hands either, but by a massive mapping apparatus. This apparatus is supposed to show the planet as it is; its goal is perfect simulation based on data gathering. Its truth claim relies on the objectivity of the system towards all its data sources.
These images are not made for humans to look at. They are meant to be seen and parsed by computers. They may look like glitched maps, disaster scenes, or cubist collages, but these images are produced for other computers to use: to apply color and texture to 3D forms. These images are efficient vectors of information. But unlike a long list of 1s and 0s, or some other cold alien encoding, they still look like the objects they represent. They are uncannily close to photographs or human-made collages.
These images are generated for efficiency of transmission and for machines to read. They use texture-packing algorithms, algorithms to cut up photographs, and algorithms to recreate 3D volumes. They were not produced to look like anything so much as to encode information. Their formal characteristics (empty gray areas at the bottom, jagged edges, wedges) are the result of an automated process. We can intuit parts of the process in these images: read how 3D surfaces are fragmented and collaged into 2D maps, and understand how the 3D scanners, cameras, engineers, operators, and software deal with facades, roofs, trees, bodies of water, or automobiles. For example, we get the sense that the apparatus privileges placing photographic information in the top-left corner, or that it tends to fragment photos of trees, leaving lots of little gray holes in the imagery. What we are reading in this case is the process residue itself.2
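The texture-packing step described above can be sketched with a simple "shelf" packer: rectangular photo fragments are placed left to right in horizontal rows on a single atlas. This is an illustrative stand-in for whatever proprietary algorithm the mapping apparatus actually uses; the function name and fragment sizes are assumptions. Even this toy version produces the formal traits the essay points to: fragments crowd toward the top-left, and unfilled shelf space is left as empty gray holes.

```python
# A minimal "shelf" texture packer: place (w, h) fragments left-to-right,
# starting a new shelf (row) when a fragment would overflow the atlas width.
# Gaps left at the end of each shelf stay empty, like the gray holes
# visible in real texture maps.

def pack_fragments(sizes, atlas_width):
    """Return (x, y) placements for each fragment and the atlas height used."""
    x = y = shelf_height = 0
    placements = []
    for w, h in sizes:
        if x + w > atlas_width:          # fragment doesn't fit: new shelf
            y += shelf_height
            x = shelf_height = 0
        placements.append((x, y))
        x += w
        shelf_height = max(shelf_height, h)
    return placements, y + shelf_height

# Hypothetical photo fragments (width, height) in pixels.
fragments = [(120, 64), (200, 80), (300, 50), (90, 90), (256, 40)]
positions, height = pack_fragments(fragments, atlas_width=512)
```

Note how the first placement is always (0, 0): the algorithm structurally privileges the top-left corner, which is one plausible reading of the bias visible in the exhibited images.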
It is likely that most of the images created by the mapping apparatus will never be seen by a human. Soon, more such images will be created by all kinds of apparatuses, for mapping, for making video games, for digitizing objects in museums, for producing 3D-printable objects, than were ever created by human hands, and the vast majority of them will never be seen by humans. If that is the case, there is a possibility that these are not images anymore. If an image is not read by a human, can it still be called an image?
1. Bruno Latour, "What is Iconoclash? Or Is There a World Beyond the Image Wars?," in Iconoclash: Beyond the Image Wars in Science, Religion and Art, ed. Peter Weibel and Bruno Latour (ZKM and MIT Press, 2002), 14–37.
2. Curt Cloninger, "Manifesto for a Theory of the 'New Aesthetic'," Mute 3, no. 4 (Spring 2013): 16–27.