In preparing for my upcoming faculty summer workshop on immersive environments, I have become more interested in ways that technology enhances the examination of how space, understood as a point situated at the intersection of the physical world (geography) and time (history), is constructed. Specifically, I am interested in how technology can reveal the sociopolitical forces that shape the individual and collective narratives embedded within space. In reading about the tools and theoretical approaches that the spatial humanities bring to bear on these examinations, I was interested to note the tension between the positivist epistemology that underpins GIS technology and the poststructuralist approaches that inform the theories the spatial humanities utilize. In other words, GIS tends to privilege data points, Cartesian coordinates, and quantitative data over individual narratives, fuzzy data, complexity, and ambiguity.
I was excited, therefore, to discover that some researchers in the spatial humanities are looking to new visualization technologies as a way to counteract this positivist epistemology. Virtual reality and immersive environments seem to be especially promising:
“This convergence of technologies has the potential to revolutionize the role of space and place in the humanities by allowing us to move far beyond the static map, to shift from two dimensions to multidimensional representations, to develop interactive systems, and to explore space and place dynamically – in effect, to create virtual worlds embodying what we know about space and place” (Bodenhamer, 2010, p. 24).
One of the difficulties I have encountered when creating these immersive environments is the question of proper scale; that is to say, the scale of the immersive environment should approximate as closely as possible the scale of the original real-world environment. Proper scale is important because it creates an accurate sense of presence. It ensures that the structures in the immersive environment relate to each other and to the viewer as authentically as possible, which in turn allows the viewer to develop a more nuanced sense of the power relations and activity systems simulated in the space. Although this may not be much of a problem when photogrammetry and LiDAR technologies are employed, I have found it rather difficult when digitized archival materials are the only primary sources from which an immersive environment is constructed.
For example, in creating the models for the Uncle Sam plantation project, I have been relying exclusively on TIFF images of elevations and floor plans made by the 1940 Historic American Buildings Survey (HABS) and downloaded from the Library of Congress site. In the past I have used SketchUp to create models because of that software’s ability to take quick edge measurements with the Tape Measure tool and the relative ease of scaling the reference image to the appropriate size. Detailed work in SketchUp was difficult, however, since the reference image pixelated at higher magnification, which made accurate measurements problematic. Although this usually meant an error of only a fraction of an inch for some models, that small error becomes magnified when an element is repeated over longer distances, for example when placing steps in a building. I also found that exporting models in STL format from SketchUp into Blender (or some other 3D modeling software) for texturing sometimes created unnecessary model geometry.
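To make that compounding concrete, here is a quick back-of-the-envelope sketch of how a small per-element tracing error drifts when the element is repeated; the 1/16-inch tolerance and the step count are hypothetical values, not measurements from the HABS drawings:

```python
# Sketch: a small per-element measurement error accumulating over
# repeated elements, e.g. tracing stair steps from a pixelated plan.
# The 1/16 in tolerance and the 20-step run are hypothetical values.

ERROR_PER_STEP_IN = 1.0 / 16.0  # tracing error per step, in inches
STEPS = 20                      # number of repeated elements

total_error_in = ERROR_PER_STEP_IN * STEPS
print(f"Drift after {STEPS} steps: {total_error_in:.2f} in")
# 20 steps at 1/16 in each leave the last step 1.25 in out of position
```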
I was therefore relieved to find a workflow in Autodesk 3ds Max that enables higher magnification of the reference image for more detailed modeling while also removing a separate texturing step before the model is imported into a game engine:
- Right click Snaps Toggle → Home Grid → Grid Spacing (1′)
- Customize → Units Setup → US Standard Feet with Fractional Inches (1/16)
- Create standard primitive plane based on pixel dimensions of original image (W: 9324 px, H: 7584 px)
- Press “M” key to open Slate Material Editor
- Drag Standard Material option (Material/Scanline/Standard) to viewport
- Double-click Standard Material node to open dialog box
- Under Blinn Basic Parameters, set Self-Illumination Color (100) and Opacity (75)
- Under Blinn Basic Parameters, click checkbox next to Diffuse
- Select “Bitmap” in Material/Map Browser and click “OK”
- Select target image in the “sceneassets/images” folder of your 3ds Max project folder
- Double-click Bitmap node
- Deselect Use Real-World Scale and set UV Tiling to 1.0
- In Slate Material Editor toolbar, click Show Shaded Material in Viewport button
- In Slate Material Editor toolbar, click Assign Material to Selection
- Click [+] in upper-left corner of Viewport and select Configure Viewports
- Set Texture Maps to 2048 pixels in Viewport Images and Texture Display Resolution
- In the Modify panel, choose XForm from the Modifier List
- Expand the XForm modifier and select Center
- Move the XForm center so that it sits at the 0-point of the drawing’s architectural scale bar
- Select Gizmo in the expanded XForm modifier
- Right-click the Gizmo and select Scale from the pop-up menu
- Scale down until 1′ on the architectural scale bar corresponds to 1′ of Grid Spacing
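That final scaling step comes down to a simple ratio between the scale bar’s true length and its measured length on the unscaled plane. A minimal Python sketch of the arithmetic (the measured values below are made-up examples, not numbers from the HABS sheets):

```python
def uniform_scale_factor(scale_bar_feet: float, measured_units: float) -> float:
    """Factor by which to scale the XForm gizmo so the drawing's
    architectural scale bar spans its true length in scene units
    (assuming 1 scene unit = 1 foot, as in the Units Setup above)."""
    return scale_bar_feet / measured_units

# Hypothetical example: a 10 ft span of the scale bar measures
# 850 generic units on the unscaled reference plane.
factor = uniform_scale_factor(10.0, 850.0)
print(f"Scale the gizmo by {factor:.6f}")  # about 0.011765
```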
There are a lot of steps here, and hopefully, when I have more time, I will make a video detailing the process. Once the reference image has been properly scaled, it is simply a question of creating a mesh with a relatively low vertex count, which helps achieve higher frame rates when the model is imported into a game engine such as Unity or Unreal Engine, and then creating the model textures. I was happy to discover that the Unity import process converts the model scale into Unity world units, which helps realize proper scale in the immersive environment.
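Since Unity’s world unit is conventionally one meter, a model built at 1 unit = 1 foot ends up scaled by roughly 0.3048 on import. A small sketch of that conversion (the door-opening dimension is a hypothetical sanity check, not a measurement from the plantation drawings):

```python
FEET_TO_METERS = 0.3048  # Unity convention: 1 world unit = 1 meter

def feet_to_unity_units(feet: float) -> float:
    """Convert a dimension modeled in feet to Unity world units (meters)."""
    return feet * FEET_TO_METERS

# Hypothetical sanity check: a 6 ft 8 in door opening
door_ft = 6 + 8 / 12
print(f"{door_ft:.2f} ft -> {feet_to_unity_units(door_ft):.3f} m")
# A 6 ft 8 in opening should come out close to 2.03 m
```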
Bodenhamer, D. (2010). The potential of spatial humanities. In D. Bodenhamer, J. Corrigan, & T. Harris (Eds.), The spatial humanities: GIS and the future of humanities scholarship (pp. 14-30). Bloomington: Indiana University Press.