Methodology and Technology for Rapid Three-Dimensional Scanning of In Situ Archaeological Materials in Remote Areas  

 

 

E. R. Crane1, L. G. Hassebrook1, C. T. Begley2, W. F. Lundby1 and C. J. Casey1

 

1 Department of Electrical and Computer Engineering, University of Kentucky, Lexington, Kentucky, USA

2 Department of Anthropology, Transylvania University, Lexington, Kentucky, USA

cbegley@transy.edu, {Eli.Crane, lgh, cjcase0}@engr.uky.edu, dayscondor@gmail.com (Lundby)

 

Key words: 3-Dimensional, scanning, Structured Light Illumination

 

 


1.    Introduction

Archaeology faces unique challenges among the historical sciences in that many of its accepted methodologies are destructive to archaeological resources. From excavation to surface collection, nearly all archaeological fieldwork impacts the resources it studies. The same applies to the deterioration of artifacts during study, through handling by successive researchers and shipment between research groups.

Recently, our group has developed and tested a proof-of-concept 3-D surface acquisition technology that will help archaeologists face these challenges. The innovation, called Rotate and Hold and Scan (RAHAS), is a new Structured Light Illumination (SLI) technique and processing algorithm that minimizes the complexity of the scanning apparatus. This reduction allows the energy-intensive processing steps (which require computers and power generators) to be postponed until a later time and place. Since the energy-intensive processing elements are not required in the field, this decoupling enables 3-D scanning in extremely remote locations.

The technology has undergone initial testing in remote forests of Honduras. While there, Chris Begley and Eli Crane collected a series of 3-Dimensional scans of ancient artifacts and petroglyphs on the Mosquito Coast of Honduras. Begley and Crane took the scanner on a two-week trip through the most remote parts of Honduras, during which time they were not resupplied from the outside and could not recharge batteries or replace any part of the scanner. The trip consisted of travel by truck, on foot, on mules, and by raft. The scanner was carefully packed in a Pelican case and suffered no damage despite being subjected to all manner of abuse, including two episodes of rafts flipping in the rapids.

It is anticipated that this methodology and technology would be of particular use to archaeologists working in similar remote areas, with little or no capacity to recharge batteries, to record objects that cannot be recovered for whatever reason. During this trip, objects that met those criteria were selected for scanning. These included objects that could not be removed (such as petroglyphs) or were too bulky and heavy to be transported long distances by foot (such as a large stone bowl). Data were also collected to test other potential uses for the scanner. Badly eroded petroglyphs were scanned to see if details not visible to the naked eye could be discerned. Petroglyphs in areas undergoing rapid erosion, including petroglyphs that are regularly inundated by swift water, were also scanned to establish baseline data against which future measurements can be compared in order to assess the rate of destruction.

We present the preliminary results of that expedition and evaluate the capability of RAHAS to obtain accurate 3-D surface data of artifacts in remote areas.

2.    RAHAS theory and operation

The RAHAS scanner, shown in Fig. 1, is a collection device and does not process the data. To achieve a scan, a slide projector moves a slide with a special image in a rotational and linear manner. One standard consumer digital camera captures a video of the moving projection upon the target artifact, while a second camera captures a high-resolution still photograph. Both cameras record their data on their respective flash cards. The 2-D quality of the images can be ascertained on site in each camera's preview window. The contents of the two cameras' flash cards are processed at a later time, normally off site, where power and computing resources are available to create quality 3-D images with high-resolution skins.

Rotate and Hold and Scan (RAHAS) [HASSEBROOK et al. 2009] is an SLI method that uses "snake" tracking of the projected pattern stripes to allow for non-ambiguous 3-D capture. The pattern projection is initially aligned in the epipolar direction to allow correct identification of the pattern stripes, followed by rotation and stripe tracking into phase alignment for a more traditional Phase Measuring Profilometry (PMP) movement of the stripe pattern. The stripes are formed from a sinusoidal image projection, and a single sinusoidal pattern is used throughout the process. The method can be implemented with a single patterned slide projection or with a digital projection system. RAHAS uses epipolar stripe alignment inspired by our composite pattern (CP) techniques [HASSEBROOK et al. 2008, GUAN et al. 2008], snake tracking inspired by our Lock and Hold SLI methodology [CASEY et al. 2008], followed by single-frequency PMP [SRINIVASAN et al. 1984].
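The PMP step can be made concrete with a short sketch. Assuming a standard N-step phase-shifted sinusoidal sequence (the textbook PMP formulation, not the authors' field code), the wrapped phase at each pixel is recovered from the captured frames as follows:

```python
import numpy as np

def pmp_wrapped_phase(frames):
    """Recover wrapped phase from N phase-shifted sinusoid images.

    frames: sequence of N images I_n = A + B*cos(phi + 2*pi*n/N).
    Returns the phase phi wrapped to (-pi, pi].
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), frames, axes=1)  # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(shifts), frames, axes=1)  # sum_n I_n cos(delta_n)
    # atan2(num, den) = -phi, so negate to recover phi.
    return -np.arctan2(num, den)

# Synthetic check: a known phase ramp survives the round trip.
true_phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 64).reshape(1, -1)
frames = [100 + 50 * np.cos(true_phi + 2 * np.pi * k / 4) for k in range(4)]
recovered = pmp_wrapped_phase(frames)
```

The ambient term A and modulation B cancel in the sums, which is what makes PMP robust to surface albedo variation; the depth ambiguity every 2*pi is what the snake tracking later resolves.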

 

[Image: RAHASsystem.tif]

Figure 1: RAHAS system used in Honduras expedition (June 2009).

[Image: Figure2.png]

Figure 2: Color texture mapping to depth surface of scan. Sample spacing is regular in the low resolution (upper right) and 2X resolution (lower right) mappings, with spacings of 360 microns and 180 microns, respectively. The high resolution (lower left) mapping is irregularly spaced, with about 90 microns between samples.

The system is shown in Fig. 1. The top camera is used in video mode to capture the RAHAS pattern projection as the pattern is rotated and translated by the operator. The center camera captures the color texture image of the artifact feature. A 3-D calibration grid was scanned once for each scanning setup.

The quality of the scan and the photograph can be evaluated on site using the cameras' existing preview systems. The memory cards were then brought back to our laboratory, where we are developing the preliminary processing algorithms to extract the 3-Dimensional information. We use a method of combining the high-resolution texture with the lower-resolution depth information that we refer to as Mixed Resolution (MXR) texture skinning [HASSEBROOK et al. 2009]. We show in Fig. 2 the results of mixed resolution skinning. The alignment of the texture onto the depth is automated by pre-calibrating the texture camera and depth scanner to the same calibration grid. As explained by Hassebrook et al. (2009), the three methods of mapping the texture to the depth result in different lateral sampling resolutions, ranging from the depth sample spacing to the high-resolution texture spacing. MXR is discussed further in Section 3.

3.    RAHAS algorithm description

Referring to the camera and RAHAS projector module alignment shown in Fig. 3 and the pattern and coordinate alignment [2] in Fig. 4, the phase dimension runs vertically and the orthogonal dimension runs horizontally. The RAHAS algorithm starts with the stripe pattern aligned in the epipolar direction, as shown in Fig. 3 and the upper left image of Fig. 4. For a digitally projected image, the pattern can be distorted so that all the stripes lie along epipolar lines; in the case of the mechanical slide projection, one stripe will be closest to epipolar alignment and the other stripes will vary slightly from it. Because the epipolar alignment results in straight stripes/snakes, the snakes are easily identified. As the pattern slide is rotated, these snakes are tracked as they become more distorted by surface depth variation. The motion of the snakes is radial, non-linear, and dependent on the surface depth. Using optical flow and predictor techniques, the snakes can be optimally tracked; the faster the frame rate, the simpler the tracking needs to be.
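The predictor idea can be illustrated with a deliberately simplified sketch. Assuming stripe centers have already been detected in each video frame (detection itself is omitted here, and this is not the authors' implementation), a constant-velocity predictor with nearest-neighbour matching preserves each snake's identity from frame to frame:

```python
import numpy as np

def track_snakes(center_lists):
    """Track stripe ("snake") centers across frames with a linear predictor.

    center_lists: per-frame arrays of detected stripe-center positions.
    Returns an array of shape (n_frames, n_snakes): the position of each
    snake in each frame, matched by nearest neighbour to the predicted
    position (previous position + previous displacement).
    """
    tracks = [np.sort(np.asarray(center_lists[0], dtype=float))]
    velocity = np.zeros_like(tracks[0])
    for detections in center_lists[1:]:
        detections = np.asarray(detections, dtype=float)
        predicted = tracks[-1] + velocity          # constant-velocity predictor
        # Nearest detected center to each predicted position.
        idx = np.abs(detections[None, :] - predicted[:, None]).argmin(axis=1)
        matched = detections[idx]
        velocity = matched - tracks[-1]
        tracks.append(matched)
    return np.stack(tracks)

# Two snakes drifting at different rates keep their identities.
frames = [[10.0, 50.0], [12.0, 49.0], [14.0, 48.0]]
tracks = track_snakes(frames)
```

As the text notes, a higher frame rate shrinks the inter-frame motion, so even this naive matcher suffices; slower capture would call for full optical-flow tracking.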

 

[Image: CameraProjectorAlignmentCrop.gif]

Figure 3: (left) Pattern aligned in epipolar directions. (center) Rotate and track. (right) Phase alignment for PMP.

Once the pattern rotation reaches phase alignment, as in Fig. 3 (right) and Fig. 4 (lower left), the pattern is translated in the phase direction to capture the PMP sequence. The wrapped phase is determined from the PMP sequence, as shown in Fig. 4 (lower right). This wrapped phase could be unwrapped directly, but if there is a step edge or multiple objects, the result may be ambiguous in depth. To unwrap the phase without ambiguity, we use the snakes in Fig. 4 (lower center) to define the phase-wrapping boundaries. Because we know the snake identities from tracking them, they can be used to unwrap the phase with no ambiguity in depth. Given the unwrapped phase, the 3-D world coordinates are obtained from a calibration process.
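The payoff of snake identity is that unwrapping collapses to a per-pixel addition; no spatial path-following is needed, so step edges and disjoint objects cannot corrupt the result. A minimal sketch (our illustration of the principle, with the snake labels assumed given):

```python
import numpy as np

def unwrap_with_snakes(wrapped, snake_index):
    """Unwrap PMP phase using per-pixel stripe identity.

    wrapped:     phase in (-pi, pi] from the PMP step.
    snake_index: integer index k of the tracked stripe (phase period)
                 covering each pixel, known unambiguously from tracking.
    Absolute phase is simply wrapped + 2*pi*k -- no path-following
    unwrapping, so depth discontinuities cannot cause errors.
    """
    return wrapped + 2 * np.pi * np.asarray(snake_index)

# Two pixels with identical wrapped phase but different stripe labels
# map to absolute phases three full periods apart.
wrapped = np.array([0.5, 0.5])
k = np.array([0, 3])
absolute = unwrap_with_snakes(wrapped, k)
```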

[Image: RAHASalgorithm.tif]

Figure 4: RAHAS data processing algorithm.

The last step in the process is to combine the depth scan with the color texture image. This is referred to as "mixed resolution." Mixed resolution allows the researcher to retain the maximum level of resolution of both the color texture and the depth, even though they are not the same. Typically, the resolution of the video capture of the PMP sequence is lower than that of the color texture image, because capturing at video rate requires decreasing the camera image resolution to operate within the bandwidth of the camera's digital electronics. Since the desire is not to degrade the color texture image, the depth image is interpolated up to the color texture resolution and combined with it to form the final high-resolution point cloud. Because this result is used for scientific applications, it is important to specify the details of the mixed resolution format.
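The interpolation direction described above, depth up to the texture grid rather than texture down to the depth grid, can be sketched as plain bilinear resampling (an illustrative stand-in for the MXR mapping, whose exact details are given in Hassebrook et al. 2009):

```python
import numpy as np

def upsample_depth(depth, out_shape):
    """Bilinearly interpolate a low-resolution depth map onto the
    (higher-resolution) texture image grid, so every texture pixel
    receives a depth value and the color image is never downsampled."""
    h, w = depth.shape
    H, W = out_shape
    # Fractional source coordinates for each output pixel.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = depth[np.ix_(y0, x0)] * (1 - wx) + depth[np.ix_(y0, x1)] * wx
    bot = depth[np.ix_(y1, x0)] * (1 - wx) + depth[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A 2x2 depth patch upsampled to 3x3: corners preserved, center averaged.
depth = np.array([[0.0, 1.0], [1.0, 2.0]])
hires = upsample_depth(depth, (3, 3))
```

Each interpolated depth sample then pairs with the color value already at that texture pixel to form one point of the high-resolution point cloud.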

4.    Data analysis

We collected data to demonstrate the ability to detect patterns in worn petroglyphs. Fig. 5 shows the color texture component of five scans in which the petroglyph contours are easily visible. In contrast, the 3-D scans shown in Fig. 6 are of relatively worn petroglyphs.

Because of many years of weathering, some of the petroglyphs are slowly disappearing through erosion and are no longer visible by conventional photography. One of our future goals is to analyze the three scans shown in Fig. 6 and attempt to enhance the eroded contours.

 

[Image: VisiblePetroGlyphs]

Figure 5: Visible petroglyph images.

[Image: ErodedPetroGlyphs]

Figure 6: Three eroded petroglyphs.

Three methods for visualizing the curvature of the stone are shown in Fig. 7, based on the petroglyph in Fig. 5 (c). In Fig. 7 (a), only the color texture is used, with shadowing from the light illuminating the surface at an angle to the viewing angle of the camera. This represents the traditional approach to photographing petroglyphs. In Fig. 7 (b), surface normal enhancement is applied, which removes texture and synthesizes shadowing effects to enhance the contours. This technique uses the 3-Dimensional depth information and can be modified with different illumination and viewing angles to visualize the artifact's surface curvature. In Fig. 7 (c), the surface is directly encoded based on local depth to bring out the detail, independent of direction. One of the big advantages of 3-D scans is that they can be accurately measured because they are captured in a well-defined 3-D coordinate system. This is not true for a single 2-Dimensional photograph. This characteristic of 3-D scans is what allows models of the artifacts to be readily made from the 3-D data.
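The surface normal enhancement of Fig. 7 (b) can be sketched as synthetic Lambertian relighting of the depth map (a generic implementation of the idea, not the authors' code; the light direction parameter is our illustrative choice):

```python
import numpy as np

def normal_shade(depth, light=(0.0, -1.0, 1.0)):
    """Render a texture-free shaded relief of a depth map.

    Surface normals come from the depth gradients; brightness is the
    Lambertian term max(0, n . l), so the synthetic light direction can
    be swept to bring out contours running in any orientation.
    """
    z = np.asarray(depth, dtype=float)
    gy, gx = np.gradient(z)
    # Unnormalized surface normal of the surface z(x, y) is (-gx, -gy, 1).
    n = np.dstack([-gx, -gy, np.ones_like(z)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)

# A carved groove produces a bright/dark edge pair under raking light,
# even if the color texture is completely uniform.
depth = np.zeros((5, 9))
depth[:, 4] = -1.0                      # one-pixel-deep groove
shaded = normal_shade(depth, light=(1.0, 0.0, 1.0))
```

Because the shading is computed rather than photographed, it can be regenerated under any virtual light, which is what makes the technique useful for contours too shallow to cast real shadows.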

[Image: Monkey.PNG]

Figure 7: (a) Texture image of stone, (b) surface normal enhancement of contours and (c) depth encoded enhancement.

Similar to Fig. 7, the petroglyph in Fig. 5 (d) is shown in Fig. 8. In the upper left corner of Fig. 8 (c), there is a scan error called "banding." Banding is not part of the petroglyph; the error arises during the wrapped phase image generation shown in Fig. 4. In the RAHAS implementation used here, the PMP pattern sequence was manually shifted across the surface, and the shift amount was then estimated by tracking the movement of the stripe patterns. Error in the estimated stripe pattern motion leads to banding in the phase, which carries through into the 3-Dimensional coordinate depth values.
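One possible direction for a more precise shift estimate (our speculation, not part of the published RAHAS pipeline) is to estimate the pattern motion globally by phase correlation rather than per-stripe tracking, since any residual shift error feeds directly into the PMP phase as banding. A minimal one-dimensional sketch:

```python
import numpy as np

def estimate_shift(profile_a, profile_b):
    """Estimate the circular shift between two stripe intensity profiles
    by phase correlation: whiten the cross-spectrum, inverse-transform,
    and locate the correlation peak."""
    a = np.fft.fft(profile_a)
    b = np.fft.fft(profile_b)
    cross = a * np.conj(b)
    cross /= np.abs(cross) + 1e-12          # whiten the cross-spectrum
    corr = np.fft.ifft(cross).real          # peak at minus the shift (mod N)
    peak = int(np.argmax(corr))
    if peak > len(corr) // 2:               # map to a signed shift
        peak -= len(corr)
    return -peak

# A smooth bump shifted right by 3 samples is recovered exactly.
x = np.arange(32)
profile = np.exp(-((x - 10.0) ** 2) / 8.0)
shift = estimate_shift(profile, np.roll(profile, 3))
```

Because the estimate uses all pixels of the profile at once, its error averages down compared with tracking a single stripe, which is the kind of refinement the Conclusions anticipate.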

[Image: River4.PNG]

Figure 8: (a) Texture image of stone, (b) surface normal enhancement of contours and (c) depth encoded enhancement.

In Fig. 9, the banding error dominates the enhanced image of Fig. 5 (a). While the concentric circles in the feature are clearly visible, the banding error prevents further enhancement of the feature contours.

[Image: River1.PNG]

Figure 9: (a) Texture image of stone, (b) surface normal enhancement of contours and (c) depth encoded enhancement.

 

5.    Conclusions

We have demonstrated the feasibility of the RAHAS technology and described the methodology of applying it to archaeological expeditions. The device worked as expected, and the data were returned with the device intact. The post-processing algorithm for mapping the image data to 3-D contours has been developed and demonstrated. While the basic components of the algorithm have been developed, a key limitation is the banding error that occurs in the phase calculation; for now, banding is the limiting factor in the RAHAS method. We believe this is a solvable problem whose solution will involve more precise methods of calculating the pattern motion across the artifact's surface. In future research, this method will be used to analyze the remaining petroglyphs and pottery that were scanned during the expedition.

References

L.G. HASSEBROOK, C.J. CASEY, E.R. CRANE AND W.F. LUNDBY, Y. WANG, K. LIU AND D. L. LAU, 2009. “Rotate and Hold and Scan Structured Light Illumination Pattern Encoding and Decoding,” Invention Disclosure: Intellectual Property Development, University of Kentucky, 11/30/2009, INV09/1706.

 

L.G. HASSEBROOK, D.L. LAU AND C. GUAN, 2008. "System and Technique for Retrieving Depth Information about a surface by projecting a Composite Image of Modulated Light Patterns," Patent No. US 7,440,590 B1, University of Kentucky Research Foundation, (Granted Oct. 21, 2008).

 

C. GUAN,  L.G. HASSEBROOK, D.L. LAU AND V. YALLA, 2008. "Improved composite-pattern structured-light profilometry by means of postprocessing," Opt. Engr., Vol. 47(9) pp.097203-1 through 097203-11, September 2008.

 

C.J. CASEY, L.G. HASSEBROOK AND D.L. LAU, 2008. “Structured Light Illumination Methods for Continuous Motion Hand and Face-Computer Interaction,” Human-Computer Interaction, New Developments, International Journal of Advanced Robotic Systems, edited by Kikuo Asai, published by In-Teh, Croatian branch of I-Tech Education and Publishing KG, Vienna, Austria, pp. 297-308 (copyright 2008).

 

V. SRINIVASAN, H.C. LIU, M. HALIOUA, 1984. “Automated phase-measuring profilometry of 3-D diffuse objects”, Applied Optics, 23(18), 3105-3108, 1984.

 

L.G. HASSEBROOK, C.J. CASEY AND W. LUNDBY, 2009. “Non-Contact Fiducial Based 3-Dimensional Patch Merging Methodology and Performance,” Three-Dimensional Surface Recording, Analysis, and Interpretation in Archaeology and Anthropology, Computer Applications and Quantitative Methods in Archaeology, Williamsburg, Virginia, in press, number 346 (May 2009)