PUBLIC DOMAIN 3-DIMENSIONAL DATA ACQUISITION DESIGN AND SOFTWARE

9-21-97

A primary branch of our research is 3-D data acquisition. Most of our effort has been in the study of structured light techniques involving light striping, while some of our earlier work involved acoustic imaging. Our latest work is in real-time 3-D data acquisition and processing using high-speed Spatial Light Modulation (SLM) devices. In these systems, we use time and space light-stripe modulation techniques to achieve high SNR as well as high throughput. We believe this area of study is becoming very important for both industrial and commercial applications. Many structured light methodologies have been developed in past decades; however, these methods have been limited by their computational requirements. With the emergence of low-cost, high-speed general-purpose computers and standardized high-speed buses and interfaces, many of these methods have become practical. In the last few years SLM technology has reached speeds that will support data acquisition rates, and preprocessing capabilities, that allow for 3-D data acquisition/reconstruction near video rates.

We have already begun preliminary work in this area; over the next two years our group will be publishing on this latter topic, and our efforts will be documented in these web pages. We present the following design below. The software is public domain, and all of its formats are also public domain. If you are just starting to get involved in this area, this system is an excellent introduction.


HB2003 DESIGN AND SOFTWARE

You will need the following components:

NOTES FOR ENHANCEMENTS

Please contact us at lgh@engr.uky.edu or see http://www.engr.uky.edu/~lgh/

SOFTWARE

EXAMPLES OF USAGE

Example A, SETUP: 9-21-97

1. Apparatus Setup

The projector is placed perpendicular to the target object region. The camera is at an incident angle from this perpendicular and the field of view of the camera is adjusted to overlap with the projected field. Both systems are focused to the same focal plane. The focal plane may be anywhere from the front of the target object region to the back of the target region. For this example, the back of the target region will be the focal plane. In this example there is no compensation for perspective distortion.
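This arrangement is the usual structured-light triangulation geometry. As a rough illustration (ours, not part of the HB2003 software): with the projector perpendicular to the focal plane and the camera at an incident angle from that perpendicular, a point raised z above the focal plane appears shifted laterally in the camera image by z times the tangent of the incident angle, so range can be recovered from the measured stripe shift. The pixel-size parameter below is a placeholder for whatever world-unit conversion your setup needs.

```python
import math

def range_from_shift(shift_px, incident_angle_deg, px_size=1.0):
    """Height above the focal plane implied by a lateral stripe shift.

    With the projector perpendicular to the focal plane and the camera
    at incident_angle_deg from that perpendicular, a point raised z
    above the plane appears shifted by z * tan(angle) in the image.
    px_size converts pixels to world units (a placeholder value).
    """
    return shift_px * px_size / math.tan(math.radians(incident_angle_deg))

# At a 45-degree incident angle, tan is 1, so a 10-pixel shift
# corresponds to 10 units of range.
```

This also shows why a larger incident angle gives finer range resolution per pixel of shift, at the cost of more shadowing.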

2. Camera and Projector Coarse Adjustment

A flat white matte background plane is placed at the desired focal plane. The projector is focused onto this plane, and range or zoom adjustments are made to obtain the desired field of projection. The camera is adjusted to have a field of view comparable to that of the projection field. A test pattern is placed in the field of view at the focal plane and the camera is focused on it. Focus and magnification adjustments interact, so iteration may be needed. The trickiest part is aperture adjustment: the background should be illuminated with light equivalent to the projector's while the aperture is set. One way is to do this by experience: turn the lights on and make all the settings in consideration of the target object's reflectance characteristics. Another way is to alternately project a white image, then grab a frame and display it. This is somewhat distracting because the display and the projection are one and the same, so the user must work with a flashing image, which can be confusing. We do it by experience and use "imageit.exe" to simply display what the camera sees while room light is used for illumination.

3. Camera and Projector Fine Adjustment

The program "strpalgn -t 5" is run, which projects a circle inside an outer box with cross hairs. The outer box indicates the limits of the projection, the circle gives an idea of the distortion, and the cross hairs allow alignment of the camera center with the projector center. The frame is digitized, then coarsely sampled and redisplayed in the center of the screen. Thus the entire field of view of the camera is displayed, and simultaneously projected, within the center area of the screen. The camera should not be refocused at this point because the image is coarse. However, the camera direction should be aligned to center it while the zoom is adjusted to best enclose the projected pattern.

4. Determine the Projector Angle

The program "strpcal -o angle.txt" is run, which automatically measures the incident angle of the camera. It also indicates which side the camera is on, based on the sign of the angle. However, if your images are inverted one way or another because of your projector, camera type, or setup, you may need to verify the correct sign association. An incorrect sign will invert the range values of the scanned objects. This step is optional and may be performed manually.

Example B, CALIBRATION OF FOCAL PLANE: 9-21-97

1. In-Phase Scan of Focal Plane

A planar white matte background is placed in the focal plane. The program "stripe -o data\icala#.mat -s 8 -x 4 -d 0" is run. The surface will be scanned and icala0.mat through icala8.mat will be written to the data subdirectory. Matview3.exe may be used to view these files. The reference image is in icala0.mat and the encoded images are in the other 8 files.

2. Binarization of In-Phase Scans

Run the program "strpbin -i data\icala#.mat -o data\icalb#.mat -t 0.8", where the threshold 0.8 (0 < t < 1.0) is set by trial and error. To set the threshold, look at data\tmp1.mat, which is the lowest-bit encoding. The threshold should result in tmp1.mat showing stripes with a 50% duty cycle. Once you have a good threshold, you should be able to keep it.
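The thresholding and the 50% duty-cycle check can be sketched in a few lines. This is our illustration of the idea, not strpbin's actual algorithm, and it assumes the .mat files load as 2-D intensity arrays (loading code omitted): each encoded frame is normalized by the full-on reference frame, binarized at t, and the lowest-bit plane's on-fraction is checked against 0.5.

```python
import numpy as np

def binarize(encoded, reference, t=0.8):
    """Binarize an encoded stripe image against the full-on reference.

    Pixels brighter than t times the reference intensity count as
    stripe pixels (True); the rest are background. 0 < t < 1.
    """
    ref = np.maximum(reference.astype(float), 1.0)  # avoid divide-by-zero
    return (encoded.astype(float) / ref) > t

def duty_cycle(bits):
    """Fraction of 'on' pixels; should be near 0.5 for the lowest bit."""
    return float(bits.mean())

# Synthetic check: alternating bright/dark stripes under a flat reference.
ref = np.full((4, 8), 200.0)
enc = np.tile([200.0, 20.0], (4, 4))   # shape (4, 8)
bits = binarize(enc, ref, t=0.8)
print(duty_cycle(bits))                # ideally close to 0.5
```

If the measured duty cycle drifts well away from 0.5, the threshold t is biased and should be retuned.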

3. Quadrature-Phase Scan of Focal Plane (the quadrature steps fill in the high-error regions of the in-phase scan; the details should be published in the next year)

The planar white matte background used in step 1 is rescanned. The program "stripe -o data\qcala#.mat -s 8 -x 4 -d 1" is run. The surface will be scanned and qcala0.mat through qcala8.mat will be written to the data subdirectory. Matview3.exe may be used to view these files. The reference image is in qcala0.mat and the encoded images are in the other 8 files.

4. Binarization of Quadrature-Phase Scans

Run the program "strpbin -i data\qcala#.mat -o data\qcalb#.mat -t 0.8", using the threshold 0.8 (0 < t < 1.0) determined by trial and error in step 2.

5. Interlace In-Phase and Quadrature-Phase Components

Run "strplace -i1 data\icalb1.mat -i2 data\qcalb1.mat -o data\cal.mat", and the range encoding of the focal plane is complete. This data will be used in the reconstruction of scanned target objects.
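Underlying these steps, each pixel's binarized bit planes together encode which stripe the pixel falls in. A generic sketch of assembling N binary-coded planes into a per-pixel stripe index follows; this is the standard plain-binary version for illustration, not necessarily the exact encoding used by stripe/strplace (whose Gray-code and quadrature fill-in details are not documented here).

```python
import numpy as np

def planes_to_code(planes):
    """Stack binarized bit planes (MSB first) into a stripe-index image.

    planes: list of boolean 2-D arrays, one per projected pattern.
    Returns an integer image where each pixel holds its stripe code.
    """
    code = np.zeros(planes[0].shape, dtype=np.int32)
    for p in planes:
        code = (code << 1) | p.astype(np.int32)
    return code

# Two planes (MSB then LSB) resolve four stripes, codes 0..3.
msb = np.array([[0, 0, 1, 1]], dtype=bool)
lsb = np.array([[0, 1, 0, 1]], dtype=bool)
print(planes_to_code([msb, lsb]))      # [[0 1 2 3]]
```

With 8 planes, as in the scans above, 256 stripes can be resolved across the projection field.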

Example C, SCAN OF TARGET OBJECT: 9-21-97

1. In-Phase Scan of Target Object

Place target object in scan region. The program "stripe -o data\iobja#.mat -s 8 -x 4 -d 0" is run. The surface will be scanned and iobja0.mat through iobja8.mat will be written to the data subdirectory. Matview3.exe may be used to view these files. The reference image is in iobja0.mat and the encoded images are in the other 8 files.

2. Binarization of In-Phase Scans

Run the program "strpbin -i data\iobja#.mat -o data\iobjb#.mat -t 0.8", where the threshold 0.8 (0 < t < 1.0) is set by trial and error. To set the threshold, look at data\tmp1.mat, which is the lowest-bit encoding. The threshold should result in tmp1.mat showing stripes with a 50% duty cycle. Once you have a good threshold, you should be able to keep it.

3. Quadrature-Phase Scan of Object (the quadrature steps fill in the high-error regions of the in-phase scan; the details should be published in the next year)

The object in step 1 is rescanned. The program "stripe -o data\qobja#.mat -s 8 -x 4 -d 1" is run. The object will be scanned and qobja0.mat through qobja8.mat will be written to the data subdirectory. Matview3.exe may be used to view these files. The reference image is in qobja0.mat and the encoded images are in the other 8 files.

4. Binarization of Quadrature-Phase Scans

Run the program "strpbin -i data\qobja#.mat -o data\qobjb#.mat -t 0.8", using the threshold 0.8 (0 < t < 1.0) determined by trial and error in step 2.

5. Interlace In-Phase and Quadrature-Phase Components

Run "strplace -i1 data\iobjb1.mat -i2 data\qobjb1.mat -o data\obj.mat", and the range encoding of the target object is complete.

Example D, RECONSTRUCTION OF TARGET OBJECT RANGE IMAGE: 9-21-97

1. Subtraction of Encoded Focal Plane from Encoded Object

The program "strpsub -i data\obj.mat -r data\cal.mat -o data\final.mat" is run. The image in final.mat is a true range image of the target object.
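The idea behind the subtraction can be sketched simply (our illustration, not strpsub's actual code): subtracting the flat focal-plane encoding cancels the stripe pattern itself, so what remains at each pixel is the stripe shift caused by the object's relief, which is proportional to range. The sign convention depends on which side the camera is on (see the SETUP example).

```python
import numpy as np

def range_image(obj_code, cal_code):
    """Per-pixel code difference between object and focal-plane scans.

    The flat calibration scan cancels the stripe pattern, leaving the
    lateral stripe shift caused by the object's relief at each pixel.
    """
    return obj_code.astype(np.int32) - cal_code.astype(np.int32)

cal = np.array([[0, 1, 2, 3]])
obj = np.array([[0, 2, 2, 3]])   # one pixel shifted by one stripe
print(range_image(obj, cal))     # [[0 1 0 0]]
```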

2. Conversion to Vector Format

The vector format (VCT) is our own home-brew format, and it is very compact. Run "matvct -i data\final.mat -a data\iobjb0.mat -p 45.0 -s 4.0 -o data\final.vct". The format was developed to store the raw data collected by the HB1009 system, and it contains the machine variables needed for reconstruction into vertex data. Although the VCT format has capability for perspective distortion compensation, the only variable used in this example is the scaling of the range axis. Thus the incident angle (-p 45.0 degrees), obtained in the SETUP, is necessary, and you may need to adjust the scale factor (-s 4.0) for your particular setup. The 3-D vertices carry albedo information; in this example the original image is used (-a data\iobjb0.mat). This step can be done after the scan has been made.

3. Vector to Vertex Format

Vertex format requires much more memory than vector format; however, it is easier to work with because it can be manipulated with 3-D transformations. Run the program "vctvtx -i data\final.vct -o data\final.vtx".

4. Editing Vertex Data

If you have your own 3-D editing programs, you will probably want to convert the vertex format to one compatible with your software. We have some Wavefront conversions, but they have not been tested; if you're interested, we would be willing to debug these with your cooperation, no charge.

The vertex format is simple: it is a byte format containing the X, Y, and Z coordinates as well as the R, G, and B values of each vertex. The first two numbers in the file are two 4-byte integers corresponding to the rows and columns of the original VCT file; the product of these numbers is the number of vertices. R, G, and B are unsigned byte values, and X, Y, and Z are floating-point values.

If you don't want to convert to another format but want to edit and view the data, run "vtxedit -c control\temp.dat -i data\final.vtx -o data\temp", which will bring up the final.vtx vertices joined by lines. This editor will allow you to scale, crop, and rotate in 3-D. You have a choice of intensity mapping, range mapping, and all-white mapping of vertices, and you can perform hidden-surface projection as well. The projected images can be stored in MATRIX form (data\temp.mat), which can be converted to GIF or EPS formats for printing. An edited vertex file can also be stored as a new set of vertices (data\temp.vtx) in VERTEX form.

Here is the catch: we haven't documented the VTX editor, so you would need to ask us to document the process, which we would be happy to do. The editor is not intuitive; it contains several coordinate systems and several hidden keys, and it is unlikely that you would stumble your way through it without some form of help. Good luck if you try.
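For readers writing their own converter, a minimal VTX reader might look like the sketch below. The header (rows, cols as 4-byte integers) and the per-vertex fields come from the description above; the field order within each vertex (X, Y, Z floats followed by R, G, B bytes) and little-endian byte order are our assumptions, since the format is only loosely documented, so verify against a real file before relying on it.

```python
import struct

def read_vtx(path):
    """Read a VTX file: a rows/cols header, then rows*cols vertices.

    ASSUMED layout per vertex: three little-endian 4-byte floats
    (X, Y, Z) followed by three unsigned bytes (R, G, B).
    """
    with open(path, "rb") as f:
        rows, cols = struct.unpack("<ii", f.read(8))
        verts = []
        for _ in range(rows * cols):
            x, y, z = struct.unpack("<fff", f.read(12))
            r, g, b = struct.unpack("BBB", f.read(3))
            verts.append((x, y, z, r, g, b))
    return rows, cols, verts
```

A converter to another vertex-based format would iterate over the returned list and re-emit each (x, y, z, r, g, b) tuple in the target syntax.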