Dr. Daniel L. Lau

Google Scholar Page Listing Citations to My Works


Most Significant Peer-Reviewed Works
  1. D. L. Lau, G. R. Arce, and N. C. Gallagher, "Green-Noise Digital Halftoning," Proceedings of the IEEE, vol. 86, no. 12, December 1998, pp. 2424-2444.
  2. D. L. Lau, R. Ulichney, and G. R. Arce, "Blue and Green-Noise Digital Halftoning," IEEE Signal Processing Magazine, vol. 20, no. 4, July/August 2003, pp. 28-38.
  3. D. L. Lau and R. Ulichney, "Blue-Noise Halftoning for Hexagonal Grids," IEEE Transactions on Image Processing, vol. 15, no. 5, May 2006, pp. 1270-1284.
  4. K. Liu, Y. Wang, D. L. Lau, Q. Hao and L. G. Hassebrook, "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express, vol. 18, no. 5, March 1, 2010, pp. 5229-5244, DOI: 10.1364/OE.18.005229.
  5. Y. Wang, L. G. Hassebrook, and D. L. Lau, "Data Acquisition and Processing of 3-D Fingerprints," IEEE Transactions on Information Forensics and Security, vol. 5, no. 4, December 2010, pp. 750-760, DOI: 10.1109/TIFS.2010.2062177.
  6. Y. Wang, K. Liu, Q. Hao, X. Wang, D. L. Lau, and L. G. Hassebrook, "Robust Active Stereo Vision Using Kullback-Leibler Divergence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, March 2012, DOI: 10.1109/TPAMI.2011.162.

Research Statement
My academic research began in the area of digital halftoning, which is the process used by printers to convert a continuous-tone image or photograph into binary patterns of printed and unprinted dots. My major contribution was the introduction of the green-noise model for describing the ideal spatial and spectral characteristics of dot patterns composed of randomly sized, shaped, and distributed dot clusters. This work was first published in the December 1998 issue of the Proceedings of the IEEE as well as in JOSA A and the IEEE TIP. It also resulted in two separate U.S. patents awarded to the University of Delaware as well as my book, Modern Digital Halftoning, which had its first edition published by Marcel Dekker in 2001 and a second edition in 2008 by CRC Press.
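To make the halftoning process concrete, here is a minimal sketch of error diffusion, the workhorse halftoning algorithm. With `h = 0` it is classic Floyd-Steinberg, which produces the dispersed dots associated with blue-noise; setting `h > 0` adds an output-dependent hysteresis term that encourages dot clustering in the spirit of the green-noise model. The parameter `h` and the two-pixel feedback neighborhood are illustrative assumptions, not the published green-noise formulation.

```python
import numpy as np

def error_diffusion(img, h=0.0):
    """Binarize a grayscale image with values in [0, 1] by error diffusion.

    h = 0.0 reproduces classic Floyd-Steinberg (dispersed, blue-noise-like
    dots); h > 0.0 adds output-dependent hysteresis that pulls new dots
    toward already-printed neighbors, clustering them in the spirit of
    green-noise (h is an illustrative knob, not the published model).
    """
    img = img.astype(float).copy()
    out = np.zeros_like(img)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            # Hysteresis: feedback from already-quantized neighbors.
            fb = (out[y, x - 1] if x > 0 else 0.0) + \
                 (out[y - 1, x] if y > 0 else 0.0)
            out[y, x] = 1.0 if img[y, x] + h * fb / 2.0 >= 0.5 else 0.0
            # Quantization error, computed before the hysteresis bias so
            # that the average tone is still preserved.
            err = img[y, x] - out[y, x]
            # Floyd-Steinberg error weights: 7/16, 3/16, 5/16, 1/16.
            if x + 1 < cols:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < rows:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < cols:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```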

After joining the University of Kentucky (UK), I've published two papers with Robert Ulichney, the investigator who performed the original work on blue-noise. The first appeared in a special issue of IEEE's Signal Processing Magazine on blue and green-noise halftoning models, while the second, published in the IEEE TIP, corrected an error that Bob made in his original work on blue-noise, one of the most often-cited works in halftoning. I've also continued to collaborate with Gonzalo Arce, having co-advised two students through their PhD studies, including work sponsored by Agere Systems. I am also working closely with my fellow UK faculty member Robert Heath to develop error-diffusion algorithms for FPGAs, a project on which we have co-advised two MS students. In addition, I've received funding from Mutoh America to develop a printer RIP on an embedded PC platform.

In 2004, I began working with M2 Technologies, with funding from the U.S. Marine Corps, to use mid-infrared cameras to track bullets in flight as a means of sniper detection. Performed in three phases, the project was completed in 2010 after we had developed a portable, ruggedized anti-sniper system that could detect bullets in flight from over 200 meters. The final systems were eventually delivered to the United States Air Force but never field deployed. In total, the project eclipsed $10M in funding split between M2 Technologies, CABEM Technologies, Lockheed Martin, and the University of Kentucky, with UK's share alone exceeding $2M.
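In highly simplified terms, the detection front end of such a system amounts to change detection on the mid-infrared video: a supersonic bullet appears as a small, fast-moving hot spot against a slowly varying background. The sketch below illustrates only that generic idea; the thresholding rule and the parameter `k` are assumptions, not the deployed algorithm.

```python
import numpy as np

def detect_hot_spots(prev_frame, frame, k=8.0):
    """Flag pixels that brightened sharply between consecutive mid-IR
    frames, a crude stand-in for a bullet-detection front end.

    prev_frame, frame: 2-D arrays of infrared intensities.
    k: threshold in standard deviations of the difference image
       (an illustrative choice, not a tuned system parameter).
    Returns a list of (row, col) candidate detections.
    """
    diff = frame.astype(float) - prev_frame.astype(float)
    thresh = diff.mean() + k * diff.std()
    ys, xs = np.nonzero(diff > thresh)
    return list(zip(ys.tolist(), xs.tolist()))
```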

Separate from M2, I've been working with Laurence Hassebrook in the area of structured light, a 3-D imaging process that relies on active triangulation between a camera and a projector. In the beginning, our projects focused on real-time structured light using a single projected pattern, work funded by NASA through an STTR award. Later, our work focused on face recognition and surveillance, which was funded through the National Institute for Hometown Security and was a principal project at the University of Kentucky's new Center for Visualization. Our big break came with the National Institute of Justice's Fast Fingerprint Capture Program to develop a high-speed fingerprint scanner that could scan all five fingers in under 30 seconds. Larry and I proposed using structured light for non-contact scanning, an unheard-of idea at the time. Funded through the National Institute for Hometown Security, we were one of only four teams included in the project and only the second academic team, Carnegie Mellon University being the other. This work also led to the technology spin-off FlashScan3D, which separately received a Phase III grant from the Department of Homeland Security to develop a commercially viable prototype as well as a grant from the United States Army Criminal Investigation Laboratory to develop a scanner for 3-D ballistic imaging.
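The triangulation underlying structured light uses the same geometry as stereo disparity, with the projector acting as an inverse camera: once a camera pixel has been matched to a projector column, depth follows directly. A minimal sketch, assuming a rectified camera/projector pair; the baseline and focal-length values are illustrative placeholders, not calibration data from any system described here.

```python
def triangulate_depth(x_cam, x_proj, baseline_m=0.2, focal_px=1200.0):
    """Depth by active triangulation for a rectified camera/projector
    pair, identical in form to stereo: z = baseline * focal / disparity.

    x_cam:  horizontal pixel coordinate observed by the camera
    x_proj: matched column of the projected pattern (in pixels)
    """
    disparity = x_cam - x_proj
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return baseline_m * focal_px / disparity  # depth in meters
```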

The fingerprint research also garnered significant news coverage when it was first picked up by the MIT Technology Review, whose article was then carried by countless online news blogs, making my graduate student, Yongchang Wang, an instant celebrity; articles about his research appeared in EE Times, China Youth Online, Smart Planet, Popular Science, Photonics Spectra, and Vision Systems Design. The SPIE Newsroom even invited us to publish an article in its Electronic Imaging and Signal Processing section. Peer-reviewed journal articles included a paper published in the IEEE Transactions on Information Forensics and Security.

Separate from fingerprints, my research group's biggest contribution to structured light was to acquire, process, and display structured light at a rate of 150+ frames per second. When it first appeared in Optics Express, our paper on dual-frequency pattern design was downloaded over 700 times and was the most popular open-access paper in the OSA's library for the four months after its publication. Like our work with fingerprints, our real-time structured-light work was featured in the SPIE Newsroom. The machine vision trade magazine Vision Systems Design even asked me to develop a webinar on 3-D imaging for machine vision, an honor that continues on a yearly basis today. The paper also led to the launch of Seikowave, Inc., a startup company serving the oil and gas pipeline industry with ruggedized 3-D scanners for pipeline inspection.
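The dual-frequency idea can be sketched as follows: a high-frequency sinusoid yields precise but wrapped phase, while a unit-frequency sinusoid yields coarse but unambiguous phase that resolves the fringe order. The code below implements the standard N-step phase-shifting and temporal-unwrapping formulas as a simplified illustration; it is not the exact pattern scheme of the Optics Express paper.

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase from N phase-shifted sinusoidal patterns
    I_n = A + B * cos(phi - 2*pi*n/N), via the standard N-step formula."""
    frames = np.asarray(frames, dtype=float)          # shape (N, H, W)
    shifts = 2 * np.pi * np.arange(len(frames)) / len(frames)
    shifts = shifts.reshape(-1, 1, 1)
    num = (frames * np.sin(shifts)).sum(axis=0)
    den = (frames * np.cos(shifts)).sum(axis=0)
    return np.arctan2(num, den)                       # in (-pi, pi]

def unwrap_dual_frequency(phi_high, phi_unit, f_high):
    """Unwrap the high-frequency phase using the unit-frequency phase:
    the fringe order k is the integer reconciling the two estimates."""
    k = np.round((f_high * phi_unit - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```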

In 2013, Seikowave had over $1M in sales and expects over $3M in 2014, with sales extending into Europe and the Middle East. Peer-reviewed journal articles included papers published in JOSA A, the IEEE TIP, and Optics Letters, as well as our paper, "Robust Active Stereo Vision Using Kullback-Leibler Divergence," which appeared in the March 2012 edition of the IEEE Transactions on Pattern Analysis and Machine Intelligence. The paper focused on improving the reconstructions of an active stereo system, one using two cameras and one projector, by exploiting all available information through a novel information-theoretic criterion.
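The measure at the heart of that criterion is the Kullback-Leibler divergence, D_KL(P||Q) = sum_i p_i log(p_i / q_i), which scores how poorly a candidate distribution Q explains an observed distribution P. Below is a minimal sketch of a KL-based matching cost that compares normalized per-pixel intensity sequences across the projected patterns; this is an illustrative cost, not the paper's exact formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete
    distributions; eps guards against zero-valued bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def match_cost(cam_series, proj_series):
    """Score a candidate camera/projector correspondence by the KL
    divergence between their intensity sequences over the projected
    patterns; lower cost means a more plausible match."""
    return kl_divergence(cam_series, proj_series)
```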

Separate from the major programs described above, I've had smaller collaborations, including a project with Royce Mohan, now with the University of Connecticut's medical school, to develop a machine vision system for high-throughput screening for drug discovery, which was funded through an NIH R01 award. I am also currently involved in a research project managed in the Department of Civil Engineering by Dr. Reginald Souleyrette, who intends to use 3-D scanning to develop a quantitative method for determining the need to rehabilitate rail crossings; this research participates as a consortium partner in the National University Rail (NURail) Center. And I'm working with Jeffrey Bewley, a faculty member in the University of Kentucky's Department of Animal and Food Sciences, to use 3-D webcams for making body condition scores (BCS) of dairy cows. BCS is akin to a body-fat index in which fatter cows are preferred. By automating their scoring, we hope to introduce a range of technologies that help farmers individualize each animal's care. As part of this work, my graduate student, Anthony Shelly, published his Master's thesis on using a PrimeSense camera to measure feed intake by measuring the volume inside a feed bin before and after each animal's feeding, as sketched below. The resulting thesis has been accessed online over 400 times since it first appeared last May, a staggering number for an MS thesis in Electrical Engineering.
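The feed-intake measurement itself reduces to integrating the drop of the feed surface across the bin: each pixel of a downward-looking depth camera contributes its increase in depth times its ground footprint. A minimal sketch under that assumption; the footprint constant is an illustrative placeholder that would come from camera calibration in practice.

```python
import numpy as np

def feed_volume_change(depth_before, depth_after, pixel_area_m2=1e-6):
    """Estimate the volume of feed removed from a bin from two
    downward-looking depth maps (in meters).  Where feed was eaten the
    surface drops, so the camera-to-surface depth increases; summing the
    per-pixel depth increase times each pixel's ground footprint gives
    the volume change.  pixel_area_m2 is an illustrative constant."""
    drop = np.asarray(depth_after, float) - np.asarray(depth_before, float)
    drop = np.clip(drop, 0.0, None)  # discard noise where the surface rose
    return float(drop.sum() * pixel_area_m2)
```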
 