LilScan is a structured-light system: it projects a known light pattern onto the scene and triangulates depth from how that pattern appears in the camera image. In the case of LilScan, a simple line laser serves as the light source, mounted at a fixed position relative to the camera.
Perspective distortion of a line laser observed by the camera.
From the camera's viewpoint, the emitted laser line appears perspectively distorted: its shape in the image depends on where the objects reflecting the line back to the camera sensor are located in the scene.
This distortion is measured by detecting the laser line in the image domain. The location of each detected pixel of the laser line is converted from its 2D image coordinate into a 3D ray using a standard pinhole camera model. The back-projected laser points (camera rays) are then intersected with the known laser sheet to triangulate the position of the laser line in 3D space.
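The two steps above, back-projecting a pixel through a pinhole model and intersecting the resulting ray with the laser plane, can be sketched as follows. This is a minimal illustration, not LilScan's actual implementation; the intrinsics and the plane parameters are made-up example values:

```python
import numpy as np

def backproject_pixel(u, v, fx, fy, cx, cy):
    """Convert a 2D pixel (u, v) to a 3D ray direction in the camera
    frame using the standard pinhole model (lens distortion ignored
    for brevity)."""
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

def intersect_ray_plane(ray_dir, plane_normal, plane_d):
    """Intersect a camera ray (origin at the camera centre) with the
    laser sheet given in Hessian form n·x + d = 0.
    Returns the 3D intersection point."""
    t = -plane_d / np.dot(plane_normal, ray_dir)
    return t * ray_dir

# Hypothetical intrinsics of a 640x480 camera.
fx = fy = 600.0
cx, cy = 320.0, 240.0

# Hypothetical laser sheet: the plane z = 1 m in camera coordinates.
n, d = np.array([0.0, 0.0, 1.0]), -1.0

ray = backproject_pixel(320.0, 240.0, fx, fy, cx, cy)
point_3d = intersect_ray_plane(ray, n, d)  # 3D laser point in metres
```

In a real scanner the laser plane is not axis-aligned; its normal and offset come from the calibration step discussed below.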
Back-projected 2D laser points.
The mathematics behind triangulation has been known for centuries and is taught in school. However, 3D profilers require a digital camera to measure the perspective distortion of the light pattern, so the first triangulation-based laser systems date back only to the late 1980s. Nowadays, they are widely used in production environments for quality control and other inspection and sensing tasks.
Despite the general availability of high-resolution cameras and cheap processing power, these systems remain quite overpriced and hardly available on the consumer market. Several attempts to bring this technology to the maker community have failed over the last decades, because a number of core components must be addressed together, which can be quite challenging:
- Rigid baseline between the camera and the laser module
- Robust calibration, tolerant to misalignments during assembly
- Mono camera or raw camera access to avoid demosaicing artifacts
- Robust sub-pixel peak detectors
- Fast processing of incoming images
- Combination with external motors with tight timing for 3D scans
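To make the sub-pixel peak detection point concrete, here is one common approach, fitting a parabola through the brightest pixel of an image column and its two neighbours. This is a generic sketch of the technique, not necessarily the detector LilScan uses:

```python
import numpy as np

def subpixel_peak(column):
    """Locate the laser line in a single image column with sub-pixel
    accuracy by fitting a parabola through the brightest pixel and
    its two neighbours, then returning the parabola's vertex."""
    i = int(np.argmax(column))
    if i == 0 or i == len(column) - 1:
        return float(i)  # peak at the border: nothing to interpolate
    y0, y1, y2 = column[i - 1], column[i], column[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(i)  # flat top: fall back to the integer peak
    return i + 0.5 * (y0 - y2) / denom

# An asymmetric intensity profile: the true peak lies slightly
# to the right of the brightest integer pixel.
print(subpixel_peak(np.array([0.0, 1.0, 4.0, 3.0, 0.0])))  # → 2.25
```

Running one such detector per column turns each camera frame into a list of (column, sub-pixel row) samples, which are exactly the 2D points that get back-projected and intersected with the laser sheet.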