Photogrammetry stitches together photographs of an object or scene, taken from different angles, into a three-dimensional digital representation. The method uses a series of photographs (usually 60-120 for an object the size of a coffee maker).
The algorithms behind photogrammetry are similar to those used in facial recognition tools and in camera-based car warning systems that detect obstacles. First, a photogrammetry tool breaks each photo of an object down into pixels, the basic building blocks of a digital image. To find patterns across all of the photos of an object, the algorithm looks for three things:
Photogrammetry relies on these patterns to identify matching points across the photos. It then uses geometry to determine the 3D position of each matched point from the angles and distances between camera positions. Some tools also take camera settings into account, such as the focal length or the direction your phone was tilted when you took the photo. The software first builds a rough point cloud, then refines it with more detail. A mesh is generated by connecting these points, and textures from the original images are overlaid onto the mesh to make the digital object look realistic.
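The geometric step above, recovering a 3D point from its position in two photos, is called triangulation. Here is a minimal sketch of it with a toy two-camera setup; the camera matrices, point coordinates, and the `triangulate` helper are illustrative assumptions, not the internals of any particular photogrammetry tool:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D image positions in two cameras.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the
    (x, y) pixel coordinates of the same matched point in each image.
    Each observation contributes two linear constraints on the
    homogeneous 3D point X (the Direct Linear Transform method).
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Toy cameras: one at the origin, one shifted 1 unit to the side.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras to get matched pixels.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 3))
```

A real tool repeats this for thousands of matched points at once, and it must first estimate the camera matrices themselves from the photos, which is where camera settings like focal length help.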
This process isn't perfect. If an object is a single solid color, a photogrammetry tool will have difficulty reconstructing its shape, because there are no distinctive patterns to match between photos. Highly reflective objects like metal can add glare to the photos, introducing highlights that the algorithm can't match reliably across images.