Why do I keep losing tracking?
There is nothing more frustrating than purchasing your sensor, installing everything, and getting calibrated, only to find you cannot capture your subject matter. The good news is that you are not alone, and there are likely easy ways to make improvements to ensure your subject is captured successfully in 3D. Check out the following tips to set yourself up for success.
What is "tracking"?
Tracking is the sensor's ability to maintain its understanding of its position in space relative to the subject being scanned. When you set a bounding box around your object, you are telling the sensor to use that subject as a reference point; using the iPad's gyroscope and accelerometer, the sensor and software can then maintain position and tracking as you move around your subject.
If the sensor loses its position in space, it can no longer reliably capture depth data and translate it into the point cloud format we are used to seeing and working with. If you see the error message "Tracking Lost", something has gone wrong and the sensor cannot work out where it or the object is.
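To make this concrete, here is a toy sketch in Python of the kind of per-frame decision a tracker makes. The function, names, and thresholds are all hypothetical, for illustration only; this is not the Structure SDK's actual logic.

```python
# Illustrative sketch of how a tracker might decide it has lost its place.
# All names and thresholds here are hypothetical, not the Structure SDK API.

def update_tracking(matched_features: int, imu_motion_ok: bool,
                    min_features: int = 30) -> str:
    """Return a tracking status for one incoming frame.

    matched_features: how many reference points from the bounding-box
        anchor were re-identified in this frame.
    imu_motion_ok: whether the gyroscope/accelerometer motion estimate
        stayed within plausible bounds since the last frame.
    """
    if not imu_motion_ok:
        return "Tracking Lost"   # moved too fast for the motion model
    if matched_features < min_features:
        return "Tracking Lost"   # anchor object no longer recognizable
    return "Tracking OK"
```

Notice both failure modes map directly onto the tips below: scanning slowly keeps the motion estimate plausible, and keeping the anchor object in view keeps reference points matchable.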
Check 1: Your Sensor's Glass Plate
- Remove the protective plastic film. Leaving it on may cause distortions or random artifacts, or prevent your sensor from capturing anything at all.
- Keep the glass plate clean and free of oil and smudges. If you find your sensor is covered in fingerprints, take a microfiber cloth and some rubbing alcohol. Wipe down the glass plate in small circles (as opposed to side to side or up and down motions).
- Do you have a Wide Vision Lens?
- If you are room scanning, attach the Wide Vision Lens. The Wide Vision Lens augments and improves the iPad's color camera to assist in tracking across large swathes of blank wall, where the sensor might otherwise lose its place in space.
- If you are object scanning, remove the Wide Vision Lens. This is not useful in object scanning, and our software is not optimized to account for the fisheye effect of the color camera for color tracking and color/texture addition. We do not recommend using the Wide Vision Lens in these instances.
Check 2: Your Depth Coverage
While all sensors leave our manufacturing facility well-calibrated, rough handling in delivery can sometimes knock them out of calibration, causing poor depth coverage. To check this, switch your Structure app to “Depth” and face a flat, blank wall away from direct sunlight. If the Depth view shows full, even coverage across the wall, you are in good shape!
If the coverage is patchy or incomplete, or something in between, you have what we call an “IR Offset”, which happily can be fixed remotely. Please follow the instructions in this article: How can I improve my sensor's depth coverage?
Check 3: Your Environment
Our sensors project an infrared speckle pattern onto the subject and, using the twin IR cameras, measure the distortion of that pattern as it lands on the subject to calculate depth.
One limitation of this capture method is infrared saturation: when there is too much infrared in the scene, the sensor can no longer distinguish the infrared it produced from the ambient infrared. This is usually caused by too much sunlight in the scene. If you have large, open windows flooding the scene with IR light, your sensor could lose tracking.
- Close the blinds or move your scene. If you have the ability, see if blocking out the sunlight improves your scan.
- Switch IR Auto Exposure to "On". This allows the software to automatically adjust the gain and exposure to account for extra IR in the environment, making it easier to capture objects if there is indirect sunlight in the scene. Please note: IR Auto Exposure should be turned off when in lower-sunlight settings, as it introduces more noise into the scan.
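For the curious, "automatically adjust the gain and exposure" can be pictured as a simple feedback loop: if the IR image is too bright (saturated), back off; if too dark, open up. A minimal sketch in Python using an illustrative proportional controller; the function, target, and limits are assumptions, not the sensor's actual firmware behavior.

```python
def adjust_ir_exposure(mean_brightness: float, exposure_ms: float,
                       target: float = 0.5, k: float = 0.8,
                       min_ms: float = 0.1, max_ms: float = 16.0) -> float:
    """One step of a toy proportional auto-exposure loop.

    mean_brightness: average IR image brightness in [0, 1].
    Returns a new exposure time, clamped to illustrative sensor limits.
    (All constants here are made up for the example.)
    """
    error = target - mean_brightness
    new_exposure = exposure_ms * (1.0 + k * error)
    return max(min_ms, min(max_ms, new_exposure))
```

A sun-flooded scene pushes brightness up, so each step shortens the exposure; in a dim scene the loop instead raises exposure and gain, which is where the extra noise mentioned above comes from.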
Check 4: Your Objects
Black and Reflective Objects
Structured light depth sensing has a few limitations, as it relies on being able to sense the infrared speckle pattern it projects and the resulting distortions as it reaches the object.
Black objects and reflective objects pose a particular difficulty for our sensors.
Black objects absorb the IR lasers, causing the objects to “disappear” to the sensor, as the light never reflects back.
Reflective objects scatter the IR lasers in an erratic fashion, causing the sensor to lose tracking and producing random artifacts.
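The depth math underneath is standard structured-light triangulation: the observed shift (disparity) of a speckle between its projected and observed position is inversely proportional to depth. A minimal sketch assuming an idealized pinhole model, with illustrative focal length and baseline values rather than the sensor's real calibration:

```python
def depth_from_disparity(disparity_px: float, focal_px: float = 580.0,
                         baseline_m: float = 0.075) -> float:
    """Idealized structured-light depth: Z = f * b / d.

    disparity_px: shift of a speckle between its projected and observed
        position, in pixels. The focal length and baseline defaults are
        illustrative, not real calibration values.
    """
    if disparity_px <= 0:
        raise ValueError("no measurable disparity (pattern not seen)")
    return focal_px * baseline_m / disparity_px
```

This is exactly why the problem objects fail: a black object returns no speckle at all, so there is no disparity to measure, while a reflective object scatters the speckle so the measured disparity is meaningless.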
Small Objects
As the sensor is using your object as its anchor to the physical world, objects that are too small might not stand out enough from the rest of the environment to act as an effective anchor.
Generally speaking, the smallest object we suggest trying to capture is around the size of a grapefruit, and even at that size finer details may not come through in the scan.
Check 5: Your Selected Depth Stream Preset
Different objects and sizes call for different presets. Within the Scanner SDK sample app, you have access to three presets: Default, Body, and Outdoor. Within the full SDK there are two more: CloseRange and RoomScanning.
Each preset has different focal length, gain, and exposure settings to optimize it for a particular scenario. This is one of the reasons why Structure Sensor (Mark II) is so powerful, but if you are unaware of how these settings work, you can also end up frustrated.
| Preset | Minimum Range | Maximum Range |
| --- | --- | --- |
| CloseRange | 35 cm | 90 cm |
| Body | 36 cm | 98 cm |
| Default | 57 cm | 10+ m |
| Outdoor | 58 cm | 33 m |
| RoomScanning | 48 cm | 621 cm |
If you are using the unmodified Scanner sample app, choose the setting with the closest range to what fits your object. If you are building from the SDK, ensure you optimize for the use case and objects that your app expects to capture.
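If you are building from the SDK, the table above reduces to a simple range check: given how far you plan to stand from your object, keep only the presets whose range covers that distance. A small Python sketch with the ranges copied from the table (the open-ended "10+ m" maximum is treated as 10 m here, and all values are in centimetres):

```python
# Preset working ranges from the table above, in centimetres.
PRESET_RANGES = {
    "CloseRange":   (35, 90),
    "Body":         (36, 98),
    "Default":      (57, 1000),   # "10+ m" treated as 10 m here
    "Outdoor":      (58, 3300),
    "RoomScanning": (48, 621),
}

def presets_for_distance(distance_cm: float) -> list[str]:
    """Return every preset whose range covers the given working distance."""
    return [name for name, (lo, hi) in PRESET_RANGES.items()
            if lo <= distance_cm <= hi]
```

For example, at a working distance of 150 cm only Default, Outdoor, and RoomScanning remain in range, which is why exceeding roughly a meter on the Body preset loses tracking.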
Check 6: Your Technique
Finally, good technique is essential for producing good scans. Here are some tips to help you scan as effectively as possible.
- Start your scan with a fully-charged sensor and iPad. Once the sensor's charge drops below 10%, your scans may suffer, and you may be interrupted with a "Please Charge Structure Sensor" message. Make sure your iPad has enough power as well.
- Scan slowly. Moving too quickly can cause the sensor to lose the object's position in space.
- Scan each area only once, but thoroughly. You might think that scanning the same area multiple times will ensure all holes are filled and the mesh is watertight and contiguous. In practice, this muddles the mesh and can also muddle the texture and color. A better method is to scan an area slowly, stopping every 10-15 degrees to trigger a color keyframe capture. Fill the holes in one area, then move on to the next. A good scan might take 3-4 minutes, but you should only need to take it once.
- Start on a wider plane. This gives your sensor the largest possible anchor and reference point. For example, if you wish to scan a foot, start from the top or bottom of the foot, rather than from the side.
- Stay within your chosen preset's range. As mentioned above, each preset has a different minimum and maximum range. If you are using the Body preset, exceeding a meter distance will cause you to lose tracking.
- Begin the scan, then move slightly back. The beginning of a scan is important to ensure the object is positioned fully within the bounding box, but moving slightly back after beginning capture is a good way to help the sensor maintain tracking.
- Make sure you have access. You will need to be able to move 360 degrees around your object. Set up your space and object accordingly, and move furniture and other obstacles out of the way before beginning your scan. If you must scan underneath an object, make sure it is elevated enough for you to easily reach the underside. If you must scan the top of the object, make sure you have a way to scan from above.
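As a quick sanity check on pacing, the "stop every 10-15 degrees" tip above works out to roughly 24-36 color keyframes for a full orbit, which is consistent with a careful single pass taking a few minutes:

```python
import math

def keyframes_for_orbit(step_deg: float, sweep_deg: float = 360.0) -> int:
    """Number of pause-and-capture stops needed to cover a sweep,
    given the angular spacing between color keyframes."""
    return math.ceil(sweep_deg / step_deg)
```

At 15-degree spacing that is 24 stops; at 10 degrees, 36.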