Each instrument type has its own uncertainty characteristics. By simulating repeated measurements of a point while injecting normally distributed random error based on those uncertainty variables, it is possible to build an uncertainty cloud for any point measured by that instrument. This provides an accurate estimate of the point's uncertainty, both visually in 3D and numerically. Viewed side by side, the differing characteristics of each instrument result in drastically different measurement uncertainties (in size, shape, and density):
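As an illustration of the idea, here is a minimal Monte Carlo sketch for a spherical-measurement instrument (two angles plus a distance, each perturbed by an assumed Gaussian uncertainty). The function name and the uncertainty values are hypothetical, not SA's actual uncertainty variables:

```python
import math
import random

def uncertainty_cloud(az, el, r, sigma_ang, sigma_dist, n=1000, seed=1):
    """Monte Carlo uncertainty cloud for one point measured in spherical
    coordinates (azimuth, elevation, range).

    sigma_ang (radians) and sigma_dist (same units as r) stand in for the
    instrument's uncertainty variables.
    """
    rng = random.Random(seed)
    cloud = []
    for _ in range(n):
        a = az + rng.gauss(0.0, sigma_ang)   # perturb azimuth
        e = el + rng.gauss(0.0, sigma_ang)   # perturb elevation
        d = r + rng.gauss(0.0, sigma_dist)   # perturb distance
        cloud.append((d * math.cos(e) * math.cos(a),
                      d * math.cos(e) * math.sin(a),
                      d * math.sin(e)))
    return cloud

# A tracker-like instrument: tight distance, looser angles. At long range
# the angular error dominates, so the cloud flattens perpendicular to the
# line of sight -- the size, shape, and density depend on the instrument.
cloud = uncertainty_cloud(az=0.3, el=0.1, r=10.0,
                          sigma_ang=5e-5, sigma_dist=1e-5)
```

Plotting such clouds for different instrument types at the same range makes the differing size, shape, and density immediately visible.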
Now, consider the situation in which a long part is to be measured. Due to its shape, this part might require multiple instrument stations along its length. In order to tie all of the measurements into a single coordinate system, a number of common points are measured between adjacent sets of instruments, and then these points are best-fit together to determine the overall instrument network. For some cases, this method is a perfectly valid approach, and in fact, this is the approach taken by other metrology software packages for tying a network of instruments together. In particular, fairly small instrument networks with low uncertainty requirements can use this approach without problems.
However, using best-fit introduces error stack-up, very similar to the idea of tolerance stack-up in engineering drawings. This error stack-up can be significant. The picture below depicts an instrument network using a best-fit compared to the true instrument positions (ghosted). A slight error for the first set of common points (between Instruments A & B) will move the position and orientation of Instrument B out of its true position. By fitting Instrument C to Instrument B, not only does Instrument C inherit the error from the first set of common points (which has now been “leveraged” by its distance from those common points), but now errors in the common points between Instruments B & C cause even more error to stack up. The end result is a network that could potentially be quite different from reality.
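The leveraging effect can be sketched with a toy 2D model: each fit between adjacent stations is assumed to leave a small residual rotation, and every later station inherits that rotation leveraged by its distance from the tie point. All names and values here are illustrative, not a model of any particular instrument:

```python
import math
import random

def chained_fit_error(n_links, link_len, tie_angle_sigma, seed=7):
    """Toy 2D model of best-fit error stack-up along a chain of stations.

    Each tie between adjacent instruments leaves a small residual rotation
    (tie_angle_sigma, radians); all downstream stations inherit it, leveraged
    by their distance from the tie point. Returns the lateral deviation of
    the last station from the ideal straight chain.
    """
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(n_links):
        heading += rng.gauss(0.0, tie_angle_sigma)  # residual tie rotation
        x += link_len * math.cos(heading)
        y += link_len * math.sin(heading)
    return abs(y)

# Averaged over many trials, the end-station error grows much faster than
# the chain length, because early tie errors are leveraged by every link.
short = sum(chained_fit_error(2, 5.0, 1e-4, seed=s) for s in range(200)) / 200
long_ = sum(chained_fit_error(10, 5.0, 1e-4, seed=s) for s in range(200)) / 200
```

Running this shows the ten-link chain ends up several times farther from truth than the two-link chain, even though the per-tie error is identical.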
When more accuracy is desired, or when the measurement network is elongated, large in scale, fairly two-dimensional, or C-shaped (non-closed, meaning the first instrument is not tied back to the last), best-fitting is usually inadequate. Some examples of real-world measurements that benefit significantly from USMN include particle accelerator measurements (very large and mostly two-dimensional), long linear measurements with high accuracy requirements (such as the catapult rail on an aircraft carrier), and large “open” networks (such as measurement of two sides of a building).
Instead of using a best-fit, USMN takes an intelligent weighted bundle approach. USMN examines each common measurement and considers the characteristics of the instruments that measured it and their positions in space. Components of instrument observations that are considered to have low uncertainty are assigned a higher weight than components considered to have higher uncertainty. If a total station and a laser tracker measure a common point from different angles, then the angular measurement from the total station will be assigned more weight than that of the tracker, and the distance measurement from the laser tracker will be assigned more weight than that of the total station.
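The weighting idea can be sketched with classic inverse-variance weighting: a component with low uncertainty gets a proportionally higher weight. The function name and the numeric sigmas below are hypothetical stand-ins, not SA's actual weighting scheme:

```python
def weighted_combine(estimates):
    """Inverse-variance weighting: lower uncertainty earns higher weight.

    estimates: list of (value, sigma) pairs for the same quantity.
    Returns the weighted mean and its combined sigma.
    """
    weights = [1.0 / (s * s) for _, s in estimates]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return mean, (1.0 / total) ** 0.5

# Hypothetical values: a tracker's distance component (tight sigma) and a
# total station's distance component (loose sigma) observing the same range.
value, sigma = weighted_combine([(10.0002, 0.0001), (10.0050, 0.0020)])
```

The combined value lands very close to the tracker's distance observation, and the combined uncertainty is smaller than either input alone, which is exactly why mixing instrument types in one network pays off.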
Once these weights are assigned, the instrument transforms are adjusted to calculate their most likely positions based on all available information. The net result is an instrument network that is significantly closer to reality than the best-fit network described above.
In summary, USMN considers each instrument’s uncertainty variables and perturbs instrument positions in order to get minimal measurement closures on the common points. This is similar to a traditional bundle process, except the optimization process uses the estimates of the instrument uncertainties and the range of each observation to weight the individual contributions for each measurement.
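As a simplified illustration of adjusting an instrument pose to minimize weighted closures on common points, here is a weighted 2D rigid fit. The 2D simplification, the function name, and the example numbers are all illustrative; SA's actual solver perturbs full 3D transforms using the instrument uncertainty models described above:

```python
import math

def weighted_pose_fit(common_a, common_b, sigmas):
    """Weighted 2D rigid fit: solve instrument B's pose (tx, ty, theta) so
    that weighted closures on the common points are minimized.

    common_a: common points in the network (A) frame; common_b: the same
    points as B observed them locally; sigmas: per-point uncertainties,
    turned into 1/sigma^2 weights.
    """
    w = [1.0 / (s * s) for s in sigmas]
    total = sum(w)
    # Weighted centroids of both point sets
    acx = sum(wi * p[0] for wi, p in zip(w, common_a)) / total
    acy = sum(wi * p[1] for wi, p in zip(w, common_a)) / total
    bcx = sum(wi * p[0] for wi, p in zip(w, common_b)) / total
    bcy = sum(wi * p[1] for wi, p in zip(w, common_b)) / total
    num = den = 0.0
    for wi, (ax, ay), (bx, by) in zip(w, common_a, common_b):
        bx, by, ax, ay = bx - bcx, by - bcy, ax - acx, ay - acy
        num += wi * (bx * ay - by * ax)  # weighted cross term
        den += wi * (bx * ax + by * ay)  # weighted dot term
    theta = math.atan2(num, den)        # rotation minimizing weighted closure
    c, s = math.cos(theta), math.sin(theta)
    tx = acx - (c * bcx - s * bcy)
    ty = acy - (s * bcx + c * bcy)
    return tx, ty, theta

# Synthetic check: B's true pose is (2, 1, 0.1 rad); its local observations
# are the common points transformed into B's frame. The fit recovers the pose.
true_tx, true_ty, true_th = 2.0, 1.0, 0.1
pts_a = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
ci, si = math.cos(-true_th), math.sin(-true_th)
pts_b = [(ci * (x - true_tx) - si * (y - true_ty),
          si * (x - true_tx) + ci * (y - true_ty)) for x, y in pts_a]
tx, ty, th = weighted_pose_fit(pts_a, pts_b, sigmas=[0.1, 0.1, 0.5])
```

Because every station is placed against all of its weighted observations at once, rather than chained fit-by-fit, the leveraged stack-up described earlier does not accumulate down the network.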