Ten things you need to know about precision displacement measurement

Despite their frequent use, terms such as accuracy, resolution, repeatability and linearity are often misunderstood. As critical factors in the selection of a displacement sensor as well as in many other precision measuring instruments, engineers must ensure they fully understand the terminology before making a purchasing decision. Unfortunately, not all displacement sensor specifications are presented in a way that allows direct comparisons to be made.

The terminology applied to sensors can be confusing but is critical when it comes to selecting the right measuring instruments for an application – especially for displacement and distance sensors. If engineers get this part wrong, they could end up paying more than they need to for over-specified sensors. Conversely, a control system or product may lack critical performance if the displacement sensor does not meet the required specification.


Resolution

Resolution is one of the most frequently misunderstood and poorly defined descriptions of performance. A simple resolution statement in a technical datasheet rarely provides sufficient information for a fully informed sensor selection.

The resolution of a sensor is defined as the smallest possible change it can detect in the quantity that it is measuring. Resolution is not accuracy. An inaccurate sensor could have high resolution, and a low-resolution sensor may be accurate in some applications.

In practice, the resolution is determined by the signal-to-noise ratio, taking into account the acquired frequency range. Often in a digital display, the least significant digit will fluctuate, indicating that changes of that magnitude are only just resolved. The resolution is related to the precision with which the measurement is made.

The electrical noise in a sensor’s output is the primary factor limiting its smallest possible measurement. For example, a 5µm displacement will be lost if the sensor has 10µm of noise in its output. It is therefore essential that the resolution of the selected sensor is significantly finer than the smallest measurement that is required. Best practice calls for a resolution at least 10 times finer than the required measurement accuracy. In addition, resolution is only meaningful within the context of the system bandwidth, the unit of measure, the application and the measurement method used by the sensor manufacturer.
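The 10:1 rule of thumb above can be sketched as a simple check. The function name and figures below are illustrative, not taken from any datasheet:

```python
def resolution_ok(sensor_resolution_um: float, smallest_measurement_um: float) -> bool:
    """Rule of thumb: the sensor's resolution should be at least
    10x finer than the smallest change to be measured."""
    return sensor_resolution_um <= smallest_measurement_um / 10.0

# A 5 um displacement is lost in a sensor with 10 um of output noise,
# but comfortably resolved by a sensor with 0.5 um resolution.
print(resolution_ok(10.0, 5.0))  # False
print(resolution_ok(0.5, 5.0))   # True
```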


Accuracy

The accuracy of a displacement sensor describes the maximum measuring error, taking into account all the factors that affect the real measurement value. These factors include linearity, resolution, temperature stability, long-term stability and a statistical error (which can be reduced by averaging multiple readings).


Repeatability

Repeatability is a quantitative specification of the deviation between mutually independent measurements taken under the same conditions. It describes how consistently the sensor delivers the same output for the same input, measurement after measurement. In terms of displacement sensors, repeatability is a measure of the sensor’s stability over time.

Typically, sample-to-sample repeatability will be lower at very fast sample rates, since less time is available to average the measurement. As the sample rate is lowered, repeatability improves, but not indefinitely: below some sample rate, repeatability starts to worsen again as long-term drift in the components and temperature changes alter the sensor’s output.
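The averaging effect described above can be illustrated with a small simulation. This sketch assumes purely white Gaussian noise, for which averaging N samples reduces the noise by a factor of √N; the function name and figures are illustrative:

```python
import random

def averaged_noise_std(noise_std: float, n_samples: int, trials: int = 20000) -> float:
    """Empirical standard deviation of the mean of n_samples white-noise
    readings. For white noise it falls as noise_std / sqrt(n_samples)."""
    means = []
    for _ in range(trials):
        readings = [random.gauss(0.0, noise_std) for _ in range(n_samples)]
        means.append(sum(readings) / n_samples)
    mu = sum(means) / trials
    var = sum((m - mu) ** 2 for m in means) / trials
    return var ** 0.5

# Averaging 16 samples should cut the noise roughly 4x (sqrt of 16).
print(averaged_noise_std(1.0, 1))
print(averaged_noise_std(1.0, 16))
```

Real sensor noise is rarely perfectly white, which is one reason the improvement stops at slow sample rates, where drift dominates.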

Signal-to-noise ratio

The quality of a transmitted useful signal can be stated by its signal-to-noise ratio (SNR), which often limits the accuracy with which measurements can be performed. Noise arises in any data transmission. The greater the separation between the noise and the useful signal, the more reliably the transmitted data can be reconstructed from the signal. If, during digital sampling, the noise power and the useful signal power become too close, an incorrect value may be detected and the information corrupted.

The SNR is calculated by dividing the mean useful signal power by the mean noise power. It is generally understood to be the ratio of the detected powers (not amplitudes) and is often expressed in decibels. Usually, the definition refers to electrical powers at the output of some kind of sensor or detector. In optical measurements, a common situation is that a light beam impinges on a photodetector such as a photodiode, which produces a photocurrent in proportion to the optical power, with some electronic noise added. Depending on the situation, the SNR may be limited either by optical noise influences or by noise generated by the sensor electronics.
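The power-ratio definition above translates directly into a one-line calculation. The function name and figures are illustrative:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR as the ratio of mean signal power to mean noise power,
    expressed in decibels (10*log10, since these are powers not amplitudes)."""
    return 10.0 * math.log10(signal_power / noise_power)

# A useful signal 1000x stronger than the noise floor gives 30 dB.
print(snr_db(1000.0, 1.0))  # 30.0
```

Note that if amplitudes rather than powers were being compared, the factor would be 20 instead of 10, since power scales with the square of amplitude.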


Linearity

The maximum deviation between an ideal straight-line characteristic and the real characteristic is known as the non-linearity (often quoted simply as the ‘linearity’) of the sensor. The figure is normally stated as a percentage of the measuring range, or percentage of full-scale output (% FSO).

In many applications, the sensor non-linearity will play a large part in determining the actual measurement accuracy. It is very common for users to quote the resolution value of a device when it is actually the linearity figure that is required. Quite often the linearity figure will be 10 or 20 times greater than the resolution, so an incorrectly specified sensor will dramatically underperform.
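One common way to evaluate this figure is as the maximum deviation from a least-squares best-fit line, expressed as % FSO. This is a sketch under that assumption (some suppliers use end-point or other reference lines instead); the data points are illustrative:

```python
def nonlinearity_pct_fso(inputs, outputs):
    """Max deviation of the measured characteristic from the
    least-squares straight line, as a percentage of full-scale output."""
    n = len(inputs)
    mx = sum(inputs) / n
    my = sum(outputs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
             / sum((x - mx) ** 2 for x in inputs))
    intercept = my - slope * mx
    max_dev = max(abs(y - (slope * x + intercept))
                  for x, y in zip(inputs, outputs))
    fso = max(outputs) - min(outputs)
    return 100.0 * max_dev / fso

# A perfectly linear characteristic deviates 0% from the best-fit line.
print(nonlinearity_pct_fso([0, 1, 2, 3], [0.0, 2.0, 4.0, 6.0]))
# A slightly bowed characteristic shows a non-zero figure (~0.83% FSO here).
print(nonlinearity_pct_fso([0, 1, 2, 3], [0.0, 2.1, 4.1, 6.0]))
```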

Long-term stability

Despite the use of high-quality components, the stability of sensors and measurement systems can change over the course of time. That is, with an unchanged input quantity and constant ambient conditions, the possible change in the output signal over a given period is recorded. This figure is typically stated in % FSO/month.
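Converting a % FSO/month figure into an expected error is straightforward. This sketch assumes the worst case of drift accumulating linearly at the stated rate; the function name and figures are illustrative:

```python
def long_term_drift_um(fso_mm: float, drift_pct_fso_per_month: float,
                       months: float) -> float:
    """Accumulated output drift in micrometres, assuming the worst case
    of linear drift at the stated rate over the whole period."""
    return fso_mm * 1000.0 * (drift_pct_fso_per_month / 100.0) * months

# A 10 mm range sensor drifting at 0.01% FSO/month for a year: 12 um.
print(long_term_drift_um(10.0, 0.01, 12))  # 12.0
```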

Temperature stability

Check the technical datasheet and you may find that most suppliers of low cost laser sensors do not state the ‘temperature stability’ of their sensors. So how do you know the actual measurement error, or how to correct your results to account for it? Typically, measurement errors can be as high as 400ppm/K, which can significantly affect the measurement accuracy.

On the other hand, a supplier of high performance laser sensors is much more likely to state the temperature stability of a sensor on the datasheet. In addition, active temperature compensation algorithms may be provided for the sensor, improving the temperature stability to 100ppm/K or better.
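To see why these ppm/K figures matter, the temperature-induced error can be estimated as range x stability x temperature change. This sketch assumes the ppm figure is referenced to the measuring range (conventions vary between suppliers); the function name and figures are illustrative:

```python
def temperature_error_um(measuring_range_mm: float,
                         stability_ppm_per_K: float,
                         delta_T_K: float) -> float:
    """Worst-case measurement error caused by a temperature change,
    for a stability figure given in ppm of the measuring range per kelvin."""
    range_um = measuring_range_mm * 1000.0
    return range_um * stability_ppm_per_K * 1e-6 * delta_T_K

# 10 mm range, 400 ppm/K uncompensated sensor, 5 K warm-up: 20 um of error.
print(temperature_error_um(10.0, 400.0, 5.0))  # 20.0
# The same scenario with a 100 ppm/K compensated sensor: 5 um.
print(temperature_error_um(10.0, 100.0, 5.0))  # 5.0
```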

Measuring range

The measuring range describes the region in front of a sensor in which the object to be measured must be situated so that the specified technical data are met. The extremes of this region are termed the start and end of the measuring range. Some sensors exhibit a free space between the front of the sensor and the start of the measuring range. With contact sensors, the measuring range is the distance between the minimum and maximum mechanically possible distance from the sensor mounting to the measurement object.

Offset distance

The offset distance of a sensor is defined differently from supplier to supplier and from sensor principle to sensor principle. Depending on the definition, it corresponds to the distance between the sensor edge and either the centre of the measuring range or the start of the measuring range.
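When comparing datasheets that use the two different conventions, a start-of-range offset can be converted to a mid-range offset by adding half the measuring range. A minimal sketch, with an illustrative function name and figures:

```python
def centre_offset(start_of_range_mm: float, measuring_range_mm: float) -> float:
    """Convert a start-of-range offset distance to a mid-range offset
    so datasheets using different conventions can be compared."""
    return start_of_range_mm + measuring_range_mm / 2.0

# A sensor with a 20 mm start-of-range offset and a 10 mm measuring
# range has its mid-range point 25 mm from the sensor edge.
print(centre_offset(20.0, 10.0))  # 25.0
```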

Response time

Response time is the period from the measurement event to the corresponding signal output, and is often deemed to be achieved when 90% of the final output value is reached. Many sensor specifications do not state the response time, and it is often assumed to equal the stated measurement speed or measurement frequency. This is incorrect: quite often the response time will vary depending on the position of the measurement object.

For example, if the object is out of the measuring range and then moves into the measuring range, the response time can be significantly longer than the quoted measurement speed or measurement frequency.

Also, if the object is already in the measuring range but moves rapidly over a large proportion of it, say greater than 50%, the response time will again be longer than the quoted measurement speed. Care must be taken here, as this can cause problems, particularly in closed-loop control applications or where fast-moving individual components pass through the measuring range, for example in a production process.

Author profile:
Chris Jones is the managing director at Micro-Epsilon UK