measures the difference between the discrete output value and the corresponding input.
Error introduced when a value is represented by a data type that has too few bits to represent it exactly, or when a value is converted to a shorter data type. Quantization error is also called quantization noise. See also: bit, data type, quantization.
The uncertainty that is inherent when digitizing an analog value due to the finite resolution of the conversion process.
The difference between the actual value of the analog signal when it is sampled and the resulting digital word value.
For an ADC, quantization error is defined as the difference between the actual analog input and the digital representation of that value. (See 'Quantization.')
The magnitude of the error introduced when the actual signal falls between levels; it results from subdividing a video signal into discrete increments, such as levels from 0 to 255.
The inherent error introduced by digitizing an analog signal, caused by original analog values that fall between the discrete digital encoding values.
Error resulting from quantizing an analog waveform to a discrete level. In general, the longer the word length, the smaller the error.
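The word-length effect can be sketched numerically. This is a minimal illustration, assuming a full-scale range of 1.0 and a simple round-to-nearest quantizer; the `quantize` function and the sample value are illustrative, not taken from any particular converter.

```python
# Illustrative sketch: longer word length -> smaller quantization error.
# Assumes a full-scale range of 1.0 and round-to-nearest quantization.
def quantize(x, bits, full_scale=1.0):
    """Round x to the nearest of 2**bits uniformly spaced levels."""
    step = full_scale / (2**bits - 1)     # size of one quantization step (LSB)
    return round(x / step) * step

x = 0.37                                  # an "analog" value between levels
word_lengths = (4, 8, 12)
errors = [abs(x - quantize(x, bits)) for bits in word_lengths]
for bits, err in zip(word_lengths, errors):
    print(f"{bits:2d} bits: quantization error = {err:.6f}")
```

Doubling or tripling the word length shrinks the step size, and with it the largest possible rounding error.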
Error caused by the nonzero resolution of an analog-to-digital converter; this is an inherent error of the device.
When converting from an analog signal to a digital signal, error is unavoidable. An analog signal is continuous, with ideally infinite accuracy, while the digital signal's accuracy depends on the quantization resolution, or number of bits, of the analog-to-digital converter. The difference between the actual analog value and the approximated digital value due to the "rounding" that occurs during conversion is called quantization error.
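The rounding described above also bounds the error: for a round-to-nearest converter, no sample can be off by more than half a step. The sketch below assumes a hypothetical 3-bit converter over a [-1, 1] range and a sine-wave input; the `adc` function is an illustration, not a model of any real device.

```python
# Illustrative sketch: quantization error of a hypothetical 3-bit
# round-to-nearest ADC never exceeds half a step (LSB/2).
import math

bits = 3
step = 2.0 / (2**bits - 1)                 # LSB size for a [-1, 1] input range

def adc(x):
    """Return the digital code and the value it represents."""
    code = round((x + 1.0) / step)         # shift to [0, 2], round to a level
    return code, code * step - 1.0         # reconstruct the analog estimate

worst = 0.0
for i in range(1000):
    x = math.sin(2 * math.pi * i / 1000)   # continuous-valued input sample
    _, approx = adc(x)
    worst = max(worst, abs(x - approx))    # track the largest rounding error

print(f"LSB = {step:.4f}, worst-case quantization error = {worst:.4f}")
```

The worst-case error stays at or below `step / 2`, which is why quantization error is often stated as ±1/2 LSB.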