# Error Detection Mechanisms

Error detection identifies errors caused by noise or other impairments during transmission from the transmitter to the receiver.
Error correction goes further: it detects errors and reconstructs the original, error-free data.

## Common Error Detection Mechanisms

1. Parity bit: error detection
2. Checksum: error detection for accidental changes
3. Cyclic redundancy check (CRC): error detection and correction
4. Message digest (fingerprint): error detection and unique identification

## Parity Bit

• Even parity: appending a bit to the bit stream so that the number of 1s becomes even, e.g., 1000(1) or 1001(0).
• Odd parity: appending a bit to the bit stream so that the number of 1s becomes odd, e.g., 1000(0) or 1001(1).
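The two parity rules above can be sketched in a few lines of Python (the helper name `parity_bit` is illustrative, not from the text):

```python
def parity_bit(bits: str, even: bool = True) -> str:
    """Append a parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    if even:
        bit = "0" if ones % 2 == 0 else "1"
    else:
        bit = "1" if ones % 2 == 0 else "0"
    return bits + bit

# The examples from the text:
parity_bit("1000")              # even parity -> "10001"
parity_bit("1001")              # even parity -> "10010"
parity_bit("1000", even=False)  # odd parity  -> "10000"
parity_bit("1001", even=False)  # odd parity  -> "10011"
```

The receiver simply recounts the 1s: if the count has the wrong parity, at least one bit was flipped in transit (an even number of flipped bits, however, goes undetected).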

## Checksum

A checksum is calculated over the data to detect accidental changes. A common textbook scheme splits an 8-bit stream, e.g., 10110110, into two 4-bit halves, sums them, and derives the checksum from the sum.
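As a sketch, assuming an Internet-style one's-complement checksum over the two 4-bit halves (the halves are added with end-around carry and the sum is complemented; the function name is illustrative):

```python
def checksum_4bit(bits: str) -> str:
    """One's-complement checksum over 4-bit halves of a bit stream."""
    assert len(bits) % 4 == 0
    total = 0
    for i in range(0, len(bits), 4):
        total += int(bits[i:i + 4], 2)
        total = (total & 0xF) + (total >> 4)  # fold the carry back in (end-around carry)
    return format((~total) & 0xF, "04b")      # one's complement of the sum

# 10110110 -> halves 1011 (11) and 0110 (6); 11 + 6 = 17, carry folds to 2,
# and the complement of 0010 is 1101:
checksum_4bit("10110110")  # -> "1101"
```

The receiver adds the data halves and the checksum the same way; an undamaged message yields all 1s (1111), so anything else signals corruption.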

## Cyclic Redundancy Check (CRC)

A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to digital data. Blocks of data entering these systems get a short check value attached, based on the remainder of a polynomial division of their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used for error correction (see bitfilters).

CRCs are so called because the check (data verification) value is a redundancy (it expands the message without adding information) and the algorithm is based on cyclic codes. CRCs are popular because they are simple to implement in binary hardware, easy to analyze mathematically, and particularly good at detecting common errors caused by noise in transmission channels. Because the check value has a fixed length, the function that generates it is occasionally used as a hash function.
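The polynomial division behind a CRC can be sketched as mod-2 long division over bit strings (a minimal illustration; the function name and the 4-bit generator 1011, i.e., x³ + x + 1, are chosen for the example, not mandated by the text):

```python
def crc_remainder(data: str, poly: str) -> str:
    """Return the CRC check value: the remainder of mod-2 polynomial division."""
    n = len(poly) - 1                          # degree of the generator polynomial
    bits = [int(b) for b in data] + [0] * n    # append n zero bits to the message
    divisor = [int(b) for b in poly]
    for i in range(len(data)):
        if bits[i] == 1:                       # XOR the divisor in wherever a 1 leads
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return "".join(str(b) for b in bits[-n:])  # the last n bits are the remainder

crc = crc_remainder("11010011101100", "1011")
# Receiver check: dividing the message with the CRC appended leaves a zero remainder.
assert crc_remainder("11010011101100" + crc, "1011") == "000"
```

The sender transmits the message with the remainder attached; the receiver repeats the division and treats any nonzero remainder as corruption.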

~ Wikipedia

## Hash Functions

Hash functions are related to, but often confused with, checksums, cyclic redundancy checks (CRCs), and digital fingerprints (message digests). “Any function that can be used to map data of arbitrary size to fixed-size values,” aka hash values or hashes, qualifies as a hash function. Hash functions may map different inputs to the same hash value, producing hash collisions. (Wikipedia)

In a hash table, aka hash map or dictionary, a hash function computes the hash value used as an index to store and retrieve data items or records.
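A toy hash table makes the index role concrete (a minimal sketch; the class and method names are illustrative, and collisions are handled by chaining within a bucket):

```python
class ToyHashTable:
    """Minimal chained hash table: hash(key) picks the bucket index."""

    def __init__(self, size: int = 8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key) -> int:
        return hash(key) % len(self.buckets)   # hash value -> bucket index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for pair in bucket:
            if pair[0] == key:                 # key already present: update in place
                pair[1] = value
                return
        bucket.append([key, value])            # colliding keys chain in the same bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = ToyHashTable()
table.put("alice", 1)
table.put("bob", 2)
table.get("alice")  # -> 1
```

Because different keys can hash to the same bucket (a collision), each bucket holds a small list that is scanned for the exact key.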

## Message Digest

Cryptographic hash functions produce a message digest as a digital fingerprint uniquely identifying the original data.

Fingerprint functions may be seen as high-performance hash functions used to uniquely identify substantial blocks of data where cryptographic hash functions may be unnecessary.

Mainstream cryptographic-grade hash functions generally can serve as high-quality fingerprint functions, are subject to intense scrutiny from cryptanalysts, and have the advantage that they are believed to be safe against malicious attacks.

~ Wikipedia
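Python's standard `hashlib` module shows the fingerprint property in practice (SHA-256 is used here as one example of a cryptographic hash; the input bytes are arbitrary):

```python
import hashlib

# Identical data always yields the identical digest, while even a
# one-character change produces a completely different digest.
digest = hashlib.sha256(b"hello world").hexdigest()
altered = hashlib.sha256(b"hello worle").hexdigest()

len(digest)        # -> 64 hex characters, i.e., 256 bits
digest != altered  # -> True
```

This is why a stored digest can later be recomputed to verify that a file or message has not changed.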

## Summary

We often generate codes, such as parity bits, checksums, CRCs, or message digests/fingerprints, against data of interest to ensure data integrity. Some codes are used for error detection only, while others can be used for error detection, correction, and/or unique identification.