Hypothesis Testing and Binary Classification Errors

Null and Alternative Hypotheses (Source: PrepNuggets)

Wentz’s book, The Effective CISSP: Security and Risk Management, helps CISSP and CISM aspirants build a solid conceptual security model. It serves as a tutorial for information security, a supplement to the official CISSP and CISM study guides, and an informative reference for security professionals.

Null and Alternative Hypotheses

The null hypothesis is a presumption of zero effect or no deviation from the normal state. Because directly proving an assumption is difficult, we typically look for evidence against the null hypothesis and, if we find it, accept the alternative hypothesis rather than proving the alternative directly. For fingerprint verification, the null and alternative hypotheses can be written as follows (the sketch after the list shows how they map to a decision rule):

  • Null Hypothesis: The sample fingerprint matches the template in the model repository
  • Alternative Hypothesis: The sample fingerprint doesn’t match the template in the model repository
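
The hypotheses above map directly onto a biometric matcher’s decision rule. The following is a minimal Python sketch under assumed values: the similarity score, the 0.7 threshold, and the verify function name are illustrative only, not part of any particular biometric system.

```python
def verify(similarity_score: float, threshold: float = 0.7) -> str:
    """Decide whether to reject H0 (the sample fingerprint matches the template).

    A high similarity score is evidence consistent with H0 (a match); a low
    score is evidence against H0, so we reject it and accept H1 (no match).
    The 0.7 threshold is an arbitrary, illustrative value.
    """
    if similarity_score >= threshold:
        return "fail to reject H0: accept the sample as a match"
    return "reject H0: treat the sample as a non-match"


print(verify(0.92))  # high score -> accepted as a match
print(verify(0.35))  # low score  -> rejected as a non-match
```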

Biometrics-related terms like False Acceptance Rate (FAR) and False Rejection Rate (FRR) are commonly used and quite effective for communication. It’s not uncommon for people or books to relate FAR/FRR to Type I/II errors (used in statistical hypothesis testing) or False Positives/Negatives (used in binary classification).

Type I and Type II Errors

In statistics, we typically don’t propose a single hypothesis and then try to gather enough evidence to prove it. Instead, we accept the alternative hypothesis by rejecting the null hypothesis when the evidence against it is strong enough at a predefined significance level (e.g., 5%).

The decision in statistical hypothesis testing is whether or not to reject the null hypothesis. However, that decision can be wrong in two ways, classified as follows (the simulation sketch after the list illustrates both):

  • Type I Error: we reject a null hypothesis that is true (rejecting a normal case)
  • Type II Error: we fail to reject a null hypothesis that is false (accepting an abnormal case)
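
To make the two error types concrete, the following Python sketch estimates both error rates by simulating a simple threshold decision on match scores. The score distributions, sample sizes, and the 0.62 threshold are assumptions chosen only for illustration; picking the threshold plays roughly the same role as fixing a significance level, because it caps the Type I error rate.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Assumed score distributions, for illustration only: genuine attempts
# (H0 true) tend to score high, impostor attempts (H0 false) score low.
genuine_scores = rng.normal(loc=0.80, scale=0.08, size=100_000)
impostor_scores = rng.normal(loc=0.45, scale=0.08, size=100_000)

threshold = 0.62  # arbitrary decision threshold; rejecting H0 means "no match"

# Type I error: reject H0 although it is true (a genuine user is rejected).
type_1_rate = np.mean(genuine_scores < threshold)

# Type II error: fail to reject H0 although it is false (an impostor is accepted).
type_2_rate = np.mean(impostor_scores >= threshold)

print(f"Estimated Type I error rate:  {type_1_rate:.3f}")
print(f"Estimated Type II error rate: {type_2_rate:.3f}")
```

Raising the threshold rejects more genuine attempts (more Type I errors) while accepting fewer impostors (fewer Type II errors); in biometric terms, this is the familiar trade-off between FRR and FAR.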

False Positive and False Negative

When it comes to binary classification in machine learning, a model is trained as a binary classifier on a sample of labeled data and then classifies instances/cases by label (e.g., 0/1, spam/not spam, weapon/no weapon).
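
As a purely illustrative example of such a classifier, the sketch below trains a logistic-regression model on synthetic data and reads off the confusion-matrix counts with scikit-learn. The feature distributions, labels, and choice of library are assumptions made for demonstration, not anything prescribed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic two-class data: label 1 = "Imposter", label 0 = "No Imposter".
# The feature values are invented purely for illustration.
n = 2_000
features = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n // 2, 2)),  # class 0
    rng.normal(loc=1.5, scale=1.0, size=(n // 2, 2)),  # class 1
])
labels = np.array([0] * (n // 2) + [1] * (n // 2))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels
)

clf = LogisticRegression().fit(X_train, y_train)
y_pred = clf.predict(X_test)

# For binary labels, ravel() yields the counts in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")
```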

A system that implements anomaly-based detection may use Imposter/No Imposter as its classification labels:

A false positive means an imposter is identified/detected, but the decision is wrong (the user is actually genuine). A false negative means an imposter is not identified/detected, and, still, the decision is wrong (the user is actually an imposter). It’s common for people to relate false positives to Type I errors and false negatives to Type II errors, even though the two pairs of terms come from different techniques. Li’s paper compares statistical hypothesis testing with machine learning binary classification very well.
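
Following the framing above, where “Imposter” is the positive class, false positives correspond to genuine users who are falsely rejected and false negatives to impostors who are falsely accepted. The short sketch below shows the standard FRR/FAR calculations; the counts themselves are invented for illustration.

```python
# Counts from a hypothetical evaluation run ("Imposter" is the positive class).
tp = 90    # impostors correctly detected
fp = 40    # genuine users wrongly flagged as impostors (false rejections)
fn = 10    # impostors wrongly accepted (false acceptances)
tn = 860   # genuine users correctly accepted

# FRR (False Rejection Rate)  = rejected genuine attempts / all genuine attempts
# FAR (False Acceptance Rate) = accepted impostor attempts / all impostor attempts
frr = fp / (fp + tn)
far = fn / (fn + tp)

print(f"FRR = {frr:.3f}  (driven by false positives / Type I errors)")
print(f"FAR = {far:.3f}  (driven by false negatives / Type II errors)")
```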

Hypothesis Testing and Binary Classification (Source: ScienceDirect)
