Measuring Differences Between Distributions

16 minute read

In mathematics, the concept of “distance” extends beyond the everyday understanding of the term. Typically, when we think of distance, we envision Euclidean distance, which is the straight-line distance between two points in space. This form of distance is familiar and intuitive, often represented by the length of a line segment connecting two points on a plane. Euclidean distance is widely used in various fields, from geometry to physics, and serves as a foundation for many mathematical and scientific applications.

However, mathematics offers a plethora of ways to measure distance, especially when dealing with more abstract concepts such as probability distributions. When comparing two probability distributions, we need a method to quantify how different they are from each other. This is where more specialized distance measures come into play.

Two prominent measures used to compare probability distributions are Kullback-Leibler (KL) divergence and Wasserstein distance. Both of these metrics provide a way to quantify the difference between distributions, but they do so in fundamentally different ways.

KL divergence, often referred to as relative entropy or information gain, measures the amount of information lost when one probability distribution is used to approximate another. It gives an indication of how one distribution diverges from another by focusing on the ratio of their probabilities. KL divergence is particularly useful in fields like machine learning, where it is employed in the training of various models, including neural networks and generative adversarial networks (GANs).

On the other hand, Wasserstein distance, also known as the Earth Mover’s distance, originates from the field of optimal transport. It measures the minimal “work” required to transform one probability distribution into another by considering both the amount of probability mass that needs to be moved and the distance over which it needs to be transported. This measure takes into account the underlying geometry of the space in which the distributions are defined, making it especially valuable in applications such as computer vision and machine learning.

In this article, we will delve deeper into these two measures of distance between probability distributions. We will explore their definitions, intuitive examples, methods of calculation, and practical applications, providing a comprehensive understanding of how they help us quantify differences in statistical and probabilistic contexts.

Kullback-Leibler (KL) Divergence

Kullback-Leibler (KL) divergence, also known as relative entropy or information gain, is a statistical measure that quantifies the difference between two probability distributions. Specifically, it measures how much information is lost when one probability distribution (Q) is used to approximate another probability distribution (P). Unlike traditional distance metrics, KL divergence is not symmetric, meaning that the divergence of P from Q is not the same as the divergence of Q from P. This asymmetry highlights the directional nature of the information loss.

To understand KL divergence intuitively, consider the following example involving customer preferences for apples and bananas over two consecutive years. Suppose you are a chef who annually surveys customers about their favorite fruits. Each year, customers can choose either apples (a) or bananas (b). The survey results provide you with the probability distributions of customer preferences for each year.

In Year 1, 40% of customers preferred apples, and 60% preferred bananas. This distribution can be represented as: \(Q(a) = 0.4, \quad Q(b) = 0.6\)

In Year 2, the preferences shifted significantly: 10% of customers preferred apples, and 90% preferred bananas. This new distribution is: \(P(a) = 0.1, \quad P(b) = 0.9\)

KL divergence can measure how different the distribution of preferences in Year 2 is from Year 1. It focuses on the individual elements within each distribution and examines the ratio of their probabilities across the two years. For instance:

  • The proportion of customers who preferred apples in Year 1 (Q(a)) was 0.4, while in Year 2 (P(a)) it was 0.1. The ratio \(\frac{P(a)}{Q(a)}\) is 0.25.
  • Similarly, for bananas, the proportion in Year 1 (Q(b)) was 0.6, and in Year 2 (P(b)) it was 0.9. The ratio \(\frac{P(b)}{Q(b)}\) is 1.5.

Using these ratios, KL divergence computes the average of the log-ratios, weighted by the probabilities in distribution P. The formula for KL divergence \(D_{KL}(P \| Q)\) is:

\[D_{KL}(P \| Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)}\]

In this formula:

  • \(P(i)\) represents the probability of event \(i\) in distribution P.
  • \(Q(i)\) represents the probability of event \(i\) in distribution Q.
  • The sum is taken over all possible events (in this case, apples and bananas).
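
Plugging the survey numbers into this formula (with natural logarithms, as used in the Python code below) gives:

\[D_{KL}(P \| Q) = 0.1 \log \frac{0.1}{0.4} + 0.9 \log \frac{0.9}{0.6} = 0.1 \log(0.25) + 0.9 \log(1.5) \approx 0.2263\]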

To see this in action, you can calculate the KL divergence for the given example using Python:

import numpy as np

# Define the distributions
Q = np.array([0.4, 0.6])
P = np.array([0.1, 0.9])

# Calculate KL divergence D(P||Q)
D_PQ = np.sum(P * np.log(P / Q))

print("D(P||Q) =", D_PQ)

Running this code will give you the output: \(D(P \| Q) = 0.22628916118535888\)

This value quantifies how different the Year 2 distribution is from the Year 1 distribution. A higher value indicates a greater difference.
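
If you prefer a library routine to the manual NumPy expression, SciPy provides an equivalent computation. The sketch below assumes SciPy is installed; scipy.stats.entropy returns the relative entropy (KL divergence) when given two distributions, using natural logarithms by default:

from scipy.stats import entropy

# Same distributions as above
Q = [0.4, 0.6]
P = [0.1, 0.9]

# entropy(pk, qk) computes sum(pk * log(pk / qk)), i.e. D(P||Q)
print("D(P||Q) =", entropy(P, Q))  # ~0.2263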

It’s important to note that KL divergence is asymmetric. This means that \(D(P \| Q) \neq D(Q \| P)\). To confirm this, you can calculate the divergence of Year 1 from Year 2 using the following code:

# Calculate KL divergence D(Q||P)
D_QP = np.sum(Q * np.log(Q / P))

print("D(Q||P) =", D_QP)

Running this code will yield: \(D(Q \| P) = 0.31123867958305756\)

As shown, the divergence of Year 1 from Year 2 is different from the divergence of Year 2 from Year 1, highlighting the directional nature of KL divergence.

KL divergence is a powerful tool for measuring the difference between two probability distributions. It is widely used in various fields, including machine learning, where it helps in model training by comparing the predicted distribution with the true distribution. Understanding and calculating KL divergence allows for a deeper insight into how distributions change and how much information is lost in approximations.

Properties of KL Divergence

Asymmetry of KL Divergence

One of the key properties of KL divergence is its asymmetry. This means that the KL divergence of \(P\) from \(Q\) is not the same as the KL divergence of \(Q\) from \(P\). Mathematically, this is represented as:

\[D_{KL}(P \| Q) \neq D_{KL}(Q \| P)\]

This asymmetry indicates that the information loss when using \(P\) to approximate \(Q\) is not the same as the information loss when using \(Q\) to approximate \(P\). This property is crucial for understanding the directional nature of KL divergence, where the order of distributions matters significantly.

Example to Illustrate Asymmetry with Python Code

To illustrate the asymmetry of KL divergence, let’s consider the same example distributions \(P\) and \(Q\) from earlier:

import numpy as np

# Define the distributions
Q = np.array([0.4, 0.6])
P = np.array([0.1, 0.9])

# Calculate KL divergence D(P||Q)
D_PQ = np.sum(P * np.log(P / Q))
print("D(P||Q) =", D_PQ)

# Calculate KL divergence D(Q||P)
D_QP = np.sum(Q * np.log(Q / P))
print("D(Q||P) =", D_QP)

Running this code will give you the outputs:

\(D(P \| Q) = 0.22628916118535888\)

\(D(Q \| P) = 0.31123867958305756\)

As shown, \(D(P \| Q)\) is not equal to \(D(Q \| P)\), demonstrating the asymmetry of KL divergence.

Applications of KL Divergence

KL divergence is widely used in various fields, particularly in machine learning and information theory. Some of the common applications include:

Machine Learning Models

KL divergence is used in the training of machine learning models such as neural networks and generative adversarial networks (GANs), where it helps measure the difference between predicted and actual probability distributions and guides the optimization process. The closely related notion of information gain drives the split criteria of decision trees and random forests.

Variational Inference

In Bayesian machine learning, KL divergence is used in variational inference to approximate complex posterior distributions. It measures the difference between the true posterior distribution and the approximating distribution.
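
As a simple, concrete illustration, suppose both the variational approximation and the true posterior are univariate Gaussians; in that case the KL divergence has a well-known closed form. The helper below is a minimal sketch with illustrative parameter values, not part of any particular library:

import numpy as np

def kl_gaussian(mu_1, sigma_1, mu_2, sigma_2):
    # D_KL( N(mu_1, sigma_1^2) || N(mu_2, sigma_2^2) ), in nats
    return (np.log(sigma_2 / sigma_1)
            + (sigma_1**2 + (mu_1 - mu_2)**2) / (2 * sigma_2**2)
            - 0.5)

# Divergence of an approximating q = N(0.2, 1.1^2) from a "true" posterior p = N(0, 1),
# i.e. D_KL(q || p), the quantity variational inference minimizes
print(kl_gaussian(0.2, 1.1, 0.0, 1.0))  # ~0.03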

Information Theory

KL divergence quantifies the inefficiency of assuming that the distribution \(Q\) describes the data when the true distribution is \(P\). It is a fundamental concept in information theory for measuring information loss and entropy.

Natural Language Processing (NLP)

In NLP, KL divergence is used to compare language models and probability distributions of words or phrases. It helps in tasks such as topic modeling and text classification.

Understanding KL divergence and its properties, such as asymmetry, is essential for applying it effectively in these diverse applications. It provides a powerful tool for comparing probability distributions and optimizing models based on the divergence between predicted and actual outcomes.

Wasserstein Distance

Definition and Intuition

Wasserstein distance, also known as the Earth Mover’s distance, is a measure of the difference between two probability distributions. It originates from the field of optimal transport. Intuitively, it quantifies the minimum amount of “work” required to transform one probability distribution into another by moving probability mass across a space.

Intuitive Example: Moving Soil from One Pile to Another

Imagine you have two piles of soil representing two different distributions. Each pile consists of a certain amount of soil placed at different locations. To transform one pile into the other, you need to move soil from one location to another. The amount of “work” required is determined by the amount of soil moved times the distance it is moved. In this analogy, the soil represents the “mass” of the distribution, the location represents the values that the distribution can take, and the work done to move the soil represents the Wasserstein distance.

Calculation of Wasserstein Distance

The calculation of Wasserstein distance is framed as an optimal transport problem. The goal is to find the most efficient way to transport probability mass from one distribution to another, minimizing the total work required.

\[W(P, Q) = \inf_{\gamma \in \Gamma(P, Q)} \int_{\mathcal{X} \times \mathcal{Y}} d(x, y) \, d\gamma(x, y)\]

Where:

  • \(P\) and \(Q\) are the two probability distributions.
  • \(\Gamma(P, Q)\) is the set of all possible transport plans.
  • \(d(x, y)\) is the distance between points \(x\) and \(y\).
  • \(d\gamma(x, y)\) represents the amount of mass moved from \(x\) to \(y\).

The optimal transport plan minimizes the total cost of moving mass, where the cost is defined as the product of the mass moved and the distance it is moved.
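
For one-dimensional distributions, SciPy ships a ready-made implementation, scipy.stats.wasserstein_distance, which accepts support points and optional weights. As a minimal example, the fruit-survey probabilities from earlier can be reused as weights on two support points (0 for apples, 1 for bananas), a choice made purely for illustration:

from scipy.stats import wasserstein_distance

# Support points (0 = apples, 1 = bananas) with Year 2 and Year 1 probabilities as weights
values = [0, 1]
W = wasserstein_distance(values, values, u_weights=[0.1, 0.9], v_weights=[0.4, 0.6])

print("W(P, Q) =", W)  # 0.3: move 0.3 of the mass a distance of 1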

Properties of Wasserstein Distance

One of the key properties of Wasserstein distance is that it takes into account the underlying geometry of the space in which the distributions are defined. This makes it particularly useful in applications where spatial relationships between values are important. Unlike other distance measures, Wasserstein distance provides a meaningful way to compare distributions by considering the actual paths along which probability mass is transported.

Applications of Wasserstein Distance

Wasserstein distance has a wide range of applications in various fields due to its consideration of underlying geometry and spatial relationships:

Computer Vision

In computer vision, Wasserstein distance is used to compare images and measure the similarity between distributions of pixel intensities. It helps in tasks such as image retrieval, texture analysis, and object recognition. For example, in image retrieval, Wasserstein distance can be used to find images that are visually similar to a query image by comparing the distributions of pixel colors and intensities. This measure is robust to slight variations in images, making it useful for recognizing objects in different lighting conditions or from different angles.

Machine Learning

In machine learning, Wasserstein distance is employed to train generative models, such as Wasserstein GANs (Generative Adversarial Networks). Unlike traditional GANs, Wasserstein GANs use Wasserstein distance to measure the difference between the generated data distribution and the real data distribution. This results in a more stable training process and generates higher quality images. Additionally, Wasserstein distance is used in domain adaptation, where models trained on one distribution of data need to be adapted to work with a different, but related, distribution.

Finance

In finance, Wasserstein distance is used to compare distributions of asset returns, risk measures, and other financial metrics. It helps in portfolio optimization by assessing the similarity between different assets’ return distributions, allowing for better diversification and risk management. For instance, comparing the distribution of returns for different investment portfolios using Wasserstein distance can identify portfolios with similar risk profiles, aiding in making more informed investment decisions. It is also useful in stress testing, where the similarity between different market scenarios can be assessed to understand potential impacts on financial stability.

Insurance

In insurance, Wasserstein distance is used to compare distributions of claim amounts, policyholder behaviors, and other risk factors. It aids in pricing models by comparing the distribution of claims between different groups of policyholders, helping actuaries design fair and accurate pricing strategies. Wasserstein distance also assists in risk assessment by comparing the probability distributions of various risk factors, allowing insurers to better understand and mitigate potential risks. For example, it can compare the distribution of small vs. large claims to understand the impact of rare but high-cost events on the overall risk profile.

Healthcare

In healthcare, Wasserstein distance is applied to compare distributions of medical data, such as patient outcomes or disease incidence rates. This can be particularly useful in epidemiology, where comparing the distributions of disease rates across different regions or populations can reveal insights into health disparities and inform public health interventions. It is also used in medical imaging, where comparing the distributions of pixel intensities in different scans can help in diagnosing and tracking the progression of diseases.

Natural Language Processing (NLP)

In natural language processing, Wasserstein distance is used to compare the distributions of words or phrases within different texts. This can be useful in tasks such as document similarity, topic modeling, and sentiment analysis. For instance, comparing the distribution of topics in two different sets of documents can help in identifying similarities and differences in content, which is valuable for tasks like content recommendation and information retrieval.

Climate Science

In climate science, Wasserstein distance is used to compare distributions of weather patterns, such as temperature and precipitation levels. This helps in understanding climate variability and assessing the impact of climate change. By comparing historical weather data with future climate projections, scientists can identify significant changes and trends, aiding in climate adaptation and mitigation strategies.

Understanding Wasserstein distance and its properties is essential for effectively applying it in these diverse fields. It offers a powerful tool for comparing probability distributions while considering the spatial relationships between values, leading to more meaningful and accurate analyses.

Comparison Between KL Divergence and Wasserstein Distance

Conceptual Differences

KL divergence and Wasserstein distance are both measures used to quantify the difference between probability distributions, but they do so in fundamentally different ways.

KL divergence focuses on information loss. It measures how much information is lost when using one probability distribution to approximate another. In essence, KL divergence quantifies the inefficiency of assuming that the distribution \(Q\) describes the data when the true distribution is \(P\). This measure is particularly sensitive to discrepancies in regions where the true distribution has a high probability but the approximating distribution has a low probability.

Wasserstein distance, on the other hand, focuses on the physical movement of mass. It measures the minimum amount of “work” required to transform one probability distribution into another by moving probability mass across the space. This “work” is calculated as the product of the amount of mass moved and the distance it is moved. Wasserstein distance takes into account the underlying geometry of the space, making it more suitable for comparing distributions in contexts where spatial relationships are important.

Mathematical Differences

The mathematical formulations of KL divergence and Wasserstein distance reflect their conceptual differences.

The formula for KL divergence \(D_{KL}(P \| Q)\) is: \(D_{KL}(P \| Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)}\)

In this formula:

  • \(P(i)\) represents the probability of event \(i\) in distribution \(P\).
  • \(Q(i)\) represents the probability of event \(i\) in distribution \(Q\).
  • The logarithm of the ratio \(\frac{P(i)}{Q(i)}\) measures the information gain of event \(i\), and the sum is weighted by \(P(i)\).

The formula for Wasserstein distance \(W(P, Q)\) is: \(W(P, Q) = \inf_{\gamma \in \Gamma(P, Q)} \int_{\mathcal{X} \times \mathcal{Y}} d(x, y) \, d\gamma(x, y)\)

Where:

  • \(P\) and \(Q\) are the two probability distributions.
  • \(\Gamma(P, Q)\) is the set of all possible transport plans.
  • \(d(x, y)\) is the distance between points \(x\) and \(y\).
  • \(d\gamma(x, y)\) represents the amount of mass moved from \(x\) to \(y\).

This formulation reflects the optimal transport problem, where the goal is to find the transport plan that minimizes the total cost of moving mass, taking into account both the amount of mass moved and the distance.
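
The contrast shows up clearly when the two distributions place their mass on different support points. In the sketch below (support points chosen only for illustration), the KL divergence is infinite because P assigns probability where Q assigns none, while the Wasserstein distance stays finite and simply reports how far the mass was shifted:

import numpy as np
from scipy.stats import wasserstein_distance

support = np.array([0, 1, 2, 3])
P = np.array([0.5, 0.5, 0.0, 0.0])  # mass on the left
Q = np.array([0.0, 0.0, 0.5, 0.5])  # same shape, shifted two units right

# KL divergence blows up: where P > 0 and Q = 0, the log-ratio is infinite
with np.errstate(divide="ignore", invalid="ignore"):
    kl = np.sum(np.where(P > 0, P * np.log(P / Q), 0.0))
print("D(P||Q) =", kl)  # inf

# Wasserstein distance equals the size of the shift
print("W(P, Q) =", wasserstein_distance(support, support, P, Q))  # 2.0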

Use Cases

The choice between KL divergence and Wasserstein distance depends on the specific application and the nature of the distributions being compared.

KL Divergence:

  • Machine Learning: Used in training probabilistic models and variational inference, where the goal is to minimize information loss.
  • Natural Language Processing: Useful for comparing language models and distributions of word frequencies.
  • Information Theory: Applied to measure the inefficiency of assuming one distribution over another, particularly in coding and communication.

Wasserstein Distance:

  • Computer Vision: Ideal for comparing images based on pixel intensity distributions, considering spatial relationships.
  • Generative Models: Used in Wasserstein GANs for more stable training and better quality of generated samples.
  • Finance and Insurance: Applied to compare distributions of financial returns or risk factors, taking into account the geometry of the underlying space.
  • Climate Science: Useful for comparing distributions of climate variables such as temperature and precipitation.

KL divergence is preferable when the focus is on information loss and probabilistic interpretation, while Wasserstein distance is better suited for applications requiring consideration of the underlying geometry and physical movement of mass.

Conclusion

Summary of Key Points

In this article, we explored two important measures for comparing probability distributions: Kullback-Leibler (KL) divergence and Wasserstein distance. KL divergence measures the amount of information lost when using one probability distribution to approximate another, focusing on information loss. It is particularly sensitive to discrepancies in high-probability regions of the true distribution. We provided an intuitive example involving customer preferences for fruits and showed how to calculate KL divergence using Python.

Wasserstein distance, also known as the Earth Mover’s distance, measures the minimum amount of “work” required to transform one distribution into another by moving probability mass across space. This measure considers the underlying geometry of the distributions, making it useful for applications where spatial relationships are important. We discussed its calculation through the optimal transport problem and provided examples of its use in various fields.

Importance of Understanding These Measures

Understanding KL divergence and Wasserstein distance is crucial for anyone working with probability distributions. These measures provide powerful tools for comparing distributions, each with unique properties and applications. KL divergence is widely used in machine learning, information theory, and natural language processing to minimize information loss and optimize probabilistic models. Wasserstein distance, with its consideration of spatial relationships, is essential in fields such as computer vision, finance, and climate science.

By mastering these concepts, professionals can make more informed decisions, design better models, and interpret results more accurately. Both measures offer deep insights into the nature of probability distributions, which are foundational to many scientific and engineering disciplines.

Encouragement for Further Study

For those interested in delving deeper into KL divergence and Wasserstein distance, numerous resources are available:

  • Books: “Elements of Information Theory” by Thomas M. Cover and Joy A. Thomas for a comprehensive introduction to information theory, including KL divergence.
  • Online Courses: Coursera, edX, and other platforms offer courses on machine learning, statistics, and information theory that cover these topics in detail.
  • Research Papers: Reading academic papers on optimal transport and generative models can provide deeper insights into the applications of Wasserstein distance.
  • Software Documentation: Libraries such as SciPy, NumPy, and TensorFlow include detailed documentation and tutorials for implementing these measures in Python.

By exploring these resources, you can enhance your understanding and application of these critical statistical measures, contributing to more effective and innovative solutions in your field.