Abstract

Estimating credit risk in portfolios of loans or bonds is crucial for financial institutions. Monte Carlo simulation, the traditional method for calculating portfolio credit risk, is often computationally expensive because defaults, especially of highly rated obligors, are rare events. Importance Sampling (IS) offers a more efficient alternative by concentrating simulations on the scenarios that lead to rare but significant losses. This article explains the implementation of IS in a portfolio credit risk context, particularly within the normal copula model. We cover IS theory, its practical application, and numerical examples that demonstrate its effectiveness in improving simulation efficiency.


Measuring Portfolio Credit Risk with Monte Carlo Simulation

The Portfolio View of Credit Risk

Credit risk is the potential for loss due to the default of one or more obligors in a portfolio. A bank or financial institution holding a portfolio of loans, bonds, or derivatives is exposed to the risk of default across multiple entities, creating a need for accurate risk measurement.

Modern credit risk management takes a portfolio approach, accounting for the dependence between obligors—meaning that defaults may not be independent events but influenced by shared risk factors like economic downturns or regional market changes. Modeling these dependencies adds complexity to the computational process, especially in predicting large, rare losses, which is where Monte Carlo simulation typically struggles.

Monte Carlo Simulation and Rare Events

Monte Carlo simulation is a widely used computational method to estimate probabilities of different outcomes in a financial system. In the context of credit risk, Monte Carlo methods simulate various scenarios to estimate losses due to defaults. While powerful, this method can be computationally inefficient for rare events, such as the simultaneous default of many obligors.

The rare-event nature of large portfolio losses makes traditional Monte Carlo simulations slow because many runs are required to observe significant loss events. This is where Importance Sampling (IS) becomes valuable, reducing the computational burden by focusing on the rare but impactful scenarios.
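
To see the problem concretely, here is a toy Python sketch (the exponential loss distribution is a placeholder, not the credit model developed below): even a million plain Monte Carlo runs yield only a handful of tail observations, so the estimate of a probability on the order of \(10^{-5}\) carries a relative error of roughly 40%.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
losses = rng.exponential(scale=1.0, size=n)  # Placeholder loss distribution
x = 12.0                                     # Tail threshold: P(L > x) = e^{-12}, about 6e-6

hits = losses > x
# Expected number of tail hits is only n * e^{-12}, about 6, so the
# relative standard error sqrt((1 - p) / (n p)) is roughly 40%
print(f"estimate: {hits.mean():.2e} from {hits.sum()} tail observations")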


Importance Sampling (IS) for Credit Risk

Overview of Importance Sampling

Importance Sampling (IS) is a variance reduction technique used to improve the efficiency of Monte Carlo simulations. The goal is to modify the sampling process to focus on events that contribute significantly to the rare-event probabilities, such as large losses in a credit portfolio.

In a basic Monte Carlo simulation, each scenario is drawn randomly based on the real-world probability distribution. In contrast, Importance Sampling changes the probability distribution from which scenarios are drawn, making extreme events (like multiple defaults) more likely in the simulation.

Once events are sampled from this modified distribution, a weighting adjustment is applied so that the estimator remains unbiased. The weight is the likelihood ratio: the density of the original distribution divided by the density of the sampling distribution, evaluated at each drawn scenario.
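
As a toy illustration of these two ingredients (change of distribution plus likelihood-ratio reweighting), the following Python sketch estimates \(P(X > 4)\) for \(X \sim N(0, 1)\) by sampling from \(N(4, 1)\) instead; it is a minimal sketch, not the credit-portfolio procedure developed later.

import numpy as np

rng = np.random.default_rng(0)
n, mu = 100_000, 4.0  # Sample size and IS mean shift

# Plain Monte Carlo: the tail event {X > 4} is almost never observed
x = rng.normal(size=n)
print("plain MC estimate:", np.mean(x > 4.0))

# Importance sampling: draw from N(mu, 1) so the event is common, then
# reweight by the likelihood ratio phi(y) / phi(y - mu) = exp(-mu*y + mu^2/2)
y = rng.normal(loc=mu, size=n)
weights = np.exp(-mu * y + 0.5 * mu**2)
print("IS estimate:", np.mean((y > 4.0) * weights))  # True value is about 3.17e-5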

The Challenge of Dependence Between Defaults

A critical challenge in applying IS to credit risk is the dependence structure between obligors in a portfolio. Dependence can be modeled using copulas, which link individual default probabilities to shared risk factors. One widely used copula in credit risk is the normal copula model.

In this model, each obligor’s default is influenced by a set of systematic factors (e.g., industry or geographic risk), making it harder to apply IS effectively. The difficulty lies in determining how to modify both the default probabilities and the distribution of these underlying factors.


The Normal Copula Model for Credit Risk

Model Setup

In the normal copula model, dependence among obligor defaults is induced by a multivariate normal vector of latent variables. The latent variable \(X_k\) measures the creditworthiness of obligor \(k\), and a default occurs when \(X_k\) crosses a threshold chosen to match obligor \(k\)'s marginal default probability.

Mathematically, the total portfolio loss \(L\) is the sum of the individual losses caused by defaults:

\[L = \sum_{k=1}^{m} c_k Y_k\]

Where:

  • \(m\) is the number of obligors,
  • \(Y_k\) is a binary indicator that is 1 if the \(k\)-th obligor defaults, and 0 otherwise,
  • \(c_k\) represents the loss if obligor \(k\) defaults.

To model default dependence, the variable \(X_k\) is defined as:

\[X_k = a_{k1} Z_1 + a_{k2} Z_2 + \dots + a_{kd} Z_d + b_k \epsilon_k\]

Here, \(Z_1, Z_2, \dots, Z_d\) are systematic risk factors, such as market or industry-wide risks, \(\epsilon_k\) is an idiosyncratic risk unique to obligor \(k\), and the loadings satisfy \(b_k = \sqrt{1 - a_{k1}^2 - \dots - a_{kd}^2}\), so that each \(X_k\) is standard normal. Writing \(a_k = (a_{k1}, \dots, a_{kd})\) and letting \(p_k\) denote obligor \(k\)'s marginal default probability, default occurs when \(X_k\) exceeds the threshold \(\Phi^{-1}(1 - p_k)\), and the probability of default conditional on the factors \(Z = (Z_1, \dots, Z_d)\) is:

\[P(Y_k = 1 \mid Z) = \Phi \left( \frac{a_k \cdot Z + \Phi^{-1}(p_k)}{b_k} \right)\]

where \(\Phi\) is the cumulative distribution function of the standard normal distribution and \(a_k \cdot Z = \sum_{j=1}^{d} a_{kj} Z_j\).
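
As a quick numerical illustration of this formula (all inputs below are made up for the example), a 1% obligor can become a roughly 10% risk in an adverse factor scenario:

import numpy as np
from scipy.stats import norm

p_k = 0.01                         # Marginal default probability (hypothetical)
a_k = np.array([0.3, 0.4])         # Loadings on two systematic factors (hypothetical)
b_k = np.sqrt(1 - np.sum(a_k**2))  # Idiosyncratic loading, keeps X_k standard normal

Z = np.array([1.5, 2.0])           # An adverse factor scenario
p_cond = norm.cdf((a_k @ Z + norm.ppf(p_k)) / b_k)
print(f"{p_cond:.3f}")             # About 0.107, more than ten times the 1% marginal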

Estimating Tail Probabilities

One of the key objectives in credit risk management is to estimate the tail probability, \(P(L > x)\), where \(x\) represents a large loss threshold. Importance sampling plays a crucial role in efficiently estimating this probability, particularly for high-threshold losses.


Implementing Importance Sampling in Credit Risk

Importance Sampling for Independent Obligors

The simplest case of IS involves independent obligors. Here, defaults occur without any dependence on common factors, and IS can be applied by exponentially twisting the default probabilities. This means adjusting the probabilities to make defaults more likely during simulation.

For each obligor \(k\), the default probability is altered using a twisting parameter \(\theta\):

\[p_{k,\theta} = \frac{p_k e^{\theta c_k}}{1 + p_k(e^{\theta c_k} - 1)}\]

The likelihood ratio used to correct the distribution becomes:

\[\prod_{k=1}^{m} \left( \frac{p_k}{p_{k,\theta}} \right)^{Y_k} \left( \frac{1 - p_k}{1 - p_{k,\theta}} \right)^{1 - Y_k}\]

By optimizing \(\theta\) to reduce the variance of the estimator, we significantly improve the accuracy and efficiency of the simulation. A convenient identity is that the product above collapses to \(e^{-\theta L + \psi(\theta)}\), where \(\psi(\theta) = \sum_{k=1}^{m} \log\left(1 + p_k (e^{\theta c_k} - 1)\right)\) is the cumulant generating function of \(L\). A standard choice is the \(\theta_x\) solving \(\psi'(\theta_x) = x\), which makes the expected loss under the twisted measure equal to the threshold \(x\) and minimizes an upper bound on the estimator's second moment.
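
The following minimal Python sketch puts these pieces together for a hypothetical independent portfolio: it solves \(\psi'(\theta) = x\) numerically, samples defaults under the twisted probabilities, and reweights each sample by \(e^{-\theta L + \psi(\theta)}\).

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
m = 100
p = np.full(m, 0.01)              # Marginal default probabilities (hypothetical)
c = rng.uniform(0.5, 1.5, m)      # Losses given default (hypothetical)
x = 20.0                          # Loss threshold, far above E[L] of about 1

def psi(theta):
    # Cumulant generating function of L for independent obligors
    return np.sum(np.log1p(p * (np.exp(theta * c) - 1.0)))

def psi_prime(theta):
    e = p * np.exp(theta * c)
    return np.sum(c * e / (1.0 - p + e))

# Choose theta so the expected loss under the twisted measure equals x
theta = brentq(lambda t: psi_prime(t) - x, 0.0, 50.0)

n = 10_000
p_theta = p * np.exp(theta * c) / (1.0 + p * (np.exp(theta * c) - 1.0))
Y = rng.binomial(1, p_theta, size=(n, m))  # Defaults under twisted probabilities
L = Y @ c
estimate = np.mean((L > x) * np.exp(-theta * L + psi(theta)))
print(f"IS estimate of P(L > {x}): {estimate:.3e}")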

Conditional Importance Sampling for Dependent Obligors

When obligors are dependent (i.e., influenced by common risk factors), IS becomes more complex. The IS process is divided into two steps (a combined sketch in code follows the list):

  1. Conditional IS: First, we apply IS conditionally on the systematic factors \(Z\). Given these factors, the defaults of individual obligors become conditionally independent, and we can apply the same exponential twisting method as in the independent case.

  2. Shifting the Factor Distribution: To improve the effectiveness of IS when defaults are highly correlated, we also apply IS to the distribution of the factors \(Z\). By shifting the mean of the systematic factors, we increase the likelihood of scenarios that lead to large portfolio losses.
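
A minimal sketch of the combined procedure in Python, using the single-factor model of the next section; the shift \(\mu = 2\) is a made-up value standing in for the optimized shift discussed later:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(2)
m, rho, p_marg, x = 100, 0.5, 0.01, 30.0
c = np.ones(m)   # Unit loss per default (hypothetical)
mu = 2.0         # Factor mean shift, assumed chosen in advance

def psi(theta, pz):
    return np.sum(np.log1p(pz * (np.exp(theta * c) - 1.0)))

def psi_prime(theta, pz):
    e = pz * np.exp(theta * c)
    return np.sum(c * e / (1.0 - pz + e))

estimates = []
for _ in range(10_000):
    # Step 2: sample the factor from the shifted distribution N(mu, 1)
    Z = rng.normal(loc=mu)
    pz = np.full(m, norm.cdf((rho * Z + norm.ppf(p_marg)) / np.sqrt(1 - rho**2)))
    # Step 1: conditional exponential twist (only if x exceeds the conditional mean loss)
    theta = brentq(lambda t: psi_prime(t, pz) - x, 0.0, 50.0) if psi_prime(0.0, pz) < x else 0.0
    p_theta = pz * np.exp(theta * c) / (1.0 + pz * (np.exp(theta * c) - 1.0))
    L = rng.binomial(1, p_theta) @ c
    # Combined likelihood ratio: conditional twist times factor shift
    lr = np.exp(-theta * L + psi(theta, pz)) * np.exp(-mu * Z + 0.5 * mu**2)
    estimates.append((L > x) * lr)

print(f"Two-step IS estimate of P(L > {x}): {np.mean(estimates):.3e}")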


Numerical Examples

Single-Factor Homogeneous Portfolio

Consider a single-factor model where all obligors have the same exposure to a single systematic risk factor \(Z\). The latent variable for each obligor \(X_k\) is modeled as:

\[X_k = \rho Z + \sqrt{1 - \rho^2} \epsilon_k\]

The default probability given \(Z\) is:

\[p(Z) = \Phi \left( \frac{\rho Z + \Phi^{-1}(p)}{\sqrt{1 - \rho^2}} \right)\]

In this model, applying IS conditionally on \(Z\) already provides substantial variance reduction when correlations between obligors are weak; when correlation is strong, large losses are driven mainly by the factor itself, so shifting the distribution of \(Z\) becomes essential, as the following comparison illustrates.
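
A quick comparison (illustrative numbers only) shows why. With weak correlation the conditional default probability barely responds to the factor, so twisting the conditional probabilities is what matters; with strong correlation it is the factor that creates large losses:

import numpy as np
from scipy.stats import norm

p = 0.01
for rho in (0.2, 0.8):
    for Z in (0.0, 3.0):
        pz = norm.cdf((rho * Z + norm.ppf(p)) / np.sqrt(1 - rho**2))
        print(f"rho = {rho}, Z = {Z}: p(Z) = {pz:.2e}")
# With rho = 0.2, p(Z) moves from about 8.8e-3 to 3.9e-2;
# with rho = 0.8, it jumps from about 5.3e-5 to 5.5e-1.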

Multi-Factor Portfolio

For portfolios influenced by multiple risk factors, IS involves finding the optimal shift in the factor mean \(\mu\). For example, in a portfolio of 1,000 obligors with varying exposures to 10 systematic risk factors, the IS method shifts the mean of each factor to maximize the likelihood of large losses.

By using numerical optimization, the IS estimator is computed efficiently, achieving significant variance reduction over standard Monte Carlo methods.
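
The details of choosing \(\mu\) are an optimization problem in their own right, but once a shift vector \(\mu\) has been found, applying it is simple: draw \(Z\) from \(N(\mu, I)\) and weight every sample by the likelihood ratio \(e^{-\mu^\top Z + \mu^\top \mu / 2}\). A minimal Python sketch (the shift values are hypothetical):

import numpy as np

rng = np.random.default_rng(3)
d = 10
mu = np.full(d, 0.8)  # Hypothetical pre-optimized shift vector

def shifted_factors():
    # Draw Z ~ N(mu, I) together with the likelihood ratio correcting for the shift
    Z = rng.normal(loc=mu)
    lr = np.exp(-mu @ Z + 0.5 * mu @ mu)
    return Z, lr

Z, lr = shifted_factors()
# Each loss simulated from this Z enters the tail estimate weighted by lr
# (multiplied by the conditional twisting ratio of the previous section).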


Appendix: Python Code for Portfolio Simulation

import numpy as np
from scipy.stats import norm

# Function to simulate one portfolio loss under the normal copula model
# (plain Monte Carlo, conditional on the systematic factors)
def simulate_portfolio_loss(m, default_probs, exposures, factor_loadings):
    # Generate systematic factors Z ~ N(0, I)
    Z = np.random.normal(size=factor_loadings.shape[1])

    # Idiosyncratic loadings b_k = sqrt(1 - ||a_k||^2), so each X_k is standard normal
    b = np.sqrt(1.0 - np.sum(factor_loadings**2, axis=1))

    # Conditional default probabilities p_k(Z) = Phi((a_k . Z + Phi^{-1}(p_k)) / b_k)
    cond_probs = norm.cdf((factor_loadings @ Z + norm.ppf(default_probs)) / b)

    # Simulate defaults and calculate total loss
    defaults = np.random.binomial(1, cond_probs)
    loss = np.dot(defaults, exposures)

    return loss

# Parameters
np.random.seed(123)  # Seed for reproducibility
m = 1000  # Number of obligors
d = 5     # Number of systematic factors
default_probs = np.random.uniform(0.01, 0.05, m)
exposures = np.random.uniform(1, 10, m)
# Scale loadings so that ||a_k||^2 < 1 for every obligor
factor_loadings = np.random.uniform(0, 1, (m, d)) / np.sqrt(d + 1)

# Simulate a single portfolio loss
loss = simulate_portfolio_loss(m, default_probs, exposures, factor_loadings)

print(f"Portfolio loss: {loss:.2f}")

Appendix: R Code for Portfolio Simulation

# Function to simulate one portfolio loss under the normal copula model
# (plain Monte Carlo, conditional on the systematic factors)
simulate_portfolio_loss <- function(m, default_probs, exposures, factor_loadings) {
  # Generate systematic factors
  Z <- rnorm(ncol(factor_loadings))
  
  # Idiosyncratic loadings b_k = sqrt(1 - ||a_k||^2), so each X_k is standard normal
  b <- sqrt(1 - rowSums(factor_loadings^2))

  # Conditional default probabilities p_k(Z) = Phi((a_k . Z + Phi^{-1}(p_k)) / b_k)
  cond_probs <- pnorm((as.vector(factor_loadings %*% Z) + qnorm(default_probs)) / b)
  
  # Simulate defaults and calculate total loss
  defaults <- rbinom(m, size = 1, prob = cond_probs)
  loss <- sum(defaults * exposures)
  
  return(loss)
}

# Parameters
set.seed(123)  # Set a seed for reproducibility
m <- 1000  # Number of obligors
d <- 5     # Number of systematic factors
default_probs <- runif(m, min = 0.01, max = 0.05)
exposures <- runif(m, min = 1, max = 10)
# Scale loadings so that ||a_k||^2 < 1 for every obligor
factor_loadings <- matrix(runif(m * d), nrow = m, ncol = d) / sqrt(d + 1)

# Simulate a single portfolio loss
loss <- simulate_portfolio_loss(m, default_probs, exposures, factor_loadings)

# Output the result
cat("Portfolio loss:", loss, "\n")

Appendix: Ruby Code for Portfolio Simulation

# Function to simulate one portfolio loss under the normal copula model
# (plain Monte Carlo, conditional on the systematic factors)
def simulate_portfolio_loss(m, default_probs, exposures, factor_loadings)
  # Generate systematic factors
  z = Array.new(factor_loadings[0].length) { rand_normal }

  # Conditional default probabilities:
  # p_k(Z) = Phi((a_k . Z + Phi^{-1}(p_k)) / b_k), with b_k = sqrt(1 - ||a_k||^2)
  cond_probs = (0...m).map do |i|
    sum = factor_loadings[i].each_with_index.map { |a, j| a * z[j] }.sum
    b = Math.sqrt(1 - factor_loadings[i].map { |a| a * a }.sum)
    cdf_normal((sum + quantile_normal(default_probs[i])) / b)
  end

  # Simulate defaults and calculate total loss
  defaults = cond_probs.map { |p| rand < p ? 1 : 0 }
  loss = defaults.zip(exposures).map { |d, e| d * e }.sum

  loss
end

# Helper function to generate a random number from standard normal distribution
def rand_normal
  # Box-Muller transform; 1 - rand avoids log(0) because rand is in [0, 1)
  theta = 2 * Math::PI * rand
  rho = Math.sqrt(-2 * Math.log(1 - rand))
  rho * Math.cos(theta)
end

# Helper function to compute the inverse of the cumulative normal distribution (quantile function)
def quantile_normal(p)
  # Peter Acklam's rational approximation to the normal quantile function
  a1 = -3.969683028665376e+01
  a2 =  2.209460984245205e+02
  a3 = -2.759285104469687e+02
  a4 =  1.383577518672690e+02
  a5 = -3.066479806614716e+01
  a6 =  2.506628277459239e+00

  b1 = -5.447609879822406e+01
  b2 =  1.615858368580409e+02
  b3 = -1.556989798598866e+02
  b4 =  6.680131188771972e+01
  b5 = -1.328068155288572e+01

  c1 = -7.784894002430293e-03
  c2 = -3.223964580411365e-01
  c3 = -2.400758277161838e+00
  c4 = -2.549732539343734e+00
  c5 =  4.374664141464968e+00
  c6 =  2.938163982698783e+00

  d1 =  7.784695709041462e-03
  d2 =  3.224671290700398e-01
  d3 =  2.445134137142996e+00
  d4 =  3.754408661907416e+00

  p_low = 0.02425
  p_high = 1 - p_low

  if p < p_low
    q = Math.sqrt(-2 * Math.log(p))
    (((((c1 * q + c2) * q + c3) * q + c4) * q + c5) * q + c6) /
      ((((d1 * q + d2) * q + d3) * q + d4) * q + 1)
  elsif p <= p_high
    q = p - 0.5
    r = q * q
    (((((a1 * r + a2) * r + a3) * r + a4) * r + a5) * r + a6) * q /
      (((((b1 * r + b2) * r + b3) * r + b4) * r + b5) * r + 1)
  else
    q = Math.sqrt(-2 * Math.log(1 - p))
    -(((((c1 * q + c2) * q + c3) * q + c4) * q + c5) * q + c6) /
      ((((d1 * q + d2) * q + d3) * q + d4) * q + 1)
  end
end

# Helper function to compute the cumulative distribution function (CDF) for standard normal distribution
def cdf_normal(x)
  0.5 * (1 + Math.erf(x / Math.sqrt(2)))
end

# Parameters
m = 1000  # Number of obligors
default_probs = Array.new(m) { rand(0.01..0.05) }
exposures = Array.new(m) { rand(1.0..10.0) }
# Scale loadings so that ||a_k||^2 < 1 for every obligor
factor_loadings = Array.new(m) { Array.new(5) { rand(0.0..1.0) / Math.sqrt(6) } }

# Simulate a single portfolio loss
loss = simulate_portfolio_loss(m, default_probs, exposures, factor_loadings)

# Output the result
puts "Portfolio loss: #{loss}"

Appendix: Rust Code for Portfolio Simulation

use rand::distributions::{Distribution, Uniform};
use rand::Rng;
use statrs::distribution::{ContinuousCDF, Normal};
use std::f64::consts::PI;

// Function to simulate one portfolio loss under the normal copula model
// (plain Monte Carlo, conditional on the systematic factors)
fn simulate_portfolio_loss(
    m: usize,
    default_probs: &[f64],
    exposures: &[f64],
    factor_loadings: &[Vec<f64>],
) -> f64 {
    // Generate systematic factors
    let z: Vec<f64> = (0..factor_loadings[0].len())
        .map(|_| rand_normal())
        .collect();

    // Standard normal distribution for CDF / inverse-CDF evaluations
    let normal = Normal::new(0.0, 1.0).unwrap();

    // Conditional default probabilities:
    // p_k(Z) = Phi((a_k . Z + Phi^{-1}(p_k)) / b_k), with b_k = sqrt(1 - ||a_k||^2)
    let cond_probs: Vec<f64> = (0..m)
        .map(|i| {
            let sum: f64 = factor_loadings[i]
                .iter()
                .zip(z.iter())
                .map(|(a, z)| a * z)
                .sum();
            let b = (1.0 - factor_loadings[i].iter().map(|a| a * a).sum::<f64>()).sqrt();
            normal.cdf((sum + normal.inverse_cdf(default_probs[i])) / b)
        })
        .collect();

    // Simulate defaults and calculate total loss
    let mut loss = 0.0;
    for i in 0..m {
        let default_occurred = rand::thread_rng().gen_bool(cond_probs[i]);
        if default_occurred {
            loss += exposures[i];
        }
    }
    loss
}

// Helper function to generate a standard normal draw via the Box-Muller transform
fn rand_normal() -> f64 {
    let u1: f64 = rand::thread_rng().gen();
    let u2: f64 = rand::thread_rng().gen();
    // Use 1 - u1 to avoid ln(0), since gen() samples from [0, 1)
    (-2.0 * (1.0 - u1).ln()).sqrt() * (2.0 * PI * u2).cos()
}

fn main() {
    let m = 1000; // Number of obligors

    // Generate random default probabilities and exposures
    let uniform_prob = Uniform::new(0.01, 0.05);
    let uniform_exposure = Uniform::new(1.0, 10.0);
    let mut rng = rand::thread_rng();

    let default_probs: Vec<f64> = (0..m).map(|_| uniform_prob.sample(&mut rng)).collect();
    let exposures: Vec<f64> = (0..m).map(|_| uniform_exposure.sample(&mut rng)).collect();

    // Generate random factor loadings, scaled so that ||a_k||^2 < 1 for every obligor
    let factor_loadings: Vec<Vec<f64>> = (0..m)
        .map(|_| (0..5).map(|_| rng.gen_range(0.0..1.0) / 6.0_f64.sqrt()).collect())
        .collect();

    // Simulate a single portfolio loss
    let loss = simulate_portfolio_loss(m, &default_probs, &exposures, &factor_loadings);

    // Output the result
    println!("Portfolio loss: {:.2}", loss);
}