Portfolio Optimization with Python and R: Modern Portfolio Theory

In a previous post, we covered portfolio optimization and its implementations in R. In this post, we will tackle the same problem using Python, which offers some elegant implementations of its own. Much of the structure of the post is gleaned from Yves Hilpisch’s awesome book Python for Finance. Our analysis essentially boils down to the following tasks:

  • Import financial data via APIs

  • Compute returns and statistics

  • Simulate a random set of portfolios

  • Construct a set of optimal portfolios using Markowitz’s mean-variance framework

  • Visualize the set of optimal portfolios using the R plotly library (Python’s plotly library is also a great alternative)

For more readings on the theory, I recommend Essentials of Investments and Practical Portfolio Performance Measurement and Attribution.


Tools from the R and Python Ecosystems

We need the following Python libraries and packages:

from datetime import datetime, timedelta
import numpy as np
import pandas as pd
import yfinance as yf
import scipy.optimize as sco
import scipy.interpolate as sci
import matplotlib.pyplot as plt

The following R packages are required if we wish to visualize the results in R:

library(plotly)
library(tidyverse)
# Used to interface with python objects from R
library(reticulate)

Daily Prices and Returns

For our sample, we select 12 equity assets with exposure to all 11 GICS sectors: energy, materials, industrials, utilities, health care, financials, consumer discretionary, consumer staples, information technology, communication services, and real estate. The sampling period covers the 10 years leading up to the date of writing (2024-10-20), which gives us a reasonably long history to work with.

# Create a list of symbols
symbols = [
  "XOM", "SHW", "JPM", "AEP", "UNH", "AMZN", 
  "KO", "BA", "AMT", "DD", "TSN", "SLG"
]

start_date = (datetime.now() - timedelta(days = 365 * 10)).strftime('%Y-%m-%d')

daily_prices = yf.download(
  tickers = ' '.join(symbols), 
  start = start_date,
)['Adj Close']
# Rename the columns using our symbol list (this assumes the download returns the tickers in this same order; a safer alternative is daily_prices = daily_prices[symbols], which reorders by label)
daily_prices.columns = symbols
dates        XOM       SHW       JPM       AEP       UNH       AMZN      KO        BA        AMT       DD        TSN       SLG
2014-10-22   38.73390  76.55837  15.6590   106.1111  53.29885  44.04441  29.75061  68.48305  68.80857  30.99992  77.87383  60.93071
2014-10-23   39.56772  76.79792  14.3530   106.2937  53.89132  44.56026  29.87438  69.42322  68.46676  31.33968  78.71561  61.17674
2014-10-26   39.77090  76.78196  14.4985   106.1893  51.96862  44.48440  29.67780  68.44362  68.80857  31.38823  79.13650  60.67174
2014-10-27   39.65179  76.98160  14.7795   107.4850  53.63420  45.23541  29.53218  67.93411  69.19922  31.84124  79.57457  61.57167
2014-10-28   39.77090  75.90354  14.7060   107.0241  52.96351  44.97747  29.82342  67.28207  69.11988  31.51765  79.84944  61.24147

Next, we compute the simple daily returns for each of the 12 assets using the pct_change() method, since our data object is a pandas DataFrame. We use simple returns because they are additive across assets (a weighted sum of asset returns gives the portfolio return), which is what we need in order to compute portfolio returns:

# Compute daily simple returns
daily_returns = (
  daily_prices.pct_change()
            .dropna(
              # Drop the first row since we have NaN's
              axis = 0,
              how = 'any',
              inplace = False
              )
)
dates         XOM         SHW         JPM         AEP         UNH         AMZN        KO          BA          AMT         DD          TSN         SLG
2024-10-13    0.0054962   0.0110822  -0.0067790  -0.0134419   0.0036981  -0.0036439   0.0110679   0.0136416   0.0309233   0.0173380   0.0122900   0.0038023
2024-10-14    0.0088066   0.0348669   0.0007999   0.0225519  -0.0246402   0.0041087   0.0000000   0.0114275   0.0247107   0.0003342  -0.0811200  -0.0300613
2024-10-15    0.0166566  -0.0083249  -0.0042624   0.0167377   0.0068469   0.0056208   0.0032699   0.0021620   0.0132423   0.0121931   0.0270543   0.0025759
2024-10-16   -0.0040465  -0.0147679   0.0034245   0.0026469  -0.0005863   0.0034877  -0.0094941  -0.0007962   0.0338423  -0.0133663  -0.0092939  -0.0025692
2024-10-17   -0.0021802   0.0095467   0.0077855  -0.0019960  -0.0017598   0.0042331   0.0077253  -0.0018505   0.0481703   0.0036795   0.0063247  -0.0028251
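
As a quick illustration of this additivity property (a minimal sketch using the objects created above), the one-day simple return on a portfolio’s value equals the weighted sum of the assets’ one-day simple returns:

# Check cross-sectional additivity of simple returns on the last available day
w = np.repeat(1 / len(symbols), len(symbols))           # hypothetical equal dollar weights
p0, p1 = daily_prices.iloc[-2], daily_prices.iloc[-1]   # prices on the last two days
portfolio_growth = (w * p1 / p0).sum() - 1              # return on total portfolio value
weighted_sum = (w * daily_returns.iloc[-1]).sum()       # weighted sum of asset returns
print(np.isclose(portfolio_growth, weighted_sum))       # True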

The simple daily returns may be visualized using line charts, density plots, and histograms, which are covered in my other post on visualizing asset data. Even though the visualizations in that post use the ggplot2 package in R, they can be reproduced in Python with the plotnine package or any other Python graphics library. For now, let us annualize the daily returns over the 10-year period ending on 2024-10-20. We assume the number of trading days in a year is computed as follows:

\[\begin{align*} 365.25 \text{(days on average per year)} \times \frac{5}{7} \text{(proportion work days per week)} \\ - 6 \text{(weekday holidays)} - 3\times\frac{5}{7} \text{(fixed date holidays)} = 252.75 \approx 253 \end{align*}\]
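
The arithmetic above can be reproduced in Python (a trivial check, not part of the analysis):

# Approximate number of trading days per year
trading_days = 365.25 * 5 / 7 - 6 - 3 * 5 / 7
print(round(trading_days, 2), round(trading_days))  # 252.75 253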

annualized_simple_daily_returns = daily_returns.mean() * 253

fig, ax = plt.subplots(figsize=(10, 6))

# Plot the annualized simple daily returns
annualized_simple_daily_returns.plot(kind='bar', ax=ax);
ax.set_title('Annualized Simple Daily Returns');
ax.set_ylabel('Annualized Simple Daily Returns');
plt.show()

As can be seen, there are significant differences in the annualized performances of these assets. The annualized variance-covariance matrix of the returns can be computed using the built-in pandas method cov(). Note that while the sample covariance matrix is an unbiased estimator of the population covariance matrix, it can be subject to significant estimation error, especially when the number of assets is large relative to the number of observations. In our case, because of the relatively long sample period, the estimation error is likely to be small:

annualized_sample_covariance = daily_returns.cov() * 253

fig, ax = plt.subplots();
cax = ax.matshow(annualized_sample_covariance, cmap="coolwarm");
ax.xaxis.set_ticks_position('bottom');
ax.set_xticks(np.arange(len(annualized_sample_covariance.columns)));
ax.set_yticks(np.arange(len(annualized_sample_covariance.columns)));
ax.set_xticklabels(annualized_sample_covariance.columns);
ax.set_yticklabels(annualized_sample_covariance.columns);

for i in range(annualized_sample_covariance.shape[0]):
    for j in range(annualized_sample_covariance.shape[1]):
        # Ensure text can be easily seen
        value = annualized_sample_covariance.iloc[i, j]
        color = "white" if value < 0.5 else "black"
        ax.text(j, i, f"{value:.2f}", ha="center", va="center", color=color);

plt.title("Annualized Sample Covariance Matrix of Returns Series");
plt.tight_layout();
plt.show()

The variance-covariance matrix of the returns will be needed to compute the variance of the portfolio returns.
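
Concretely, for a weight vector \(\vec{w}\), the portfolio variance is the quadratic form

\[\begin{align*} \sigma_{P}^{2} = \vec{w}^{T} \hat{\Sigma} \vec{w} \end{align*}\]

where \(\hat{\Sigma}\) is the annualized sample covariance matrix computed above.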


The Optimization Problem

Given a universe of assets and their characteristics, the portfolio optimization problem deals with how to spread capital among those assets in a way that maximizes the portfolio’s return per unit of risk taken. There is no unique solution to this problem, but a set of solutions, which together define what is called the efficient frontier: the portfolios whose returns cannot be improved without taking on additional risk, or equivalently, whose risk cannot be reduced without also giving up return. The Markowitz model for the portfolio optimization problem has the twin objectives of maximizing return and minimizing risk. It is built on the mean-variance framework of asset returns and, holding the basic constraints, reduces to the following:

Minimize Risk given Levels of Return

\[\begin{align*} \min_{\vec{w}} \hspace{5mm} \sqrt{\vec{w}^{T} \hat{\Sigma} \vec{w}} \end{align*}\]

subject to

\[\begin{align*} &\vec{w}^{T} \hat{\mu}=\bar{r}_{P} \\ &\vec{w}^{T} \vec{1} = 1 \hspace{5mm} (\text{Full investment}) \\ &\vec{0} \le \vec{w} \le \vec{1} \hspace{5mm} (\text{Long only}) \end{align*}\]

Maximize Return given Levels of Risk

\[\begin{align*} \max _{\vec{w}} \hspace{5mm} \vec{w}^{T} \hat{\mu} \end{align*}\]

subject to

\[\begin{align*} &\vec{w}^{T} \hat{\Sigma} \vec{w}=\bar{\sigma}_{P}^{2} \\ &\vec{w}^{T} \vec{1} = 1 \hspace{5mm} (\text{Full investment}) \\ &\vec{0} \le \vec{w} \le \vec{1} \hspace{5mm} (\text{Long only}) \end{align*}\]

In the absence of other constraints, the above model is loosely referred to as the “unconstrained” portfolio optimization model. Solving it yields a set of optimal weight vectors, each representing an optimal portfolio. In the \(\mu-\sigma\) diagram, these solutions trace out (the upper branch of) a hyperbola, which depicts the efficient frontier.
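
For intuition on the hyperbola claim: if the long-only bounds are dropped and only the full-investment and target-return constraints are kept, the minimum attainable variance for a target return \(\bar{r}_{P}\) has the well-known closed form

\[\begin{align*} \sigma_{P}^{2}(\bar{r}_{P}) = \frac{A\bar{r}_{P}^{2} - 2B\bar{r}_{P} + C}{AC - B^{2}}, \qquad A = \vec{1}^{T}\hat{\Sigma}^{-1}\vec{1}, \quad B = \vec{1}^{T}\hat{\Sigma}^{-1}\hat{\mu}, \quad C = \hat{\mu}^{T}\hat{\Sigma}^{-1}\hat{\mu} \end{align*}\]

which is quadratic in \(\bar{r}_{P}\), so the frontier traces a hyperbola in the \(\mu-\sigma\) diagram. With the long-only bounds the frontier has no closed form, and we compute it numerically below.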


Monte Carlo Simulation

The first task is to simulate a random set of portfolios to visualize the risk-return profiles of our given set of assets. To carry out the Monte Carlo simulation, we define two functions, each of which takes a vector of asset weights as input; the first returns the annualized expected portfolio return and the second the annualized portfolio standard deviation:

Annualized Returns

# Function for computing portfolio return
def portfolio_returns(weights):
    return (np.sum(daily_returns.mean() * weights)) * 253

Annualized Standard Deviation

# Function for computing standard deviation of portfolio returns
def portfolio_sd(weights):
    return np.sqrt(np.transpose(weights) @ (daily_returns.cov() * 253) @ weights)

Next, we use a for loop to simulate random vectors of asset weights, computing the expected portfolio return and standard deviation for each random weight vector. Again, we ensure that each weight vector satisfies the long-positions-only and full-investment constraints.

Monte Carlo Simulation

The empty containers we instantiate are Python lists, which are mutable; appending to them is cheap (amortized constant time), unlike growing a vector element by element in R, which copies the whole object on each resize. This works well provided the number of simulations isn’t excessively large (e.g., in the millions). In such cases, the resizing and over-allocation costs add up, and we should instead vectorize the portfolio_returns and portfolio_sd functions to leverage NumPy’s vectorized operations (a sketch follows the simulation below).

num_portfolios = 5000
num_assets = len(symbols)

# Generate random weights for all portfolios
random_weights = np.random.random((num_portfolios, num_assets))
# Normalize the weights so that the sum of weights for each portfolio equals 1
random_weights /= np.sum(random_weights, axis=1, keepdims=True)

list_portfolio_returns = []
list_portfolio_sd = []

# Monte Carlo simulation: loop through each set of random weights
for weights in random_weights:
    list_portfolio_returns.append(portfolio_returns(weights))  # Calculate and store returns
    list_portfolio_sd.append(portfolio_sd(weights))  # Calculate and store standard deviations (risk)

port_returns = np.array(list_portfolio_returns)
port_sd = np.array(list_portfolio_sd)
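
As an aside, the loop above can be replaced entirely by vectorized NumPy operations; the following is a minimal sketch using the objects already defined, and it should reproduce the loop’s results:

# Vectorized alternative to the simulation loop (a sketch)
mu_annual = daily_returns.mean().to_numpy() * 253      # annualized mean returns
cov_annual = daily_returns.cov().to_numpy() * 253      # annualized covariance matrix

port_returns_vec = random_weights @ mu_annual          # expected return of each portfolio
# Row-wise quadratic forms w' Cov w, without a Python-level loop
port_sd_vec = np.sqrt(np.einsum('ij,jk,ik->i', random_weights, cov_annual, random_weights))

print(np.allclose(port_returns_vec, port_returns), np.allclose(port_sd_vec, port_sd))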

Let us examine the simulation results, in particular the distributions of the simulated expected portfolio returns and standard deviations:

def plot_violin_with_quantiles(data: np.ndarray, title: str) -> None:
    fig, ax = plt.subplots(figsize=(6, 7))
    parts = ax.violinplot(data, showmeans=False, showmedians=False, showextrema=False)

    # Customize the violin plot appearance
    for pc in parts['bodies']:
        pc.set_facecolor('#1f77b4')
        pc.set_edgecolor('black')
        pc.set_alpha(0.7)

    # Calculate and annotate quantiles
    quantiles = np.percentile(data, [25, 50, 75])
    for q, label in zip(quantiles, ['25%', '50%', '75%']):
        ax.axhline(q, color='black', linestyle='--')
        ax.text(1.05, q, f'{label}', transform=ax.get_yaxis_transform(), ha='left', va='center')

    # Set labels and title
    ax.set_ylabel('Value')
    ax.set_title(title)
    plt.show()
plot_violin_with_quantiles(port_returns, 'Expected Portfolio Returns')

plot_violin_with_quantiles(port_sd, 'Portfolio Standard Deviations')

We may also visualize the expected returns and standard deviations on a \(\mu-\sigma\) trade-off diagram. For this task, I will leverage R’s graphics engine and the plotly graphics library. The reticulate package in R allows for relatively seamless transition between Python and R. Fortunately, the NumPy arrays created in Python can be accessed as R vector objects; this makes plotting in R using Python objects simple:

# Plot the sub-optimal portfolios
plot_ly(
  x = py$port_sd, y = py$port_returns, color = (py$port_returns / py$port_sd),
  mode = "markers", type = "scattergl", showlegend = FALSE,
  marker = list(size = 5, opacity = 0.7)
) |>
  layout(
    title = "Mean-Standard Deviation Diagram",
    yaxis = list(title = "Expected Portfolio Return (Annualized)", tickformat = ".2%"),
    xaxis = list(title = "Portoflio Standard Deviation (Annualized)", tickformat = ".2%")
  ) |>
  colorbar(title = "Sharpe Ratio")

Each point in the diagram above represents an expected return and standard deviation pair for one simulated portfolio. The points are color coded such that the magnitude of the Sharpe ratio, defined as \(SR \equiv \frac{\mu_{P} - r_{f}}{\sigma_{P}}\), can be readily observed for each pairing. For simplicity, we assume that \(r_{f} \equiv 0\). It could be argued that this assumption is restrictive, so I explored using a different risk-free rate in my previous post.
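
For reference, the Sharpe ratios that drive the color scale can be computed directly from the simulated arrays (with \(r_{f} = 0\)):

# Sharpe ratio of each simulated portfolio, assuming a zero risk-free rate
sharpe_ratios = port_returns / port_sd
print(sharpe_ratios.max())  # best Sharpe ratio among the random portfolios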


The Optimal Portfolios

Solving the optimization problem defined earlier provides us with a set of optimal portfolios given the characteristics of our assets. There are two important portfolios that we may be interested in constructing— the minimum variance portfolio and the maximal Sharpe ratio portfolio. In the case of the maximal Sharpe ratio portfolio, the objective function we wish to maximize is our user-defined Sharpe ratio function. The constraint is that all weights sum up to 1. We also specify that the weights are bound between 0 and 1. In order to use the minimization function from the SciPy library, we need to transform the maximization problem into one of minimization. In other words, the negative value of the Sharpe ratio is minimized to find the maximum value; the optimal portfolio composition is therefore the array of weights that yields that maximum value of the Sharpe ratio.

# User defined Sharpe ratio function
# Negative sign to compute the negative value of Sharpe ratio
def sharpe_fun(weights):
  return - (portfolio_returns(weights) / portfolio_sd(weights))

We will use a list of dictionaries to represent the constraints:

# We use an anonymous lambda function
constraints = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}]

Next, the bound values for the weights:

# Create a (0, 1) bound pair, i.e. a (min, max) tuple, for each of the 12 assets
bounds = tuple(
  (0, 1) for _ in symbols
)

We also need to supply a starting sequence of weights, which essentially functions as an initial guess. For our purposes, this will be an equal-weight array:

# Repeat the list with the value (1 / 12) 12 times, and convert list to array
equal_weights = np.array(
  [1 / len(symbols)] * len(symbols)
)

We will use the scipy.optimize.minimize function and the Sequential Least Squares Programming (SLSQP) method for the minimization:

# Minimization results
max_sharpe_results = sco.minimize(
  # Objective function
  fun = sharpe_fun, 
  # Initial guess, which is the equal weight array
  x0 = equal_weights, 
  method = 'SLSQP',
  bounds = bounds, 
  constraints = constraints
)

The optimization result is an instance of scipy.optimize.OptimizeResult, which exposes a number of attributes. The one of interest to us is the weight composition array x, which we employ to construct the maximal Sharpe ratio portfolio:

# Extract the weight composition array
max_sharpe_results["x"]
array([1.23565114e-01, 0.00000000e+00, 3.06168743e-01, 0.00000000e+00,
       1.94085741e-16, 1.09367620e-01, 0.00000000e+00, 1.56428151e-01,
       2.55675201e-16, 0.00000000e+00, 3.04470372e-01, 4.58665717e-18])

Check that the weights sum to 1:

np.allclose(np.sum(max_sharpe_results["x"]), 1, atol=1e-12)
True
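
The weights are easier to read when labeled with their tickers; one way to do this (a small convenience, not part of the original routine) is:

# Label the optimal weights with their tickers and keep only the non-zero positions
max_sharpe_weights = pd.Series(max_sharpe_results["x"], index = symbols).round(4)
print(max_sharpe_weights[max_sharpe_weights > 0])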

Maximal Sharpe Ratio Portfolio

The expected return, standard deviation, and Sharpe ratio of the maximal Sharpe ratio portfolio are as follows:

max_sharpe_port_return = portfolio_returns(max_sharpe_results["x"])
max_sharpe_port_sd = portfolio_sd(max_sharpe_results["x"])
pd.DataFrame(
  {
    "Expected Return": [max_sharpe_port_return],
    "Standard Deviation": [max_sharpe_port_sd],
    "Sharpe Ratio": [max_sharpe_port_return / max_sharpe_port_sd]
  }
)
   Expected Return  Standard Deviation  Sharpe Ratio
0         0.233802            0.192986      1.211493

Minimum Variance Portfolio

The minimum variance portfolio may be constructed similarly. The objective function, in this case, is the standard deviation function:

# Minimize sd
min_sd_results = sco.minimize(
  # Objective function
  fun = portfolio_sd, 
  # Initial guess, which is the equal weight array
  x0 = equal_weights, 
  method = 'SLSQP',
  bounds = bounds, 
  constraints = constraints
)

The expected return, standard deviation, and Sharpe ratio of the minimum variance portfolio are as follows:

min_sd_port_return = portfolio_returns(min_sd_results["x"])
min_sd_port_sd = portfolio_sd(min_sd_results["x"])
pd.DataFrame(
  {
    "Expected Return": [min_sd_port_return],
    "Standard Deviation": [min_sd_port_sd],
    "Sharpe Ratio": [min_sd_port_return / min_sd_port_sd]
  }
)
   Expected Return  Standard Deviation  Sharpe Ratio
0         0.141892            0.154214      0.920101

Efficient Frontier

As an investor, one is generally interested in the maximum return given a fixed risk level, or the minimum risk given a fixed return expectation. As mentioned in the earlier section, the set of optimal portfolios, whose expected returns for a defined level of risk cannot be surpassed by any other portfolio, depicts the so-called efficient frontier. The Python implementation is to fix a target return level and, for each such level, minimize the volatility. In doing so, we essentially recast the twin-objective problem described earlier as one that can be solved with quadratic programming: the portfolio variance (the square of our objective) is a quadratic function of the weights, while the two constraints, the target return and the requirement that the weights sum to 1, are both linear in the weights. We will again use a tuple of dictionaries to represent the constraints:

# The arguments x will be the weights
constraints = (
  {'type': 'eq', 'fun': lambda x: portfolio_returns(x) - target}, 
  {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
)

The full-investment and long-positions-only specifications will remain unchanged throughout the optimization process, but the value bound to the name target will be different during each iteration. The lambda in the first constraint dictionary does not capture that value when it is defined; it looks up the name target each time it is called (late binding), so the same constraints tuple can be reused for every target return without rebuilding the dictionaries (a small illustration follows the bounds below). We again constrain the weights such that all weights fall within the interval \([0, 1]\):

bounds = tuple(
  (0, 1) for _ in symbols
)
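
To see the late-binding behavior described above in action, note that the lambda stored in constraints reads whatever value the name target holds at call time (a throwaway illustration; target is re-initialized below):

# The lambda looks up `target` when it is called, not when it was defined
target = 0.20
print(constraints[0]['fun'](equal_weights))  # uses target = 0.20
target = 0.25
print(constraints[0]['fun'](equal_weights))  # same lambda, now uses target = 0.25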

We will use the scipy.optimize.minimize function again and the Sequential Least Squares Programming (SLSQP) method for the minimization:

# Initialize an array of target returns
target = np.linspace(
  start = 0.15, 
  stop = 0.30,
  num = 100
)

obj_sd = []
# Loop over the target returns; each pass rebinds the name `target` to a scalar,
# which is the value the constraint lambda reads when it is called
for target in target:
  # Minimize the twin-objective function given the target expected return
  min_result_object = sco.minimize(
    fun = portfolio_sd, 
    x0 = equal_weights, 
    method = 'SLSQP',
    bounds = bounds, 
    constraints = constraints
    )
  # Extract the objective value and append it to the output container
  obj_sd.append(min_result_object['fun'])

obj_sd = np.array(obj_sd)
# Rebind target to a new array object
target = np.linspace(
  start = 0.15, 
  stop = 0.30,
  num = 100
)

Before we plot the efficient frontier, we may wish to highlight the two portfolios in the plot— the maximal Sharpe ratio and the minimum variance portfolios:

# Annotations for maximal Sharpe ratio and minimum variance portfolio
annotation_data <- tibble::tibble(
  x = c(py$max_sharpe_port_sd, py$min_sd_port_sd),
  y = c(py$max_sharpe_port_return, py$min_sd_port_return),
  type = c("Maximal Sharpe Ratio Portfolio", "Minimum Variance Portfolio")
)

Since the optimal expected portfolio returns and standard deviations are both array objects, we can access them via the reticulate package and plot them in R:

plot_ly(
  x = py$port_sd, y = py$port_returns, color = (py$port_returns / py$port_sd),
  mode = "markers", type = "scattergl", showlegend = FALSE,
  marker = list(size = 5, opacity = 0.7)
) |>
  # Efficient frontier
  add_trace(
    data = tibble::tibble(
      Risk = py$obj_sd,
      Return = py$target,
      SharpeRatio = py$target / py$obj_sd
    ),
    x = ~Risk,
    y = ~Return,
    color = ~SharpeRatio,
    marker = list(size = 7)
  ) |>
  # Maximal Sharpe ratio portfolio
  add_trace(
    data = tibble::tibble(
      Risk = py$max_sharpe_port_sd,
      Return = py$max_sharpe_port_return,
      SharpeRatio = py$max_sharpe_port_return / py$max_sharpe_port_sd
    ),
    x = ~Risk,
    y = ~Return,
    color = ~SharpeRatio,
    marker = list(size = 7)
  ) |>
  # Minimum variance portfolio
  add_trace(
    data = tibble::tibble(
      Risk = py$min_sd_port_sd,
      Return = py$min_sd_port_return,
      SharpeRatio = py$min_sd_port_return / py$min_sd_port_sd
    ),
    x = ~Risk,
    y = ~Return,
    color = ~SharpeRatio,
    marker = list(size = 7)
  ) |>
  layout(
    title = "Mean-Standard Deviation Diagram",
    yaxis = list(title = "Expected Portfolio Return (Annualized)", tickformat = ".2%"),
    xaxis = list(title = "Portoflio Standard Deviation (Annualized)", tickformat = ".2%")
  ) |>
  add_annotations(
    x = annotation_data[["x"]],
    y = annotation_data[["y"]],
    text = annotation_data[["type"]],
    xref = "x",
    yref = "y",
    showarrow = TRUE,
    arrowhead = 5,
    arrowsize = .5,
    ax = 20,
    ay = -40
  ) |>
  colorbar(title = "Sharpe Ratio")

Evidently, the efficient frontier comprises all optimal portfolios with a higher expected return than the global minimum variance portfolio. These portfolios dominate all other sub-optimal portfolios in terms of expected return for a given risk level.


Capital Market Line

Now that we have determined a set of efficient portfolios of risky assets, we may be interested in adding a risk-free asset to the mix. By adjusting the wealth allocation for the risk-free asset, it is possible for us to achieve any risk-return profile that lies on the straight line (in the \(\mu-\sigma\) diagram) between the risk-free asset and an efficient portfolio. This is called the capital market line, which is drawn from the point of the risk-free asset to the feasible region for risky assets. The capital market line has the equation:

\[\begin{align*} R_{p}=r_{f}+\frac{R_{T}-r_{f}}{\sigma_{T}} \sigma_{p} \end{align*}\]

where

  • \(R_{p}=\text{portfolio return}\)

  • \(r_{f}=\text{risk free rate}\)

  • \(R_{T}=\text{market return}\)

  • \(\sigma_{T}=\text{standard deviation of market portfolio}\)

  • \(\sigma_{p}=\text{standard deviation of portfolio}\)

The problem then becomes: which efficient portfolio on the efficient frontier should we use? The answer is the tangency portfolio, the point on the efficient frontier at which the slope of the tangent line equals that of the capital market line. In order to find the slope of the tangent line, a functional approximation of the efficient frontier and its first derivative must be obtained. Yves Hilpisch’s book recommends cubic spline interpolation to obtain a continuously differentiable functional approximation of the efficient frontier, \(f(x)\). To carry out the interpolation, we need to create two array objects representing the expected returns and standard deviations of the efficient portfolios on the efficient frontier. In other words, we need to exclude from obj_sd and target (our two arrays of optimization results) the portfolios whose standard deviations are greater than that of the minimum variance portfolio but whose expected returns are smaller; they are not optimal and therefore do not lie on the efficient frontier:

# Use np.argmin() to find the index of the minimum variance portfolio and take a slice starting from that point, excluding previous elements
efficient_sd = obj_sd[np.argmin(a = obj_sd):]

# Take a slice of target returns starting from the minimum variance portfolio index and exclude previous elements
efficient_returns = target[np.argmin(a = obj_sd):]

We will leverage the facilities of the scipy.interpolate sub-package, in particular the splrep() function. Given a set of data points \((x_{i}, y_{i})\), scipy.interpolate.splrep() determines a smooth spline approximation of degree \(k\). The function returns a tuple \((t, c, k)\) with three elements: the vector of knots \(t\), the B-spline coefficients \(c\), and the degree of the spline \(k\).

# Cubic spline interpolation
tck = sci.splrep(
  x = np.sort(efficient_sd),
  y = efficient_returns,
  k = 3
)
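
As a quick check, the returned tuple can be unpacked into its three components:

# Unpack the spline representation returned by splrep
knots, coefficients, degree = tck
print(degree)  # 3, since we requested a cubic spline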

Next, we use the scipy.interpolate.splev() function to define the functional approximation and the first derivative of the efficient frontier— \(f(x)\) and \(f^{\prime}(x)\). Given the knots and the B-spline coefficients, the function scipy.interpolate.splev() evaluates the value of the smoothing polynomial and its derivative:

# Define functional approximation of efficient frontier
def f(x):
  return sci.splev(
    x = x,
    tck = tck,
    der = 0
  )

# Define first derivative of the efficient frontier
def df(x):
  return sci.splev(
    x = x,
    tck = tck,
    der = 1
  )

Let us re-examine the equation for the capital market line. Notice that the line is a linear equation of the form:

\[\begin{align*} f(x)&=a + bx \end{align*}\]

where the parameters are as follows:

  • \(a=r_{f}\)

  • \(b=\frac{R_{T}-r_{f}}{\sigma_{T}}=f^{\prime}(x) \hspace{5mm} \text{Slope of the capital market line = slope of the tangent line of the efficient frontier}\)

  • \(x=\sigma_{p}\)

There are three unknown parameters and therefore we must have three equations to solve for the values of these parameters:

\[\begin{align*} a=r_{f} \hspace{5mm} \Longrightarrow \hspace{5mm} 0&=r_{f}-a \\ b=f^{\prime}(x) \hspace{5mm} \Longrightarrow \hspace{5mm} 0&=b-f^{\prime}(x) \\ f(x)=a + bx \hspace{5mm} \Longrightarrow \hspace{5mm} 0&=a+b\cdot x-f(x) \end{align*}\]

To solve the system, we will use the scipy.optimize.fsolve() function, which finds the roots of a system of (generally nonlinear) equations. First, we need to define a function that returns the values of all three equations given a set of parameters \(p = (a, b, x)\). The set of parameters \(p\) can be represented using a Python list where \(p[0]=a\), \(p[1]=b\), and \(p[2]=x\):

# System of equations
def sys_eq(p, r_f = 0.01):
  # Equations
  eq1 = r_f - p[0]
  eq2 = p[1] - df(p[2])
  eq3 = r_f + df(p[2]) * p[2] -  f(p[2])
  # Output values
  return eq1, eq2, eq3

The function scipy.optimize.fsolve() returns an ndarray object, which is the solution set to the system above:

# Solve the system of equations
sol_set = sco.fsolve(
  # Equations to solve for
  func = sys_eq,
  # Initial guess for p
  # This can be determined by trial and error or from the plot above (may take more than one try)
  x0 = [0.05, 1, 0.15]
)

As a sanity check, we can plug our solution set into our user-defined function sys_eq to see if the results are indeed zeros:

# Sanity check
np.round(
  sys_eq(
    p = sol_set
  ),
  4
)
array([ 0., -0.,  0.])

We have confirmed that our solution set for the system is as follows:

  • \(a=\) 0.01

  • \(b=\) 1.16

  • \(x=\) 0.1956


Tangency Portfolio

To find the tangency portfolio, we use the same routine as when we solved for the efficient frontier. We can compute the target return by evaluating \(f\) at the \(x\) value from our solution set above. The rest is exactly the same as before.

The fun key of the first constraint dictionary now uses the function f (our functional approximation of the efficient frontier) and \(x\) (the third element of the solution set) to compute the target return:

# Constraints
constraints = (
  {'type': 'eq', 'fun': lambda x: portfolio_returns(x) - f(x = sol_set[2])}, 
  {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
)

min_result_object_tangency = sco.minimize(
  fun = portfolio_sd, 
  x0 = equal_weights, 
  method = 'SLSQP',
  bounds = bounds, 
  constraints = constraints
)

The expected return, standard deviation, and Sharpe ratio of the tangency portfolio are as follows:

tangency_port_return = portfolio_returns(min_result_object_tangency["x"])
tangency_port_sd = portfolio_sd(min_result_object_tangency["x"])
pd.DataFrame(
  {
    "Expected Return": [tangency_port_return],
    "Standard Deviation": [tangency_port_sd],
    "Sharpe Ratio": [tangency_port_return / tangency_port_sd]
  }
)
   Expected Return  Standard Deviation  Sharpe Ratio
0         0.236886            0.195589      1.211138
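
As a sanity check, the slope \(b\) of the capital market line should roughly equal the tangency portfolio’s Sharpe ratio computed with \(r_{f} = 0.01\) (the rate we passed to sys_eq):

# The CML slope should be close to (tangency return - r_f) / tangency standard deviation
print(sol_set[1])
print((tangency_port_return - 0.01) / tangency_port_sd)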

Finally, with these parameters, we can now plot the capital market line on the \(\mu-\sigma\) diagram. Similar to our approach earlier, we first define a function, cml(x), whose body is the linear equation of the capital market line. This function takes as its argument the x values (standard deviations) of the \(\mu-\sigma\) diagram:

# Capital market line
def cml(x):
  return sol_set[0] + sol_set[1] * x

Next, create arrays of plotting data:

# Standard deviations
cml_sd = np.linspace(
  start = 0, 
  stop = 0.25,
  num = 100
)
# Expected returns
cml_exp_returns = cml(x = cml_sd)

Move to R and plot the capital market line on the \(\mu-\sigma\) diagram:

# Efficient frontier and Capital market line
plot_ly(
  x = py$port_sd, y = py$port_returns, color = (py$port_returns / py$port_sd),
  mode = "markers", type = "scattergl", showlegend = FALSE,
  marker = list(size = 5, opacity = 0.7)
) |>
  # Random portfolios
  add_trace(
    data = tibble::tibble(
      Risk = py$obj_sd,
      Return = py$target,
      SharpeRatio = py$target / py$obj_sd
    ),
    x = ~Risk,
    y = ~Return,
    color = ~SharpeRatio,
    marker = list(size = 7)
  ) |>
  # Capital market line
  add_lines(
    data = tibble::tibble(
      Risk = py$cml_sd,
      Return = py$cml_exp_returns
    ),
    x = ~Risk,
    y = ~Return,
    line = list(color = "#9467bd"),
    # Do not inherit attributes from plot_ly()
    inherit = FALSE
  ) |>
  layout(
    title = "Mean-Standard Deviation Diagram",
    yaxis = list(title = "Expected Portfolio Return (Annualized)", tickformat = ".2%"),
    xaxis = list(title = "Portoflio Standard Deviation (Annualized)", tickformat = ".2%"),
    annotations = list(
      x = py$tangency_port_sd,
      y = py$tangency_port_return,
      text = "Tangency Portfolio \n Expected Return: 26.64% \n Volatility: 17.83%",
      xref = "x",
      yref = "y",
      showarrow = TRUE,
      arrowhead = 7,
      ax = 20,
      ay = -40
    )
  ) |>
  colorbar(title = "Sharpe Ratio")

You may wish to zoom in to see the values for the individual portfolios more clearly. Fortunately, the plotly package allows for interacting with the plots; hover over the points to examine their values.
