
Covariance Fixer


The Covariance Fixer is a separately licensed application that appears as a page in the Optimizer if you enable it in the Display Menu. The Covariance Fixer adjusts matrices to be positive definite (no negative or zero eigenvalues), rounds correlations to a set number of decimal places, and improves the condition of badly conditioned correlation matrices. It is useful for those who create their own matrices or make adjustments manually, rather than using the historical correlations computed by the Estimator. If you are interested in licensing the Covariance Fixer, please contact your relationship manager.

Reasons to use the Covariance Fixer

  1. Negative or zero eigenvalues. Correlation matrices are by definition positive semidefinite and cannot contain any negative eigenvalues, since every portfolio variance must by definition be nonnegative. For solutions to be unique and well-defined, optimization further requires that no asset (or combination of assets) be perfectly correlated with another, which means the matrix may have no zero eigenvalues either. If the input matrix has any negative or zero eigenvalues, the Covariance Fixer finds a nearby valid matrix to replace it.

  2. High condition numbers. The condition number indicates how close a matrix is to being singular (non-invertible). Even historical correlations prepared in the Estimator may not be well-conditioned, though they will be technically valid. A well-conditioned correlation matrix has a low condition number, meaning that the ratio of its largest eigenvalue to its smallest is low. Optimization involves inverting the covariance matrix and multiplying by the result. A high condition number indicates that your matrix is nearly singular and that multiplying by its inverse will magnify small errors. A matrix with a low condition number is therefore safe to invert and multiply during optimization, because it is far from singular.

  3. Excessive precision. If your correlation matrix carries full machine precision in the relationships between assets, you can lower the number of significant digits. In other words, you can dictate how many decimal places are considered during optimization. The sketch after this list illustrates all three checks.
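
The following minimal numpy sketch shows how each of the three checks above can be made by hand. The correlation matrix and the two-decimal rounding are made-up examples, and the code is only an illustration of the underlying checks, not the Covariance Fixer's actual implementation.

    import numpy as np

    # Hypothetical user-entered correlation matrix illustrating all three issues:
    # assets 1 and 2 are almost perfectly correlated, so the matrix is nearly
    # singular and badly conditioned, and the entries carry machine precision.
    corr = np.array([
        [1.0,               0.999912345678901, 0.512345678901234],
        [0.999912345678901, 1.0,               0.512345678901234],
        [0.512345678901234, 0.512345678901234, 1.0              ],
    ])

    eigvals = np.linalg.eigvalsh(corr)    # reason 1: look for negative or zero eigenvalues
    print(eigvals.min())                  # tiny but still positive here (about 1e-4)

    cond = eigvals.max() / eigvals.min()  # reason 2: ratio of largest to smallest eigenvalue
    print(cond)                           # a large ratio signals a badly conditioned matrix

    # reason 3: limit the precision carried into the optimization (note that rounding
    # a near-singular matrix can make it exactly singular, so recheck the eigenvalues)
    rounded = np.round(corr, 2)
    print(rounded)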

Covariance Matrices

Covariance matrices are square matrices whose diagonal entries are variances and whose off-diagonal entries are covariances. A covariance is defined as the correlation between two assets times the standard deviations of those assets. Thus, a covariance matrix V can be factored as V = DCD, where D is a diagonal matrix of standard deviations and C is a correlation matrix. The matrices simplify the notation and organize the calculations for converting covariances to standard deviations and correlations, and vice versa. The Estimator and Optimizer allow modifications to D and C rather than to V directly, since it is useful to separate the covariance forecasting process into these two parts. The standard deviations can often be interpreted as the variability of the return forecast, where larger standard deviations indicate more uncertainty in the return forecast. However, New Frontier does not normally recommend exogenous forecasting of correlation matrices. Nevertheless, the capability exists for advanced users to change entries of the forecast correlation matrix. Users who opt to modify correlation matrices face the danger that the specified correlations are not consistent with each other and do not form a valid correlation matrix. In terms of linear algebra, the consequence is that one or more of the eigenvalues of the correlation and covariance matrices may be negative. This is equivalent to saying that the covariance matrix may not be positive semidefinite, a requirement for valid covariance matrices.
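
As a small illustration of the V = DCD relationship, the conversion in both directions can be written in numpy as follows. This is a sketch only, with made-up two-asset numbers.

    import numpy as np

    def corr_to_cov(corr, stdevs):
        """Build the covariance matrix V = D C D from correlations and standard deviations."""
        D = np.diag(stdevs)
        return D @ corr @ D

    def cov_to_corr(cov):
        """Recover the standard deviations and the correlation matrix C from V."""
        stdevs = np.sqrt(np.diag(cov))
        corr = cov / np.outer(stdevs, stdevs)
        return stdevs, corr

    # Hypothetical two-asset example: 20% and 30% volatility with correlation 0.5.
    C = np.array([[1.0, 0.5], [0.5, 1.0]])
    sd = np.array([0.20, 0.30])
    V = corr_to_cov(C, sd)        # [[0.04, 0.03], [0.03, 0.09]]
    print(cov_to_corr(V))         # round-trips back to sd and C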


Eigenvalues and Eigenvectors

A covariance matrix can be factored into its spectral decomposition V = QΛQᵀ, where Q is a matrix whose columns are the eigenvectors of V and Λ is a diagonal matrix of the eigenvalues of V. The n-by-n matrix of eigenvectors Q can be thought of as a rotation through n-dimensional space, preserving distances, and the n-by-n diagonal matrix Λ can be thought of as a rescaling of distances along those directions. Thus a zero eigenvalue represents a complete collapse of the variance of the data along the direction of the corresponding eigenvector.
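
As a concrete sketch of the decomposition, using a made-up three-asset covariance matrix, numpy's eigh routine returns the eigenvalues and eigenvectors of a symmetric matrix in ascending order of eigenvalue:

    import numpy as np

    # Hypothetical 3-asset covariance matrix (variances on the diagonal).
    V = np.array([
        [0.04,  0.018, 0.006 ],
        [0.018, 0.09,  0.012 ],
        [0.006, 0.012, 0.0225],
    ])

    # Spectral decomposition V = Q diag(eigvals) Q^T.
    eigvals, Q = np.linalg.eigh(V)
    print(eigvals)                                     # all strictly positive for a valid matrix
    print(np.allclose(Q @ np.diag(eigvals) @ Q.T, V))  # reconstruction check

    # The variance of a portfolio lying along an eigenvector equals its eigenvalue,
    # so an eigenvalue near zero means almost no variance in that direction.
    w = Q[:, 0]                   # direction of the smallest eigenvalue
    print(w @ V @ w)              # equals eigvals[0]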


Since covariance is a measure of dispersion in space, negative eigenvalues do not make sense, and in fact are prohibited by the definition of covariance. If you attempt an optimization with a covariance matrix that has negative eigenvalues, the result will not be a valid optimal portfolio, since the input covariance cannot describe a true set of assets. Zero eigenvalues can be problematic as well, since they correspond to an eigenvector with zero variance. This means that some linear combination of assets is constant, a situation which occurs when returns are measured relative to a benchmark that is itself a linear combination of the assets. If there is only one such zero eigenvalue, the Optimizer can find a solution, and will assume that the optimization is relative to a benchmark whose weights are proportional to the eigenvector corresponding to the zero eigenvalue. Covariance matrices with more than one zero eigenvalue will generate an error in the Optimizer and should be run through the Covariance Fixer if the problem at the root of the zero or negative eigenvalues cannot be addressed. Often such an eigenvalue indicates a more fundamental problem with the process, and it is always preferable to address the direct cause of the problem rather than to smooth it over with the Covariance Fixer.
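
The benchmark-relative case can be demonstrated directly. In the hypothetical numpy sketch below, the returns and benchmark weights are simulated for illustration; the covariance of benchmark-relative returns has exactly one (numerically) zero eigenvalue, and its eigenvector is proportional to the benchmark weights.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical simulated returns for 4 assets (rows = periods, columns = assets).
    returns = rng.normal(size=(1000, 4)) @ np.diag([0.02, 0.03, 0.04, 0.05])

    # Benchmark defined as a fixed-weight combination of the same assets.
    weights = np.array([0.4, 0.3, 0.2, 0.1])
    benchmark = returns @ weights

    # Benchmark-relative returns: each asset's return minus the benchmark return.
    relative = returns - benchmark[:, None]
    V = np.cov(relative, rowvar=False)

    eigvals, eigvecs = np.linalg.eigh(V)
    print(eigvals[0])                           # smallest eigenvalue is numerically zero
    print(eigvecs[:, 0] / eigvecs[:, 0].sum())  # proportional to the benchmark weights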


Although negative and zero eigenvalues usually occur under specific circumstances, historically estimated covariance matrices frequently have eigenvalues close to zero. When this is just a result of legitimately highly correlated assets, it is only a numerical concern. However, the greater the number of assets and the shorter the history, the more likely it is that certain combinations of assets will appear highly correlated, resulting in small eigenvalues. Small positive eigenvalues can also cause problems for the Optimizer, since some of the calculations involve total or partial inversion of the covariance matrix, and small eigenvalues can make that process numerically unstable. “Smallness” is measured with respect to the largest eigenvalue of the covariance matrix and is stated in terms of the condition number, the ratio of the largest to the smallest eigenvalue of the matrix. Because of this numerical instability, we recommend using the Covariance Fixer whenever the condition number is high.
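
To give a concrete sense of the kind of adjustment involved, the sketch below floors the eigenvalues of an inconsistent correlation matrix at a chosen minimum and rescales the result back to a unit diagonal. This eigenvalue-clipping approach and the numbers in it are illustrative only; it is not New Frontier's actual Covariance Fixer algorithm.

    import numpy as np

    def clip_eigenvalues(corr, min_eigenvalue=1e-8):
        """Illustrative repair: floor the eigenvalues of a correlation matrix."""
        # Symmetric eigendecomposition: corr = Q diag(eigvals) Q^T.
        eigvals, eigvecs = np.linalg.eigh(corr)
        # Replace any eigenvalue below the floor with the floor itself.
        clipped = np.maximum(eigvals, min_eigenvalue)
        repaired = eigvecs @ np.diag(clipped) @ eigvecs.T
        # Rescale so the diagonal is exactly 1 again (i.e., a correlation matrix).
        d = np.sqrt(np.diag(repaired))
        repaired = repaired / np.outer(d, d)
        np.fill_diagonal(repaired, 1.0)
        return repaired

    # A deliberately inconsistent correlation matrix: these three correlations
    # cannot all hold at once, so one eigenvalue is negative.
    bad = np.array([
        [1.0, 0.9, 0.2],
        [0.9, 1.0, 0.9],
        [0.2, 0.9, 1.0],
    ])
    print(np.linalg.eigvalsh(bad))                    # smallest eigenvalue is negative
    print(np.linalg.eigvalsh(clip_eigenvalues(bad)))  # all eigenvalues are now positive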

For further information on eigenvalues, consult a good linear algebra text, such as Michael Artin's Algebra. For correlation matrix computation, consult a good multivariate statistics book, such as Johnson and Wichern's Applied Multivariate Statistical Analysis. Convex Optimization by Boyd and Vandenberghe and New Frontier's own Efficient Asset Management provide insight into how these concepts apply to optimization.

How to use the Covariance Fixer

  1. Access the Covariance Fixer through the Display Menu.

  2. Click the Copy from Inputs Button to populate the Input Correlations table with the correlations you entered on the Inputs Worksheet.

  3. Enter the number of decimal places that you wish the Optimizer to consider in the Significant Digits field.

  4. Enter the minimum eigenvalue in the Minimum Eigenvalue field.

  5. Click the Fix Covariance Button.

  6. Review the Fixed Correlations Table and the Differences in Correlations Table. (The sketch after these steps illustrates the kind of rounding and differences involved.)

  7. If the matrix is acceptable, click the Copy to Inputs Button to transfer the correlations into the case.

  8. Proceed with your optimization.
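
For a numerical sense of steps 3 through 6, the rounding and the resulting differences can be written as follows. The input correlations, the two-decimal setting, and the variable names are made up for illustration; the actual tables are produced by the Covariance Fixer page itself.

    import numpy as np

    # Hypothetical input correlations, as copied from the Inputs Worksheet in step 2.
    input_corr = np.array([
        [1.0,        0.83712345, 0.51298765],
        [0.83712345, 1.0,        0.46543210],
        [0.51298765, 0.46543210, 1.0       ],
    ])

    decimal_places = 2                                 # step 3: precision the Optimizer should consider
    fixed_corr = np.round(input_corr, decimal_places)  # the rounding part of step 5

    # Step 6: the Differences in Correlations table shows how far the fixed
    # correlations moved from the original inputs.
    differences = fixed_corr - input_corr
    print(np.abs(differences).max())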

© 2024 New Frontier Advisors