
SEC Puts Liquidity Estimation Back in Focus

Date: November 16, 2017

Liquidity back in focus

As part of the SEC investment company reporting modernization rules published in October 2016, all open-end US mutual funds and ETFs will be required to implement liquidity risk management programs overseen by their fund boards, with compliance beginning in December 2018. This includes reporting liquidity estimates for all fund positions, maintaining a highly liquid investment minimum, and monitoring the level of illiquid securities.

This is a significant step up from the prior regulatory baseline set out for US funds in the Investment Company Act of 1940. In a nod to the increasing breadth and complexity of fund products offered to investors (driven to a large degree by greater usage of derivatives), the SEC notes that prior to the modernization efforts ‘it was understood that redeemability meant that an open-end fund had to have a liquid portfolio [footnote 1].’ The SEC no longer believes that all managers share this understanding. The other core driver cited for the rulemaking is the evolution of technology, such as cloud computing, in which the SEC sees the opportunity to collect greater amounts of data in a more streamlined and standardized manner. In addition to ensuring compliance, the SEC anticipates that the data gathered will inform policy-making decisions and assist investors in understanding and comparing fund liquidity profiles.

The SEC defines liquidity risk as the risk that a fund could not meet requests to redeem shares issued by the fund without significant dilution of remaining investors’ interests. While implementing proper governance and procedures will need to be a key part of a fund’s liquidity risk management program, here we focus on the quantitative estimation of how long, or at what cost, a fund’s position in a security could be liquidated.


The need for data

The core of the liquidity estimation exercise is data. Consider equities, where a fund manager has a good understanding of traded volume as well as the depth of the limit order book via data from the exchange. This data can be used to answer two core questions in liquidity estimation:

  1. Time to Liquidate: How long will it take to liquidate a position with negligible market impact?
  2. Market Impact: What reduction in fair market value will be incurred if a position is liquidated in one day, or more generally what is the impact of liquidation over the course of X days?

The SEC liquidity requirements are very much aligned with the time-to-liquidate question. The genesis of this lies in the original 1940 Act regulations, which concern themselves with protecting shareholders from dilution and ensuring that on a daily basis shareholders can redeem their pro-rata share of the fund’s NAV and have cash delivered within seven days. Further, the SEC modernization rules will also offer funds the option of swing pricing, which allows a fund to pass the liquidity cost of redemptions on to the redeeming shareholders rather than to the fund itself, as sketched below.
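To make swing pricing concrete, here is a minimal Python sketch of a redemption-side swing. The swing threshold and swing factor values are hypothetical assumptions for illustration; an actual program would calibrate them to estimated transaction costs.

```python
def swung_nav(nav_per_share: float, net_redemptions_pct: float,
              swing_threshold: float = 0.01, swing_factor: float = 0.002) -> float:
    """Return the NAV per share after a redemption-side swing.

    nav_per_share       -- unswung NAV per share
    net_redemptions_pct -- net redemptions as a fraction of fund assets
    swing_threshold     -- redemption level that triggers the swing (hypothetical)
    swing_factor        -- estimated liquidity cost passed to redeemers (hypothetical)
    """
    if net_redemptions_pct >= swing_threshold:
        # Redeemers transact at a lower NAV, bearing the liquidity cost
        # instead of the remaining shareholders.
        return nav_per_share * (1 - swing_factor)
    return nav_per_share

# Example: 3% net redemptions breach the 1% threshold, so NAV swings
# from 100.00 down to 99.80 for that day's redeeming shareholders.
print(swung_nav(100.00, 0.03))  # 99.8
```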

We can build a simple model estimating how long it might take to liquidate an equity position by comparing the average daily shares traded to the shares owned by the fund. We’ll make only one assumption in this model: what percentage of an equity’s average daily volume can be traded without a significant market impact on the share price. Sometimes called the participation rate, a common assumption for this daily volume usage is 15%. Thus, a fund holding 3,000 shares of an equity in which 10,000 shares trade daily on average would take two days to liquidate at 15% participation (1,500 shares per day).
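A minimal version of this time-to-liquidate model in Python reproduces the example above; the 15% participation rate is the model’s single assumption.

```python
import math

def days_to_liquidate(shares_held: float, avg_daily_volume: float,
                      participation_rate: float = 0.15) -> int:
    """Estimate whole days needed to exit a position with negligible
    market impact, trading only a fixed fraction (the participation
    rate) of average daily volume each day."""
    tradable_per_day = participation_rate * avg_daily_volume
    return math.ceil(shares_held / tradable_per_day)

# The example from the text: 3,000 shares held, 10,000 shares of average
# daily volume, 15% participation -> 1,500 shares per day -> 2 days.
print(days_to_liquidate(3_000, 10_000))  # 2
```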

The challenges in liquidity estimation truly emerge as we move outside of the exchange-traded equity and futures markets and into fixed income. Data commonly available for bonds includes the bid-ask spread, TRACE data for US bonds, a bond’s amount outstanding, and characteristics such as currency, sector, maturity, seniority, and credit quality. Where available, information from dealers and electronic trading venues on voice and electronic trades (both inquiries and executions) is also useful. Rarer still, but valuable, is survey data collected from bond traders on the liquidity of benchmark issues.

Probably the most widely available and clearly valuable piece of data is the bid-ask spread, which tells a manager about the relative liquidity of a particular bond. However, bid-ask spread does not translate directly into how much volume of the bond a manager can hope to trade daily, nor does it give a good sense of market depth for all bonds. We can look further for useful data to add to our model; for instance, TRACE provides data on executed US bond trades, with disseminated trade sizes capped at $5mm for investment-grade and $1mm for non-investment-grade bonds. We can also use relative relationships between bonds to help our model. For example, bonds issued in major currencies will be more liquid than those in emerging-market currencies, and it is reasonable to assume that issues (or issuers) with larger amounts outstanding will see more trading activity.
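As an illustration of how such relative relationships might be combined, here is a toy rules-based scoring function. The buckets, weights, and cutoffs are invented for the example and would need calibration against observed trading data before any real use.

```python
# Toy rules-based scoring of relative bond liquidity from commonly
# available characteristics. All weights and cutoffs are illustrative
# assumptions, not a calibrated model.

MAJOR_CCYS = {"USD", "EUR", "JPY", "GBP"}

def liquidity_score(bid_ask_bps: float, amount_outstanding_mm: float,
                    currency: str) -> float:
    """Higher score = more liquid. Combines the relative relationships
    described above: tighter spreads, larger issues, and major-currency
    denomination all point toward better liquidity."""
    score = 0.0
    # Tighter bid-ask spread -> more liquid (spread in basis points).
    score += max(0.0, 50.0 - bid_ask_bps)
    # Larger amount outstanding -> more trading activity (USD millions).
    score += min(amount_outstanding_mm / 100.0, 25.0)
    # Major-currency issues trade more than emerging-currency issues.
    score += 10.0 if currency in MAJOR_CCYS else 0.0
    return score

# An on-the-run Treasury-like bond vs. a small emerging-market corporate.
print(liquidity_score(2.0, 20_000, "USD"))  # 83.0
print(liquidity_score(80.0, 300, "MXN"))    # 3.0
```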

Bond liquidity estimation is inherently limited by the fact that the bulk of the fixed income universe does not trade nearly as often as the equity universe. We must therefore rely on liquid, on-the-run government bonds and corporates to obtain sufficient data for direct CUSIP-level estimates of liquidity. Any model for the remainder of the less liquid bond universe must either build on this observable liquid-bond data or use other available data to make reasonable assumptions about liquidity. Only in this manner will we have a model able to satisfy the SEC requirement of liquidity categorization for all portfolio securities.


Bias vs. Variance, or how do we choose a model?

No two bond liquidity models are exactly alike. Even if two models take the same general approach, they likely have different rules for weighting inputs and filling missing data as estimates move down the bond liquidity spectrum. A statistical concept worth discussing here, because it offers insight into how a bond liquidity model may be constructed, is the bias vs. variance tradeoff. Here bias can be thought of as error in liquidity estimation caused by purposely excluding inputs from the model, and variance as error caused by including inputs that may be relevant for one class of bonds but not others.

For instance, a high-bias/low-variance model for bond liquidity makes use of fewer model inputs (and thus fewer assumptions). It may be less likely to provide very precise liquidity estimates across large segments of bonds, but it will also be less prone to producing estimates that don’t make intuitive sense. An example might be a rules-based system that uses only the data available for each bond. At the bottom of the liquidity spectrum, where little data is available, such a model will undoubtedly have high bias. That may be preferable for managers concerned with the ease of explaining their approach to regulators.

In contrast, a low-bias/high-variance model likely makes use of all the data it can get its hands on. It may also employ a machine-learning-style approach to search for relationships in the large input data set. These models have the opportunity to make precise estimates, but can just as easily produce poor or non-intuitive estimates by finding spurious relationships in the large amount of data used. Their output can also be difficult to sanity-check and document as part of a manager’s liquidity program.

The correct place for a manager’s liquidity risk program on the bias vs. variance scale likely lies between these two extremes, and different models will likely be applied to different segments of the bond universe. Within each segment we seek liquidity estimation methods that use only the most relevant predictors of liquidity, but also generalize well to the segment as a whole. Such models can provide useful estimates of bond liquidity while maintaining interpretability, as the sketch below suggests.
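The tradeoff can be demonstrated on synthetic data: below, a shallow decision tree (high bias) and a very deep one (high variance) are compared by cross-validation, a standard way to look for the middle ground described above. The data-generating process and the scikit-learn model choices are purely illustrative stand-ins for a real bond liquidity model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))  # 10 candidate liquidity predictors
# Invented ground truth: liquidity depends on only two predictors, plus noise.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

for depth in (2, 20):
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"max_depth={depth}: out-of-sample R^2 = {score:.2f}")

# The shallow tree underfits (bias); the deep tree can chase noise in the
# eight irrelevant predictors (variance). Comparing cross-validated scores
# across depths is one way to locate the middle ground the text recommends.
```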


Liquidity when it matters most

Many market participants would agree that the real liquidity estimation problem is determining how long it takes to liquidate positions to raise cash in stressed markets, rather than in normally functioning markets. The SEC recognizes this and will also require managers to estimate time to liquidate in both normal and stressed market conditions. Even less data exists for historical periods of market stress, so a liquidity model must make another set of assumptions in order to provide these regulatory estimates. Further, in a forced liquidation scenario there will very likely be a correlated decrease in asset market values. A model taking this into account should include both an estimate of the market impact cost of liquidation and an estimate of the potential market value losses on the securities as they are liquidated.
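A stressed-market estimate might look like the following sketch, which haircuts tradable volume, applies a stylized square-root market impact rule, and accrues mark-to-market losses over the liquidation window. Every parameter value here is an assumption for illustration, not a calibrated input or a form prescribed by the SEC.

```python
import math

def stressed_liquidation_cost(position_value: float, adv_value: float,
                              participation: float = 0.15,
                              volume_haircut: float = 0.5,
                              impact_coeff: float = 0.1,
                              daily_mkt_decline: float = 0.01) -> dict:
    """Rough sketch of liquidation under stress, assuming:
      - traded volume falls by `volume_haircut` in stressed markets,
      - market impact follows a stylized square-root-of-participation
        rule with coefficient `impact_coeff`,
      - asset values decline `daily_mkt_decline` per day while we sell.
    All parameter values are illustrative.
    """
    stressed_adv = adv_value * (1 - volume_haircut)
    per_day = participation * stressed_adv
    days = math.ceil(position_value / per_day)
    impact_cost = position_value * impact_coeff * math.sqrt(participation)
    # The average position is held for roughly half the liquidation window.
    mark_loss = position_value * daily_mkt_decline * days / 2
    return {"days": days, "impact_cost": impact_cost, "market_loss": mark_loss}

# A $30mm position in a bond that trades $20mm/day in normal markets:
# -> 20 days, ~$1.16mm impact cost, ~$3.0mm market value loss (illustrative).
print(stressed_liquidation_cost(30e6, 20e6))
```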

Assumptions are unavoidable in the liquidity estimation process. Managers must be aware of the estimation approach they have chosen and fully understand both its relative strengths and weaknesses. Despite these challenges, combining data-driven liquidity estimation with a proper governance framework can put managers on much more solid footing in understanding, and making informed decisions about, the liquidity risks present in their funds.


Footnotes:

READ: “The challenge of stress-testing liquidity risk” by StatPro Managing Director Dario Cintioli