This post is a review of the book Modern Asset Allocation for Wealth Management, by David Berns, PhD. The long story short is that I think the book is a must-read for a new and different perspective on asset management, though there are some things I’d like to see that could easily be covered in a second edition.

In my opinion, rather than provide a single how-to portfolio like some other books, such as Meb Faber’s Ivy Portfolio or Adaptive Asset Allocation (both of which are fairly good reads), MAAWM presents a completely new way of thinking about portfolio construction: namely, a quantitative way to gauge a prospective investor’s risk appetite, and a way to incorporate behavioral finance into systematic portfolio construction. As someone who is completely quantitative, I had never even thought about incorporating something as nebulous to quantify as individual risk preference, but after reading this book, I think doing so should be mandatory for any financial adviser. (Note: as I was never part of a client-facing role at a buy-side firm, I was never sponsored for a Series 65, so I’m not an official financial adviser.) The three behavioral risk traits are risk aversion (would you accept a bet that pays $3.85 or $0.10 with 50% chance each? 60/40? 70/30? 80/20? 90/10? Any of them?), loss aversion (would you accept a lottery with a 50% chance to lose you $3 or win $6? What if the loss were $4? $5? $6?), and reflection (would you rather take a sure $20 or a one-third chance at $60? What about losses: a guaranteed loss of $20, or a one-third chance to lose $60?). Depending on how a prospective client answers such questions (with dollar amounts scaled in proportion to their annual income), one can formulate a multi-parameter utility function with a more nuanced shape than a simple log-scaling utility of gains and losses, in order to capture more subtle client risk preferences. This is the first time I’ve seen the idea of quantitatively incorporating behavioral finance into systematic portfolio construction, and I think it’s an absolutely fantastic insight.
A younger client who just wants the highest expected return is a much different client than one who can’t risk a large drawdown, and if a firm offers multiple strategies, such measurements allow a much more customized approach for each individual client.
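To make the idea concrete, here is a minimal sketch of a three-parameter, prospect-theory-style utility function of the kind the book describes. The functional form and the parameter names (`gamma` for risk aversion, `lam` for loss aversion, `reflect` for the reflection effect) are my own illustrative choices, not the book’s calibration.

```python
def utility(x, gamma=0.5, lam=2.0, reflect=True):
    """Prospect-theory-style utility: concave over gains, loss-averse
    (and optionally risk-seeking, via reflection) over losses.
    Parameters are illustrative, not the book's calibration."""
    if x >= 0:
        return x ** gamma              # risk aversion: diminishing marginal utility of gains
    if reflect:
        return -lam * ((-x) ** gamma)  # reflection: curvature flips over losses, scaled by loss aversion
    return -lam * (-x)                 # loss aversion only: linear penalty on losses

# The book's loss-aversion lottery: 50% chance to win $6, 50% to lose $3.
# Expected *value* is +$1.50, but for a sufficiently loss-averse client,
# expected *utility* is negative -- they would rationally decline the bet.
eu = 0.5 * utility(6.0) + 0.5 * utility(-3.0)
```

The point is that two clients facing the same gamble can rationally make opposite choices, and that difference is exactly what the questionnaire is trying to measure.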

Beyond that, the book also brings to mind the idea that higher portfolio moments can affect a client’s utility function: namely, skewness and kurtosis. Ideally, one wants to maximize portfolio skew (cutting losers and letting winners run, which is one reason among others that I swear by momentum and trend-following) while minimizing kurtosis (tail risk is painful!). A simple mean-variance optimizer doesn’t account for these higher moments, and they’re important. However, this book doesn’t really present a mathematical way to tie the empirical calculation and optimization of skewness and kurtosis back into the three-parameter utility function, so much as build up the idea that these moments are important, and for very good reason. While I fully agree with the assertion of the importance of higher moments, in my experience, incorporating the third and fourth moments into portfolio optimization is *hard*.
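For readers who haven’t worked with moments beyond variance, here’s a quick numpy illustration of the two quantities in question. The "trend-following-like" payoff distribution below (many small losses, occasional large wins) is a toy of my own making, just to show what positive skew looks like:

```python
import numpy as np

def higher_moments(returns):
    """Sample skewness and excess kurtosis of a return series.
    Positive skew (a fat right tail) is desirable; positive excess
    kurtosis (fat tails generally) is the tail risk the book warns about."""
    r = np.asarray(returns, dtype=float)
    z = (r - r.mean()) / r.std()
    skew = (z ** 3).mean()
    excess_kurt = (z ** 4).mean() - 3.0  # zero for a normal distribution
    return skew, excess_kurt

# Toy payoff: 80% chance of a small loss, 20% chance of a larger win.
rng = np.random.default_rng(0)
trendlike = np.where(rng.random(10_000) < 0.8, -0.01, 0.05)
s, k = higher_moments(trendlike)  # s comes out positive
```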

Whereas mean-variance optimization (or rather, momentum selection and minimum-variance optimization) backtests are relatively easy to run in terms of computing time, incorporating co-skewness and co-kurtosis calculations gets *very* messy, *very* quickly, and if I recall correctly, demands global optimizers such as those found in R’s PortfolioAnalytics package (I could be mistaken here). This has very real computational costs: performing optimization on the third and fourth portfolio moments *ex ante* is most likely much more computationally expensive in runtime than deploying a heuristic on them *ex post*. To understand just how complicated co-skewness and co-kurtosis get, I recommend the paper “Estimation and Decomposition of Downside Risk for Portfolios with Non-Normal Returns” by Kris Boudt, Brian Peterson, and Christophe Croux. It’s a *lot* more complicated than minimizing a matrix product of weights and a covariance matrix subject to full-investment and long-only constraints, which means far fewer simulations can be run to measure portfolio robustness to perturbations in parameter settings (e.g. lookback periods, rebalance dates, etc.).
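To see where the mess comes from, here’s a sketch of computing portfolio skewness and kurtosis from the co-moment tensors (this is the standard textbook construction, not the Boudt–Peterson–Croux estimator itself). The covariance matrix has n² entries, but the co-skewness tensor has n³ and the co-kurtosis tensor n⁴, which is exactly the blowup that makes *ex ante* optimization over these quantities so expensive:

```python
import numpy as np

def portfolio_higher_moments(w, R):
    """Portfolio skewness and kurtosis via the co-moment tensors.
    w: (n,) weights; R: (T, n) matrix of asset returns.
    M3 has n^3 entries, M4 has n^4 -- contrast with the n^2 covariance."""
    T = len(R)
    D = R - R.mean(axis=0)                                    # demeaned returns
    M3 = np.einsum('ti,tj,tk->ijk', D, D, D) / T              # co-skewness tensor
    M4 = np.einsum('ti,tj,tk,tl->ijkl', D, D, D, D) / T       # co-kurtosis tensor
    var = w @ (D.T @ D / T) @ w                               # portfolio variance
    skew = np.einsum('i,j,k,ijk->', w, w, w, M3) / var ** 1.5
    kurt = np.einsum('i,j,k,l,ijkl->', w, w, w, w, M4) / var ** 2
    return skew, kurt

rng = np.random.default_rng(1)
R = rng.normal(0.01, 0.04, size=(200, 3))
w = np.array([0.5, 0.3, 0.2])
p_skew, p_kurt = portfolio_higher_moments(w, R)
```

Note that these reduce to the ordinary skewness and kurtosis of the weighted return series; the tensor form is what an optimizer has to differentiate through, and it is generally non-convex in the weights.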

Next, the book talks about asset selection, and here it presents yet *another* fantastic idea I’ve seen nowhere else: the Mimicking Portfolio Tracking Error (MPTE). That is, given your current universe of assets, can an asset you’re considering adding to the universe be replicated by a combination of the others? The way to check is with *constrained* least-squares regression, with the constraint that the weights on the other assets sum to 100%. I’ve never seen this done before, but it seems both R and Python have ways to do it. A critique I have of this chapter, though, is that the assets in question aren’t ETFs one can go buy on the open market during trading hours, but academic asset-class indices from places like Kenneth French’s data website. And while that’s certainly fantastic for getting more data further back in time, the lack of a translation from these academic/illustrative asset classes to ETF and mutual fund proxies with longer histories felt slightly disappointing.

The punchline here is that if the tracking error of the candidate asset, expressed as a linear combination of assets in the existing universe, is lower than 5%, then there’s probably a great deal of collinearity between the candidate and the existing universe, which has a chance to confuse an optimization algorithm. The other fantastic idea presented here is the distinction between performance assets (high return at the cost of high volatility, low skew, and high kurtosis) and diversifying assets (lower return, but they smooth the portfolio’s trajectory by reducing overall volatility and kurtosis). Basically, high returns don’t mean much if a client can’t stick with them because of the emotional roller coaster ride the portfolio is on, so a better risk-adjusted portfolio is better, especially if one can leverage the portfolio up to get better returns.

Next comes the idea with which I have a philosophical disagreement: assessing expected returns by testing several decades of monthly return data for stationarity using the Kolmogorov-Smirnov test. That is, the idea that what an asset class has done in aggregate over a very long period of time (about a single investing lifetime), it should continue to do in aggregate over another prolonged period.
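As I read it, the mechanics boil down to something like the following sketch: split a long monthly return history in half and ask, via a two-sample KS test, whether both halves plausibly come from the same distribution. The split-in-half design and thresholds here are my illustration, not necessarily the book’s exact procedure:

```python
import numpy as np
from scipy.stats import ks_2samp

def stationarity_check(returns, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test between the first and second
    halves of a return series. Failing to reject (high p-value) is weak
    evidence of stationarity -- the assumption I push back on below."""
    r = np.asarray(returns, dtype=float)
    first, second = r[: len(r) // 2], r[len(r) // 2:]
    stat, pvalue = ks_2samp(first, second)
    return pvalue, pvalue >= alpha  # True -> "can't reject same distribution"

rng = np.random.default_rng(42)
# 60 years of synthetic monthly returns from ONE regime:
calm = rng.normal(0.005, 0.04, 720)
p_same, _ = stationarity_check(calm)
# Now a regime change: the second 30 years have a very different mean.
shifted = np.concatenate([rng.normal(0.0, 0.04, 360),
                          rng.normal(0.08, 0.04, 360)])
p_diff, _ = stationarity_check(shifted)  # p_diff is tiny: regimes differ
```

The trouble, of course, is that a test like this can only tell you the *past* sample looks internally consistent; it says nothing about whether the next thirty years belong to the same regime.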

By far the best counterexample I can think of: for roughly 50 years coming out of World War II, the US stock market (i.e. the S&P 500) was on a tear (after all, the rest of the world was rebuilding). Then, if you invested in the S&P at the top of the dot-com boom, right as George W. Bush took office, your actual return over the next ten years was a *negative* 1% annualized (i.e. a CAGR of -1%). For trivia’s sake, had one invested in the S&P at the top before the crash of the Great Depression, it would have taken until 1954 just to recover one’s initial stake. Had one invested at the top of the dot-com bubble, aside from a minor new equity high in 2007, it would have taken until roughly 2013 or 2014 to make significant new equity highs.
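For anyone who wants the arithmetic behind that annualized figure, CAGR is just the geometric growth rate; the $90 ending value below is my illustrative round number consistent with a roughly -1% CAGR, not a precise S&P figure:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant annual return that
    turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# $100 invested at the 2000 top, worth roughly $90 a decade later,
# is about a -1% CAGR.
decade = cagr(100.0, 90.0, 10)
```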

That’s…painful, to say the least, when U.S. equities are supposed to be a return-driving asset. Again, for those who extol the virtues of a permanent-portfolio-style buy/hold/rebalance approach, the approach in this book is second to none. However, those who have read my blog since the beginning know that I swear by momentum and trend-following trading. My volatility trading strategy is a trend-follower (I simply think there’s no other way to trade volatility, since events like Feb. 5, 2018, which caused XIV to lose 95% in a day and be terminated, mandate it), and the various tactical asset allocation strategies I’ve blogged about here are *also* momentum-based trading strategies (all of which can be found on AllocateSmartly). In fact, for all the rightful condemnation that the theoretical maximum-Sharpe-ratio Markowitz portfolio gets, a global minimum-variance optimization on a set of assets selected *by* momentum is its practical form, which is the Adaptive Asset Allocation algorithm. So if one swears by buy and hold, I think the approach outlined in this book is terrific; but if active trading is more one’s speed, take just this one chapter with a grain of salt and understand that the book recommends a permanent-portfolio-style buy, hold, and rebalance approach, with the belief that asset returns will work themselves out over a long enough time horizon. In my opinion, rebalancing a five-asset portfolio like the one in Adaptive Asset Allocation (or KDA, which uses the same universe) isn’t too much to ask for once a month (though it should be done in a tax-free account if possible).
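The momentum-selection-plus-minimum-variance two-step mentioned above can be sketched as follows. This is a toy of the general shape of Adaptive Asset Allocation, not the published algorithm: the 126-day lookback, top-5 selection, and long-only bounds are illustrative parameter choices of mine.

```python
import numpy as np
from scipy.optimize import minimize

def adaptive_sketch(prices, top_n=5, lookback=126):
    """Toy momentum + minimum-variance two-step.
    Step 1: rank assets by trailing total return, keep the top N.
    Step 2: long-only minimum-variance weights on the survivors.
    prices: (T, n) price matrix. Returns (selected indices, weights)."""
    rets = np.diff(np.log(prices), axis=0)
    momentum = prices[-1] / prices[-lookback] - 1.0     # trailing total return
    picks = np.argsort(momentum)[::-1][:top_n]          # top N by momentum
    sub = rets[-lookback:, picks]
    cov = np.cov(sub, rowvar=False)
    n = len(picks)
    obj = lambda w: w @ cov @ w                         # portfolio variance
    cons = {'type': 'eq', 'fun': lambda w: w.sum() - 1.0}
    res = minimize(obj, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return picks, res.x

# Synthetic random-walk prices for six assets, just to exercise the code.
rng = np.random.default_rng(7)
steps = rng.normal(0.0, 0.01, size=(300, 6))
prices = 100.0 * np.exp(np.cumsum(steps, axis=0))
picks, weights = adaptive_sketch(prices)
```

The design point is the one from the text: momentum does the asset *selection*, so the optimizer only has to solve the well-behaved minimum-variance problem rather than the estimation-error-prone maximum-Sharpe one.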

Speaking of which, one last issue this part of the book touches upon is taxes, which very few asset allocation books consider. I certainly don’t consider them in the formulation of my strategies, as I assume that an asset allocation firm knows how to legally avoid taxes, place clients’ funds into various tax-free or less-taxable retirement accounts, and so on. I’m a strategist, not a tax accountant, so I’m no expert there. However, this book does touch upon the topic, so for an individual running a one-man office, this is most likely required reading.

The last topic the book touches on is combining everything into optimized portfolios: prospect theory and risk tolerance parameters are established, assets selected, returns estimated, and then various portfolios are recommended for given risk profiles. In my opinion, this section was slightly lacking in that there wasn’t an appendix with a portfolio allocation for all 60 permutations of the three risk parameters presented earlier in the book, but that’s a very minor nitpick.

So, that’s the book. Long story short, there are quite a few groundbreaking ideas presented here that make this book a must-read, no questions asked, and any quantitative strategist should *also* read it immediately. That said, it does lack an out-of-the-box “if you want an easy-to-implement, ETF-based translation of this strategy, here’s what you do,” which I’d love to see in a second edition (ideally with mutual fund proxies for backtesting). Furthermore, a few of the ideas may be worth taking with a grain of salt.

All in all, a fantastic read that I wholeheartedly recommend any modern-day advisor pick up, and a source of some very interesting ideas for quantitative strategists.

Thanks for reading.

NOTE: I am always looking to hear about interesting opportunities that can make use of my skillset. To contact me, feel free to reach out to me on my LinkedIn.
