A Review of Modern Asset Allocation For Wealth Management, by David M. Berns, PhD

This post will be a review of the book Modern Asset Allocation for Wealth Management by David M. Berns, PhD. Long story short: I think the book is a must-read for a new and different perspective on asset management, though there are some things I'd like to see that could easily be covered in a second edition.

In my opinion, rather than provide a single how-to portfolio like some other books, such as Meb Faber's Ivy Portfolio or Adaptive Asset Allocation (both of which are fairly good reads), MAAWM presents a completely new way of thinking about portfolio construction: namely, a quantitative way to gauge a prospective investor's risk appetite, and to incorporate behavioral finance into systematic portfolio construction. As someone completely quantitative, I had never even thought about quantifying something as nebulous as individual risk preference, but after reading this book, I think it should be mandatory for any financial adviser to consider. (Note: as I was never part of a client-facing role at a buy-side firm, I was never sponsored for a Series 65, so I'm not an official financial adviser.)

The three behavioral risk traits are risk aversion (would you accept a bet that paid $3.85 or $0.10 with 50% chance each? 60/40? 70/30? 80/20? 90/10? Any of them?), loss aversion (would you accept a lottery with a 50% chance to lose you $3 or win $6? What if the loss were $4? $5? $6?), and reflection (would you rather take a sure $20 or a 1/3 chance at $60? What about for a loss: a guaranteed loss of $20, or a 1/3 chance to lose $60?). Depending on how a prospective client answers such questions (with dollar amounts scaled in proportion to their annual income), one can formulate a multi-parameter utility function with a more nuanced shape than a simple log-scaling utility function of gain or loss, thereby capturing more subtle client risk preferences. This is the first time I've seen the idea of quantitatively incorporating behavioral finance into systematic portfolio construction, and I think it's an absolutely fantastic insight.
If someone’s younger and just wants the highest expected return, that’s a much different client than one who can’t risk a large drawdown, and if there are multiple offered strategies, such measurements mean a much more customized approach for different individual clients.
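To give a flavor of what such a utility function might look like (this is my own toy sketch, not the book's actual specification; the parameter names and values are hypothetical), consider a piecewise power utility with separate curvature for gains and losses, plus a loss-aversion multiplier:

```python
def utility(x, gain_curve=0.5, loss_curve=0.5, loss_aversion=2.0):
    """Toy prospect-theory-style utility: concave over gains, reflected
    curvature over losses, with losses penalized by a loss-aversion
    multiplier. All parameter values are illustrative, not from the book."""
    if x >= 0:
        return x ** gain_curve
    return -loss_aversion * ((-x) ** loss_curve)

# A client with loss_aversion = 2 feels a $4 loss twice as strongly as a
# $4 gain of the same curvature:
print(utility(4))   # 2.0
print(utility(-4))  # -4.0
```

A client's answers to the bet/lottery questions above would, in principle, pin down these parameters, and different answers produce very differently shaped curves.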

Beyond that, the book also brings up the idea that higher portfolio moments can affect a client's utility function: namely, skewness and kurtosis. Ideally, one wants to maximize portfolio skew (cutting losers and letting winners run, which is one reason among others that I swear by momentum and trend following) while minimizing kurtosis (tail risk is painful!). The point is that a simple mean-variance optimizer doesn't account for these higher moments, and that they're important. However, the book doesn't really present a mathematical way to tie the empirical calculation and optimization of skewness and kurtosis back into the three-parameter utility function, so much as build up the idea that these moments matter, and for very good reason. While I fully agree with the importance of higher moments, in my experience, incorporating the third and fourth moments into portfolio optimization is *hard*.

Whereas mean-variance optimization (or rather, momentum selection and minimum-variance optimization) backtests are relatively cheap to run in terms of computing time, incorporating co-skewness and co-kurtosis calculations gets *very* messy, *very* quickly, and if I recall correctly, demands global optimizers such as those found in R's PortfolioAnalytics package (I could be mistaken here). This has very real computational costs: performing optimization on the third and fourth portfolio moments *ex ante* is most likely much more computationally expensive than applying a heuristic *ex post*. To understand just how complicated co-skewness and co-kurtosis get, I recommend the paper "Estimation and Decomposition of Downside Risk for Portfolios with Non-Normal Returns" by Kris Boudt, Brian Peterson, and Christophe Croux. It's a *lot* more complicated than minimizing a matrix product of weights and a covariance matrix subject to full-investment and long-only constraints, and the added runtime means far fewer simulations can be run to measure portfolio robustness to perturbations in parameter settings (e.g. lookback periods, rebalance dates, etc.).
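To put a rough number on that messiness, the count of unique co-moments grows combinatorially with the universe size; here's a quick back-of-the-envelope calculation (mine, not from the paper):

```python
from math import comb

def n_comoments(n_assets, moment):
    """Number of unique co-moments of a given order for n assets:
    combinations with repetition, C(n + k - 1, k)."""
    return comb(n_assets + moment - 1, moment)

# For a modest 10-asset universe:
for k, name in [(2, "covariance"), (3, "co-skewness"), (4, "co-kurtosis")]:
    print(name, n_comoments(10, k))
# covariance 55, co-skewness 220, co-kurtosis 715
```

So even before touching the optimizer, the number of parameters to estimate explodes as you go up the moment ladder, which is part of why the co-moment estimators in that paper are so involved.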

Next, the book talks about asset selection. Here it presents yet *another* fantastic idea I've seen nowhere else: the Mimicking Portfolio Tracking Error (MPTE). That is, given your current universe of assets, can an asset you're considering adding be replicated by a combination of the others? The way to check is with *constrained* least-squares regression, with the constraint that the weights on the other assets add up to 100%. I've never seen this done before, but it seems both R and Python have ways to do it. A critique I have of this chapter, though, is that the assets in question aren't ETFs one can buy on the open market during trading hours, but rather academic asset-class indices from places like Kenneth French's data website. And while that's certainly fantastic for getting data further back in time, the lack of a translation from these academic/illustrative asset classes to ETF and mutual fund proxies with longer histories felt slightly disappointing.
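As a sketch of how one might implement the MPTE check (my own construction, not the book's code; the function and variable names are made up), solve the equality-constrained least-squares problem directly via its KKT system, and report the residual's standard deviation as the tracking error:

```python
import numpy as np

def mimicking_portfolio(candidate, universe):
    """Solve min_w ||candidate - universe @ w||^2 s.t. sum(w) = 1 via the
    KKT linear system; returns (weights, tracking error of the residual)."""
    X = np.asarray(universe, dtype=float)
    y = np.asarray(candidate, dtype=float)
    n = X.shape[1]
    # KKT system: [2X'X  1; 1' 0] [w; lambda] = [2X'y; 1]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = 2.0 * X.T @ X
    kkt[:n, n] = 1.0
    kkt[n, :n] = 1.0
    rhs = np.append(2.0 * X.T @ y, 1.0)
    w = np.linalg.solve(kkt, rhs)[:n]
    tracking_error = np.std(y - X @ w)
    return w, tracking_error

# If the candidate is an exact mix of existing assets, the weights are
# recovered and the tracking error is (numerically) zero:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([0.2, 0.5, 0.3])
w, te = mimicking_portfolio(y, X)
print(np.round(w, 4), round(te, 6))
```

Equivalent results should come out of R's `lsei`-style solvers or scipy's constrained optimizers; the closed-form KKT route just avoids pulling in an optimizer for a linear problem.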

The punchline here is that if the tracking error is lower than 5% for the asset considered to be added to the universe as expressed through a linear combination of assets in the existing universe, then there’s probably a great deal of collinearity between the asset in question, and the assets in the existing universe, which has a chance to confuse an optimization algorithm. The other fantastic idea presented here is the idea of performance assets (high return at the cost of high volatility, low skew, high kurtosis), and diversifying assets (lower return but that smoothen the portfolio trajectory by reducing overall portfolio volatility and kurtosis). Basically, high returns don’t mean much if a client can’t stick with them because of the emotional roller coaster ride that the portfolio is on, so a better risk-adjusted portfolio is better, especially if one can leverage the portfolio up to get better returns.

Next comes the idea with which I have a philosophical disagreement: assessing returns by testing several decades of monthly return data for stationarity using the Kolmogorov-Smirnov test. That is, the idea that what an asset class has done in aggregate over a very long period of time (about a single investing lifetime), it should continue to do in aggregate over another prolonged period of time.
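One way to operationalize this kind of check (my illustration using scipy, not necessarily the book's exact procedure) is to split a long monthly return series in half and ask whether the two halves plausibly came from the same distribution:

```python
import numpy as np
from scipy.stats import ks_2samp

def looks_stationary(returns, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on the two halves of a return
    series; failing to reject suggests (but does not prove) that the return
    distribution is stable across the sample."""
    r = np.asarray(returns, dtype=float)
    half = len(r) // 2
    stat, pvalue = ks_2samp(r[:half], r[half:])
    return pvalue > alpha, stat, pvalue

# Simulated i.i.d. monthly returns, 50 "years" of data:
rng = np.random.default_rng(42)
rets = rng.normal(0.007, 0.04, size=600)
ok, stat, p = looks_stationary(rets)
print(ok, round(stat, 3))
```

My disagreement isn't with the mechanics, which are simple enough, but with the premise that passing such a test over one investing lifetime tells you much about the next decade.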

By far the best counterexample I can think of: for 50 years coming out of the end of the Second World War, the US stock market (i.e. the S&P 500) was on a tear (after all, the rest of the world was rebuilding). But if you invested in the S&P at the top of the dot-com boom, right as George W. Bush took office, your actual return over the next ten years was a *negative* annualized 1% (i.e. a CAGR of -1%). For trivia's sake, had one invested in the S&P at the top before the crash that began the Great Depression, it would have taken until 1954 just to recover one's initial stake. Had one invested at the top of the dot-com bubble, aside from a minor new equity high in 2007, it would have taken until roughly 2013 or 2014 to make new significant equity highs.

That's…painful, to say the least, when U.S. equities are supposed to be a return-driving asset. Again, for those who extol the virtues of a permanent-portfolio type of buy/hold/rebalance approach, the approach in this book is second to none. However, those who have read my blog since the beginning know that I swear by momentum and trend-following trading. My volatility trading strategy is a trend follower (I simply think there's no other way to trade volatility, since events like Feb. 5, 2018, which caused XIV to lose 95% in a day and be terminated, mandate it), and the various tactical asset allocation strategies I've blogged about here are *also* momentum-based (all of which can be found on AllocateSmartly). In fact, for all the rightful condemnation that the theoretical maximum Sharpe ratio Markowitz portfolio gets, a global minimum-variance optimization on a set of assets selected *by* momentum is the practical form of it, which is the Adaptive Asset Allocation algorithm. So if one swears by buy and hold, I think the approach outlined in this book is terrific; but if active trading is more one's speed, take just this one chapter with a grain of salt and understand that the book recommends a permanent-portfolio style buy, hold, and rebalance approach, with the belief that asset returns will work themselves out over a long enough time horizon. In my opinion, rebalancing a five-asset portfolio like the one in Adaptive Asset Allocation (or KDA, which uses the same universe) isn't too much to ask for once a month (though it should be done in a tax-free account if possible).
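Since I keep referencing it, the momentum-then-minimum-variance recipe can be sketched in a few lines of Python (a simplified illustration of the general idea, not anyone's production code; the lookback, asset count, and lack of constraints are arbitrary choices of mine):

```python
import numpy as np

def momentum_min_variance(returns, top_n=5):
    """Select the top_n assets by total return over the sample, then weight
    them by unconstrained minimum variance: w proportional to inv(Sigma) @ 1.
    Real implementations add long-only constraints, shrinkage, multiple
    lookbacks, and so on."""
    R = np.asarray(returns, dtype=float)
    momentum = (1.0 + R).prod(axis=0) - 1.0
    picks = np.argsort(momentum)[-top_n:]       # indices of the winners
    sigma = np.cov(R[:, picks], rowvar=False)   # covariance of the winners
    raw = np.linalg.solve(sigma, np.ones(top_n))
    return picks, raw / raw.sum()

# Toy example: a year of daily returns for a 10-asset universe
rng = np.random.default_rng(1)
R = rng.normal(0.0005, 0.01, size=(252, 10))
picks, w = momentum_min_variance(R, top_n=3)
print(picks, np.round(w, 3))
```

The point is just how small the moving parts are: a return screen plus a covariance solve, rebalanced monthly, is the practical cousin of the Markowitz machinery.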

Speaking of which, one last issue this part of the book touches on is taxes, which very few asset allocation books consider. I certainly don't consider them in the formulation of my strategies, as I assume that an asset allocation firm knows how to legally avoid taxes, place clients' funds into various tax-free or less-taxable retirement accounts, and so on. I'm a strategist, not a tax accountant, so I'm no expert there. However, this book does touch on the topic, so for an individual running a one-man office, it is most likely required reading.

The last topic the book covers is combining everything into optimized portfolios. Prospect-theory and risk-tolerance parameters are established, assets selected, returns estimated, and then various portfolios are recommended for given risk profiles. In my opinion, this section was slightly lacking in that there wasn't an appendix with a portfolio allocation for all 60 permutations of the three risk parameters presented earlier in the book, but that's a very minor nitpick.

So, that’s the book. Long story short, there are quite a few groundbreaking ideas presented here that make this book a must-read, no questions asked. If someone’s a quantitative strategist, they should *also* read this book immediately. That said, it does lack an out-of-the-box “if you want an easy-to-implement ETF-based translation of this strategy, here’s what you do”, which I’d love to see in a second edition of this book (ideally with mutual fund proxies for backtesting). Furthermore, there may be some ideas that can be taken with a grain of salt.

All in all, a wholeheartedly recommended fantastic read that any modern-day advisor should pick up, and a source for some very interesting ideas for quantitative strategists.

Thanks for reading.

NOTE: I am always looking to hear about interesting opportunities that can make use of my skillset. To contact me, feel free to reach out to me on my LinkedIn.

A Tale of an Edgy Panda and some Python Reviews

This post will be a quickie detailing a rather annoying…finding about the pandas package in Python.

For those not in the know, I’ve been taking some Python courses, trying to port my R finance skills into Python, because Python is more popular as far as employers go. (If you know of an opportunity, here’s my resume.) So, I’m trying to get my Python skills going, hopefully sooner rather than later.

However, for those that think Python is all that and a bag of chips, I hope to be able to disabuse people of that.

First and foremost, as far as actual accessible coursework goes on using Python, just a quick review of courses I’ve seen so far (at least as far as DataCamp goes):

The R/Finance courses (of which I teach one, on quantstrat, which is just my Nuts and Bolts series of blog posts with coding exercises) are of…reasonable quality, actually. I know for a fact that I’ve used Ross Bennett’s PortfolioAnalytics course teachings in a professional consulting manner before, quantstrat is used in industry, and I was explicitly told that my course is now used as a University of Washington Master’s in Computational Finance prerequisite.

In contrast, DataCamp's Python for Finance courses have not particularly impressed me. While a course in basic time-series manipulation is alright, I suppose, there is one course that just uses finance as an intro to numpy. There's another course that tries to apply machine learning methodology to finance by measuring the performance of prediction algorithms with R-squareds, and calling it good when the R-squared values go from negative to zero, without mentioning more reasonable financial measures such as Sharpe ratio, drawdown, and so on. There are also a couple of courses on the usual risk management/covariance/VaR/drawdown concepts that so many readers of this blog are familiar with. The most interesting Python for finance course I found there was actually Dakota Wixom's (a former colleague of mine from when I consulted for Yewno) on financial concepts, which covers things like time value of money, payback periods, and a lot of other really relevant concepts dealing with longer-term capital project investments (I know that because I distinctly remember an engineering finance course covering things such as IRR, WACC, and so on, with a bunch of real-life examples, written by Lehigh's former chair of the Industrial and Systems Engineering Department).

However, rather than take multiple Python courses not particularly focused on quant finance, I’d rather redirect any reader to just *one*, that covers all the concepts found in, well, just about all of the DataCamp finance courses–and more–in its first two (of four) chapters that I’m self-pacing right now.

This one!

It's taught by Lionel Martellini of the EDHEC school as far as concepts go, but the lion's share of it, the programming, is taught by Vijay Vaidyanathan, CEO of Optimal Asset Management. I worked for Vijay in 2013 and 2014, and essentially, he turned my R coding (I didn't use any spaces or style in my code) into, well, what allows you, the readers, to follow along with my ideas in code. In fact, I started this blog shortly after I left Optimal. Basically, I view that time in my career as akin to a second master's degree. Everyone who praises any line of code on this blog: you have Vijay to thank for that. So I'm hoping that his courses on Python will get my Python skills to the point where they get me more job opportunities (hopefully quickly).

However, if people think that Python is as good as R as far as finance goes, well…so far, the going isn’t going to be easy. Namely, I’ve found that working on finance in R is much easier than in Python thanks to R’s fantastic libraries written by Brian Peterson, Josh Ulrich, Jeff Ryan, and the rest of the R/Finance crew (I wonder if I’m part of it considering I taught a course like they did).

In any case, I've been trying to replicate the endpoints function from R in Python, because I always use it for subsetting in asset allocation, and because I think that being able to jump between yearly, quarterly, monthly, and even daily indices to account for timing luck (e.g. if you rebalance a portfolio quarterly on Mar/Jun/Sep/Dec, does it perform similarly to one rebalanced Jan/Apr/Jul/Oct? how does performance depend on the day of month the portfolio is rebalanced?) is fairly important functionality that should be included in any comprehensively built asset allocation package. You have Corey Hoffstein of Think Newfound to thank for that emphasis, and while I've built daily offsets into a generalized asset allocation function I'm working on, my last post shows that there are a lot of devils hiding in the details of how one chooses, or even measures, lookbacks and rebalancing periods.

Moving on, here’s an edge case in Python’s Pandas package, regarding how Python sees weeks. That is, I dub it–an edgy panda. Basically, imagine a panda in a leather vest with a mohawk. The issue is that in some cases, the very end of one year is seen as the start of a next one, and thus the week count is seen as 1 rather than 52 or 53, which makes finding the last given day of a week not exactly work in some cases.

So, here’s some Python code to get our usual Adaptive Asset Allocation universe.

import datetime as dt

import numpy as np
import pandas as pd
from pandas_datareader import data

tickers = ["SPY", "VGK", "EWJ", "EEM", "VNQ", "RWX", "IEF", "TLT", "DBC", "GLD"]

# We would like all available data from 1990-01-01 through today.
start_date = '1990-01-01'
end_date = dt.datetime.today().strftime('%Y-%m-%d')

# Uses pandas_datareader's data.DataReader to load the desired data. As simple as that.
adj_prices = []
for ticker in tickers:
    tickerData = data.DataReader(ticker, 'yahoo', start_date)
    adj_etf = tickerData.loc[:, 'Adj Close']
    adj_prices.append(adj_etf)  # collect each series for the concat below

adj_prices = pd.concat(adj_prices, axis = 1)
adj_prices.columns = tickers
adj_prices = adj_prices.dropna()
rets = adj_prices.pct_change().dropna()

df = rets

Anyhow, here’s something I found interesting, when trying to port over R’s endpoints function. Namely, in that while looking for a way to get the monthly endpoints, I found the following line on StackOverflow:

tmp = df.reset_index().groupby([df.index.year,df.index.month],as_index=False).last().set_index('Date')

Which gives the following output:

                 SPY       VGK       EWJ  ...       TLT       DBC       GLD
Date                                      ...                              
2006-12-29 -0.004149 -0.003509  0.001409  ... -0.000791  0.004085  0.004928
2007-01-31  0.006723  0.005958 -0.004175  ...  0.008408  0.010531  0.009499
2007-02-28  0.010251  0.010942 -0.001353  ... -0.004528  0.015304  0.016358
2007-03-30  0.000211  0.001836 -0.006817  ... -0.001923 -0.014752  0.001371
2007-04-30 -0.008293 -0.003852 -0.007644  ...  0.010475 -0.008915 -0.006957

So far, so good. Right? Well, here’s an edgy panda that pops up when I try to narrow the case down to weeks. Why? Because endpoints in R has that functionality, so for the sake of meticulousness, I simply decided to change up the line from monthly to weekly. Here’s *that* input and output.

tmp = df.reset_index().groupby([df.index.year, df.index.week],as_index=False).last().set_index('Date')

                 SPY       VGK       EWJ  ...       TLT       DBC       GLD
Date                                      ...                              
2006-12-22 -0.006143 -0.002531  0.003551  ... -0.007660  0.007736  0.004399
2006-12-29 -0.004149 -0.003509  0.001409  ... -0.000791  0.004085  0.004928
2007-12-31 -0.007400 -0.010449  0.002262  ...  0.006055  0.001269 -0.006506
2007-01-12  0.007598  0.005913  0.012978  ... -0.004635  0.023400  0.025400
2007-01-19  0.001964  0.010903  0.007097  ... -0.002720  0.015038  0.011886

[5 rows x 10 columns]

Notice something funny? Instead of the first week of 2007 ending on 2007-01-05, we get 2007-12-31 sorted into that slot. I even asked some people who use Python as their bread and butter (of which, hopefully, I will soon be one) what was going on, and after some back and forth, it turned out that ISO week numbering has some weird edge cases relating to the final days of some years: under the ISO 8601 standard, 2007-12-31 belongs to week 1 of 2008, so the output is, apparently, correct. Generally, when dealing with such edge cases in pandas (hence, edgy panda!), I look for a workaround. Thanks to help from Dr. Vaidyanathan, I got one, with the following input and output.

tmp = pd.Series(df.index,index=df.index).resample('W').max()
2006-12-24   2006-12-22
2006-12-31   2006-12-29
2007-01-07   2007-01-05
2007-01-14   2007-01-12
2007-01-21   2007-01-19
2007-01-28   2007-01-26
Freq: W-SUN, Name: Date, dtype: datetime64[ns]

Now, *that* looks far more reasonable. With this, we can write a proper endpoints function.

def endpoints(df, on = "M", offset = 0):
    """
    Returns the index of endpoints of a time series, analogous to R's endpoints
    Takes in:
        df -- a dataframe/series with a date index
        on -- a string specifying the frequency of endpoints
        (E.G. "M" for months, "Q" for quarters, and so on)
        offset -- to offset by a specified index on the original data
        (E.G. if the data is daily resolution, offset of 1 offsets by a day)
        This is to allow for timing luck analysis. Thank Corey Hoffstein.
    """
    # to allow for familiarity with R
    # "months" becomes "M" for resampling
    if len(on) > 3:
        on = on[0].capitalize()
    # get index dates of formal endpoints
    ep_dates = pd.Series(df.index, index = df.index).resample(on).max()
    # get the integer indices of dates that are the endpoints
    date_idx = np.where(df.index.isin(ep_dates))[0]
    # prepend zero and append the last day to match R's endpoints function
    # remember, Python is indexed at 0, not 1
    date_idx = np.insert(date_idx, 0, 0)
    date_idx = np.append(date_idx, df.shape[0] - 1)
    if offset != 0:
        date_idx = date_idx + offset
        date_idx[date_idx < 0] = 0
        date_idx[date_idx > df.shape[0] - 1] = df.shape[0] - 1
    out = np.unique(date_idx)
    return out

Essentially, the function takes in three arguments: first, your basic data frame (or series, which is essentially a single time-indexed column, to my understanding).

Next, it takes the "on" argument, which can be either a longer string such as "months" or a one-letter abbreviation for immediate use with Python's resample function (I forget all the abbreviations, but I do know that there's W, M, Q, and Y for weekly, monthly, quarterly, and yearly); the function converts the longer string into the abbreviation. That way, for those coming from R, this function will be backwards compatible.

Lastly, because Corey Hoffstein makes a big deal about it and I respect his accomplishments, there's the offset argument, which offsets the endpoints by the specified amount at the frequency of the original data. That is, if you take quarterly endpoints on daily-frequency data, the function won't read your mind and offset the quarterly endpoints by a month. That *is* functionality that probably should exist *somewhere*, but it currently exists in neither R nor Python, at least not in the public sphere, so I suppose I'll have to write it…eventually.

Anyway, here’s how the function works (now in Python!) using the data in this post:

endpoints(rets, on = "weeks")[0:20]
array([ 0,  2,  6,  9, 14, 18, 23, 28, 33, 38, 42, 47, 52, 57, 62, 67, 71,
       76, 81, 86], dtype=int64)

endpoints(rets, on = "weeks", offset = 2)[0:20]
array([ 2,  4,  8, 11, 16, 20, 25, 30, 35, 40, 44, 49, 54, 59, 64, 69, 73,
       78, 83, 88], dtype=int64)

endpoints(rets, on = "months")
array([   0,    6,   26,   45,   67,   87,  109,  130,  151,  174,  193,
        216,  237,  257,  278,  298,  318,  340,  361,  382,  404,  425,
        446,  469,  488,  510,  530,  549,  571,  592,  612,  634,  656,
        677,  698,  720,  740,  762,  781,  800,  823,  844,  864,  886,
        907,  929,  950,  971,  992, 1014, 1034, 1053, 1076, 1096, 1117,
       1139, 1159, 1182, 1203, 1224, 1245, 1266, 1286, 1306, 1328, 1348,
       1370, 1391, 1412, 1435, 1454, 1475, 1496, 1516, 1537, 1556, 1576,
       1598, 1620, 1640, 1662, 1684, 1704, 1727, 1747, 1768, 1789, 1808,
       1829, 1850, 1871, 1892, 1914, 1935, 1956, 1979, 1998, 2020, 2040,
       2059, 2081, 2102, 2122, 2144, 2166, 2187, 2208, 2230, 2250, 2272,
       2291, 2311, 2333, 2354, 2375, 2397, 2417, 2440, 2461, 2482, 2503,
       2524, 2544, 2563, 2586, 2605, 2627, 2649, 2669, 2692, 2712, 2734,
       2755, 2775, 2796, 2815, 2836, 2857, 2879, 2900, 2921, 2944, 2963,
       2986, 3007, 3026, 3047, 3066, 3087, 3108, 3130, 3150, 3172, 3194,
       3214, 3237, 3257, 3263], dtype=int64)

endpoints(rets, on = "months", offset = 10)
array([  10,   16,   36,   55,   77,   97,  119,  140,  161,  184,  203,
        226,  247,  267,  288,  308,  328,  350,  371,  392,  414,  435,
        456,  479,  498,  520,  540,  559,  581,  602,  622,  644,  666,
        687,  708,  730,  750,  772,  791,  810,  833,  854,  874,  896,
        917,  939,  960,  981, 1002, 1024, 1044, 1063, 1086, 1106, 1127,
       1149, 1169, 1192, 1213, 1234, 1255, 1276, 1296, 1316, 1338, 1358,
       1380, 1401, 1422, 1445, 1464, 1485, 1506, 1526, 1547, 1566, 1586,
       1608, 1630, 1650, 1672, 1694, 1714, 1737, 1757, 1778, 1799, 1818,
       1839, 1860, 1881, 1902, 1924, 1945, 1966, 1989, 2008, 2030, 2050,
       2069, 2091, 2112, 2132, 2154, 2176, 2197, 2218, 2240, 2260, 2282,
       2301, 2321, 2343, 2364, 2385, 2407, 2427, 2450, 2471, 2492, 2513,
       2534, 2554, 2573, 2596, 2615, 2637, 2659, 2679, 2702, 2722, 2744,
       2765, 2785, 2806, 2825, 2846, 2867, 2889, 2910, 2931, 2954, 2973,
       2996, 3017, 3036, 3057, 3076, 3097, 3118, 3140, 3160, 3182, 3204,
       3224, 3247, 3263], dtype=int64)

endpoints(rets, on = "quarters")
array([   0,    6,   67,  130,  193,  257,  318,  382,  446,  510,  571,
        634,  698,  762,  823,  886,  950, 1014, 1076, 1139, 1203, 1266,
       1328, 1391, 1454, 1516, 1576, 1640, 1704, 1768, 1829, 1892, 1956,
       2020, 2081, 2144, 2208, 2272, 2333, 2397, 2461, 2524, 2586, 2649,
       2712, 2775, 2836, 2900, 2963, 3026, 3087, 3150, 3214, 3263],
      dtype=int64)

endpoints(rets, on = "quarters", offset = 10)
array([  10,   16,   77,  140,  203,  267,  328,  392,  456,  520,  581,
        644,  708,  772,  833,  896,  960, 1024, 1086, 1149, 1213, 1276,
       1338, 1401, 1464, 1526, 1586, 1650, 1714, 1778, 1839, 1902, 1966,
       2030, 2091, 2154, 2218, 2282, 2343, 2407, 2471, 2534, 2596, 2659,
       2722, 2785, 2846, 2910, 2973, 3036, 3097, 3160, 3224, 3263],
      dtype=int64)

So, that's that: endpoints, in Python. Eventually, I'll try to port over Return.portfolio and charts.PerformanceSummary as well.

Thanks for reading.

NOTE: I am currently enrolled in Thinkful's Python/PostgreSQL data science bootcamp while also actively looking for full-time (or long-term contract) opportunities in New York, Philadelphia, or remotely. If you know of an opportunity I may be a fit for, please don't hesitate to contact me on my LinkedIn or just feel free to take my resume from my DropBox (and if you'd like, feel free to let me know how I can improve it).

GARCH and a rudimentary application to Vol Trading

This post will review Kris Boudt's DataCamp course, introduce some concepts from it, discuss GARCH, present an application of it to volatility trading strategies, and give a somewhat more general review of DataCamp.

So, recently, Kris Boudt, one of the highest-ranking individuals on the open-source R/Finance totem pole (contrary to popular belief, I am not the be-all end-all of coding R in finance…probably just one of the more visible individuals, due to not needing to run a trading desk), taught a DataCamp course on GARCH models.

Naturally, an opportunity to learn from one of the most intelligent individuals in the field in a hand-held course does not come along every day. In fact, on DataCamp, you can find courses from some of the most intelligent individuals in the R/Finance community, such as Joshua Ulrich, Ross Bennett (teaching PortfolioAnalytics, no less), David Matteson, and, well, just about everyone short of Doug Martin and Brian Peterson themselves. That said, most of those courses are rather introductory, but occasionally you get a course that covers a production-tier library and lets you do some non-trivial things, such as this one, which covers Alexios Ghalanos's rugarch library.

Ultimately, the course is definitely good for showing the basics of rugarch. And, given how I blog and use tools, I wholly subscribe to the 80/20 philosophy–essentially that you can get pretty far using basic building blocks in creative ways, or just taking a particular punchline and applying it to some data, and throwing it into a trading strategy to see how it does.

But before we do that, let’s discuss what GARCH is.

While I’ll save the Greek notation for those that feel inclined to do a google search, here’s the acronym:

Generalized Auto-Regressive Conditional Heteroskedasticity

What it means:

Generalized: a more general form of the original ARCH model.

Auto-Regressive: past values are used as inputs to predict future values.

Conditional: the current value differs given a past value.

Heteroskedasticity: varying volatility. That is, consider the VIX. It isn’t one constant level, such as 20. It varies with respect to time.

Or, to summarize: “use past volatility to predict future volatility because it changes over time.”

Now, there are some things we know from empirical observation about volatility in financial time series: namely, that volatility tends to cluster (high vol is followed by high vol, and vice versa). That is, you don't just have one-off huge moves one day, then calm moves as if nothing ever happened. Also, volatility tends to revert over longer periods of time. That is, the VIX doesn't stay elevated for protracted periods, so more often than not, betting on its abatement can make some money (assuming the timing is correct).

Now, in the case of finance, which birthed the original GARCH, three researchers (Glosten, Jagannathan, and Runkle) extended the model to take into account the fact that an asset's volatility spikes in the face of negative returns. After all, when did the VIX reach its heights? In the biggest of bear markets, during the financial crisis. So there's an asymmetry between positive and negative returns, and the model capturing it is called the GJR-GARCH model.
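To make that asymmetry concrete, here's a toy one-step GJR-GARCH(1,1) variance update in Python (the parameter values are illustrative choices of mine, not fitted estimates; fitting them is exactly what rugarch automates):

```python
def gjr_garch_update(r_prev, sigma2_prev, omega=0.1, alpha=0.05,
                     gamma=0.10, beta=0.80):
    """One-step GJR-GARCH(1,1) conditional variance update:
    sigma2_t = omega + (alpha + gamma * 1{r < 0}) * r^2 + beta * sigma2.
    The gamma term only activates after negative returns: the asymmetry."""
    leverage = gamma if r_prev < 0 else 0.0
    return omega + (alpha + leverage) * r_prev ** 2 + beta * sigma2_prev

# Same-sized shock, but the negative return raises predicted variance more:
print(round(gjr_garch_update(-1.0, 1.0), 2))  # 1.05
print(round(gjr_garch_update(+1.0, 1.0), 2))  # 0.95
```

In the full model these updates are chained through time and the parameters are estimated by maximum likelihood, which is what the ugarchspec/ugarchroll calls below handle.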

Now, here’s where the utility of the rugarch package comes in–rather than needing to reprogram every piece of math, Alexios Ghalanos has undertaken that effort for the good of everyone else, and implemented a whole multitude of prepackaged GARCH models that allow the end user to simply pick the type of GARCH model that best fits the assumptions the end user thinks best apply to the data at hand.

So, here’s the how-to.

First off, we’re going to get data for SPY from Yahoo finance, then specify our GARCH model.

The GARCH model has three components: the mean model, that is, assumptions about the basic ARMA time-series nature of the returns (in this case I just assumed an AR(1)); the variance model, which is where you specify the type of GARCH model, along with variance targeting (which essentially forces an assumption of some amount of mean reversion, and which I had to use to actually get the GARCH model to converge in all cases); and lastly, the distribution model of the returns. Many models have an in-built assumption of normality. In rugarch, however, you can relax that assumption by specifying something such as "std", that is, the Student t distribution, or in this case, "sstd", the skewed Student t distribution. And when one thinks about S&P 500 returns, a skewed Student t distribution seems most reasonable: positive returns usually arise as a large collection of small gains, but losses occur in large chunks, so we want a distribution that can capture this property if the need arises.


# rugarch for GARCH modeling; quantmod/PerformanceAnalytics for data and returns
library(rugarch)
library(quantmod)
library(PerformanceAnalytics)

# get SPY data from Yahoo
getSymbols("SPY", from = '1990-01-01')

spyRets = na.omit(Return.calculate(Ad(SPY)))

# GJR GARCH with AR(1) innovations under a skewed Student t distribution for returns
gjrSpec = ugarchspec(mean.model = list(armaOrder = c(1,0)),
                      variance.model = list(model = "gjrGARCH",
                                            variance.targeting = TRUE),
                      distribution.model = "sstd")

As you can see, with a single function call, the user can specify a very extensive model encapsulating assumptions about both the returns and the process which governs their variance. Once the model is specified, it’s equally simple to use it to create a rolling out-of-sample prediction–that is, just plug your data in, and after some burn-in period, you start to get predictions for a variety of metrics. Here’s the code to do that.

<pre class="wp-block-syntaxhighlighter-code brush: plain; notranslate">
# use a rolling window of 504 days, refitting the model every 22 trading days
t1 = Sys.time()
garchroll = ugarchroll(gjrSpec, data = spyRets, 
                       n.start = 504, refit.window = "moving", refit.every = 22)
t2 = Sys.time()
print(t2 - t1) # roughly four minutes on my server instance

# convert predictions to a data frame
garchroll = as.data.frame(garchroll)
</pre>

In this case, I use a rolling 504-day window that refits every 22 days (approximately one trading month). To note, if the window is too short, you may run into fail-to-converge instances, which would disallow converting the predictions to a data frame. The rolling predictions take about four minutes to run on the server instance I use, so refitting every single day is most likely not advised.

Here’s what the predictions look like:

<pre class="wp-block-syntaxhighlighter-code brush: plain; notranslate">
                      Mu       Sigma      Skew    Shape Shape(GIG)      Realized
1995-01-30  6.635618e-06 0.005554050 0.9456084 4.116495          0 -0.0043100611
1995-01-31  4.946798e-04 0.005635425 0.9456084 4.116495          0  0.0039964165
1995-02-01  6.565350e-06 0.005592726 0.9456084 4.116495          0 -0.0003310769
1995-02-02  2.608623e-04 0.005555935 0.9456084 4.116495          0  0.0059735255
1995-02-03 -1.096157e-04 0.005522957 0.9456084 4.116495          0  0.0141870212
1995-02-06 -5.922663e-04 0.005494048 0.9456084 4.116495          0  0.0042281655
</pre>


The salient quantity here is Sigma–the prediction for daily volatility. This is the quantity we want to compare against the VIX.

So the strategy we’re going to be investigating is essentially what I’ve seen referred to as VRP–the Volatility Risk Premium–in Tony Cooper’s seminal paper, “Easy Volatility Investing”.

The idea of the VRP is that we compare some measure of realized volatility (EG a running standard deviation, or GARCH predictions from past data) to the VIX, which is an implied volatility (so, purely forward-looking). When realized volatility is greater than implied volatility, people are in a panic. Similarly, when implied volatility is greater than realized volatility, things are as they should be, and it should be feasible to harvest the volatility risk premium by shorting volatility (analogous to selling insurance).
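As a toy illustration of that comparison (the vol levels below are made-up numbers, not market data):

```r
# hypothetical annualized vol levels, in VIX-style percentage points
realized <- c(14, 15, 22, 30, 18)  # e.g. GARCH forecasts or a trailing standard deviation
implied  <- c(16, 17, 20, 25, 21)  # e.g. VIX closes

shortVol <- realized < implied  # premium present: short volatility (e.g. hold ZIV)
longVol  <- realized > implied  # panic regime: go long volatility (e.g. hold VXZ)
```

This is exactly the shape of the signal constructed in the code below, just with real GARCH forecasts and VIX closes in place of the toy vectors.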

The instruments we’ll be using for this are ZIV and VXZ: ZIV because SVXY is no longer supported on Interactive Brokers or Robinhood, and VXZ as its long-volatility counterpart.

We’ll be using close-to-close returns; that is, we get the signal on Monday morning and transact on Monday’s close, rather than observing data on Friday’s close and assuming we could transact around that same time (also known as magical thinking, according to Brian Peterson).
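This is also why the signals get lagged by two periods in the code below. A quick base-R sketch of the timing (toy numbers, and a hand-rolled lag in place of xts’s `lag`):

```r
# A signal computed from day t's close can first be traded at day t+1's close,
# so it earns the close-to-close return realized on day t+2 -- hence lag(signal, 2).
lagN <- function(x, k) c(rep(NA, k), head(x, -k))

rets   <- c(0.010, -0.020, 0.030, 0.010)  # close-to-close returns (illustrative)
signal <- c(1, 1, 0, 0)                   # computed from the same closes
stratRets <- lagN(signal, 2) * rets       # day 1's signal earns day 3's return
```

Lagging by only one period here would silently assume you traded at the very close that generated the signal–the look-ahead bug this construction avoids.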

<pre class="wp-block-syntaxhighlighter-code brush: plain; notranslate">
library(downloader) # provides the download() function used below

getSymbols('^VIX', from = '1990-01-01')

# convert GARCH sigma predictions to the same scale as the VIX by annualizing, then multiplying by 100
garchPreds = xts(garchroll$Sigma * sqrt(252) * 100, order.by=as.Date(rownames(garchroll)))
diff = garchPreds - Ad(VIX)

download('https://www.dropbox.com/s/y3cg6d3vwtkwtqx/VXZlong.TXT?raw=1', destfile='VXZlong.txt')
download('https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT?raw=1', destfile='ZIVlong.txt')

ziv = xts(read.zoo('ZIVlong.txt', format='%Y-%m-%d', sep = ',', header=TRUE))
vxz = xts(read.zoo('VXZlong.txt', format = '%Y-%m-%d', sep = ',', header = TRUE))

zivRets = na.omit(Return.calculate(Cl(ziv)))
vxzRets = na.omit(Return.calculate(Cl(vxz)))
vxzRets['2014-08-05'] = .045 # correct a bad data point in the source file

# long ZIV when the GARCH prediction is below the VIX, long VXZ otherwise
zivSig = diff < 0 
vxzSig = diff > 0 

garchOut = lag(zivSig, 2) * zivRets + lag(vxzSig, 2) * vxzRets

# same signals, but with 21-day rolling realized volatility in place of the GARCH predictions
histSpy = runSD(spyRets, n = 21, sample = FALSE) * sqrt(252) * 100
spyDiff = histSpy - Ad(VIX)

zivSig = spyDiff < 0 
vxzSig = spyDiff > 0 

spyOut = lag(zivSig, 2) * zivRets + lag(vxzSig, 2) * vxzRets

avg = (garchOut + spyOut)/2
compare = na.omit(cbind(garchOut, spyOut, avg))
colnames(compare) = c("gjrGARCH", "histVol", "avg")
</pre>

And here’s a quick function for computing summary statistics, along with the output:

<pre class="wp-block-syntaxhighlighter-code brush: plain; notranslate">
stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] = stats[1,]/stats[4,]
  stats[6,] = stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] = "Worst Drawdown"
  rownames(stats)[5] = "Calmar Ratio"
  rownames(stats)[6] = "Ulcer Performance Index"
  return(stats)
}

> stratStats(compare)
                           gjrGARCH   histVol       avg
Annualized Return         0.2195000 0.2186000 0.2303000
Annualized Std Dev        0.2936000 0.2947000 0.2614000
Annualized Sharpe (Rf=0%) 0.7477000 0.7419000 0.8809000
Worst Drawdown            0.4310669 0.5635507 0.4271594
Calmar Ratio              0.5092017 0.3878977 0.5391429
Ulcer Performance Index   1.3563017 1.0203611 1.5208926
</pre>


So, to comment on this strategy: this is definitely not something you will take and trade out of the box. Both variants of this strategy, when forced to choose a side, walk straight into the February 5, 2018 volatility explosion. Luckily, switching between ZIV and VXZ keeps the account from completely exploding in spectacular failure. To note, both variants of the VRP strategy–the GJR-GARCH and the 21-day rolling realized volatility–suffer their own periods of spectacularly large drawdown: the historical-volatility variant in 2007-2008, and both of them currently, though this year has just been miserable for any reasonable volatility strategy. I myself am down 20%, and I’ve seen other strategists down that much as well in their primary strategies.

That said, I do think that over time, and using tail-end-of-the-curve instruments such as VXZ and ZIV (now that XIV is gone and SVXY is no longer supported on several brokers, such as Interactive Brokers and Robinhood), there are a number of strategies that might be feasible to pass off as a sort of trading analogue to machine learning’s “weak learner”.

That said, I’m not sure how many vastly different types of ways to approach volatility trading there are that make logical sense from an intuitive perspective (that is, “these two quantities have this type of relationship, which should give a consistent edge in trading volatility” rather than “let’s over-optimize these two parameters until we eliminate every drawdown”).

While I’ve written about the VIX3M/VIX6M ratio in the past, which has formed the basis of my proprietary trading strategy, I’d certainly love to investigate other volatility trading ideas out in public. For instance, I’d love to start the volatility trading equivalent of an AllocateSmartly type website–just a compendium of a reasonable suite of volatility trading strategies, track them, charge a subscription fee, and let users customize their own type of strategies. However, the prerequisite for that is that there are a lot of reasonable ways to trade volatility that don’t just walk into tail-end events such as the 2007-2008 transition, Feb 5, and so on.

Furthermore, as some recruiters have told me that I should cross-post my blog scripts on my GitHub, I’ll start doing that from now on.

One last topic: a general review of DataCamp. As some of you may know, I instruct a course on DataCamp. But furthermore, I’ve spent quite a bit of time taking courses (particularly in Python) on there as well, thanks to having access as an instructor.

Generally, here’s the gist of it: DataCamp is a terrific resource for getting your feet wet and getting a basic overview of what technologies are out there. Courses follow a “few minutes of lecture, then do exercises using the exact same syntax you saw in the lecture” format, with a lot of the skeleton already written for you, so you don’t wind up endlessly guessing. My procedure is usually: try to complete the exercise, and if I fail, go back, look at the slides to find an analogous block of code, change some names, and fill in.

Ultimately, if the world of data science, machine learning, and some quantitative finance is completely new to you–if you’re the kind of person that reads my blog and completely glosses past the code–*this* is the resource for you, and I recommend it wholeheartedly. You’ll take some courses that give you a general tour of what data scientists, and occasionally, quants, do. And in some cases, you may have a professor like Kris Boudt teach a fairly advanced topic, like the state-of-the-art rugarch package (this *is* an industry-used package, actively maintained by Alexios Ghalanos, an economist at Amazon, so it’s far more than a pedagogical tool).

That said, for someone like me, who’s trying to port his career-capable R skills to Python to land a job (my last contract ended recently, so I am formally searching for a new role), DataCamp doesn’t *quite* do the trick–just yet. While there is a large catalog of courses, it feels like there’s a lot of breadth, though I’m not sure how much depth in terms of getting proficient enough to land interviews on the sole merits of DataCamp course completions. While there are Python course tracks (EG Python Developer, which I completed, and Python Data Analyst, which I also completed), I’m not sure they’re sufficient in terms of “this track was developed in partnership with industry–complete this capstone course, and we have industry partners willing to interview you”.

Also, from what I’ve seen of quantitative finance taught in Python, with everything having to be rebuilt from numpy/pandas, I am puzzled as to how people do quantitative finance in Python without libraries like PerformanceAnalytics, rugarch, quantstrat, PortfolioAnalytics, and so on. Those libraries make expressing and analyzing investment ideas far more efficient, and remove a great deal of the risk of making something like an off-by-one error (also known as look-ahead bias in trading). So far, I haven’t seen the Python end of DataCamp dive deep into quantitative finance, and I hope that changes in the near future.

So, as a summary: I think DataCamp is a fantastic site for code-illiterate individuals to get their hands dirty and their feet wet with some coding, but I think the opportunity to create an economical, democratized, self-paced, interest-to-career a-la-carte experience is still very much there for the taking. And given the quality of instructors that DataCamp has worked with in the past (David Matteson–*the* regime-change expert, I think–along with many other experts), I think DataCamp has a terrific opportunity to capitalize here.

So, if you’re the kind of person who glosses past the code: don’t gloss anymore. You can now take courses to gain an understanding of what my code does, and ask questions about it.

Thanks for reading.

NOTE: I am currently looking for networking opportunities and full-time roles related to my skill set. Feel free to download my resume or contact me on LinkedIn.

A Review of James Picerno’s Quantitative Investment Portfolio Analytics in R

This is a review of James Picerno’s Quantitative Investment Portfolio Analytics in R. Overall, it’s about as fantastic a book as you can get on portfolio optimization until you start getting into corner cases stemming from large amounts of assets.

Here’s a quick summary of what the book covers:

1) How to install R.

2) How to create some rudimentary backtests.

3) Momentum.

4) Mean-variance optimization.

5) Factor analysis.

6) Bootstrapping/Monte Carlo simulations.

7) Modeling tail risk.

8) Risk parity/volatility targeting.

9) Index replication.

10) Estimating impacts of shocks.

11) Plotting in ggplot.

12) Downloading/saving data.

All in all, the book teaches the reader many fantastic techniques to get started doing some basic portfolio management using asset-class ETFs, under the assumption of ideal data–that is, that there are few assets with concurrent starting times, that the number of assets is much smaller than the number of observations (I.E. 10 asset-class ETFs and 90-day lookback windows, for instance), and other attributes taken for granted to illustrate concepts. I myself have used these concepts time and again (and, in fact, covered some of these topics on this blog, such as volatility targeting, momentum, and mean-variance), but in some of the work projects I’ve done, the trouble begins when the number of assets grows larger than the number of observations, or when assets move in or out of the investable universe (EG a new company has an IPO, or a company goes bankrupt/merges/etc.). The book also does not go into the PortfolioAnalytics package, developed by Ross Bennett and Brian Peterson. Having recently started to use that package for a real-world problem, I’ve found that it produces some very interesting results and its potential is immense, with the large caveat that you need an immense amount of computing power to generate lots of results for large-scale problems, which renders it impractical for many individual users. A quadratic optimization on a backtest with around 2,400 periods and around 500 assets per rebalancing period (days) took about eight hours on a cloud server (when done sequentially to preserve full path dependency).

However, aside from delving into some somewhat-edge-case appears-more-in-the-professional-world topics, this book is extremely comprehensive. Simply, as far as managing a portfolio of asset-class ETFs (essentially, what the inimitable Adam Butler and crew from ReSolve Asset Management talk about, along with Walter’s fantastic site, AllocateSmartly), this book will impart a lot of knowledge that goes into doing those things. While it won’t make you as comfortable as say, an experienced professional like myself is at writing and analyzing portfolio optimization backtests, it will allow you to do a great deal of your own analysis, and certainly a lot more than anyone using Excel.

While I won’t rehash what the book covers in this post, what I will say is that it does cover some of the material I’ve posted in years past. Furthermore, rather than spending half the book on topics such as motivations, behavioral biases, and so on, this book goes right into the content that readers need to know in order to execute the tasks they desire. The content is presented in a very coherent, English-and-code, matter-of-fact way, as opposed to a bunch of abstract mathematical derivations that treat practical implementation as an afterthought. Essentially, when one buys a cookbook, one doesn’t buy it to read half of it on motivations as to why one should bake one’s own cake, but to learn how to actually do it. And as far as density of how-to goes, this book delivers in a way I think other authors should strive to emulate.

Furthermore, I think that this book should be required reading for any analyst wanting to work in the field. It’s a very digestible “here’s how you do X” type of book. I.E. “here’s a data set, write a backtest based on these momentum rules, use an inverse-variance weighting scheme, do a Fama-French factor analysis on it”.

In any case, in my opinion, for anyone doing any sort of tactical asset allocation analysis in R, get this book now. For anyone doing any sort of tactical asset allocation analysis in spreadsheets, buy this book sooner than now, and then see the previous sentence. In any case, I’ll certainly be keeping this book on my shelf and referencing it if need be.

Thanks for reading.

Note: I am currently contracting but am currently on the lookout for full-time positions in New York City. If you know of a position which may benefit from my skills, please let me know. My LinkedIn profile can be found here.

A Review of Gary Antonacci’s Dual Momentum Investing Book

This is a review of Gary Antonacci’s Dual Momentum Investing book.

The TL;DR: 4.5 out of 5 stars.

So, I honestly have very little criticism of the book, beyond the fact that it sort of insinuates that equity momentum is the be-all-end-all of investing, which is why I deduct a fraction of a point.

Now, for the book itself: first off, unlike other quantitative trading books I’ve read (aside from Andreas Clenow’s), the book outlines a very simple to follow strategy, to the point that it has already been replicated over at AllocateSmartly. (Side note: I think Walter’s resource at Allocate Smartly is probably the single best one-stop shop for reading up on any tactical asset allocation strategy, as it’s a compendium of many strategies in the risk/return profile of the 7-15% CAGR type strategies, and even has a correlation matrix between them all.)

Regarding the rest of the content, Antonacci does a very thorough job of stepping readers through the history/rationale of momentum, and not just that, but also addressing the alternatives to his strategy.

While the “why momentum works” aspect can be found in this book and others on the subject (I.E. Alpha Architect’s Quantitative Momentum book), I do like the section on alternative assets. Namely, the book touches upon the fact that commodities no longer trend, so a lot of CTAs are taking it on the chin, and that historically, fixed income has performed much worse than equities in absolute-return terms. Furthermore, smart beta isn’t (smart), and many of these factors have very low aggregate returns (if they’re significant at all; I believe Wesley Gray at Alpha Architect has a blog post stating that they aren’t). There are a fair amount of footnotes for those interested in corroborating the assertions. Suffice it to say, among strategies that don’t need daily micromanagement, and for how far you can get without leverage (essentially, anything outside the space of volatility trading strategies), equity momentum is about as good as it gets.

Antonacci then introduces his readers to his GEM (Global Equities Momentum) strategy, which can be explained in a few sentences: at the end of each month, calculate the trailing 12-month total return of SPY, EAFE, and BIL. If BIL has the highest return, buy AGG for that month; otherwise, buy the asset with the highest return. Repeat. That’s literally it, and the performance characteristics, on a risk-adjusted basis, are superior to just about any equity fund tied down to a tiny tracking error. Essentially, the reason for that is that equity markets have bear markets, and a dual momentum strategy lets you preserve your gains instead of giving them back in market corrections (I.E. 2000-2003, 2008, etc.) while keeping pace during the good times.
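The decision rule described above is simple enough to sketch in a few lines of R (the ticker choices and tie-breaking direction here are my own illustrative assumptions, not Antonacci’s exact implementation):

```r
# GEM, roughly: if T-bills (BIL) beat both equity sleeves over 12 months, hold bonds (AGG);
# otherwise hold whichever of US (SPY) or international (EFA) equities returned more.
gemPick <- function(spy12, efa12, bil12) {
  if (bil12 >= max(spy12, efa12)) return("AGG")  # absolute momentum filter fails
  if (spy12 >= efa12) "SPY" else "EFA"           # relative momentum between equity sleeves
}

gemPick( 0.12,  0.08, 0.01)  # equities lead: returns "SPY"
gemPick(-0.05, -0.10, 0.01)  # cash leads: defensive, returns "AGG"
```

The first comparison is the absolute momentum (trend) filter; the second is the relative momentum pick–hence “dual” momentum.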

Lastly, Antonacci provides some ideas for possibly improving on GEM, which I may examine in the future. However, the low-hanging fruit for improving on this strategy, in my opinion, is to find some other strategies that diversify its drawdowns and raise its risk-adjusted return profile. Even if the total return goes down, an Interactive Brokers account can offer some amount of leverage (either 50% or 100%) to boost the total returns back up, or one can combine a more diversified portfolio with a volatility strategy.

Lastly, the appendix includes the original dual momentum paper, and a monthly return table for GEM going back to 1974.

All in all, this book is about as accessible and comprehensive as you can get on a solid strategy that readers actually *can* implement themselves in their brokerage account of choice (please use IB or Robinhood, because there’s no point paying $8-$10 per trade if you’re retail). That said, I still think there are avenues to explore if you’re looking to improve your overall portfolio with GEM as a foundation.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time roles which can benefit from my skill set. My LinkedIn profile can be found here.

A Review of Alpha Architect’s (Wes Gray/Jack Vogel) Quantitative Momentum book

This post will be an in-depth review of Alpha Architect’s Quantitative Momentum book. Overall, in my opinion, the book is terrific for those that are practitioners in fund management in the individual equity space, and still contains ideas worth thinking about outside of that space. However, the system detailed in the book benefits from nested ranking (rank along axis X, take the top decile, rank along axis Y within the top decile in X, and take the top decile along axis Y, essentially restricting selection to 1% of the universe). Furthermore, the book does not do much to touch upon volatility controls, which may have enhanced the system outlined greatly.

Before I get into the brunt of this post, I’d like to let my readers know that I formalized my nuts-and-bolts-of-quantstrat series of posts as a formal DataCamp course. DataCamp is a very cheap way to learn a bunch of R, and financial applications are among its topics. My course covers the basics of quantstrat, and if those who complete the course like it, I may very well create more advanced quantstrat modules on DataCamp. I’m hoping that the finance courses are well-received, since there are financial topics in R I’d like to learn myself that a 45-minute lecture doesn’t really suffice for (such as Dr. David Matteson’s change points magic, PortfolioAnalytics, and so on). In any case, here’s the link.

So, let’s start with a summary of the book:

Part 1 is several chapters that are the giant exposé of why momentum works (or at least, has worked for the 20+ years since 1993)–namely, that human biases and irrational behaviors act in certain ways to make the anomaly work. Then there’s also the career risk (AKA it’s a risk factor, and so, if your benchmark is SPY and you run across a 3+ year period of underperformance, you have severe career risk)–essentially, a whole litany of reasons why a professional asset manager would get fired, but also why, if you just stick with the anomaly over many, many years and ride out multi-year stretches of relative underperformance, you’ll come out ahead in the very long run.

Generally, I feel like there’s work to be done if this is the best that can be done, but okay, I’ll accept it.

Essentially, part 1 is for the uninitiated. For those that have been around the momentum block a couple of times, they can skip right past this. Unfortunately, it’s half the book, so that leaves a little bit of a sour taste in the mouth.

Next, part 2 is, in my opinion, the real meat and potatoes of the book–the “how”.

Essentially, the algorithm can be boiled down into the following:

Taking the universe of large and mid-cap stocks, do the following:

1) Sort the stocks into deciles by 2-12 momentum–that is, at the end of every month, calculate momentum as the return from the closing price 12 months ago to last month’s closing price, skipping the most recent month. Essentially, research states that there’s a reversion effect in 1-month momentum. (However, this effect doesn’t carry over into the ETF universe, in my experience.)

2) Here’s the interesting part which makes the book worth picking up on its own (in my opinion): after sorting into deciles, rank the top decile by the following metric: multiply the sign of the 2-12 momentum by the quantity (% negative days – % positive days). Essentially, the idea here is to measure the smoothness of momentum. That is, in the most extreme situation, imagine a stock that did absolutely nothing for 230 days and then had one massive day that gave it its entire price appreciation (think Google when it had a 10% jump off of better-than-expected numbers), and in the other extreme, a stock where each and every single day was a small positive price appreciation. Obviously, you’d want the second type of stock. That’s the idea. Again, sort into deciles, and take the top decile. Therefore, taking the top decile of the top decile leaves you with 1% of the universe. This makes the idea very difficult to replicate–since you’d need to track down a massive universe of stocks. That said, I think the expression is actually a pretty good stand-in for volatility. That is, regardless of how volatile an asset is–whether it’s as volatile as a commodity like DBC, or as non-volatile as a fixed-income product like SHY–this expression is an interesting way of stating “this path is choppy” vs. “this path is smooth”. I might investigate this expression further on my blog in the future.

3) Lastly, if the portfolio is turning over quarterly instead of monthly, the best months to turn it over are the months preceding the end-of-quarter months (that is, February, May, August, November), because a bunch of amateur asset managers like to “window dress” their portfolios. That is, they had a crummy quarter, so in the last month before they have to send out quarterly statements, they load up on some recent winners so that their clients don’t think they’re as amateurish as they really are, and there’s a bump from this. Similarly, January has some selling anomalies due to tax-loss harvesting. As far as practical implementations go, I think this is a very nice touch. Conceding the fact that turning over every month may be a bit too expensive, I like that Wes and Jack say “sure, you want to turn it over once every three months, but on *which* months?” It’s a very good question to ask if it means you get an additional percentage point or 150 bps a year from that, as it just might cover the transaction costs and then some.
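The ranking math in steps 1 and 2 above can be sketched in a few lines of base R (a toy illustration of the idea as I understand it, not the book’s actual code):

```r
# Step 1: 2-12 momentum -- the return from the close 12 months ago to last
# month's close, deliberately skipping the most recent month (reversion effect)
mom212 <- function(monthlyCloses) {  # 13 month-end closes, oldest first
  monthlyCloses[12] / monthlyCloses[1] - 1
}

# Step 2: smoothness score -- sign(momentum) * (% negative days - % positive days);
# for positive-momentum stocks, more negative scores mean a smoother path
fip <- function(dailyRets, mom) {
  sign(mom) * (mean(dailyRets < 0) - mean(dailyRets > 0))
}

smooth <- rep(0.001, 230)             # every day a tiny gain
choppy <- c(rep(-0.0005, 229), 0.25)  # flat-to-down, then one massive up day

fip(smooth, mom = 0.20)  # -1: as smooth as it gets
fip(choppy, mom = 0.10)  # near +1: penalized despite positive momentum
```

The nested sort then amounts to taking the top decile by `mom212`, and within it, the decile with the most negative `fip` scores.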

All in all, it’s a fairly simple to understand strategy. However, the part that sort of gates off the book to a perfect replication is the difficulty in obtaining the CRSP data.

However, I do commend Alpha Architect for disclosing the entire algorithm from start to finish.

Furthermore, if the basic 2-12 momentum is not enough, there’s an appendix detailing other types of momentum ideas (earnings momentum, ranking by distance to 52-week highs, absolute historical momentum, and so on). None of these strategies are really that much better than the basic price momentum strategy, so they’re there for those interested, but it seems there’s nothing really ground-breaking there. That is, if you’re trading once a month, there’s only so many ways of saying “hey, I think this thing is going up!”

I also like that Wes and Jack touched on the fact that trend following, while it doesn’t improve overall CAGR or Sharpe, does a massive amount to improve max drawdown. That is, faced with the prospect of losing 70-80% of everything versus losing only 30%, that’s an easy choice to make. Trend following is good, even a simplistic version.

All in all, I think the book accomplishes what it sets out to do, which is to present a well-researched algorithm. Ultimately, the punchline is on Alpha Architect’s site (I believe they have some sort of monthly stock filter). Furthermore, the book states that there are better risk-adjusted returns when combined with the algorithm outlined in the “quantitative value” book. In my experience, I’ve never had value algorithms impress me in the backtests I’ve done, but I can chalk that up to me being inexperienced with all the various valuation metrics.

My criticism of the book, however, is this:

The momentum algorithm in the book misses what I feel is one key component: volatility targeting. Simply, the paper “Momentum Has Its Moments” (which I covered in my hypothesis-driven development series of posts) essentially states that the usual Fama-French momentum strategy does far better from a risk-reward standpoint by deleveraging during times of excessive volatility, thus avoiding momentum crashes. I’m not sure why Wes and Jack didn’t touch upon this paper, since the implementation is very simple (target volatility / realized volatility = leverage factor). Ideally, I’d love it if Wes or Jack could send me the stream of returns for this strategy (preferably daily, but monthly also works).
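That overlay really is a one-liner; here’s a sketch (the 12% target and the 2x leverage cap are illustrative choices of mine, not the paper’s exact calibration):

```r
# volatility targeting: scale exposure by target vol / realized vol, capped at some max leverage
volTargetWeight <- function(realizedVol, targetVol = 0.12, cap = 2) {
  min(targetVol / realizedVol, cap)
}

volTargetWeight(0.30)  # 0.4: deleverage hard when realized vol explodes
volTargetWeight(0.08)  # 1.5: lever up modestly in quiet markets
volTargetWeight(0.02)  # 2.0: the cap binds
```

Applied monthly to a momentum portfolio, this mechanically cuts exposure in exactly the high-volatility regimes where momentum crashes tend to occur.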

Essentially, I think this book is very comprehensive. However, I think it also has a somewhat “don’t try this at home” feel to it due to the data requirement to replicate it. Certainly, if your broker charges you $8 a transaction, it’s not a feasible strategy to drop several thousand bucks a year on transaction costs that’ll just give your returns to your broker. However, I do wonder if the QMOM ETF (from Alpha Architect, of course) is, in fact, a better version of this strategy, outside of the management fee.

In any case, my final opinion is this: while this book leaves a little bit of knowledge on the table, on a whole, it accomplishes what it sets out to do, is clear with its procedures, and provides several worthwhile ideas. For the price of a non-technical textbook (aka those $60+ books on amazon), this book is a steal.

4.5/5 stars.

Thanks for reading.

NOTE: While I am currently employed in a successful analytics capacity, I am interested in hearing about full-time positions more closely related to the topics on this blog. If you have a full-time position which can benefit from my current skills, please let me know. My Linkedin can be found here.

A Book Review of ReSolve Asset Management’s Adaptive Asset Allocation

This post will review “Adaptive Asset Allocation: Dynamic Global Portfolios to Profit in Good Times – and Bad”, by the people at ReSolve Asset Management. Overall, this book is a definite must-read for those who have never been exposed to the ideas within it. However, when it comes to a solution that can be fully replicated, this book is lacking.

Okay, it’s been a while since I reviewed my last book, DIY Financial Advisor, from the awesome people at Alpha Architect. This book, in my opinion, is set up in a similar sort of format.

This is the structure of the book, and my reviews along with it:

Part 1: Why in the heck you actually need to have a diversified portfolio, and why a diversified portfolio is a good thing. In a world in which there is so much emphasis put on single-security performance, this is certainly something that absolutely must be stated for those not familiar with portfolio theory. It highlights the example of two people–one from Abbott Labs, and one from Enron, who had so much of their savings concentrated in their company’s stock. Mr. Abbott got hit hard and changed his outlook on how to save for retirement, and Mr. Enron was never heard from again. Long story short: a diversified portfolio is good, and a properly diversified portfolio can offset one asset’s zigs with another asset’s zags. This is the key to establishing a stream of returns that will help meet financial goals. Basically, this is your common sense story (humans love being told stories) so as to motivate you to read the rest of the book. It does its job, though for someone like me, it’s more akin to a big “wait for it, wait for it…and there’s the reason why we should read on, as expected”.

Part 2: Something not often brought up in many corners of the quant world (because it’s real-life boring stuff) is the importance not only of average returns, but of *when* those returns are achieved. Namely, imagine your everyday saver. At the beginning of their careers, they’re taking home less salary and have less money in their retirement portfolio (or speculation portfolio, but the book uses retirement portfolio). As they get into middle age and closer to retirement, they have a lot more money in said retirement portfolio. Thus, strong returns are most vital when there is more cash available *to* the portfolio, and the difference between mediocre returns at the beginning and strong returns at the end of one’s working life, as opposed to vice versa, is *astronomical* and cannot be overstated. Furthermore, once *in* retirement, strong returns in the early years matter far more than returns in the later years, once money has been withdrawn out of the portfolio (though I’d hope that a portfolio’s returns can be so strong that one can simply “live off the interest”). Or, put more intuitively: when you have $10,000 in your portfolio, a 20% drawdown doesn’t exactly hurt, because you can make more money and put more into your retirement account. But when you’re 62 and have $500,000 and suddenly lose 30% of everything, well, that’s massive. How much an investor wants to avoid such a scenario cannot be overstated. Warren Buffett once said that if you can’t bear to lose 50% of everything, you shouldn’t be in stocks. I really like this part of the book because it shows just how dangerous the ideas of “a 50% drawdown is unavoidable” and other “stay invested for the long haul” refrains are. Essentially, this part of the book makes a resounding statement that any financial adviser keeping his or her clients invested in equities when they’re near retirement age is doing something not very advisable, to put it lightly.
In my opinion, those who advise pension funds should especially keep this section of the book in mind, since for some people, the long-term may be coming to an end, and what matters is not only steady returns, but to make sure the strategy doesn’t fall off a cliff and destroy decades of hard-earned savings.
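The sequence-of-returns point can be illustrated with a toy calculation (hypothetical numbers, not from the book): the same five annual returns, applied in opposite orders, with a fixed yearly contribution, produce very different ending balances.

```python
def ending_balance(returns, contribution=10_000.0):
    """Contribute at the start of each year, then apply that year's return."""
    balance = 0.0
    for r in returns:
        balance = (balance + contribution) * (1 + r)
    return balance

# Identical sets of returns, opposite orderings:
weak_then_strong = [-0.10, 0.00, 0.05, 0.10, 0.20]
strong_then_weak = list(reversed(weak_then_strong))

weak_early_bal = ending_balance(weak_then_strong)      # ~$65,394
strong_early_bal = ending_balance(strong_then_weak)    # ~$50,319
# Same arithmetic mean of returns, but the saver who gets the strong
# returns late (when the balance is largest) ends up meaningfully richer.
```

Over a forty-year career with growing contributions, the gap between these two orderings widens dramatically, which is exactly the book’s point.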

Part 3: This part is also a very important read. First off, it lays out in clear terms that the long-term forward-looking valuations for equities are at rock bottom. That is, the expected forward 15-year returns are very low, using approximately 75 years of evidence. Currently, according to the book, equity valuations imply a *negative* 15-year forward return. However, one thing I *will* take issue with is that while forward-looking long-term returns for equities may be very low, if one believed this chart and only invested in the stock market when forecast 15-year returns were above the long-term average, one would have missed out on both the 2003-2007 bull run *and* the one since 2009 that’s just about over. So, while the book makes a strong case for caution, readers should also take the chart with a grain of salt, in my opinion. However, another aspect of portfolio construction that this book covers is how to construct a robust (assets for any economic environment) and coherent (asset classes balanced in number) universe for implementation with any asset allocation algorithm. I think this bears repeating: universe selection is an extremely important topic in the discussion of asset allocation, yet there is very little discussion about it. Most research simply takes some “conventional universe”, such as “all stocks on the NYSE”, “all the stocks in the S&P 500”, or “the entire set of the 50-60 most liquid futures”, without consideration for robustness and coherence. This book is the first source I’ve seen that actually puts this topic under a magnifying glass, beyond “finger in the air pick and choose”.

Part 4: And here’s where I level my main criticism at this book. For those that have read “Adaptive Asset Allocation: A Primer”, this section of the book is basically one giant copy and paste. It’s all one large buildup to “momentum rank + minimum-variance optimization”. All well and good, until there’s very little detail beyond the basics as to how the minimum-variance portfolio was constructed. Namely, what exactly is the minimum-variance algorithm in use? Is it one of the poor variants susceptible to the numerical instability inherent in inverting sample covariance matrices? Or is it a heuristic like David Varadi’s minimum variance and minimum correlation algorithm? The one feeling I absolutely could not shake was that this book had a perfect opportunity to present a robust approach to minimum variance, and instead, it’s long on concept, short on details. While the theory of “maximize return for unit risk” is all well and good, the actual algorithm that implements that theory in practice is not trivial, and the solutions taught to undergrads and master’s students have some well-known weaknesses. On top of this, one thing that got hammered into my head in the past was that ranking *also* has a weakness at the inclusion/exclusion point. E.g., if, out of ten assets, the fifth asset had a momentum of, say, 10.9%, and the sixth asset had a momentum of 10.8%, how are we so sure the fifth is so much better? And while I realize that this book was ultimately meant to be a primer, in my opinion, it would have been a no-objections five stars if there were an appendix that actually went beyond the simple concepts, with a small numerical example of some algorithms that address these well-known weaknesses. This doesn’t mean Greek/mathematical jargon–just an appendix that acknowledges that not every reader is picking up his first or second book about systematic investing, and that some of us are familiar with the “whys” and are more interested in the “hows”. Furthermore, I’d really love to know where the authors of this book got their data to back-date some of these ETFs into the 90s.
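To make the concern concrete, here’s a toy sketch (in Python, purely illustrative–these are not the book’s algorithms, since the book doesn’t specify one) contrasting the textbook closed-form minimum-variance solution for two assets with a simple inverse-volatility heuristic:

```python
def min_variance_two_asset(var1, var2, cov12):
    """Closed-form unconstrained minimum-variance weights for two assets:
    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12).  With noisy estimated
    inputs the denominator can get small, producing extreme weights --
    the same instability that plagues covariance-matrix inversion in
    larger universes."""
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1.0 - w1

def inverse_vol_weights(vols):
    """A common robust heuristic: ignore correlations entirely and
    weight each asset by the inverse of its volatility."""
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# Two assets, equal 20% vols, correlation 0.5 (cov = 0.5 * 0.2 * 0.2):
w1, w2 = min_variance_two_asset(0.04, 0.04, 0.02)
w_iv = inverse_vol_weights([0.20, 0.20])
```

With clean symmetric inputs the two methods agree (50/50 here); with noisy estimated covariances, the closed-form version can swing wildly while the heuristic stays sensible, which is exactly the robustness discussion I wanted the book to have.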

Part 5: Some more formal research on topics already covered in the rest of the book–namely, if I remember it correctly, a section about how many independent bets one can take as the number of assets grows. Long story short? You *easily* get the most bang for your buck among disparate asset classes, such as treasuries of various durations, commodities, developed vs. emerging equities, and so on, as opposed to trying to pick among stocks in the same asset class (though there’s some potential for alpha there…just…a lot less than you’d imagine). So in case the idea of asset class selection, not stock selection, wasn’t beaten into the reader’s head before this point, this part should do the trick. The other research paper is something I briefly skimmed, which went into more depth about volatility and retirement portfolios, though I felt that the book covered this topic earlier on to a sufficient degree by building up the intuition using very understandable scenarios.

So that’s the review of the book. Overall, it’s a very solid piece of writing, and as far as establishing the *why*, it does an absolutely superb job. For those that aren’t familiar with the concepts in this book, this is definitely a must-read, and ASAP.

However, for those familiar with most of the concepts and looking for a detailed “how” procedure, this book does not deliver as much as I would have liked. And I realize that while it’s a bad idea to publish secret sauce, I bought this book in the hope of being exposed to a new algorithm presented in the understandable and intuitive language that the rest of the book was written in, and was left wanting.

Still, that by no means diminishes the impact of the rest of the book. For those who are more likely to be its target audience, it’s a 5/5. For those that wanted some specifics, it still has its gem on universe construction.

Overall, I rate it a 4/5.

Thanks for reading.

Review: Inovance’s TRAIDE application

This review will be about Inovance Tech’s TRAIDE system. It is an application geared towards letting retail investors apply proprietary machine learning algorithms to assist them in creating systematic trading strategies. Currently, my one-line review is that while I hope the company founders mean well, the application is still in an early stage, and so should be checked out by potential users/venture capitalists as something with proof of potential, rather than a finished product ready for mass market. While this acts as a review, it’s also my thoughts on how Inovance Tech can improve its product.

A bit of background: I have spoken several times to some of the company’s founders, who sound like individuals at about my age level (so, fellow millennials). Ultimately, the selling point is this:

Systematic trading is cool.
Machine learning is cool.
Therefore, applying machine learning to systematic trading is awesome! (And a surefire way to make profits, as Renaissance Technologies has shown.)

While this may sound a bit snarky, it’s also, in some ways, true. Machine learning has become the talk of the town, from IBM’s Watson (RenTec itself hired a bunch of speech recognition experts from IBM a couple of decades back), to Stanford’s self-driving car (invented by Sebastian Thrun, who now heads Udacity), to the Netflix prize, to god knows what Andrew Ng is doing with deep learning at Baidu. Considering how well machine learning has done at much more complex tasks than “create a half-decent systematic trading algorithm”, it shouldn’t be too much to ask this powerful field at the intersection of computer science and statistics to help the retail investor glued to watching charts generate a lot more return on his or her investments than through discretionary chart-watching and noise trading. To my understanding from conversations with Inovance Tech’s founders, this is explicitly their mission.

(Note: Dr. Wes Gray and Alpha Architect, in their book DIY Financial Advisor, have already established that listening to pundits, and trying to succeed at discretionary trading, is, on the whole, a loser’s game.)

However, I am not sure that Inovance’s TRAIDE application actually accomplishes this mission in its current state.

Here’s how it works:

Users select one asset at a time, along with a date range (data goes back to Dec. 31, 2009). Assets are currently limited to highly liquid currency pairs, and the bar time frame can be set to 1-hour, 2-hour, 4-hour, 6-hour, or daily.

Users then select from a variety of indicators, ranging from technical (moving averages, oscillators, volume calculations, etc.–mostly an assortment of 20th-century indicators, though the occasional adaptive moving average has managed to sneak in, namely KAMA–see my DSTrading package–and MAMA, aka the MESA Adaptive Moving Average, from John Ehlers) to more esoteric ones, such as some sentiment indicators. Here’s where things start to head south for me, however: while it’s easy to add as many indicators as a user would like, there is basically no documentation on any of them–no links, no references, etc.–so users bear the onus of actually understanding what each and every one of the indicators they select does, and whether or not those indicators are useful. The TRAIDE application makes zero effort (thus far) to actually get users acquainted with the purpose of these indicators–what their theoretical objective is (measure conviction in a trend, detect a trend, act as an oscillator, etc.).

Furthermore, regarding indicator selection, users also specify one parameter setting for each indicator per strategy. E.g., if I had an EMA crossover, I’d have to create a new strategy for a 20/100 crossover, a 21/100 crossover, and so on, rather than specifying something like this:

short EMA: 20-60
long EMA: 80-200

Quantstrat itself has this functionality, and while I don’t recall covering parameter robustness checks/optimization (in other words, testing multiple parameter sets–whether one uses them for optimization or robustness is up to the user, not the functionality) in quantstrat on this blog specifically, this information very much exists in what I deem “the official quantstrat manual”, found here. In my opinion, the option of covering a range of values is mandatory so as to demonstrate that any given parameter setting is not a random fluke. Outside of quantstrat, I have demonstrated this methodology in my Hypothesis Driven Development posts, and in coming up with parameter selections for volatility trading.
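As a purely illustrative sketch (in Python rather than quantstrat, with made-up price data), a parameter-range sweep of the kind I’m describing looks something like this:

```python
import itertools

def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def crossover_return(prices, short_span, long_span):
    """Total return of a long-only EMA crossover: hold when short EMA > long EMA."""
    s, l = ema(prices, short_span), ema(prices, long_span)
    total = 1.0
    for t in range(1, len(prices)):
        if s[t - 1] > l[t - 1]:            # signal known at the prior bar
            total *= prices[t] / prices[t - 1]
    return total - 1.0

# Illustrative trending series; real use would load actual price data.
prices = [100 + 0.5 * t + 3 * ((-1) ** t) for t in range(200)]

# Sweep ranges of settings instead of a single hand-picked pair:
results = {
    (s, l): crossover_return(prices, s, l)
    for s, l in itertools.product(range(20, 61, 10), range(80, 201, 40))
}
best = max(results, key=results.get)
```

The point isn’t the best cell of the grid–it’s inspecting the *whole* grid: if performance falls off a cliff one step away from the “best” setting, that setting is likely a fluke.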

Where TRAIDE may do something interesting, however, is that after the user specifies his indicators and parameters, its “proprietary machine learning” algorithms (WARNING: COMPLETELY BLACK BOX) determine which ranges of the selected indicators’ values generated the best results within the backtest, and assign them bullishness and bearishness scores. In other words, “looking backwards, these were the indicator values that did best over the course of the sample”. While there is definite value in exploring the relationships between indicators and future returns, I think that TRAIDE needs to do more in this area, such as reporting p-values, conviction, and so on.

For instance, if you combine enough indicators, your “rule” is a market order that’s simply the intersection of all of the ranges of your indicators. TRAIDE may tell a user that the strongest bullish signal occurs when the difference of the moving averages is between 1 and 2, the ADX is between 20 and 25, the ATR is between 0.5 and 1, and so on. Each setting the user selects further narrows down the number of trades the simulation makes. In my opinion, there are more ways to explore the interplay of indicators than simply one giant AND statement, such as an “OR” statement of some sort (e.g. select all values, and put on a trade when 3 out of 5 indicators fall into the selected bullish range, in order to place more trades). While it may be wise to filter down trades to very rare instances if trading a massive amount of instruments, such that of several thousand possible instruments, only several are trading at any given time, with TRAIDE, a user selects only *one* asset class (currently, one currency pair) at a time, so I’m hoping to see TRAIDE create more functionality in terms of what constitutes a trading rule.
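To illustrate the difference (hypothetical signals, not TRAIDE’s actual logic), the strict AND rule versus a looser k-of-n voting rule looks like this:

```python
def and_rule(signals):
    """Strict intersection: trade only when every indicator is in its bullish range."""
    return all(signals)

def k_of_n_rule(signals, k=3):
    """Looser voting rule: trade when at least k of the indicators agree."""
    return sum(signals) >= k

# Five hypothetical indicator checks (e.g. MA diff, ADX, ATR, RSI, volume ranges):
signals = [True, True, True, False, True]

takes_and = and_rule(signals)       # False: one indicator disagrees, no trade
takes_vote = k_of_n_rule(signals)   # True: 4 of 5 agree, trade is placed
```

The voting rule trades more often, which matters when the user is restricted to a single instrument and the AND rule may fire only a handful of times in the whole sample.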

After the user selects both a long and a short rule (by simply filtering on the indicator ranges that TRAIDE’s machine learning algorithms have said are good), TRAIDE turns that into a backtest with a long equity curve, short equity curve, total equity curve, and trade statistics for aggregate, long, and short trades. By contrast, in quantstrat, one only receives aggregate trade statistics: whether long or short, all that matters to quantstrat is whether or not the trade made or lost money. For sophisticated users, it’s trivial enough to turn one set of rules on or off, but TRAIDE does more to hold the user’s hand in that regard.

Lastly, TRAIDE then generates MetaTrader4 code for a user to download.

And that’s the process.

In my opinion, while what Inovance Tech has set out to do with TRAIDE is interesting, I wouldn’t recommend it in its current state. For sophisticated individuals that know how to go through a proper research process, TRAIDE is too stringent in terms of parameter settings (one at a time), pre-coded indicators (its target audience probably can’t program too well), and asset classes (again, one at a time). However, for retail investors, my issue with TRAIDE is this:

There is a whole assortment of undocumented indicators, which then feed into black-box machine learning algorithms. The result is that the user has very little understanding of what the underlying algorithms actually do, and why the logic he or she is presented with is the output. While TRAIDE makes it trivially easy to generate any one given trading system, as multiple individuals have stated in slightly different ways before, writing a strategy is the easy part. Doing the work to understand if that strategy actually has an edge is much harder–namely, checking its robustness, its predictive power, its sensitivity to various regimes, and so on. Given TRAIDE’s rather short data history (2010 onwards), coupled with the opaqueness that the user operates under, my analogy would be this:

It’s like giving an inexperienced driver the keys to a sports car in a thick fog on a winding road. Nobody disputes that a sports car is awesome. However, the true burden of the work lies in making sure that the user doesn’t wind up smashing into a tree.

Overall, I like the TRAIDE application’s mission, and I think it may have potential as something for retail investors that don’t intend to learn the ins and outs of coding a trading system in R (despite me demonstrating many times over how to put such systems together). I just think that more work needs to be put into making sure that the results a user sees are indicative of an edge, rather than opening the possibility of highly flexible machine learning algorithms chasing ghosts in one of the noisiest and most dynamic data sets one can possibly find.

My recommendations are these:

1) Multiple asset classes
2) Allow parameter ranges, and cap the number of trials at any given point (e.g. 4 indicators with ten settings each = 10,000 possible trading systems = blow up the servers). To narrow down the number of trial runs, use techniques from experimental design to arrive at decent combinations. (I wish I remembered my response surface methodology techniques from my master’s degree about now!)
3) Allow modifications of order sizing (e.g. volatility targeting, stop losses), such as I wrote about in my hypothesis-driven development posts.
4) Provide *some* sort of documentation for the indicators, even if it’s as simple as a link to investopedia (preferably a lot more).
5) Far more output is necessary, especially for users who don’t program. Namely, to distinguish whether or not there is a legitimate edge, or if there are too few observations to reject the null hypothesis of random noise.
6) Far longer data histories. 2010 onwards just seems too short of a time-frame to be sure of a strategy’s efficacy, at least on daily data (may not be true for hourly).
7) Factor in transaction costs. Trading on an hourly time frame will mean far less P&L per trade than on a daily resolution. If MT4 charges a fixed ticket price, users need to know how this factors into their strategy.
8) Lastly, dogfooding. When I spoke last time with Inovance Tech’s founders, they claimed they were using their own algorithms to create a forex strategy, which was doing well in live trading. By the time more of these suggestions are implemented, it’d be interesting to see if they have a track record as a fund, in addition to as a software provider.
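To sketch what I mean in recommendation #2 (illustrative Python; plain random sampling as a crude stand-in for proper experimental-design methods like Latin hypercubes or response surfaces):

```python
import itertools
import random

def sample_parameter_grid(grid, max_trials, seed=42):
    """Draw at most max_trials distinct combinations from the full
    Cartesian product of parameter settings, instead of exhaustively
    backtesting every one of them."""
    full = list(itertools.product(*grid.values()))
    if len(full) <= max_trials:
        return full
    random.seed(seed)
    return random.sample(full, max_trials)

# Four hypothetical indicators with ten settings each -> 10,000 combinations...
grid = {
    "ema_short": range(10, 60, 5),
    "ema_long": range(100, 200, 10),
    "adx_threshold": range(15, 35, 2),
    "atr_window": range(5, 55, 5),
}
# ...but only 200 of them actually get backtested.
trials = sample_parameter_grid(grid, max_trials=200)
```

Even this naive approach keeps server load bounded while still giving a rough picture of how performance varies across the parameter space.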

If all of these things are accounted for and automated, the product will hopefully accomplish its mission of bringing systematic trading and machine learning to more people. I think TRAIDE has potential, and I’m hoping that its staff will realize that potential.

Thanks for reading.

NOTE: I am currently contracting in downtown Chicago, and am always interested in networking with professionals in the systematic trading and systematic asset management/allocation spaces. Find my LinkedIn here.

EDIT: Today in my email (Dec. 3, 2015), I received a notice that Inovance was making TRAIDE completely free. Perhaps they want a bunch more feedback on it?

A Review of DIY Financial Advisor, by Gray, Vogel, and Foulke

This post will review the DIY Financial Advisor book, which I thought was a very solid read, and especially pertinent to those who are more beginners at investing (especially systematic investing). While it isn’t exactly perfect, it’s about as excellent a primer on investing as one will find out there that is accessible to the lay-person, in my opinion.

Okay, so, official announcement: I am starting a new section of posts called “Reviews”, which I received from being asked to review this book. Essentially, I believe that anyone that’s trying to create a good product that will help my readers deserves a spotlight, and I myself would like to know what cool and innovative financial services/products are coming about. For those who’d like exposure on this site, if you’re offering an affordable and innovative product or service that can be of use to an audience like mine, reach out to me.

Anyway, this past weekend, while relocating to Chicago, I had the pleasure of reading Alpha Architect’s (Gray, Vogel, Foulke) book “DIY Financial Advisor”, which essentially makes a case as to why a retail investor should be able to outperform the expert financial advisers that charge several percentage points a year to manage one’s wealth.
The book starts off by citing various famous studies showing how many subtle subconscious biases and fallacies human beings are susceptible to (there are plenty), such as falling for complexity, overconfidence, and so on—none of which emotionless computerized systems and models suffer from. Furthermore, it also goes on to provide several anecdotal examples of experts gone bust, such as Victor Niederhoffer, who blew up not once, but twice (and rumor has it he blew up a third time), and studies showing that systematic data analysis has been shown to beat expert recommendations time and again—including when the experts were armed with the outputs of the models themselves. Throw in some quotes from Jim Simons (founder of the best hedge fund in the world, Renaissance Technologies), and the first part of the book can be summed up like this:

1) Your rank and file human beings are susceptible to many subconscious biases.

2) Don’t trust the recommendations of experts. Even simple models have systematically outperformed said “experts”. Some experts have even blown up, multiple times (e.g. Victor Niederhoffer).

3) Building an emotionless system will keep these human fallacies from wrecking your investment portfolio.

4) Sticking to a well-thought-out system is a good idea, even when it’s uncomfortable—such as when a marine has to wear a Kevlar helmet and carry extra ammo and extra water in a 126-degree Iraqi desert (just ask Dr./Captain Gray!).

This is all well and good—essentially making a very strong case for why you should build a system, and let the system do the investment allocation heavy lifting for you.
Next, the book goes into the FACTS acronym for manager selection—fees, access, complexity, taxes, and search. Fees: how much does it cost to have someone manage your investments? Pretty self-explanatory here. Access: how often can you pull your capital? (E.g. a hedge fund that locks you up for a year, especially when it loses money, should be run from, and fast.) Complexity: do you understand how the investments are managed? Taxes: long-term capital gains, or shorter-term? Generally, very few decent systems will be holding for a year or more, so in my opinion, expect to pay short-term taxes. Search: that is, how hard is it to find a good candidate? Given the sea of hedge funds (especially those with short-term track records, or only track records managing tiny amounts of money), how hard is it to find a manager who’ll beat the benchmark after fees? Answer: very difficult. In short, all the glitzy sophisticated managers you hear about? Far from a terrific deal, assuming you can even find one.

Continuing, the book goes into two separate anomalies that should form the foundation for any equity investment strategy–value, and momentum. The value system essentially goes long the top decile of the EBIT/TEV metric among the top 60% of market-cap companies traded on the NYSE, every year. In my opinion, this is a system that is difficult for the average investor to implement, both in terms of managing the data process and in terms of having the proper capital to allocate across all the various companies. That is, if you have a small amount of capital to invest, you might not be able to get that equal-weight allocation across a hundred separate companies. However, with the QVAL and IVAL ETFs (from Alpha Architect–full disclosure: I have some of my IRA invested there), I think that the systematic value component can be readily accessed.

The momentum strategy, however, is much simpler. There’s a momentum component, and a moving average component. There’s some math that shows that these two signals are related (a momentum signal is actually proportional to a difference of a moving average and its last value), and the ROBUST system that this book proposes is a combination of a momentum signal and an SMA signal. This is how it works. Assume you have $100,000 and 5 assets to invest in, for the sake of math. Divide the portfolio into a $50,000 momentum component and a $50,000 moving average component. Every month, allocate $10,000 to each of the five assets with a positive 12-month momentum, or stay in cash for that asset. Next, allocate another $10,000 to each of the five assets with a price above a 12-month simple moving average. It’s that simple, and given the recommended ETFs (commodities, bonds, foreign stocks, domestic stocks, real estate), it’s a system that most investors can rather easily implement, especially if they’ve been following my blog.
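The allocation arithmetic above is simple enough to sketch directly (illustrative Python with made-up prices and asset names; simple price change stands in for 12-month total return):

```python
def robust_allocation(prices_12m_ago, sma_12m, prices_now, capital=100_000.0):
    """Split capital 50/50 between a momentum sleeve and a moving-average
    sleeve.  Each sleeve divides its half equally across the n assets; a
    slot goes to the asset if its signal is positive, else it stays in cash."""
    n = len(prices_now)
    slot = (capital / 2) / n                # $10,000 per asset per sleeve for 5 assets
    alloc = {}
    for asset in prices_now:
        momentum_on = prices_now[asset] > prices_12m_ago[asset]  # positive 12m momentum
        sma_on = prices_now[asset] > sma_12m[asset]              # price above 12m SMA
        alloc[asset] = slot * momentum_on + slot * sma_on
    cash = capital - sum(alloc.values())
    return alloc, cash

# Hypothetical monthly snapshot for the five recommended asset classes:
now      = {"stocks": 110, "foreign": 95,  "bonds": 102, "reits": 120, "commod": 80}
year_ago = {"stocks": 100, "foreign": 100, "bonds": 100, "reits": 100, "commod": 100}
sma      = {"stocks": 105, "foreign": 97,  "bonds": 101, "reits": 115, "commod": 90}

alloc, cash = robust_allocation(year_ago, sma, now)
# Stocks, bonds, and REITs pass both signals ($20,000 each);
# foreign equities and commodities fail both, so $40,000 sits in cash.
```

Repeat the signal check every month and that’s the whole system–which is why the book’s claim that most investors can implement it holds up.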

For those interested in more market anomalies (especially value anomalies), there’s a chapter which contains a large selection of academic papers that go back and forth on the efficacies of various anomalies and how well they can predict returns. So, for those interested, it’s there.

The book concludes with some potential pitfalls which a DIY investor should be aware of when running his or her own investments, which is essentially another psychology chapter.
Overall, in my opinion, this book is fairly solid in terms of reasons why a retail investor should take the plunge and manage his or her own investments in a systematic fashion. Namely, that flesh-and-blood advisers are prone to human error, and on top of that, usually charge an unjustifiably high fee and then deliver lackluster performance. The book recommends a couple of simple systems, one of which I think anyone who can copy and paste some rudimentary R can follow (the ROBUST momentum system), and another which I think most stay-at-home investors should shy away from (the value system, simply because of the difficulty of dealing with all that data), deferring instead to either or both of Alpha Architect’s two ETFs.
In terms of momentum, there are the ALFA, GMOM, and MTUM tickers (do your homework, I’m long ALFA) for various differing exposures to this anomaly, for those that don’t want to pay the constant transaction costs/incur short-term taxes of running their own momentum strategy.

In terms of where this book comes up short, here are my two cents:
Tested over nearly a century, the risk-reward tradeoffs of these systems can still be frightening at times. That is, for a system that delivers a CAGR of around 15%, are you still willing to risk a 50% drawdown? Losing half of everything on the cusp of retirement sounds very scary, no matter the long-term upside.

Furthermore, this book keeps things simple, with an intended audience of mom-and-pop investors (who have historically underperformed the S&P 500!). While I think it accomplishes this, I think there could have been value added, even for such individuals, by outlining some ETFs or simple ETF/ETN trading systems that can diversify a portfolio. For instance, while volatility trading sounds very scary, in the context of providing diversification, it may be worth looking into; 2008 was a banner year for most volatility trading strategies that managed to go long and stay long volatility through the crisis. I myself still have very little knowledge of all of the various exotic ETFs that are popping up left and right, and I would look very favorably on a reputable source that could provide a tour of some that deliver respectable return diversification to a basic equities/fixed-income/real-asset ETF-based portfolio, as outlined in one of the chapters in this book, and in other books, such as Meb Faber’s Global Asset Allocation (a very cheap ebook).

One last thing that I’d like to touch on—this book is written in a very accessible style, and even the math (yes, math!) is understandable for someone that’s passed basic algebra. It’s something that can be read in one or two sittings, and I’d recommend it to anyone that’s a beginner in investing or systematic investing.

Overall, I give this book a solid 4/5 stars. It’s simple, easily understood, and brings systematic investing to the masses in a way that many people can replicate at home. However, I would have liked to see some beyond-the-basics content as well given the plethora of different ETFs.

Thanks for reading.