Testing the Hierarchical Risk Parity algorithm

This post is a modified version of the Adaptive Asset Allocation backtest from AllocateSmartly, using the Hierarchical Risk Parity algorithm from the last post, as Adam Butler was eager to see the results. On the whole, as Adam Butler had told me he had observed, HRP does not generate outperformance when applied to a small, carefully-constructed, diversified-by-selection universe of asset classes, as opposed to a universe of hundreds or even several thousand assets, where its theoretically superior properties have more room to show through.

First off, I would like to thank Matthew Barry for helping me modify my HRP algorithm so that it does not use the global environment for recursion. You can find his GitHub here.

Here is the modified HRP code.

covMat <- read.csv('cov.csv', header = FALSE)
corMat <- read.csv('corMat.csv', header = FALSE)

clustOrder <- hclust(dist(corMat), method = 'single')$order

getIVP <- function(covMat) {
  invDiag <- 1/diag(as.matrix(covMat))
  weights <- invDiag/sum(invDiag)
  return(weights)
}

getClusterVar <- function(covMat, cItems) {
  covMatSlice <- covMat[cItems, cItems]
  weights <- getIVP(covMatSlice)
  cVar <- t(weights) %*% as.matrix(covMatSlice) %*% weights
  return(cVar)
}

getRecBipart <- function(covMat, sortIx) {
  w <- rep(1,ncol(covMat))
  w <- recurFun(w, covMat, sortIx)
  return(w)
}

recurFun <- function(w, covMat, sortIx) {
  subIdx <- 1:trunc(length(sortIx)/2)
  cItems0 <- sortIx[subIdx]
  cItems1 <- sortIx[-subIdx]
  cVar0 <- getClusterVar(covMat, cItems0)
  cVar1 <- getClusterVar(covMat, cItems1)
  alpha <- 1 - cVar0/(cVar0 + cVar1)
  
  # scoping mechanics using w as a free parameter
  w[cItems0] <- w[cItems0] * alpha
  w[cItems1] <- w[cItems1] * (1-alpha)
  
  if(length(cItems0) > 1) {
    w <- recurFun(w, covMat, cItems0)
  }
  if(length(cItems1) > 1) {
    w <- recurFun(w, covMat, cItems1)
  }
  return(w)
}


out <- getRecBipart(covMat, clustOrder)
out

Here, covMat and corMat are from the last post. In fact, this function can be further modified by encapsulating the clustering order within the getRecBipart function, but in the interest of keeping the code as similar to Marcos Lopez de Prado’s as I could, I’ll leave this here. A sketch of what that encapsulation might look like is below.
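
For the curious, here is a minimal sketch of that encapsulation; the wrapper name hrpWeights is my own, not Lopez de Prado’s, and it simply reuses the getRecBipart function defined above.

# hrpWeights is a hypothetical wrapper of my own that folds the clustering
# step into one call; it is not part of Lopez de Prado's original code.
hrpWeights <- function(covMat, corMat) {
  clustOrder <- hclust(dist(corMat), method = 'single')$order
  w <- getRecBipart(covMat, clustOrder)
  return(w)
}

# hrpWeights(covMat, corMat) # same output as the two-step version above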

Anyhow, the backtest will follow. One thing I will mention is that I’m using Quandl’s EOD database, as Yahoo has really screwed up their financial database (I.E. some sector SPDRs have broken data, dividends not adjusted, etc.). While this database is a $50/month subscription, I believe free users can access it up to 150 times in 60 days, so that should be enough to run backtests from this blog, so long as you save your downloaded time series for later use by using write.zoo.

This code needs the tseries library for the portfolio.optim function for the minimum variance portfolio (Dr. Kris Boudt has a course on this at datacamp), and the other standard packages.

A helper function for this backtest (and really, any other momentum rotation backtest) is the appendMissingAssets function, which simply adds on assets not selected to the final weighting and re-orders the weights by the original ordering.

require(tseries)
require(PerformanceAnalytics)
require(quantmod)
require(Quandl)

Quandl.api_key("YOUR_AUTHENTICATION_HERE") # not displaying my own api key, sorry :(

# function to append missing (I.E. assets not selected) asset names and sort into original order
appendMissingAssets <- function(wts, allAssetNames, wtsDate) {
  absentAssets <- allAssetNames[!allAssetNames %in% names(wts)]
  absentWts <- rep(0, length(absentAssets))
  names(absentWts) <- absentAssets
  wts <- c(wts, absentWts)
  wts <- xts(t(wts), order.by=wtsDate)
  wts <- wts[,allAssetNames]
  return(wts)
}

Next, we make the call to Quandl to get our data.

symbols <- c("SPY", "VGK",	"EWJ",	"EEM",	"VNQ",	"RWX",	"IEF",	"TLT",	"DBC",	"GLD")	

rets <- list()
for(i in 1:length(symbols)) {
  
  # quandl command to download from EOD database. Free users should use write.zoo in this loop.
  
  returns <- Return.calculate(Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close)
  colnames(returns) <- symbols[i]
  rets[[i]] <- returns
}
rets <- na.omit(do.call(cbind, rets))
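
For free-database users trying to stay under the API limit, here is a minimal sketch of the caching idea mentioned above, using write.zoo and read.zoo; the cachedQuandl function name and the file naming convention are my own, not anything Quandl requires.

# Hypothetical caching helper: download a series once, save it with write.zoo,
# and read it back from disk on subsequent runs instead of spending API calls.
cachedQuandl <- function(symbol) {
  fileName <- paste0(symbol, "_eod.csv")
  if(file.exists(fileName)) {
    out <- xts(read.zoo(fileName, format = "%Y-%m-%d", sep = ",", header = TRUE))
  } else {
    out <- Quandl(paste0("EOD/", symbol), start_date = "1990-12-31", type = "xts")
    write.zoo(out, file = fileName, sep = ",")
  }
  return(out)
}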

While Josh Ulrich fixed quantmod to actually get Yahoo data after Yahoo broke the API, the problem is that the Yahoo data is now garbage as well, and I’m not sure how much Josh Ulrich can do about that. I really hope some other provider can step up and provide free, usable EOD data so that I don’t have to worry about readers not being able to replicate the backtest, as my policy for this blog is that readers should be able to replicate the backtests so they don’t just nod and take my word for it. If you are or know of such a provider, please leave a comment so that I can let the blog readers know all about you.

Next, we initialize the settings for the backtest.

invVolWts <- list()
minVolWts <- list()
hrpWts <- list()
ep <- endpoints(rets, on =  "months")
nMonths = 6 # month lookback (6 as per parameters from allocateSmartly)
nVol = 20 # day lookback for volatility (20 ibid)

While the AAA backtest actually uses a 126-day lookback instead of a 6-month one, the strategy trades at the end of every month, so a 6-month endpoint lookback is effectively the same thing, give or take a few days out of 126, and the code is less complex this way.

Next, we have our actual backtest.

for(i in 1:(length(ep)-nMonths)) {
  
  # get returns subset and compute absolute momentum
  retSubset <- rets[c(ep[i]:ep[(i+nMonths)]),]
  retSubset <- retSubset[-1,]
  moms <- Return.cumulative(retSubset)
  
  # select top performing assets and subset returns for them
  highRankAssets <- rank(moms) >= 6 # top 5 assets
  posReturnAssets <- moms > 0 # positive momentum assets
  selectedAssets <- highRankAssets & posReturnAssets # intersection of the above
  selectedSubset <- retSubset[,selectedAssets] # subset returns slice
  
  if(sum(selectedAssets)==0) { # if no qualifying assets, zero weight for period
    
    wts <- xts(t(rep(0, ncol(retSubset))), order.by=last(index(retSubset)))
    colnames(wts) <- colnames(retSubset)
    invVolWts[[i]] <- minVolWts[[i]] <- hrpWts[[i]] <- wts
    
  } else if (sum(selectedAssets)==1) { # if one qualifying asset, invest fully into it
    
    wts <- xts(t(rep(0, ncol(retSubset))), order.by=last(index(retSubset)))
    colnames(wts) <- colnames(retSubset)
    wts[, which(selectedAssets==1)] <- 1
    invVolWts[[i]] <- minVolWts[[i]] <- hrpWts[[i]] <- wts
    
  } else { # otherwise, use weighting algorithms
    
    cors <- cor(selectedSubset) # correlation
    volSubset <- tail(selectedSubset, nVol) # 20 day volatility
    vols <- StdDev(volSubset)
    covs <- t(vols) %*% vols * cors
    
    # minimum volatility using portfolio.optim from tseries
    minVolRets <- t(matrix(rep(1, sum(selectedAssets))))
    minVolWt <- portfolio.optim(x=minVolRets, covmat = covs)$pw
    names(minVolWt) <- colnames(covs)
    minVolWt <- appendMissingAssets(minVolWt, colnames(retSubset), last(index(retSubset)))
    minVolWts[[i]] <- minVolWt
    
    # inverse volatility weights
    invVols <- 1/vols 
    invVolWt <- invVols/sum(invVols) 
    invNames <- colnames(invVolWt)
    invVolWt <- as.numeric(invVolWt) 
    names(invVolWt) <- invNames
    invVolWt <- appendMissingAssets(invVolWt, colnames(retSubset), last(index(retSubset)))
    invVolWts[[i]] <- invVolWt
    
    # hrp weights
    clustOrder <- hclust(dist(cors), method = 'single')$order
    hrpWt <- getRecBipart(covs, clustOrder)
    names(hrpWt) <- colnames(covs)
    hrpWt <- appendMissingAssets(hrpWt, colnames(retSubset), last(index(retSubset)))
    hrpWts[[i]] <- hrpWt
  }
}

In a few sentences, this is what happens:

The algorithm takes a subset of the returns (the past six months at every month), and computes absolute momentum. It then ranks the ten absolute momentum calculations, and selects the intersection of the top 5, and those with a return greater than zero (so, a dual momentum calculation).
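
As a quick hypothetical illustration of that selection step (the momentum figures below are made up purely for the example):

# made-up six-month momentum figures, purely for illustration
moms <- c(SPY = .05, VGK = -.02, EWJ = -.01, EEM = .04, VNQ = -.03,
          RWX = -.04, IEF = -.005, TLT = -.05, DBC = .06, GLD = .01)
highRankAssets <- rank(moms) >= 6 # top 5 of 10 by relative momentum
posReturnAssets <- moms > 0 # absolute momentum filter
names(moms)[highRankAssets & posReturnAssets] # SPY, EEM, DBC, GLD; IEF ranks in the top 5 but is negative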

If no assets qualify, the algorithm invests in nothing. If there’s only one asset that qualifies, the algorithm invests fully in that one asset. If there are two or more qualifying assets, the algorithm computes a covariance matrix by multiplying the outer product of the 20-day volatilities with the 126-day correlation matrix, element-wise (that is, sd_20′ %*% sd_20 * cor_126). It then computes normalized inverse volatility weights using the volatility from the past 20 days, a minimum variance portfolio with the portfolio.optim function, and lastly, the hierarchical risk parity weights using the HRP code above from Marcos Lopez de Prado’s paper. A quick sanity check of the covariance construction is below.
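
The check below can be run after the backtest loop, since vols and cors from the final iteration remain in the environment; it simply confirms that the outer-product form above matches the textbook diagonal-matrix form.

# sanity check: t(vols) %*% vols * cors is the same covariance matrix as
# diag(sd) %*% cor %*% diag(sd)
covCheck1 <- t(vols) %*% vols * cors
covCheck2 <- diag(as.numeric(vols)) %*% as.matrix(cors) %*% diag(as.numeric(vols))
all.equal(as.numeric(covCheck1), as.numeric(covCheck2)) # TRUE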

Lastly, the program puts together all of the weights, and adds a cash investment for any period without any investments.

invVolWts <- round(do.call(rbind, invVolWts), 3) # round for readability
minVolWts <- round(do.call(rbind, minVolWts), 3)
hrpWts <- round(do.call(rbind, hrpWts), 3)

# allocate to cash if no allocation made due to all negative momentum assets
invVolWts$cash <- 0; invVolWts$cash <- 1-rowSums(invVolWts)
hrpWts$cash <- 0; hrpWts$cash <- 1-rowSums(hrpWts)
minVolWts$cash <- 0; minVolWts$cash <- 1-rowSums(minVolWts)

# cash value will be zero
rets$cash <- 0

# compute backtest returns
invVolRets <- Return.portfolio(R = rets, weights = invVolWts)
minVolRets <- Return.portfolio(R = rets, weights = minVolWts)
hrpRets <- Return.portfolio(R = rets, weights = hrpWts)

Here are the results:

compare <- cbind(invVolRets, minVolRets, hrpRets)
colnames(compare) <- c("invVol", "minVol", "HRP")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))  
                             invVol    minVol       HRP
Annualized Return         0.0872000 0.0724000 0.0792000
Annualized Std Dev        0.1208000 0.1025000 0.1136000
Annualized Sharpe (Rf=0%) 0.7221000 0.7067000 0.6968000
Worst Drawdown            0.1548801 0.1411368 0.1593287
Calmar Ratio              0.5629882 0.5131956 0.4968234

In short, in the context of a small, carefully-selected, and allegedly diversified (I’ll let Adam Butler speak to that one) universe, where the dominant decision is which assets to invest in rather than how much to allocate to each, the theoretical upside of an algorithm that exploits the covariance structure without needing to invert a covariance matrix can be lost.

Still, this test (albeit from 2007 onwards, thanks to ETF inception dates combined with lookback burn-in) confirms what Adam Butler himself told me, which is that HRP hasn’t impressed him, and from this backtest, I can see why. That said, in the context of dual momentum rank selection, I’m not convinced that any weighting scheme will realize much better performance than any other.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My LinkedIn profile can be found here.

A Review of Gary Antonacci’s Dual Momentum Investing Book

This review is a book review of Gary Antonacci’s Dual Momentum Investing book.

The TL;DR: 4.5 out of 5 stars.

So, I honestly have very little criticism of the book beyond the fact that it sort of insinuates that equity momentum is the be-all and end-all of investing, which is why I deduct a fraction of a point.

Now, for the book itself: first off, unlike other quantitative trading books I’ve read (aside from Andreas Clenow’s), the book outlines a very simple-to-follow strategy, to the point that it has already been replicated over at AllocateSmartly. (Side note: I think Walter’s resource at Allocate Smartly is probably the single best one-stop shop for reading up on any tactical asset allocation strategy, as it’s a compendium of many strategies in the 7-15% CAGR risk/return profile, and it even has a correlation matrix between them all.)

Regarding the rest of the content, Antonacci does a very thorough job of stepping readers through the history/rationale of momentum, and not just that, but also addressing the alternatives to his strategy.

While the “why momentum works” material can be found in this book and others on the subject (I.E. Alpha Architect’s Quantitative Momentum book), I do like the section on other alternative assets. Namely, the book touches upon the fact that commodities no longer trend, so a lot of CTAs are taking it on the chin, and that historically, fixed income has delivered much worse absolute returns than equities. Furthermore, smart beta isn’t (smart), and many of these factors have very low aggregate returns (if they’re significant at all; I believe Wesley Gray at Alpha Architect has a blog post stating that they aren’t). There are a fair number of footnotes for those interested in corroborating the assertions. Suffice it to say, among strategies that don’t need daily micromanagement and don’t use leverage (essentially, anything outside the space of volatility trading strategies), equity momentum is about as good as you get.

Antonacci then introduces his readers to his GEM (Global Equities Momentum) strategy, which can be explained in a few sentences: at the end of each month, calculate the total 12-month return of SPY, EAFE, and BIL. If BIL has the highest return, buy AGG for that month; otherwise, buy the asset with the highest return. Repeat. That’s literally it, and the performance characteristics, on a risk-adjusted basis, are superior to just about any equity fund tied down to a tiny tracking error. Essentially, the reason is that equity markets have bear markets, and a dual momentum strategy lets you preserve your gains instead of giving them back in market corrections (I.E. 2000-2003, 2008, etc.) while keeping pace during the good times. A rough sketch of this logic in R follows.
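
The sketch below uses SPY, EFA (as an EAFE proxy), BIL, and AGG as assumed ETF stand-ins, with trailing 12-month total returns at month end; it is my reading of the description above, not Antonacci’s own implementation, which uses index data going back much further.

require(quantmod)
require(PerformanceAnalytics)

# rough sketch of GEM using assumed ETF proxies -- not Antonacci's own code
symbols <- c("SPY", "EFA", "BIL", "AGG")
getSymbols(symbols, src = 'yahoo', from = '1990-01-01')
prices <- do.call(cbind, lapply(symbols, function(s) Ad(get(s))))
colnames(prices) <- symbols
monthly <- prices[endpoints(prices, on = 'months'),]
monthlyRets <- na.omit(Return.calculate(monthly))

# trailing 12-month total return of the three decision assets
twelveMonth <- na.omit(monthly/lag(monthly, 12) - 1)

# if BIL has the highest 12-month return, hold AGG; otherwise hold the winner of SPY vs. EFA
hold <- c("SPY", "EFA", "AGG")[apply(twelveMonth[, c("SPY", "EFA", "BIL")], 1, which.max)]
weights <- xts(t(sapply(hold, function(h) as.numeric(symbols == h))),
               order.by = index(twelveMonth))
colnames(weights) <- symbols

gemRets <- Return.portfolio(R = monthlyRets, weights = weights)
charts.PerformanceSummary(gemRets)
table.AnnualizedReturns(gemRets)

Obviously, the ETF-based history here is far shorter than the monthly table going back to 1974 in the book’s appendix, so treat it as a mechanical illustration rather than a replication.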

Antonacci also provides some ideas for possibly improving on GEM, which I may examine in the future. However, the low-hanging fruit for improving on this strategy, in my opinion, is to find some other strategies that diversify its drawdowns and raise its risk-adjusted return profile. Even if the total return goes down, I believe that an Interactive Brokers account can offer some amount of leverage (either 50% or 100%) to boost the total returns back up, or one could combine a more diversified portfolio with a volatility strategy.

Lastly, the appendix includes the original dual momentum paper, and a monthly return table for GEM going back to 1974.

All in all, this book is about as accessible and comprehensive as you can get on a solid strategy that readers actually *can* implement themselves in their brokerage account of choice (please use IB or Robinhood, because there’s no point paying $8-$10 per trade if you’re retail). That said, I still think there are avenues worth exploring if you’re looking to improve your overall portfolio with GEM as a foundation.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time roles which can benefit from my skill set. My LinkedIn profile can be found here.

The Problem With Depmix For Online Regime Prediction

This post will be about attempting to use the Depmix package for online state prediction. While the depmix package performs admirably when it comes to describing the states of the past, when used for one-step-ahead prediction, under the assumption that tomorrow’s state will be identical to today’s, the hidden markov model process found within the package does not perform to expectations.

So, to start off, this post was motivated by Michael Halls-Moore, who recently posted some R code about using the depmixS4 library to use hidden markov models. Generally, I am loath to create posts on topics I don’t feel I have an absolutely front-to-back understanding of, but I’m doing this in the hope of learning from others on how to appropriately do online state-space prediction, or “regime switching” detection, as it may be called in more financial parlance.

Here’s Dr. Halls-Moore’s post.

While I’ve seen the usual theory of hidden markov models (that is, it can rain or it can be sunny, but you can only infer the weather judging by the clothes you see people wearing outside your window when you wake up), and have worked with toy examples in MOOCs (Udacity’s self-driving car course deals with them, if I recall correctly–or maybe it was the AI course), at the end of the day, theory is only as good as how well an implementation can work on real data.

For this experiment, I decided to take SPY data since inception, and do a full in-sample “backtest” on the data. That is, given that the HMM algorithm from depmix sees the whole history of returns, with this “god’s eye” view of the data, does the algorithm correctly classify the regimes, if the backtest results are any indication?

Here’s the code to do so, inspired by Dr. Halls-Moore’s.

require(depmixS4)
require(quantmod)
require(PerformanceAnalytics) # for Return.calculate
getSymbols('SPY', from = '1990-01-01', src='yahoo', adjust = TRUE)
spyRets <- na.omit(Return.calculate(Ad(SPY)))

set.seed(123)

hmm <- depmix(SPY.Adjusted ~ 1, family = gaussian(), nstates = 3, data=spyRets)
hmmfit <- fit(hmm, verbose = FALSE)
post_probs <- posterior(hmmfit)
post_probs <- xts(post_probs, order.by=index(spyRets))
plot(post_probs$state)
summaryMat <- data.frame(summary(hmmfit))
colnames(summaryMat) <- c("Intercept", "SD")
bullState <- which(summaryMat$Intercept > 0)
bearState <- which(summaryMat$Intercept < 0)

hmmRets <- spyRets * lag(post_probs$state == bullState) - spyRets * lag(post_probs$state == bearState)
charts.PerformanceSummary(hmmRets)
table.AnnualizedReturns(hmmRets)

While I did select three states, I noted that anything with an intercept above zero is a bull state and anything below zero is a bear state, so essentially, it reduces to two states.

With the result:

table.AnnualizedReturns(hmmRets)
                          SPY.Adjusted
Annualized Return               0.1355
Annualized Std Dev              0.1434
Annualized Sharpe (Rf=0%)       0.9448

So, not particularly terrible. The algorithm works, kind of, sort of, right?

Well, let’s try online prediction now.

require(doMC)

dailyHMM <- function(data, nPoints) {
  subRets <- data[1:nPoints,]
  hmm <- depmix(SPY.Adjusted ~ 1, family = gaussian(), nstates = 3, data = subRets)
  hmmfit <- fit(hmm, verbose = FALSE)
  post_probs <- posterior(hmmfit)
  summaryMat <- data.frame(summary(hmmfit))
  colnames(summaryMat) <- c("Intercept", "SD")
  bullState <- which(summaryMat$Intercept > 0)
  bearState <- which(summaryMat$Intercept < 0)
  if(last(post_probs$state) %in% bullState) {
    state <- xts(1, order.by=last(index(subRets)))
  } else if (last(post_probs$state) %in% bearState) {
    state <- xts(-1, order.by=last(index(subRets)))
  } else {
    state <- xts(0, order.by=last(index(subRets)))
  }
  colnames(state) <- "State"
  return(state)
}

# took 3 hours in parallel
t1 <- Sys.time()
set.seed(123)
registerDoMC((detectCores() - 1))
states <- foreach(i = 500:nrow(spyRets), .combine=rbind) %dopar% {
  dailyHMM(data = spyRets, nPoints = i)
}
t2 <- Sys.time()
print(t2-t1)

So what I did here was take an expanding window, starting from 500 days after SPY’s inception, and keep increasing it by one day at a time. My prediction was, trivially enough, the state of the most recent day, using a 1 for a bull state and a -1 for a bear state. I ran this process in parallel (on a Linux cluster, because Windows’s doParallel library seems to not even know that certain packages are loaded, and it’s messier), and the first big issue is that this process took about three hours on seven cores for about 23 years of data. Not exactly encouraging, but computing time isn’t expensive these days.

So let’s see if this process actually works.

First, let’s test if the algorithm does what it’s actually supposed to do and use one day of look-ahead bias (that is, the algorithm tells us the state at the end of the day–how correct is it even for that day?).


onlineRets <- spyRets * states 
charts.PerformanceSummary(onlineRets)
table.AnnualizedReturns(onlineRets)

With the result:

> table.AnnualizedReturns(onlineRets)
                          SPY.Adjusted
Annualized Return               0.2216
Annualized Std Dev              0.1934
Annualized Sharpe (Rf=0%)       1.1456

So, allegedly, the algorithm seems to do what it was designed to do, which is to classify a state for a given data set. Now, the most pertinent question: how well do these predictions do even one day ahead? You’d think that state space predictions would be stable from day to day, given the long history, correct?


onlineRets <- spyRets * lag(states)
charts.PerformanceSummary(onlineRets)
table.AnnualizedReturns(onlineRets)

With the result:

> table.AnnualizedReturns(onlineRets)
                          SPY.Adjusted
Annualized Return               0.0172
Annualized Std Dev              0.1939
Annualized Sharpe (Rf=0%)       0.0888

That is, without the lookahead bias, the state space prediction algorithm is atrocious. Why is that?

Well, here’s the plot of the states:

In short, the online HMM algorithm in the depmix package seems to change its mind very easily, with obvious (negative) implications for actual trading strategies.

So, that wraps it up for this post. Essentially, the main message here is this: there’s a vast difference between doing descriptive analysis (AKA “where have you been, why did things happen”) and predictive analysis (that is, “if I correctly predict the future, I get a positive payoff”). In my opinion, while descriptive statistics have their purpose in terms of explaining why a strategy may have performed how it did, ultimately, we’re always looking for better prediction tools. In this case, depmix, at least in this “out-of-the-box” demonstration, does not seem to be the tool for that.

If anyone has had success using depmix (or another regime-switching algorithm in R) for prediction, I would love to see work that details the procedure taken, as it’s an area I’m looking to expand my toolbox into but don’t have any particularly good leads for. Essentially, I’d like to think of this post as me describing my own experiences with the package.

Thanks for reading.

NOTE: On Oct. 5th, I will be in New York City. On Oct. 6th, I will be presenting at The Trading Show on the Programming Wars panel.

NOTE: My current analytics contract is up for review at the end of the year, so I am officially looking for other offers as well. If you have a full-time role which may benefit from the skills you see on my blog, please get in touch with me. My linkedin profile can be found here.

How To Compute Turnover With Return.Portfolio in R

This post will demonstrate how to take turnover into account when dealing with returns-based data, using PerformanceAnalytics and the Return.portfolio function in R. It will do so with a basic strategy on the nine sector SPDRs.

So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which can be found to be demonstrated on this blog already), occasionally, there will be an industry veteran, a PhD statistics student from Stanford, or other very intelligent individual that will ask a question on a topic that I haven’t yet touched on this blog, which will prompt a post to demonstrate another technical aspect found in R. This is one of those times.

So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply, outside of the PortfolioAnalytics package, PerformanceAnalytics with its Return.Portfolio function is the go-to R package for portfolio management simulations, as it can take a set of weights, a set of returns, and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics’s functions.

Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest into it. Normalize across all invested sectors (that is, 1/9th if invested into all 9, 100% into 1 if only 1 invested into, 100% cash, denoted with a zero return vector, if no sectors are invested into). It’s a simple, toy strategy, as the strategy isn’t the point of the demonstration.

Here’s the basic setup code:

require(TTR)
require(PerformanceAnalytics)
require(quantmod)

symbols <- c("XLF", "XLK", "XLU", "XLE", "XLP", "XLF", "XLB", "XLV", "XLY")
getSymbols(symbols, src='yahoo', from = '1990-01-01-01')
prices <- list()
for(i in 1:length(symbols)) {
  tmp <- Ad(get(symbols[[i]]))
  prices[[i]] <- tmp
}
prices <- do.call(cbind, prices)

# Our signal is a simple adjusted price over 200 day SMA
signal <- prices > xts(apply(prices, 2, SMA, n = 200), order.by=index(prices))

# equal weight all assets with price above SMA200
returns <- Return.calculate(prices)
weights <- signal/(rowSums(signal)+1e-16)

# With Return.portfolio, need all weights to sum to 1
weights$zeroes <- 1 - rowSums(weights)
returns$zeroes <- 0

monthlyWeights <- na.omit(weights[endpoints(weights, on = 'months'),])
weights <- na.omit(weights)
returns <- na.omit(returns)

So, get the SPDRs, put them together, compute their returns, generate the signal, and create the zero vector, since Return.portfolio treats weights summing to less than 1 as a withdrawal of capital, and weights summing to more than 1 as the addition of more capital (big FYI here).

Now, here’s how to compute turnover:

out <- Return.portfolio(R = returns, weights = monthlyWeights, verbose = TRUE)
beginWeights <- out$BOP.Weight
endWeights <- out$EOP.Weight
txns <- beginWeights - lag(endWeights)
monthlyTO <- xts(rowSums(abs(txns[,1:9])), order.by=index(txns))
plot(monthlyTO)

So, the trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. The latter two stand for Beginning Of Period Weight and End Of Period Weight.

Turnover is computed simply as the difference between where the portfolio ended the previous period, after the day’s returns drifted the weights, and where it actually stands at the beginning of the next period. That is, the end-of-period weight is the beginning-of-period weight after taking into account the day’s drift/return for each asset, and the new beginning-of-period weight is the previous end-of-period weight plus any transacting that would have been done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end-of-period weight from the current beginning-of-period weight.
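
As a toy numeric illustration of that identity (the numbers are made up): suppose a two-asset portfolio drifted to 55/45 by the previous close, and the rebalance brings it back to 50/50 at the start of the next period.

prevEOP <- c(XLK = 0.55, XLF = 0.45) # end-of-period weights, after drift
currBOP <- c(XLK = 0.50, XLF = 0.50) # beginning-of-period weights, after rebalancing
txnsToy <- currBOP - prevEOP # -0.05 and +0.05, the implied transactions
sum(abs(txnsToy)) # 0.10, i.e. 10% two-way turnover for that rebalance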

This is what such transactions look like for this strategy.

Something we can do with such data is take a one-year rolling turnover, accomplished with the following code:

yearlyTO <- runSum(monthlyTO, 252)
plot(yearlyTO, main = "running one year turnover")

It looks like this:

This essentially means that one year’s worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.

Now, here’s the application when you penalize transaction costs at 20 basis points of turnover (that is, it costs 20 cents to transact $100).

txnCosts <- monthlyTO * -.0020
retsWithTxnCosts <- out$returns + txnCosts
compare <- na.omit(cbind(out$returns, retsWithTxnCosts))
colnames(compare) <- c("NoTxnCosts", "TxnCosts20BPs")
charts.PerformanceSummary(compare)
table.AnnualizedReturns(compare)

And the result:


                          NoTxnCosts TxnCosts20BPs
Annualized Return             0.0587        0.0489
Annualized Std Dev            0.1554        0.1553
Annualized Sharpe (Rf=0%)     0.3781        0.3149

So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.

So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as they would like with modeling the transaction costs.

Thanks for reading.

NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you’ll be able to find me there.

Create Amazing Looking Backtests With This One Wrong–I Mean Weird–Trick! (And Some Troubling Logical Invest Results)

This post will outline an easy-to-make mistake in writing vectorized backtests–namely in using a signal obtained at the end of a period to enter (or exit) a position in that same period. The difference in results one obtains is massive.

Today, I saw two separate posts from Alpha Architect and Mike Harris both referencing a paper by Valeriy Zakamulin on the fact that some previous trend-following research by Glabadanidis was done with shoddy results, and that Glabadanidis’s results were only reproducible through instituting lookahead bias.

The following code shows how to reproduce this lookahead bias.

First, the setup of a basic moving average strategy on the S&P 500 index from as far back as Yahoo data will provide.

require(quantmod)
require(xts)
require(TTR)
require(PerformanceAnalytics)

getSymbols('^GSPC', src='yahoo', from = '1900-01-01')
monthlyGSPC <- Ad(GSPC)[endpoints(GSPC, on = 'months')]

# change this line for signal lookback
movAvg <- SMA(monthlyGSPC, 10)

signal <- monthlyGSPC > movAvg
gspcRets <- Return.calculate(monthlyGSPC)

And here is how to institute the lookahead bias.

lookahead <- signal * gspcRets
correct <- lag(signal) * gspcRets

These are the “results”:

compare <- na.omit(cbind(gspcRets, lookahead, correct))
colnames(compare) <- c("S&P 500", "Lookahead", "Correct")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
logRets <- log(cumprod(1+compare))
chart.TimeSeries(logRets, legend.loc='topleft')

Of course, this equity curve is of no use, so here’s one in log scale.

As can be seen, lookahead bias makes a massive difference.

Here are the numerical results:

                            S&P 500  Lookahead   Correct
Annualized Return         0.0740000 0.15550000 0.0695000
Annualized Std Dev        0.1441000 0.09800000 0.1050000
Annualized Sharpe (Rf=0%) 0.5133000 1.58670000 0.6623000
Worst Drawdown            0.5255586 0.08729914 0.2699789
Calmar Ratio              0.1407286 1.78119192 0.2575219

Again, absolutely ridiculous.

Note that when using Return.portfolio (the function in PerformanceAnalytics), that package will automatically apply your weights to the next period’s returns, rather than the current period’s. However, for those writing “simple” backtests that can be quickly done using vectorized operations, an off-by-one error can make all the difference between a backtest in the realm of reasonable and pure nonsense. Should one wish to test for said nonsense when faced with impossible-to-replicate results, the mechanics demonstrated above are the way to do it.

Now, onto other news: I’d like to thank Gerald M for staying on top of one of the Logical Invest strategies–namely, their simple global market rotation strategy outlined in an article from an earlier blog post.

Up until March 2015 (the date of the blog post), the strategy had performed well. However, after said date?

It has been a complete disaster, which, in hindsight, was evident when I passed it through the hypothesis-driven development framework process I wrote about earlier.

So, while there has been a great deal written about not simply throwing away a strategy because of short-term underperformance, and that anomalies such as momentum and value exist because of career risk due to said short-term underperformance, it’s never a good thing when a strategy creates historically large losses, particularly after being published in such a humble corner of the quantitative financial world.

In any case, this was a post demonstrating some mechanics, and an update on a strategy I blogged about not too long ago.

Thanks for reading.

NOTE: I am always interested in hearing about new opportunities which may benefit from my expertise, and am always happy to network. You can find my LinkedIn profile here.

A Book Review of ReSolve Asset Management’s Adaptive Asset Allocation

This post will review the book “Adaptive Asset Allocation: Dynamic Global Portfolios to Profit in Good Times – and Bad” by the people at ReSolve Asset Management. Overall, this book is a definite must-read for those who have never been exposed to the ideas within it. However, when it comes to a solution that can be fully replicated, this book is lacking.

Okay, it’s been a while since I reviewed my last book, DIY Financial Advisor, from the awesome people at Alpha Architect. This book, in my opinion, is set up in a similar sort of format.

This is the structure of the book, and my reviews along with it:

Part 1: Why in the heck you actually need to have a diversified portfolio, and why a diversified portfolio is a good thing. In a world in which there is so much emphasis put on single-security performance, this is certainly something that absolutely must be stated for those not familiar with portfolio theory. It highlights the example of two people–one from Abbott Labs, and one from Enron, who had so much of their savings concentrated in their company’s stock. Mr. Abbott got hit hard and changed his outlook on how to save for retirement, and Mr. Enron was never heard from again. Long story short: a diversified portfolio is good, and a properly diversified portfolio can offset one asset’s zigs with another asset’s zags. This is the key to establishing a stream of returns that will help meet financial goals. Basically, this is your common sense story (humans love being told stories) so as to motivate you to read the rest of the book. It does its job, though for someone like me, it’s more akin to a big “wait for it, wait for it…and there’s the reason why we should read on, as expected”.

Part 2: Something not often brought up in many corners of the quant world (because it’s real life boring stuff) is the importance not only of average returns, but *when* those returns are achieved. Namely, imagine your everyday saver. At the beginning of their careers, they’re taking home less salary and have less money in their retirement portfolio (or speculation portfolio, but the book uses retirement portfolio). As they get into middle age and closer to retirement, they have a lot more money in said retirement portfolio. Thus, strong returns are most vital when there is more cash available *to* the portfolio, and the difference between mediocre returns at the beginning and strong returns at the end of one’s working life as opposed to vice versa is *astronomical* and cannot be overstated. Furthermore, once *in* retirement, strong returns in the early years matter far more than returns in the later years once money is being withdrawn from the portfolio (though I’d hope that a portfolio’s returns can be so strong that one can simply “live off the interest”). Or, put more intuitively: when you have $10,000 in your portfolio, a 20% drawdown doesn’t exactly hurt because you can make more money and put more into your retirement account. But when you’re 62 and have $500,000 and suddenly lose 30% of everything, well, that’s massive. How much an investor wants to avoid such a scenario cannot be overstated. Warren Buffett once said that if you can’t bear to lose 50% of everything, you shouldn’t be in stocks. I really like this part of the book because it shows just how dangerous the ideas of “a 50% drawdown is unavoidable” and other “stay invested for the long haul” refrains are. Essentially, this part of the book makes a resounding statement that any financial adviser keeping his or her clients invested in equities when they’re near retirement age is doing something not very advisable, to put it lightly. In my opinion, those who advise pension funds should especially keep this section of the book in mind, since for some people, the long-term may be coming to an end, and what matters is not only steady returns, but to make sure the strategy doesn’t fall off a cliff and destroy decades of hard-earned savings.

Part 3: This part is also one that is a very important read. First off, it lays out in clear terms that the long-term forward-looking valuations for equities are at rock bottom. That is, the expected forward 15-year returns are very low, using approximately 75 years of evidence. Currently, according to the book, equity valuations imply a *negative* 15-year forward return. However, one thing I *will* take issue with is that while forward-looking long-term returns for equities may be very low, if one believed this chart and only invested in the stock market when forecast 15-year returns were above the long term average, one would have missed out on both the 2003-2007 bull runs, *and* the one since 2009 that’s just about over. So, while the book makes a strong case for caution, readers should also take the chart with a grain of salt in my opinion. However, another aspect of portfolio construction that this book covers is how to construct a robust (assets for any economic environment) and coherent (asset classes balanced in number) universe for implementation with any asset allocation algorithm. I think this bears repeating: universe selection is an extremely important topic in the discussion of asset allocation, yet there is very little discussion about it. Most research/topics simply take some “conventional universe”, such as “all stocks on the NYSE”, or “all the stocks in the S&P 500”, or “the entire set of the 50-60 most liquid futures” without consideration for robustness and coherence. This book is the first source I’ve seen that actually puts this topic under a magnifying glass besides “finger in the air pick and choose”.

Part 4: and here’s where I level my main criticism at this book. For those that have read “Adaptive Asset Allocation: A Primer”, this section of the book is basically one giant copy and paste. It’s all one large buildup to “momentum rank + min-variance optimization”. All well and good, until there’s very little detail beyond the basics as to how the minimum variance portfolio was constructed. Namely, what exactly is the minimum variance algorithm in use? Is it one of the poor variants susceptible to numerical instability inherent in inverting sample covariance matrices? Or is it a heuristic like David Varadi’s minimum variance and minimum correlation algorithm? The one feeling I absolutely could not shake was that this book had a perfect opportunity to present a robust approach to minimum variance, and instead, it’s long on concept, short on details. While the theory of “maximize return for unit risk” is all well and good, the actual algorithm to put that theory into practice is not trivial, with the solutions taught to undergrads and master’s students having some well-known weaknesses. On top of this, one thing that got hammered into my head in the past was that ranking *also* had a weakness at the inclusion/exclusion point. E.G. if, out of ten assets, the fifth asset had a momentum of say, 10.9%, and the sixth asset had a momentum of 10.8%, how are we so sure the fifth is so much better? And while I realize that this book was ultimately meant to be a primer, in my opinion, it would have been a no-objections five-star read if there were an appendix that actually went into some detail on how to go from the simple concepts to a working implementation, with a small numerical example of some algorithms that may address the well-known weaknesses. This doesn’t mean Greek/mathematical jargon. Just an appendix that acknowledged that not every reader is someone only picking up his first or second book about systematic investing, and that some of us are familiar with the “whys” and are more interested in the “hows”. Furthermore, I’d really love to know where the authors of this book got their data to back-date some of these ETFs into the 90s.

Part 5: some more formal research on topics already covered in the rest of the book–namely a section about how many independent bets one can take as the number of assets grow, if I remember it correctly. Long story short? You *easily* get the most bang for your buck among disparate asset classes, such as treasuries of various duration, commodities, developed vs. emerging equities, and so on, as opposed to trying to pick among stocks in the same asset class (though there’s some potential for alpha there…just…a lot less than you imagine). So in case the idea of asset class selection, not stock selection wasn’t beaten into the reader’s head before this point, this part should do the trick. The other research paper is something I briefly skimmed over which went into more depth about volatility and retirement portfolios, though I felt that the book covered this topic earlier on to a sufficient degree by building up the intuition using very understandable scenarios.

So that’s the review of the book. Overall, it’s a very solid piece of writing, and as far as establishing the *why*, it does an absolutely superb job. For those that aren’t familiar with the concepts in this book, this is definitely a must-read, and ASAP.

However, for those familiar with most of the concepts and looking for a detailed “how” procedure, this book does not deliver as much as I would have liked. And I realize that while it’s a bad idea to publish secret sauce, I bought this book in the hope of being exposed to a new algorithm presented in the understandable and intuitive language that the rest of the book was written in, and was left wanting.

Still, that by no means diminishes the impact of the rest of the book. For those who are more likely to be its target audience, it’s a 5/5. For those that wanted some specifics, it still has its gem on universe construction.

Overall, I rate it a 4/5.

Thanks for reading.

A First Attempt At Applying Ensemble Filters

This post will outline a first failed attempt at applying the ensemble filter methodology to come up with a weighting process on SPY that should, in theory, shift conviction gradually between a bull market, a bear market, and anywhere in between. This is a follow-up to this blog post.

So, my thinking went like this: in a bull market, as one transitions from responsiveness to smoothness, responsive filters should sit above smooth filters, and vice versa, as there’s generally a trade-off between the two. In fact, in my particular formulation, the smoothness quantity, the square root of the EMA of squared returns (inspired by Basel’s measure of volatility, which is the square root of the 18-day EMA of squared returns), punishes any deviation from a flat line, while the responsiveness quantity punishes any deviation from the time series of the realized prices. Whether these are the two best measures of smoothness and responsiveness is a topic I’d certainly appreciate feedback on.

In any case, the idea off the top of my head was this: in addition to having a way of weighing multiple filters by their responsiveness (deviation from price action) and smoothness (deviation from a flat line), by taking the sum of the signs of the differences between each filter and its neighbor on the responsiveness-to-smoothness spectrum, given enough ensemble filters (say, 101, so there are 100 differences), one would obtain a way to move from full conviction of a bull market, to a bear market, to anything in between, and have this be a smooth process that doesn’t have schizophrenic swings of conviction.

Here’s the code to do this on SPY from inception to 2003:

require(TTR)
require(quantmod)
require(PerformanceAnalytics)

getSymbols('SPY', from = '1990-01-01')

smas <- list()
for(i in 2:250) {
  smas[[i]] <- SMA(Ad(SPY), n = i)
}
smas <- do.call(cbind, smas)

xtsApply <- function(x, FUN, n, ...) {
  out <- xts(apply(x, 2, FUN, n = n, ...), order.by=index(x))
  return(out)
}

sumIsNa <- function(x){
  return(sum(is.na(x)))
}

ensembleFilter <- function(data, filters, n = 20, conviction = 1, emphasisSmooth = .51) {
  
  # smoothness error
  filtRets <- Return.calculate(filters)
  sqFiltRets <- filtRets * filtRets * 100 #multiply by 100 to prevent instability
  smoothnessError <- sqrt(xtsApply(sqFiltRets, EMA, n = n))
  
  # responsiveness error
  repX <- xts(matrix(data, nrow = nrow(filters), ncol=ncol(filters)), 
              order.by = index(filters))
  dataFilterReturns <- repX/filters - 1
  sqDataFilterQuotient <- dataFilterReturns * dataFilterReturns * 100 #multiply by 100 to prevent instability
  responseError <- sqrt(xtsApply(sqDataFilterQuotient, EMA, n = n))
  
  # place smoothness and responsiveness errors on same notional quantities
  meanSmoothError <- rowMeans(smoothnessError)
  meanResponseError <- rowMeans(responseError)
  ratio <- meanSmoothError/meanResponseError
  ratio <- xts(matrix(ratio, nrow=nrow(filters), ncol=ncol(filters)),
               order.by=index(filters))
  responseError <- responseError * ratio
  
  # for each term in emphasisSmooth, create a separate filter
  ensembleFilters <- list()
  for(term in emphasisSmooth) {
    
    # compute total errors, raise them to a conviction power, find the normalized inverse
    totalError <- smoothnessError * term + responseError * (1-term)
    totalError <- totalError ^ conviction
    invTotalError <- 1/totalError
    normInvError <- invTotalError/rowSums(invTotalError)
    
    # ensemble filter is the sum of candidate filters in proportion
    # to the inverse of their total error
    tmp <- xts(rowSums(filters * normInvError), order.by=index(data))
    
    #NA out time in which one or more filters were NA
    initialNAs <- apply(filters, 1, sumIsNa) 
    tmp[initialNAs > 0] <- NA
    tmpName <- paste("emphasisSmooth", term, sep="_")
    colnames(tmp) <- tmpName
    ensembleFilters[[tmpName]] <- tmp
  }
  
  # compile the filters
  out <- do.call(cbind, ensembleFilters)
  return(out)
}

t1 <- Sys.time()
filts <- ensembleFilter(Ad(SPY), smas, n = 20, conviction = 2, emphasisSmooth = seq(0, 1, by=.01))
t2 <- Sys.time()

par(mfrow=c(3,1))
filtDiffs <- sign(filts[,1:100] - filts[,2:101])
sumDiffs <- xts(rowSums(filtDiffs), order.by=index(filtDiffs))

plot(Ad(SPY)["::2003"])
plot(sumDiffs["::2003"])
plot(diff(sumDiffs["::2003"]))

And here’s the very underwhelming result:

Essentially, while I expected to see changes in conviction of maybe 20 at most, instead, my indicator of sum of sign differences did exactly as I had hoped it wouldn’t, which is to be a very binary sort of mechanic. My intuition was that between an “obvious bull market” and an “obvious bear market” that some differences would be positive, some negative, and that they’d net each other out, and the conviction would be zero. Furthermore, that while any individual crossover is binary, all one hundred signs being either positive or negative would be a more gradual process. Apparently, this was not the case. To continue this train of thought later, one thing to try would be an all-pairs sign difference. Certainly, I don’t feel like giving up on this idea at this point, and, as usual, feedback would always be appreciated.
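
For the curious, here is a minimal sketch of the mechanics of that all-pairs idea; allPairsSignSum is a name of my own invention, it is untested as a signal, and with 101 filters it loops over 5,050 pairs, so it is not fast.

# sum of signs of differences across all pairs of ensemble filters, not just neighbors
allPairsSignSum <- function(filters) {
  pairs <- combn(ncol(filters), 2) # all pairs of filter columns
  pairDiffs <- lapply(1:ncol(pairs), function(j) {
    sign(filters[, pairs[1, j]] - filters[, pairs[2, j]])
  })
  pairDiffs <- do.call(cbind, pairDiffs)
  out <- xts(rowSums(pairDiffs), order.by = index(filters))
  return(out)
}

# allPairs <- allPairsSignSum(filts)
# plot(allPairs["::2003"])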

Thanks for reading.

NOTE: I am currently consulting in an analytics capacity in downtown Chicago. However, I am also looking for collaborators that wish to pursue interesting trading ideas. If you feel my skills may be of help to you, let’s talk. You can email me at ilya.kipnis@gmail.com, or find me on my LinkedIn here.

How well can you scale your strategy?

This post will deal with a quick, finger in the air way of seeing how well a strategy scales–namely, how sensitive it is to latency between signal and execution, using a simple volatility trading strategy as an example. The signal will be the VIX/VXV ratio trading VXX and XIV, an idea I got from Volatility Made Simple’s amazing blog, particularly this post. The three signals compared will be the “magical thinking” signal (observe the close, buy the close, named from the ruleOrderProc setting in quantstrat), buy on next-day open, and buy on next-day close.

Let’s get started.

require(downloader)
require(PerformanceAnalytics)
require(IKTrading)
require(TTR)

download("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vxvdailyprices.csv", 
         destfile="vxvData.csv")
download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")
download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
         destfile="longVXX.txt") #requires downloader package
getSymbols('^VIX', from = '1990-01-01')


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
vixVxv <- Cl(VIX)/Cl(vxv)

vxxCloseRets <- Return.calculate(Cl(vxx))
vxxOpenRets <- Return.calculate(Op(vxx))
xivCloseRets <- Return.calculate(Cl(xiv))
xivOpenRets <- Return.calculate(Op(xiv))

vxxSig <- vixVxv > 1
xivSig <- 1-vxxSig

magicThinking <- vxxCloseRets * lag(vxxSig) + xivCloseRets * lag(xivSig)
nextOpen <- vxxOpenRets * lag(vxxSig, 2) + xivOpenRets * lag(xivSig, 2)
nextClose <- vxxCloseRets * lag(vxxSig, 2) + xivCloseRets * lag(xivSig, 2)
tradeWholeDay <- (nextOpen + nextClose)/2

compare <- na.omit(cbind(magicThinking, nextOpen, nextClose, tradeWholeDay))
colnames(compare) <- c("Magic Thinking", "Next Open", 
                       "Next Close", "Execute Through Next Day")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), 
      maxDrawdown(compare), CalmarRatio(compare))

par(mfrow=c(1,1))
chart.TimeSeries(log(cumprod(1+compare), base = 10), legend.loc='topleft', ylab='log base 10 of additional equity',
                 main = 'VIX vx. VXV different execution times')

So here’s the run-through. In addition to the magical thinking strategy (observe the close, buy that same close), I tested three other variants–a variant which transacts the next open, a variant which transacts the next close, and the average of those two. Effectively, I feel these three could give a sense of a strategy’s performance under more realistic conditions–that is, how well does the strategy perform if transacted throughout the day, assuming you’re managing a sum of money too large to just plow into the market in the closing minutes (and if you hope to get rich off of trading, you will have a larger sum of money than the amount you can apply magical thinking to). Ideally, I’d use VWAP pricing, but as that’s not available for free anywhere I know of, that means that readers can’t replicate it even if I had such data.

In any case, here are the results.

Equity curves:

Log scale (for Mr. Tony Cooper and others):

Stats:

                          Magic Thinking Next Open Next Close Execute Through Next Day
Annualized Return               0.814100 0.8922000  0.5932000                 0.821900
Annualized Std Dev              0.622800 0.6533000  0.6226000                 0.558100
Annualized Sharpe (Rf=0%)       1.307100 1.3656000  0.9529000                 1.472600
Worst Drawdown                  0.566122 0.5635336  0.6442294                 0.601014
Calmar Ratio                    1.437989 1.5831686  0.9208586                 1.367510

My reaction? The execute on next day’s close performance being vastly lower than the other configurations (and that deterioration occurring in the most recent years) essentially means that the fills will have to come pretty quickly at the beginning of the day. While the strategy seems somewhat scalable through the lens of this finger-in-the-air technique, in my opinion, if the first full day of possible execution after signal reception will tank a strategy from a 1.44 Calmar to a .92, that’s a massive drop-off, after holding everything else constant. In my opinion, I think this is quite a valid question to ask anyone who simply sells signals, as opposed to manages assets. Namely, how sensitive are the signals to execution on the next day? After all, unless those signals come at 3:55 PM, one is most likely going to be getting filled the next day.

Now, while this strategy is a bit of a tomato can in terms of how good volatility trading strategies can get (they can get a *lot* better in my opinion), I think it made for a simple little demonstration of this technique. Again, a huge thank you to Mr. Helmuth Vollmeier for so kindly keeping up his dropbox all this time for the volatility data!

Thanks for reading.

NOTE: I am currently contracting in a data science capacity in Chicago. You can email me at ilya.kipnis@gmail.com, or find me on my LinkedIn here. I’m always open to beers after work if you’re in the Chicago area.

NOTE 2: Today, on October 21, 2015, if you’re in Chicago, there’s a Chicago R Users Group conference at Jaks Tap at 6:00 PM. Free pizza, networking, and R, hosted by Paul Teetor, who’s a finance guy. Hope to see you there.

A Review of DIY Financial Advisor, by Gray, Vogel, and Foulke

This post will review the DIY Financial Advisor book, which I thought was a very solid read, and especially pertinent to those who are more beginners at investing (especially systematic investing). While it isn’t exactly perfect, it’s about as excellent a primer on investing as one will find out there that is accessible to the lay-person, in my opinion.

Okay, so, official announcement: I am starting a new section of posts called “Reviews”, which I received from being asked to review this book. Essentially, I believe that anyone that’s trying to create a good product that will help my readers deserves a spotlight, and I myself would like to know what cool and innovative financial services/products are coming about. For those who’d like exposure on this site, if you’re offering an affordable and innovative product or service that can be of use to an audience like mine, reach out to me.

Anyway, this past weekend, while relocating to Chicago, I had the pleasure of reading Alpha Architect’s (Gray, Vogel, Foulke) book “DIY financial advisor”, essentially making a case as to why a retail investor should be able to outperform the expert financial advisers that charge several percentage points a year to manage one’s wealth.
The book starts off by citing various famous studies showing how many subtle subconscious biases and fallacies human beings are susceptible to (there are plenty), such as falling for complexity, overconfidence, and so on—none of which emotionless computerized systems and models suffer from. Furthermore, it also goes on to provide several anecdotal examples of experts gone bust, such as Victor Niederhoffer, who blew up not once, but twice (and rumor has it he blew up a third time), and studies showing that systematic data analysis has shown to beat expert recommendations time and again—including when experts were armed with the outputs of the models themselves. Throw in some quotes from Jim Simons (CEO of the best hedge fund in the world, Renaissance Technologies), and the first part of the book can be summed up like this:

1) Your rank and file human beings are susceptible to many subconscious biases.

2) Don’t trust the recommendations of experts. Even simpler models have systematically outperformed said “experts”. Some experts have even blown up, multiple times even (E.G. Victor Niederhoffer).

3) Building an emotionless system will keep these human fallacies from wrecking your investment portfolio.

4) Sticking to a well thought-out system is a good idea, even when it’s uncomfortable—such as when a marine has to wear a Kevlar helmet, hold extra ammo, and extra water in a 126 degree Iraq desert (just ask Dr./Captain Gray!).

This is all well and good—essentially making a very strong case for why you should build a system, and let the system do the investment allocation heavy lifting for you.
Next, the book goes into the FACTS acronym of different manager selection—fees, access, complexity, taxes, and search. Fees: how much does it cost to have someone manage your investments? Pretty self-explanatory here. Access: how often can you pull your capital (EG a hedge fund that locks you up for a year especially when it loses money should be run from, and fast). Complexity: do you understand how the investments are managed? Taxes: long-term capital gains, or shorter-term? Generally, very few decent systems will be holding for a year or more, so in my opinion, expect to pay short-term taxes. Search: that is, how hard is it to find a good candidate? Given the sea of hedge funds (especially those with short-term track records, or only track records managing tiny amounts of money), how hard is it to find a manager who’ll beat the benchmark after fees? Answer: very difficult. In short, all the glitzy sophisticated managers you hear about? Far from a terrific deal, assuming you can even find one.

Continuing, the book goes into the two separate anomalies that should form the foundation for any equity investment strategy: value and momentum. The value system essentially goes long the top decile of the EBIT/TEV metric within the top 60% of market-cap companies traded on the NYSE, rebalanced every year. In my opinion, this is a system that is difficult for the average investor to implement, both in terms of managing the data process and in terms of having enough capital to allocate across all the various companies. That is, with a small amount of capital, you might not be able to get that equal-weight allocation across a hundred separate companies. However, with the QVAL and IVAL ETFs (from Alpha Architect, and full disclosure, I have some of my IRA invested there), I think the systematic value component can be readily accessed through those two funds.

The momentum strategy, however, is much simpler. There’s a momentum component and a moving average component. There’s some math showing that these two signals are related (the gap between the latest price and its moving average works out to an average of momentum signals over different lookbacks), and the ROBUST system that this book proposes is a combination of a momentum signal and an SMA signal. Here’s how it works. Assume you have $100,000 and 5 assets to invest in, for the sake of easy math. Divide the portfolio into a $50,000 momentum component and a $50,000 moving average component. Every month, allocate $10,000 to each of the five assets with a positive 12-month momentum, or stay in cash for that asset’s slot. Next, allocate another $10,000 to each of the five assets whose price is above its 12-month simple moving average. It’s that simple, and given the recommended ETFs (commodities, bonds, foreign stocks, domestic stocks, real estate), it’s a system that most investors can rather easily implement, especially if they’ve been following my blog.
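
For those who want to see the mechanics, here’s a minimal sketch of the two signals in R. This is my own illustration, not code from the book: monthlyPrices is a hypothetical xts object of month-end adjusted closes for the five recommended ETFs.

require(quantmod) # loads xts/zoo

# monthlyPrices: hypothetical xts of month-end adjusted closes for the 5 ETFs
momentum12 <- monthlyPrices / lag(monthlyPrices, 12) - 1                    # trailing 12-month return
sma12 <- rollapply(monthlyPrices, width = 12, FUN = mean, align = "right")  # 12-month simple moving average

momSignal <- (momentum12 > 0) * 1        # momentum sleeve: hold the asset if 12-month momentum is positive
smaSignal <- (monthlyPrices > sma12) * 1 # SMA sleeve: hold the asset if price is above its 12-month SMA

# half the capital to each sleeve, split equally across the five asset slots;
# a zero signal means that slot sits in cash for the month
weights <- 0.5 * momSignal / ncol(monthlyPrices) + 0.5 * smaSignal / ncol(monthlyPrices)
weights <- na.omit(weights) # drop the first year of NAs from the 12-month lookback

Feeding these weights (lagged by a month to avoid lookahead) into Return.portfolio along with the corresponding monthly returns would produce the backtest.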

For those interested in more market anomalies (especially value anomalies), there’s a chapter which contains a large selection of academic papers that go back and forth on the efficacies of various anomalies and how well they can predict returns. So, for those interested, it’s there.

The book concludes with some potential pitfalls which a DIY investor should be aware of when running his or her own investments, which is essentially another psychology chapter.
Overall, in my opinion, this book makes a fairly solid case for why a retail investor should take the plunge and manage his or her own investments in a systematic fashion: flesh-and-blood advisers are prone to human error, usually charge an unjustifiably high fee on top of it, and then deliver lackluster performance. The book recommends a couple of simple systems. One of them (the ROBUST momentum system) can be followed by anyone who can copy and paste some rudimentary R; the other (the value system) is one I think most stay-at-home investors should shy away from, simply because of the difficulty of dealing with all that data, and instead defer to either or both of Alpha Architect’s two ETFs.
In terms of momentum, there are also the ALFA, GMOM, and MTUM tickers (do your homework, I’m long ALFA) for various differing exposures to this anomaly, for those who don’t want to pay the constant transaction costs or incur the short-term taxes of running their own momentum strategy.

In terms of where this book comes up short, here are my two cents:
Tested over nearly a century, the risk-reward tradeoffs of these systems can still be frightening at times. That is, for a system that delivers a CAGR of around 15%, are you still willing to risk a 50% drawdown? Keep in mind that a 50% drawdown requires a 100% gain just to get back to even, which at a 15% CAGR is roughly five years of recovery (1.15^5 ≈ 2). Losing half of everything on the cusp of retirement sounds very scary, no matter the long-term upside.

Furthermore, this book keeps things simple, with an intended audience of mom-and-pop investors (who have historically underperformed the S&P 500!). While I think it accomplishes this, I think there could have been value added, even for such individuals, by outlining some ETFs or simple ETF/ETN trading systems that can diversify a portfolio. For instance, while volatility trading sounds very scary, in the context of providing diversification it may be worth looking into: 2008 was a banner year for most volatility trading strategies that managed to go long and stay long volatility through the crisis. I myself still have very little knowledge of all the various exotic ETFs popping up left and right, and I would look very favorably on a reputable source that could provide a tour of the ones that add respectable return diversification to a basic equities/fixed-income/real-asset ETF portfolio, as outlined in one of the chapters in this book and in others, such as Meb Faber’s Global Asset Allocation (a very cheap ebook).

One last thing that I’d like to touch on—this book is written in a very accessible style, and even the math (yes, math!) is understandable for someone that’s passed basic algebra. It’s something that can be read in one or two sittings, and I’d recommend it to anyone that’s a beginner in investing or systematic investing.

Overall, I give this book a solid 4/5 stars. It’s simple, easily understood, and brings systematic investing to the masses in a way that many people can replicate at home. However, I would have liked to see some beyond-the-basics content as well given the plethora of different ETFs.

Thanks for reading.

Hypothesis-Driven Development Part V: Stop-Loss, Deflating Sharpes, and Out-of-Sample

This post will demonstrate a stop-loss rule inspired by Andrew Lo’s paper “When Do Stop-Loss Rules Stop Losses?”. Furthermore, it will demonstrate how to deflate a Sharpe ratio to account for the total number of trials conducted, as presented in a paper written by David H. Bailey and Marcos Lopez De Prado. Lastly, the strategy will be tested on the out-of-sample ETFs, rather than the mutual funds that have been used up until now (which actually cannot be traded more than once every two months, and have been used simply for the purpose of demonstration).

First, however, I’d like to fix some code from the last post and append some results.

A reader asked about displaying the max drawdown for each of the previous volatility-controlled rule-testing variants, and Brian Peterson also recommended displaying max leverage, both of which this post will provide.

Here’s the updated rule backtest code:

ruleBacktest <- function(returns, nMonths, dailyReturns,
                         nSD = 126, volTarget = .1) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- na.omit(xts(nMonthAverage, order.by = index(returns)))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, order.by = index(nMonthAverage))
  selection <- (nMonthAvgRank == 5) * 1 # select the asset with the highest average performance
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  constantVol <- volTarget/(runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months"),])
  wts <- cbind(monthlyLeverage, 1 - monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0)
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  maxLeverage <- max(monthlyLeverage, na.rm = TRUE)
  return(list(out, maxLeverage))
}

t1 <- Sys.time()
allPermutations <- list()
allDDs <- list()
leverages <- list()
for(i in seq(21, 252, by = 21)) {
  monthVariants <- list()
  ddVariants <- list()
  leverageVariants <- list()
  for(j in 1:12) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial[[1]])[3,]
    dd <- maxDrawdown(trial[[1]])
    monthVariants[[j]] <- sharpe
    ddVariants[[j]] <- dd
    leverageVariants[[j]] <- trial[[2]]
  }
  allPermutations[[i]] <- do.call(c, monthVariants)
  allDDs[[i]] <- do.call(c, ddVariants)
  leverages[[i]] <- do.call(c, leverageVariants)
}
allPermutations <- do.call(rbind, allPermutations)
allDDs <- do.call(rbind, allDDs)
leverages <- do.call(rbind, leverages)
t2 <- Sys.time()
print(t2-t1)

Drawdowns:

Leverage:
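
One housekeeping note before the regressions: the meltedDDs, meltedLeverage, and meltedSharpes data frames used below aren’t constructed in the snippet above. Here’s a minimal sketch of how one might build them from the allDDs, leverages, and allPermutations matrices; this is my own reconstruction (assuming the reshape2 package), with rows labeled by the volatility formation period and columns by the momentum formation period.

require(reshape2)

# label rows (volatility formation periods, in days) and columns (momentum formation periods, in months)
dimnames(allDDs) <- dimnames(leverages) <- dimnames(allPermutations) <-
  list(seq(21, 252, by = 21), 1:12)

meltedDDs <- melt(allDDs)
colnames(meltedDDs) <- c("volFormation", "momentumFormation", "MaxDD")

meltedLeverage <- melt(leverages)
colnames(meltedLeverage) <- c("volFormation", "momentumFormation", "MaxLeverage")

meltedSharpes <- melt(allPermutations)
colnames(meltedSharpes) <- c("volFormation", "momentumFormation", "Sharpe")

# (if the formation columns come through as factors, convert them to numeric before regressing)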

Here are the results presented as a hypothesis test–a linear regression of drawdowns and leverage against momentum formation period and volatility calculation period:

ddLM <- lm(meltedDDs$MaxDD~meltedDDs$volFormation + meltedDDs$momentumFormation)
summary(ddLM)

Call:
lm(formula = meltedDDs$MaxDD ~ meltedDDs$volFormation + meltedDDs$momentumFormation)

Residuals:
Min 1Q Median 3Q Max
-0.08022 -0.03434 -0.00135 0.02911 0.20077

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.240146 0.010922 21.99 < 2e-16 ***
meltedDDs$volFormation -0.000484 0.000053 -9.13 6.5e-16 ***
meltedDDs$momentumFormation 0.001533 0.001112 1.38 0.17
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.0461 on 141 degrees of freedom
Multiple R-squared: 0.377, Adjusted R-squared: 0.368
F-statistic: 42.6 on 2 and 141 DF, p-value: 3.32e-15

levLM <- lm(meltedLeverage$MaxLeverage~meltedLeverage$volFormation + meltedDDs$momentumFormation)
summary(levLM)

Call:
lm(formula = meltedLeverage$MaxLeverage ~ meltedLeverage$volFormation +
meltedDDs$momentumFormation)

Residuals:
Min 1Q Median 3Q Max
-0.9592 -0.5179 -0.0908 0.3679 3.1022

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.076870 0.164243 24.82 <2e-16 ***
meltedLeverage$volFormation -0.009916 0.000797 -12.45 <2e-16 ***
meltedDDs$momentumFormation 0.009869 0.016727 0.59 0.56
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.693 on 141 degrees of freedom
Multiple R-squared: 0.524, Adjusted R-squared: 0.517
F-statistic: 77.7 on 2 and 141 DF, p-value: <2e-16

Easy interpretation here: the shorter-term volatility estimates are unstable due to the one-asset rotation nature of the system. Particularly silly is the one-month volatility estimate. Imagine the system just switched from the lowest-volatility instrument to the highest. It would then take on excessive leverage and get blown up that month for no particularly good reason. A longer-term volatility estimate does much better for this system. So, while the Sharpe is generally improved either way, the results become far more palatable when using a more stable volatility calculation, which keeps maximum leverage to about 2 when targeting an annualized volatility of 10%. Also of note: the period used to compute volatility matters far more than the momentum formation period when it comes to volatility targeting, which lends credence (at least in this case) to the many people who say “the individual signal rules matter far less than the position-sizing rules!”. According to some, position sizing is often a way for people to mask marginally effective (read: bad) strategies with a separate layer to create a better result. I’m not sure which side of that debate (assuming there even is one) I fall on, but for what it’s worth, there it is.

Moving on, I want to test one more rule, inspired by Andrew Lo’s stop-loss paper. Essentially, the way it works (in my interpretation) is this: compute a running standard deviation of returns, and if the drawdown exceeds some multiple of that running standard deviation, sit out for some fixed period of time, then re-enter. According to Andrew Lo, stop-losses help momentum strategies, so it seems as good a rule to test as any.

However, rather than test different permutations of the stop rule on all 144 prior combinations of volatility-adjusted configurations, I’m going to use an ensemble strategy from the section I just finished, inspired by a conversation I had with Adam Butler, the CEO of ReSolve Asset Management, who stated that “we know momentum exists, but we don’t know the perfect way to measure it”: take all 12 momentum formation periods with a 252-day rolling annualized volatility calculation, and equal-weight them every month.

Here are the base case results from that trial (bringing our total to 169).

strat <- list()
for(i in 1:12) {
  strat[[i]] <- ruleBacktest(returns = monthRets, nMonths = i, dailyReturns = sample, nSD = 252)[[1]]
}
strat <- do.call(cbind, strat)
strat <- Return.portfolio(R = na.omit(strat), rebalance_on = "months")

rbind(table.AnnualizedReturns(strat), maxDrawdown(strat), CalmarRatio(strat))

With the following result:

portfolio.returns
Annualized Return 0.12230
Annualized Std Dev 0.10420
Annualized Sharpe (Rf=0%) 1.17340
Worst Drawdown 0.09616
Calmar Ratio 1.27167

Of course, also worth noting is that the annualized standard deviation is indeed very close to 10%, even with the ensemble. And it’s nice that the Sharpe ratio is past 1. That said, given that these are mutual funds being backtested, these results are optimistic due to the unrealistic execution assumptions (they can’t be traded sooner than once every *two* months).

Anyway, let’s introduce our stop-loss rule, inspired by Andrew Lo’s paper.

loStopLoss <- function(returns, sdPeriod = 12, sdScaling = 1, sdThresh = 1.5, cooldown = 3) {
  stratRets <- list()
  count <- 1
  stratComplete <- FALSE
  originalRets <- returns
  ddThresh <- -runSD(returns, n = sdPeriod) * sdThresh * sdScaling
  while(!stratComplete) {
    retDD <- PerformanceAnalytics:::Drawdowns(returns)
    DDbreakthrough <- retDD < ddThresh & lag(retDD) > ddThresh
    firstBreak <- which.max(DDbreakthrough) # first threshold breakthrough; if 1, we have no breakthrough
    # the above line is unintuitive since this is a boolean vector, so it returns the first value of TRUE
    if(firstBreak > 1) { # we have a drawdown breakthrough if this is true
      stratRets[[count]] <- returns[1:firstBreak,] # subset returns through our threshold breakthrough
      nextPoint <- firstBreak + cooldown + 1 # next point of re-entry is the point after the cooldown period
      if(nextPoint <= (nrow(returns)-1)) { # if we can re-enter, subset the returns and return to top of loop
        returns <- returns[nextPoint:nrow(returns),]
        ddThresh <- ddThresh[nextPoint:nrow(ddThresh),]
        count <- count + 1
      } else { # re-entry point is after data exhausted, end strategy
        stratComplete <- TRUE
      }
    } else { # there are no more critical drawdown breakthroughs, end strategy
      stratRets[[count]] <- returns
      stratComplete <- TRUE
    }
  }
  stratRets <- do.call(rbind, stratRets) # combine returns
  expandRets <- cbind(stratRets, originalRets) # account for all the days we missed
  expandRets[is.na(expandRets[,1]), 1] <- 0 # cash positions will be zero
  rets <- expandRets[,1]
  colnames(rets) <- paste(cooldown, sdThresh, sep="_")
  return(rets)
}

Essentially, the way it works is like this: the function computes all the drawdowns for a return series, along with its running standard deviation (non-annualized–if you want to annualize it, change the sdScaling parameter to something like sqrt(12) for monthly or sqrt(252) for daily data). Next, it looks for when the drawdown crossed a critical threshold, then cuts off that portion of returns and standard deviation history, and moves ahead in history by the cooldown period specified, and repeats. Most of the code is simply dealing with corner cases (is there even a time to use the stop rule? What about iterating when there isn’t enough data left?), and then putting the results back together again.

In any case, for the sake of simplicity, this function doesn’t use two different time scales (IE compute volatility using daily data, make decisions monthly), so I’m sticking with using a 12-month rolling volatility, as opposed to 252 day rolling volatility multiplied by the square root of 21.
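
Before the full parameter sweep, here’s a quick usage example of the function (arbitrary parameters, purely illustrative):

# single run: 1.5-sigma drawdown threshold on a 12-month rolling standard deviation,
# with a 3-month cooldown before re-entering
singleStop <- loStopLoss(returns = strat, sdPeriod = 12, sdThresh = 1.5, cooldown = 3)
table.AnnualizedReturns(cbind(strat, singleStop))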

Finally, here are another 54 runs to see if Andrew Lo’s stop-loss rule works here. Essentially, the intuition behind this is that if the strategy breaks down, it’ll continue to break down, so it would be prudent to just turn it off for a little while.

Here are the trial runs:


threshVec <- seq(0, 2, by = .25)
cooldownVec <- c(1:6)
sharpes <- list()
params <- expand.grid(threshVec, cooldownVec)
for(i in 1:nrow(params)) {
  configuration <- loStopLoss(returns = strat, sdThresh = params[i, 1],
                              cooldown = params[i, 2])
  sharpes[[i]] <- table.AnnualizedReturns(configuration)[3,]
}
sharpes <- do.call(c, sharpes)

loStoplossFrame <- cbind(params, sharpes)
loStoplossFrame$improvement <- loStoplossFrame[,3] - table.AnnualizedReturns(strat)[3,]

colnames(loStoplossFrame) <- c("Threshold", "Cooldown", "Sharpe", "Improvement")

And a plot of the results.

require(ggplot2)

ggplot(loStoplossFrame, aes(x = Threshold, y = Cooldown, fill = Improvement)) +
  geom_tile() + scale_fill_gradient2(high = "green", mid = "yellow", low = "red", midpoint = 0)

Result:

At this level, and at this frequency (retaining the monthly decision-making process), the stop-loss rule does essentially nothing to improve the risk-reward trade-off in the best-case scenarios, and in most scenarios, it simply hurts. 54 trials down the drain, bringing us up to 223 trials. So, what does the final result look like?

charts.PerformanceSummary(strat)

Here’s the final in-sample equity curve, and the first one featured in this entire series. That’s by design, and a *feature* of hypothesis-driven development: playing whack-a-mole with equity curve bumps is a textbook case of overfitting. So, without further ado:

And now we can see why stop-loss rules generally didn’t add any value to this strategy. Simply, it had very few periods of sustained losses at the monthly frequency, and thus, very little opportunity for a stop-loss rule to add value. Sure, the occasional negative month crept in, but there was no period of sustained losses. Furthermore, Yahoo Finance may not have perfect fidelity on dividends on mutual funds from the late 90s to early 2000s, so the initial flat performance may also be a rather conservative estimate on the strategy’s performance (then again, as I stated before, using mutual funds themselves is optimistic given the unrealistic execution assumptions, so maybe it cancels out). Now, if this equity curve were to be presented without any context, one may easily question whether or not it was curve-fit. To an extent, one can argue that the volatility computation period may be optimized, though I’d hardly call a 252-day (one-year) rolling volatility estimate a curve-fit.

Next, I’d like to introduce another concept on this blog that I’ve seen colloquially addressed in other parts of the quantitative blogging space, particularly by Mike Harris of Price Action Lab: multiple hypothesis testing, and the need to correct for it.

Luckily for that, Drs. David H. Bailey and Marcos Lopez De Prado wrote a paper to address just that. Also, I’d like to note one very cool thing about this paper: it actually has a worked-out numerical example! In my opinion, there are very few things as helpful as showing a simple result that transforms a collection of mathematical symbols into a result to demonstrate what those symbols actually mean in the span of one page. Oh, and it also includes *code* in the appendix (albeit Python — even though, you know, R is far more developed. If someone can get Marcos Lopez De Prado to switch to R–aka the better research language, that’d be a godsend!).

In any case, here’s the formula for the deflated Sharpe ratio, implemented straight from the paper.
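
As a reference for what the code below computes, here’s my own rendering of the two pieces, so treat the notation as an approximation of the paper’s rather than a quotation from it. First, the expected maximum Sharpe ratio under the null of no skill, given $N$ trials whose Sharpe ratios have variance $V$:

$$SR_0 = \sqrt{V}\left[(1-\gamma)\,Z^{-1}\!\left(1-\frac{1}{N}\right) + \gamma\,Z^{-1}\!\left(1-\frac{1}{N}e^{-1}\right)\right]$$

and then the deflated Sharpe ratio itself, for a candidate strategy with per-period Sharpe ratio $\widehat{SR}$, skewness $\gamma_3$, non-excess kurtosis $\gamma_4$, and $T$ return observations:

$$DSR = Z\!\left[\frac{\left(\widehat{SR} - SR_0\right)\sqrt{T-1}}{\sqrt{1 - \gamma_3\,\widehat{SR} + \frac{\gamma_4-1}{4}\,\widehat{SR}^{\,2}}}\right]$$

where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant and $Z$ is the standard normal CDF ($Z^{-1}$ its inverse). Since the Sharpe ratios I collect are annualized, the code divides them by sqrt(periodsInYear), and scales the variance of the trial Sharpes by 1/periodsInYear, to put everything on a per-period basis.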

deflatedSharpe <- function(sharpe, nTrials, varTrials, skew, kurt, numPeriods, periodsInYear) {
  emc <- .5772 # Euler-Mascheroni constant
  sr0_term1 <- (1 - emc) * qnorm(1 - 1/nTrials)
  sr0_term2 <- emc * qnorm(1 - 1/nTrials * exp(-1))
  sr0 <- sqrt(varTrials * 1/periodsInYear) * (sr0_term1 + sr0_term2)

  numerator <- (sharpe/sqrt(periodsInYear) - sr0) * sqrt(numPeriods - 1)

  skewnessTerm <- 1 - skew * sharpe/sqrt(periodsInYear)
  kurtosisTerm <- (kurt - 1)/4 * (sharpe/sqrt(periodsInYear))^2

  denominator <- sqrt(skewnessTerm + kurtosisTerm)

  result <- pnorm(numerator/denominator)
  pval <- 1 - result
  return(pval)
}

The inputs are the strategy’s Sharpe ratio, the number of backtest runs, the variance of the Sharpe ratios of those backtest runs, the skewness of the candidate strategy, its non-excess kurtosis, the number of periods in the backtest, and the number of periods in a year. Unlike the De Prado paper, I choose to return the p-value (E.G. 1 minus the probability that pnorm returns), so a smaller value indicates greater significance.

Let’s collect all our Sharpe ratios now.

allSharpes <- c(as.numeric(table.AnnualizedReturns(sigBoxplots)[3,]),
                meltedSharpes$Sharpe,
                as.numeric(table.AnnualizedReturns(strat)[3,]),
                loStoplossFrame$Sharpe)

And now, let’s plug and chug!

stratSignificant <- deflatedSharpe(sharpe = as.numeric(table.AnnualizedReturns(strat)[3,]),
                                   nTrials = length(allSharpes), varTrials = var(allSharpes),
                                   skew = as.numeric(skewness(strat)), kurt = as.numeric(kurtosis(strat)) + 3,
                                   numPeriods = nrow(strat), periodsInYear = 12)

And the result!

> stratSignificant
[1] 0.01311

Success! At least at the 5% level. At the 1% level (or any stricter level), however, the result no longer passes.

So, one last thing! Out-of-sample testing on ETFs (and mutual funds during the ETF burn-in period)!

symbols2 <- c("CWB", "JNK", "TLT", "SHY", "PCY")
getSymbols(symbols2, from = '1900-01-01')
prices2 <- list()
for(tmp in symbols2) {
  prices2[[tmp]] <- Ad(get(tmp))
}
prices2 <- do.call(cbind, prices2)
colnames(prices2) <- substr(colnames(prices2), 1, 3)
returns2 <- na.omit(Return.calculate(prices2))

monthRets2 <- apply.monthly(returns2, Return.cumulative)

oosStrat <- list()
for(i in 1:12) {
  oosStrat[[i]] <- ruleBacktest(returns = monthRets2, nMonths = i, dailyReturns = returns2, nSD = 252)[[1]]
}
oosStrat <- do.call(cbind, oosStrat)
oosStrat <- Return.portfolio(R = na.omit(oosStrat), rebalance_on = "months")

symbols <- c("CNSAX", "FAHDX", "VUSTX", "VFISX", "PREMX")
getSymbols(symbols, from = '1900-01-01')
prices <- list()
for(symbol in symbols) {
  prices[[symbol]] <- Ad(get(symbol))
}
prices <- do.call(cbind, prices)
colnames(prices) <- substr(colnames(prices), 1, 5)
oosMFreturns <- na.omit(Return.calculate(prices))
oosMFmonths <- apply.monthly(oosMFreturns, Return.cumulative)

oosMF <- list()
for(i in 1:12) {
  oosMF[[i]] <- ruleBacktest(returns = oosMFmonths, nMonths = i, dailyReturns = oosMFreturns, nSD = 252)[[1]]
}
oosMF <- do.call(cbind, oosMF)
oosMF <- Return.portfolio(R = na.omit(oosMF), rebalance_on = "months")
oosMF <- oosMF["2009-04/2011-03"]

fullOOS <- rbind(oosMF, oosStrat)

rbind(table.AnnualizedReturns(fullOOS), maxDrawdown(fullOOS), CalmarRatio(fullOOS))
charts.PerformanceSummary(fullOOS)

And the results:

portfolio.returns
Annualized Return 0.1273
Annualized Std Dev 0.0901
Annualized Sharpe (Rf=0%) 1.4119
Worst Drawdown 0.1061
Calmar Ratio 1.1996

And one more equity curve (only the second!).

In other words, the out-of-sample statistics are comparable to the in-sample statistics: the Sharpe ratio is higher, the Calmar ratio slightly lower, but on the whole, the performance has held up. Unfortunately, the strategy is currently in a drawdown, but that’s the breaks.

So, whew. That concludes my first go at hypothesis-driven development, which has hopefully at least demonstrated the process to a satisfactory degree. What started off as a toy strategy went from a rejection, to a non-rejection, to demonstrating ideas from three separate papers, and ended with out-of-sample statistics that largely matched, if not outperformed, the in-sample statistics. For those thinking about investing in this strategy (again, here it is: take 12 different portfolios, each selecting the asset with the highest momentum over months 1 through 12, target an annualized volatility of 10% with volatility defined as the rolling annualized 252-day standard deviation, and equal-weight them every month), what I didn’t cover was turnover and taxes (this is a bond ETF strategy, so dividends will play a large role).

Now, one other request–many of the ideas for this blog come from my readers. I am especially interested in things to think about from readers with line-management responsibilities, as I think many of the questions from those individuals are likely the most universally interesting ones. If you’re one such individual, I’d appreciate an introduction, and knowing who more of the individuals in my reader base are.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up, consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.