The Return of Free Data and Possible Volatility Trading Subscription

This post will be about pulling free data from AlphaVantage, and gauging interest for a volatility trading subscription service.

So first off, ever since the yahoos at Yahoo decided to turn off their free data, the world of free daily data has been in somewhat of a dark age. Well, thanks to Josh Ulrich, Paul Teetor, and other R/Finance individuals (see http://blog.fosstrading.com/2017/10/getsymbols-and-alpha-vantage.html), the latest edition of quantmod (which can be installed from CRAN) now contains a way to get free financial data from AlphaVantage going back to the year 2000, which is usually enough for most backtests, as that date predates the inception of most ETFs.

Here’s how to do it.

First off, you need to go to AlphaVantage, register, and get an API key (https://www.alphavantage.co/support/#api-key).

Once you do that, downloading the data is simple, if slightly slow:

require(quantmod)

getSymbols('SPY', src = 'av', adjusted = TRUE, output.size = 'full', api.key = "YOUR_KEY_HERE")

And the results:

> head(SPY)
           SPY.Open SPY.High SPY.Low SPY.Close SPY.Volume SPY.Adjusted
2000-01-03   148.25   148.25 143.875  145.4375    8164300     104.3261
2000-01-04   143.50   144.10 139.600  139.8000    8089800     100.2822
2000-01-05   139.90   141.20 137.300  140.8000    9976700     100.9995
2000-01-06   139.60   141.50 137.800  137.8000    6227200      98.8476
2000-01-07   140.30   145.80 140.100  145.8000    8066500     104.5862
2000-01-10   146.30   146.90 145.000  146.3000    5741700     104.9448

Which means that if any of my old posts on asset allocation have become somewhat defunct thanks to bad Yahoo data, they will now work again with a slight modification to the data input code.
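
For instance, here is a minimal sketch (the ticker list and object names below are my own placeholders, not from any particular old post) of how such a script's data input step might be pointed at AlphaVantage instead of Yahoo:

require(quantmod)

# placeholder tickers -- substitute whatever universe the old post used
symbols <- c("SPY", "TLT", "GLD")

# pull adjusted daily data for each symbol from AlphaVantage
# (free API keys are rate-limited, so a brief pause between calls may be necessary)
getSymbols(symbols, src = 'av', adjusted = TRUE, output.size = 'full', api.key = "YOUR_KEY_HERE")

# assemble an adjusted-close price matrix and compute returns from it
prices <- do.call(cbind, lapply(symbols, function(x) Ad(get(x))))
returns <- na.omit(ROC(prices, type = "discrete"))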

Beyond demonstrating this routine, the other thing I’d like to do is to gauge interest in a volatility signal subscription service, for a system I personally started trading a couple of months ago.

Simply put, I have seen other websites offer subscription services with worse risk/reward than the strategy I currently trade, which switches between XIV, ZIV, and VXX. Currently, the equity curve, on a log-10 scale, looks like this:

[Figure: volStratPerf -- log-scale equity curve of the volatility strategy]

That is, $1000 in 2008 would have become approximately $1,000,000 today, if one was able to trade this strategy since then.

Since 2011 (around the time of inception for XIV), the performance has been:


                        Performance
Annualized Return         0.8265000
Annualized Std Dev        0.3544000
Annualized Sharpe (Rf=0%) 2.3319000
Worst Drawdown            0.2480087
Calmar Ratio              3.3325450

Considering that some websites out there charge upwards of $50 a month for a single tactical asset rotation strategy with an inferior risk/return profile (and a lot more for a combination of them), or for a volatility strategy that may have had a massive, historically record-breaking drawdown, I was hoping to gauge a price point for what readers would consider paying for signals from a better strategy than those.

Thanks for reading.

NOTE: I am currently interested in networking and am seeking full-time opportunities related to my skill set. My LinkedIn profile can be found here.

The Kelly Criterion — Does It Work?

This post will be about implementing and investigating the running Kelly Criterion — that is, a constantly adjusted Kelly Criterion that changes as a strategy realizes returns.

For those not familiar with the Kelly Criterion, it’s the idea of adjusting a bet size to maximize a strategy’s long-term growth rate. Both Wikipedia (https://en.wikipedia.org/wiki/Kelly_criterion) and Investopedia have entries on the Kelly Criterion. Essentially, it’s about maximizing your long-run expectation of a betting system by sizing bets higher when the edge is higher, and vice versa.

There are two formulations of the Kelly criterion: the Wikipedia result presents it as the mean return over the variance of returns (mean over sigma squared). The Investopedia definition is P - [(1-P)/winLossRatio], where P is the probability of a winning bet, and winLossRatio is the average win over the average loss.

In any case, here are the two implementations.

require(TTR)
require(PerformanceAnalytics)

investoPediaKelly <- function(R, kellyFraction = 1, n = 63) {
  # indicators for winning and losing periods
  signs <- sign(R)
  posSigns <- signs; posSigns[posSigns < 0] <- 0
  negSigns <- signs; negSigns[negSigns > 0] <- 0; negSigns <- negSigns * -1
  
  # rolling probability of a winning period over the past n observations
  probs <- runSum(posSigns, n = n)/(runSum(posSigns, n = n) + runSum(negSigns, n = n))
  
  # rolling win/loss ratio: average winning return divided by average losing return
  # (the losing average keeps its negative sign here)
  posVals <- R; posVals[posVals < 0] <- 0
  negVals <- R; negVals[negVals > 0] <- 0; 
  wlRatio <- (runSum(posVals, n = n)/runSum(posSigns, n = n))/(runSum(negVals, n = n)/runSum(negSigns, n = n))
  
  # Kelly fraction: P - (1-P)/winLossRatio, scaled by kellyFraction
  kellyRatio <- probs - ((1-probs)/wlRatio)
  out <- kellyRatio * kellyFraction
  return(out)
}

wikiKelly <- function(R, kellyFraction = 1, n = 63) {
  return(runMean(R, n = n)/runVar(R, n = n)*kellyFraction)
}
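
Since the volatility strategy below isn't replicable, here is a minimal usage sketch on plain SPY returns instead (assuming SPY is already loaded, for instance from the AlphaVantage pull shown earlier; the object names are my own). As discussed below, expect the mean/variance version to suggest absurd leverage.

require(quantmod)
require(PerformanceAnalytics)

# daily returns from the adjusted close
spyRets <- na.omit(Return.calculate(Ad(SPY)))

# rolling 63-day Kelly leverage from both formulations, lagged two days as in the code below
wikiLev <- lag(wikiKelly(spyRets), 2)
invLev <- lag(investoPediaKelly(spyRets), 2)

kellySpy <- na.omit(cbind(spyRets * wikiLev, spyRets * invLev, spyRets))
colnames(kellySpy) <- c("wikiKellySPY", "investopediaKellySPY", "SPY")
charts.PerformanceSummary(kellySpy)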

Let’s try this with some data. At this point in time, I’m going to show a non-replicable volatility strategy that I currently trade.

[Figure: volSince2011 -- equity curve of the volatility strategy]

For the record, here are its statistics:

                              Close
Annualized Return         0.8021000
Annualized Std Dev        0.3553000
Annualized Sharpe (Rf=0%) 2.2574000
Worst Drawdown            0.2480087
Calmar Ratio              3.2341613

Now, let’s see what the Wikipedia version does:

# 'out' holds the daily returns of the volatility strategy shown above
badKelly <- out * lag(wikiKelly(out), 2)
charts.PerformanceSummary(badKelly)

[Figure: badKelly -- performance summary of the mean/variance Kelly-sized strategy]

The results are simply ridiculous. And here’s why: say you have a mean return of .0005 per day (5 bps/day), and a standard deviation equal to that (that is, a daily Sharpe ratio of 1). The formula gives .0005/.0005^2 = 1/.0005 = 2000. In other words, leverage of 2000 times. This clearly makes no sense.
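
A quick sanity check of that arithmetic in R:

# 5 bps/day mean return with an equal standard deviation (daily Sharpe of 1)
muDaily <- 0.0005
sdDaily <- 0.0005

muDaily / sdDaily^2 # mean over variance: a suggested leverage of 2000x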

The other variant is the more particular Investopedia definition.

invKelly <- out * lag(investoPediaKelly(out), 2)
charts.PerformanceSummary(invKelly)

[Figure: invKelly -- performance summary of the Investopedia Kelly-sized strategy]

Looks a bit more reasonable. However, how does it stack up against not using it at all?

compare <- na.omit(cbind(out, invKelly))
charts.PerformanceSummary(compare)

[Figure: kellyCompare -- base strategy vs. Investopedia Kelly sizing]

Turns out, the fabled Kelly Criterion doesn’t really change things all that much.

For the record, here are the statistical comparisons:

                               Base     Kelly
Annualized Return         0.8021000 0.7859000
Annualized Std Dev        0.3553000 0.3588000
Annualized Sharpe (Rf=0%) 2.2574000 2.1903000
Worst Drawdown            0.2480087 0.2579846
Calmar Ratio              3.2341613 3.0463063

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

Leverage Up When You’re Down?

This post will investigate the idea of reducing leverage when drawdowns are small, and increasing leverage as losses accumulate. It’s based on the idea that whatever goes up must come down, and whatever comes down generally goes back up.

I originally came across this idea in this blog post.

So, first off, let’s write an easy function that allows replication of this idea. Essentially, we have several arguments:

One: the default leverage (that is, when your drawdown is zero, what’s your exposure?). For reference, in the original post, it’s 10%.

Next: the various leverage levels. In the original post, the leverage levels are 25%, 50%, and 100%.

And lastly, we need the corresponding thresholds at which to apply those leverage levels. In the original post, those levels are 20%, 40%, and 55%.

So, now we can create a function to implement that in R. The idea is that we have R compute the drawdowns, and then use that information to determine leverage levels as precisely and as frequently as possible.

Here’s a quick piece of code to do so:

require(xts)
require(PerformanceAnalytics)

drawdownLev <- function(rets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55)) {
  # compute drawdowns
  dds <- PerformanceAnalytics:::Drawdowns(rets)
  
  # initialize leverage to the default level
  dds$lev <- defaultLev
  
  # change the leverage for every threshold
  for(i in 1:length(ddthresh)) {
    
    # as drawdowns fall below each threshold, adjust leverage
    # (the first column of dds is named after the input return column -- "Close" here)
    dds$lev[dds$Close < ddthresh[i]] <- levs[i]
  }
  
  # compute the new strategy returns -- lag leverage by two days so it is applied at the next close
  out <- rets * lag(dds$lev, 2)
  
  # return the leverage and the new returns
  leverage <- dds$lev
  colnames(leverage) <- c("DDLev_leverage")
  return(list(leverage, out))
}

So, let’s replicate some results.

require(downloader)
require(xts)
require(PerformanceAnalytics)


download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

xivDDlev <- drawdownLev(xivRets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55))
compare <- na.omit(cbind(xivDDlev[[2]], xivRets))
colnames(compare) <- c("XIV_DD_leverage", "XIV")

charts.PerformanceSummary(compare['2011::2016'])

And our results look something like this:

[Figure: xivddlev -- XIV with drawdown-based leverage vs. XIV, 2011-2016]
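
The summary table below was presumably generated with the same reporting call used later in this post for the full sample (this is my reconstruction, restricted to the 2011-2016 window shown in the chart):

rbind(table.AnnualizedReturns(compare['2011::2016']),
      maxDrawdown(compare['2011::2016']),
      CalmarRatio(compare['2011::2016']))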

                          XIV_DD_leverage       XIV
Annualized Return               0.2828000 0.2556000
Annualized Std Dev              0.3191000 0.6498000
Annualized Sharpe (Rf=0%)       0.8862000 0.3934000
Worst Drawdown                  0.4870604 0.7438706
Calmar Ratio                    0.5805443 0.3436668

That said, what would happen if one were to extend the data for all available XIV data?

[Figure: xivddlev2 -- XIV with drawdown-based leverage vs. XIV, full sample]

> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
                          XIV_DD_leverage       XIV
Annualized Return               0.1615000 0.3319000
Annualized Std Dev              0.3691000 0.5796000
Annualized Sharpe (Rf=0%)       0.4375000 0.5727000
Worst Drawdown                  0.8293650 0.9215784
Calmar Ratio                    0.1947428 0.3601385

A different story.

In this case, I think the takeaway is that such a mechanism does well when the benchmark’s drawdowns occur sharply, so that the lower exposure protects from those sharp drawdowns, and the benchmark then spends much of its time in recovery mode, so that the increased exposure has time to earn outsized returns before the next drawdown. When the benchmark continues to fall after maximum leverage is reached, or continues to perform well when not in drawdown, such a mechanism falls behind quickly.

As always, there is no free lunch when it comes to drawdowns, as trying to lower exposure in preparation for a correction will necessarily mean forfeiting a painful amount of upside in the good times, at least as presented in the original post.

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

Let’s Talk Drawdowns (And Affiliates)

This post will be directed towards those newer in investing, with an explanation of drawdowns–in my opinion, a simple and highly important risk statistic.

Would you invest in this?

[Figure: SP500ew -- the S&P 500, 2000 through 2012]

As it turns out, millions of people do, and did. That is the S&P 500, from 2000 through 2012, more colloquially referred to as “the stock market”. Plenty of people around the world invest in it, and for a risk to reward payoff that is very bad, in my opinion. This is an investment that, in ten years, lost half of its value–twice!

At its simplest, an investment (placing your money in an asset like a stock or a savings account instead of spending it) has two things you need to look at.

[Figure: cagr -- compound annualized growth rate illustration]

First, what’s your reward? If you open up a bank CD, you might be fortunate to get 3%. If you invest it in the stock market, you might get 8% per year (on average) if you held it for 20 years. In other words, you stow away $100 on January 1st, and you might come back and find $108 in your account on December 31st. This is often called the compound annualized growth rate (CAGR), meaning that if you have $100 one year and earn 8%, you have $108, then earn 8% on that, and so on.

The second thing to look at is the risk. What can you lose? The simplest answer to this is “the maximum drawdown”. If this sounds complicated, it simply means “the biggest loss”. So, if you had $100 one month, $120 next month, and $90 the month after that, your maximum drawdown (that is, your maximum loss) would be 1 – 90/120 = 25%.

[Figure: Maximum-Drawdown illustration]

When you put the reward and risk together, you can create a ratio, to see how your rewards and risks line up. This is called a Calmar ratio, and you get it by dividing your CAGR by your maximum drawdown. The Calmar Ratio is a ratio that I interpret as “for every dollar you lose in your investment’s worst performance, how many dollars can you make back in a year?” For my own investments, I prefer this number to be at least 1, and know of a strategy for which that number is above 2 since 2011, or higher than 3 if simulated back to 2008.
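
Here are those two calculations as a few lines of R (my own illustration, using the hypothetical $100/$120/$90 account values and the 8% CAGR figure from above):

# hypothetical month-end account values
equity <- c(100, 120, 90)

# maximum drawdown: the largest percentage drop from a running peak
maxDD <- 1 - min(equity / cummax(equity)) # 1 - 90/120 = 0.25

# Calmar ratio: CAGR divided by maximum drawdown
0.08 / maxDD # 0.32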

Most stocks don’t even have a Calmar ratio of 1 (a ratio of at least 1 means that, on average, an investment makes back in a year more than it can possibly lose). Even Amazon, the company whose stock made Jeff Bezos the richest man in the world, only has a Calmar Ratio of less than 2/5, with a maximum loss of more than 90% in the dot-com crash. The S&P 500, again, “the stock market”, since 1993, has a Calmar Ratio of around 1/6. That is, the worst losses can take *years* to make back.

A lot of wealth advisers like to say that they recommend a large holding of stocks for young people. In my opinion, whether you’re young or old, losing half of everything hurts, and there are much better ways to make money than to simply buy and hold a collection of stocks.

****

For those with coding skills, one way to gauge just how good or bad an investment is, is this:

An investment has a history–that is, in January, it made 3%, in February, it lost 2%, in March, it made 5%, and so on. By shuffling that history around, so that say, January loses 2%, February makes 5%, and March makes 3%, you can create an alternate history of the investment. It will start and end in the same place, but the journey will be different. For investments that have existed for a few years, it is possible to create many different histories, and compare the Calmar ratio of the original investment to its shuffled “alternate histories”. Ideally, you want the investment to be ranked among the highest possible ways to have made the money it did.

To put it simply: would you rather fall one inch a thousand times, or fall a thousand inches once? Well, the first one is no different than jumping rope. The second one will kill you.

Here is some code I wrote in R (if you don’t code in R, don’t worry) to see just how the S&P 500 (the stock market) did compared to how it could have done.

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)
require(data.table)

SPY <- Quandl("EOD/SPY", start_date="1990-01-01", type = "xts")
SPYrets <- na.omit(Return.calculate(SPY$Adj_Close))

spySims <- list()
set.seed(123)
for(i in 1:999) {
  simulatedSpy <- xts(sample(coredata(SPYrets), size = length(SPYrets), replace = FALSE), order.by=index(SPYrets))
  colnames(simulatedSpy) <- paste("sampleSPY", i, sep="_")
  spySims[[i]] <- simulatedSpy
}
spySims <- do.call(cbind, spySims)
spySims <- cbind(spySims, SPYrets)
colnames(spySims)[1000] <- "Original SPY"

dailyReturnCAGR <- function(rets) {
  return(prod(1+rets)^(252/length(rets))-1)
}

rets <- sapply(spySims, dailyReturnCAGR)
drawdowns <- maxDrawdown(spySims)
calmars <- rets/drawdowns
ranks <- rank(calmars)
plot(density(as.numeric(calmars)), main = 'Calmars of reshuffled SPY, realized reality in red')
abline(v=as.numeric(calmars[1000]), col = 'red')

This is the resulting plot:

[Figure: spyCalmars -- density of Calmar ratios of reshuffled SPY, realized value in red]

That red line is the actual performance of the S&P 500 compared to what could have been. Of the 999 shuffled alternate histories, only 91 did worse than what actually happened.

This means that the stock market isn’t a particularly good investment, and that you can do much better using tactical asset allocation strategies.

****

One site I’m affiliated with is AllocateSmartly. It is a cheap investment subscription service ($30 a month) that compiles a collection of asset allocation strategies that perform better than many wealth advisers. When you combine some of those strategies, the performance is better still. To put it into perspective, one model strategy I’ve come up with has this performance:
[Figure: allocateSmartlyModelPortfolio -- model portfolio performance]

In this case, the compound annualized growth rate is nearly double the maximum loss. And for those interested in something a bit more aggressive, note that this strategy ensemble uses some fairly conservative strategies in its approach.

****

In conclusion, when considering how to invest your money, keep in mind both the reward, and the risk. One very simple and important way to understand risk is how much an investment can possibly lose, from its highest, to its lowest value following that peak. When you combine the reward and the risk, you can get a ratio that tells you about how much you can stand to make for every dollar lost in an investment’s worst performance.

Thanks for reading.

NOTE: I am interested in networking opportunities, projects, and full-time positions related to my skill set. If you are looking to collaborate, please contact me on my LinkedIn here.

An Out of Sample Update on DDN’s Volatility Momentum Trading Strategy and Beta Convexity

The first part of this post is a quick update on the volatility ETN momentum strategy from Tony Cooper of Double Digit Numerics, covered on the Volatility Made Simple blog (which stopped updating about a year and a half ago). The second part will cover Dr. Jonathan Kinlay’s Beta Convexity concept.

So, now that I have the ability to generate a term structure and constant expiry contracts, I decided to revisit some of the strategies on Volatility Made Simple and see if any of them are any good (long story short: all of the publicly detailed ones aren’t so hot besides mine–they either have a massive drawdown in-sample around the time of the crisis, or a massive drawdown out-of-sample).

Why this strategy? Because it seemed different from most of the usual term structure ratio trades (of which mine is an example), so I thought I’d check out how it did since its first publishing date, and because it’s rather easy to understand.

Here’s the strategy:

Take XIV, VXX, ZIV, VXZ, and SHY (this last one as the “risk free” asset), and at the close, invest in whichever has had the highest 83 day momentum (this was the result of optimization done on volatilityMadeSimple).

Here’s the code to do this in R, using the Quandl EOD database. There are two variants tested–observe the close, buy the close (AKA magical thinking), and observe the close, buy tomorrow’s close.

require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)

Quandl.api_key("yourKeyHere")

symbols <- c("XIV", "VXX", "ZIV", "VXZ", "SHY")

prices <- list()
for(i in 1:length(symbols)) {
  price <- Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close
  colnames(price) <- symbols[i]
  prices[[i]] <- price
}
prices <- na.omit(do.call(cbind, prices))
returns <- na.omit(Return.calculate(prices))

# find highest asset, assign column names
topAsset <- function(row, assetNames) {
  out <- row==max(row, na.rm = TRUE)
  names(out) <- assetNames
  out <- data.frame(out)
  return(out)
}

# compute momentum
momentums <- na.omit(xts(apply(prices, 2, ROC, n = 83), order.by=index(prices)))

# find highest asset each day, turn it into an xts
highestMom <- apply(momentums, 1, topAsset, assetNames = colnames(momentums))
highestMom <- xts(t(do.call(cbind, highestMom)), order.by=index(momentums))

# observe today's close, buy tomorrow's close
buyTomorrow <- na.omit(xts(rowSums(returns * lag(highestMom, 2)), order.by=index(highestMom)))

# observe today's close, buy today's close (aka magic thinking)
magicThinking <- na.omit(xts(rowSums(returns * lag(highestMom)), order.by=index(highestMom)))

out <- na.omit(cbind(buyTomorrow, magicThinking))
colnames(out) <- c("buyTomorrow", "magicalThinking")

# results
charts.PerformanceSummary(out['2014-04-11::'], legend.loc = 'top')
rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))

Pretty simple.

Here are the results.

[Figure: performance summary of the buyTomorrow and magicalThinking variants]

> rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))
                          buyTomorrow magicalThinking
Annualized Return          -0.0320000       0.0378000
Annualized Std Dev          0.5853000       0.5854000
Annualized Sharpe (Rf=0%)  -0.0547000       0.0646000
Worst Drawdown              0.8166912       0.7761655

Looks like this strategy didn’t pan out too well. Just a daily reminder that if you’re using a fine grid search to select a particularly good parameter (EG n = 83 days? Maybe four 21-day trading months, but even that would be n = 84), you’re asking for, in the words of Mr. Tony Cooper, a visit from the grim reaper.

****

Moving on to another topic: whenever Dr. Jonathan Kinlay posts something that I think I can replicate, I’d be very wise to do so, as he is a very skilled and experienced practitioner (and also includes me on his blogroll).

A topic that Dr. Kinlay covered is the idea of beta convexity–namely, that an asset’s beta to a benchmark may be different when the benchmark is up as compared to when it’s down. Essentially, it’s the idea that we want to weed out firms that are what I’d deem as “losers in disguise”–I.E. those that act fine when times are good (which is when we really don’t care about diversification, since everything is going up anyway), but do nothing during bad times.

The beta convexity is calculated quite simply: it’s the beta of an asset to a benchmark when the benchmark has a positive return, minus the beta of an asset to a benchmark when the benchmark has a negative return, then squaring the difference. That is, (beta_bench_positive – beta_bench_negative) ^ 2.

Here’s some R code to demonstrate this, using IBM vs. the S&P 500 since 1995.

ibm <- Quandl("EOD/IBM", start_date="1995-01-01", type = "xts")
ibmRets <- Return.calculate(ibm$Adj_Close)

spy <- Quandl("EOD/SPY", start_date="1995-01-01", type = "xts")
spyRets <- Return.calculate(spy$Adj_Close)

rets <- na.omit(cbind(ibmRets, spyRets))
colnames(rets) <- c("IBM", "SPY")

betaConvexity <- function(Ra, Rb) {
  # beta of the asset on days when the benchmark return is positive
  positiveBench <- Rb[Rb > 0]
  assetPositiveBench <- Ra[index(positiveBench)]
  positiveBeta <- CAPM.beta(Ra = assetPositiveBench, Rb = positiveBench)
  
  # beta of the asset on days when the benchmark return is negative
  negativeBench <- Rb[Rb < 0]
  assetNegativeBench <- Ra[index(negativeBench)]
  negativeBeta <- CAPM.beta(Ra = assetNegativeBench, Rb = negativeBench)
  
  # beta convexity: squared difference between the two betas
  out <- (positiveBeta - negativeBeta) ^ 2
  return(out)
}

betaConvexity(rets$IBM, rets$SPY)

For the result:

> betaConvexity(rets$IBM, rets$SPY)
[1] 0.004136034

Thanks for reading.

NOTE: I am always looking to network, and am currently actively looking for full-time opportunities which may benefit from my skill set. If you have a position which may benefit from my skills, do not hesitate to reach out to me. My LinkedIn profile can be found here.

Testing the Hierarchical Risk Parity algorithm

This post will be a modified backtest of the Adaptive Asset Allocation backtest from AllocateSmartly, using the Hierarchical Risk Parity algorithm from the last post, because Adam Butler was eager to see my results. On the whole, as Adam Butler told me he had seen, HRP does not generate outperformance when applied to a small, carefully constructed, diversified-by-selection universe of asset classes, as opposed to a universe of hundreds or even several thousand assets, where its theoretical advantages have more room to show.

First off, I would like to thank one Matthew Barry for helping me modify my HRP algorithm so as to not use the global environment for recursion. You can find his github here.

Here is the modified HRP code.

covMat <- read.csv('cov.csv', header = FALSE)
corMat <- read.csv('corMat.csv', header = FALSE)

clustOrder <- hclust(dist(corMat), method = 'single')$order

getIVP <- function(covMat) {
  invDiag <- 1/diag(as.matrix(covMat))
  weights <- invDiag/sum(invDiag)
  return(weights)
}

getClusterVar <- function(covMat, cItems) {
  covMatSlice <- covMat[cItems, cItems]
  weights <- getIVP(covMatSlice)
  cVar <- t(weights) %*% as.matrix(covMatSlice) %*% weights
  return(cVar)
}

getRecBipart <- function(covMat, sortIx) {
  w <- rep(1,ncol(covMat))
  w <- recurFun(w, covMat, sortIx)
  return(w)
}

recurFun <- function(w, covMat, sortIx) {
  subIdx <- 1:trunc(length(sortIx)/2)
  cItems0 <- sortIx[subIdx]
  cItems1 <- sortIx[-subIdx]
  cVar0 <- getClusterVar(covMat, cItems0)
  cVar1 <- getClusterVar(covMat, cItems1)
  alpha <- 1 - cVar0/(cVar0 + cVar1)
  
  # scoping mechanics using w as a free parameter
  w[cItems0] <- w[cItems0] * alpha
  w[cItems1] <- w[cItems1] * (1-alpha)
  
  if(length(cItems0) > 1) {
    w <- recurFun(w, covMat, cItems0)
  }
  if(length(cItems1) > 1) {
    w <- recurFun(w, covMat, cItems1)
  }
  return(w)
}


out <- getRecBipart(covMat, clustOrder)
out

With covMat and corMat being from the last post. In fact, this function can be further modified by encapsulating the clustering order within the getRecBipart function, but in the interest of keeping the code as similar to Marcos Lopez de Prado’s code as possible, I’ll leave it here.
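
For what it’s worth, here is a minimal sketch of that encapsulation (the wrapper name is my own, not from the paper); it simply bundles the clustering call with the recursive bisection:

# hypothetical convenience wrapper: cluster on the correlation matrix, then bisect recursively
getHRPWeights <- function(covMat, corMat) {
  clustOrder <- hclust(dist(corMat), method = 'single')$order
  return(getRecBipart(covMat, clustOrder))
}

# out <- getHRPWeights(covMat, corMat)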

Anyhow, the backtest will follow. One thing I will mention is that I’m using Quandl’s EOD database, as Yahoo has really screwed up their financial database (I.E. some sector SPDRs have broken data, dividends not adjusted, etc.). While this database is a $50/month subscription, I believe free users can access it up to 150 times in 60 days, so that should be enough to run backtests from this blog, so long as you save your downloaded time series for later use by using write.zoo.

This code needs the tseries library for the portfolio.optim function, which is used for the minimum variance portfolio (Dr. Kris Boudt has a course on this at DataCamp), along with the other standard packages.

A helper function for this backtest (and really, any other momentum rotation backtest) is the appendMissingAssets function, which simply adds on assets not selected to the final weighting and re-orders the weights by the original ordering.

require(tseries)
require(PerformanceAnalytics)
require(quantmod)
require(Quandl)

Quandl.api_key("YOUR_AUTHENTICATION_HERE") # not displaying my own api key, sorry :(

# function to append missing (I.E. assets not selected) asset names and sort into original order
appendMissingAssets <- function(wts, allAssetNames, wtsDate) {
  absentAssets <- allAssetNames[!allAssetNames %in% names(wts)]
  absentWts <- rep(0, length(absentAssets))
  names(absentWts) <- absentAssets
  wts <- c(wts, absentWts)
  wts <- xts(t(wts), order.by=wtsDate)
  wts <- wts[,allAssetNames]
  return(wts)
}

Next, we make the call to Quandl to get our data.

symbols <- c("SPY", "VGK", "EWJ", "EEM", "VNQ", "RWX", "IEF", "TLT", "DBC", "GLD")

rets <- list()
for(i in 1:length(symbols)) {
  
  # quandl command to download from EOD database. Free users should use write.zoo in this loop.
  
  returns <- Return.calculate(Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close)
  colnames(returns) <- symbols[i]
  rets[[i]] <- returns
}
rets <- na.omit(do.call(cbind, rets))

While Josh Ulrich fixed quantmod to actually get Yahoo data after Yahoo broke the API, the problem is that the Yahoo data is now garbage as well, and I’m not sure how much Josh Ulrich can do about that. I really hope some other provider can step up and provide free, usable EOD data so that I don’t have to worry about readers not being able to replicate the backtest, as my policy for this blog is that readers should be able to replicate the backtests so they don’t just nod and take my word for it. If you are or know of such a provider, please leave a comment so that I can let the blog readers know all about you.

Next, we initialize the settings for the backtest.

invVolWts <- list()
minVolWts <- list()
hrpWts <- list()
ep <- endpoints(rets, on =  "months")
nMonths = 6 # month lookback (6 as per parameters from allocateSmartly)
nVol = 20 # day lookback for volatility (20 ibid)

While the AAA backtest actually uses a 126-day lookback rather than a 6-month lookback, since the strategy trades at the end of every month, a 6-month lookback is effectively the same thing, give or take a few days out of 126, and the code is less complex this way.

Next, we have our actual backtest.

for(i in 1:(length(ep)-nMonths)) {
  
  # get returns subset and compute absolute momentum
  retSubset <- rets[c(ep[i]:ep[(i+nMonths)]),]
  retSubset <- retSubset[-1,]
  moms <- Return.cumulative(retSubset)
  
  # select top performing assets and subset returns for them
  highRankAssets <- rank(moms) >= 6 # top 5 assets
  posReturnAssets <- moms > 0 # positive momentum assets
  selectedAssets <- highRankAssets & posReturnAssets # intersection of the above
  selectedSubset <- retSubset[,selectedAssets] # subset returns slice
  
  if(sum(selectedAssets)==0) { # if no qualifying assets, zero weight for period
    
    wts <- xts(t(rep(0, ncol(retSubset))), order.by=last(index(retSubset)))
    colnames(wts) <- colnames(retSubset)
    invVolWts[[i]] <- minVolWts[[i]] <- hrpWts[[i]] <- wts
    
  } else if (sum(selectedAssets)==1) { # if one qualifying asset, invest fully into it
    
    wts <- xts(t(rep(0, ncol(retSubset))), order.by=last(index(retSubset)))
    colnames(wts) <- colnames(retSubset)
    wts[, which(selectedAssets==1)] <- 1
    invVolWts[[i]] <- minVolWts[[i]] <- hrpWts[[i]] <- wts
    
  } else { # otherwise, use weighting algorithms
    
    cors <- cor(selectedSubset) # correlation
    volSubset <- tail(selectedSubset, nVol) # 20 day volatility
    vols <- StdDev(volSubset)
    covs <- t(vols) %*% vols * cors
    
    # minimum volatility using portfolio.optim from tseries
    minVolRets <- t(matrix(rep(1, sum(selectedAssets))))
    minVolWt <- portfolio.optim(x=minVolRets, covmat = covs)$pw
    names(minVolWt) <- colnames(covs)
    minVolWt <- appendMissingAssets(minVolWt, colnames(retSubset), last(index(retSubset)))
    minVolWts[[i]] <- minVolWt
    
    # inverse volatility weights
    invVols <- 1/vols 
    invVolWt <- invVols/sum(invVols) 
    invNames <- colnames(invVolWt)
    invVolWt <- as.numeric(invVolWt) 
    names(invVolWt) <- invNames
    invVolWt <- appendMissingAssets(invVolWt, colnames(retSubset), last(index(retSubset)))
    invVolWts[[i]] <- invVolWt
    
    # hrp weights
    clustOrder <- hclust(dist(cors), method = 'single')$order
    hrpWt <- getRecBipart(covs, clustOrder)
    names(hrpWt) <- colnames(covs)
    hrpWt <- appendMissingAssets(hrpWt, colnames(retSubset), last(index(retSubset)))
    hrpWts[[i]] <- hrpWt
  }
}

In a few sentences, this is what happens:

The algorithm takes a subset of the returns (the past six months at every month), and computes absolute momentum. It then ranks the ten absolute momentum calculations, and selects the intersection of the top 5, and those with a return greater than zero (so, a dual momentum calculation).

If no assets qualify, the algorithm invests in nothing. If only one asset qualifies, the algorithm invests fully in that one asset. If two or more assets qualify, the algorithm computes a covariance matrix by multiplying 20-day volatilities with the 126-day correlation matrix (that is, t(sd_20) %*% sd_20, multiplied elementwise by cor_126). It then computes normalized inverse volatility weights using the volatility from the past 20 days, a minimum variance portfolio with the portfolio.optim function, and lastly, the hierarchical risk parity weights using the HRP code above from Marcos Lopez de Prado’s paper.

Lastly, the program puts together all of the weights, and adds a cash investment for any period without any investments.

invVolWts <- round(do.call(rbind, invVolWts), 3) # round for readability
minVolWts <- round(do.call(rbind, minVolWts), 3)
hrpWts <- round(do.call(rbind, hrpWts), 3)

# allocate to cash if no allocation made due to all negative momentum assets
invVolWts$cash <- 0; invVolWts$cash <- 1-rowSums(invVolWts)
hrpWts$cash <- 0; hrpWts$cash <- 1-rowSums(hrpWts)
minVolWts$cash <- 0; minVolWts$cash <- 1-rowSums(minVolWts)

# cash value will be zero
rets$cash <- 0

# compute backtest returns
invVolRets <- Return.portfolio(R = rets, weights = invVolWts)
minVolRets <- Return.portfolio(R = rets, weights = minVolWts)
hrpRets <- Return.portfolio(R = rets, weights = hrpWts)

Here are the results:

compare <- cbind(invVolRets, minVolRets, hrpRets)
colnames(compare) <- c("invVol", "minVol", "HRP")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))  
                             invVol    minVol       HRP
Annualized Return         0.0872000 0.0724000 0.0792000
Annualized Std Dev        0.1208000 0.1025000 0.1136000
Annualized Sharpe (Rf=0%) 0.7221000 0.7067000 0.6968000
Worst Drawdown            0.1548801 0.1411368 0.1593287
Calmar Ratio              0.5629882 0.5131956 0.4968234

In short, in the context of a small, carefully selected, and allegedly diversified universe (I’ll let Adam Butler speak for that one), where performance is dominated by the choice of which assets to invest in rather than how much, the theoretical upside of an algorithm that exploits a covariance structure without needing to invert a covariance matrix can be lost.

However, this test (albeit from 2007 onwards, thanks to ETF inception dates combined with lookback burn-in) confirms what Adam Butler himself told me, which is that HRP hasn’t impressed him, and from this backtest, I can see why. That said, in the context of dual momentum rank selection, I’m not convinced that any weighting scheme will realize much better performance than any other.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My linkedIn profile can be found here.

The Marcos Lopez de Prado Hierarchical Risk Parity Algorithm

This post will be about replicating the Marcos Lopez de Prado algorithm from his paper, Building Diversified Portfolios that Outperform Out of Sample. This algorithm attempts a tradeoff between classic mean-variance optimization, which takes the covariance structure into account but is unstable, and inverse volatility weighting, which ignores covariance but is more stable.

This is a paper that I struggled with until I ran the code in Python (I have Anaconda installed but have trouble installing some packages such as keras because I’m on Windows; I’d love to have someone walk me through setting up a Linux dual-boot), as I had assumed that the clustering algorithm concretely assigns every asset to a particular cluster (I.E. ETF 1 would be in cluster 1, ETF 2 in cluster 3, etc.). Turns out, that isn’t at all the case.

Here’s how the algorithm actually works.

First off, it computes a covariance and correlation matrix (created from simulated data in Marcos’s paper). Next, it uses a hierarchical clustering algorithm on a distance-transformed correlation matrix, with the “single” method (I.E. friend of friends–do ?hclust in R to read up more on this). The key output here is the order of the assets from the clustering algorithm. Note well: this is the only relevant artifact of the entire clustering algorithm.

Using this order, it then uses an algorithm that does the following:

Initialize a vector of weights equal to 1 for each asset.

Then, run the following recursive algorithm:

1) Break the order vector up into two equal-length (or as close to equal length) lists as possible.

2) For each half of the list, compute the inverse variance weights (that is, just the diagonal) of the covariance matrix slice containing the assets of interest, and then compute the variance of the cluster when multiplied by the weights (I.E. w’ * S^2 * w).

3) Then, do a basic inverse-variance weighting between the two clusters. Call the weight of cluster 0 alpha = 1 - cluster_variance_0/(cluster_variance_0 + cluster_variance_1), and the weight of cluster 1 its complement, (1 - alpha). For example, if cluster 0 has variance 0.04 and cluster 1 has variance 0.01, then alpha = 1 - 0.04/0.05 = 0.2, so the less volatile cluster 1 receives 80% of the weight.

4) Multiply all assets in the original vector of weights containing assets in cluster 0 with the weight of cluster 0, and all weights containing assets in cluster 1 with the weight of cluster 1. That is, weights[index_assets_cluster_0] *= alpha, weights[index_assets_cluster_1] *= 1-alpha.

5) Lastly, if a half of the list isn’t of length 1 (that is, not a single asset), repeat this entire process on that half, until every asset is its own cluster.

Here is the implementation in R code.

First off, the correlation matrix and the covariance matrix for use in this code, obtained from Marcos Lopez De Prado’s code in the appendix in his paper.

> covMat
             V1           V2           V3           V4           V5          V6           V7           V8           V9          V10
1   1.000647799 -0.003050479  0.010033224 -0.010759689 -0.005036503 0.008762563  0.998201625 -0.001393196 -0.001254522 -0.009365991
2  -0.003050479  1.009021349  0.008613817  0.007334478 -0.009492688 0.013031817 -0.009420720 -0.015346223  1.010520047  1.013334849
3   0.010033224  0.008613817  1.000739363 -0.000637885  0.001783293 1.001574768  0.006385368  0.001922316  0.012902050  0.007997935
4  -0.010759689  0.007334478 -0.000637885  1.011854725  0.005759976 0.000905812 -0.011912269  0.000461894  0.012572661  0.009621670
5  -0.005036503 -0.009492688  0.001783293  0.005759976  1.005835878 0.005606343 -0.009643250  1.008567427 -0.006183035 -0.007942770
6   0.008762563  0.013031817  1.001574768  0.000905812  0.005606343 1.064309825  0.004413960  0.005780148  0.017185396  0.011601336
7   0.998201625 -0.009420720  0.006385368 -0.011912269 -0.009643250 0.004413960  1.058172027 -0.006755374 -0.008099181 -0.016240271
8  -0.001393196 -0.015346223  0.001922316  0.000461894  1.008567427 0.005780148 -0.006755374  1.074833155 -0.011903469 -0.013738378
9  -0.001254522  1.010520047  0.012902050  0.012572661 -0.006183035 0.017185396 -0.008099181 -0.011903469  1.075346677  1.015220126
10 -0.009365991  1.013334849  0.007997935  0.009621670 -0.007942770 0.011601336 -0.016240271 -0.013738378  1.015220126  1.078586686
> corMat
             V1           V2           V3           V4           V5          V6           V7           V8           V9          V10
1   1.000000000 -0.003035829  0.010026270 -0.010693011 -0.005020245 0.008490954  0.970062043 -0.001343386 -0.001209382 -0.009015412
2  -0.003035829  1.000000000  0.008572055  0.007258718 -0.009422702 0.012575370 -0.009117080 -0.014736040  0.970108941  0.971348946
3   0.010026270  0.008572055  1.000000000 -0.000633903  0.001777455 0.970485047  0.006205079  0.001853505  0.012437239  0.007698212
4  -0.010693011  0.007258718 -0.000633903  1.000000000  0.005709500 0.000872861 -0.011512172  0.000442908  0.012052964  0.009210090
5  -0.005020245 -0.009422702  0.001777455  0.005709500  1.000000000 0.005418538 -0.009347204  0.969998023 -0.005945165 -0.007625721
6   0.008490954  0.012575370  0.970485047  0.000872861  0.005418538 1.000000000  0.004159261  0.005404237  0.016063910  0.010827955
7   0.970062043 -0.009117080  0.006205079 -0.011512172 -0.009347204 0.004159261  1.000000000 -0.006334331 -0.007592568 -0.015201540
8  -0.001343386 -0.014736040  0.001853505  0.000442908  0.969998023 0.005404237 -0.006334331  1.000000000 -0.011072068 -0.012759610
9  -0.001209382  0.970108941  0.012437239  0.012052964 -0.005945165 0.016063910 -0.007592568 -0.011072068  1.000000000  0.942667300
10 -0.009015412  0.971348946  0.007698212  0.009210090 -0.007625721 0.010827955 -0.015201540 -0.012759610  0.942667300  1.000000000

Now, for the implementation.

This reads in the two matrices above and gets the clustering order.

covMat <- read.csv('cov.csv', header = FALSE)
corMat <- read.csv('corMat.csv', header = FALSE)

clustOrder <- hclust(dist(corMat), method = 'single')$order

This is the clustering order:

> clustOrder
 [1]  9  2 10  1  7  3  6  4  5  8

Next, the getIVP (get Inverse Variance Portfolio) and getClusterVar functions (note: I’m trying to keep the naming conventions identical to those in Dr. Lopez de Prado’s paper).

getIVP <- function(covMat) {
  # get inverse variance portfolio from diagonal of covariance matrix
  invDiag <- 1/diag(as.matrix(covMat))
  weights <- invDiag/sum(invDiag)
  return(weights)
}

getClusterVar <- function(covMat, cItems) {
  # compute cluster variance from the inverse variance portfolio above
  covMatSlice <- covMat[cItems, cItems]
  weights <- getIVP(covMatSlice)
  cVar <- t(weights) %*% as.matrix(covMatSlice) %*% weights
  return(cVar)
}

Next, my code diverges from the code in the paper, because I do not use the list comprehension structure, but instead opt for a recursive algorithm, as I find that style to be more readable.

One wrinkle to note is the use of the double arrow assignment operator (<<-) to assign to a variable outside the scope of the recurFun function. I assign the initial weight vector w in the global environment, and update it from within recurFun. I am aware that it is a faux pas to create variables in the global environment, but my attempts at creating a temporary environment in which to update the weight vector did not produce the updating mechanism I had hoped for, so a little bit of assistance with refactoring this code would be appreciated.

getRecBipart <- function(covMat, sortIx) {
  # keeping track of weights vector in the global environment
  assign("w", value = rep(1, ncol(covMat)), envir = .GlobalEnv)

  # run recursion function
  recurFun(covMat, sortIx)
  return(w)
}

recurFun <- function(covMat, sortIx) {
  # get first half of sortIx which is a cluster order
  subIdx <- 1:trunc(length(sortIx)/2)

  # subdivide ordering into first half and second half
  cItems0 <- sortIx[subIdx]
  cItems1 <- sortIx[-subIdx]

  # compute cluster variances of covariance matrices indexed
  # on first half and second half of ordering
  cVar0 <- getClusterVar(covMat, cItems0)
  cVar1 <- getClusterVar(covMat, cItems1)
  alpha <- 1 - cVar0/(cVar0 + cVar1)
  
  # updating weights outside the function using scoping mechanics 
  w[cItems0] <<- w[cItems0] * alpha
  w[cItems1] <<- w[cItems1] * (1-alpha)
  
  # rerun the function on a half if the length of that half is greater than 1
  if(length(cItems0) > 1) {
    recurFun(covMat, cItems0)
  }
  if(length(cItems1) > 1) {
    recurFun(covMat, cItems1)
  }
}

Lastly, let’s run the function.

out <- getRecBipart(covMat, clustOrder)

With the result (which matches the paper):

> out
 [1] 0.06999366 0.07592151 0.10838948 0.19029104 0.09719887 0.10191545 0.06618868 0.09095933 0.07123881 0.12790318

So, hopefully this democratizes the use of this technology in R. While I have seen a raw Rcpp implementation and one from the Systematic Investor Toolbox, neither of those implementations satisfied me from a “plug and play” perspective. This implementation solves that issue. Anyone here can copy and paste these functions into their environment and immediately make use of one of the algorithms devised by one of the top minds in quantitative finance.

A demonstration in a backtest using this methodology will be forthcoming.

Thanks for reading.

NOTE: I am always interested in networking and full-time opportunities which may benefit from my skills. Furthermore, I am also interested in project work in the volatility ETF trading space. My linkedin profile can be found here.

Constant Expiry VIX Futures (Using Public Data)

This post will be about creating constant-expiry contracts (E.G. a rolling 30-day contract) using VIX futures settlement data from the CBOE and the spot VIX calculation (from Yahoo Finance, or wherever else). Although these synthetic contracts can sometimes be traded, that is not always the case (for instance, when the desired expiry is shorter than the front month’s time to expiry).

The last time I visited this topic, I created a term structure using publicly available data from the CBOE, along with an external expiry calendar.

The logical next step, of course, is to create constant-expiry contracts, which may or may not be tradable (for a constant 30-day contract, for instance, there are days on which even the front month has more than 30 days to expiry, so the synthetic would also require a position in spot VIX).

So here’s where we left off: a way to create a continuous term structure using CBOE settlement VIX data.

So from here, before anything, we need to get VIX data. And while the getSymbols command used to be easier to use, Yahoo broke its API (what else do you expect from an otherwise-irrelevant, washed-up web 1.0 dinosaur?), so it’s not possible to get free Yahoo data at this point in time (in the event that Josh Ulrich doesn’t fix this issue in the near future, I’m open to suggestions for other free sources of data of reputable quality). That means we need to get VIX data from elsewhere: particularly, the CBOE itself, which is a one-stop shop for all VIX-related data, and most likely some other interesting futures as well.

So here’s how to get VIX data from the CBOE (thanks, all you awesome CBOE people! And a shoutout to all my readers from the CBOE, I’m sure some of you are from there).

require(data.table)
require(xts)
require(quantmod)

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))
spotVix <- Cl(VIX)

Next, there’s a need for some utility functions to help out with identifying which futures contracts to use for constructing synthetics.

# find column with greatest days to expiry less than or equal to desired days to expiry
shortDurMinMax <- function(row, daysToExpiry) {
  return(max(which(row <= daysToExpiry)))
}

# find column with least days to expiry greater than desired days to expiry
longDurMinMax <- function(row, daysToExpiry) {
  return(min(which(row > daysToExpiry)))
}

# gets the difference between the two latest contracts (either expiry days or price)
getLastDiff <- function(row) {
  indices <- rev(which(!is.na(row)))
  out <- row[indices[1]] - row[indices[2]]
  return(out)
}

# gets the rightmost non-NA value of a row
getLastValue <- function(row) {
  indices <- rev(which(!is.na(row)))
  out <- row[indices[1]]
  return(out)
}

The first two functions are to determine short-duration and long-duration contracts. Simply, provided a row of data and the desired constant time to expiry, the first function finds the contract with a time closest to expiry less than or equal to the desired amount, while the second function does the inverse.

The next two functions are utilized in the scenario of a synthetic contract whose time to expiry is greater than that of the longest traded contract. Such a synthetic obviously could not be traded, but it can be created for use as an indicator. The third function gets the difference between the last two non-NA values in a row (I.E. the two longest-dated prices, or the two longest times to expiry), and the fourth one simply gets the rightmost non-NA value in a row.

The algorithm to create a synthetic constant-expiry contract/indicator is divided into three scenarios:

One, in which the desired time to expiry is shorter than the front month’s, such as a constant 30-day expiry contract when the front month has more than 30 days to maturity (such as on Nov 17, 2016). In that case, the weight on the front month is the desired time to expiry divided by the front month’s remaining time to expiry, and the remainder goes into spot VIX (another asset that cannot be traded, at least conventionally).

The second scenario is one in which the desired time to expiry is longer than that of the last traded contract. For instance, if the desire was to create a contract with a year to expiry when the furthest out is eight months, there obviously won’t be data for such a contract. In such a case, the algorithm computes the linear slope between the last two available contracts, and adds to the price of the last contract the product of that slope and the time remaining between the desired expiry and the last contract’s expiry.

Lastly, the third scenario (and the most common one under most use cases) is that of a synthetic for which there is both a traded contract with less time to expiry than the desired constant, and one with more time to expiry. In this instance, a bit of linear algebra (included in the code comments) gives the weight of the short-expiry contract, which is (desired - expiry_long)/(expiry_short - expiry_long).

The algorithm iterates through all three scenarios, and due to the mechanics of xts automatically sorting by timestamp, one obtains an xts object in order of dates of a synthetic, constant expiry futures contract.

Here is the code for the function.


constantExpiry <- function(spotVix, termStructure, expiryStructure, daysToExpiry) {
  
  # Compute synthetics that are too long (more time to expiry than furthest contract)
  
  # can be Inf if no column contains values greater than daysToExpiry (I.E. expiry is 3000 days)
  suppressWarnings(longCol <- xts(apply(expiryStructure, 1, longDurMinMax, daysToExpiry), order.by=index(termStructure)))
  longCol[longCol == Inf] <- 10
  
  # xts for too long to expiry -- need a NULL for rbinding if empty
  tooLong <- NULL
  
  # Extend the last term structure slope an arbitrarily long amount of time for those with too long expiry
  tooLongIdx <- index(longCol[longCol==10])
  if(length(tooLongIdx) > 0) {
    tooLongTermStructure <- termStructure[tooLongIdx]
    tooLongExpiryStructure <- expiryStructure[tooLongIdx]
    
    # difference in price/expiry for longest two contracts, use it to compute a slope
    priceDiff <- xts(apply(tooLongTermStructure, 1, getLastDiff), order.by = tooLongIdx)
    expiryDiff <- xts(apply(tooLongExpiryStructure, 1, getLastDiff), order.by = tooLongIdx)
    slope <- priceDiff/expiryDiff
    
    # get longest contract price and compute additional days to expiry from its time to expiry 
    # I.E. if daysToExpiry is 180 and longest is 120, additionalDaysToExpiry is 60
    maxDaysToExpiry <- xts(apply(tooLongExpiryStructure, 1, max, na.rm = TRUE), order.by = tooLongIdx)
    longestContractPrice <- xts(apply(tooLongTermStructure, 1, getLastValue), order.by = tooLongIdx)
    additionalDaysToExpiry <- daysToExpiry - maxDaysToExpiry
    
    # add slope multiplied by additional days to expiry to longest contract price
    tooLong <- longestContractPrice + additionalDaysToExpiry * slope
  }
  
  # compute synthetics that are too short (less time to expiry than shortest contract)
  
  # can be -Inf if no column contains values less than daysToExpiry (I.E. expiry is 5 days)
  suppressWarnings(shortCol <- xts(apply(expiryStructure, 1, shortDurMinMax, daysToExpiry), order.by=index(termStructure)))
  shortCol[shortCol == -Inf] <- 0
  
  # xts for too short to expiry -- need a NULL for rbinding if empty
  tooShort <- NULL
  
  tooShortIdx <- index(shortCol[shortCol==0])
  
  if(length(tooShortIdx) > 0) {
    tooShort <- termStructure[,1] * daysToExpiry/expiryStructure[,1] + spotVix * (1 - daysToExpiry/expiryStructure[,1])
    tooShort <- tooShort[tooShortIdx]
  }
  
  
  # compute everything in between (when time to expiry is between longest and shortest)
  
  # get unique permutations for contracts that term structure can create
  colPermutes <- cbind(shortCol, longCol)
  colnames(colPermutes) <- c("short", "long")
  colPermutes <- colPermutes[colPermutes$short > 0,]
  colPermutes <- colPermutes[colPermutes$long < 10,]
  
  regularSynthetics <- NULL
  
  # if we can construct synthetics from regular futures -- someone might enter an extremely long expiry
  # so this may not always be the case
  
  if(nrow(colPermutes) > 0) {
    
    # pasting long and short expiries into a single string for easier subsetting
    shortLongPaste <- paste(colPermutes$short, colPermutes$long, sep="_")
    uniqueShortLongPaste <- unique(shortLongPaste)
    
    regularSynthetics <- list()
    for(i in 1:length(uniqueShortLongPaste)) {
      # get unique permutation of short-expiry and long-expiry contracts
      permuteSlice <- colPermutes[which(shortLongPaste==uniqueShortLongPaste[i]),]
      expirySlice <- expiryStructure[index(permuteSlice)]
      termStructureSlice <- termStructure[index(permuteSlice)]
      
      # what are the parameters?
      shortCol <- unique(permuteSlice$short); longCol <- unique(permuteSlice$long)
      
      # computations -- some linear algebra
      
      # S/L are weights, ex_S/ex_L are time to expiry
      # D is desired constant time to expiry
      
      # S + L = 1
      # L = 1 - S
      # S + (1-S) = 1
      # 
      # ex_S * S + ex_L * (1-S) = D
      # ex_S * S + ex_L - ex_L * S = D
      # ex_S * S - ex_L * S = D - ex_L
      # S(ex_S - ex_L) = D - ex_L
      # S = (D - ex_L)/(ex_S - ex_L)
      
      weightShort <- (daysToExpiry - expirySlice[, longCol])/(expirySlice[, shortCol] - expirySlice[, longCol])
      weightLong <- 1 - weightShort
      syntheticValue <- termStructureSlice[, shortCol] * weightShort + termStructureSlice[, longCol] * weightLong
      
      regularSynthetics[[i]] <- syntheticValue
    }
    
    regularSynthetics <- do.call(rbind, regularSynthetics)
  }
  
  out <- rbind(tooShort, regularSynthetics, tooLong)
  colnames(out) <- paste0("Constant_", daysToExpiry)
  return(out)
}

And here’s how to use it:

constant30 <- constantExpiry(spotVix = spotVix, termStructure = termStructure, expiryStructure = expiryStructure, daysToExpiry = 30)
constant180 <- constantExpiry(spotVix = spotVix, termStructure = termStructure, expiryStructure = expiryStructure, daysToExpiry = 180)

constantTermStructure <- cbind(constant30, constant180)

chart.TimeSeries(constantTermStructure, legend.loc = 'topright', main = "Constant Term Structure")

With the result:
[Figure: constant 30-day and 180-day VIX term structure]

Which means that between the CBOE data itself and this function, which creates constant-expiry synthetics from CBOE spot and futures prices, one can obtain any futures contract, whether real or synthetic, to use as an indicator for volatility trading strategies. This allows for exploration of a wide variety of volatility trading strategies.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My linkedin can be found here.

Furthermore, if you are a volatility ETF/futures trading professional, I am interested in possible project-based collaboration. If you are interested, please contact me.

A Review of Gary Antonacci’s Dual Momentum Investing Book

This review is a book review of Gary Antonacci’s Dual Momentum Investing book.

The TL;DR: 4.5 out of 5 stars.

So, I honestly have very little criticism of the book, beyond the fact that it sort of insinuates that equity momentum is the be-all-end-all of investing, which is why I deduct a fraction of a point.

Now, for the book itself: first off, unlike other quantitative trading books I’ve read (aside from Andreas Clenow’s), the book outlines a very simple-to-follow strategy, to the point that it has already been replicated over at AllocateSmartly. (Side note: I think Walter’s resource at Allocate Smartly is probably the single best one-stop shop for reading up on any tactical asset allocation strategy, as it’s a compendium of many strategies in the 7-15% CAGR risk/return profile, and it even has a correlation matrix between them all.)

Regarding the rest of the content, Antonacci does a very thorough job of stepping readers through the history/rationale of momentum, and not just that, but also addressing the alternatives to his strategy.

While the “why momentum works” aspect can be found in this book and others on the subject (I.E. Alpha Architect’s Quantitative Momentum book), I do like the section on other alternative assets. Namely, the book touches upon the fact that commodities no longer trend, so a lot of CTAs are taking it on the chin, and that historically, fixed income has performed much worse than equities in absolute return terms. Furthermore, smart beta isn’t (smart), and many of these factors have very low aggregate returns (if they’re significant at all; I believe Wesley Gray at Alpha Architect has a blog post stating that they aren’t). There are a fair amount of footnotes for those interested in corroborating the assertions. Suffice it to say, for strategies that don’t need daily micromanagement and don’t use leverage (essentially, anything outside the space of volatility trading strategies), equity momentum is about as good as it gets.

Antonacci then introduces his readers to his GEM (Global Equities Momentum) strategy, which can be explained in a few sentences: at the end of each month, calculate the total 12-month return of SPY, EAFE, and BIL. If BIL has the highest return, buy AGG for that month; otherwise, buy whichever of the remaining assets has the highest return. Repeat. That's literally it, and the performance characteristics, on a risk-adjusted basis, are superior to just about any equity fund tied down to a tiny tracking error. Essentially, the reason is that equity markets have bear markets, and a dual momentum strategy lets you preserve your gains instead of giving them back in market downturns (e.g. 2000-2003, 2008, etc.), while keeping pace during the good times.
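For concreteness, here is a rough sketch of that decision rule in R. This is my own code, not Antonacci's implementation: it uses AlphaVantage data as in an earlier post, EFA as the EAFE proxy, and trailing 12-month returns on month-end adjusted closes, with YOUR_KEY_HERE standing in for an API key.

require(quantmod)

# pull adjusted daily prices (any daily adjusted price source would do here)
symbols <- c("SPY", "EFA", "BIL", "AGG")
getSymbols(symbols, src = 'av', adjusted = TRUE, output.size = 'full', api.key = YOUR_KEY_HERE)

# month-end adjusted closes, one column per symbol
prices <- do.call(cbind, lapply(symbols, function(s) Ad(get(s))))
monthlyPrices <- prices[endpoints(prices, on = "months"), ]
colnames(monthlyPrices) <- symbols

# trailing 12-month total returns
twelveMonthReturns <- ROC(monthlyPrices, n = 12, type = "discrete")

# the GEM rule as paraphrased above: hold AGG when BIL leads,
# otherwise hold the stronger of SPY and EFA
gemPick <- apply(twelveMonthReturns, 1, function(r) {
  if (any(is.na(r))) return(NA)
  if (r["BIL"] >= max(r["SPY"], r["EFA"])) return("AGG")
  if (r["SPY"] >= r["EFA"]) return("SPY")
  return("EFA")
})

tail(na.omit(gemPick))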

Finally, Antonacci provides some ideas for possibly improving on GEM, which I may examine in the future. However, the low-hanging fruit for improving on this strategy, in my opinion, is to find some other strategies that diversify its drawdowns and raise its risk-adjusted return profile. Even if the total return goes down, I believe an Interactive Brokers account can offer some amount of leverage (either 50% or 100%) to boost the total returns back up, or one could combine a more diversified portfolio with a volatility strategy.

Lastly, the appendix includes the original dual momentum paper, and a monthly return table for GEM going back to 1974.

All in all, this book is about as accessible and comprehensive as you can get on a solid strategy that readers actually *can* implement themselves in their brokerage account of choice (please use IB or Robinhood, because there's no point paying $8-$10 per trade if you're retail). That said, I still think there are avenues to explore if you're looking to improve your overall portfolio with GEM as a foundation.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time roles which can benefit from my skill set. My LinkedIn profile can be found here.

Creating a VIX Futures Term Structure In R From Official CBOE Settlement Data

This post will be detailing a process to create a VIX term structure from freely available CBOE VIX settlement data and a calendar of freely obtainable VIX expiry dates. This has applications for volatility trading strategies.

So this post, as has been usual for quite some time, will not be about a strategy, but rather a tool that can be used for exploring future strategies; particularly, volatility strategies, which seem to have been a hot topic on this blog some time ago (and may very well still be, especially since the Volatility Made Simple blog stopped tracking open-sourced strategies a year ago).

This post's topic is the VIX term structure: that is, creating a set of continuous contracts, properly rolled according to VIX contract specifications, rather than the hodgepodge of generic roll algorithms found on some other websites. The idea is that, as of the previous day's settlement (or whenever the CBOE actually releases its data), you can construct a curve of contracts and see whether it's in contango (front month cheaper than the next month, and so on) or backwardation (front month more expensive than the next month, etc.).

The first (and most code-intensive) part of the procedure is fairly simple: map each contract to an expiration date, then put the settlement prices and times to expiry into two separate xts objects, with one column for each contract.

The expiries text file is simply a collection of copied and pasted expiry dates from this site. It includes the January 2018 expiration date. Here is what it looks like:

> head(expiries)
  V1       V2   V3
1 18  January 2006
2 15 February 2006
3 22    March 2006
4 19    April 2006
5 17      May 2006
6 21     June 2006
require(xts)
require(data.table)
require(PerformanceAnalytics) # for chart.TimeSeries, used further below

# 06 through 17
years <- c(paste0("0", c(6:9)), as.character(c(10:17)))

# futures months
futMonths <- c("F", "G", "H", "J", "K", "M",
            "N", "Q", "U", "V", "X", "Z")

# expiries come from http://www.macroption.com/vix-expiration-calendar/
expiries <- read.table("expiries.txt", header = FALSE, sep = " ")

# convert expiries into dates in R
dateString <- paste(expiries$V3, expiries$V2, expiries$V1, sep = "-")
dates <- as.Date(dateString, format = "%Y-%B-%d")

# map futures months to numbers for dates
monthMaps <- cbind(futMonths, c("01", "02", "03", "04", "05", "06",
                                   "07", "08", "09", "10", "11", "12"))
monthMaps <- data.frame(monthMaps)
colnames(monthMaps) <- c("futureStem", "monthNum")

dates <- data.frame(dates)
dates$dateMon <- substr(dates$dates, 1, 7)

contracts <- expand.grid(futMonths, years)
contracts <- paste0(contracts[,1], contracts[,2])
contracts <- c(contracts, "F18")
stem <- "https://cfe.cboe.com/Publish/ScheduledTask/MktData/datahouse/CFE_"
#contracts <- paste0(stem, contracts, "_VX.csv")

masterlist <- list()
timesToExpiry <- list()
for(i in 1:length(contracts)) {
  
  # obtain data
  contract <- contracts[i]
  dataFile <- paste0(stem, contract, "_VX.csv")
  expiryYear <- paste0("20",substr(contract, 2, 3))
  expiryMonth <- monthMaps$monthNum[monthMaps$futureStem == substr(contract,1,1)]
  expiryDate <- dates$dates[dates$dateMon == paste(expiryYear, expiryMonth, sep="-")]
  data <- suppressWarnings(fread(dataFile))
  
  # create dates
  dataDates <- as.Date(data$`Trade Date`, format = '%m/%d/%Y')
  
  # create time to expiration xts
  toExpiry <- xts(expiryDate - dataDates, order.by=dataDates)
  colnames(toExpiry) <- contract
  timesToExpiry[[i]] <- toExpiry
  
  # get settlements
  settlement <- xts(data$Settle, order.by=dataDates)
  colnames(settlement) <- contract
  masterlist[[i]] <- settlement
}

# cbind outputs
masterlist <- do.call(cbind, masterlist)
timesToExpiry <- do.call(cbind, timesToExpiry)

# NA out zeroes in settlements
masterlist[masterlist==0] <- NA

From there, we need to visualize how many contracts are trading at once on any given day (that is, what is a good steady-state number of contracts for the term structure?).

sumNonNA <- function(row) {
  return(sum(!is.na(row)))
}

simultaneousContracts <- xts(apply(masterlist, 1, sumNonNA), order.by=index(masterlist))
chart.TimeSeries(simultaneousContracts)

The result looks like this:

So, 8 contracts (give or take) at any given point in time. This is confirmed by the end of the master list of settlements.

dim(masterlist)
tail(masterlist[,135:145])
> dim(masterlist)
[1] 3002  145
> tail(masterlist[,135:145])
           H17    J17    K17    M17    N17    Q17    U17    V17    X17    Z17   F18
2017-04-18  NA 14.725 14.325 14.525 15.175 15.475 16.225 16.575 16.875 16.925    NA
2017-04-19  NA 14.370 14.575 14.525 15.125 15.425 16.175 16.575 16.875 16.925    NA
2017-04-20  NA     NA 14.325 14.325 14.975 15.375 16.175 16.575 16.875 16.900    NA
2017-04-21  NA     NA 14.325 14.225 14.825 15.175 15.925 16.350 16.725 16.750    NA
2017-04-24  NA     NA 12.675 13.325 14.175 14.725 15.575 16.025 16.375 16.475 17.00
2017-04-25  NA     NA 12.475 13.125 13.975 14.425 15.225 15.675 16.025 16.150 16.75

Using this information, an algorithm can create eight continuous contracts, ranging from the front month to eight months out. The algorithm starts at the first day of the master list and runs to the first expiry, then moves between expiry windows, appending the front-month contract and the next seven contracts to a list, before rbinding everything together; it does the same for the expiry structure.

termStructure <- list()
expiryStructure <- list()
masterDates <- unique(c(first(index(masterlist)), dates$dates[dates$dates %in% index(masterlist)], Sys.Date()-1))
for(i in 1:(length(masterDates)-1)) {
  subsetDates <- masterDates[c(i, i+1)]
  dateRange <- paste(subsetDates[1], subsetDates[2], sep="::")
  subset <- masterlist[dateRange,c(i:(i+7))]
  subset <- subset[-1,]
  expirySubset <- timesToExpiry[index(subset), c(i:(i+7))]
  colnames(subset) <- colnames(expirySubset) <- paste0("C", c(1:8))
  termStructure[[i]] <- subset
  expiryStructure[[i]] <- expirySubset
}

termStructure <- do.call(rbind, termStructure)
expiryStructure <- do.call(rbind, expiryStructure)

Again, one more visualization of when we have a suitable number of contracts:

simultaneousContracts <- xts(apply(termStructure, 1, sumNonNA), order.by=index(termStructure))
chart.TimeSeries(simultaneousContracts)

And in order to preserve the most data, we’ll cut the burn-in period off when we first have 7 contracts trading at once.

# first date with at least 7 contracts trading simultaneously (2006-10-23)
first(index(simultaneousContracts)[simultaneousContracts >= 7])
termStructure <- termStructure["2006-10-23::"]
expiryStructure <- expiryStructure[index(termStructure)]

So there you have it: a continuous VIX futures term structure, as given by the official CBOE settlements. While some may try to simulate a trading strategy based on these contracts, I myself prefer to use them as indicators or features for a model that would then buy XIV or VXX.
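As a small example of the indicator idea (again, a sketch of my own, assuming the C1 through C8 column names assigned in the loop above), a daily flag for whether the front of the curve is in contango can be computed directly from the continuous contracts:

# TRUE when the second month settles above the front month
frontContango <- termStructure$C2 > termStructure$C1
colnames(frontContango) <- "frontContango"

# fraction of days on which the front of the curve was in contango
mean(coredata(frontContango), na.rm = TRUE)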

One last trick, for those who want to visualize things: a way to plot the term structure on any given day, in particular the most recent one.

plot(t(coredata(last(termStructure))), type = 'b')

A clear display of contango.

A post on how to compute synthetic constant-expiry contracts (e.g. constant 30-day expiry contracts) will be forthcoming in the near future.

Thanks for reading.

NOTE: I am currently interested in networking and full-time positions which may benefit from my skill set. My LinkedIn profile can be found here.