A Different Way To Think About Drawdown — Geometric Calmar Ratio

This post will discuss the idea of the geometric Calmar ratio — a way to modify the Calmar ratio to account for compounding returns.

So, one thing that recently had me somewhat annoyed about my interpretation of the Calmar ratio is this: essentially, I interpret it as a back-of-the-envelope measure of how many years it takes to recover from the worst loss. That is, if a strategy makes 10% a year (on average) and takes a 10% loss, intuition says that, from that point on, it'll take about a year on average to make up that loss; that is, a Calmar ratio of 1. Put another way, it means that, on average, the strategy will have made that money back about 252 trading days later.

But that isn't really the case in all circumstances. If an investment manager is looking to create a modest return for their clients, somewhere between 5-10%, then sure, the Calmar ratio approximation and interpretation make sense in that context. It also makes sense in the context of "every year, we withdraw all profits and deposit enough to make up for any losses". But in the context of a hedge fund trying to create large, market-beating returns for its investors, such funds can have fairly substantial drawdowns.

Citadel, one of the gold standards of the hedge fund industry, had a drawdown of more than 50% during the financial crisis, and of course, there was at least one fund that blew up in the storm-in-a-teacup volatility spike on Feb. 5 (https://www.reuters.com/article/us-usa-fund-volatility/exclusive-ljm-partners-shutting-its-doors-after-vol-mageddon-losses-in-u-s-stocks-idUSKCN1GC29H). In other words, if those guys were professionals, what does that make me? Or if I'm an amateur, what does that make them?

In any case, in order to recover from such losses, it’s clear that a strategy would need to make back a lot more than what it lost. Lose 25%? 33% is the high water mark. Lose 33%? 50% to get back to even. Lose 50%? 100%. Beyond that? You get the idea.
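
To make that arithmetic explicit, here's a quick standalone R check (not part of the strategy code):

losses <- c(0.25, 1/3, 0.50)
round(1/(1 - losses) - 1, 4) # gain required to reach the high water mark: 0.3333 0.5000 1.0000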

In order to capture this dynamic, we should write a new Calmar ratio to express this idea.

So here's a function to compute the geometric Calmar ratio:

require(quantmod)
require(PerformanceAnalytics)

geomCalmar <- function(r) {
  rAnn <- Return.annualized(r)
  maxDD <- maxDrawdown(r)
  toHighwater <- 1/(1-maxDD) - 1
  out <- rAnn/toHighwater
  return(out)
}

So, let's compare how some symbols stack up. We'll take a high-volatility name (AMZN), the good old S&P 500 (SPY), and a very low volatility instrument (SHY).

getSymbols(c('AMZN', 'SPY', 'SHY'), from = '1990-01-01')
rets <- na.omit(cbind(Return.calculate(Ad(AMZN)), Return.calculate(Ad(SPY)), Return.calculate(Ad(SHY))))
compare <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets), CalmarRatio(rets), geomCalmar(rets))
rownames(compare)[6] <- "Geometric Calmar"
compare

The returns start from July 31, 2002. Here are the statistics.

                           AMZN.Adjusted SPY.Adjusted SHY.Adjusted
Annualized Return             0.3450000   0.09110000   0.01940000
Annualized Std Dev            0.4046000   0.18630000   0.01420000
Annualized Sharpe (Rf=0%)     0.8528000   0.48860000   1.36040000
Worst Drawdown                0.6525491   0.55189461   0.02231459
Calmar Ratio                  0.5287649   0.16498652   0.86861760
Geometric Calmar              0.1837198   0.07393135   0.84923475

For my own proprietary volatility trading strategy, which has a Calmar above 2 (the finger-in-the-air interpretation: in the worst-case scenario, you make a new equity high within about six months), here are the statistics:

> CalmarRatio(stratRetsAggressive[[2]]['2011::'])
                Close
Calmar Ratio 3.448497
> geomCalmar(stratRetsAggressive[[2]]['2011::'])
                     Close
Annualized Return 2.588094

(Note that the row label in the geomCalmar output is simply inherited from Return.annualized; the value shown is the geometric Calmar.) Essentially, because losses compound, the geometric Calmar ratio will always be lower than the standard Calmar ratio: the gain required to recover, 1/(1 - maxDD) - 1, is always at least as large as maxDD itself.

In any case, I hope this gives readers some food for thought about re-evaluating how they use the Calmar Ratio.

Thanks for reading.

NOTES: registration for R/Finance 2018 is open. As usual, I’ll be giving a lightning talk, this time on volatility trading.

I am currently contracting and am seeking networking opportunities, along with information about prospective full-time roles starting in July. Those interested in my skill set can feel free to reach out to me on LinkedIn here.

Creating a Table of Monthly Returns With R and a Volatility Trading Interview

This post will cover two aspects: the first will be a function to convert daily returns into a table of monthly returns, complete with drawdowns and annual returns. The second will be an interview I had with David Lincoln (now on youtube) to talk about the events of Feb. 5, 2018, and my philosophy on volatility trading.

So, to start off with, below is a function that I wrote that's supposed to mimic PerformanceAnalytics's table.CalendarReturns. What table.CalendarReturns is supposed to do is create a year-by-month table of monthly returns, with months across and years down. However, it never seemed to give me the output I was expecting, so I went and wrote another function.

Here’s the code for the function:

require(data.table)
require(PerformanceAnalytics)
require(scales)
require(Quandl)

# helper functions
pastePerc <- function(x) {return(paste0(comma(x),"%"))}
rowGsub <- function(x) {x <- gsub("NA%", "NA", x);x}

calendarReturnTable <- function(rets, digits = 3, percent = FALSE) {
  
  # get maximum drawdown using daily returns
  dds <- apply.yearly(rets, maxDrawdown)
  
  # get monthly returns
  rets <- apply.monthly(rets, Return.cumulative)
  
  # convert to data frame with year, month, and monthly return value
  dfRets <- cbind(year(index(rets)), month(index(rets)), coredata(rets))
  
  # convert to data table and reshape into year x month table
  dfRets <- data.frame(dfRets)
  colnames(dfRets) <- c("Year", "Month", "Value")
  monthNames <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
  for(i in 1:length(monthNames)) {
    dfRets$Month[dfRets$Month==i] <- monthNames[i]
  }
  dfRets <- data.table(dfRets)
  dfRets <- data.table::dcast(dfRets, Year~Month)
  
  # create row names and rearrange table in month order
  dfRets <- data.frame(dfRets)
  yearNames <- dfRets$Year
  rownames(dfRets) <- yearNames; dfRets$Year <- NULL
  dfRets <- dfRets[,monthNames]
  
  # append yearly returns and drawdowns
  yearlyRets <- apply.yearly(rets, Return.cumulative)
  dfRets$Annual <- yearlyRets
  dfRets$DD <- dds
  
  # convert to percentage
  if(percent) {
    dfRets <- dfRets * 100
  }
  
  # round for formatting
  dfRets <- apply(dfRets, 2, round, digits)
   
  # paste the percentage sign
  if(percent) {
    dfRets <- apply(dfRets, 2, pastePerc)
    dfRets <- apply(dfRets, 2, rowGsub)
    dfRets <- data.frame(dfRets)
    rownames(dfRets) <- yearNames
  }
  return(dfRets)
}

Here's what the output looks like.

spy <- Quandl("EOD/SPY", type='xts', start_date='1990-01-01')
spyRets <- Return.calculate(spy$Adj_Close)
calendarReturnTable(spyRets, percent = FALSE)
        Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec Annual    DD
1993  0.000  0.011  0.022 -0.026  0.027  0.004 -0.005  0.038 -0.007  0.020 -0.011  0.012  0.087 0.047
1994  0.035 -0.029 -0.042  0.011  0.016 -0.023  0.032  0.038 -0.025  0.028 -0.040  0.007  0.004 0.085
1995  0.034  0.041  0.028  0.030  0.040  0.020  0.032  0.004  0.042 -0.003  0.044  0.016  0.380 0.026
1996  0.036  0.003  0.017  0.011  0.023  0.009 -0.045  0.019  0.056  0.032  0.073 -0.024  0.225 0.076
1997  0.062  0.010 -0.044  0.063  0.063  0.041  0.079 -0.052  0.048 -0.025  0.039  0.019  0.335 0.112
1998  0.013  0.069  0.049  0.013 -0.021  0.043 -0.014 -0.141  0.064  0.081  0.056  0.065  0.287 0.190
1999  0.035 -0.032  0.042  0.038 -0.023  0.055 -0.031 -0.005 -0.022  0.064  0.017  0.057  0.204 0.117
2000 -0.050 -0.015  0.097 -0.035 -0.016  0.020 -0.016  0.065 -0.055 -0.005 -0.075 -0.005 -0.097 0.171
2001  0.044 -0.095 -0.056  0.085 -0.006 -0.024 -0.010 -0.059 -0.082  0.013  0.078  0.006 -0.118 0.288
2002 -0.010 -0.018  0.033 -0.058 -0.006 -0.074 -0.079  0.007 -0.105  0.082  0.062 -0.057 -0.216 0.330
2003 -0.025 -0.013  0.002  0.085  0.055  0.011  0.018  0.021 -0.011  0.054  0.011  0.050  0.282 0.137
2004  0.020  0.014 -0.013 -0.019  0.017  0.018 -0.032  0.002  0.010  0.013  0.045  0.030  0.107 0.075
2005 -0.022  0.021 -0.018 -0.019  0.032  0.002  0.038 -0.009  0.008 -0.024  0.044 -0.002  0.048 0.070
2006  0.024  0.006  0.017  0.013 -0.030  0.003  0.004  0.022  0.027  0.032  0.020  0.013  0.158 0.076
2007  0.015 -0.020  0.012  0.044  0.034 -0.015 -0.031  0.013  0.039  0.014 -0.039 -0.011  0.051 0.099
2008 -0.060 -0.026 -0.009  0.048  0.015 -0.084 -0.009  0.015 -0.094 -0.165 -0.070  0.010 -0.368 0.476
2009 -0.082 -0.107  0.083  0.099  0.058 -0.001  0.075  0.037  0.035 -0.019  0.062  0.019  0.264 0.271
2010 -0.036  0.031  0.061  0.015 -0.079 -0.052  0.068 -0.045  0.090  0.038  0.000  0.067  0.151 0.157
2011  0.023  0.035  0.000  0.029 -0.011 -0.017 -0.020 -0.055 -0.069  0.109 -0.004  0.010  0.019 0.186
2012  0.046  0.043  0.032 -0.007 -0.060  0.041  0.012  0.025  0.025 -0.018  0.006  0.009  0.160 0.097
2013  0.051  0.013  0.038  0.019  0.024 -0.013  0.052 -0.030  0.032  0.046  0.030  0.026  0.323 0.056
2014 -0.035  0.046  0.008  0.007  0.023  0.021 -0.013  0.039 -0.014  0.024  0.027 -0.003  0.135 0.073
2015 -0.030  0.056 -0.016  0.010  0.013 -0.020  0.023 -0.061 -0.025  0.085  0.004 -0.017  0.013 0.119
2016 -0.050 -0.001  0.067  0.004  0.017  0.003  0.036  0.001  0.000 -0.017  0.037  0.020  0.120 0.103
2017  0.018  0.039  0.001  0.010  0.014  0.006  0.021  0.003  0.020  0.024  0.031  0.012  0.217 0.026
2018  0.056 -0.031     NA     NA     NA     NA     NA     NA     NA     NA     NA     NA  0.023 0.101

And with percentage formatting:

calendarReturnTable(spyRets, percent = TRUE)
Using 'Value' as value column. Use 'value.var' to override
         Jan      Feb     Mar     Apr     May     Jun     Jul      Aug      Sep      Oct     Nov     Dec   Annual      DD
1993  0.000%   1.067%  2.241% -2.559%  2.697%  0.367% -0.486%   3.833%  -0.726%   1.973% -1.067%  1.224%   8.713%  4.674%
1994  3.488%  -2.916% -4.190%  1.121%  1.594% -2.288%  3.233%   3.812%  -2.521%   2.843% -3.982%  0.724%   0.402%  8.537%
1995  3.361%   4.081%  2.784%  2.962%  3.967%  2.021%  3.217%   0.445%   4.238%  -0.294%  4.448%  1.573%  38.046%  2.595%
1996  3.558%   0.319%  1.722%  1.087%  2.270%  0.878% -4.494%   1.926%   5.585%   3.233%  7.300% -2.381%  22.489%  7.629%
1997  6.179%   0.957% -4.414%  6.260%  6.321%  4.112%  7.926%  -5.180%   4.808%  -2.450%  3.870%  1.910%  33.478% 11.203%
1998  1.288%   6.929%  4.876%  1.279% -2.077%  4.259% -1.351% -14.118%   6.362%   8.108%  5.568%  6.541%  28.688% 19.030%
1999  3.523%  -3.207%  4.151%  3.797% -2.287%  5.538% -3.102%  -0.518%  -2.237%   6.408%  1.665%  5.709%  20.388% 11.699%
2000 -4.979%  -1.523%  9.690% -3.512% -1.572%  1.970% -1.570%   6.534%  -5.481%  -0.468% -7.465% -0.516%  -9.730% 17.120%
2001  4.446%  -9.539% -5.599%  8.544% -0.561% -2.383% -1.020%  -5.933%  -8.159%   1.302%  7.798%  0.562% -11.752% 28.808%
2002 -0.980%  -1.794%  3.324% -5.816% -0.593% -7.376% -7.882%   0.680% -10.485%   8.228%  6.168% -5.663% -21.588% 32.968%
2003 -2.459%  -1.348%  0.206%  8.461%  5.484%  1.066%  1.803%   2.063%  -1.089%   5.353%  1.092%  5.033%  28.176% 13.725%
2004  1.977%   1.357% -1.320% -1.892%  1.712%  1.849% -3.222%   0.244%   1.002%   1.288%  4.451%  3.015%  10.704%  7.526%
2005 -2.242%   2.090% -1.828% -1.874%  3.222%  0.150%  3.826%  -0.937%   0.800%  -2.365%  4.395% -0.190%   4.827%  6.956%
2006  2.401%   0.573%  1.650%  1.263% -3.012%  0.264%  0.448%   2.182%   2.699%   3.152%  1.989%  1.337%  15.847%  7.593%
2007  1.504%  -1.962%  1.160%  4.430%  3.392% -1.464% -3.131%   1.283%   3.870%   1.357% -3.873% -1.133%   5.136%  9.925%
2008 -6.046%  -2.584% -0.903%  4.766%  1.512% -8.350% -0.899%   1.545%  -9.437% -16.519% -6.961%  0.983% -36.807% 47.592%
2009 -8.211% -10.745%  8.348%  9.935%  5.845% -0.068%  7.461%   3.694%   3.545%  -1.923%  6.161%  1.907%  26.364% 27.132%
2010 -3.634%   3.119%  6.090%  1.547% -7.945% -5.175%  6.830%  -4.498%   8.955%   3.820%  0.000%  6.685%  15.057% 15.700%
2011  2.330%   3.474%  0.010%  2.896% -1.121% -1.688% -2.000%  -5.498%  -6.945%  10.915% -0.406%  1.044%   1.888% 18.609%
2012  4.637%   4.341%  3.216% -0.668% -6.006%  4.053%  1.183%   2.505%   2.535%  -1.820%  0.566%  0.900%  15.991%  9.687%
2013  5.119%   1.276%  3.798%  1.921%  2.361% -1.336%  5.168%  -2.999%   3.168%   4.631%  2.964%  2.589%  32.307%  5.552%
2014 -3.525%   4.552%  0.831%  0.695%  2.321%  2.064% -1.344%   3.946%  -1.379%   2.355%  2.747% -0.256%  13.462%  7.273%
2015 -2.963%   5.620% -1.574%  0.983%  1.286% -2.029%  2.259%  -6.095%  -2.543%   8.506%  0.366% -1.718%   1.252% 11.910%
2016 -4.979%  -0.083%  6.724%  0.394%  1.701%  0.350%  3.647%   0.120%   0.008%  -1.734%  3.684%  2.028%  12.001% 10.306%
2017  1.789%   3.929%  0.126%  0.993%  1.411%  0.637%  2.055%   0.292%   2.014%   2.356%  3.057%  1.209%  21.700%  2.609%
2018  5.636%  -3.118%      NA      NA      NA      NA      NA       NA       NA       NA      NA      NA   2.342% 10.102%

That covers it for the function. Now, onto volatility trading. Dodging the February short-volatility meltdown has, in my opinion, been one of the best out-of-sample validators of my volatility trading research. My subscriber numbers reflect it: I've received 12 new subscribers this month, as individuals interested in the volatility trading space have gained a newfound respect for the risk management my system uses. After all, it's the down months that vindicate systematic traders like myself who do not employ leverage in the up times. Those interested in following my trades can subscribe here. Furthermore, I recently had the chance to speak with David Lincoln about my background, my philosophy on trading in general, and trading volatility in particular. Those interested can view the interview here.

Thanks for reading.

NOTE: I am currently interested in networking, full-time positions related to my skill set, and long-term consulting projects. Those interested in discussing professional opportunities can find me on LinkedIn after writing a note expressing their interest.

Which Implied Volatility Ratio Is Best?

This post will be about comparing a volatility signal using three different variations of implied volatility indices to predict when to enter a short volatility position.

In volatility trading, there are three separate implied volatility indices that have a somewhat long history for trading: the VIX (everyone knows this one), the VXV (more recently renamed the VIX3M), which is like the VIX except that it covers a three-month period, and the VXMT, which is the six-month implied volatility index.

This invites investigation into three separate implied volatility ratios as predictors for entering a short (or long) volatility position: VIX/VIX3M (that is, VIX/VXV), VIX/VXMT, and VIX3M/VXMT.

So, let’s get the data.

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(data.table)

download("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vix3mdailyprices.csv", 
         destfile="vxvData.csv")
download("http://www.cboe.com/publish/ScheduledTask/MktData/datahouse/vxmtdailyprices.csv", 
         destfile="vxmtData.csv")

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))


vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
vxmt <- xts(read.zoo("vxmtData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))

download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))

xivRets <- Return.calculate(Cl(xiv))

One quick strategy to investigate is simple: the idea that the ratio should be below 1 (i.e., the implied volatility term structure is in contango) and decreasing (below a moving average). So, when the ratio is below 1 (that is, longer-term implied volatility is greater than shorter-term) and the ratio is below its 60-day moving average, the strategy takes a position in XIV.

Here’s the code to do that.

vixVix3m <- Cl(VIX)/Cl(vxv)
vixVxmt <- Cl(VIX)/Cl(vxmt)
vix3mVxmt <- Cl(vxv)/Cl(vxmt)

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

maShort <- SMA(vixVix3m, 60)
maMed <- SMA(vixVxmt, 60)
maLong <- SMA(vix3mVxmt, 60)

sigShort <- vixVix3m < 1 & vixVix3m < maShort
sigMed <- vixVxmt < 1 & vixVxmt < maMed 
sigLong <- vix3mVxmt < 1 & vix3mVxmt < maLong 

retsShort <- lag(sigShort, 2) * xivRets 
retsMed <- lag(sigMed, 2) * xivRets 
retsLong <- lag(sigLong, 2) * xivRets

compare <- na.omit(cbind(retsShort, retsMed, retsLong))
colnames(compare) <- c("Short", "Medium", "Long")
charts.PerformanceSummary(compare)
stratStats(compare)

With the following performance:

[Figure: 3ratios.PNG (equity curves for the three ratio strategies)]

> stratStats(compare)
                              Short    Medium     Long
Annualized Return         0.5485000 0.6315000 0.638600
Annualized Std Dev        0.3874000 0.3799000 0.378900
Annualized Sharpe (Rf=0%) 1.4157000 1.6626000 1.685600
Worst Drawdown            0.5246983 0.5318472 0.335756
Calmar Ratio              1.0453627 1.1873711 1.901976
Ulcer Performance Index   3.7893478 4.6181788 5.244137

In other words, the VIX3M/VXMT ratio sports the lowest drawdown (by a large margin) along with the highest returns.

So, when people talk about which implied volatility ratio to use, I think this offers some strong evidence for using the longer-dated horizon as the predictor. It's also why it forms the basis of my subscription strategy.

Thanks for reading.

NOTE: I am currently seeking a full-time position (remote or in the northeast U.S.) related to my skill set demonstrated on this blog. Please message me on LinkedIn if you know of any opportunities which may benefit from my skill set.

Replicating Volatility ETN Returns From CBOE Futures

This post will demonstrate how to replicate the volatility ETNs (XIV, VXX, ZIV, VXZ) from CBOE futures, thereby allowing any individual to create synthetic ETN returns from before their inception, free of cost.

So, before I get to the actual algorithm, note that it depends on an update to the term structure algorithm I shared some months back.

In that algorithm, mistakenly (or for the sake of simplicity), I used calendar days as the time to expiry, when it should have been business days, which means also accounting for weekends and holidays, an irritating artifact to keep track of.

So here’s the salient change, in the loop that calculates times to expiry:

source("tradingHolidays.R")

masterlist <- list()
timesToExpiry <- list()
for(i in 1:length(contracts)) {
  
  # obtain data
  contract <- contracts[i]
  dataFile <- paste0(stem, contract, "_VX.csv")
  expiryYear <- paste0("20",substr(contract, 2, 3))
  expiryMonth <- monthMaps$monthNum[monthMaps$futureStem == substr(contract,1,1)]
  expiryDate <- dates$dates[dates$dateMon == paste(expiryYear, expiryMonth, sep="-")]
  data <- tryCatch(
    {
      suppressWarnings(fread(dataFile))
    }, error = function(e){return(NULL)}
  )
  
  if(!is.null(data)) {
    # create dates
    dataDates <- as.Date(data$`Trade Date`, format = '%m/%d/%Y')
    
    # create time to expiration xts
    toExpiry <- xts(bizdays(dataDates, expiryDate), order.by=dataDates)
    colnames(toExpiry) <- contract
    timesToExpiry[[i]] <- toExpiry
    
    # get settlements
    settlement <- xts(data$Settle, order.by=dataDates)
    colnames(settlement) <- contract
    masterlist[[i]] <- settlement
  }
}

The one salient line in particular is this:

toExpiry <- xts(bizdays(dataDates, expiryDate), order.by=dataDates)

What is this bizdays function? It comes from the bizdays package in R.

There’s also the tradingHolidays.R script, which makes further use of the bizdays package. Here’s what goes on under the hood in tradingHolidays.R, for those that wish to replicate the code:

easters <- read.csv("easters.csv", header = FALSE)
easterDates <- as.Date(paste0(substr(easters$V2, 1, 6), easters$V3), format = '%m/%d/%Y')-2

nonEasters <- read.csv("nonEasterHolidays.csv", header = FALSE)
nonEasterDates <- as.Date(paste0(substr(nonEasters$V2, 1, 6), nonEasters$V3), format = '%m/%d/%Y')

weekdayNonEasters <- nonEasterDates[which(!weekdays(nonEasterDates) %in% c("Saturday", "Sunday"))]

hurricaneSandy <- as.Date(c("2012-10-29", "2012-10-30"))

holidays <- sort(c(easterDates, weekdayNonEasters, hurricaneSandy))
holidays <- holidays[holidays > as.Date("2003-12-31") & holidays < as.Date("2019-01-01")]


require(bizdays)

create.calendar("HolidaysUS", holidays, weekdays = c("saturday", "sunday"))
bizdays.options$set(default.calendar = "HolidaysUS")

There are two CSVs that I manually compiled, but will share screenshots of: the Easter holidays (which have to be adjusted, since the actual market holiday is Good Friday, two days before Easter Sunday), and the rest of the national holidays.

Here is what the easters csv looks like:

[Screenshot: easters.csv]

And here is the nonEasterHolidays CSV, which contains New Year's Day, MLK Jr. Day, President's Day, Memorial Day, Independence Day, Labor Day, Thanksgiving Day, and Christmas Day (along with their observed dates):

[Screenshot: nonEasterHolidays.csv]

Furthermore, we need to adjust for the two days that equities were not trading due to Hurricane Sandy.

So then, the list of holidays looks like this:

> holidays
  [1] "2004-01-01" "2004-01-19" "2004-02-16" "2004-04-09" "2004-05-31" "2004-07-05" "2004-09-06" "2004-11-25"
  [9] "2004-12-24" "2004-12-31" "2005-01-17" "2005-02-21" "2005-03-25" "2005-05-30" "2005-07-04" "2005-09-05"
 [17] "2005-11-24" "2005-12-26" "2006-01-02" "2006-01-16" "2006-02-20" "2006-04-14" "2006-05-29" "2006-07-04"
 [25] "2006-09-04" "2006-11-23" "2006-12-25" "2007-01-01" "2007-01-02" "2007-01-15" "2007-02-19" "2007-04-06"
 [33] "2007-05-28" "2007-07-04" "2007-09-03" "2007-11-22" "2007-12-25" "2008-01-01" "2008-01-21" "2008-02-18"
 [41] "2008-03-21" "2008-05-26" "2008-07-04" "2008-09-01" "2008-11-27" "2008-12-25" "2009-01-01" "2009-01-19"
 [49] "2009-02-16" "2009-04-10" "2009-05-25" "2009-07-03" "2009-09-07" "2009-11-26" "2009-12-25" "2010-01-01"
 [57] "2010-01-18" "2010-02-15" "2010-04-02" "2010-05-31" "2010-07-05" "2010-09-06" "2010-11-25" "2010-12-24"
 [65] "2011-01-17" "2011-02-21" "2011-04-22" "2011-05-30" "2011-07-04" "2011-09-05" "2011-11-24" "2011-12-26"
 [73] "2012-01-02" "2012-01-16" "2012-02-20" "2012-04-06" "2012-05-28" "2012-07-04" "2012-09-03" "2012-10-29"
 [81] "2012-10-30" "2012-11-22" "2012-12-25" "2013-01-01" "2013-01-21" "2013-02-18" "2013-03-29" "2013-05-27"
 [89] "2013-07-04" "2013-09-02" "2013-11-28" "2013-12-25" "2014-01-01" "2014-01-20" "2014-02-17" "2014-04-18"
 [97] "2014-05-26" "2014-07-04" "2014-09-01" "2014-11-27" "2014-12-25" "2015-01-01" "2015-01-19" "2015-02-16"
[105] "2015-04-03" "2015-05-25" "2015-07-03" "2015-09-07" "2015-11-26" "2015-12-25" "2016-01-01" "2016-01-18"
[113] "2016-02-15" "2016-03-25" "2016-05-30" "2016-07-04" "2016-09-05" "2016-11-24" "2016-12-26" "2017-01-02"
[121] "2017-01-16" "2017-02-20" "2017-04-14" "2017-05-29" "2017-07-04" "2017-09-04" "2017-11-23" "2017-12-25"
[129] "2018-01-01" "2018-01-15" "2018-02-19" "2018-03-30" "2018-05-28" "2018-07-04" "2018-09-03" "2018-11-22"
[137] "2018-12-25"

So once we have a list of holidays, we use the bizdays package to set the holidays and weekends (Saturday and Sunday) as our non-business days, and use the resulting calendar to calculate the correct times to expiry.
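
As a quick illustrative check (the dates here are my own example, not from the post): with the HolidaysUS calendar set as the default above, bizdays() counts the business days between two dates, skipping weekends and the listed holidays.

# this span straddles two weekends and the 2018-01-15 MLK holiday from the list above
bizdays(as.Date("2018-01-02"), as.Date("2018-01-17"))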

So, now that we have the updated expiry structure, we can write a function that will correctly replicate the four main volatility ETNs–XIV, VXX, ZIV, and VXZ.

Here’s the English explanation:

VXX is made up of two contracts–the front month, and the back month, and has a certain number of trading days (AKA business days) that it trades until expiry, say, 17. During that timeframe, the front month (let’s call it M1) goes from being the entire allocation of funds, to being none of the allocation of funds, as the front month contract approaches expiry. That is, as a contract approaches expiry, the second contract gradually receives more and more weight, until, at expiry of the front month contract, the second month contract contains all of the funds–just as it *becomes* the front month contract. So, say you have 17 days to expiry on the front month. At the expiry of the previous contract, the second month will have a weight of 17/17–100%, as it becomes the front month. Then, the next day, that contract, now the front month, will have a weight of 16/17 at settle, then 15/17, and so on. That numerator is called dr, and the denominator is called dt.
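
To make the weighting scheme concrete, here's a small sketch of a hypothetical 17-day roll cycle (illustrative numbers, not taken from the actual futures data):

dt <- 17              # trading days in this roll cycle
dr <- 17:0            # days remaining until the front month expires
wtFront <- dr/dt      # 17/17, 16/17, ..., 0/17
wtBack <- 1 - wtFront
round(cbind(dr, wtFront, wtBack), 4)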

However, beyond this, there's a second mechanism responsible for the VXX looking the way it does compared to a basic futures contract (that is, the decay responsible for short volatility's profits), and that is the "instantaneous" rebalancing. That is, the return for a given day is today's settles multiplied by yesterday's weights, over yesterday's settles multiplied by yesterday's weights, minus one. Writing the front-month weight as w_{t-1} = (dr/dt)_{t-1}, that's:

r_t = [S1_t * w_{t-1} + S2_t * (1 - w_{t-1})] / [S1_{t-1} * w_{t-1} + S2_{t-1} * (1 - w_{t-1})] - 1

where S1 and S2 are the front- and second-month settles. So, when you move forward a day, today's weights become tomorrow's t-1 weights. Yet when were the assets actually able to be rebalanced? Well, in ETNs such as VXX and VXZ, the "hand-waving" is that it happens instantaneously. That is, the weight for the front month was 93%, the return was realized at settlement (that is, from settle to settle), and immediately after that return was realized, the front month's weight shifts from 93% to, say, 88%. So, say Credit Suisse (which issues these ETNs) has $10,000 worth of XIV outstanding immediately after realizing returns (just to keep the arithmetic and the number of zeroes tolerable; obviously there's a lot more in reality). It will sell $500 of its $9,300 in the front month and immediately move it to the second month, so it goes from $9,300 in M1 and $700 in M2 to $8,800 in M1 and $1,200 in M2. When did those $500 move? Immediately, instantaneously, and if you like, you can apply Clarke's Third Law and call it "magically".

The only exception is the day after roll day, in which the second month simply becomes the front month as the previous front month expires, so what was a 100% weight on the second month will now be a 100% weight on the front month, so there’s some extra code that needs to be written to make that distinction.

That's the way it works for VXX and XIV. What's the difference for VXZ and ZIV? It's really simple: instead of M1 and M2, VXZ uses the exact same weighting scheme (that is, the time remaining on the front month versus how many days that contract spends as the front month), but applied to M4, M5, M6, and M7, with M4 taking a weight of dr/dt, M5 and M6 always being 1, and M7 being 1 - dr/dt.

In any case, here’s the code.

syntheticXIV <- function(termStructure, expiryStructure) {
  
  # find expiry days
  zeroDays <- which(expiryStructure$C1 == 0)
  
  # dt = days in contract period, set after expiry day of previous contract
  dt <- zeroDays + 1
  dtXts <- expiryStructure$C1[dt,]
  
  # create dr (days remaining) and dt structure
  drDt <- cbind(expiryStructure[,1], dtXts)
  colnames(drDt) <- c("dr", "dt")
  drDt$dt <- na.locf(drDt$dt)
  
  # add one more to dt to account for zero day
  drDt$dt <- drDt$dt + 1
  drDt <- na.omit(drDt)
  
  # assign weights for front month and back month based on dr and dt
  wtC1 <- drDt$dr/drDt$dt
  wtC2 <- 1-wtC1
  
  # realize returns with old weights, "instantaneously" shift to new weights after realizing returns at settle
  # assumptions are a bit optimistic, I think
  valToday <- termStructure[,1] * lag(wtC1) + termStructure[,2] * lag(wtC2)
  valYesterday <- lag(termStructure[,1]) * lag(wtC1) + lag(termStructure[,2]) * lag(wtC2)
  syntheticRets <- (valToday/valYesterday) - 1
  
  # on the day after roll, C2 becomes C1, so reflect that in returns
  zeroes <- which(drDt$dr == 0) + 1 
  zeroRets <- termStructure[,1]/lag(termStructure[,2]) - 1
  
  # override usual returns with returns that reflect back month becoming front month after roll day
  syntheticRets[index(syntheticRets)[zeroes]] <- zeroRets[index(syntheticRets)[zeroes]]
  syntheticRets <- na.omit(syntheticRets)
  
  # vxxRets are syntheticRets
  vxxRets <- syntheticRets
  
  # repeat same process for vxz -- except it's dr/dt * 4th contract + 5th + 6th + 1-dr/dt * 7th contract
  vxzToday <- termStructure[,4] * lag(wtC1) + termStructure[,5] + termStructure[,6] + termStructure[,7] * lag(wtC2)
  vxzYesterday <- lag(termStructure[,4]) * lag(wtC1) + lag(termStructure[, 5]) + lag(termStructure[,6]) + lag(termStructure[,7]) * lag(wtC2)
  syntheticVxz <- (vxzToday/vxzYesterday) - 1
  
  # on zero expiries, next day will be equal (4+5+6)/lag(5+6+7) - 1
  zeroVxz <- (termStructure[,4] + termStructure[,5] + termStructure[,6])/
    lag(termStructure[,5] + termStructure[,6] + termStructure[,7]) - 1
  syntheticVxz[index(syntheticVxz)[zeroes]] <- zeroVxz[index(syntheticVxz)[zeroes]]
  syntheticVxz <- na.omit(syntheticVxz)
  
  vxzRets <- syntheticVxz
  
  # write out weights for actual execution
  if(last(drDt$dr!=0)) {
    print(paste("Previous front-month weight was", round(last(drDt$dr)/last(drDt$dt), 5)))
    print(paste("Front-month weight at settle today will be", round((last(drDt$dr)-1)/last(drDt$dt), 5)))
    if((last(drDt$dr)-1)/last(drDt$dt)==0){
      print("Front month will be zero at end of day. Second month becomes front month.")
    }
  } else {
    print("Previous front-month weight was zero. Second month became front month.")
    print(paste("New front month weights at settle will be", round(last(expiryStructure[,2]-1)/last(expiryStructure[,2]), 5)))
  }
  
  return(list(vxxRets, vxzRets))
}

So, a big thank you goes out to Michael Kapler of Systematic Investor Toolbox for originally doing the replication and providing his code. My code essentially does the same thing in, hopefully, a better-commented way.

So, ultimately, does it work? Well, using my updated term structure code, I can test that.

While I’m not going to paste my entire term structure code (again, available here, just update the script with my updates from this post), here’s how you’d run the new function:

> out <- syntheticXIV(termStructure, expiryStructure)
[1] "Previous front-month weight was 0.17647"
[1] "Front-month weight at settle today will be 0.11765"

And since it returns both the vxx returns and the vxz returns, we can compare them both.

compareXIV <- na.omit(cbind(xivRets, out[[1]] * -1))
colnames(compareXIV) <- c("XIV returns", "Replication returns")
charts.PerformanceSummary(compareXIV)

With the result:

[Figure: xivComparison (XIV vs. futures replication equity curves)]

Basically, a perfect match.

Let’s do the same thing, with ZIV.

compareZIV <- na.omit(cbind(ZIVrets, out[[2]]*-1))
colnames(compareZIV) <- c("ZIV returns", "Replication returns")
charts.PerformanceSummary(compareZIV)

[Figure: zivComparison.PNG (ZIV vs. futures replication equity curves)]

So, rebuilding from the futures does a tiny bit better than the ETN. But the trajectory is largely identical.

That concludes this post. I hope it has shed some light on how these volatility ETNs work, and how to obtain them directly from the futures data published by the CBOE, which are the inputs to my term structure algorithm.

This also means that institutions interested in trading my strategy can obtain leverage to trade the futures-composite replicated variants of these ETNs, at greater volume.

Thanks for reading.

NOTES: For those interested in a retail subscription strategy to trading volatility, do not hesitate to subscribe to my volatility-trading strategy. For those interested in employing me full-time or for long-term consulting projects, I can be reached on my LinkedIn, or my email: ilya.kipnis@gmail.com.

(Don’t Get) Contangled Up In Noise

This post will be about investigating the efficacy of contango as a volatility trading signal.

For those that trade volatility (like me), a term you may see that’s somewhat ubiquitous is the term “contango”. What does this term mean?

Well, simple: it just means the ratio of the second month of VIX futures over the first. The idea is that when the second month of futures is higher than the first, people's outlook for volatility is greater in the future than it is for the present, and therefore the futures are "in contango", which is the case most of the time.

Furthermore, those that try to find decent volatility trading ideas may have often seen that futures in contango implies that holding a short volatility position will be profitable.

Is this the case?

Well, there’s an easy way to answer that.

First off, refer back to my post on obtaining free futures data from the CBOE.

Using this data, we can obtain our signal (that is, in order to run the code in this post, run the code in that post).

xivSig <- termStructure$C2 > termStructure$C1

Now, let's get our XIV data (again, big thanks to Mr. Helmuth Vollmeier for so kindly providing it).

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)
require(data.table)

download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")

download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
         destfile="longVXX.txt") #requires downloader package

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

Now, here's how this works: as the CBOE doesn't update its settles until around 9:45 AM EST on the following day (e.g., a Tuesday's settle data won't be released until Wednesday at 9:45 AM EST), we have to enter at the close of the day after the signal fires. (For those wondering, my subscription strategy uses this mechanism, giving subscribers ample time to execute throughout the day.)
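
Here's a toy illustration of that alignment convention (made-up numbers, purely to show what lag(signal, 2) does):

require(xts)
dates <- as.Date("2018-01-01") + 0:4
sig <- xts(c(1, 1, 0, 1, 1), order.by = dates)                  # signal computed from day-t settles
rets <- xts(c(.01, -.02, .015, .005, -.01), order.by = dates)   # day-t close-to-close returns
# lag(sig, 2) pairs the day-t signal with the day t+2 return:
# known the morning of t+1, traded at the close of t+1, earning day t+2's return
cbind(sig, lag(sig, 2), rets)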

So, let’s calculate our backtest returns. Here’s a stratStats function to compute some summary statistics.

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}
stratRets <- lag(xivSig, 2) * xivRets
charts.PerformanceSummary(stratRets)
stratStats(stratRets)

With the following results:

[Figure: contangled (equity curve for the contango-signal strategy)]

                                 C2
Annualized Return         0.3749000
Annualized Std Dev        0.4995000
Annualized Sharpe (Rf=0%) 0.7505000
Worst Drawdown            0.7491131
Calmar Ratio              0.5004585
Ulcer Performance Index   0.7984454

So, this is obviously a disaster. Visual inspection will show devastating, multi-year drawdowns. Using the table.Drawdowns command, we can view the worst ones.

> table.Drawdowns(stratRets, top = 10)
         From     Trough         To   Depth Length To Trough Recovery
1  2007-02-23 2008-12-15 2010-04-06 -0.7491    785       458      327
2  2010-04-21 2010-06-30 2010-10-25 -0.5550    131        50       81
3  2014-07-07 2015-12-11 2017-01-04 -0.5397    631       364      267
4  2012-03-27 2012-06-01 2012-07-17 -0.3680     78        47       31
5  2017-07-25 2017-08-17 2017-10-16 -0.3427     59        18       41
6  2013-09-27 2014-04-11 2014-06-18 -0.3239    182       136       46
7  2011-02-15 2011-03-16 2011-04-26 -0.3013     49        21       28
8  2013-02-20 2013-03-01 2013-04-23 -0.2298     44         8       36
9  2013-05-20 2013-06-20 2013-07-08 -0.2261     34        23       11
10 2012-12-19 2012-12-28 2013-01-23 -0.2154     23         7       16

So, the top 3 are horrendous, and then anything above 30% is still pretty awful. A couple of those drawdowns lasted multiple years as well, with a massive length to the trough. 458 trading days is nearly two years, and 364 is about one and a half years. Imagine seeing a strategy be consistently on the wrong side of the trade for nearly two years, and when all is said and done, you’ve lost three-fourths of everything in that strategy.

There’s no sugar-coating this: such a strategy can only be called utter trash.

Let’s try one modification: we’ll require both contango (C2 > C1), and that contango be above its 60-day simple moving average, similar to my VXV/VXMT strategy.

contango <- termStructure$C2/termStructure$C1
maContango <- SMA(contango, 60)
xivSig <- contango > 1 & contango > maContango
stratRets <- lag(xivSig, 2) * xivRets

With the results:

[Figure: stillContangled (equity curve for the filtered contango strategy)]

> stratStats(stratRets)
                                 C2
Annualized Return         0.4271000
Annualized Std Dev        0.3429000
Annualized Sharpe (Rf=0%) 1.2457000
Worst Drawdown            0.5401002
Calmar Ratio              0.7907792
Ulcer Performance Index   1.7515706

Drawdowns:

> table.Drawdowns(stratRets, top = 10)
         From     Trough         To   Depth Length To Trough Recovery
1  2007-04-17 2008-03-17 2010-01-06 -0.5401    688       232      456
2  2014-12-08 2014-12-31 2015-04-09 -0.2912     84        17       67
3  2017-07-25 2017-09-05 2017-12-08 -0.2610     97        30       67
4  2012-03-27 2012-06-21 2012-07-02 -0.2222     68        61        7
5  2012-07-20 2012-12-06 2013-02-08 -0.2191    139        96       43
6  2015-10-20 2015-11-13 2016-03-16 -0.2084    102        19       83
7  2013-12-27 2014-04-11 2014-05-23 -0.1935    102        73       29
8  2017-03-21 2017-05-17 2017-06-26 -0.1796     68        41       27
9  2012-02-07 2012-02-15 2012-03-12 -0.1717     24         7       17
10 2016-09-08 2016-09-09 2016-12-06 -0.1616     63         2       61

So: a Calmar ratio still safely below 1, an Ulcer Performance Index still in the basement, a maximum drawdown well past the point at which most people would have abandoned the strategy, and so on.

So, even though it was improved, it’s still safe to say this strategy doesn’t perform too well. Even after the large 2007-2008 drawdown, it still gets some things pretty badly wrong, like being exposed to all of August 2017.

While I think there are applications for contango in volatility investing, I don't think its use is in generating the long/short volatility signal on its own. Rather, I think other indices and sources of data do a better job of that, such as the VXV/VXMT, which has since been iterated on to form my subscription strategy.

Thanks for reading.

NOTE: I am currently seeking networking opportunities, long-term projects, and full-time positions related to my skill set. My linkedIn profile can be found here.

Comparing Some Strategies from Easy Volatility Investing, and the Table.Drawdowns Command

This post will be about comparing strategies from the paper “Easy Volatility Investing”, along with a demonstration of R’s table.Drawdowns command.

First off, before going further: while I think the execution assumptions found in EVI mean the strategies don't lend themselves well to actual live trading (and their risk/reward tradeoffs also leave a lot of room for improvement), I think these strategies are great as benchmarks.

So, some time ago, I did an out-of-sample test for one of the strategies found in EVI, which can be found here.

Using the same source of data, I also obtained data for SPY (though, again, AlphaVantage can also provide this service for free for those that don’t use Quandl).

Here’s the new code.

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)
require(data.table)

download("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vix3mdailyprices.csv", destfile="vxvData.csv")

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))

vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
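
# xivRets and vxxRets come from the previous posts in this series; here's a
# minimal sketch reconstructing them from the same Dropbox files used earlier
# (assuming the VXX file has the same layout as the XIV file, i.e. a Close column)
download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT", destfile="longXIV.txt")
download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", destfile="longVXX.txt")
xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))
vxxRets <- Return.calculate(Cl(vxx))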

ma_vRatio <- SMA(Cl(VIX)/Cl(vxv), 10)
xivSigVratio <- ma_vRatio < 1 
vxxSigVratio <- ma_vRatio > 1 

# V-ratio strategy (VIX/VXV)
vRatio <- lag(xivSigVratio) * xivRets + lag(vxxSigVratio) * vxxRets
# vRatio <- lag(xivSigVratio, 2) * xivRets + lag(vxxSigVratio, 2) * vxxRets


# Volatility Risk Premium Strategy
spy <- Quandl("EOD/SPY", start_date='1990-01-01', type = 'xts')
spyRets <- Return.calculate(spy$Adj_Close)
histVol <- runSD(spyRets, n = 10, sample = FALSE) * sqrt(252) * 100
vixDiff <- Cl(VIX) - histVol
maVixDiff <- SMA(vixDiff, 5)

vrpXivSig <- maVixDiff > 0 
vrpVxxSig <- maVixDiff < 0
vrpRets <- lag(vrpXivSig, 1) * xivRets + lag(vrpVxxSig, 1) * vxxRets


obsCloseMomentum <- magicThinking # from previous post

compare <- na.omit(cbind(xivRets, obsCloseMomentum, vRatio, vrpRets))
colnames(compare) <- c("BH_XIV", "DDN_Momentum", "DDN_VRatio", "DDN_VRP")

So, an explanation: there are four return streams here–buy and hold XIV, the DDN momentum from a previous post, and two other strategies.

The simpler one, called the VRatio, is simply the ratio of the VIX over the VXV (smoothed with a 10-day SMA in the code above). Near the close, check this quantity. If it's less than one, buy XIV; otherwise, buy VXX.

The other one, called the Volatility Risk Premium strategy (or VRP for short), takes the 10-day historical volatility (that is, the annualized running ten-day standard deviation) of the S&P 500, subtracts it from the VIX, and takes a 5-day moving average of that. Near the close, when that's above zero (that is, the VIX is higher than historical volatility), go long XIV; otherwise, go long VXX.

Again, all of these strategies are effectively "observe near/at the close, buy at the close", so they're useful for demonstration purposes, though not for implementation on any large account without incurring market impact.

Here are the results, since 2011 (that is, around the time of XIV’s actual inception):
[Figure: easyVolInvesting (equity curves of the four return streams since 2011)]

To note, both the momentum and the VRP strategy underperform buying and holding XIV since 2011. The VRatio strategy, on the other hand, does outperform.

Here’s a summary statistics function that compiles some top-level performance metrics.

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

And the result:

> stratStats(compare['2011::'])
                             BH_XIV DDN_Momentum DDN_VRatio   DDN_VRP
Annualized Return         0.3801000    0.2837000  0.4539000 0.2572000
Annualized Std Dev        0.6323000    0.5706000  0.6328000 0.6326000
Annualized Sharpe (Rf=0%) 0.6012000    0.4973000  0.7172000 0.4066000
Worst Drawdown            0.7438706    0.6927479  0.7665093 0.7174481
Calmar Ratio              0.5109759    0.4095285  0.5921650 0.3584929
Ulcer Performance Index   1.1352168    1.2076995  1.5291637 0.7555808

To note, all of the benchmark strategies suffered very large drawdowns since XIV’s inception, which we can examine using the table.Drawdowns command, as seen below:

> table.Drawdowns(compare[,1]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2011-07-08 2011-11-25 2012-11-26 -0.7439    349        99      250
2 2015-06-24 2016-02-11 2016-12-21 -0.6783    379       161      218
3 2014-07-07 2015-01-30 2015-06-11 -0.4718    236       145       91
4 2011-02-15 2011-03-16 2011-04-20 -0.3013     46        21       25
5 2013-04-15 2013-06-24 2013-07-22 -0.2877     69        50       19
> table.Drawdowns(compare[,2]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2014-07-07 2016-06-27 2017-03-13 -0.6927    677       499      178
2 2012-03-27 2012-06-13 2012-09-13 -0.4321    119        55       64
3 2011-10-04 2011-10-28 2012-03-21 -0.3621    117        19       98
4 2011-02-15 2011-03-16 2011-04-21 -0.3013     47        21       26
5 2011-06-01 2011-08-04 2011-08-18 -0.2723     56        46       10
> table.Drawdowns(compare[,3]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2014-01-23 2016-02-11 2017-02-14 -0.7665    772       518      254
2 2011-09-13 2011-11-25 2012-03-21 -0.5566    132        53       79
3 2012-03-27 2012-06-01 2012-07-19 -0.3900     80        47       33
4 2011-02-15 2011-03-16 2011-04-20 -0.3013     46        21       25
5 2013-04-15 2013-06-24 2013-07-22 -0.2877     69        50       19
> table.Drawdowns(compare[,4]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2015-06-24 2016-02-11 2017-10-11 -0.7174    581       161      420
2 2011-07-08 2011-10-03 2012-02-03 -0.6259    146        61       85
3 2014-07-07 2014-12-16 2015-05-21 -0.4818    222       115      107
4 2013-02-20 2013-07-08 2014-06-10 -0.4108    329        96      233
5 2012-03-27 2012-06-01 2012-07-17 -0.3900     78        47       31

Note that the table.Drawdowns command only examines one return stream at a time. Furthermore, the top argument specifies how many drawdowns to look at, sorted by greatest drawdown first.

One reason I think that these strategies seem to suffer the drawdowns they do is that they’re either all-in on one asset, or its exact opposite, with no room for error.

One last thing, for the curious: here is the comparison of my strategy (which I have been trading with live capital since September, and for which I have recently opened a subscription service) since 2011 (essentially XIV inception), benchmarked against the strategies in EVI:

[Figure: volPerfBenchmarks (my strategy vs. the EVI benchmarks since 2011)]

stratStats(compare['2011::'])
                             QST_vol    BH_XIV DDN_Momentum DDN_VRatio   DDN_VRP
Annualized Return          0.8133000 0.3801000    0.2837000  0.4539000 0.2572000
Annualized Std Dev         0.3530000 0.6323000    0.5706000  0.6328000 0.6326000
Annualized Sharpe (Rf=0%)  2.3040000 0.6012000    0.4973000  0.7172000 0.4066000
Worst Drawdown             0.2480087 0.7438706    0.6927479  0.7665093 0.7174481
Calmar Ratio               3.2793211 0.5109759    0.4095285  0.5921650 0.3584929
Ulcer Performance Index   10.4220721 1.1352168    1.2076995  1.5291637 0.7555808

Thanks for reading.

NOTE: I am currently looking for networking and full-time opportunities related to my skill set. My LinkedIn profile can be found here.

The Return of Free Data and Possible Volatility Trading Subscription

This post will be about pulling free data from AlphaVantage, and gauging interest for a volatility trading subscription service.

So first off, ever since the yahoos at Yahoo decided to turn off their free data, the world of free daily data has been in somewhat of a dark age. Well, thanks to Josh Ulrich (see http://blog.fosstrading.com/2017/10/getsymbols-and-alpha-vantage.html#gpluscomments), Paul Teetor, and other R/Finance individuals, the latest edition of quantmod (which can be installed from CRAN) now contains a way to get free financial data from AlphaVantage going back to the year 2000, which is usually enough for most backtests, as that date predates the inception of most ETFs.

Here’s how to do it.

First off, you need to go to AlphaVantage, register, and get an API key (https://www.alphavantage.co/support/#api-key).

Once you do that, downloading data is simple, if not slightly slow. Here’s how to do it.

require(quantmod)

getSymbols('SPY', src = 'av', adjusted = TRUE, output.size = 'full', api.key = 'YOUR_KEY_HERE')

And the results:

> head(SPY)
           SPY.Open SPY.High SPY.Low SPY.Close SPY.Volume SPY.Adjusted
2000-01-03   148.25   148.25 143.875  145.4375    8164300     104.3261
2000-01-04   143.50   144.10 139.600  139.8000    8089800     100.2822
2000-01-05   139.90   141.20 137.300  140.8000    9976700     100.9995
2000-01-06   139.60   141.50 137.800  137.8000    6227200      98.8476
2000-01-07   140.30   145.80 140.100  145.8000    8066500     104.5862
2000-01-10   146.30   146.90 145.000  146.3000    5741700     104.9448

This means that if any of my old posts on asset allocation have been somewhat defunct thanks to bad Yahoo data, they will now work again with a slight modification to the data-input code.

Beyond demonstrating this routine, one other thing I’d like to do is to gauge interest for a volatility signal subscription service, for a system I have personally started trading a couple of months ago.

Simply, I have seen other websites with subscription services with worse risk/reward than the strategy I currently trade, which switches between XIV, ZIV, and VXX. Currently, the equity curve, in log 10, looks like this:

[Figure: volStratPerf (log10 equity curve of the strategy)]

That is, $1000 in 2008 would have become approximately $1,000,000 today, if one was able to trade this strategy since then.

Since 2011 (around the time of inception for XIV), the performance has been:


                        Performance
Annualized Return         0.8265000
Annualized Std Dev        0.3544000
Annualized Sharpe (Rf=0%) 2.3319000
Worst Drawdown            0.2480087
Calmar Ratio              3.3325450

Considering that some websites out there charge upwards of $50 a month for either a single tactical asset rotation strategy with an inferior risk/return profile (and a lot more for a combination of them), or a volatility strategy that may have had a massive, historically record-breaking drawdown, I was hoping to gauge a price point for what readers would consider paying for signals from a better strategy than those.

Thanks for reading.

NOTE: I am currently interested in networking and am seeking full-time opportunities related to my skill set. My LinkedIn profile can be found here.

The Kelly Criterion — Does It Work?

This post will be about implementing and investigating the running Kelly Criterion — that is, a constantly adjusted Kelly Criterion that changes as a strategy realizes returns.

For those not familiar with the Kelly Criterion, it's the idea of adjusting a bet size to maximize a strategy's long-term growth rate. Both Wikipedia (https://en.wikipedia.org/wiki/Kelly_criterion) and Investopedia have entries on the Kelly Criterion. Essentially, it's about maximizing your long-run expectation of a betting system by sizing bets higher when the edge is higher, and vice versa.

There are two formulations for the Kelly criterion: the Wikipedia result presents it as mean over sigma squared. The Investopedia definition is P-[(1-P)/winLossRatio], where P is the probability of a winning bet, and the winLossRatio is the average win over the average loss.

In any case, here are the two implementations.

require(TTR)
require(xts)
require(PerformanceAnalytics)

investoPediaKelly <- function(R, kellyFraction = 1, n = 63) {
  signs <- sign(R)
  posSigns <- signs; posSigns[posSigns < 0] <- 0
  negSigns <- signs; negSigns[negSigns > 0] <- 0; negSigns <- negSigns * -1
  probs <- runSum(posSigns, n = n)/(runSum(posSigns, n = n) + runSum(negSigns, n = n))
  posVals <- R; posVals[posVals < 0] <- 0
  negVals <- R; negVals[negVals > 0] <- 0; negVals <- negVals * -1
  wlRatio <- (runSum(posVals, n = n)/runSum(posSigns, n = n))/(runSum(negVals, n = n)/runSum(negSigns, n = n))
  kellyRatio <- probs - ((1-probs)/wlRatio)
  out <- kellyRatio * kellyFraction
  return(out)
}

wikiKelly <- function(R, kellyFraction = 1, n = 63) {
  return(runMean(R, n = n)/runVar(R, n = n)*kellyFraction)
}

Let’s try this with some data. At this point in time, I’m going to show a non-replicable volatility strategy that I currently trade.

[Figure: volSince2011 (equity curve of the strategy since 2011)]

For the record, here are its statistics:

                              Close
Annualized Return         0.8021000
Annualized Std Dev        0.3553000
Annualized Sharpe (Rf=0%) 2.2574000
Worst Drawdown            0.2480087
Calmar Ratio              3.2341613

Now, let’s see what the Wikipedia version does:

badKelly <- out * lag(wikiKelly(out), 2)
charts.PerformanceSummary(badKelly)

[Figure: badKelly (equity curve using the Wikipedia Kelly sizing)]

The results are simply ridiculous. And here's why: say you have a mean return of .0005 per day (5 bps/day) and a standard deviation equal to that (that is, a daily Sharpe ratio of 1). You would have .0005/.0005^2 = 1/.0005 = 2000. In other words, a leverage of 2000 times. This clearly makes no sense.
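
To spell that arithmetic out in R:

mu <- 0.0005     # mean daily return of 5 bps
sigma <- 0.0005  # daily standard deviation equal to the mean
mu / sigma^2     # Kelly fraction of 2000, i.e. 2000x leverage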

The other variant is the more particular Investopedia definition.

invKelly <- out * lag(investoPediaKelly(out), 2)
charts.PerformanceSummary(invKelly)

[Figure: invKelly (equity curve using the Investopedia Kelly sizing)]

Looks a bit more reasonable. However, how does it stack up against not using it at all?

compare <- na.omit(cbind(out, invKelly))
charts.PerformanceSummary(compare)

[Figure: kellyCompare (Kelly-sized strategy vs. the base strategy)]

Turns out, the fabled Kelly Criterion doesn’t really change things all that much.

For the record, here are the statistical comparisons:

                               Base     Kelly
Annualized Return         0.8021000 0.7859000
Annualized Std Dev        0.3553000 0.3588000
Annualized Sharpe (Rf=0%) 2.2574000 2.1903000
Worst Drawdown            0.2480087 0.2579846
Calmar Ratio              3.2341613 3.0463063

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

Leverage Up When You’re Down?

This post will investigate the idea of reducing leverage when drawdowns are small, and increasing leverage as losses accumulate. It’s based on the idea that whatever goes up must come down, and whatever comes down generally goes back up.

I originally came across this idea from this blog post.

So, first off, let’s write an easy function that allows replication of this idea. Essentially, we have several arguments:

One: the default leverage (that is, when your drawdown is zero, what’s your exposure)? For reference, in the original post, it’s 10%.

Next: the various leverage levels. In the original post, the leverage levels are 25%, 50%, and 100%.

And lastly, we need the corresponding thresholds at which to apply those leverage levels. In the original post, those levels are 20%, 40%, and 55%.

So, now we can create a function to implement that in R. The idea being that we have R compute the drawdowns, and then use that information to determine leverage levels as precisely and frequently as possible.

Here’s a quick piece of code to do so:

require(xts)
require(PerformanceAnalytics)

drawdownLev <- function(rets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55)) {
  # compute drawdowns
  dds <- PerformanceAnalytics:::Drawdowns(rets)
  
  # initialize leverage to the default level
  dds$lev <- defaultLev
  
  # change the leverage for every threshold
  for(i in 1:length(ddthresh)) {
    
    # as drawdowns go through thresholds, adjust leverage
    dds$lev[dds$Close < ddthresh[i]] <- levs[i]
  }
  
  # compute the new strategy returns -- apply leverage at tomorrow's close
  out <- rets * lag(dds$lev, 2)
  
  # return the leverage and the new returns
  leverage <- dds$lev
  colnames(leverage) <- c("DDLev_leverage")
  return(list(leverage, out))
}

So, let’s replicate some results.

require(downloader)
require(xts)
require(PerformanceAnalytics)


download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

xivDDlev <- drawdownLev(xivRets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55))
compare <- na.omit(cbind(xivDDlev[[2]], xivRets))
colnames(compare) <- c("XIV_DD_leverage", "XIV")

charts.PerformanceSummary(compare['2011::2016'])

And our results look something like this:

[Figure: xivddlev (XIV drawdown-based leverage vs. buy-and-hold XIV, 2011-2016)]

                          XIV_DD_leverage       XIV
Annualized Return               0.2828000 0.2556000
Annualized Std Dev              0.3191000 0.6498000
Annualized Sharpe (Rf=0%)       0.8862000 0.3934000
Worst Drawdown                  0.4870604 0.7438706
Calmar Ratio                    0.5805443 0.3436668

That said, what would happen if one were to extend the data for all available XIV data?

[Figure: xivddlev2 (the same comparison over all available XIV data)]

> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
                          XIV_DD_leverage       XIV
Annualized Return               0.1615000 0.3319000
Annualized Std Dev              0.3691000 0.5796000
Annualized Sharpe (Rf=0%)       0.4375000 0.5727000
Worst Drawdown                  0.8293650 0.9215784
Calmar Ratio                    0.1947428 0.3601385

A different story.

In this case, I think the takeaway is that such a mechanism does well when the benchmark's drawdowns occur sharply (so that the lower default exposure protects against them), and when the benchmark then spends much of its time in recovery mode (so that the increased exposure has time to earn outsized returns before the next drawdown). When the benchmark continues to fall after maximum leverage is reached, or continues to perform well while not in a drawdown, such a mechanism falls behind quickly.

As always, there is no free lunch when it comes to drawdowns, as trying to lower exposure in preparation for a correction will necessarily mean forfeiting a painful amount of upside in the good times, at least as presented in the original post.

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

Let’s Talk Drawdowns (And Affiliates)

This post will be directed towards those newer in investing, with an explanation of drawdowns–in my opinion, a simple and highly important risk statistic.

Would you invest in this?

[Figure: the S&P 500, 2000 through 2012]

As it turns out, millions of people do, and did. That is the S&P 500, from 2000 through 2012, more colloquially referred to as "the stock market". Plenty of people around the world invest in it, and for a risk-to-reward payoff that is, in my opinion, very bad. This is an investment that lost half of its value twice in the span of about a decade!

At its simplest, an investment (placing your money in an asset like a stock or a savings account, instead of spending it) has two things you need to look at.

[Figure: illustration of compound annual growth]

First, what's your reward? If you open up a bank CD, you might be fortunate to get 3%. If you invest in the stock market, you might get 8% per year (on average) if you hold it for 20 years. In other words, you stow away $100 on January 1st, and you might come back and find $108 in your account on December 31st. This is often called the compound annual growth rate (CAGR): if you have $100 one year, earn 8%, you have $108, then earn 8% on that, and so on.
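
For those who like to see the arithmetic spelled out, here is a minimal R sketch of that compounding idea (the 8% rate and the five-year horizon are hypothetical numbers, purely for illustration):

# hypothetical example: $100 compounding at 8% per year for 5 years
balance <- 100 * (1 + 0.08)^(1:5)
round(balance, 2)  # 108.00 116.64 125.97 136.05 146.93

# recovering the CAGR from just the starting and ending values
startVal <- 100
endVal <- tail(balance, 1)
years <- 5
(endVal / startVal)^(1 / years) - 1  # 0.08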

The second thing to look at is the risk. What can you lose? The simplest answer to this is "the maximum drawdown". If this sounds complicated, it simply means "the biggest peak-to-trough loss". So, if you had $100 one month, $120 the next month, and $90 the month after that, your maximum drawdown (that is, your biggest loss from a peak) would be 1 - 90/120 = 25%.

[Figure: illustration of maximum drawdown]
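
As a quick sanity check of that $100/$120/$90 example, here is a minimal sketch using the same PerformanceAnalytics package as the code later in this post (the dates are made up; only the order of the values matters):

require(xts)
require(PerformanceAnalytics)

# toy monthly account values: $100, then $120, then $90
equity <- xts(c(100, 120, 90),
              order.by = as.Date(c("2020-01-31", "2020-02-29", "2020-03-31")))
rets <- na.omit(Return.calculate(equity))  # +20%, then -25%

maxDrawdown(rets)  # 0.25, i.e. the 1 - 90/120 = 25% loss from the $120 peak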

When you put the reward and risk together, you can create a ratio to see how your rewards and risks line up. This is called the Calmar ratio, and you get it by dividing your CAGR by your maximum drawdown. I interpret the Calmar ratio as: "for every dollar you lose in your investment's worst performance, how many dollars can you make back in a year?" For my own investments, I prefer this number to be at least 1, and I know of a strategy for which that number is above 2 since 2011, or higher than 3 if simulated back to 2008.
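
To make the division itself concrete, here is a hypothetical back-of-the-envelope calculation (the 8% CAGR and 50% drawdown are assumptions for illustration, roughly in the ballpark of long-term buy-and-hold stock investing):

# hypothetical buy-and-hold-style numbers, purely for illustration
cagr <- 0.08       # 8% compound annual growth rate
worstDD <- 0.50    # a 50% worst drawdown
cagr / worstDD     # Calmar ratio of 0.16 -- roughly the 1/6 figure mentioned below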

Most stocks don't even have a Calmar ratio of 1 (that is, a ratio at which, on average, an investment makes back in a year what it can lose at its worst). Even Amazon, the company whose stock made Jeff Bezos the richest man in the world, has a Calmar ratio of less than 2/5, with a maximum loss of more than 90% in the dot-com crash. The S&P 500, again, "the stock market", has a Calmar ratio of around 1/6 since 1993. That is, the worst losses can take *years* to make back.

A lot of wealth advisers like to say that they recommend a large holding of stocks for young people. In my opinion, whether you’re young or old, losing half of everything hurts, and there are much better ways to make money than to simply buy and hold a collection of stocks.

****

For those with coding skills, here is one way to gauge just how good or bad an investment is:

An investment has a history–that is, in January, it made 3%, in February, it lost 2%, in March, it made 5%, and so on. By shuffling that history around, so that say, January loses 2%, February makes 5%, and March makes 3%, you can create an alternate history of the investment. It will start and end in the same place, but the journey will be different. For investments that have existed for a few years, it is possible to create many different histories, and compare the Calmar ratio of the original investment to its shuffled “alternate histories”. Ideally, you want the investment to be ranked among the highest possible ways to have made the money it did.

To put it simply: would you rather fall one inch a thousand times, or fall a thousand inches once? Well, the first one is no different than jumping rope. The second one will kill you.

Here is some code I wrote in R (if you don’t code in R, don’t worry) to see just how the S&P 500 (the stock market) did compared to how it could have done.

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)
require(data.table)

# download SPY from Quandl's EOD database and compute daily returns on the adjusted close
SPY <- Quandl("EOD/SPY", start_date="1990-01-01", type = "xts")
SPYrets <- na.omit(Return.calculate(SPY$Adj_Close))

# create 999 "alternate histories" by reshuffling SPY's daily returns
# (sampling without replacement, so every history contains exactly the same returns)
spySims <- list()
set.seed(123)
for(i in 1:999) {
  simulatedSpy <- xts(sample(coredata(SPYrets), size = length(SPYrets), replace = FALSE), order.by=index(SPYrets))
  colnames(simulatedSpy) <- paste("sampleSPY", i, sep="_")
  spySims[[i]] <- simulatedSpy
}

# combine the shuffled histories, with the actual SPY returns as the 1000th column
spySims <- do.call(cbind, spySims)
spySims <- cbind(spySims, SPYrets)
colnames(spySims)[1000] <- "Original SPY"

# annualize a daily return series into a CAGR (assuming 252 trading days per year)
dailyReturnCAGR <- function(rets) {
  return(prod(1+rets)^(252/length(rets))-1)
}

# CAGR, worst drawdown, and Calmar ratio for every series,
# plus the rank of each series' Calmar (the realized SPY is element 1000)
rets <- sapply(spySims, dailyReturnCAGR)
drawdowns <- maxDrawdown(spySims)
calmars <- rets/drawdowns
ranks <- rank(calmars)

# plot the distribution of Calmar ratios, marking the realized SPY in red
plot(density(as.numeric(calmars)), main = 'Calmars of reshuffled SPY, realized reality in red')
abline(v=as.numeric(calmars[1000]), col = 'red')

This is the resulting plot:

[Figure: density of Calmar ratios across the reshuffled SPY histories, with the realized SPY marked in red]

That red line is the actual performance of the S&P 500 compared to what could have been. And of the 999 reshuffled histories, only 91 did worse than what actually happened.
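
For those following along with the code, one way to arrive at that count, assuming the calmars and ranks objects from the block above, is:

# how many of the 999 shuffled histories had a lower Calmar than the real SPY?
sum(as.numeric(calmars[-1000]) < as.numeric(calmars[1000]))

# equivalently, the realized SPY's rank among all 1000 series
# (a rank of 92 means 91 series did worse)
ranks[1000]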

This means that the stock market isn’t a particularly good investment, and that you can do much better using tactical asset allocation strategies.

****

One site I'm affiliated with is AllocateSmartly. It is a cheap investment subscription service ($30 a month) that compiles a collection of asset allocation strategies that perform better than many wealth advisers. When you combine some of those strategies, the performance is better still. To put it into perspective, one model strategy I've come up with has this performance:
[Figure: performance of a model portfolio built from AllocateSmartly strategies]

In this case, the compound annual growth rate is nearly double the maximum loss. And while this particular ensemble uses some fairly conservative strategies in its approach, there is room to be a bit more aggressive for those who want it.

****

In conclusion, when considering how to invest your money, keep in mind both the reward and the risk. One very simple and important way to understand risk is how much an investment can possibly lose, from its highest value to its lowest value following that peak. When you combine the reward and the risk, you get a ratio that tells you roughly how much you can stand to make for every dollar lost in the investment's worst performance.

Thanks for reading.

NOTE: I am interested in networking opportunities, projects, and full-time positions related to my skill set. If you are looking to collaborate, please contact me on my LinkedIn here.