So, Harry Long (of Houston) came out with a new strategy on SeekingAlpha involving the usual mix of SPXL (3x leveraged SPY), TMF (3x leveraged TLT), and some volatility ETNs (in this case, ZIV and TVIX). Since we’ve tread this path before, expectations are rightfully set: it’s a strategy that’s going to look good in the sample he used, it’s going to give some of that performance back in the crisis, and it’ll ultimately prove to be a simple-to-implement, simple-to-backtest strategy with its own set of ups and downs that does outperform the usual buy-and-hold indices.
Once again, a huge thanks to Mr. Helmuth Vollmeier for the long history volatility data.
So here’s the code to get to the initial equity curve comparison (I’ll skip most of the charts comparing my synthetic instruments’ equity curves to the Yahoo Finance variants, as you’ve all seen that before).
require(downloader)
require(quantmod) # for getSymbols, Cl, and Ad
require(PerformanceAnalytics)

# synthetic TVIX: 2x the daily returns of the long VXX history
download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", destfile="longVXX.txt")
VXX <- xts(read.zoo("longVXX.txt", sep=",", header=TRUE))
TVIXrets <- Return.calculate(Cl(VXX))*2
getSymbols("TVIX", from="1990-01-01")
TVIX <- TVIX[-which(index(TVIX)=="2014-12-30"),] #trashy Yahoo data, removing obvious bad print
compare <- merge(TVIXrets, Return.calculate(Ad(TVIX)), join='inner')
charts.PerformanceSummary(compare)
charts.PerformanceSummary(compare["2012::"])
charts.PerformanceSummary(compare["2013::"])
charts.PerformanceSummary(compare["2014::"])
charts.PerformanceSummary(compare["2015::"])
#okay we're good

download("https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT", destfile="longZIV.txt")
ZIV <- xts(read.zoo("longZIV.txt", sep=",", header=TRUE))
ZIVrets <- Return.calculate(Cl(ZIV))

# synthetic SPXL: 3x the daily returns of SPY
getSymbols("SPY", from="1990-01-01")
SPXLrets <- Return.calculate(Ad(SPY))*3

# synthetic TMF: 3x daily TLT returns, less the annualized discrepancy (converted to daily) versus actual TMF
getSymbols("TMF", from="1990-01-01")
TMFrets <- Return.calculate(Ad(TMF))
getSymbols("TLT", from="1990-01-01")
TLTrets <- Return.calculate(Ad(TLT))
tmf3TLT <- merge(TMFrets, 3*TLTrets, join='inner')
charts.PerformanceSummary(tmf3TLT)
discrepancy <- as.numeric(Return.annualized(tmf3TLT[,2]-tmf3TLT[,1]))
tmf3TLT[,2] <- tmf3TLT[,2] - ((1+discrepancy)^(1/252)-1)
charts.PerformanceSummary(tmf3TLT)
modifiedTLT <- 3*TLTrets - ((1+discrepancy)^(1/252)-1)
TMFrets <- modifiedTLT

# 40% SPXL, 20% ZIV, 35% TMF, 5% TVIX, rebalanced annually
components <- cbind(SPXLrets, ZIVrets, TMFrets, TVIXrets)
components <- components["2004-03-29::"]
stratRets <- Return.portfolio(R = components, weights = c(.4, .2, .35, .05), rebalance_on="years")
charts.PerformanceSummary(stratRets)

SPYrets <- Return.calculate(Ad(SPY))
compare <- merge(stratRets, SPYrets, join='inner')
charts.PerformanceSummary(compare["2011::"])

With the following equity curve display:
So far, so good. Let’s look at the full backtest performance.
charts.PerformanceSummary(compare)

With the resultant equity curve:
Which, given what we’ve seen before, isn’t outside the realm of expectation.
For those interested in the log equity curves, here you go:
compare[is.na(compare)] <- 0
plot(log(cumprod(1+compare)), legend.loc="topleft")
And for fun, let’s look at the outperformance equity curve.
diff <- compare[,1] - compare[,2]
charts.PerformanceSummary(diff, main="relative performance")

And the result:
Now this is somewhere in the ballpark of what you’d love to see from your strategy against a benchmark — aside from a couple of spikes which do a number on the corresponding drawdowns chart, it looks like a steady outperformance.
However, the new features I’d like to introduce in this blog post are a quicker way of generating the usual statistics table I display, and a more in-depth drawdown analysis.
Here’s how:
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare))

Which gives us the following result:
> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare))
                          portfolio.returns SPY.Adjusted
Annualized Return                 0.2181000    0.0799000
Annualized Std Dev                0.2159000    0.1982000
Annualized Sharpe (Rf=0%)         1.0100000    0.4030000
Worst Drawdown                    0.4326138    0.5518552

Since this saves me typing, I’ll be using this format from now on. And as a bonus, it displays annualized standard deviation. I don’t particularly care for that statistic, as I believe that max drawdown captures the notion of “here’s the pain on the other end of your returns” better than “here’s how much your strategy wiggles from day to day”, but since it comes along for free and is a statistic that a lot of other people (particularly portfolio managers, pension fund managers, etc.) are interested in, so much the better.
Now, moving on to a more in-depth analysis of drawdowns, PerformanceAnalytics has the following functionality:
dd <- table.Drawdowns(compare[,1], top=100)
dd <- dd[dd$Depth < -.05,]
dd
sum(dd$"To Trough")/nrow(compare)

This brings up the following table (it seems that with multiple return streams, it’ll just default to the first one), and a derived statistic.
> dd
         From     Trough         To   Depth Length To Trough Recovery
1  2008-12-19 2009-03-09 2010-03-16 -0.4326    310        53      257
2  2007-10-30 2008-10-15 2008-12-04 -0.3275    278       243       35
3  2013-05-22 2013-06-24 2013-09-18 -0.1617     83        23       60
4  2004-04-02 2004-05-10 2004-09-17 -0.1450    116        26       90
5  2010-04-26 2010-07-02 2010-09-21 -0.1230    104        49       55
6  2006-03-20 2006-06-19 2006-09-20 -0.1229    129        64       65
7  2007-05-08 2007-08-15 2007-10-01 -0.1229    102        70       32
8  2005-07-29 2005-10-27 2005-12-14 -0.1112     97        64       33
9  2011-06-01 2011-08-11 2011-09-06 -0.1056     68        51       17
10 2005-02-09 2005-03-22 2005-05-31 -0.1051     77        29       48
11 2010-11-05 2010-11-17 2011-02-07 -0.1022     64         9       55
12 2011-09-23 2011-10-27 2011-12-19 -0.0836     61        25       36
13 2013-09-19 2013-10-09 2013-10-17 -0.0815     21        15        6
14 2012-05-02 2012-05-18 2012-06-29 -0.0803     42        13       29
15 2012-10-18 2012-11-15 2012-11-29 -0.0721     28        19        9
16 2014-09-02 2014-10-13 2014-11-05 -0.0679     47        30       17
17 2008-12-05 2008-12-08 2008-12-16 -0.0589      8         2        6
18 2011-02-18 2011-03-16 2011-04-01 -0.0580     30        18       12
19 2014-07-23 2014-08-06 2014-08-15 -0.0536     18        11        7
20 2012-04-03 2012-04-10 2012-04-26 -0.0517     17         5       12

What I did was simply query the table for the 100 biggest drawdowns and keep those deeper than 5% (though considering that we have about 20 such drawdowns over roughly a decade, that works out to about two 5%+ drawdowns per year, give or take, and this is a pretty volatile strategy). Lastly, I wanted to know the proportion of the time that someone watching the strategy will be feeling the pain of watching it sink to those depths, so I took the sum of the “To Trough” column and divided it by the number of days in the backtest. This is the result:
> sum(dd$"To Trough")/nrow(compare)
[1] 0.3005505

I’m fairly certain some individuals more seasoned than I am would do something different given this information and functionality, and if so, feel free to leave a comment, but this is just a licked-finger-in-the-air calculation I did. So, roughly 30% of the time, whoever is investing real money into this will want to go and grab a few more drinks than usual.
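Of course, a more conventional “time underwater” figure would count every day the equity curve sits below its prior peak, not just the days spent sliding toward a 5%-plus trough. A minimal sketch of that calculation, using nothing beyond base R and the compare object from above:

# Minimal sketch: fraction of days the strategy's equity curve sits below its prior peak.
stratCurve <- cumprod(1 + na.omit(compare[,1]))  # strategy equity curve
underwater <- stratCurve/cummax(stratCurve) - 1  # daily drawdown from the running peak
sum(underwater < 0)/nrow(underwater)             # proportion of days spent underwater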
Let’s do the same analysis for the relative performance.
tmp <- rbind(table.AnnualizedReturns(diff), maxDrawdown(diff))
rownames(tmp)[4] <- "Worst Drawdown"
tmp

When running only one set of returns, the last row will apparently just be called “4”, so I had to manually rename it. Here’s the result:
> tmp
                          portfolio.returns
Annualized Return                 0.1033000
Annualized Std Dev                0.2278000
Annualized Sharpe (Rf=0%)         0.4535000
Worst Drawdown                    0.3874082

Far from spectacular, but there it is for what it’s worth.
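Since I’ll be leaning on this format going forward, the two calls and the row-renaming quirk could be wrapped into one small convenience function. Here’s a minimal sketch; stratStats is just a name I’m using for illustration, not anything in PerformanceAnalytics:

# Convenience wrapper (stratStats is a made-up name for illustration):
# annualized return, standard deviation, and Sharpe, plus worst drawdown, in one table.
stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  rownames(stats)[4] <- "Worst Drawdown"  # fixes the bare "4" row name for single return streams
  stats
}

stratStats(compare)  # strategy vs. SPY table
stratStats(diff)     # relative-performance table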
Now the drawdowns.
dd <- table.Drawdowns(diff, top=100)
dd <- dd[dd$Depth < -.05,]
dd
sum(dd$"To Trough")/nrow(diff)

With the following result:
> dd
         From     Trough         To   Depth Length To Trough Recovery
1  2008-11-21 2009-06-22 2013-04-04 -0.3874   1097       145      952
2  2008-10-28 2008-11-04 2008-11-19 -0.2328     17         6       11
3  2007-11-27 2008-06-12 2008-10-07 -0.1569    218       137       81
4  2013-05-03 2013-06-25 2013-12-26 -0.1386    165        37      128
5  2008-10-13 2008-10-13 2008-10-24 -0.1327     10         1        9
6  2005-06-08 2006-06-28 2006-11-08 -0.1139    360       267       93
7  2004-03-29 2004-05-10 2004-09-20 -0.1102    121        30       91
8  2007-03-08 2007-06-12 2007-11-26 -0.0876    183        67      116
9  2005-02-10 2005-03-28 2005-05-31 -0.0827     76        31       45
10 2014-09-02 2014-09-17 2014-10-14 -0.0613     31        12       19

In short, the spikes in outperformance gave us some pretty…interesting…drawdown statistics, which essentially just meant that the strategy wasn’t roaring at the same exact time that the SPY had its bounce from the bottom. And for interest, my finger-in-the-air pain statistic:
> sum(dd$"To Trough")/nrow(diff)
[1] 0.2689908

So approximately 27% of the time, the strategy is losing ground to its benchmark, meaning that the other 73% of the time, you’re fairly happy, assuming you’ve chosen the correct benchmark.
In short, this is more freebies from Harry Long, with a title designed to attract readers. The strategy is what it is: something that boasts strong absolute returns and definitely outperforms SPY, though like anything else, it has its moments that will burn you (no pain, no gain, as they say). However, the quicker statistics table functionality combined with the more in-depth drawdown analysis is something I am definitely happy to have stumbled upon.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
Thanks for the great analysis and for going a little deeper on the key drawdown analysis. I appreciated the From/Trough/To/Depth/Recovery breakdown. I’ve looked for a way to quickly replicate this in Python; eventually I’ll plug a few lines in here.
For a pandas.dataframe df containing a list of (daily) drawdowns:
mask = df <= thresh_dd
groupnum = mask.diff().fillna(method='bfill').cumsum()
for key,grp in df.groupby(groupnum):
….
I suppose any given function can be replicated in Python one at a time, but I’d point out that R’s systematic trading/asset allocation stack (e.g. xts, zoo, blotter, PerformanceAnalytics, quantstrat, etc.) has been developed over years of man-hours by highly experienced professionals. If one absolutely must create their weights in Python, I wouldn’t recommend reinventing the wheel, but rather exporting the weights and returns as CSV files, reading them into R, and doing the heavy lifting from there, along the lines of the sketch below.
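For instance, something like this would do it; the file name and column layout here are just placeholders for whatever Python writes out (a date column plus one column per daily return stream):

# Rough sketch: read Python-exported daily returns into R and let
# PerformanceAnalytics do the heavy lifting. "myPythonReturns.csv" is a
# placeholder name, assumed to contain a date column plus return columns.
require(xts)
require(PerformanceAnalytics)

rets <- xts(read.zoo("myPythonReturns.csv", header=TRUE, sep=",", format="%Y-%m-%d"))

charts.PerformanceSummary(rets)
rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
table.Drawdowns(rets[,1], top=10)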
Do you have a favorite strategy of Mr. Long?
No. I’d be very wary of any strategy published in public. They are never meant to be money makers.
This post’s layout seems a little bit disturbed. Something probably changed in the way WordPress handles some of the formatting.
Argh. Yeah, WordPress’s new mechanics screwed these up post-edit :(