The Quarterly Tactical Strategy (aka QTS)

This post introduces the Quarterly Tactical Strategy, published by Cliff Smith in a Seeking Alpha article. It presents a variation on the typical dual-momentum strategy that only trades once a quarter, yet delivers a seemingly solid risk/return profile. However, the article leaves off a protracted period of unimpressive performance at the turn of the millennium.

First off, due to the imprecision of the English language, I received some help from TrendXplorer in implementing this strategy. Those who are fans of Amibroker are highly encouraged to visit his blog.

In any case, this strategy is fairly simple:

Take a group of securities (in this case, 8 mutual funds), and do the following:

Rank each security on a long momentum (105 days) and a short momentum (20 days), and invest in the security with the highest composite rank, with ties going to the long momentum (that is, .501*longRank + .499*shortRank, for instance). If the price of that top-ranked security is above its three-month SMA, invest in it; otherwise, hold cash.

There are two critical points that must be made here:

1) The three-month SMA is *not* a 63-day SMA. It is a three-point SMA computed on the monthly endpoints of that security, up to and including the current endpoint (see the short sketch after this list).
2) Unlike in Flexible Asset Allocation or Elastic Asset Allocation, the cash asset is not treated as a formal asset.
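
To make the first point concrete, here is a minimal illustrative sketch (using VFINX purely as an example; this snippet is not part of the strategy code that follows):

require(quantmod)
require(TTR)
require(xts)

#illustration only: a 3-point SMA on monthly endpoints is not a 63-day SMA
getSymbols("VFINX", from="1990-01-01")
dailyPx <- Ad(VFINX)
monthlyPx <- dailyPx[endpoints(dailyPx, on="months"),]
threeMonthSMA <- SMA(monthlyPx, n=3) #the SMA the strategy actually uses
sixtyThreeDaySMA <- SMA(dailyPx, n=63) #a daily SMA, which is NOT the same thing
tail(cbind(threeMonthSMA, sixtyThreeDaySMA[index(monthlyPx)])) #the values differ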

Let's look at the code. Here's the data, which consists of adjusted mutual fund prices. With quarterly turnover, the frequent-trading constraint of not trading out of these funds too soon is satisfied, though I'm not sure how dividends are treated; that is, whether a retail investor would actually realize these returns less a hopefully tiny transaction cost through their broker (hopefully not too much more than $1 per transaction):

require(quantmod)
require(PerformanceAnalytics)
require(TTR)

#get our data from yahoo, use adjusted prices
symbols <- c("NAESX", #small cap
             "PREMX", #emerging bond
             "VEIEX", #emerging markets
             "VFICX", #intermediate investment grade
             "VFIIX", #GNMA mortgage
             "VFINX", #S&P 500 index
             "VGSIX", #MSCI REIT
             "VGTSX", #total intl stock idx
             "VUSTX") #long term treasury (cash)

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))

#define our cash asset and keep track of which column it is
cashAsset <- "VUSTX"
cashCol <- grep(cashAsset, colnames(prices))

#start our data off on the security with the least data (VGSIX in this case)
prices <- prices[!is.na(prices[,7]),] 

#cash is not a formal asset in our ranking
cashPrices <- prices[, cashCol]
prices <- prices[, -cashCol]

Nothing anybody hasn't seen before up to this point: get the data, start it at the inception of the most recently launched mutual fund, separate out the cash prices, and move along.

What follows is a rather rough implementation of QTS, not wrapped up in any sort of function that others can plug and play with (though I hope I made the code readable enough for others to tinker with).

Let’s define parameters and compute momentum.

#define our parameters
nShort <- 20
nLong <- 105
nMonthSMA <- 3

#compute momentums
rocShort <- prices/lag(prices, nShort) - 1
rocLong <- prices/lag(prices, nLong) - 1

Now comes some endpoints functionality (or, more colloquially, magic) that the xts library provides. It’s what allows people to get work done in R much faster than in other programming languages.
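
For those unfamiliar with the endpoints function, here is a small self-contained sketch (toy data, not part of the strategy code) showing what it returns: the row indices of the last observation in each period, which are then used to subset the xts object.

require(xts)

#toy example of endpoints -- 100 days of fake prices
set.seed(123)
toy <- xts(100 + cumsum(rnorm(100)), order.by=Sys.Date()-100:1)
endpoints(toy, on="months") #row indices: 0, then the last row of each month
toy[endpoints(toy, on="months"),] #the month-end observations themselves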

#take the endpoints of quarter start/end
quarterlyEps <- endpoints(prices, on="quarters")
monthlyEps <- endpoints(prices, on = "months")

#take the prices at quarterly endpoints
quarterlyPrices <- prices[quarterlyEps,]

#short momentum at quarterly endpoints (20 day)
rocShortQtrs <- rocShort[quarterlyEps,]

#long momentum at quarterly endpoints (105 day)
rocLongQtrs <- rocLong[quarterlyEps,]

In short, get the quarterly endpoints (and the monthly ones, which we need for the three-month SMA you'll see shortly) and subset our momentum computations on those quarterly endpoints. Now let's get the total rank for those subset-on-quarters momentum computations.

#rank short momentum, best highest rank
rocSrank <- t(apply(rocShortQtrs, 1, rank))

#rank long momentum, best highest rank
rocLrank <- t(apply(rocLongQtrs, 1, rank))

#total rank, long slightly higher than short, sum them
totalRank <- 1.01*rocLrank + rocSrank 

#function that takes 100% position in highest ranked security
maxRank <- function(rankRow) {
  return(rankRow==max(rankRow))
}

#apply above function to our quarterly ranks every quarter
rankPos <- t(apply(totalRank, 1, maxRank))

So as you can see, I rank the momentum computations by row, take a weighted sum (in slight favor of the long momentum), and then simply take the security with the highest rank at every period, giving me one 1 in every row and 0s otherwise.

Now let’s do the other end of what determines position, which is the SMA filter. In this case, we need monthly data points for our three-month SMA, and then subset it to quarters to be on the same timescale as the quarterly ranks.

#SMA of securities, only use monthly endpoints
#subset to quarters
#then filter
monthlyPrices <- prices[monthlyEps,]
monthlySMAs <- xts(apply(monthlyPrices, 2, SMA, n=nMonthSMA), order.by=index(monthlyPrices))
quarterlySMAs <- monthlySMAs[index(quarterlyPrices),]
smaFilter <- quarterlyPrices > quarterlySMAs

Now let's put it together to get our final positions. Our cash position is simply one if we don't have a single investment in the time period, and zero otherwise.

finalPos <- rankPos*smaFilter
finalPos <- finalPos[!is.na(rocLongQtrs[,1]),]
cash <- xts(1-rowSums(finalPos), order.by=index(finalPos))
finalPos <- merge(finalPos, cash, join='inner')

Now we can finally compute our strategy returns.

prices <- merge(prices, cashPrices, join='inner')
returns <- Return.calculate(prices)
stratRets <- Return.portfolio(returns, finalPos)
table.AnnualizedReturns(stratRets)
maxDrawdown(stratRets)
charts.PerformanceSummary(stratRets)
plot(log(cumprod(1+stratRets)))

So what do things look like?

Like this:

> table.AnnualizedReturns(stratRets)
                          portfolio.returns
Annualized Return                    0.1899
Annualized Std Dev                   0.1619
Annualized Sharpe (Rf=0%)            1.1730
> maxDrawdown(stratRets)
[1] 0.1927991

And since the first equity curve doesn’t give much of an indication in the early years, I’ll take Tony Cooper’s (of Double Digit Numerics) advice and show the log equity curve as well.

In short, from 1997 through 2002, this strategy seemed to be going nowhere, and then took off. Since I was able to get this backtest going back to 1997, it makes me wonder why the Seeking Alpha article only started in 2003, since even with 1997-2002 thrown in, this strategy's risk/reward profile still looks fairly solid: a return to max drawdown ratio of about 1 (slightly less, but that's okay for something that turns over so infrequently, and in so few securities!), and a Sharpe ratio higher than 1. Certainly better than what the market itself offered retail investors over the same period of time. Perhaps Cliff Smith himself could chime in regarding his choice of time frame.
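
For reference, that return to max drawdown figure can be computed directly from the objects above; a quick sketch:

#return over max drawdown for the full-sample strategy returns
as.numeric(Return.annualized(stratRets))/maxDrawdown(stratRets)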

In any case, Cliff Smith marketed the strategy as having a CAGR higher than 28%; his article was published on August 15, 2014, and its backtest started in 2003. Let's see if we can replicate those results.

stratRets <- stratRets["2002-12-31::2014-08-15"]
table.AnnualizedReturns(stratRets)
maxDrawdown(stratRets)
charts.PerformanceSummary(stratRets)
plot(log(cumprod(1+stratRets)))

Which results in this:

> table.AnnualizedReturns(stratRets)
                          portfolio.returns
Annualized Return                    0.2862
Annualized Std Dev                   0.1734
Annualized Sharpe (Rf=0%)            1.6499
> maxDrawdown(stratRets)
[1] 0.1911616

A far improved risk/return profile without 1997-2002 (or the out-of-sample period after Cliff Smith’s publishing date). Here are the two equity curves in-sample.

In short, the results look better, and the SeekingAlpha article’s results are validated.

Now, let’s look at the out-of-sample periods on their own.

stratRets <- Return.portfolio(returns, finalPos)
earlyOOS <- stratRets["::2002-12-31"]
table.AnnualizedReturns(earlyOOS)
maxDrawdown(earlyOOS)
charts.PerformanceSummary(earlyOOS)

Here are the results:

> table.AnnualizedReturns(earlyOOS)
                          portfolio.returns
Annualized Return                    0.0321
Annualized Std Dev                   0.1378
Annualized Sharpe (Rf=0%)            0.2327
> maxDrawdown(earlyOOS)
[1] 0.1927991

And with the corresponding equity curve (which does not need a log-scale this time).

In short, it basically did nothing for an entire five years. That's rough, and I definitely don't like the fact that this period was left off of the Seeking Alpha article. Any time I can extend a backtest further back than a strategy's original author and then find skeletons in the closet (as happened for each and every one of Harry Long's strategies), it sets off red flags on this end, so I'm hoping there's some good explanation for leaving off 1997-2002 that I'm simply not seeing.

Lastly, let's look at the out-of-sample performance after the article's publication date.

lateOOS <- stratRets["2014-08-15::"]
charts.PerformanceSummary(lateOOS)
table.AnnualizedReturns(lateOOS)
maxDrawdown(lateOOS)

With the following results:

> table.AnnualizedReturns(lateOOS)
                          portfolio.returns
Annualized Return                    0.0752
Annualized Std Dev                   0.1426
Annualized Sharpe (Rf=0%)            0.5277
> maxDrawdown(lateOOS)
[1] 0.1381713

And the following equity curve:

Basically, while it’s ugly, it made new equity highs over only two more transactions (and in such a small sample size, anything can happen), so I’ll put this one down as a small, ugly win, but a win nevertheless.

If anyone has any questions or comments about this strategy, I'd love to see them, as this is basically a first-pass replication. To Mr. Cliff Smith's credit, the results check out, and when the worst thing one can say about a strategy is that it had a period of flat performance (namely when the market crested at the end of the Clinton administration, right before the dot-com bust), well, that's not the worst thing in the world.

More replications (including one requested by several readers) will be upcoming.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

Comparing Flexible and Elastic Asset Allocation

So recently, I tried to combine Flexible and Elastic Asset Allocation. The operative word being tried. Essentially, I saw Flexible Asset Allocation as an incomplete algorithm: although it is an excellent method for selecting securities, there had to be a better way to weight the selected assets than a naive equal-weight scheme.

It turns out the methods outlined in Elastic Asset Allocation weren't doing the trick (that is, a four-month cumulative return raised to the return weight, multiplied by the correlation to a daily-rebalanced equal-weight index of the selected securities with cumulative return greater than zero). Either I managed a marginally higher return at the cost of much higher volatility and a protracted drawdown, or I maintained my Sharpe ratio at the cost of much lower returns. Thus, I scrapped all of it, which was a shame, as I was hoping to combine the two methodologies into one system that would extend the research I previously blogged about. Instead, after scrapping it, I decided to have a look at why I was running into these issues.

In any case, here’s the quick demo I did.

require(quantmod)
require(PerformanceAnalytics)
require(IKTrading)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
adPrices <- prices
prices <- prices[ep,]
prices <- prices["1997-03::"]
adPrices <- adPrices["1997-04::"]

eaaOffensive <- EAA(monthlyPrices = prices, returnWeights = TRUE, cashAsset = "VBMFX", bestN = 3)
eaaOffNoCrash <- EAA(monthlyPrices = prices, returnWeights = TRUE, cashAsset ="VBMFX", 
                     bestN = 3, enableCrashProtection = FALSE)
faa <- FAA(prices = adPrices, riskFreeName = "VBMFX", bestN = 3, returnWeights = TRUE, stepCorRank = TRUE)
faaNoStepwise <- FAA(prices = adPrices, riskFreeName = "VBMFX", bestN = 3, returnWeights = TRUE, stepCorRank = FALSE)

eaaOffDaily <- Return.portfolio(R = Return.calculate(adPrices), weights = eaaOffensive[[1]])
eaaOffNoCrash <- Return.portfolio(R = Return.calculate(adPrices), weights = eaaOffNoCrash[[1]])
charts.PerformanceSummary(cbind(faa[[2]], eaaOffDaily))

comparison <- cbind(eaaOffDaily, eaaOffNoCrash, faa[[2]], faaNoStepwise[[2]])
colnames(comparison) <- c("Offensive EAA", "Offensive EAA (no crash protection)", "FAA (stepwise)", "FAA (no stepwise)")
charts.PerformanceSummary(comparison)

rbind(table.AnnualizedReturns(comparison), maxDrawdown(comparison))

Essentially, I compared FAA with the stepwise correlation rank algorithm, without it, and the offensive EAA with and without crash protection. The results were disappointing.

Here are the equity curves:

In short, the best default FAA variant handily outperforms any of the EAA variants.

And here are the statistics:

                          Offensive EAA Offensive EAA (no crash protection) FAA (stepwise) FAA (no stepwise)
Annualized Return             0.1247000                           0.1305000      0.1380000          0.131400
Annualized Std Dev            0.1225000                           0.1446000      0.0967000          0.089500
Annualized Sharpe (Rf=0%)     1.0187000                           0.9021000      1.4271000          1.467900
Worst Drawdown                0.1581859                           0.2696754      0.1376124          0.130865

A note of warning: if you run EAA, it seems unwise to do so without crash protection (that is, decreasing your stake in everything but the cash/risk-free asset in proportion to the fraction of securities in the universe with negative returns). I didn't include the defensive variant of EAA since it gives markedly lower returns.
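
To make the crash protection rule concrete, here is a small hypothetical sketch of the arithmetic (made-up numbers, not output from the EAA function itself):

#hypothetical: 7-asset universe, 3 of which have negative returns this period
universeSize <- 7
numNegative <- 3
crashProtection <- numNegative/universeSize #fraction of the universe that is negative

#suppose the top-N selection produced these normalized weights
selectedWeights <- c(0.5, 0.3, 0.2)
scaledWeights <- selectedWeights*(1-crashProtection) #scale down the risky sleeve
cashWeight <- 1-sum(scaledWeights) #remainder goes to the cash asset
scaledWeights
cashWeight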

Not that this should discredit EAA, but on the whole, I feel there should probably be a way to beat the (usually) equal-weight scheme that FAA employs (sometimes the cash asset gets a larger allocation when a negative-momentum asset makes it into the top assets by virtue of its volatility and correlation ranks and then gets zeroed out), and that treating FAA as an asset selection mechanism, as opposed to a weighting mechanism, may yield some value. However, I have not yet found it myself.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

An Update On EAA and a Volatility Strategy

Again, before starting this post, I’d like to inform readers that the book Quantitative Trading With R, written by Harry Georgakopoulos, with contributions from myself, is now available for order on Amazon. Already, it has garnered a pair of five-star reviews, and it deals not only with quantstrat, but with aspects such as spread trading, high frequency data, and options. I highly recommend it.

So, first things first, I want to inform everyone that EAA (that is, Elastic Asset Allocation, the new algorithm recently released by Dr. Wouter Keller a couple of weeks ago) is now in my IKTrading package. I made some modifications to deal with incongruous security starting dates (that is, handled NA momentum, and so on, similarly to the process in FAA). Again, no particular guarantees, but at this point, I think the algorithm won’t regularly break (but I may be missing some edge case, so feedback is always appreciated). Also, after thinking about it a bit more, I don’t foresee EAA as it stands being able to make use of a conditional correlation algorithm, since rather than using correlation simply for security selection, it uses correlations to make weighting decisions, which raises the question of what the correlation value of the first security would be. 0? -1? Ideas on how to address this are always welcome, since applying conditional correlation outside of a ranking context is a topic now of interest to me.

Furthermore, TrendXplorer has recently posted his own article on EAA after seeing mine, on his blog. It is *very* comprehensive, and those more inclined towards AmiBroker will be in Nirvana. It can be found here. Also, it seems he has worked hand in hand with another Seeking Alpha contributor named Cliff Smith, and thus had a far more positive experience than I did going solo replicating Harry Long's strategies (or, as some of you may prefer, marketing materials). TrendXplorer did some work with a strategy called QTS, which I hope to cover in the near future. That can all be found here. So, I'd like to formally extend thanks to TrendXplorer for the work he has done with EAA, and also for pointing me towards yet another viable asset allocation strategy.

In terms of my own updated EAA, to test it out, I added Tesla Motors to the original seven securities. So let’s look at the as-of-now-current EAA.

"EAA" <- function(monthlyPrices, wR=1, wV=0, wC=.5, wS=2, errorJitter=1e-6, 
                cashAsset=NULL, bestN=1+ceiling(sqrt(ncol(monthlyPrices))),
                enableCrashProtection = TRUE, returnWeights=FALSE, monthlyRiskFree=NULL) {
  returns <- Return.calculate(monthlyPrices)
  returns <- returns[-1,] #return calculation uses one observation
  if(!is.null(monthlyRiskFree)) {
    returnsRF <- Return.calculate(monthlyRiskFree)
    returnsRF <- returnsRF[-1,]
  }
  
  if(is.null(cashAsset)) {
    returns$zeroes <- 0
    cashAsset <- "zeroes"
    warning("No cash security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. 
            Using vector of zeroes instead.")
  }
  
  cashCol <- grep(cashAsset, colnames(returns))
  
  weights <- list()
  for(i in 1:(nrow(returns)-11)) {
    returnsData <- returns[i:(i+11),] #each chunk will be 12 months of returns data
    #per-month mean of cumulative returns of 1, 3, 6, and 12 month periods
    periodReturn <- ((returnsData[12,] + Return.cumulative(returnsData[10:12,]) + 
                      Return.cumulative(returnsData[7:12,]) + Return.cumulative(returnsData)))/22
    
    if(!is.null(monthlyRiskFree)) {
      rfData <- returnsRF[i:(i+11),]
      rfReturn <- ((rfData[12,] + Return.cumulative(rfData[10:12,]) + 
                    Return.cumulative(rfData[7:12,]) + Return.cumulative(rfData)))/22
      periodReturn <- periodReturn - as.numeric(rfReturn)
    }
    
    vols <- StdDev.annualized(returnsData) 
    mktIndex <- xts(rowMeans(returnsData, na.rm=TRUE), order.by=index(returnsData)) #equal weight returns of universe
    cors <- cor(returnsData, mktIndex) #correlations to market index
    
    weightedRets <- periodReturn ^ wR
    weightedCors <- (1 - as.numeric(cors)) ^ wC
    weightedVols <- (vols + errorJitter) ^ wV
    wS <- wS + errorJitter
    
    z <- (weightedRets * weightedCors / weightedVols) ^ wS #compute z_i and zero out negative returns
    z[periodReturn < 0] <- 0
    crashProtection <- sum(z==0, na.rm=TRUE)/sum(!is.na(z)) #compute crash protection cash cushion
    
    orderedZ <- sort(as.numeric(z), decreasing=TRUE)
    selectedSecurities <- z >= orderedZ[bestN]
    preNormalizedWeights <- z*selectedSecurities #select top N securities, keeping z_i scores
    periodWeights <- preNormalizedWeights/sum(preNormalizedWeights, na.rm=TRUE) #normalize
    if (enableCrashProtection) {
      periodWeights <- periodWeights * (1-crashProtection) #CP rule
    }
    periodWeights[is.na(periodWeights)] <- 0
    weights[[i]] <- periodWeights
  }
  
  weights <- do.call(rbind, weights)
  weights[, cashCol] <- weights[, cashCol] + 1-rowSums(weights) #add to risk-free asset all non-invested weight
  strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns
  if(returnWeights) {
    return(list(weights, strategyReturns))
  } else {
    return(strategyReturns)
  }
}

Essentially, little changed aside from some lines dealing with NAs (that is, securities that were not yet around at the time, whose prices are given as NAs).

To test whether the algorithm worked, I added TSLA and checked that it didn't break the code. Here is the new test code.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX", "TSLA")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
prices <- prices[ep,]
prices <- prices["1997-03::"]

getSymbols("^IRX", from="1990-01-01")
dailyYield <- (1+(Cl(IRX)/100))^(1/252) - 1
threeMoPrice <- cumprod(1+dailyYield)
threeMoPrice <- threeMoPrice["1997-03::"]
threeMoPrice <- threeMoPrice[endpoints(threeMoPrice, "months"),]

offensive <- EAA(prices, cashAsset="VBMFX", bestN=3)
defensive <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1)
offRF <- EAA(prices, cashAsset="VBMFX", bestN=3, monthlyRiskFree = threeMoPrice)
defRF <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1, monthlyRiskFree = threeMoPrice)
compare <- cbind(offensive, defensive, offRF, defRF)
colnames(compare) <- c("Offensive", "Defensive", "OffRF", "DefRF")
stats <- rbind(Return.annualized(compare)*100, StdDev.annualized(compare)*100, maxDrawdown(compare)*100, SharpeRatio.annualized(compare))
rownames(stats)[3] <- "Worst Drawdown"
charts.PerformanceSummary(compare)
stats

With the following statistics table and equity curve:

> stats
                                 Offensive Defensive      OffRF     DefRF
Annualized Return               17.6174693 13.805683 16.7376777 13.709368
Annualized Standard Deviation   22.7328695 13.765444 22.3854966 13.504313
Worst Drawdown                  25.3534015 12.135310 25.3559118 12.146654
Annualized Sharpe Ratio (Rf=0%)  0.7749778  1.002923  0.7477019  1.015184

Essentially, TSLA, a high-momentum, high-volatility stock, causes some consternation in the offensive variant of the algorithm. Let's look at the weight statistics of TSLA when it was in the portfolio.

test <- EAA(prices, cashAsset = "VBMFX", bestN=3, returnWeights=TRUE)
weights <- test[[1]]
summary(weights$TSLA[weights$TSLA > 0])

With the results:

    Index                 TSLA        
 Min.   :2011-07-29   Min.   :0.01614  
 1st Qu.:2012-09-14   1st Qu.:0.32345  
 Median :2013-07-31   Median :0.48542  
 Mean   :2013-06-20   Mean   :0.51415  
 3rd Qu.:2014-04-15   3rd Qu.:0.75631  
 Max.   :2014-12-31   Max.   :0.95793  

Also, to be clear, R's summary function was not created with xts objects in mind, so the Index statistics are pure nonsense (R is trying to compute summary statistics on the underlying numerical values of the date index; they have no relation to the TSLA weights). So if you ever call summary on an xts object, be aware that it is not giving you the dates corresponding to the weight statistics (if such dates even exist; e.g., the mean of the weights isn't an actual weight at any point in time).
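
If you actually want the dates on which TSLA carried a nonzero weight, a more appropriate sketch (using the weights object from above) would be something along these lines:

#pull out only the periods where TSLA has a positive weight, keeping the real dates
tslaActive <- weights$TSLA[weights$TSLA > 0]
head(index(tslaActive)) #the actual rebalance dates
summary(as.numeric(tslaActive)) #summary statistics on the weights themselves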

In any case, it seems that the offensive variant of the algorithm is susceptible to creating very poorly diversified portfolios, since it doesn't place any weight on security volatility, only on correlation. So if there is a very volatile instrument on a roaring trend, EAA will tell you to place your entire portfolio in that one instrument, which of course can be the correct thing to do if you know for certain that said trend will continue, until, of course, it doesn't.

I’m sure there are still some methods to account for instruments of wildly different risk/return profiles, even without the need of additional code, by varying the parameters. I just wanted to demonstrate the need to be aware of this phenomenon, which I happened upon simply by testing the portfolio for incongruous starting dates and just so happened to pick a “hot topic” stock.

Last (for this post), I’d like to make readers aware that the blogger Volatility Made Simple has created a variant of a strategy I had written about earlier (again, thanks to Mr. Helmuth Vollmeier for providing the initial foundation), in which he mixed signals from the three variants I had found to be in stable regions, and I’m really happy he has done so, as he’s one of the first people who have explicitly extended my work.

Unfortunately, said strategy is currently in drawdown. However, looking at its drawdown curve against that of XIV itself, it seems that volatility has been doing crazy things lately, and the drawdown has been worse in the past. I am concerned, however, that it may be a strategy prone to overfitting. It's a constant reminder that there is still more to learn, and more techniques to use to convince oneself that a backtest isn't just an overfit, data-mined, sample-dependent illusion with good marketing that will break down immediately upon looking at a larger sample. However, as I did not originate the strategy myself, I'd at least like to hope that whoever first came up with the VXV/VXMT ratio idea had some good rationale for the strategy to begin with.

In the immediate future, I'll be looking into change point analysis and Twitter's new breakout detection package.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

Adding a Risk-Free Rate To Your Analyses

First off, before beginning this post, I’d like to make my readers aware of the release of a book that I contributed almost an entire chapter for.

Quantitative Trading With R is a primer on quantitative trading in R written by Harry Georgakopoulos, one of Chicago’s better quants. I contributed almost the entire chapter on quantstrat. If you’ve been able to follow and understand the code I write on this blog, then said chapter will mostly be review and a basic nuts and bolts reference. But for those of my readers who gloss over the code and wait for the punchline, I highly recommend it. In addition, there are chapters on high frequency trading, options, spreads, and other things that I do not believe are available in any other book that actually teaches readers the details of implementation. Now, onto the post.

As part of my continuation of Elastic Asset Allocation, I wanted to cover how to implement a measure of a risk-free rate in your analyses. In this post, I’ll analyze two slight variations of EAA from the last post.

Essentially, the idea is that rather than look at absolute return, we should look at *excess* return over some sort of risk-free rate, such as the 13-week treasury bill.

Luckily, Yahoo actually *has* a way of getting the returns of a risk-free asset, namely the ^IRX index (the 13-week treasury bill yield). But first, let's get the similarities to the last post out of the way.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
prices <- prices[ep,]
prices <- prices["1997-03::"]

Okay, everything fine so far, same as before. Now here's the new innovation, brought to my attention by TrendXplorer. It turns out that the IRX index is the annualized yield on short-term (three-month) treasuries, quoted in percent. So by dividing by 100, adding 1, taking the 252nd root, and taking the cumulative product of the resulting daily yield, we can derive the "price" of the risk-free asset, and from that, compute daily returns (this is most likely redundant, but I want all my returns computed the same way).

getSymbols("^IRX", from="1990-01-01")
dailyYield <- (1+(Cl(IRX)/100))^(1/252) - 1
threeMoPrice <- cumprod(1+dailyYield)
threeMoPrice <- threeMoPrice["1997-03::"]
threeMoPrice <- threeMoPrice[endpoints(threeMoPrice, "months"),]

So how does this fit into EAA? Well, simply, I added a new argument called monthlyRiskFree, which will let a user pass in the monthly price series of the risk-free asset, in this case the derived IRX price series. That information is then used to compute a risk-free return, which is subtracted from the returns of all assets, and rather than taking the absolute return of the assets in the universe, instead the algorithm computes the return in excess of the risk-free asset.

Here’s the modified function:

EAA <- function(monthlyPrices, wR=1, wV=0, wC=.5, wS=2, errorJitter=1e-6, 
                cashAsset=NULL, bestN=1+ceiling(sqrt(ncol(monthlyPrices))),
                enableCrashProtection = TRUE, returnWeights=FALSE, monthlyRiskFree=NULL) {
  returns <- Return.calculate(monthlyPrices)
  returns <- returns[-1,] #return calculation uses one observation
  if(!is.null(monthlyRiskFree)) {
    returnsRF <- Return.calculate(monthlyRiskFree)
    returnsRF <- returnsRF[-1,]
  }
  
  if(is.null(cashAsset)) {
    returns$zeroes <- 0
    cashAsset <- "zeroes"
    warning("No cash security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. 
            Using vector of zeroes instead.")
  }
  
  cashCol <- grep(cashAsset, colnames(returns))
  
  weights <- list()
  for(i in 1:(nrow(returns)-11)) {
    returnsData <- returns[i:(i+11),] #each chunk will be 12 months of returns data
    #per-month mean of cumulative returns of 1, 3, 6, and 12 month periods
    periodReturn <- ((returnsData[12,] + Return.cumulative(returnsData[10:12,]) + 
                      Return.cumulative(returnsData[7:12,]) + Return.cumulative(returnsData)))/22
    
    if(!is.null(monthlyRiskFree)) {
      rfData <- returnsRF[i:(i+11),]
      rfReturn <- ((rfData[12,] + Return.cumulative(rfData[10:12,]) + 
                    Return.cumulative(rfData[7:12,]) + Return.cumulative(rfData)))/22
      periodReturn <- periodReturn - as.numeric(rfReturn)
    }
    
    vols <- StdDev.annualized(returnsData) 
    mktIndex <- xts(rowMeans(returnsData), order.by=index(returnsData)) #equal weight returns of universe
    cors <- cor(returnsData, mktIndex) #correlations to market index
    
    weightedRets <- periodReturn ^ wR
    weightedCors <- (1 - as.numeric(cors)) ^ wC
    weightedVols <- (vols + errorJitter) ^ wV
    wS <- wS + errorJitter
    
    z <- (weightedRets * weightedCors / weightedVols) ^ wS #compute z_i and zero out negative returns
    z[periodReturn < 0] <- 0
    crashProtection <- sum(z==0)/length(z) #compute crash protection cash cushion
    
    orderedZ <- sort(as.numeric(z), decreasing=TRUE)
    selectedSecurities <- z >= orderedZ[bestN]
    preNormalizedWeights <- z*selectedSecurities #select top N securities, keeping z_i scores
    periodWeights <- preNormalizedWeights/sum(preNormalizedWeights) #normalize
    if (enableCrashProtection) {
      periodWeights <- periodWeights * (1-crashProtection) #CP rule
    }
    weights[[i]] <- periodWeights
  }
  
  weights <- do.call(rbind, weights)
  weights[, cashCol] <- weights[, cashCol] + 1-rowSums(weights) #add to risk-free asset all non-invested weight
  strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns
  if(returnWeights) {
    return(list(weights, strategyReturns))
  } else {
    return(strategyReturns)
  }
}

Essentially, the key new block of code is this:

    if(!is.null(monthlyRiskFree)) {
      rfData <- returnsRF[i:(i+11),]
      rfReturn <- ((rfData[12,] + Return.cumulative(rfData[10:12,]) + 
                    Return.cumulative(rfData[7:12,]) + Return.cumulative(rfData)))/22
      periodReturn <- periodReturn - as.numeric(rfReturn)
    }

Which does exactly as I mentioned above — computes the EAA variant of the returns for the period for the risk-free asset and subtracts it from the other returns.

So how does using this new innovation compare to simply looking at absolute returns?

Let’s find out.

offensive <- EAA(prices, cashAsset="VBMFX", bestN=3)
defensive <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1)
offRF <- EAA(prices, cashAsset="VBMFX", bestN=3, monthlyRiskFree = threeMoPrice)
defRF <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1, monthlyRiskFree = threeMoPrice)
compare <- cbind(offensive, defensive, offRF, defRF)
colnames(compare) <- c("Offensive", "Defensive", "OffRF", "DefRF")
stats <- rbind(Return.annualized(compare)*100, StdDev.annualized(compare)*100, maxDrawdown(compare)*100, SharpeRatio.annualized(compare))
rownames(stats)[3] <- "Worst Drawdown"
charts.PerformanceSummary(compare)

Here’s the table of statistics:

                                Offensive Defensive     OffRF     DefRF
Annualized Return               12.206133 10.262766 11.415583 10.269146
Annualized Standard Deviation   11.352728  8.615134 10.551722  8.129250
Worst Drawdown                  12.629251  8.134785 14.351895  9.376533
Annualized Sharpe Ratio (Rf=0%)  1.075172  1.191249  1.081869  1.263234

And the corresponding chart:

In short, nope. No dice there. On the defensive portfolio, the change is negligible, and on the offensive side, it seems to encourage more risk than necessary, with…nothing to show for it, really.

That stated, just because this method didn’t pan out doesn’t mean that the actual mechanics of obtaining risk-free data are without value.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

For A New Year, A New Asset Allocation System Just Published in SSRN

Happy New Year!

So, this is something I've been working on before its official publication (so this is the first place on the entire internet that you'll see it outside SSRN, and certainly one of the few places that will extend it), in direct contact with the original paper's author, Dr. Wouter Keller, who also published the Flexible Asset Allocation algorithm that I improved upon earlier. It's called Elastic Asset Allocation, and it seems to be a simpler yet more general algorithm. Here's the link to the SSRN paper.

Essentially, the algorithm can be explained simply:

Use monthly data.

For each 12-month period, we want the monthly average of the 1, 3, 6, and 12 month cumulative returns (that is, sum all of those up and divide by 22), the volatility of the individual monthly returns, and the correlation of each asset's returns to the equal-weight average return of the universe. Then arrange those values in the following expression:

z_i = (r_i ^ wR * (1-c_i) ^ wC / (v_i + error) ^ wV) ^ (wS + error) if r_i > 0, 0 otherwise, where:

r_i are the average returns
c_i are the correlations
v_i are the volatilities

wR, wC, and wV are the respective weights for returns, correlations, and volatilities.

Next, select the top N of P assets to include in the portfolio.

Then, the weight for each selected security is its z_i divided by the sum of the selected z_i's (that is, the selected scores normalized to sum to one).

If the crash protection rule is enabled, compute CP, the number of securities in the entire universe (not just the selected securities) with negative returns, divided by the size of the universe, and multiply all weights by 1-CP. Reinvest the remainder in the cash asset (something like VBMFX, VFISX, SHY, etc.). In FAA, I called this the risk-free asset, but in this case, it's simply the cash asset.

The error term is 1e-6, or some other small value to avoid divide by zero errors.

Finally, wS is an aggression parameter. Setting this value to 0 essentially forces an equal weight portfolio, and infinity will simply select the best asset each month.
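
Before getting to the prototype, here is a small toy sketch of the scoring and weighting logic for a single period, with made-up numbers (the actual function below does this over a rolling 12-month window):

#hypothetical one-period example of the EAA scoring rule
r <- c(0.02, 0.05, -0.01, 0.03) #average period returns r_i
corr <- c(0.60, 0.20, 0.90, 0.40) #correlations to the equal-weight index c_i
vol <- c(0.15, 0.25, 0.10, 0.20) #annualized volatilities v_i
wR <- 1; wC <- .5; wV <- 0; wS <- 2; err <- 1e-6

z <- (r^wR * (1-corr)^wC / (vol+err)^wV)^(wS+err) #the negative-return asset becomes NaN here
z[r < 0] <- 0 #zero out negative returns, as in the function below

bestN <- 2
selected <- z >= sort(z, decreasing=TRUE)[bestN]
weights <- z*selected/sum(z*selected) #selected z_i's normalized to sum to one

CP <- sum(z == 0)/length(z) #crash protection fraction
weights*(1-CP) #final risky weights; the remainder goes to the cash asset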

Anyhow, let's look at a prototype of the code (it will bug out with NA returns, and it still doesn't include excess returns over some treasury security), using the original FAA securities. The weights are wR=1, wC=.5, and wV=0, with wS=2 for an offensive scheme, or wS=.5 and wC=1 for a more defensive scheme. Crash protection is enabled.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
prices <- prices[ep,]
prices <- prices["1997-03::"]

EAA <- function(monthlyPrices, wR=1, wV=0, wC=.5, wS=2, errorJitter=1e-6, 
                cashAsset=NULL, bestN=1+ceiling(sqrt(ncol(monthlyPrices))),
                enableCrashProtection = TRUE, returnWeights=FALSE) {
  returns <- Return.calculate(monthlyPrices)
  returns <- returns[-1,] #return calculation uses one observation
  
  if(is.null(cashAsset)) {
    returns$zeroes <- 0
    cashAsset <- "zeroes"
    warning("No cash security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. 
            Using vector of zeroes instead.")
  }
  
  cashCol <- grep(cashAsset, colnames(returns))
  
  weights <- list()
  for(i in 1:(nrow(returns)-11)) {
    returnsData <- returns[i:(i+11),] #each chunk will be 12 months of returns data
    #per-month mean of cumulative returns of 1, 3, 6, and 12 month periods
    periodReturn <- ((returnsData[12,] + Return.cumulative(returnsData[10:12,]) + 
      Return.cumulative(returnsData[7:12,]) + Return.cumulative(returnsData)))/22
    vols <- StdDev.annualized(returnsData) 
    mktIndex <- xts(rowMeans(returnsData), order.by=index(returnsData)) #equal weight returns of universe
    cors <- cor(returnsData, mktIndex) #correlations to market index
    
    weightedRets <- periodReturn ^ wR
    weightedCors <- (1 - as.numeric(cors)) ^ wC
    weightedVols <- (vols + errorJitter) ^ wV
    wS <- wS + errorJitter
    
    z <- (weightedRets * weightedCors / weightedVols) ^ wS #compute z_i and zero out negative returns
    z[periodReturn < 0] <- 0
    crashProtection <- sum(z==0)/length(z) #compute crash protection cash cushion
    
    orderedZ <- sort(as.numeric(z), decreasing=TRUE)
    selectedSecurities <- z >= orderedZ[bestN]
    preNormalizedWeights <- z*selectedSecurities #select top N securities, keeping z_i scores
    periodWeights <- preNormalizedWeights/sum(preNormalizedWeights) #normalize
    if (enableCrashProtection) {
      periodWeights <- periodWeights * (1-crashProtection) #CP rule
    }
    weights[[i]] <- periodWeights
  }
  
  weights <- do.call(rbind, weights)
  weights[, cashCol] <- weights[, cashCol] + 1-rowSums(weights) #add to risk-free asset all non-invested weight
  strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns
  if(returnWeights) {
    return(list(weights, strategyReturns))
  } else {
    return(strategyReturns)
  }
}

offensive <- EAA(prices, cashAsset="VBMFX", bestN=3)
defensive <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1)
compare <- cbind(offensive, defensive)
colnames(compare) <- c("Offensive", "Defensive")
stats <- rbind(Return.annualized(compare)*100, StdDev.annualized(compare)*100, maxDrawdown(compare)*100, SharpeRatio.annualized(compare))
rownames(stats)[3] <- "Worst Drawdown"
charts.PerformanceSummary(compare)

And here are the results:

                                Offensive Defensive
Annualized Return               12.085183 10.197450
Annualized Standard Deviation   11.372610  8.633327
Worst Drawdown                  12.629251  8.134785
Annualized Sharpe Ratio (Rf=0%)  1.062657  1.181173

And the resultant equity curves:

Risk and return on full display here. The defensive variant possessing a higher CAGR than max drawdown makes it look attractive at first glance, but I'm fairly certain that's due to investing heavily in bond funds during QE.

Not a bad algorithm, though the fact that there are only 7 securities here leaves it open to some idiosyncratic risk. The low-hanging fruit, of course, is that the correlation computation uses a single-pass variant, meaning that it is quite possible to select assets that are not correlated to the non-selected assets, but are indeed correlated to each other. However, since the actual correlations are used here, as opposed to correlation ranks, I am thinking that a stepwise correlation selection process would have to be written specifically for this algorithm. I'll no doubt run this by David Varadi and see how to properly set up a stepwise correlation algorithm when working with the interplay of correlations and other values (returns, volatility).

In any case, what I like about this algorithm is that not only is it a security *selection* algorithm, which is what FAA was, but it also addresses another aspect: the actual weighting, rather than simply leaving all selected assets at equal weight.

At the moment, this is still in its prototypical phase, and I am aware of the bugs of assets that don’t start at the same time, which will be fixed before I commit this to my IKTrading package. But I wanted to put this out there for those that wished to experiment with it and provide feedback if they came across something they believe is worth sharing.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The ZOMMA Warthog Index

Harry Long posted another article on SeekingAlpha. As usual, it’s another strategy to be taken with a grain of salt, but very much worth discussing. So here’s the strategy:

Rebalance annually:

50% XLP, 15% GLD, 35% TLT — aka a variation of the stocks/bonds portfolio, just with some gold added in. The twist is the idea that XLP is less risky than SPY. The original article, written by Harry Long, can be found here.

This strategy is about as simple as they come, and is tested just as easily in R.

Here’s the original replication:

require(PerformanceAnalytics)
require(quantmod)
getSymbols("XLP", from="1990-01-01")
getSymbols("GLD", from="1990-01-01")
getSymbols("TLT", from="1990-01-01")
prices <- cbind(Ad(XLP), Ad(GLD), Ad(TLT))
prices <- prices[!is.na(prices[, 2]),]
rets <- Return.calculate(prices)
warthogRets <- Return.portfolio(rets, weights=c(.5, .15, .35), rebalance_on = "years")
getSymbols("SPY", from="1990-01-01")
SPYrets <- Return.calculate(Ad(SPY))
comparison <- merge(warthogRets, SPYrets, join='inner')
charts.PerformanceSummary(comparison)

And the resulting replicated (approximately) equity curve:

So far, so good.

Now, let’s put the claim of outperforming over an entire cycle to the test.

years <- apply.yearly(comparison, Return.cumulative)
years <- years[-1,] #remove 2004
years
sum(years[,1] > years[,2])/nrow(years)
sapply(years, mean)

And here’s the output:

> years
           portfolio.returns SPY.Adjusted
2005-12-30        0.07117683   0.04834811
2006-12-29        0.10846819   0.15843582
2007-12-31        0.14529234   0.05142241
2008-12-31        0.05105269  -0.36791039
2009-12-31        0.03126667   0.26344690
2010-12-31        0.14424559   0.15053339
2011-12-30        0.20384853   0.01897321
2012-12-31        0.07186807   0.15991238
2013-12-31        0.04234810   0.32309145
2014-12-10        0.16070863   0.11534450
> sum(years[,1] > years[,2])/nrow(years)
[1] 0.5
> sapply(years, mean)
portfolio.returns      SPY.Adjusted 
       0.10302756        0.09215978 

So, even with SPY’s 2008 in mind, this strategy seems to break even on a year-to-year basis, and the returns are slightly better, which I suppose should be par for the course for a backtest against SPY through the recession.

Now, let’s just remove that one year and see how things stack up:

years <- years[-4,] #remove 2008
sapply(years, mean)

And we obtain the following results:

> sapply(years, mean)
portfolio.returns      SPY.Adjusted 
        0.1088025         0.1432787 

In short, the so-called outperformance is largely sample-dependent on the S&P having a terrible year. Beyond that, things don’t look so stellar for the strategy.

Lastly, anytime anyone makes a claim about a strategy outperforming a benchmark, one very, very easy test to do is to simply look at the equity curve of a “market neutral” strategy that shorts the benchmark against the strategy, and see if that looks like an equity curve one would be comfortable holding. Here are the results:

diff <- comparison[,1] - comparison[,2]
charts.PerformanceSummary(diff, main="short SPY against portfolio")

And the resulting equity curve of the outperformance:

Basically, ever since the crisis passed, the strategy has been flat to underperforming against SPY, which is fair enough considering that the market has been on a tear since then.

To hammer the point, let’s look at the equity curves from 2009 onwards.

#strategy against SPY post-crisis
charts.PerformanceSummary(comparison["2009::"])

With the result:

To its credit, the strategy’s equity curve is certainly a smoother ride. At the same time, the underperformance is clear as well. I suppose this makes sense as a portfolio invested 35% in bonds should expect to underperform a portfolio fully invested in equities on an absolute returns basis.

Now, as I did before, in It’s Amazing How Well Dumb Things Get Marketed, I’m going to backtest this strategy as far as I can using proxies from Quandl for gold and bonds, by going directly to the futures. Refer to that post to see that the proxies are relatively decent for the ETFs. In this case, I switched from adjusted to close prices, but the concept should still hold.

require(IKTrading)
goldFutures <- quandClean(stemCode = "CHRIS/CME_GC", verbose = TRUE)
thirtyBond <- quandClean(stemCode="CHRIS/CME_US", verbose = TRUE)
longPrices <- cbind(Cl(XLP), Cl(goldFutures), Cl(thirtyBond))
longRets <- Return.calculate(longPrices)
longRets <- longRets[!is.na(longRets[,1]),]
names(longRets) <- c("XLP", "Gold", "Bonds")
longWarthog <- Return.portfolio(longRets, weights=c(.5, .15, .35), rebalance_on = "years")
longComparison <- merge(longWarthog, SPYrets, join='inner')
charts.PerformanceSummary(longComparison)

And here’s the resultant equity curve comparison:

So, the idea that the strategy outperforms SPY on an absolute returns basis? It seems to be due to SPY's horrendous performance in 2008, which is a once-in-several-decades event. As you can see, if you had started at XLP's inception in 1998, the strategy would have been in drawdown from then until approximately 2005.

Let’s look at the risk/reward metrics for this timeframe:

stats <- rbind(Return.annualized(longComparison), 
      maxDrawdown(longComparison),
      SharpeRatio.annualized(longComparison),
      Return.annualized(longComparison)/maxDrawdown(longComparison)
)
rownames(stats)[4] <- "Return Over MaxDD"
stats

And the results:

> stats
                                portfolio.returns SPY.Adjusted
Annualized Return                       0.0409267   0.05344120
Worst Drawdown                          0.2439091   0.55186722
Annualized Sharpe Ratio (Rf=0%)         0.4846184   0.26295594
Return Over MaxDD                       0.1677949   0.09683706

To its credit, the strategy has a significantly better return-to-risk profile than SPY does, which I would hope would be the case for any backtest going up against SPY with 2008 in mind. Overall, the strategy has some merit as a defensive play, and for a simple static-allocation strategy that plays defense, it delivers what you might expect from a risk-averse strategy, which is reflected in a much better return-to-drawdown profile than the market. Nevertheless, one should be careful not to mistake a strategy whose focus is to control risk for a strategy that aims to swing for the fences. Furthermore, this is a strategy that Harry Long has been kind enough to give away, so take it for what it's worth: a simple-to-implement defensive strategy that delivers on its defensive promises.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

An Update on Flexible Asset Allocation

A few weeks back, after seeing my replication, one of the original authors of the Flexible Asset Allocation paper got in touch with me to suggest a slight adjustment to the code: rather than removing negative-momentum securities before performing any ranking, perform all ranking without taking absolute momentum into account, and only zero out negative absolute momentum securities at the very end, after allocating weights.

Here’s the new code:

FAA <- function(prices, monthLookback = 4,
                weightMom = 1, weightVol = .5, weightCor = .5, 
                riskFreeName = NULL, bestN = 3,
                stepCorRank = FALSE, stepStartMethod = c("best", "default"),
                geometric = TRUE, ...) {
  stepStartMethod <- stepStartMethod[1]
  if(is.null(riskFreeName)) {
    prices$zeroes <- 0
    riskFreeName <- "zeroes"
    warning("No risk-free security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. 
            Using vector of zeroes instead.")
  }
  returns <- Return.calculate(prices)
  monthlyEps <- endpoints(prices, on = "months")
  riskFreeCol <- grep(riskFreeName, colnames(prices))
  tmp <- list()
  dates <- list()
  
  for(i in 2:(length(monthlyEps) - monthLookback)) {
    #subset data
    priceData <- prices[monthlyEps[i]:monthlyEps[i+monthLookback],]
    returnsData <- returns[monthlyEps[i]:monthlyEps[i+monthLookback],]
    
    #perform computations
    momentum <- data.frame(t(t(priceData[nrow(priceData),])/t(priceData[1,]) - 1))
    momentum <- momentum[,!is.na(momentum)]
    #momentum[is.na(momentum)] <- -1 #set any NA momentum to negative 1 to keep R from crashing
    priceData <- priceData[,names(momentum)]
    returnsData <- returnsData[,names(momentum)]
    
    momRank <- rank(momentum)
    vols <- data.frame(StdDev(returnsData))
    volRank <- rank(-vols)
    cors <- cor(returnsData, use = "complete.obs")
    if (stepCorRank) {
      if(stepStartMethod=="best") {
        compositeMomVolRanks <- weightMom*momRank + weightVol*volRank
        maxRank <- compositeMomVolRanks[compositeMomVolRanks==max(compositeMomVolRanks)]
        corRank <- stepwiseCorRank(corMatrix=cors, startNames = names(maxRank), 
                                        bestHighestRank = TRUE, ...)
        
      } else {
        corRank <- stepwiseCorRank(corMatrix=cors, bestHighestRank=TRUE, ...)
      }
    } else {
      corRank <- rank(-rowSums(cors))
    }
    
    totalRank <- rank(weightMom*momRank + weightVol*volRank + weightCor*corRank)
    
    upper <- length(names(returnsData))
    lower <- max(upper-bestN+1, 1)
    topNvals <- sort(totalRank, partial=seq(from=upper, to=lower))[c(upper:lower)]
    
    #compute weights
    longs <- totalRank %in% topNvals #invest in ranks length - bestN or higher (in R, rank 1 is lowest)
    longs[momentum < 0] <- 0 #in previous algorithm, removed momentums < 0, this time, we zero them out at the end.
    longs <- longs/sum(longs) #equal weight all candidates
    longs[longs > 1/bestN] <- 1/bestN #in the event that we have fewer than top N invested into, lower weights to 1/top N
    names(longs) <- names(totalRank)
    
    
    #append removed names (those with momentum < 0)
    removedZeroes <- rep(0, ncol(returns)-length(longs))
    names(removedZeroes) <- names(returns)[!names(returns) %in% names(longs)]
    longs <- c(longs, removedZeroes)
    
    #reorder to be in the same column order as original returns/prices
    longs <- data.frame(t(longs))
    longs <- longs[, names(returns)]
    
    #append lists
    tmp[[i]] <- longs
    dates[[i]] <- index(returnsData)[nrow(returnsData)]
  }
  
  weights <- do.call(rbind, tmp)
  dates <- do.call(c, dates)
  weights <- xts(weights, order.by=as.Date(dates)) 
  weights[, riskFreeCol] <- weights[, riskFreeCol] + 1-rowSums(weights)
  strategyReturns <- Return.rebalancing(R = returns, weights = weights, geometric = geometric)
  colnames(strategyReturns) <- paste(monthLookback, weightMom, weightVol, weightCor, sep="_")
  return(strategyReturns)
}

And here are the new results, both with the original configuration, and using the stepwise correlation ranking algorithm introduced by David Varadi:

mutualFunds <- c("VTSMX", #Vanguard Total Stock Market Index
                 "FDIVX", #Fidelity Diversified International Fund
                 "VEIEX", #Vanguard Emerging Markets Stock Index Fund
                 "VFISX", #Vanguard Short-Term Treasury Fund
                 "VBMFX", #Vanguard Total Bond Market Index Fund
                 "QRAAX", #Oppenheimer Commodity Strategy Total Return 
                 "VGSIX" #Vanguard REIT Index Fund
)

#mid 1997 through end of October 2014
getSymbols(mutualFunds, from="1997-06-30", to="2014-10-30")
tmp <- list()
for(fund in mutualFunds) {
  tmp[[fund]] <- Ad(get(fund))
}

#always use a list when intending to cbind/rbind large quantities of objects
adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))

original <- FAA(adPrices, riskFreeName="VFISX")
swc <- FAA(adPrices, riskFreeName="VFISX", stepCorRank = TRUE)
originalOld <- FAAreturns(adPrices, riskFreeName="VFISX")
swcOld <- FAAreturns(adPrices, riskFreeName="VFISX", stepCorRank=TRUE)
all4 <- cbind(original, swc, originalOld, swcOld)
names(all4) <- c("original", "swc", "origOld", "swcOld")
charts.PerformanceSummary(all4)
> rbind(Return.annualized(all4)*100,
+       maxDrawdown(all4)*100,
+       SharpeRatio.annualized(all4))
                                 original       swc   origOld    swcOld
Annualized Return               12.795205 14.135997 13.221775 14.037137
Worst Drawdown                  11.361801 11.361801 13.082294 13.082294
Annualized Sharpe Ratio (Rf=0%)  1.455302  1.472924  1.377914  1.390025

And the resulting equity curve comparison:

Overall, it seems that filtering on absolute momentum after applying all weightings (using only relative momentum to rank) actually improves downside risk profiles ever so slightly compared to removing negative-momentum securities ahead of time. In any case, FAAreturns will remain the function that removes negative-momentum securities ahead of time, and FAA will be the one that removes them after all else is said and done.

I’ll return to the standard volatility trading agenda soon.

Thanks for reading.

Note: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

Seeking Volatility and Leverage

So Harry Long recently posted several articles, a couple of which are variations on a theme: a combination of leveraged SPY (aka SPXL), leveraged TLT (aka TMF), and a small exposure to the insanely volatile volatility products (VXX, TVIX, ZIV, etc.), which can have absolutely insane drawdowns. Again, before anything else, a special thanks to Mr. Helmuth Vollmeier for his generosity in providing long-dated VXX and ZIV data, both of which will be leveraged for this post (in more ways than one).

In any case, here is the link to the two articles:

A Weird All-Long Strategy That Beats the S&P 500 Every Year
A Refined All-Long Strategy III

As usual, the challenge is that the exact ETFs in question didn't exist prior to the financial crisis, giving a very handy justification for not showing the downsides of the strategy (or strategies). From a conceptual standpoint, it's quite trivial to realize upon reading the articles that when a large chunk of the portfolio consists of leveraged SPY exposure, one is obviously going to look like a genius and outperform SPY itself in a bull run. The question, obviously, is what happens when the market doesn't support the strategy. If offered a 50% coin flip where heads wins a million dollars, and told nothing else, the obvious question to ask would be: "and what happens on tails?"

This post aims to address this for three separate configurations of the strategy.

First off, in order to create a believable backtest, the goal is to create substitutes for the newfangled, short-history ETFs (SPXL and TMF), which will be done very simply: leverage the adjusted returns of SPY and TLT, respectively (I had to use adjusted data due to the split in SPXL–normally I don't like using adjusted data for anything, but splits sort of necessitate this evil).

Here we go:

SPXL vs. SPY:

require(quantmod)
require(PerformanceAnalytics)
require(downloader)

getSymbols("SPXL", from="1990-01-01")
spxlRets <- Return.calculate(Ad(SPXL)) #have to use adjusted due to split 
getSymbols("SPY", from="1990-01-01")
SPYrets <- Return.calculate(Ad(SPY))
spxl3SPY <- merge(spxlRets, 3*SPYrets, join='inner')
charts.PerformanceSummary(spxl3SPY)

So, the adjusted data in use for this simulation will slightly overshoot with regard to absolute returns. That stated, it isn't so much the returns we care about in this post (we know they're terrific when times are good), but the drawdowns. The drawdowns are basically on top of one another, which is good.
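
As a quick numerical side check of that claim, the worst drawdowns of the actual SPXL and the synthetic 3x SPY can be compared directly (a one-line sketch of my own; na.omit simply drops the leading NA that Return.calculate produces):

maxDrawdown(na.omit(spxl3SPY)) #worst drawdown of actual SPXL vs. synthetic 3x SPY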

Let’s repeat this with TMF and TLT:

getSymbols("TMF", from="1990-01-01")
TMFrets <- Return.calculate(Ad(TMF))
getSymbols("TLT", from="1990-01-01")
TLTrets <- Return.calculate(Ad(TLT))
tmf3TLT <- merge(TMFrets, 3*TLTrets, join='inner')
charts.PerformanceSummary(tmf3TLT)

The result:

> Return.annualized(tmf3TLT[,2]-tmf3TLT[,1])
                  TLT.Adjusted
Annualized Return   0.03123479

A bit more irritating, as there's clearly a discrepancy to the tune of approximately 3.1% a year in annualized returns in favor of the leveraged TLT vs. the actual TMF (so if you can borrow for less than 3% a year, this may be a good trade for you–though I'm completely in the dark about why this mechanic exists–is it impossible to actually short TMF, or buy TLT on margin? If someone is more intimately familiar with this trade, let me know). So, I'm going to make like an engineer and apply a little patch to remove the bias: subtract the daily equivalent of the annualized discrepancy from the leveraged adjusted TLT returns.

discrepancy <- as.numeric(Return.annualized(tmf3TLT[,2]-tmf3TLT[,1]))
tmf3TLT[,2] <- tmf3TLT[,2] - ((1+discrepancy)^(1/252)-1)

charts.PerformanceSummary(tmf3TLT)

Much better. Let’s save those modified TLT returns (our synthetic TMF):

modifiedTLT <- 3*TLTrets - ((1+discrepancy)^(1/252)-1)

With VXX, luckily, we simply need to compare Mr. Vollmeier's data to Yahoo's return data to verify that the two separate return streams agree.

#get long VXX -- thank you so much, Mr. Helmuth Vollmeier
download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
         destfile="longVXX.txt") #requires downloader package
VXXlong <- read.csv("longVXX.txt", stringsAsFactors=FALSE)
VXXlong <- xts(VXXlong[,2:5], order.by=as.Date(VXXlong$Date))
VXXrets <- Return.calculate(Cl(VXXlong)) #long data only has close

getSymbols("VXX", from="1990-01-01")
vxxYhooRets <- Return.calculate(Ad(VXX))
vxx2source <- merge(VXXrets, vxxYhooRets, join='inner')
charts.PerformanceSummary(vxx2source) #identical

And the result:

No discrepancies here whatsoever. So once again, I am very fortunate to have experienced readers commenting on this blog.
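
For those who prefer a number to an eyeball test, here is a quick sanity check along the same lines as the TLT comparison above (a sketch of my own; if the two sources agree, the annualized return of their difference should be negligible):

Return.annualized(vxx2source[,1] - vxx2source[,2]) #difference between the two VXX return streams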

So, with this in mind, let’s attempt to recreate the equity curve of the first strategy, which consists of 50% SPXL, 45% TMF, and 5% VXX.

rSPY_TLT_VXX <- cbind(3*SPYrets, modifiedTLT, VXXrets)
rSPY_TLT_VXX <- rSPY_TLT_VXX[!is.na(rSPY_TLT_VXX[,3]),]
colnames(rSPY_TLT_VXX) <- c("SPY", "TLT", "VXX")

strat <- Return.rebalancing(R = rSPY_TLT_VXX, weights = c(.5, .45, .05), 
                            rebalance_on = "years", geometric=TRUE)
stratAndSPY <- merge(strat, SPYrets, join='inner')
charts.PerformanceSummary(stratAndSPY["2009-04-16::"])

About there.

One other note, on a purely mechanical issue: when using the geometric = TRUE argument and creating synthetic leverage, you cannot express the leverage in the weights argument, or it will re-leverage your capital at every rebalancing period, giving obviously incorrect results. Furthermore, these results were achieved using geometric = TRUE in two places: once in the Return.rebalancing call (which implies reinvesting the capital), and once again when calling the PerformanceAnalytics charting and statistics functions. Essentially, the implication is reinvesting all gains at each rebalancing period, and not touching any position in between, no matter what. Used inappropriately, this will create results that border on the optimistic.
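
To make that point concrete, here is a minimal sketch. The "wrong" call is shown only as a comment, since it is precisely the shape that re-leverages capital at every rebalance; the "right" call bakes the leverage into the return streams and keeps the weights summing to 1, which is exactly what was done above. The names wrongWay and rightWay are purely for illustration:

#WRONG (sketch only): leverage expressed through the weights vector gets
#re-applied to the entire capital base at every rebalancing period
#wrongWay <- Return.rebalancing(R = cbind(SPYrets, TLTrets, VXXrets),
#                               weights = c(1.5, 1.35, .05),
#                               rebalance_on = "years", geometric = TRUE)

#RIGHT: leverage lives in the daily return streams; weights sum to 1
rightWay <- Return.rebalancing(R = rSPY_TLT_VXX, weights = c(.5, .45, .05),
                               rebalance_on = "years", geometric = TRUE)
all.equal(as.numeric(rightWay), as.numeric(strat)) #same series as strat computed above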

Now that we’ve replicated the general shape and pattern of the original equity curve, let’s look at this strategy on a whole.

charts.PerformanceSummary(stratAndSPY)

If you just look at the top chart, it looks pretty amazing, doesn’t it? Now look at the bottom chart. Not only is there a massive drawdown, but there’s a massive spike up, and then *another* massive, larger drawdown. Imagine what would have happened to someone who didn’t follow this strategy to the letter. Get out at the very worst moment, get back in after a run-up, and then get hit *again*.

Here are the usual statistics I use:

> Return.annualized(stratAndSPY)
                  portfolio.returns SPY.Adjusted
Annualized Return         0.2305339   0.07937642
> maxDrawdown(stratAndSPY)
               portfolio.returns SPY.Adjusted
Worst Drawdown         0.4901882    0.5518672
> SharpeRatio.annualized(stratAndSPY)
                                portfolio.returns SPY.Adjusted
Annualized Sharpe Ratio (Rf=0%)         0.9487574    0.3981902

An annualized Sharpe just shy of 1, using adjusted data, with a CAGR to max drawdown ratio of less than one half, and a max drawdown far beyond acceptable levels (even 20% may be too much for some people, though I'd argue it's acceptable over a long enough time frame, provided it's part of a diversified portfolio of other, uncorrelated strategies).
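
For reference, the CAGR to max drawdown ratio quoted above can be computed in one line (my own quick computation, mirroring the statsTable pattern used later in this post):

t(Return.annualized(stratAndSPY))/t(maxDrawdown(stratAndSPY)) #annualized return over worst drawdown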

Now, the claim is that this strategy consistently beats the S&P 500 year after year. That claim can also be tested.

diff <- stratAndSPY[,1] - stratAndSPY[,2]
diffAndModTLT <- cbind(diff, modifiedTLT)
charts.PerformanceSummary(diffAndModTLT)

Essentially, I shorted the SPY against the strategy (which simply means still being long the SPY, except at 50% instead of 150%), and this is the result, compared to the 3x leveraged TLT (reduced by the original discrepancy on a daily basis).

So even after shorting the SPY and its massive drawdown away, one is still left with what amounts to a diluted TMF position, which has its own issues. Here are the three statistics, once again:

> Return.annualized(diffAndModTLT)
                  portfolio.returns TLT.Adjusted
Annualized Return         0.1181923    0.1356003
> maxDrawdown(diffAndModTLT)
               portfolio.returns TLT.Adjusted
Worst Drawdown         0.3930016    0.6348332
> SharpeRatio.annualized(diffAndModTLT)
                                portfolio.returns TLT.Adjusted
Annualized Sharpe Ratio (Rf=0%)         0.4889822    0.3278975

In short, for a strategy that markets itself on beating the SPY, shorting the SPY against it costs more in upside than is gained on the downside. Generally, any time I see an article claiming "this strategy does really well against benchmark XYZ", my immediate intuition is: "so what does the equity curve look like when you short your benchmark against your strategy?" If the performance deteriorates, that once again means some tough questions need asking. That stated, the original strategy handily trounced the SPY benchmark, and the difference trounced the leveraged TLT on a risk-adjusted basis. It's just that my own personal benchmark is an annualized return to max drawdown ratio of 1 or more (meaning that even the worst streak can be made up within a year's time–or, more practically, that you generally don't go a year without getting paid).

Let’s move on to the second strategy. In this instance, it’s highly similar–50% SPXL (3x SPY), 40% TMF (3x TLT), and 10% TVIX (2x VXX). Again, let’s compare synthetic to actual.

getSymbols("TVIX", from="1990-01-01")
TVIXrets <- Return.calculate(Ad(TVIX))
vxxTvix <- merge(2*VXXrets, TVIXrets, join='inner')
charts.PerformanceSummary(vxxTvix) #about identical

We’re in luck. This chart is about identical, so no tricks necessary.

The other two instruments are identical, so we can move straight to the strategy.

First, let’s replicate an equity curve:

rSPY_TLT_VXX2 <- cbind(3*SPYrets, modifiedTLT,  2*VXXrets)
rSPY_TLT_VXX2 <- rSPY_TLT_VXX2[!is.na(rSPY_TLT_VXX2[,3]),]

stratTwo <- Return.rebalancing(R=rSPY_TLT_VXX2, weights = c(.5, .4, .1), rebalance_on="years", geometric=TRUE)
stratTwoAndSPY <- merge(stratTwo, SPYrets, join='inner')
charts.PerformanceSummary(stratTwoAndSPY["2010-11-30::"], geometric=TRUE)

General shape and pattern of the strategy’s equity curve achieved. What does it look like since the inception of the original VIX futures?

charts.PerformanceSummary(stratTwoAndSPY)

Very similar to the one before. Let’s look at them side by side.

bothStrats <- merge(strat, stratTwo, join='inner')
colnames(bothStrats) <- c("strategy one", "strategy two")
charts.PerformanceSummary(bothStrats)

First of all, let’s do a side by side comparison of the three statistics:

> Return.annualized(bothStrats)
                  strategy one strategy two
Annualized Return    0.2305339    0.2038783
> maxDrawdown(bothStrats)
               strategy one strategy two
Worst Drawdown    0.4901882    0.4721624
> SharpeRatio.annualized(bothStrats)
                                strategy one strategy two
Annualized Sharpe Ratio (Rf=0%)    0.9487574    0.9075242

The second strategy seems to be strictly worse than the first. If we short the second against the first, essentially, we'd be left with a 15% exposure to TLT and a -15% exposure to VXX. For a fun tangent, what does such a strategy's equity curve look like?

stratDiff <- bothStrats[,1] - bothStrats[,2]
charts.PerformanceSummary(stratDiff)

With the following statistics:

> Return.annualized(stratDiff)
                  strategy one
Annualized Return   0.02606221
> maxDrawdown(stratDiff)
[1] 0.1254502
> SharpeRatio.annualized(stratDiff)
                                strategy one
Annualized Sharpe Ratio (Rf=0%)    0.8544455

Basically, a 1 to 5 annualized return to max drawdown ratio. In short, this may be how a lot of mediocre managers go out of business–they see an idea that looks amazing, leverage it up, then have one short period of severe underperformance, where everything goes wrong for a small amount of time (e.g. the equity market-neutral quant meltdown of August 2007, the flash crash, etc.), and then the whole fund keels over. In fact, these spikes of underperformance are the absolute worst type of phenomenon that can happen to many systematic strategies, since they trigger the risk-exit mechanisms, only for performance to recover right before the strategy can get back in.

Finally, we have our third strategy, which introduces one last instrument–ZIV. Here’s the specification for that strategy:

30% SPXL
30% ZIV
30% TMF
10% TVIX

Again, let’s go through the process and get our replicated equity curve.

download("https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT", destfile="longZIV.txt")
ZIVlong <- read.csv("longZIV.txt", stringsAsFactors=FALSE)
ZIVlong <- xts(ZIVlong[,2:5], order.by=as.Date(ZIVlong$Date))
ZIVrets <- Return.calculate(Cl(ZIVlong))

strat3components <- cbind(3*SPYrets, ZIVrets, modifiedTLT, 2*VXXrets)
strat3components <- strat3components[!is.na(strat3components[,4]),]
stratThree <- Return.rebalancing(strat3components, weights=c(.3, .3, .3, .1), rebalance_on="years", geometric=TRUE)
stratThreeAndSPY <- merge(stratThree, SPYrets, join='inner')
chart.TimeSeries(log(cumprod(1+stratThreeAndSPY["2010-11-30::"])))

With the resulting equity curve replication:

And again, the full-backtest equity curve:

charts.PerformanceSummary(stratThreeAndSPY)

To put it together, let’s combine all three strategies, and the SPY.

threeStrats <- merge(bothStrats, stratThree, join='inner')
threeStratsSPY <- merge(threeStrats, SPYrets, join='inner')
colnames(threeStratsSPY)[3] <- "strategy three"
charts.PerformanceSummary(threeStratsSPY)

stats <- data.frame(cbind(t(Return.annualized(threeStratsSPY))*100, 
               t(maxDrawdown(threeStratsSPY))*100, 
               t(SharpeRatio.annualized(threeStratsSPY))))
stats$returnToDrawdown <- stats[,1]/stats[,2]

The resultant equity curve:

The resultant statistics:

> stats
               Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. returnToDrawdown
strategy.one           23.053387       49.01882                       0.9487574        0.4702967
strategy.two           20.387835       47.21624                       0.9075242        0.4317970
strategy three         15.812291       39.31843                       0.8019835        0.4021597
SPY.Adjusted            7.937642       55.18672                       0.3981902        0.1438325

In short, all of them share the same sort of profile–very strong annualized returns, even scarier drawdowns, Sharpe ratios close to 1 (albeit using adjusted data), and return to drawdown ratios slightly less than .5 (also scary). Are these complete strategies on their own? No. Do they beat the S&P 500? Yes. Does it make sense that they beat the S&P 500? Considering that two of these configurations have a greater than 100% market exposure, and the direction of the equity markets tends to be up over time (at least over the period during which the VIX futures traded), then this absolutely makes sense. Should one short the S&P against these strategies? I wouldn’t say so.

One last thing to note: the period over which these strategies were tested (since the inception of VIX futures) had no stagflation, and the Fed's QE may be partially (or largely) responsible for the rise in the equity markets since the crisis. If the market declines as a result of the Fed raising rates (which it inevitably will have to do at some point), these strategies might be seriously hurt, so I'd certainly advise a great deal of caution going forward.

In any case, for better or for worse, here are a few strategies from Seeking Alpha, replicated as far back as synthetic data allows.

Thanks for reading.

It’s Amazing How Well Dumb Things Work?

Recently, Harry Long posted not one but four new articles on Seeking Alpha in a series called "It's Amazing How Well Dumb Things Work II". The last time I replicated a strategy by Mr. Long, it came up short of the expectations it initially built. For the record, here are the links to the four-part series.

First post
Second post (very valuable comments section)
Third post
Fourth post

While that strategy itself wasn't the holy grail (nothing is, really, though it did outperform the S&P), it did pay off with some long-history ETF data for which I am extremely grateful (a project I'm currently working on involves those exact indices–I'll see if I can blog about it), and it gave me the chance to show some R functionality that I hadn't shown before that point, which, in my opinion, made the endeavor more than worth it.

For this particular strategy, I’m not so sure that’s the case.

In short, the strategy is this: the S&P 500, gold, and long-term bonds, rebalanced to an equal-weight portfolio annually. Dirt simple. So simple, in fact, that I can backtest this strategy back to 1978!

The data I used were the actual GSPC index from Yahoo Finance, and some Quandl futures data that I cleaned up with the quandClean function in my IKTrading package. While some of the futures I originally worked with have huge chunks of missing data, the long-term bond and gold futures are relatively intact. Gold had about 44 missing days in the 21st century on which GLD had returns, so there may be some data integrity issues there, though the average GLD return on those days is about -3 bps, so the general concept demonstration is intact. With the bonds, there were 36 missing days for which TLT had returns, but the average return on those days was about -18 bps, so I instead imputed those missing days as zero.

In any case, here’s the script to set up the data:

require(IKTrading)
require(PerformanceAnalytics)
require(quantmod)

getSymbols("^GSPC", from="1800-01-01")
SPrets <- Return.calculate(Cl(GSPC))

goldFutures <- quandClean(stemCode = "CHRIS/CME_GC", verbose = TRUE)
getSymbols("GLD", from="1990-01-01") #quandl's data had a few gaps--let's use GLD to fill them in.
goldGLD <- merge(Cl(goldFutures), Cl(GLD), join='outer')
goldRets <- Return.calculate(goldGLD)
sum(is.na(goldRets[,1]))
mean(goldRets[is.na(goldRets[,1]),2], na.rm=TRUE)
goldRets[is.na(goldRets[,1]),1] <- goldRets[is.na(goldRets[,1]),2] #impute missing returns data with GLD returns data for that day
goldRets <- goldRets[,1]

thirtyBond <- quandClean(stemCode="CHRIS/CME_US", verbose = TRUE)
getSymbols("TLT", from="1990-01-01")
bondTLT <- merge(Cl(thirtyBond), Cl(TLT), join='outer')
bondRets <- Return.calculate(bondTLT)
sum(is.na(bondRets[,1]))
mean(bondRets[is.na(bondRets[,1]),2], na.rm=TRUE) #18 basis points? Just going to impute as zero.
bondRets[is.na(bondRets[,1]),1] <- 0 #there are 259 other such instances prior to this imputing
bondRets <- bondRets[,1]

SPbondGold <- cbind(SPrets, goldRets, bondRets)
SPbondGold <- SPbondGold["1977-12-30::"] #start off at beginning of 1978, since that's when gold futures were first in inception
colnames(SPbondGold) <- c("SandP", "Bonds", "Gold")

DTW_II_returns <- Return.rebalancing(R = SPbondGold, weights = c(1/3, 1/3, 1/3), geometric = FALSE, rebalance_on = "years")
stratSP <- merge(DTW_II_returns, SPrets, join='inner')
colnames(stratSP) <- c("Harry Long Article", "S&P 500")

First, let’s recreate the original equity curve, to prove that I’m comparing apples to apples.

#Recreate the original equity curve
charts.PerformanceSummary(stratSP["2004-12-01::"])

So, as you can see, everything seems to match, article confirmed, and that’s good.

But I wrote the backtest to go back to 1978, just so we can see as much of the performance as we possibly can. So, let’s take a look. Now, keep in mind that the article stated the following two assertions:

“As we previously observed, the approach has higher returns and lower drawdowns across an entire bull and bear market cycle than the S&P 500…” and that “…a portfolio manager who employed this decidedly humble, simple approach, would actually have been doing well for clients as a fiduciary.”

Let’s see if the full-period equity curve supports these assertions.

charts.PerformanceSummary(stratSP)

Here are some numerical values to put this into perspective:

> Return.cumulative(stratSP)
                  Harry Long Article  S&P 500
Cumulative Return           6.762379 20.25606
> Return.annualized(stratSP)
                  Harry Long Article    S&P 500
Annualized Return         0.05713769 0.08640992
> SharpeRatio.annualized(stratSP)
                                Harry Long Article   S&P 500
Annualized Sharpe Ratio (Rf=0%)           0.583325 0.4913877
> maxDrawdown(stratSP)
               Harry Long Article   S&P 500
Worst Drawdown           0.334808 0.5677539

In short, 5.7% vs. 8.6% annualized return in favor of the S&P 500 since 1978, with both return streams sporting substantial drawdowns relative to their annualized returns. So while this strategy delivers a better risk-adjusted return, you get what you pay for with the risks taken: a smaller max drawdown for a similar Sharpe means a smaller return. Furthermore, the early 80s seem to have hurt this "all-weather" strategy more than they hurt the S&P 500.

For fun, let’s leverage the strategy 2x over and see what happens.

stratSP$leveragedStrat <- stratSP[,1]*2
charts.PerformanceSummary(stratSP)

And this is the resulting equity curve:

Hey, now we’re talking, right?! Well, the late 70s and early 80s say not exactly. But what if I wanted to make myself look better than the strategy justifies? Well, I can simply truncate those drawdowns, by just leaving off everything before 1985! (And put some marketing spin on it to justify it.)

charts.PerformanceSummary(stratSP["1985::"])

Here are some statistics:

> maxDrawdown(stratSP["1985::"])
               Harry.Long.Article   S.P.500 leveragedStrat
Worst Drawdown          0.2228608 0.5677539      0.4065492
> Return.annualized(stratSP["1985::"])
                  Harry.Long.Article    S.P.500 leveragedStrat
Annualized Return         0.05782724 0.08700208      0.1106892
> SharpeRatio.annualized(stratSP["1985::"])
                                Harry.Long.Article   S.P.500 leveragedStrat
Annualized Sharpe Ratio (Rf=0%)          0.6719875 0.4753468      0.6431376

Now look, higher returns, lower max drawdown, higher Sharpe, we can all go home happy?

Of course not. So what’s the point of this post?

Simply, to not believe everything you see at first glance. There's a phrase that's often thrown around: "you'll never see a bad backtest"–implying that if a backtest is bad, it gets thrown out, and all that's left are results with cherry-picked time frames, brute-force curve-fit parameter sets, and who knows what other methodologies baked in to make a strategy look as good as possible.

On my blog, I try my best to be on the opposite side of the spectrum. In instances in which ideas were published years ago, some low-hanging fruit is an out-of-sample test. On the other end, such as with Mr. Harry Long of Seeking Alpha here, when backtests cover a very small time frame owing to newly released instruments, I'll try to provide a much larger context. I provide the code, the data, and explanations in plain English, all to the best of my ability on this blog. If you don't wish to take my word for it, you can always run the scripts yourself. If worse comes to worst, leave a comment, and I'll answer to the best of my ability.

I’ll have another Harry Long backtest replication up soon.

Thanks for reading.

Combining FAA and Stepwise Correlation

Since I debuted the stepwise correlation algorithm, I suppose the punchline that people want to see is: does it actually work?

The short answer? Yes, it does.

A slightly longer answer: it works, with the caveat that a better correlation algorithm making up 25% of the total sum of weighted ranks only has a marginal (but nevertheless positive) effect on returns. Furthermore, when comparing a max decorrelation weighting using the default single-pass correlation vs. stepwise, the stepwise version gives a bumpier ride, but one with visibly larger returns. Also, for this universe, the difference between starting with the security ranked highest by the momentum and volatility components and starting with the security that has the smallest aggregate correlation to all the others is very small. Essentially, from my inspection, the verdict on stepwise correlation is: "it's a perfectly viable alternative, if not better."

Here are the functions used in the script:

require(quantmod)
require(PerformanceAnalytics)

stepwiseCorRank <- function(corMatrix, startNames=NULL, stepSize=1, bestHighestRank=FALSE) {
  #edge cases
  if(dim(corMatrix)[1] == 1) {
    return(corMatrix)
  } else if (dim(corMatrix)[1] == 2) {
    ranks <- c(1.5, 1.5)
    names(ranks) <- colnames(corMatrix)
    return(ranks)
  }
  
  if(is.null(startNames)) {
    corSums <- rowSums(corMatrix)
    corRanks <- rank(corSums)
    startNames <- names(corRanks)[corRanks <= stepSize]
  }
  nameList <- list()
  nameList[[1]] <- startNames
  rankList <- list()
  rankCount <- 1
  rankList[[1]] <- rep(rankCount, length(startNames))
  rankedNames <- do.call(c, nameList)
  
  while(length(rankedNames) < nrow(corMatrix)) {
    rankCount <- rankCount+1
    subsetCor <- corMatrix[, rankedNames]
    if(is.matrix(subsetCor)) { #more than one name has been ranked so far
      subsetCor <- subsetCor[!rownames(corMatrix) %in% rankedNames,]
      if(is.matrix(subsetCor)) { #more than one name remaining to rank
        corSums <- rowSums(subsetCor)
        corSumRank <- rank(corSums)
        lowestCorNames <- names(corSumRank)[corSumRank <= stepSize]
        nameList[[rankCount]] <- lowestCorNames
        rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
      } else { #1 name remaining
        nameList[[rankCount]] <- rownames(corMatrix)[!rownames(corMatrix) %in% names(subsetCor)]
        rankList[[rankCount]] <- rankCount
      }
    } else {  #first iteration, subset on first name
      subsetCorRank <- rank(subsetCor)
      lowestCorNames <- names(subsetCorRank)[subsetCorRank <= stepSize]
      nameList[[rankCount]] <- lowestCorNames
      rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
    }    
    rankedNames <- do.call(c, nameList)
  }
  
  ranks <- do.call(c, rankList)
  names(ranks) <- rankedNames
  if(bestHighestRank) {
    ranks <- 1+length(ranks)-ranks
  }
  ranks <- ranks[colnames(corMatrix)] #return to original order
  return(ranks)
}


FAAreturns <- function(prices, monthLookback = 4,
                       weightMom=1, weightVol=.5, weightCor=.5, 
                       riskFreeName="VFISX", bestN=3,
                       stepCorRank = FALSE, stepStartMethod=c("best", "default")) {
  stepStartMethod <- stepStartMethod[1]
  returns <- Return.calculate(prices)
  monthlyEps <- endpoints(prices, on = "months")
  riskFreeCol <- grep(riskFreeName, colnames(prices))
  tmp <- list()
  dates <- list()
  
  for(i in 2:(length(monthlyEps) - monthLookback)) {
    
    #subset data
    priceData <- prices[monthlyEps[i]:monthlyEps[i+monthLookback],]
    returnsData <- returns[monthlyEps[i]:monthlyEps[i+monthLookback],]
    
    #perform computations
    momentum <- data.frame(t(t(priceData[nrow(priceData),])/t(priceData[1,]) - 1))
    priceData <- priceData[, momentum > 0] #remove securities with momentum < 0
    returnsData <- returnsData[, momentum > 0]
    momentum <- momentum[momentum > 0]
    names(momentum) <- colnames(returnsData)
    vol <- as.numeric(-sd.annualized(returnsData))
    
    if(length(momentum) > 1) {
      
      #perform ranking
      if(!stepCorRank) {
        sumCors <- -colSums(cor(returnsData, use="complete.obs"))
        stats <- data.frame(cbind(momentum, vol, sumCors))
        ranks <- data.frame(apply(stats, 2, rank))
        weightRankSum <- weightMom*ranks$momentum + weightVol*ranks$vol + weightCor*ranks$sumCors
        names(weightRankSum) <- rownames(ranks)
      } else {
        corMatrix <- cor(returnsData, use="complete.obs")
        momRank <- rank(momentum)
        volRank <- rank(vol)
        compositeMomVolRanks <- weightMom*momRank + weightVol*volRank
        maxRank <- compositeMomVolRanks[compositeMomVolRanks==max(compositeMomVolRanks)]
        if(stepStartMethod=="default") {
          stepCorRanks <- stepwiseCorRank(corMatrix=corMatrix, startNames = NULL, 
                                          stepSize = 1, bestHighestRank = TRUE)
        } else {
          stepCorRanks <- stepwiseCorRank(corMatrix=corMatrix, startNames = names(maxRank), 
                                          stepSize = 1, bestHighestRank = TRUE)
        }
        weightRankSum <- weightMom*momRank + weightVol*volRank + weightCor*stepCorRanks
      }
      
      totalRank <- rank(weightRankSum)
      
      #find top N values, from http://stackoverflow.com/questions/2453326/fastest-way-to-find-second-third-highest-lowest-value-in-vector-or-column
      #thanks to Dr. Rob J. Hyndman
      upper <- length(names(returnsData))
      lower <- max(upper-bestN+1, 1)
      topNvals <- sort(totalRank, partial=seq(from=upper, to=lower))[c(upper:lower)]
      
      #compute weights
      longs <- totalRank %in% topNvals #invest in ranks length - bestN or higher (in R, rank 1 is lowest)
      longs <- longs/sum(longs) #equal weight all candidates
      longs[longs > 1/bestN] <- 1/bestN #in the event that we have fewer than top N invested into, lower weights to 1/top N
      names(longs) <- names(totalRank)
      
    } else if(length(momentum) == 1) { #only one security had positive momentum 
      longs <- 1/bestN
      names(longs) <- names(momentum)
    } else { #no securities had positive momentum 
      longs <- 1
      names(longs) <- riskFreeName
    }
    
    #append removed names (those with momentum < 0)
    removedZeroes <- rep(0, ncol(returns)-length(longs))
    names(removedZeroes) <- names(returns)[!names(returns) %in% names(longs)]
    longs <- c(longs, removedZeroes)
    
    #reorder to be in the same column order as original returns/prices
    longs <- data.frame(t(longs))
    longs <- longs[, names(returns)]
    
    #append lists
    tmp[[i]] <- longs
    dates[[i]] <- index(returnsData)[nrow(returnsData)]
  }
  
  weights <- do.call(rbind, tmp)
  dates <- do.call(c, dates)
  weights <- xts(weights, order.by=as.Date(dates)) 
  weights[, riskFreeCol] <- weights[, riskFreeCol] + 1-rowSums(weights)
  strategyReturns <- Return.rebalancing(R = returns, weights = weights, geometric = FALSE)
  colnames(strategyReturns) <- paste(monthLookback, weightMom, weightVol, weightCor, sep="_")
  return(strategyReturns)
}

The FAAreturns function has been modified to transplant the stepwise correlation algorithm I discussed earlier. Essentially, the chunk of code that performs the ranking inside the function got a little bit larger, and some new arguments to the function have been introduced.

First off, there’s the option to use the stepwise correlation algorithm in the first place–namely, the stepCorRank defaulting to FALSE (the default settings replicate the original FAA idea demonstrated in the first post on this idea). However, the argument that comes next, the stepStartMethod argument does the following:

Using the "default" setting, the algorithm will start off with the security that is least correlated to the rest (that is, the one with the lowest sum of correlations). The "best" setting, by contrast, starts with the security ranked highest by the weighted sum of the prior two factors (momentum and volatility). The function defaults to "best", not "default". A toy illustration of the difference follows; beyond that, I suppose the best way of illustrating the functionality is with some examples of taking this piece of engineering out for a spin. So here goes!
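
Here is a toy sketch of that difference, using a made-up four-asset correlation matrix (the names A through D and the toyCor object are purely hypothetical and not part of the strategy script):

toyCor <- matrix(c(1, .9, .3, .2,
                   .9, 1, .4, .1,
                   .3, .4, 1, .5,
                   .2, .1, .5, 1), nrow = 4,
                 dimnames = list(c("A", "B", "C", "D"), c("A", "B", "C", "D")))
stepwiseCorRank(toyCor, bestHighestRank = TRUE) #"default" behavior: seeds with the lowest aggregate correlation (D here)
stepwiseCorRank(toyCor, startNames = "A", bestHighestRank = TRUE) #"best" behavior: seeds with a name supplied by the caller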

mutualFunds <- c("VTSMX", #Vanguard Total Stock Market Index
                 "FDIVX", #Fidelity Diversified International Fund
                 "VEIEX", #Vanguard Emerging Markets Stock Index Fund
                 "VFISX", #Vanguard Short-Term Treasury Fund
                 "VBMFX", #Vanguard Total Bond Market Index Fund
                 "QRAAX", #Oppenheimer Commodity Strategy Total Return 
                 "VGSIX" #Vanguard REIT Index Fund
)

#mid 1997 to end of 2012
getSymbols(mutualFunds, from="1997-06-30", to="2012-12-31")
tmp <- list()
for(fund in mutualFunds) {
  tmp[[fund]] <- Ad(get(fund))
}

#always use a list when intending to cbind/rbind large quantities of objects
adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))

original <- FAAreturns(adPrices, stepCorRank=FALSE)
originalSWCbest <- FAAreturns(adPrices, stepCorRank=TRUE)
originalSWCdefault <- FAAreturns(adPrices, stepCorRank=TRUE, stepStartMethod="default")
stepMaxDecorBest <- FAAreturns(adPrices, weightMom=.05, weightVol=.025, 
                               weightCor=1, riskFreeName="VFISX", 
                               stepCorRank = TRUE, stepStartMethod="best")
stepMaxDecorDefault <- FAAreturns(adPrices, weightMom=.05, weightVol=.025, 
                                  weightCor=1, riskFreeName="VFISX", 
                                  stepCorRank = TRUE, stepStartMethod="default")
w311 <- FAAreturns(adPrices, weightMom=3, weightVol=1, weightCor=1, stepCorRank=TRUE)
originalMaxDecor <- FAAreturns(adPrices, weightMom=0, weightVol=1, stepCorRank=FALSE)
tmp <- cbind(original, originalSWCbest, originalSWCdefault, 
             stepMaxDecorBest, stepMaxDecorDefault, w311, originalMaxDecor)
names(tmp) <- c("original", "originalSWCbest", "originalSWCdefault", "SMDB", 
                "SMDD", "w311", "originalMaxDecor")
charts.PerformanceSummary(tmp, colorset=c("black", "orange", "blue", "purple", "green", "red", "darkred"))


statsTable <- data.frame(t(rbind(Return.annualized(tmp)*100, maxDrawdown(tmp)*100, SharpeRatio.annualized(tmp))))
statsTable$ReturnDrawdownRatio <- statsTable[,1]/statsTable[,2]

Same seven securities as the original paper, with the following return streams:

Original: the FAA original replication
originalSWCbest: original weights, stepwise correlation algorithm, using the best security as ranked by momentum and volatility as a starting point.
originalSWCdefault: original weights, stepwise correlation algorithm, using the default (minimum sum of correlations) security as a starting point.
stepMaxDecorBest: a max decorrelation algorithm that sets the momentum and volatility weights at .05 and .025 respectively, compared to 1 for correlation, simply to get the best starting security through the first two factors.
stepMaxDecorDefault: analogous to stepMaxDecorBest, except with the starting security defined as the one with the minimum sum of correlations.
w311: using a weighting of 3, 1, and 1 on momentum, volatility, and correlation, respectively, while using the stepwise correlation rank algorithm, starting with the best security (the default for the function), since I suspected that not weighting momentum at 1 or higher was the reason none of the other equity curves could top the paper's original.
originalMaxDecor: max decorrelation using the original 1-pass correlation matrix

Here is the performance chart:

Here’s the way I interpret it:

Does David Varadi's stepwise correlation ranking algorithm help performance? From this standpoint, the answer leans toward yes. Using the original paper's parameters, performance over the paper's backtest period is marginally better in terms of the equity curves. Comparing the max decorrelation algorithms (SMDB and SMDD stand for stepwise max decorrelation best and default, respectively), the difference is even clearer.

However, I was wondering why I could never actually outdo the original paper's annualized return, and out of interest, decided to weight the momentum ranking more heavily than the original paper had it. The result is a bumpier equity curve, but one that has a higher annualized return than any of the others. It's also something that I didn't try in my walk-forward example (though interested parties can simply modify the original momentum vector to contain a 1.5 weight, for instance; a sketch of the equivalent tweak within this post's function follows).
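
Within this post's function, that tweak amounts to nothing more than a heavier weightMom setting. Here is a sketch of such a call (the heavierMom name is hypothetical, and this permutation is not part of the tables below):

heavierMom <- FAAreturns(adPrices, weightMom = 1.5, weightVol = 1, weightCor = 1, stepCorRank = TRUE)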

Here’s the table of statistics for the different permutations:

> statsTable
                   Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. ReturnDrawdownRatio
original                    14.43802       13.15625                        1.489724            1.097427
originalSWCbest             14.70544       13.15625                        1.421045            1.117753
originalSWCdefault          14.68145       13.37059                        1.457418            1.098041
SMDB                        13.55656       12.33452                        1.410072            1.099075
SMDD                        13.18864       11.94587                        1.409608            1.104033
w311                        15.76213       13.85615                        1.398503            1.137555
originalMaxDecor            11.89159       11.68549                        1.434220            1.017637

At the end of the day, all of the permutations exhibit solid results, and fall along different ends of the risk/return curve. The original settings exhibit the highest Sharpe Ratio (barely), but not the highest annualized return to max drawdown ratio (which surprisingly, belongs to the setting that overweights momentum).

To wrap this analysis up (since there are other strategies that I wish to replicate), here is the out-of-sample performance of these seven strategies (to Oct 30, 2014):
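
To reproduce this out-of-sample section, the data needs to be re-pulled through the end of October 2014 and the return streams above re-run, since the getSymbols call earlier in this post stopped at the end of 2012. After that, the chart is just a date subset. A sketch:

getSymbols(mutualFunds, from="1997-06-30", to="2014-10-30") #extend the data, then re-run the block above
charts.PerformanceSummary(tmp["2013::"], colorset=c("black", "orange", "blue", "purple", "green", "red", "darkred"))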

Maybe not the greatest thing in the world considering the S&P has made some spectacular returns in 2013, but nevertheless, the momentum variant strategies established new equity highs fairly recently, and look to be on their way up from their latest slight drawdown. Here are the statistics for 2013-2014:

statsTable <- data.frame(t(rbind(Return.annualized(tmp["2013::"])*100, maxDrawdown(tmp["2013::"])*100, SharpeRatio.annualized(tmp["2013::"]))))
statsTable$ReturnDrawdownRatio <- statsTable[,1]/statsTable[,2]

> statsTable
                   Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. ReturnDrawdownRatio
original                    9.284678       8.259298                       1.1966581           1.1241485
originalSWCbest             8.308246       9.657667                       0.9627413           0.8602746
originalSWCdefault          8.916144       8.985685                       1.0861781           0.9922609
SMDB                        6.406438       9.657667                       0.8366559           0.6633525
SMDD                        5.641980       5.979313                       0.7840507           0.9435833
w311                        8.921268       9.025100                       1.0142871           0.9884953
originalMaxDecor            2.888778       6.670709                       0.4244202           0.4330542

So, the original parameters are working solidly, the stepwise correlation algorithm seems to be in a slight rut, and the variants without any emphasis on momentum simply aren't that great (they were created purely as illustrative tools to begin with). Whether you prefer to run FAA with these securities or with trading strategies of your own, my only caveat is that transaction costs haven't been taken into consideration (from what I hear, Interactive Brokers charges $1 per transaction, so it shouldn't make a world of difference), but beyond that, I believe these last four posts have shown that FAA is something that works. While it doesn't always work perfectly (e.g. the S&P 500 had a very good 2013), the logic is sound, and the results are solid, even given some rather plain-vanilla securities.

In any case, I think I’ll conclude with the fact that FAA works, and the stepwise correlation algorithm provides a viable alternative to computing your weights. I’ll update my IKTrading package with some formal documentation regarding this algorithm soon.

Thanks for reading.