This post will deal with applying the constant-volatility targeting procedure described by Barroso and Santa-Clara in their paper "Momentum Has Its Moments".

The last two posts dealt with evaluating the intelligence of the signal-generation process. The strategy showed itself to be only marginally better than randomly throwing darts at a dartboard, and I was ready to reject it and move on to topics that are slightly less of a toy example than a little rotation strategy. However, Brian Peterson told me to see this strategy through to the end, including testing out rule processes.

First off, to make a distinction, rules are not signals. Rules are essentially a way to quantify what exactly to do assuming one acts upon a signal. Things such as position sizing, stop-loss processes, and so on, all fall under rule processes.

This rule deals with using leverage in order to target a constant volatility.

So here’s the idea: in their paper, Pedro Barroso and Pedro Santa-Clara took the Fama-French momentum data and found that while the classic WML strategy certainly outperforms the market, it has a critical downside, namely momentum crashes, in which being on the wrong side of a momentum trade needlessly exposes a portfolio to catastrophically large drawdowns. While this strategy is long-only (and in fixed-income ETFs, no less), and so would seem more robust against such massive drawdowns, there’s no reason to leave money on the table. To note, Barroso and Santa-Clara are not the only ones to have covered this phenomenon; so have others, such as Tony Cooper in his paper "Alpha Generation and Risk Smoothing Using Volatility of Volatility".
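The mechanics of the rule are just a ratio: lever the strategy by (target volatility) / (trailing realized volatility), parking the remainder in cash. A minimal base-R sketch of that scaling, using a simulated return series (`dailyRets` here is made-up data, not the strategy’s actual returns):

```r
# Volatility targeting sketch: scale exposure by targetVol / realizedVol.
# dailyRets is a toy simulated series purely for illustration.
set.seed(42)
dailyRets <- rnorm(1000, mean = 0.0004, sd = 0.012)

volTarget <- 0.10   # 10% annualized target, as in the post
nSD <- 126          # trailing estimation window (days)

# trailing annualized volatility at the end of the sample
realizedVol <- sd(tail(dailyRets, nSD)) * sqrt(252)

# leverage on the risky sleeve; the rest (1 - leverage) sits in 0% cash
leverage <- leverage <- volTarget / realizedVol
leveredRets <- leverage * dailyRets
```

With a realized volatility above the target, the leverage comes out below 1, i.e. the rule de-levers into cash rather than gearing up.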

In any case, the setup here is simple: take the previous portfolios, consisting of 1-12 month momentum formation periods, and every month, compute the annualized standard deviation using a 21- to 252-day (by 21) estimation window, for a total of 12 x 12 = 144 trials. (This puts the total number of trials run so far at 24 + 144 = 168…bonus points if you know where this tidbit is going.)

Here’s the code (again, following on from the last post, which follows from the second post, which follows from the first post in this series).

require(xts)
require(TTR)
require(PerformanceAnalytics)
require(reshape2)
require(ggplot2)

ruleBacktest <- function(returns, nMonths, dailyReturns, nSD = 126, volTarget = .1) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- xts(nMonthAverage, order.by = index(returns))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, order.by = index(returns))
  selection <- (nMonthAvgRank == 5) * 1 # select highest average performance
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  constantVol <- volTarget / (runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months")])
  wts <- cbind(monthlyLeverage, 1 - monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0) # levered strategy plus zero-return cash
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  return(out)
}

t1 <- Sys.time()
allPermutations <- list()
for(i in seq(21, 252, by = 21)) {
  monthVariants <- list()
  for(j in 1:12) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial)[3,]
    monthVariants[[j]] <- sharpe
  }
  allPermutations[[i]] <- do.call(c, monthVariants)
}
allPermutations <- do.call(rbind, allPermutations)
t2 <- Sys.time()
print(t2 - t1)

rownames(allPermutations) <- seq(21, 252, by = 21)
colnames(allPermutations) <- 1:12

baselineSharpes <- table.AnnualizedReturns(algoPortfolios)[3,]
baselineSharpeMat <- matrix(rep(baselineSharpes, 12), ncol = 12, byrow = TRUE)
diffs <- allPermutations - as.numeric(baselineSharpeMat)

meltedDiffs <- melt(diffs)
colnames(meltedDiffs) <- c("volFormation", "momentumFormation", "sharpeDifference")
ggplot(meltedDiffs, aes(x = momentumFormation, y = volFormation, fill = sharpeDifference)) +
  geom_tile() + scale_fill_gradient2(high = "green", mid = "yellow", low = "red")

meltedSharpes <- melt(allPermutations)
colnames(meltedSharpes) <- c("volFormation", "momentumFormation", "Sharpe")
ggplot(meltedSharpes, aes(x = momentumFormation, y = volFormation, fill = Sharpe)) +
  geom_tile() + scale_fill_gradient2(high = "green", mid = "yellow", low = "red", midpoint = mean(allPermutations))

Again, there’s no parallel code, since this is a relatively small example and I don’t know which OS any given instance of R runs on (Windows and Linux have different parallelization infrastructure).

So the idea here is simply to compare the Sharpe ratios with different volatility lookback periods against the baseline signal-process-only portfolios. The reason I use Sharpe ratios, rather than, say, CAGR, volatility, or drawdown, is that Sharpe ratios are scale-invariant. In this case, I’m targeting an annualized volatility of 10%, but with a higher volatility target, one can obtain higher returns at the cost of larger drawdowns, or be more conservative with a lower one. The Sharpe ratio, however, should stay relatively consistent within reasonable bounds.
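That scale invariance is easy to verify: levering a return stream (ignoring financing costs, which would matter at real leverage) multiplies both the mean and the standard deviation by the same factor, leaving their ratio untouched. A toy check in base R on simulated returns:

```r
# Sharpe ratio is invariant to leverage: mean and sd scale together.
set.seed(1)
rets <- rnorm(252, mean = 0.0005, sd = 0.01)  # toy daily returns

# simple annualized Sharpe (zero risk-free rate)
sharpe <- function(r) mean(r) / sd(r) * sqrt(252)

sharpe(rets)      # base strategy
sharpe(2 * rets)  # 2x levered: same Sharpe ratio
```

CAGR and max drawdown, by contrast, both change with the leverage level, which is why they are the wrong yardstick for comparing volatility-targeted variants against the baseline.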

So here are the results:

Sharpe improvements:

In this case, the diagram shows that on the whole, once the volatility estimation period becomes long enough, the results are generally positive. If one uses a very short estimation period, the volatility estimate depends more on the last month’s choice of instrument than on the longer-term volatility of the system itself, which can create poor forecasts. Also of note: the one-month momentum formation period doesn’t seem especially amenable to the constant-volatility targeting scheme (there’s little improvement, if not a slight drag on risk-adjusted performance). This is interesting in that the baseline Sharpe ratio for the one-month formation is among the best of the baseline performances. On the whole, though, volatility targeting does improve the risk-adjusted performance of the system, even one as simple as throwing all your money into one asset every month based on a single momentum signal.

Absolute Sharpe ratios:

In this case, the absolute Sharpe ratios look fairly solid for such a simple system. The 3-, 7-, and 9-month variants are slightly lower, but once the volatility estimation period reaches between 126 and 252 days, the results are fairly robust. The Barroso and Santa-Clara paper uses a period of 126 days to estimate annualized volatility, which looks solid across the entire momentum-formation spectrum.

In any case, it seems the verdict is that a constant volatility target improves results.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

Hi!

Nice post! What did the maxDD look like after this modification?

That’d depend on which variant of the strategy I’d choose!




Would it be possible to show this approach/methodology using quantstrat coded strategies? Like RSI or something similar. Or, even better, debunk something like this: http://systemtradersuccess.com/simplest-system-youll-ever-find-sp-e-mini/ . Thanks.

Rotation strategies are not quantstrat strategies. Quantstrat strategies are generally for single instruments, and it’d take a LOT more work to pull off rotational strategies in quantstrat.

Quantstrat strategies are for testing strategies on individual instruments.

Yes, exactly, that is what I am interested in. I understand this. That is why I gave an example from a blog post on another site (it could just as easily be a simple RSI strategy). I think the exercise would be very educational: to put a typical signal-driven, single-instrument strategy through quantstrat and carefully move it through the “Hypothesis Driven Development” steps, so readers would also have a reference for how to do it with quantstrat. Maybe that would result in some “generalized” functions as additions to the IKTrading package.
