This post will introduce the component conditional value at risk mechanics found in PerformanceAnalytics, from a paper written by Brian Peterson, Kris Boudt, and Peter Carl. It is an easy-to-call mechanism for computing the component expected shortfall of the assets in a portfolio. While the exact mechanics are fairly complex, the upside is that the running time is nearly instantaneous, and the method is a solid tool to include in asset allocation analysis.
For those interested in an in-depth analysis of the intuition of component conditional value at risk, I refer them to the paper written by Brian Peterson, Peter Carl, and Kris Boudt.
Essentially, here’s the idea: every asset in a given portfolio has a marginal contribution to the portfolio’s total conditional value at risk (also known as expected shortfall)–that is, the expected loss when the loss surpasses a certain threshold. For instance, if you want to know your 5% expected shortfall, it’s the average of the worst 5 returns per 100 days, and so on. For returns at a daily resolution, it may sound as though there will never be enough data to estimate expected shortfall over a sufficiently short time frame (one year or less), so the expected shortfall formula in PerformanceAnalytics defaults to an approximation using a Cornish-Fisher expansion, which delivers very good results so long as the p-value isn’t too extreme (that is, it works for relatively sane p-values such as the 1%-10% range).
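As a quick, minimal illustration of that point (not part of the original post), PerformanceAnalytics’s ES function exposes both the historical calculation and the Cornish-Fisher (“modified”) approximation side by side:

require(PerformanceAnalytics)
data(edhec)
# 5% expected shortfall: empirical average of the worst 5% of returns...
ES(edhec[, "Convertible Arbitrage"], p = .95, method = "historical")
# ...versus the Cornish-Fisher approximation that the package defaults to
ES(edhec[, "Convertible Arbitrage"], p = .95, method = "modified")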
Component Conditional Value at Risk has two uses: first off, given no input weights, it uses an equal weight default, which allows it to provide a risk estimate for each individual asset without burdening the researcher to create his or her own correlation/covariance heuristics. Secondly, when provided with a set of weights, the output changes to reflect the contribution of various assets in proportion to those weights. This means that this methodology works very nicely with strategies that exclude assets based on momentum, but need a weighting scheme for the remaining assets. Furthermore, using this methodology also allows an ex-post analysis of risk contribution to see which instrument contributed what to risk.
First, a demonstration of how the mechanism works using the edhec data set. There is no strategy here, just a demonstration of syntax.
This will assume an equal-weight contribution from all of the funds in the edhec data set.
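The original text doesn’t show the call that produced the output below; a minimal version, using the edhec data bundled with PerformanceAnalytics, would look something like this (with no weights supplied, the component method defaults to equal weights):

require(PerformanceAnalytics)
data(edhec)
tmp <- CVaR(edhec, portfolio_method = "component")
tmp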
So tmp is the contribution to expected shortfall from each of the various edhec managers over the entire time period. Here’s the output:
$MES
[,1]
[1,] 0.03241585
$contribution
Convertible Arbitrage CTA Global Distressed Securities Emerging Markets Equity Market Neutral
0.0074750513 -0.0028125166 0.0039422674 0.0069376579 0.0008077760
Event Driven Fixed Income Arbitrage Global Macro Long/Short Equity Merger Arbitrage
0.0037114666 0.0043125937 0.0007173036 0.0036152960 0.0013693293
Relative Value Short Selling Funds of Funds
0.0037650911 -0.0048178690 0.0033924063
$pct_contrib_MES
Convertible Arbitrage CTA Global Distressed Securities Emerging Markets Equity Market Neutral
0.23059863 -0.08676361 0.12161541 0.21402052 0.02491917
Event Driven Fixed Income Arbitrage Global Macro Long/Short Equity Merger Arbitrage
0.11449542 0.13303965 0.02212817 0.11152864 0.04224258
Relative Value Short Selling Funds of Funds
0.11614968 -0.14862694 0.10465269
The salient part of this is the percent contribution (the last output). Notice that it can be negative, meaning that certain funds gain when others lose. At least, this was the case over the current data set. These assets diversify a portfolio and actually lower expected shortfall.
> tmp2 <- CVaR(edhec, portfolio_method = "component", weights = c(rep(.1, 10), rep(0,3)))
> tmp2
$MES
[,1]
[1,] 0.04017453
$contribution
Convertible Arbitrage CTA Global Distressed Securities Emerging Markets Equity Market Neutral
0.0086198045 -0.0046696862 0.0058778855 0.0109152240 0.0009596620
Event Driven Fixed Income Arbitrage Global Macro Long/Short Equity Merger Arbitrage
0.0054824325 0.0050398011 0.0009638502 0.0044568333 0.0025287234
Relative Value Short Selling Funds of Funds
0.0000000000 0.0000000000 0.0000000000
$pct_contrib_MES
Convertible Arbitrage CTA Global Distressed Securities Emerging Markets Equity Market Neutral
0.21455894 -0.11623499 0.14630875 0.27169512 0.02388732
Event Driven Fixed Income Arbitrage Global Macro Long/Short Equity Merger Arbitrage
0.13646538 0.12544767 0.02399157 0.11093679 0.06294345
Relative Value Short Selling Funds of Funds
0.00000000 0.00000000 0.00000000
In this case, I equally weighted the first ten managers in the edhec data set, and put zero weight in the last three. Furthermore, we can see what happens when the weights are not equal.
> tmp3 <- CVaR(edhec, portfolio_method = "component", weights = c(.2, rep(.1, 9), rep(0,3)))
> tmp3
$MES
[,1]
[1,] 0.04920372
$contribution
Convertible Arbitrage CTA Global Distressed Securities Emerging Markets Equity Market Neutral
0.0187406982 -0.0044391078 0.0057235762 0.0102706768 0.0007710434
Event Driven Fixed Income Arbitrage Global Macro Long/Short Equity Merger Arbitrage
0.0051541429 0.0055944367 0.0008028457 0.0044085104 0.0021768951
Relative Value Short Selling Funds of Funds
0.0000000000 0.0000000000 0.0000000000
$pct_contrib_MES
Convertible Arbitrage CTA Global Distressed Securities Emerging Markets Equity Market Neutral
0.38087972 -0.09021895 0.11632406 0.20873782 0.01567043
Event Driven Fixed Income Arbitrage Global Macro Long/Short Equity Merger Arbitrage
0.10475109 0.11369947 0.01631677 0.08959710 0.04424249
Relative Value Short Selling Funds of Funds
0.00000000 0.00000000 0.00000000
This time, notice that as the weight increased in the convertible arb manager, so too did its contribution to the portfolio’s expected shortfall.
For a future backtest, I would like to make some data requests. I would like to use the universe found in Faber’s Global Asset Allocation book. That said, the simulations in that book go back to 1972, and I was wondering if anyone out there has daily returns for those assets/indices. While some ETFs go back into the early 2000s, there are some that start rather late such as DBC (commodities, early 2006), GLD (gold, early 2004), BWX (foreign bonds, late 2007), and FTY (NAREIT, early 2007). As an eight-year backtest would be a bit short, I was wondering if anyone had data with more history.
One other thing: I will be in New York for the trading show, speaking on the “programming wars” panel on October 6th.
Thanks for reading.
NOTE: While I am currently contracting, I am also looking for a permanent position which can benefit from my skills for when my current contract ends. If you have or are aware of such an opening, I will be happy to speak with you.
This post will demonstrate how to take into account turnover when dealing with returns-based data using PerformanceAnalytics and the Return.Portfolio function in R. It will demonstrate this on a basic strategy on the nine sector SPDRs.
So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which have already been demonstrated on this blog), occasionally an industry veteran, a PhD statistics student from Stanford, or some other very intelligent individual will ask a question on a topic that I haven’t yet touched on here, which prompts a post demonstrating another technical aspect of R. This is one of those times.
So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply, outside of the PortfolioAnalytics package, PerformanceAnalytics with its Return.Portfolio function is the go-to R package for portfolio management simulations, as it can take a set of weights, a set of returns, and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics’s functions.
Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest into it. Normalize across all invested sectors (that is, 1/9th if invested into all 9, 100% into 1 if only 1 invested into, 100% cash, denoted with a zero return vector, if no sectors are invested into). It’s a simple, toy strategy, as the strategy isn’t the point of the demonstration.
Here’s the basic setup code:
require(TTR)
require(PerformanceAnalytics)
require(quantmod)
symbols <- c("XLF", "XLK", "XLU", "XLE", "XLP", "XLI", "XLB", "XLV", "XLY")
getSymbols(symbols, src='yahoo', from = '1990-01-01')
prices <- list()
for(i in 1:length(symbols)) {
tmp <- Ad(get(symbols[[i]]))
prices[[i]] <- tmp
}
prices <- do.call(cbind, prices)
# Our signal is a simple adjusted price over 200 day SMA
signal <- prices > xts(apply(prices, 2, SMA, n = 200), order.by=index(prices))
# equal weight all assets with price above SMA200
returns <- Return.calculate(prices)
weights <- signal/(rowSums(signal)+1e-16)
# With Return.portfolio, need all weights to sum to 1
weights$zeroes <- 1 - rowSums(weights)
returns$zeroes <- 0
monthlyWeights <- na.omit(weights[endpoints(weights, on = 'months'),])
weights <- na.omit(weights)
returns <- na.omit(returns)
So, get the SPDRs, put them together, compute their returns, generate the signal, and create the zero-return cash vector, since Return.portfolio treats weights summing to less than 1 as a withdrawal, and weights summing to more than 1 as the addition of more capital (big FYI here).
So, the trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. These stand for Beginning Of Period Weight, and End Of Period Weight.
The way turnover is computed is simply as the difference between how the day’s return moves the allocated portfolio from its previous ending point to where that portfolio actually stands at the beginning of the next period. That is, the end of period weight is the beginning of period weight adjusted for that day’s drift/return in each asset. The new beginning of period weight is the previous end of period weight plus any transacting that would have been done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end of period weight from the beginning of period weight.
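The call that generates the turnover series used below isn’t shown in the text, so here is a minimal sketch under the setup above; the object names (out, monthlyTO) and the choice to sum over the nine sector columns (excluding the zero-return cash column) are my own assumptions:

out <- Return.portfolio(R = returns, weights = monthlyWeights, verbose = TRUE)
beginWeights <- out$BOP.Weight
endWeights <- out$EOP.Weight
# transactions: where the new period begins minus where the previous period ended
txns <- beginWeights - lag(endWeights)
# two-way turnover: sum of absolute transacted weights across the nine sector ETFs
monthlyTO <- xts(rowSums(abs(txns[, 1:9])), order.by = index(txns))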
This is what such transactions look like for this strategy.
Something we can do with such data is take a one-year rolling turnover, accomplished with the following code:
yearlyTO <- runSum(monthlyTO, 252)
plot(yearlyTO, main = "running one year turnover")
It looks like this:
This essentially means that one year’s worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.
Now, here’s the application when you penalize transaction costs at 20 basis points per percentage point traded (that is, it costs 20 cents to transact $100).
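The penalty code isn’t shown either; a hedged sketch of one way to apply it, continuing from the objects above (20 basis points applied to the two-way turnover):

txnCosts <- monthlyTO * 20 / 10000       # 20 bps per unit of weight traded
txnCosts[is.na(txnCosts)] <- 0           # first observation has no prior EOP weight
stratRets <- out$returns                 # gross strategy returns from the verbose call
netRets <- stratRets - txnCosts
rets <- cbind(stratRets, netRets)
colnames(rets) <- c("gross", "net")
charts.PerformanceSummary(rets)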
So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.
So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as they would like with modeling the transaction costs.
Thanks for reading.
NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you’ll be able to find me there.
This post will feature the differences between my implementation of the constrained critical line algorithm and that of Dr. Clarence Kwan. The constrained critical line algorithm is a form of gradient descent that incorporates elements of momentum. My implementation includes a volatility-targeting binary search algorithm.
First off, rather than try to explain the algorithm piece by piece, I’ll defer to Dr. Clarence Kwan’s paper and Excel spreadsheet, from which I obtained my original implementation. Since that paper and spreadsheet explain the functionality of the algorithm, I won’t repeat that process here. Essentially, the constrained critical line algorithm incorporates its lambda constraints into the structure of the covariance matrix itself. This innovation actually allows the algorithm to invert previously rank-deficient matrices.
Now, while Markowitz mean-variance optimization may be a bit of old news for some, the ability to use a short lookback for momentum with monthly data has allowed me and my two coauthors (Dr. Wouter Keller, who came up with flexible and elastic asset allocation, and Adam Butler, of GestaltU) to perform a backtest on a century’s worth of assets, with more than 30 assets in the backtest, despite using only a 12-month formation period. That paper can be found here.
Let’s look at the code for the function.
CCLA <- function(covMat, retForecast, maxIter = 1000,
verbose = FALSE, scale = 252,
weightLimit = .7, volThresh = .1) {
if(length(retForecast) > length(unique(retForecast))) {
sequentialNoise <- seq(1:length(retForecast)) * 1e-12
retForecast <- retForecast + sequentialNoise
}
#initialize original out/in/up status
if(length(weightLimit) == 1) {
weightLimit <- rep(weightLimit, ncol(covMat))
}
rankForecast <- length(retForecast) - rank(retForecast) + 1
remainingWeight <- 1 #have 100% of weight to allocate
upStatus <- inStatus <- rep(0, ncol(covMat))
i <- 1
while(remainingWeight > 0) {
securityLimit <- weightLimit[rankForecast == i]
if(securityLimit < remainingWeight) {
upStatus[rankForecast == i] <- 1 #if we can't invest all remaining weight into the security
remainingWeight <- remainingWeight - securityLimit
} else {
inStatus[rankForecast == i] <- 1
remainingWeight <- 0
}
i <- i + 1
}
#initial matrices (W, H, K, identity, negative identity)
covMat <- as.matrix(covMat)
retForecast <- as.numeric(retForecast)
init_W <- cbind(2*covMat, rep(-1, ncol(covMat)))
init_W <- rbind(init_W, c(rep(1, ncol(covMat)), 0))
H_vec <- c(rep(0, ncol(covMat)), 1)
K_vec <- c(retForecast, 0)
negIdentity <- -1*diag(ncol(init_W))
identity <- diag(ncol(init_W))
matrixDim <- nrow(init_W)
weightLimMat <- matrix(rep(weightLimit, matrixDim), ncol=ncol(covMat), byrow=TRUE)
#out status is simply what isn't in or up
outStatus <- 1 - inStatus - upStatus
#initialize expected volatility/count/turning points data structure
expVol <- Inf
lambda <- 100
count <- 0
turningPoints <- list()
while(lambda > 0 & count < maxIter) {
#old lambda and old expected volatility for use with numerical algorithms
oldLambda <- lambda
oldVol <- expVol
count <- count + 1
#compute W, A, B
inMat <- matrix(rep(c(inStatus, 1), matrixDim), nrow = matrixDim, byrow = TRUE)
upMat <- matrix(rep(c(upStatus, 0), matrixDim), nrow = matrixDim, byrow = TRUE)
outMat <- matrix(rep(c(outStatus, 0), matrixDim), nrow = matrixDim, byrow = TRUE)
W <- inMat * init_W + upMat * identity + outMat * negIdentity
inv_W <- solve(W)
modified_H <- H_vec - rowSums(weightLimMat* upMat[,-matrixDim] * init_W[,-matrixDim])
A_vec <- inv_W %*% modified_H
B_vec <- inv_W %*% K_vec
#remove the last elements from A and B vectors
truncA <- A_vec[-length(A_vec)]
truncB <- B_vec[-length(B_vec)]
#compute in Ratio (aka Ratio(1) in Kwan.xls)
inRatio <- rep(0, ncol(covMat))
inRatio[truncB > 0] <- -truncA[truncB > 0]/truncB[truncB > 0]
#compute up Ratio (aka Ratio(2) in Kwan.xls)
upRatio <- rep(0, ncol(covMat))
upRatioIndices <- which(inStatus==TRUE & truncB < 0)
if(length(upRatioIndices) > 0) {
upRatio[upRatioIndices] <- (weightLimit[upRatioIndices] - truncA[upRatioIndices]) / truncB[upRatioIndices]
}
#find lambda -- max of up and in ratios
maxInRatio <- max(inRatio)
maxUpRatio <- max(upRatio)
lambda <- max(maxInRatio, maxUpRatio)
#compute new weights
wts <- inStatus*(truncA + truncB * lambda) + upStatus * weightLimit + outStatus * 0
#compute expected return and new expected volatility
expRet <- t(retForecast) %*% wts
expVol <- sqrt(wts %*% covMat %*% wts) * sqrt(scale)
#create turning point data row and append it to turning points
turningPoint <- cbind(count, expRet, lambda, expVol, t(wts))
colnames(turningPoint) <- c("CP", "Exp. Ret.", "Lambda", "Exp. Vol.", colnames(covMat))
turningPoints[[count]] <- turningPoint
#binary search for volatility threshold -- if the first iteration is lower than the threshold,
#then immediately return, otherwise perform the binary search until convergence of lambda
if(oldVol == Inf & expVol < volThresh) {
turningPoints <- do.call(rbind, turningPoints)
threshWts <- tail(turningPoints, 1)
return(list(turningPoints, threshWts))
} else if(oldVol > volThresh & expVol < volThresh) {
upLambda <- oldLambda
dnLambda <- lambda
meanLambda <- (upLambda + dnLambda)/2
while(upLambda - dnLambda > .00001) {
#compute mean lambda and recompute weights, expected return, and expected vol
meanLambda <- (upLambda + dnLambda)/2
wts <- inStatus*(truncA + truncB * meanLambda) + upStatus * weightLimit + outStatus * 0
expRet <- t(retForecast) %*% wts
expVol <- sqrt(wts %*% covMat %*% wts) * sqrt(scale)
#if new expected vol is less than threshold, mean becomes lower bound
#otherwise, it becomes the upper bound, and loop repeats
if(expVol < volThresh) {
dnLambda <- meanLambda
} else {
upLambda <- meanLambda
}
}
#once the binary search completes, return those weights, and the corner points
#computed until the binary search. The corner points aren't used anywhere, but they're there.
threshWts <- cbind(count, expRet, meanLambda, expVol, t(wts))
colnames(turningPoint) <- colnames(threshWts) <- c("CP", "Exp. Ret.", "Lambda", "Exp. Vol.", colnames(covMat))
turningPoints[[count]] <- turningPoint
turningPoints <- do.call(rbind, turningPoints)
return(list(turningPoints, threshWts))
}
#this is only run for the corner points during which binary search doesn't take place
#change status of security that has new lambda
if(maxInRatio > maxUpRatio) {
inStatus[inRatio == maxInRatio] <- 1 - inStatus[inRatio == maxInRatio]
upStatus[inRatio == maxInRatio] <- 0
} else {
upStatus[upRatio == maxUpRatio] <- 1 - upStatus[upRatio == maxUpRatio]
inStatus[upRatio == maxUpRatio] <- 0
}
outStatus <- 1 - inStatus - upStatus
}
#we only get here if the volatility threshold isn't reached
#can actually happen if set sufficiently low
turningPoints <- do.call(rbind, turningPoints)
threshWts <- tail(turningPoints, 1)
return(list(turningPoints, threshWts))
}
Essentially, the algorithm can be divided into three parts:
The first part is the initialization, which does the following:
It creates three status vectors: in, up, and out. The up vector denotes which securities are at their weight constraint cap, the in status are securities that are not at their weight cap, and the out status are securities that receive no weighting on that iteration of the algorithm.
The rest of the algorithm essentially does the following:
It takes a gradient descent approach by changing the status of the security that minimizes lambda, which by extension minimizes the volatility at the local point. As long as lambda is greater than zero, the algorithm continues to iterate. Letting the algorithm run until convergence effectively provides the volatility-minimization portfolio on the efficient frontier.
However, one change that Dr. Keller and I made to it is the volatility-targeting functionality, which allows the algorithm to stop between iterations. As the SSRN paper shows, a higher volatility threshold will, over the long run (the *VERY* long run), deliver higher returns.
In any case, the algorithm takes into account several main arguments:
A return forecast, a covariance matrix, and weight limits, which can be either a single number (resulting in a uniform weight limit) or a per-security vector of limits. Another argument is scale, which is 252 for days, 12 for months, and so on. Lastly, there is the volatility threshold, which allows the user to modify how aggressive or conservative the strategy can be.
In any case, to demonstrate this function, let’s run a backtest. The idea in this case will come from a recent article published by Frank Grossmann from SeekingAlpha, in which he obtained a 20% CAGR but with a 36% max drawdown.
In essence, we take the returns over a specified monthly lookback period, specify a volatility threshold, specify asset caps, specify the bond asset cap, and whether or not we wish to use TLT or TMF (a 3x leveraged variant, which just multiplies all returns of TLT by 3, for simplicity). The output of the CCLA (Constrained Critical Line Algorithm) is a list that contains the corner points, and the volatility threshold corner point which contains the corner point number, expected return, expected volatility, and the lambda value. So, we want the fifth element onward of the second element of the list.
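To make the “fifth element onward” comment concrete, here is how a call to the function above might be unpacked; the input objects (a return forecast and covariance matrix computed from the monthly lookback) are placeholders:

result <- CCLA(covMat = covMat, retForecast = retForecast, scale = 12, volThresh = .1)
threshWts <- result[[2]]                  # the volatility-threshold corner point
# columns 1 through 4 are CP, Exp. Ret., Lambda, and Exp. Vol.; the weights start at column 5
wts <- threshWts[, 5:ncol(threshWts)]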
In this case: more risk, more reward, but lower risk/reward ratios as you push the volatility threshold higher. So for once, the volatility puzzle doesn’t rear its head, and higher risk indeed does translate to higher returns (at the cost of everything else, though).
Here’s the equity curve.
Lastly, let’s try toggling the asset cap limits with the vol threshold back at 10.
Essentially, in this case, there was very little actual change from simply tweaking weight limits. Here’s an equity curve:
To conclude, while this backtest doesn’t exactly achieve the same aggregate returns or Sharpe ratio that the SeekingAlpha article did, it does highlight a probable cause of that strategy’s major drawdown, and it demonstrates the levers of the constrained critical line algorithm, the mechanics of which are detailed in the papers linked to earlier.
This post goes over JP Morgan’s SCTO strategy, a basic XL-sector/RWR rotation strategy with the risks and returns typically associated with a momentum equity strategy. It’s nothing spectacular, but if a large bank markets it, it’s worth looking at.
Recently, one of my readers, a managing director at a quantitative investment firm, sent me a request to write a rotation strategy based around the 9 sector spiders and RWR. The way it works (or at least, the way I interpreted it) is this:
Every month, compute the return (not sure how “the return” is defined) and rank. Take the top 5 ranks, and weight them in a normalized fashion to the inverse of their 22-day volatility. Zero out any that have negative returns. Lastly, check the predicted annualized vol of the portfolio, and if it’s greater than 20%, bring it back down to 20%. The cash asset–SHY–receives any remaining allocation due to setting securities to zero.
sctoStrat <- function(returns, cashAsset = "SHY", lookback = 4, annVolLimit = .2,
topN = 5, scale = 252) {
ep <- endpoints(returns, on = "months")
weights <- list()
cashCol <- grep(cashAsset, colnames(returns))
#remove cash from asset returns
cashRets <- returns[, cashCol]
assetRets <- returns[, -cashCol]
for(i in 2:(length(ep) - lookback)) {
retSubset <- assetRets[ep[i]:ep[i+lookback]]
#forecast is the cumulative return of the lookback period
forecast <- Return.cumulative(retSubset)
#annualized (realized) volatility uses a 22-day lookback period
annVol <- StdDev.annualized(tail(retSubset, 22))
#rank the forecasts (the cumulative returns of the lookback)
rankForecast <- rank(forecast) - ncol(assetRets) + topN
#weight is inversely proportional to annualized vol
weight <- 1/annVol
#zero out anything not in the top N assets
weight[rankForecast <= 0] <- 0
#normalize and zero out anything with a negative return
weight <- weight/sum(weight)
weight[forecast < 0] <- 0
#compute forecasted vol of portfolio
forecastVol <- sqrt(as.numeric(t(weight)) %*%
cov(retSubset) %*%
as.numeric(weight)) * sqrt(scale)
#if forecasted vol greater than vol limit, cut it down
if(as.numeric(forecastVol) > annVolLimit) {
weight <- weight * annVolLimit/as.numeric(forecastVol)
}
weights[[i]] <- xts(weight, order.by=index(tail(retSubset, 1)))
}
#replace cash back into returns
returns <- cbind(assetRets, cashRets)
weights <- do.call(rbind, weights)
#cash weights are anything not in securities
weights$CASH <- 1-rowSums(weights)
#compute and return strategy returns
stratRets <- Return.portfolio(R = returns, weights = weights)
return(stratRets)
}
In this case, I took a little bit of liberty with some specifics that the reference was short on. I used the full covariance matrix for forecasting the portfolio variance (not sure if JPM would ignore the covariances and do a weighted sum of individual volatilities instead), and for returns, I used the four-month cumulative. I’ve seen all sorts of permutations on how to compute returns, ranging from some average of 1, 3, 6, and 12 month cumulative returns to some lookback period to some two period average, so I’m all ears if others have differing ideas, which is why I left it as a lookback parameter.
To me, it looks like a standard rotation strategy. Aims for the highest momentum securities, diversifies to try and control risk, hits a drawdown in the crisis, recovers, and slightly lags the bull run on SPY. Nothing out of the ordinary.
So, for those interested, here you go. I’m surprised that JP Morgan itself markets this sort of thing, considering that they probably employ top-notch quants that can easily come up with products and/or strategies that are far better.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
This post will display my implementation of the Logical Invest Enhanced Bond Rotation strategy. This is a strategy that indeed does work, but it is dependent on reinvesting dividends, as bonds pay coupons, which bond ETFs pass through as dividend distributions.
The strategy is fairly simple — using four separate fixed income markets (long-term US government bonds, high-yield bonds, emerging sovereign debt, and convertible bonds), the strategy aims to deliver a low-risk, high Sharpe profile. Every month, it switches to two separate securities, in either a 60-40 or 50-50 split (that is, a 60-40 one way, or the other). My implementation for this strategy is similar to the ones I’ve done for the Logical Invest Universal Investment Strategy, which is to maximize a modified Sharpe ratio in a walk-forward process.
The one thing different about this code is the way I initialize the return streams. It’s an ugly piece of work, but it takes all of the pairwise combinations (that is, 4 choose 2, or 4c2) along with a sequence stepping by 10% for the security weights between the lower and upper bound (that is, if the lower bound is 40% and the upper bound is 60%, the three weights will be 40-60, 50-50, and 60-40). So, in this case, there are 18 configurations: 4c2 * 3. Do note that this is not at all a framework that can be scaled up. With 20 instruments, there would be 190 different combinations, and then anywhere between 3 and 11 (if going from 0-100) configurations for each combination. Obviously, not a pretty sight.
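My initialization code isn’t reproduced here, but a hedged sketch of the pairwise-combination idea (every 4c2 pair, with weights stepping by 10% between the 40% and 60% bounds) might look like the following, assuming an xts object named returns that holds the four bond return streams:

pairs <- combn(colnames(returns), 2)
configs <- list()
for(i in 1:ncol(pairs)) {
  for(w in seq(.4, .6, by = .1)) {
    # weighted combination of the two return streams in this pair
    configRets <- returns[, pairs[1, i]] * w + returns[, pairs[2, i]] * (1 - w)
    colnames(configRets) <- paste(pairs[1, i], pairs[2, i], w, sep = "_")
    configs[[length(configs) + 1]] <- configRets
  }
}
configs <- do.call(cbind, configs)   # 18 configuration return streams (4c2 * 3)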
Beyond that, it’s the same refrain. Bind the returns together, compute an n-day rolling cumulative return (far faster my way than using the rollApply version of Return.annualized), and divide it by the n-day rolling annualized standard deviation raised to the modified Sharpe F factor (1 gives you the Sharpe ratio, 0 gives you pure returns, greater than 1 puts more of a focus on risk). Take the highest modified Sharpe ratio, allocate to that configuration, repeat.
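As a minimal sketch of that scoring step (73-day lookback, modified Sharpe F of 2), with runSum and runSD coming from TTR and the rolling sum of log returns standing in as an approximation of the rolling cumulative return:

require(TTR)
modSharpeScore <- function(configRets, n = 73, f = 2) {
  rollRet <- apply(configRets, 2, function(x) runSum(log(1 + x), n = n))  # rolling cumulative return (log approximation)
  rollVol <- apply(configRets, 2, runSD, n = n) * sqrt(252)               # rolling annualized volatility
  rollRet / (rollVol ^ f)   # f = 1 gives the Sharpe ratio, f = 0 gives pure returns
}
# at each month end, allocate to whichever configuration currently has the highest score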
So, how does this perform? Here’s a test script, using the same 73-day lookback with a modified Sharpe F of 2 that I’ve used in the previous Logical Invest strategies.
> fiveStats(etfs)
Adjusted ETFs Close ETFs
Annualized Return 0.12320000 0.08370000
Annualized Std Dev 0.06780000 0.06920000
Annualized Sharpe (Rf=0%) 1.81690000 1.20980000
Worst Drawdown 0.06913986 0.08038459
Calmar Ratio 1.78158934 1.04078405
In other words, reinvesting dividends makes up about 50% of these returns.
Let’s look at the mutual funds. Note that these are for the sake of illustration only–you can’t trade out of mutual funds every month.
Equity curve:
Log scale:
Statistics:
Adjusted MFs Close MFs
Annualized Return 0.11450000 0.0284000
Annualized Std Dev 0.05700000 0.0627000
Annualized Sharpe (Rf=0%) 2.00900000 0.4532000
Worst Drawdown 0.09855271 0.2130904
Calmar Ratio 1.16217559 0.1332706
In this case, the difference is night and day, though how much of it is due to the data source may also be an issue. Yahoo isn’t the greatest when it comes to data, and I’m not sure how much the data quality deteriorates going back that far. However, the takeaway seems to be this: with bond strategies, dividends need to be dealt with, and when considering returns data presented to you, keep in mind that those adjusted returns assume the investor stays on top of dividend maintenance. Fail to reinvest the dividends in a timely fashion, and, well, the gap can be quite large.
To put it into perspective, as I was writing this post, I wondered whether or not most of this was indeed due to dividends. Here’s a plot of the difference in returns between adjusted and close ETF returns.
chart.TimeSeries(etfAdj - etfClose, legend.loc="topleft", date.format="%Y-%m",
main = "Return differences adjusted vs. close ETFs")
With the resulting image:
While there may be some noise on the order of 10^-5 on most days, there are clear spikes observable in the return differences. Those are dividends, and their compounding makes a sizable difference. In one case, for CWB, the difference is particularly striking (Dec. 29, 2014). In fact, here’s a quick little analysis of the dividend effect.
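The quick analysis itself isn’t reproduced in the text; one hedged way to quantify the dividend effect per ETF, using the adjusted and close return streams referenced above (etfAdj and etfClose), would be:

divEffect <- rbind(Return.annualized(etfAdj),
                   Return.annualized(etfClose),
                   Return.annualized(etfAdj) - Return.annualized(etfClose))
rownames(divEffect) <- c("Adjusted", "Close", "Dividend effect")
round(divEffect, 4)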
In short, the effect of the dividend is massive. In some instances, such as with JNK, the dividend comprises more than 50% of the annualized returns for the security!
Basically, I’d like to hammer the point home one last time–backtests using adjusted data assume instantaneous maintenance of dividends. In order to achieve the optimistic returns seen in the backtests, these dividend payments must be reinvested ASAP. In short, this is the fine print on this strategy, and is a small, but critical detail that the SeekingAlpha article doesn’t mention. (Seriously, do a ctrl + F in your browser for the word “dividend”. It won’t come up in the article itself.) I wanted to make sure to add it.
One last thing: gaudy numbers when using monthly returns!
> fiveStats(apply.monthly(etfs, Return.cumulative))
Adjusted ETFs Close ETFs
Annualized Return 0.12150000 0.082500
Annualized Std Dev 0.06490000 0.067000
Annualized Sharpe (Rf=0%) 1.87170000 1.232100
Worst Drawdown 0.03671871 0.049627
Calmar Ratio 3.30769620 1.662642
Look! A Calmar Ratio of 3.3, and a Sharpe near 2!*
*: Must manage dividends. Statistics reported are monthly.
Okay, in all fairness, this is a pretty solid strategy, once one commits to managing the dividends. I just felt that it should have been a topic made front and center considering its importance in this case, rather than simply swept under the “we use adjusted returns” rug, since in this instance, the effect of dividends is massive.
In conclusion, while I will more or less confirm the strategy’s actual risk/reward performance (unlike some other SeekingAlpha strategies I’ve backtested), which, in all honesty, I find really impressive, it comes with a caveat like the rest of them. However, the caveat of “be detail-oriented/meticulous/paranoid and reinvest those dividends!” in my opinion is a caveat that’s a lot easier to live with than 30%+ drawdowns that were found lurking in other SeekingAlpha strategies. So for those that can stay on top of those dividends (whether manually, or with machine execution), here you go. I’m basically confirming the performance of Logical Invest’s strategy, but just belaboring one important detail.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
This post will introduce my up-and-coming IKReporting package, along with functions that compute and plot rolling returns, which are useful for comparing recent performance, since simply looking at two complete equity curves may induce sample bias (e.g. SPY in 2008), which may not reflect the state of the markets going forward.
In any case, the motivation for this package was brought about by one of my readers, who has reminded me in the past of the demand for the in-the-ditches work of pretty performance reports. This package aims to make creating such things as painless as possible, and I will be updating it rapidly in the near future.
The strategy in use for this post will be Flexible Asset Allocation from my IKTrading package, in order to celebrate the R/Finance lightning talk on FAA that I’ve been approved for, and it’ll be compared to SPY.
Here’s the code:
require(IKTrading)
require(quantmod)
require(PerformanceAnalytics)
options("getSymbols.warning4.0"=FALSE)
symbols <- c("XLB", #SPDR Materials sector
"XLE", #SPDR Energy sector
"XLF", #SPDR Financial sector
"XLP", #SPDR Consumer staples sector
"XLI", #SPDR Industrial sector
"XLU", #SPDR Utilities sector
"XLV", #SPDR Healthcare sector
"XLK", #SPDR Tech sector
"XLY", #SPDR Consumer discretionary sector
"RWR", #SPDR Dow Jones REIT ETF
"EWJ", #iShares Japan
"EWG", #iShares Germany
"EWU", #iShares UK
"EWC", #iShares Canada
"EWY", #iShares South Korea
"EWA", #iShares Australia
"EWH", #iShares Hong Kong
"EWS", #iShares Singapore
"IYZ", #iShares U.S. Telecom
"EZU", #iShares MSCI EMU ETF
"IYR", #iShares U.S. Real Estate
"EWT", #iShares Taiwan
"EWZ", #iShares Brazil
"EFA", #iShares EAFE
"IGE", #iShares North American Natural Resources
"EPP", #iShares Pacific Ex Japan
"LQD", #iShares Investment Grade Corporate Bonds
"SHY", #iShares 1-3 year TBonds
"IEF", #iShares 3-7 year TBonds
"TLT" #iShares 20+ year Bonds
)
from="2003-01-01"
#SPDR ETFs first, iShares ETFs afterwards
if(!"XLB" %in% ls()) {
suppressMessages(getSymbols(symbols, from="2003-01-01", src="yahoo", adjust=TRUE))
}
prices <- list()
for(i in 1:length(symbols)) {
prices[[i]] <- Cl(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
faa <- FAA(prices = prices, riskFreeName = "SHY", bestN = 6, stepCorRank = TRUE)
getSymbols("SPY", from="1990-01-01")
comparison <- merge(faa, Return.calculate(Cl(SPY)), join='inner')
colnames(comparison) <- c("FAA", "SPY")
And now here’s where the new code comes in:
This is a simple function for computing running cumulative returns of a fixed window. It’s a quick three-liner function that can compute the cumulative returns over any fixed period near-instantaneously.
"runCumRets" <- function(R, n = 252) {
cumRets <- cumprod(1+R)
rollingCumRets <- cumRets/lag(cumRets, k = n) - 1
return(rollingCumRets)
}
So how does this get interesting? Well, with some plotting, of course.
Here’s a function to create a plot of these rolling returns.
While the computation is done in the first line, the rest of the code is simply to make a prettier plot.
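The function itself ships with IKReporting rather than being reproduced in the text, so here is a rough sketch of what it might look like; the argument names are assumptions:

plotCumRets <- function(rets, n = 252, ...) {
  # the computation: rolling n-day cumulative returns, dropping the leading incomplete window
  cumRets <- runCumRets(R = rets, n = n)[-c(1:n), ]
  # the rest is presentation
  chart.TimeSeries(cumRets, legend.loc = "topleft",
                   main = paste(n, "day rolling cumulative return"), ...)
}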
Here’s what the 252-day rolling return comparison looks like.
require(IKReporting)
plotCumRets(comparison)
So here’s the interpretation: assuming that there isn’t too much return degradation in the implementation of the FAA strategy, it essentially delivers most of the upside of SPY while doing a much better job of protecting the investor when things hit the fan. Recently, however, seeing as how the stock market has been on a tear, there’s a slight bit of underperformance over the past several years.
However, let’s look at a longer time horizon — the cumulative return over 756 days.
plotCumRets(comparison, n = 756)
With the following result:
This offers a much clearer picture–essentially, what this states is that over any 756-day period, the strategy has not lost money, ever, unlike SPY, which would have wiped out three years of gains (and then some) at the height of the crisis. More recently, as the stock market is in yet another run-up, there has been some short-term (well, if 756 days can be called short-term) underperformance, namely due to SPY having some historical upward mobility.
On another unrelated topic, some of you (perhaps from Seeking Alpha) may have seen the following image floating around:
This is a strategy I have collaborated with Harry Long from Seeking Alpha on. While I’m under NDA and am not allowed to discuss the exact rules of this particular strategy, I can act as a liaison for those that wish to become a client of ZOMMA, LLC. While the price point is out of the reach of ordinary retail investors (the price point is into the six figures), institutions that are considering licensing one of these indices can begin by sending me an email at ilya.kipnis@gmail.com. I can also set up a phone call.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
This post will demonstrate a downside to rankings-based strategies, particularly when using data of a questionable quality (which, unless one pays multiple thousands of dollars per month for data, most likely is of questionable quality). Essentially, by making one small change to the way the strategy filters, it introduces a massive performance drop in terms of drawdown. This exercise effectively demonstrates a different possible way of throwing a curve-ball at ranking strategies to test for robustness.
Recently, a discussion came up between myself, Terry Doherty, Cliff Smith, and some others on Seeking Alpha regarding what happened when I substituted the 63-day SMA for the three month SMA in Cliff Smith’s QTS strategy (quarterly tactical strategy…strategy).
Essentially, by simply substituting a 63-day SMA (that is, using daily data instead of monthly) for a 3-month SMA, the results were drastically affected.
Here’s the new QTS code, now in a function.
qts <- function(prices, nShort = 20, nLong = 105, nMonthSMA = 3, nDaySMA = 63, wRankShort=1, wRankLong=1.01,
movAvgType = c("monthly", "daily"), cashAsset="VUSTX", returnNames = FALSE) {
cashCol <- grep(cashAsset, colnames(prices))
#start our data off on the security with the least data (VGSIX in this case)
prices <- prices[!is.na(prices[,7]),]
#cash is not a formal asset in our ranking
cashPrices <- prices[, cashCol]
prices <- prices[, -cashCol]
#compute momentums
rocShort <- prices/lag(prices, nShort) - 1
rocLong <- prices/lag(prices, nLong) - 1
#take the endpoints of quarter start/end
quarterlyEps <- endpoints(prices, on="quarters")
monthlyEps <- endpoints(prices, on = "months")
#take the prices at quarterly endpoints
quarterlyPrices <- prices[quarterlyEps,]
#short momentum at quarterly endpoints (20 day)
rocShortQtrs <- rocShort[quarterlyEps,]
#long momentum at quarterly endpoints (105 day)
rocLongQtrs <- rocLong[quarterlyEps,]
#rank short momentum, best highest rank
rocSrank <- t(apply(rocShortQtrs, 1, rank))
#rank long momentum, best highest rank
rocLrank <- t(apply(rocLongQtrs, 1, rank))
#total rank, long slightly higher than short, sum them
totalRank <- wRankLong * rocLrank + wRankShort * rocSrank
#function that takes 100% position in highest ranked security
maxRank <- function(rankRow) {
return(rankRow==max(rankRow))
}
#apply above function to our quarterly ranks every quarter
rankPos <- t(apply(totalRank, 1, maxRank))
#SMA of securities, only use monthly endpoints
#subset to quarters
#then filter
movAvgType = movAvgType[1]
if(movAvgType=="monthly") {
monthlyPrices <- prices[monthlyEps,]
monthlySMAs <- xts(apply(monthlyPrices, 2, SMA, n=nMonthSMA), order.by=index(monthlyPrices))
quarterlySMAs <- monthlySMAs[index(quarterlyPrices),]
smaFilter <- quarterlyPrices > quarterlySMAs
} else if (movAvgType=="daily") {
smas <- xts(apply(prices, 2, SMA, n=nDaySMA), order.by=index(prices))
quarterlySMAs <- smas[index(quarterlyPrices),]
smaFilter <- quarterlyPrices > quarterlySMAs
} else {
stop("invalid moving average type")
}
finalPos <- rankPos*smaFilter
finalPos <- finalPos[!is.na(rocLongQtrs[,1]),]
cash <- xts(1-rowSums(finalPos), order.by=index(finalPos))
finalPos <- merge(finalPos, cash, join='inner')
prices <- merge(prices, cashPrices, join='inner')
returns <- Return.calculate(prices)
stratRets <- Return.portfolio(returns, finalPos)
if(returnNames) {
findNames <- function(pos) {
return(names(pos[pos==1]))
}
tmp <- apply(finalPos, 1, findNames)
assetNames <- xts(tmp, order.by=as.Date(names(tmp)))
return(list(assetNames, stratRets))
}
return(stratRets)
}
In essence, it allows the function to use either a monthly-calculated moving average, or a daily one, which is then subset to the quarterly frequency of the rest of the data.
(I also allow the function to return the names of the selected securities.)
So now we can do two tests:
1) The initial parameter settings (20-day short-term momentum, 105-day long-term momentum, equal-weighted ranks (with the tiebreaker going to the long-term), and a 3-month SMA filter)
2) The same exact parameter settings, except a 63-day SMA for the filter.
Here’s the code to do that.
#get our data from yahoo, use adjusted prices
symbols <- c("NAESX", #small cap
"PREMX", #emerging bond
"VEIEX", #emerging markets
"VFICX", #intermediate investment grade
"VFIIX", #GNMA mortgage
"VFINX", #S&P 500 index
"VGSIX", #MSCI REIT
"VGTSX", #total intl stock idx
"VUSTX") #long term treasury (cash)
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
monthlySMAqts <- qts(prices, returnNames=TRUE)
dailySMAqts <- qts(prices, wRankShort=.95, wRankLong=1.05, movAvgType = "daily", returnNames=TRUE)
retsComparison <- cbind(monthlySMAqts[[2]], dailySMAqts[[2]])
colnames(retsComparison) <- c("monthly SMA qts", "daily SMA qts")
retsComparison <- retsComparison["2003::"]
charts.PerformanceSummary(retsComparison["2003::"])
rbind(table.AnnualizedReturns(retsComparison["2003::"]), maxDrawdown(retsComparison["2003::"]))
And here are the results:
Statistics:
monthly SMA qts daily SMA qts
Annualized Return 0.2745000 0.2114000
Annualized Std Dev 0.1725000 0.1914000
Annualized Sharpe (Rf=0%) 1.5915000 1.1043000
Worst Drawdown 0.1911616 0.3328411
With the corresponding equity curves:
Here are the several instances in which the selections do not match thanks to the filters:
selectedNames <- cbind(monthlySMAqts[[1]], dailySMAqts[[1]])
colnames(selectedNames) <- c("Monthly SMA Filter", "Daily SMA Filter")
differentSelections <- selectedNames[selectedNames[,1]!=selectedNames[,2],]
Now, of course, many can make the argument that Yahoo’s data is junk, that my backtest doesn’t reflect reality, etc., which would essentially miss the point: this data, while not a perfect realization of the reality of Planet Earth, may as well have been valid (you know, like all the academics who use various simulation techniques to synthesize more data or explore other scenarios). All I did here was change the filter to something logically comparable (that is, computing the moving average filter on a different time scale, which does not in any way change the investment logic). From 2003 onward, this change only affected the strategy in four places. However, those instances were enough to create some noticeable changes (for the worse) in the strategy’s performance. Essentially, the downside of rankings-based strategies is that when the overall number of selected instruments (in this case, ONE!) is small, a few small changes in parameters, data, etc. can lead to drastically different results.
As I write this, Cliff Smith already has ideas as to how to counteract this phenomenon. However, in my experience, once a strategy starts getting into “how do we smooth out that one bump on the equity curve” territory, I think it’s time to go back and re-examine the strategy altogether. In my opinion, while the idea of momentum is, of course, sound, with a great deal of literature devoted to it, the idea of selecting just one instrument at a time as the be-all-end-all strategy does not sit well with me. That said, QTS nevertheless presents an interesting framework for analyzing small subgroups of securities, and for using it as one layer of an overarching strategy framework, such that the return streams are sub-strategies instead of raw instruments.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
I’m sure we’ve all heard about diversified stock and bond portfolios. In its simplest, most diluted form, such a portfolio can be comprised of the SPY and TLT ETFs. The concept introduced by Logical Invest, in a Seeking Alpha article written by Frank Grossman (also see the link here), essentially uses a walk-forward methodology of maximizing a modified Sharpe ratio, biased heavily in favor of the volatility rather than the returns. That is, it uses a 72-day moving window to maximize the total return of different weighting configurations of a SPY-TLT mix, divided by the standard deviation raised to the power of 5/2. To put it into perspective, at a power of 1, this is the basic Sharpe ratio, and at a power of 0, it is just a momentum maximization algorithm.
The process for this strategy is simple: every month, rebalance into whichever SPY/TLT mix (in multiples of 5%) maximized the following quantity over the preceding 72-day window: returns/vol^2.5.
Here’s the code for obtaining the data and computing the necessary quantities:
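That code isn’t included in the text, so here is a hedged sketch of the setup described above; the variable names, the exact weight grid, and the use of a rolling sum of log returns as the 72-day return are all my own assumptions:

require(quantmod)
require(PerformanceAnalytics)
require(TTR)
getSymbols(c("SPY", "TLT"), from = "1990-01-01")
returns <- na.omit(cbind(Return.calculate(Ad(SPY)), Return.calculate(Ad(TLT))))
colnames(returns) <- c("SPY", "TLT")
# candidate configurations: SPY weight on a 5% grid, remainder in TLT
spyWts <- seq(0, 1, by = .05)
configs <- do.call(cbind, lapply(spyWts, function(w) returns$SPY * w + returns$TLT * (1 - w)))
colnames(configs) <- paste0("SPY", spyWts * 100)
# the quantity to maximize each month: 72-day return over 72-day annualized vol^2.5
rollRet <- apply(configs, 2, function(x) runSum(log(1 + x), n = 72))
rollVol <- apply(configs, 2, runSD, n = 72) * sqrt(252)
modSharpe <- rollRet / rollVol^2.5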
In short, it seems the strategy performs far better than either of the ingredients. Let’s see if the equity curve comparison reflects this.
Indeed, it does. While it does indeed have the drawdown in the crisis, both instruments were in drawdown at the time, so it appears that the strategy made the best of a bad situation.
2002 was an incomplete year. However, what’s interesting here is that on a whole, while the strategy rarely if ever does as well as the better of the two instruments, it always outperforms the worse of the two instruments–and not only that, but it has delivered a positive performance in every year of the backtest–even when one instrument or the other was taking serious blows to performance, such as SPY in 2008, and TLT in 2009 and 2013.
For the record, here is the weight of SPY in the strategy.
Now while this may serve as a standalone strategy for some people, the takeaway in my opinion from this is that dynamically re-weighting two return streams that share a negative correlation can lead to some very strong results compared to the ingredients from which they were formed. Furthermore, rather than simply rely on one number to summarize a relationship between two instruments, the approach that Frank Grossman took to actually model the combined returns was one I find interesting, and undoubtedly has applications as a general walk-forward process.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
So, thanks to seeing Michael Kapler’s implementation of David Varadi’s percentile channels strategy, I was able to get a better understanding of what was going on. It turns out that rather than looking at the channel value only at the ends of months, the strategy actually keeps track of the channel’s value intra-month. So if, in the middle of the month, you had a sell signal, and at the end of the month the price moved up to intra-channel values, you would still be on a sell signal rather than on the previous month’s end-of-month signal. It’s not much different from my previous implementation when all is said and done (slightly higher Sharpe, slightly lower returns and drawdowns). In any case, the concept remains the same.
For this implementation, I’m going to use the runquantile function from the caTools package, which works like a generalized runMedian/runMin/runMax from TTR, once you’re able to give it the proper arguments (on its defaults, its results are questionable).
Essentially, with runquantile, you need to give it the “trim” argument, then manually append the leading NAs, and then manually turn the result into an xts object, which is annoying. One would think that the author of this package would take care of these quality-of-life issues, but no. In any case, there are two strategies at play here: one being the percentile channel risk parity strategy, and the other being what Michael Kapler calls “channel equal weight”. The latter actually *isn’t* an equal weight strategy, since the composite parameter values may take the values -1, -.5, 0, .5, and 1 (with a possibility for .75 or .25 early on, when some of the lookback channels still say 0 instead of only 1 or -1); rather, it simply uses the weights without taking volatility into account at all. But I’m sticking with Michael Kapler’s terminology to be consistent. That said, I don’t personally use Michael Kapler’s SIT package, due to the vast differences in syntax between it and the usual R code I’m used to. However, your mileage may vary.
In short, the more naive weighting scheme delivers slightly higher returns but pays dearly for those marginal returns with downside risk.
Here are the equity curves:
So, there you have it. The results David Varadi obtained are legitimate. But nevertheless, I hope this demonstrates how easy it is for the small details to make material differences.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
This post will detail an attempt at replicating David Varadi’s percentile channels strategy. As I’m only able to obtain data back to mid-2006, the exact statistics will not be identical. However, the performance I am able to reproduce is similar (but not identical) to the corresponding performance presented by David Varadi.
First off, before beginning this post, I’d like to issue a small mea culpa regarding the last post. It turns out that Yahoo’s data, once it gets into single-digit dollar prices, is of questionable accuracy, and thus results from the late 90s on mutual funds with prices falling into those ranges are questionable as well. As I am an independent blogger, and also make it a policy that readers should be able to replicate all of my analysis, I am constrained by free data sources, and sometimes the questionable quality of that data may materially affect results. So, if it’s one of your strategies replicated on this blog, and you find contention with my results, I would be more than happy to work with the data used to generate the original results, corroborate those results, and be certain that any differences stem from using lower-quality, publicly-available data alone. Generally, I find it surprising that a company as large as Yahoo can have such gaping data quality issues in certain aspects, but I’m happy that I was able to replicate the general thrust of QTS very closely.
This replication of David Varadi’s strategy, however, is not one such case–mainly because the data for DBC does not extend back very far (its inception was only in 2006, and the data used by David Varadi’s programmer was obtained from Bloomberg, which I have no access to), and furthermore, I’m not certain that my methods are absolutely identical. Nevertheless, the strategy in and of itself is solid.
The way the strategy works is like this (per my interpretation of David Varadi’s post and communication with his programmer). Given four securities (LQD, DBC, VTI, ICF) and a cash security (SHY), do the following:
Find the running n-day quantile of an upper and lower percentile. Anything above the upper percentile gets a score of 1, anything below the lower percentile gets a score of -1. Leave the rest as NA (that is, anything between the bounds).
Subset these quantities on their monthly endpoints. Any value between channels (NA) takes the quantity of the last value. (In short, na.locf). Any initial NAs become zero.
Do this with a 60-day, 120-day, 180-day, and 252-day setting at 25th and 75th percentiles. Add these four tables up (their dimensions are the number of monthly endpoints by the number of securities) and divide by the number of parameter settings (in this case, 4 for 60, 120, 180, 252) to obtain a composite position.
Next, obtain a running 20-day standard deviation of the returns (not prices!), and subset it for the same indices as the composite positions. Take the inverse of these volatility scores, and multiply it by the composite positions to get an inverse volatility position. Take its absolute value (some positions may be negative, remember), and normalize. In the beginning, there may be some zero-across-all-assets positions, or other NAs due to lack of data (EG if a monthly endpoint occurs before enough data to compute a 20-day standard deviation, there will be a row of NAs), which will be dealt with. Keep all positions with a positive composite position (that is, scores of .5 or 1, discard all scores of zero or lower), and reinvest the remainder into the cash asset (SHY, in our case). Those are the final positions used to generate the returns.
This is how it looks in code.
This is the code for obtaining the data (from Yahoo finance) and separating it into cash and non-cash data.
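Since that code doesn’t appear in the text, here is a hedged sketch that matches the variable names used in the later snippets (returns with SHY in the fifth column, assetPrices holding the four risk assets):

require(quantmod)
require(TTR)
require(PerformanceAnalytics)
symbols <- c("LQD", "DBC", "VTI", "ICF", "SHY")
getSymbols(symbols, from = "1990-01-01", src = "yahoo")
prices <- do.call(cbind, lapply(symbols, function(s) Ad(get(s))))
colnames(prices) <- symbols
prices <- na.omit(prices)     # DBC only starts in 2006, which trims the sample accordingly
returns <- Return.calculate(prices)
# separate the cash asset (SHY, column 5) from the four risk assets
assetPrices <- prices[, -5]
cashPrices <- prices[, 5]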
This is the function for computing the percentile channel positions for a given parameter setting. Unfortunately, it is not instantaneous due to R’s rollapply function paying a price in speed for generality. While the package caTools has a runquantile function, as of the time of this writing, I have found differences between its output and runMedian in TTR, so I’ll have to get in touch with the package’s author.
The way this function works is simple: computes a running quantile using rollapply, and then scores anything with price above its 75th percentile as 1, and anything below the 25th percentile as -1, in accordance with David Varadi’s post.
It then subsets these quantities on months (quarters is also possible–or for that matter, other values, but the spirit of the strategy seems to be months or quarters), and imputes any NAs with the last known observation, or zero, if it is an initial NA before any position is found. Something I have found over the course of writing this and the QTS strategy is that one need not bother implementing a looping mechanism to allocate positions monthly if there isn’t a correlation matrix based on daily data involved every month, and it makes the code more readable.
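As the function’s code isn’t reproduced in the text, here is a hedged sketch along the lines of the description above, with rollapply doing the heavy lifting and all names being my own:

pctChannelPosition <- function(prices, dayLookback = 60,
                               lowerPct = .25, upperPct = .75) {
  # running upper/lower percentile channels over the lookback window
  upperChannel <- rollapply(prices, width = dayLookback, FUN = quantile,
                            probs = upperPct, align = "right", fill = NA)
  lowerChannel <- rollapply(prices, width = dayLookback, FUN = quantile,
                            probs = lowerPct, align = "right", fill = NA)
  # score +1 above the upper channel, -1 below the lower channel, NA in between
  score <- matrix(NA_real_, nrow = nrow(prices), ncol = ncol(prices))
  score[which(coredata(prices) > coredata(upperChannel))] <- 1
  score[which(coredata(prices) < coredata(lowerChannel))] <- -1
  positions <- xts(score, order.by = index(prices))
  colnames(positions) <- colnames(prices)
  # subset to month ends, carry the last signal forward, and zero out initial NAs
  monthlyPositions <- na.locf(positions[endpoints(positions, on = "months"), ])
  monthlyPositions[is.na(monthlyPositions)] <- 0
  return(monthlyPositions)
}
# composite position: average of the four parameter settings
compositePosition <- (pctChannelPosition(assetPrices, 60) + pctChannelPosition(assetPrices, 120) +
                      pctChannelPosition(assetPrices, 180) + pctChannelPosition(assetPrices, 252)) / 4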
Next, find the running volatility for the assets, and subset them to the same time period (in this case months) as our composite position. In David Varadi’s example, the parameter is a 20-day lookback.
#find 20-day rolling standard deviations, subset them on identical indices
#to the percentile channel monthly positions
sd20 <- xts(sapply(returns[,-5], runSD, n=20), order.by=index(assetPrices))
monthlySDs <- sd20[index(compositePosition)]
Next, perform the following steps: find the inverse volatility of these quantities, multiply by the composite position score, take the absolute value, and keep any position for which the composite position is greater than zero (or technically speaking, has positive signage). Due to some initial NA rows due to a lack of data (either not enough days to compute a running volatility, or no positive positions yet), those will simply be imputed to zero. Reinvest the remainder in cash.
#compute inverse volatilities
inverseVols <- 1/monthlySDs
#multiply inverse volatilities by composite positions
invVolPos <- inverseVols*compositePosition
#take absolute values of inverse volatility multiplied by position
absInvVolPos <- abs(invVolPos)
#normalize the above quantities
normalizedAbsInvVols <- absInvVolPos/rowSums(absInvVolPos)
#keep only positions with positive composite positions (remove zeroes/negative)
nonCashPos <- normalizedAbsInvVols * sign(compositePosition > 0)
nonCashPos[is.na(nonCashPos)] <- 0 #no positions before we have enough data
#add cash position which is complement of non-cash position
finalPos <- nonCashPos
finalPos$cashPos <- 1-rowSums(finalPos)
And finally, the punchline, how does this strategy perform?
The statistics are visibly worse than David Varadi’s: a 10% CAGR vs. his 11.1%, a 6.9% annualized standard deviation vs. his 5.72%, a 7.45% max drawdown vs. his 5.5%, and correspondingly worse derived statistics (e.g. MAR). However, my data starts far later, and 1995-1996 seemed to be phenomenal for this strategy. Here are the cumulative returns for the data I have:
I see a major discrepancy between my returns and David’s returns in 2011, but beyond that, the pattern of yearly returns seems fairly close. Whether my methodology is incorrect (I think I followed the procedure to the best of my understanding, but of course, if someone sees a mistake in my code, please let me know), or whether it’s the result of using Yahoo’s questionable-quality data, I am uncertain.
However, in my opinion, that doesn’t take away from the validity of the strategy as a whole. With a mid-1 Sharpe ratio on a monthly rebalancing scale, and steady new equity highs, I feel that this is a result worth sharing–even if not directly corroborated (yet, hopefully).
One last note–some of the readers on David Varadi’s blog have cried foul due to their inability to come close to his results. Since I’ve come close, I feel that the results are valid, and since I’m using different data, my results are not identical. However, if anyone has questions about my process, feel free to leave questions and/or comments.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.