This post presents my implementation of the Logical Invest Enhanced Bond Rotation strategy. The strategy does work, but its results depend heavily on reinvesting dividends: bonds pay coupons, so bond ETFs pay regular distributions.
The strategy is fairly simple: using four separate fixed-income markets (long-term US government bonds, high-yield bonds, emerging sovereign debt, and convertible bonds), it aims to deliver a low-risk, high-Sharpe profile. Every month, it switches into two of the four securities, in either a 60-40 or 50-50 split (that is, 60-40 weighted one way or the other). My implementation is similar to the ones I've done for the Logical Invest Universal Investment Strategy: maximize a modified Sharpe ratio in a walk-forward process.
Here’s the code:
require(quantmod)
require(PerformanceAnalytics)

LogicInvestEBR <- function(returns, lowerBound, upperBound, period, modSharpeF) {
  count <- 0
  configs <- list()
  instCombos <- combn(colnames(returns), m = 2) #all pairwise instrument combinations
  for(i in 1:ncol(instCombos)) {
    inst1 <- instCombos[1, i]
    inst2 <- instCombos[2, i]
    rets <- returns[, c(inst1, inst2)]
    weightSeq <- seq(lowerBound, upperBound, by = .1) #candidate weights for the first instrument
    for(j in 1:length(weightSeq)) {
      returnConfig <- Return.portfolio(R = rets,
                                       weights = c(weightSeq[j], 1 - weightSeq[j]),
                                       rebalance_on = "months")
      colnames(returnConfig) <- paste(inst1, weightSeq[j], inst2, 1 - weightSeq[j], sep = "_")
      count <- count + 1
      configs[[count]] <- returnConfig
    }
  }
  configs <- do.call(cbind, configs)
  cumRets <- cumprod(1 + configs)

  #rolling cumulative return, annualized
  rollAnnRets <- (cumRets / lag(cumRets, period))^(252 / period) - 1
  rollingSD <- sapply(X = configs, runSD, n = period) * sqrt(252)

  modSharpe <- rollAnnRets / (rollingSD ^ modSharpeF)
  monthlyModSharpe <- modSharpe[endpoints(modSharpe, on = "months"), ]

  findMax <- function(data) {
    return(data == max(data))
  }

  #configs$zeroes <- 0 #zeroes for initial periods during calibration
  weights <- t(apply(monthlyModSharpe, 1, findMax)) #one-hot weights on the top configuration each month
  weights <- weights * 1
  weights <- xts(weights, order.by = as.Date(rownames(weights)))
  weights[is.na(weights)] <- 0
  weights$zeroes <- 1 - rowSums(weights) #park in the zero-return column during calibration
  configCopy <- configs
  configCopy$zeroes <- 0

  stratRets <- Return.portfolio(R = configCopy, weights = weights)
  return(stratRets)
}
The one thing different about this code is the way I initialize the return streams. It's an ugly piece of work, but it takes all of the pairwise combinations (that is, 4 choose 2, or 4C2 = 6 pairs) along with a sequence of security weights stepping by 10% between the lower and upper bound (that is, if the lower bound is 40% and the upper bound is 60%, the three weightings will be 40-60, 50-50, and 60-40). So, in this case, there are 6 * 3 = 18 configurations. Do note that this is not a framework that scales up: with 20 instruments, there would be 190 different pairs, and then anywhere between 3 and 11 weightings (if going from 0 to 100 percent) for each pair. Obviously, not a pretty sight. A quick sketch of the enumeration is below.
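To make the combinatorics concrete, here's a standalone sketch (not part of the strategy function itself) that just enumerates the pair/weight configurations for the four-ETF case:

# illustration only: enumerate the 4C2 pairs and the three weightings per pair
tickers <- c("TLT", "JNK", "PCY", "CWB")
pairs <- combn(tickers, m = 2)       # 4 choose 2 = 6 pairs
weightSeq <- seq(.4, .6, by = .1)    # 40-60, 50-50, 60-40
ncol(pairs) * length(weightSeq)      # 18 configurations

# the 18 configuration names, mirroring the naming convention used in the function
apply(pairs, 2, function(p) paste(p[1], weightSeq, p[2], 1 - weightSeq, sep = "_"))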
Beyond that, it's the same refrain. Bind the returns together, compute an n-day rolling annualized return (far faster my way than using the rollApply version of Return.annualized), and divide it by the n-day rolling annualized standard deviation raised to the power of the modified Sharpe F factor (an exponent of 1 gives you the Sharpe ratio, 0 gives you pure returns, and greater than 1 puts more of the focus on risk). At each month's end, take the configuration with the highest modified Sharpe ratio, allocate to it, and repeat. A condensed sketch of that calculation is below.
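Isolated from the rest of the function, that ranking criterion looks something like this (a minimal sketch for a single daily return stream; the function name and defaults are illustrative, not part of the strategy code above):

# modified Sharpe over a 'period'-day window; modSharpeF = 1 is the ordinary rolling Sharpe,
# 0 ignores risk entirely, values above 1 penalize volatility more heavily
# (assumes xts/TTR are loaded, e.g. via quantmod above)
modifiedSharpe <- function(rets, period = 73, modSharpeF = 2) {
  cumRets <- cumprod(1 + rets)
  rollAnnRet <- (cumRets / lag(cumRets, period))^(252 / period) - 1  # rolling annualized return
  rollAnnSD <- runSD(rets, n = period) * sqrt(252)                   # rolling annualized standard deviation
  rollAnnRet / (rollAnnSD ^ modSharpeF)
}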
So, how does this perform? Here’s a test script, using the same 73-day lookback with a modified Sharpe F of 2 that I’ve used in the previous Logical Invest strategies.
symbols <- c("TLT", "JNK", "PCY", "CWB", "VUSTX", "PRHYX", "RPIBX", "VCVSX")
suppressMessages(getSymbols(symbols, from="1995-01-01", src="yahoo"))

etfClose <- Return.calculate(cbind(Cl(TLT), Cl(JNK), Cl(PCY), Cl(CWB)))
etfAdj <- Return.calculate(cbind(Ad(TLT), Ad(JNK), Ad(PCY), Ad(CWB)))
mfClose <- Return.calculate(cbind(Cl(VUSTX), Cl(PRHYX), Cl(RPIBX), Cl(VCVSX)))
mfAdj <- Return.calculate(cbind(Ad(VUSTX), Ad(PRHYX), Ad(RPIBX), Ad(VCVSX)))

colnames(etfClose) <- colnames(etfAdj) <- c("TLT", "JNK", "PCY", "CWB")
colnames(mfClose) <- colnames(mfAdj) <- c("VUSTX", "PRHYX", "RPIBX", "VCVSX")

etfClose <- etfClose[!is.na(etfClose[,4]),]
etfAdj <- etfAdj[!is.na(etfAdj[,4]),]
mfClose <- mfClose[-1,]
mfAdj <- mfAdj[-1,]

etfAdjTest <- LogicInvestEBR(returns = etfAdj, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)
etfClTest <- LogicInvestEBR(returns = etfClose, lowerBound = .4, upperBound = .6,
                            period = 73, modSharpeF = 2)
mfAdjTest <- LogicInvestEBR(returns = mfAdj, lowerBound = .4, upperBound = .6,
                            period = 73, modSharpeF = 2)
mfClTest <- LogicInvestEBR(returns = mfClose, lowerBound = .4, upperBound = .6,
                           period = 73, modSharpeF = 2)

fiveStats <- function(returns) {
  return(rbind(table.AnnualizedReturns(returns), maxDrawdown(returns), CalmarRatio(returns)))
}

etfs <- cbind(etfAdjTest, etfClTest)
colnames(etfs) <- c("Adjusted ETFs", "Close ETFs")
charts.PerformanceSummary(etfs)

mutualFunds <- cbind(mfAdjTest, mfClTest)
colnames(mutualFunds) <- c("Adjusted MFs", "Close MFs")
charts.PerformanceSummary(mutualFunds)
chart.TimeSeries(log(cumprod(1+mutualFunds)), legend.loc="topleft")

fiveStats(etfs)
fiveStats(mutualFunds)
So, first, the results of the ETFs:
Equity curve:
Five statistics:
> fiveStats(etfs)
                          Adjusted ETFs Close ETFs
Annualized Return            0.12320000 0.08370000
Annualized Std Dev           0.06780000 0.06920000
Annualized Sharpe (Rf=0%)    1.81690000 1.20980000
Worst Drawdown               0.06913986 0.08038459
Calmar Ratio                 1.78158934 1.04078405
In other words, reinvesting dividends boosts annualized returns by roughly 50% relative to trading on close prices alone (12.3% versus 8.4%).
Let's look at the mutual funds. Note that these are for the sake of illustration only; you can't trade out of mutual funds every month.
Equity curve:
Log scale:
Statistics:
                          Adjusted MFs Close MFs
Annualized Return           0.11450000 0.0284000
Annualized Std Dev          0.05700000 0.0627000
Annualized Sharpe (Rf=0%)   2.00900000 0.4532000
Worst Drawdown              0.09855271 0.2130904
Calmar Ratio                1.16217559 0.1332706
In this case, the difference is night and day, though how much of it is the data source is an open question. Yahoo isn't the greatest when it comes to data quality, and I'm not sure how much it deteriorates going back that far. However, the takeaway seems to be this: with bond strategies, dividends need to be dealt with, and when looking at returns presented to you, keep in mind that adjusted returns assume the investor stays on top of dividend reinvestment. Fail to reinvest the dividends in a timely fashion and the gap can be quite large.
To put it into perspective, as I was writing this post, I wondered whether most of this gap was indeed due to dividends. Here's a plot of the daily return differences between the adjusted and close ETF series.
chart.TimeSeries(etfAdj - etfClose, legend.loc="topleft", date.format="%Y-%m", main = "Return differences adjusted vs. close ETFs")
With the resulting image:
While there is some noise on the order of 10^-5 on most days, there are clear spikes in the return differences. Those are dividends, and their compounding makes a sizable difference. In one case for CWB, the difference is particularly striking (Dec. 29, 2014). In fact, here's a quick little analysis of the dividend effects.
dividends <- etfAdj - etfClose
divReturns <- list()
for(i in 1:ncol(dividends)) {
  diffStream <- dividends[,i]
  divPayments <- diffStream[diffStream >= 1e-3]
  divReturns[[i]] <- Return.annualized(divPayments)
}
divReturns <- do.call(cbind, divReturns)

divReturns
divReturns/Return.annualized(etfAdj)
And the result:
> divReturns
                         TLT        JNK        PCY        CWB
Annualized Return 0.03420959 0.08451723 0.05382363 0.05025999

> divReturns/Return.annualized(etfAdj)
                       TLT       JNK       PCY       CWB
Annualized Return 0.453966 0.6939243 0.5405922 0.3737499
In short, the effect of the dividends is massive. In some instances, such as with JNK, dividends comprise nearly 70% of the security's annualized returns!
Basically, I'd like to hammer the point home one last time: backtests using adjusted data assume instantaneous reinvestment of dividends. In order to achieve the optimistic returns seen in the backtests, these dividend payments must be reinvested ASAP. In short, this is the fine print on this strategy, and it's a small but critical detail that the SeekingAlpha article doesn't mention. (Seriously, do a Ctrl + F in your browser for the word "dividend". It won't come up in the article itself.) I wanted to make sure to add it.
One last thing: gaudy numbers when using monthly returns!
> fiveStats(apply.monthly(etfs, Return.cumulative))
                          Adjusted ETFs Close ETFs
Annualized Return            0.12150000   0.082500
Annualized Std Dev           0.06490000   0.067000
Annualized Sharpe (Rf=0%)    1.87170000   1.232100
Worst Drawdown               0.03671871   0.049627
Calmar Ratio                 3.30769620   1.662642
Look! A Calmar Ratio of 3.3, and a Sharpe near 2!*
*: Must manage dividends. Statistics reported are monthly.
Okay, in all fairness, this is a pretty solid strategy once one commits to managing the dividends. I just felt the issue should have been made front and center, rather than swept under the "we use adjusted returns" rug, since in this instance the effect of dividends is massive.
In conclusion, I can more or less confirm the strategy's risk/reward performance (unlike some other SeekingAlpha strategies I've backtested), and in all honesty, I find it really impressive. But it comes with a caveat like the rest of them. The caveat of "be detail-oriented/meticulous/paranoid and reinvest those dividends!" is, in my opinion, a lot easier to live with than the 30%+ drawdowns lurking in other SeekingAlpha strategies. So for those who can stay on top of those dividends (whether manually or with machine execution), here you go. I'm basically confirming the performance of Logical Invest's strategy, just belaboring one important detail.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
Dear Ilya,
How do you proceed to test monthly returns when there is the assumption of using a 73-day lookback period for the modified Sharpe?
Thanks,
Diego
Run the calculations on the daily data, then subset to monthly endpoints afterwards. Something like this:
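A minimal sketch (stratRets here stands in for whatever daily strategy returns the function produced, e.g. etfAdjTest):

# roll the daily strategy returns up into monthly returns, then compute statistics
monthlyRets <- apply.monthly(stratRets, Return.cumulative)
fiveStats(monthlyRets)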
I’ve put this comment on Frank’s SA article, but I wanted to make sure you saw it, Ilya.
Hi Ilya,
Thanks for your analysis. It is always useful for other analysts to check a tactical strategy to identify strengths and weaknesses. I have interacted with you in the past, and you do very thorough analysis.
One thing that concerns me, however, especially in backtesting to 2000, is that you use Yahoo adjusted price data (and I think Frank does too, but this needs confirmation). I have reported a number of times that Yahoo adjusted data is riddled with missed dividends, and since dividends are very important in any bond strategy, I think it is imperative that higher-fidelity data be employed in your analysis. A secondary issue with Yahoo data is their use of only two decimal places, which leads to significant error in mutual fund data going back to 2000 and beyond. But Yahoo's major error is missed dividends, and they miss a lot of dividends.
I have looked extensively at four data sources to try to find one that provides higher-fidelity data for backtesting: ETFreplay, Yahoo, Stockcharts, and YCharts. ETFreplay doesn't provide downloadable data, but it seems to me to use very high-fidelity data (I have never found missing data in their data). The other three sources do provide downloadable data, but only Yahoo is free.
My conclusions were that YCharts has the highest fidelity data, comparable to ETFreplay. Both Yahoo and Stockcharts miss a significant number of dividends, while YCharts does not. Also, YCharts carries six decimal places in their adjusted data; this provides high fidelity data for mutual funds going back to 2000 and beyond. The cost for YCharts data is not excessive ($40 per month). My recommendation to you, Ilya, is to invest in a YCharts or comparable license that provides higher fidelity adjusted price data.
Thanks,
Cliff
Hi Ilya,
Thanks for the post and nice job with the blog!
The missing piece is the tax rate applied to dividends. This seriously reduces the real amount available to investors. The worst-case scenario is a foreign investor (like me) who has to pay a 30% tax on dividends: that dramatically changes the whole picture. The rate applied to US residents is very different, but from memory it's not nugatory either.
The R Trader
You’re welcome for the post. Thanks for the compliments. I guess one would have to try and run this in an account as free from taxes as possible, then.
Great work as usual Ilya!
I ran a bond rotation strategy for about 5 months last year. As a foreign investor (like R Trader), I found it wasn't worth it due to taxes. You are correct that dividends are a substantial portion of returns, but if you have to pay significant taxes (or even regular taxes), the risk/reward changes, and not in a positive way! It would be interesting to try this on a quarterly basis using mutual funds. But as you have pointed out, the gap between close and adjusted (not really achievable) is huge. I expect that to hold on a quarterly time frame.
Regarding those ratios, when using MF data back in the '90s, or pre-2007 for that matter, I think you should look at the risk-free yields and not use Rf=0%. From 1995 through 2000, for example, 90-day yields were 5%. If you look at the returns of these rotation strategies against a 5% risk-free yield, they will not look anywhere near as attractive. If you factor in inflation and then calculate real returns, well, it gets worse. The 90-day monthly rates from 1934 are here: http://research.stlouisfed.org/fred2/data/TB3MS.txt
Finally, the post-2008 period saw the Fed quadruple its balance sheet, thereby lifting equity prices while bond yields got crushed at the same time (and of course the last 30 years saw the biggest bond bull ever on top of all that). So it was quite a distortion that was introduced into the markets (unprecedented, in fact). You can see the effect of this intervention in your graphs above. There is a significant inflection point post-2008 (the same goes for the Universal Strategy). So the strategy is a curve fit, because it is what worked well given the monetary policy. Walk-forward techniques don't prevent curve fitting if the underlying strategy itself is a curve fit; they simply optimize an already-fitted algorithm. Going forward, I don't expect interest rates to drop another 6% or more from here, or the Fed to quadruple its balance sheet again with QE5-8, thereby lifting prices through tens of billions of monthly purchases. CWB has a 60-day correlation of daily returns of 0.8 to the S&P 500. That's very high, so it's like owning an equity index (although I haven't tried it, I would not be at all surprised if you substituted SPY for CWB and got a similar result).
So basically, reality bites. Dividend taxes, inefficiencies in dividend reinvestment, trading fees, inflation, risk-free rates prior to 2007, and the curve-fit gains post-2008 make me highly skeptical that this will actually yield any alpha going forward.
PS: I’ve been off the blog for a few months for many reasons. It is quite possible that my comments are repeating those of others already made. If so, I apologize. I am reading recent material and working backwards in time.
Yeah, that's the thing about backtests: I can only backtest given our reality, and the data for our reality. If all the data we have is from the past 30 years, no technique against overfitting is going to guard against what happens when a thirty-year regime shifts.
I wonder if anyone from Logical Invest frequents this blog to provide a more comprehensive answer, but I’ll copy/paste this on the SeekingAlpha article and see what comes up!
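On the risk-free-rate point above, here's a rough sketch of how one might pull the 3-month T-bill series from FRED and feed it into the annualized Sharpe calculation. The yield-to-monthly-return conversion and the month alignment are approximations of my own, not part of the original analysis.

# rough sketch: TB3MS from FRED as the risk-free rate instead of Rf = 0%
getSymbols("TB3MS", src = "FRED")           # 3-month T-bill yield, monthly, in percent
rfMonthly <- (1 + TB3MS / 100)^(1/12) - 1   # approximate monthly risk-free return

monthlyStrat <- apply.monthly(mfAdjTest, Return.cumulative)
# align FRED's month-start dates with the strategy's month-end dates by year-month
idx <- match(format(index(monthlyStrat), "%Y-%m"), format(index(rfMonthly), "%Y-%m"))
rfAligned <- xts(coredata(rfMonthly)[idx], order.by = index(monthlyStrat))

SharpeRatio.annualized(monthlyStrat, Rf = rfAligned)  # compare with the Rf = 0% figures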
Hi Ilya! I'm really far away from the subject of your post (I'm an ecologist!), but a frequent visitor of r-bloggers, which is the reason I'm here…
I really like your graphs, but when I looked for the graphing code I didn't find it (particularly for the equity curve plot, so cool!), and since I've recently started working with time series data I want to explore several ways to visualize the information.
Is the graph construction part of an "inner" function, or did I miss some lines in your post?
Thanks! And great code (once I can understand it, haha).
PS: some notes besides my question… I always think that money investment seems similar to the energy investment animals make when looking for food: usually the most secure source (high energy/reward balance) works OK, so a conservative strategy is selected, but not always… and the fact is that this occasional "irrational" behaviour allows some individuals to survive. Of course this strategy is not always rewarded; these individuals usually die young (i.e., are rare), but the population needs them to be maintained over time…
Hi Ian,
That's because I usually don't roll my own graphs manually. Given a return stream (that is, a series with some variance around a near-zero mean), I let charts.PerformanceSummary take care of it, in the form of charts.PerformanceSummary(r, …). The only instance in which this doesn't work is when I need to plot log-scale cumulative returns due to exponential compounding, at which point it'd be chart.TimeSeries(log(cumprod(1+r))).
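For instance, a minimal, self-contained example (the ticker and date here are just placeholders):

require(quantmod)
require(PerformanceAnalytics)

# toy example: daily returns for one ticker, then the two plots described above
getSymbols("SPY", from = "2010-01-01", src = "yahoo")
r <- Return.calculate(Ad(SPY))[-1,]

charts.PerformanceSummary(r)                                   # cumulative return, per-period returns, drawdown
chart.TimeSeries(log(cumprod(1 + r)), legend.loc = "topleft")  # log-scale cumulative growth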
Does that help?
I am probably missing something, but according to my calculations, just allocating the Naive Graham weights (75%, 25%) to the top two ETFs based on the prior three months' returns, and trading every one, two, or three months, appears to provide better returns for TLT, JNK, PCY, and CWB than the 11% CAGR that you seem to be showing.
2009-2015: One month holding period CAGR 14.7%
Two month holding period CAGR 13.7%
Three month holding period CAGR 15.9%
Sorry, I transcribed the results incorrectly. The results in the previous comment are for the weights allocated using risk parity (weights inversely proportional to the standard deviations of the prior three months’ daily returns.) For the Naive Graham weights (75%, 25%), the results are:
2009-2015: One month holding period CAGR 15.3%
Two month holding period CAGR 13.97%
Three month holding period CAGR 14.35%
Amazingly good results.
So, to clarify your algorithm, what you propose is this:
Every month (two months, three months), rank the returns of the four securities, and just risk-parity the top two? Or go 75% into rank 1 and 25% into rank 2?
Or is it that the results for risk parity are your first post (14.7%/13.7%/15.9%) and your second post are the result of Graham weighting?
Hi Ilya,
I cloned the strategy on Quantopian here if you are interested: https://www.quantopian.com/posts/the-logical-invest-enhanced-bond-rotation-strategy
Chirag,
I wouldn’t call the security selection a case of overfitting if there’s a decent explanation behind them, which in this case, I think there is. In short, I think the strategy was crafted around the securities themselves, as opposed to being a general framework.
However, what exactly do you mean by “month start instead of month end”? Simply lagging the weights by a day?
Ilya,
That makes sense. The "month start instead of month end" means rebalancing on the first trading day of each month rather than the last. Actually, I just changed it now, and returns are only 8% worse over the 6-year time frame. I was tweaking the strategy a lot last night; initially I was stuck at 60% cumulative returns for a long time, and when I changed it to month-end rebalancing, I got the 100% returns. Although now, I can't figure out what else I changed that makes it drop to only 92% when I pick month start.
Regarding the security selection, my concern was the time frame. I tried picking substitute bond ETFs to get a longer time period, but I can’t find a substitute for PCY. Every backtest I use without PCY mimics AGG (an aggregate bond ETF), so it seems like PCY is the primary asset. And most of the alpha over AGG when I use PCY comes during 2009, when the backtest starts: http://finance.yahoo.com/echarts?s=PCY+Interactive#%7B%22range%22%3A%22max%22%2C%22scale%22%3A%22linear%22%2C%22comparisons%22%3A%7B%22AGG%22%3A%7B%22color%22%3A%22%23cc0000%22%2C%22weight%22%3A1%7D%7D%7D. So I’m unsure if the results would persist.
Interesting strategy and nice work on the implementation in R Ilya. I could not see how to identify the latest weights for the top two funds.
Fred.
You'd have to look at the weights, and by weights, I mean their column names. So return the weights, and then do tail(weights, 1).
tail(PCY, 1), for example, simply gives the high, low, closing prices, etc. Perhaps I misunderstood what you meant by column names.
What I mean is that it computes the strategy returns from somewhere, since Return.portfolio needs weights and a return stream. I don’t actually return the weights in this case, but if you read the code, you can see where I use them. If you want to return the weights, you can change my return statement to return(list(weights, stratRets)), and then of that 2-element list, the weights will be the first element. Then do tail(weights, 1)
Hi Ilya,
Thanks for your quick responses. I did as you said, but in response to tail(weights, 1) I get the error "UseMethod("weights")". Also, if I attempt to do table.AnnualizedReturns of the strategy, I get "Error in checkData(R) : The data cannot be converted into a time series. If you are trying to pass in names from a data object with one column, you should use the form 'data[rows, columns, drop = FALSE]'. Rownames should have standard date formats, such as '1985-03-15'." If I use your original code, i.e., "return(stratRets)", I can obtain the annualized return.
Thanks for your help with this.
Fred.
Replace the last line in the function with return(list(weights, stratRets)).
Then, after you run the function (e.g., the etfAdjTest line constitutes running the function), run tail(NAME_OF_YOUR_RUN[[1]], 1).
Thanks a lot Ilya. It worked!
Fred.
Ilya,
Great work as always. If I may comment on one thing you missed in this analysis: even if you use close prices rather than adjusted prices, you still receive the cash dividend in your account as long as you hold the ETF through the ex-dividend date.
By not including the cash dividends while using close prices, you are understating the equity curve by some margin.
So if you do include the cash dividends in the final equity, the curve would fall somewhere in between the two curves you demonstrated.
Obviously the analysis becomes more complicated: you'd have to figure out the ex-dividend dates for each invested ETF and determine whether you would have received the cash dividend or not.
Best,
– CyTrader
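For what it's worth, here's a rough sketch of how one might approximate the "close plus cash dividends" curve CyTrader describes for a single ETF. This uses quantmod's getDividends(); the helper function and its alignment logic are my own assumptions for illustration, not something from the original strategy code.

require(quantmod)
require(PerformanceAnalytics)

# approximate "close + cash dividends" returns: add each dividend, divided by the
# prior day's close, onto that day's close-to-close return
closePlusDivReturns <- function(ticker, from = "2009-01-01") {
  prices <- Cl(getSymbols(ticker, from = from, src = "yahoo", auto.assign = FALSE))
  divs <- getDividends(ticker, from = from)
  rets <- Return.calculate(prices)
  divDates <- index(divs)[index(divs) %in% index(prices)]  # ex-div dates that are trading days
  divYield <- as.numeric(divs[divDates]) / as.numeric(lag(prices)[divDates])
  rets[divDates] <- rets[divDates] + divYield
  return(rets[-1,])
}

For example, closePlusDivReturns("CWB") could then be compared against the adjusted and close-only CWB return streams; its equity curve should land somewhere between the two.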