Last Activity: 4/1/2022 3:49 AM · 21 replies, 3012 viewings
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
A discussion started in the middle of an ATM thread which is very important, but was somewhat outside the scope of the thread. Mark Holstius asked that it be moved elsewhere. Here is the link to the original thread: https://www.omnitrader.com/currentclients/otforum/thread-view.asp?threadid=15825

Here are the initial posts on that topic, earliest to most recent.

**********

John W - Steve has just raised a most important topic - what is the best way to get an apples-to-apples comparison of each strategy?

I have been using a different approach based on investing a fixed dollar amount, not percent of equity or fixed trade size. I noticed when I played with Mark's terrific ATM M&M strategy that the results for Outlay, Return and PPT% are significantly different using each of these 3 allocation methods.

Using percent of equity results in trade sizes in later years that are significantly different from those in earlier years, so better profits or worse losses in later years skew the average results per trade. Profitable strategies that trade a lot are very susceptible to skew because equity compounds rapidly in later years. Strategies that don't trade much, or have middling profitability, are not very susceptible to skew because their equity doesn't change much. Mixing profitable strategies that trade a lot with those that don't trade much, or profitable strategies with losing strategies, will skew results for both types.

Similarly, using a fixed trade size weights the dollar outlay, the results, and PPT% more heavily towards shares with bigger share prices, and that skews average results.

So, for comparison purposes I believe that fixed dollar has the advantage that every trade is equal dollar weighted. The input from OT into the comparison spreadsheet is in dollars outlay and profit - the input ideally should be equal-weighted dollars.
We are spending a lot of time on spreadsheets and ranking strategies - have we considered whether we have the right approach by using percent of equity? I'm raising the question of what is the right approach to get an apples-to-apples strategy comparison.

**********

Jim Dean - I heartily agree that fixed dollar is far preferable to percent of equity. I've tried to suggest this many times over the years - but there is so much "marketing inertia" associated with percent equity that it never seems to take hold.

Another aspect to this, and an argument for it, is the nature of the trader's lifestyle. Unless the reader is already very, very wealthy, it seems to me that it would be a fairly normal thing for the profits from trading to be "skimmed off" to one degree or another - for vacations, large purchases, paying kids' tuition, etc.

So, once again I applaud the idea of using fixed $ vs pct equity. And I also suggest another couple of future Port Sim options be added:

1. Fixed account size with side Cash account. That is, start with $100,000 or whatever amount, and siphon off all profits above that into a (separately tracked) Cash account. The Cash account would be drawn down to replenish the trading account any time it drops below $100,000. The advantage of this is that it treats every day of the historical test period exactly the same as every other one, in terms of available equity.

2. Alternative to #1: Bounded account size with side Cash account, between say $100k and $200k, where profits above $200k siphon off to Cash and losses below $100k are replenished from Cash.

Of course the reporting stats would be the same - total P/L would always be the sum of the trading account and the Cash account. I believe either of these two may be a much closer representation of traders who "trade for a living" rather than investors saving for retirement. And they both assure trade sizes that are reasonable across the entire test period.
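Proposal #1 above (a trading account pinned at a fixed size, with a side Cash account) can be sketched as a simple end-of-period sweep. This is a minimal Python illustration, not PortSim's actual logic; the function name and the $100,000 base are just the example values from the post.

```python
def sweep_to_cash(trading_equity, cash, base=100_000.0):
    """Pin the trading account at `base`: sweep profits above `base`
    into the side Cash account, and replenish shortfalls from Cash
    (as far as the Cash balance allows)."""
    if trading_equity > base:
        cash += trading_equity - base
        trading_equity = base
    elif trading_equity < base:
        top_up = min(base - trading_equity, cash)
        trading_equity += top_up
        cash -= top_up
    return trading_equity, cash

# A profitable period sweeps the excess to Cash...
equity, cash = sweep_to_cash(112_000.0, 0.0)   # -> (100000.0, 12000.0)
# ...and a losing period draws it back down.
equity, cash = sweep_to_cash(95_000.0, cash)   # -> (100000.0, 7000.0)
```

Total P/L at any point is simply `trading_equity + cash`, matching the reporting note in the post.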
**********

LSJ - Jim, I wholeheartedly agree with the side cash acct concept. The reason we who are not lottery winners trade is to make use of the income.

Another view on trade amount has to do with risk. Using a percent of equity does not take that into account. Simply put, if I were to buy 100 shares of a $100 stock on margin, I have a $5,000 investment - but with proper trade management that is not $5,000 or $10,000 of risk. I would prefer sizing a trade based on how much I am risking: the dollar amount to a fixed loss stop. In this case, if the loss stop was $2.00 below entry, I am only risking $200. Assuming I had the margin, I could then invest considerably more in that trade - and it changes my whole approach to performance testing.

[Edited by Jim Dean on 9/1/2018 9:24 AM]
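The risk-based sizing LSJ describes inverts the usual calculation: pick a dollar risk budget first, then derive the share count from the distance to the stop. A minimal sketch (the function name is illustrative; this is not an OT feature):

```python
def shares_for_risk(entry, stop, risk_dollars):
    """Size a long trade by dollars-at-risk rather than capital outlay:
    share count = fixed risk budget / per-share distance to the stop."""
    per_share_risk = entry - stop
    if per_share_risk <= 0:
        raise ValueError("stop must be below entry for a long trade")
    return int(risk_dollars / per_share_risk)

# LSJ's example: $100 stock, stop $2.00 below entry, $200 risk budget
print(shares_for_risk(100.0, 98.0, 200.0))  # -> 100 shares ($10,000 notional)
```

Note how the notional outlay ($10,000) is fifty times the risked amount ($200) - which is exactly why % of equity and % at risk give such different trade sizes.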
LSJ (Legend) | Posts: 515 | Joined: 8/17/2006 | Location: Citrus Springs, FL
This is a bit of a crossover subject between futures and ATM, but the discussion was started in ATM. I am interested in applying ATM techniques to futures trading.

One of the mechanics I have questions about: for futures, how does OT value the cost of entry for a position? The stock calculation is straightforward, but for futures does OT use the margin value listed in the database, or does it just use $1/pt as with stocks (and the selected leverage)?

I have run a futures profile with a simple ATM % equity. I have circumvented the question of what % equity does with futures by limiting the simulation trading parameters to a max shares of one. Examining the trades, I have verified that the point calculations for P&L are correct.

Here is the result for a $25,000 account with no leverage over a period of 5 years. This is an early result without a lot of tweaking. Because of the huge leverage in futures and the potential huge losses using pivot points, etc., I have added fixed loss stops to the strategies.

[Edited by LSJ on 8/31/2018 11:25 AM]
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
On various occasions I've tried to point out the danger/absurdity of using the %Equity method, which 99.9% of all the testing and development seems to be standardized on. This post hopefully will explain it in VERY SIMPLE TERMS.

Bottom line, imho, almost every PortSim run done with %Equity is FATALLY FLAWED, and MISLEADING ... unless: a) it covers a very small number of trades, or b) it shows that the strategy is *losing* consistently.

Yes ... I'm hoping that bolded statement raises some eyebrows and encourages y'all to carefully think this through. I hope that the examples below will make it clear why it's important, and why future Nirvana and User testing and posts and marketing should use a different approach.

====

Here's the typical scenario: start with 100K, set Allocation to 10% of Equity, and test across ten years ... these values may differ a bit, but I think they represent the majority of the brochures and user-posted test results.

If the size of each trade is 10% of current equity, that means (duh) that a max of ten trades can be active at once ... but since some might argue that 5% is more common, let's use 7%. That means 14 trades at a time (please disregard the effect of margin ... the conclusions are the same ... hang in there).

So, at the outset of the PortSim backtest, each trade has about $7,000 to buy shares with. If the per-share price is $700, that's 10 shares. If the per-share price is $70, that's 100 shares. If it's $7/share, it buys 1,000 shares.

As long as there is some kind of "reasonable" liquidity filter used for the symbol list, or it's a major list like the SP-100, then we shouldn't have any trouble getting those trades filled at a reasonable price ... let's just ignore the fact that the SP-100 was different 10 years ago, and that using it as the basis today is "cheating" since we know those symbols are going to end up doing well ... that's a whole different discussion.
If we use a cash-liquidity filter something like this: Avg(C,10) * Avg(V,10) > 100,000,000 ... this is the kind of recommended filter that seems to be the most prevalent ... then the only symbols that "get through" that filter are ones where at least $100 million is traded per day on the average over a couple of weeks. Sounds reasonable ... sounds safe. Some might even use a smaller value than 100,000,000 in order to get more symbols on the FL. But we'll stick with it.

Now, assume further that this is a "SUCCESSFUL" strategy (or ATM method, etc.) ... and that over the ten years, the equity curve rises from $100,000 to $100 million ... this may sound crazy (and that *IS* the point, btw) ... but if you've been following the threads related to OmniVest and ATM, you'll find PortSim outputs posted that end up with $100 billion or even occasionally $100 trillion after ten years. So, this example is going to use a "conservative" (hahaha) ending value of $100 million.

If you check this out for the market today ... starting with over 10,000 symbols in "All US Stocks", there are 574 symbols that pass the test. HOWEVER (and this is important) ... if we start with the SP500, only 320 pass ... and if we start with the SP100 (afaik the most common of all the canned test beds), then all of them make it through that test. So ... let's just assume that we have 100 symbols to "try out" (at the HRE) in order to make the strategy work.

However ... what about those same SP100 symbols, 10 years ago? How many of them passed that liquidity test back then? To find out:

1. Edit > Data Periods > 2650 bars (ie about ten years of 260 days/yr)
2. Select the SP 100 standard list for the focus list basis
3. Create a custom OmniScript column using this formula: Avg(C,14)[2600] * Avg(V,14)[2600]

... the highest value in that sorted column = 6,000,000,000 (ie $6 billion/day averaged over a two-week period) ...
... number of symbols in that list with > $100,000,000 avg daily liquidity as of ten years ago: 87
... the lowest value is ZERO ... in fact eight symbols show 0 ... that is, they were not even being traded back then

SO - only 87 of the SP100 symbols had adequate liquidity to be traded by the test strategy as of 2600 bars ago.

Now, let's see how often in that 10-year period the liquidity filter allows enough symbols through to the list that the strategy has available for trade-prospecting. To do this, add another custom OmniScript column (everything else the same): -sum((Avg(C,14) * Avg(V,14) > 1e8),2600) ... this formula counts how many days the liquidity passes the filter, over the past ten years ...

... 75 of the SP100 symbols pass that filter every day
... 86 symbols pass it for at least 2500 of the 2600 days

So ... it looks like 86 symbols offer "consistently adequate liquidity", out of today's SP100 list over the past ten years. There are 98 symbols in the SP100 list today (go figger) ... so let's summarize by saying 85% of the SP100 symbols are adequately liquid for the past 10 years.

The horse is not quite dead yet. I've noticed that in recent years, the starting list for testing has expanded to include the full SP500, presumably to provide more opportunities for the strategy to "hit" and thus allow full allocation of funds more consistently (a good thing). If we check the SP500 using the same method as above, the result is that of the 490 symbols currently on that list:

... 55 did not exist 10 years ago
... 91 had all 2600 days pass the test
... 135 had at least 2500 days pass the test

So ... that means only about 28% of today's SP500 symbols actually would be available for trading for the large majority of days over the past ten years.

Now, let's say that we want a bigger, more diverse list for a starting point ... ie we are working with a bigger starting population like the Russell 1000 ...
so that we can apply other filters as well, to get "better but viable" candidates. Checking the Russ1k, of the 1854 symbols currently on that list:

... 230 did not exist 10 years ago
... only 139 had at least 2500 days pass the test

So ... that means a bit less than 8% of today's Russ1k symbols actually would be available for trading for the large majority of days over the past ten years. (Remember, more filters would likely be in play as well ... but assume the liquidity-percentage remains relatively constant.)

If we use a list that is not "purely" large-cap for a starting point, such as the Russell 2000 ... of the 1854 symbols currently on that list:

... 844 did not exist 10 years ago
... ONLY 2 had at least 2500 days pass the test

So ... that means less than 1% of today's Russ2k symbols actually would be available for trading for the large majority of days over the past ten years. This is to be expected, since the Russ2k has low-to-midcap stocks that do not have as much institutional trading ... and it means that for liquidity purposes, large-cap is almost a requirement.

Finally, let's say we have several extra "picky" filter-rules in use, and/or our strategy doesn't fire frequently ... in that case we want to open up the starting point fully, using All Optionable Stocks. Checking the Optionables, of the 4349 symbols currently on that list:

... 1939 did not exist 10 years ago
... only 180 had at least 2500 days pass the test

So ... that means only about 4% of today's Optionable symbols actually would be liquid enough for trading for the large majority of days over the past ten years. (Remember, more filters would likely be in play as well ... but assume the liquidity-percentage remains relatively constant.)

Consolidating all this ... let's just average the results, presuming that sometimes you use the SP100, sometimes the SP500, sometimes the Russ1k, and sometimes the Optionable lists ...
the overall average of the percent-available symbols over a ten-year period is: (85% + 28% + 8% + 4%) / 4 = 31% are viable throughout the test period.

btw ... Dynamic Lists would raise these percentages considerably ... but DL's cannot be used with ATM, so we need to stick with the analysis above.

=====================

Now, let's consider the trades that are taken using the 7% allocation method, in the last year or so of that time period. 7% of $100 million is $7,000,000 ... which buys 10,000 shares of the $700 stock, 100,000 shares of the $70 stock, and 1 million shares of the $7 stock.

Hmmm. Those are some BIG trades ... even for the $700 stock. A $7 million trade, regardless of the number of shares actually bought/sold, would:

a. be very hard to get a single, clean fill ... probably many trades would fail
b. almost certainly suffer from significant slippage in entry/exit prices
c. almost certainly create a "pop" in the price (maybe the H for the day)

PRACTICALLY SPEAKING, I doubt that most OT users, regardless of their account size, would be comfortable "regularly" tying up more than about $100,000 in any given trade ... which means that with the 7% equity rule, we start getting uncomfortable with the trade sizes when the account reaches $1.5 million.

Hmmm again. So ... our "normal attractive" PortSim equity curve took us all the way to $100 million in 10 years ... and for many curves I've seen, it takes about half that time to get to $1.5 million.

So, here is the BIG QUESTION: How will things work in the second half of the test period, when the trade sizes required by PortSim get to be too big for our comfort level? The answer is fairly clear ... we will LIMIT the sizes to our max comfort level ...
and in order to keep our account fully funded, we will have to TAKE MORE TRADES every day (the PortSim %Equity allocation model that got to $100 million used no more than 14 trades/day).

HOWEVER, presumably we have used the cool ranking and market state methods in ATM or OmniVest to pick the "best" 14 trades every day. So ... if we need to find MORE trades to keep us allocated, we need to put our money into WORSE-ranked opportunities.

How many? Well, if our account gets to $100 million or so in the final year, and if we don't want to tie up more than $100,000 in any given trade, then that means we will need ONE THOUSAND SYMBOLS IN TRADE every day. That's 986 worse-ranked symbols than the PortSim is using.

However ... look back at the analysis of how many symbols pass the liquidity test ... only 31% of our list, on the average. Generalizing, that means to get 1,000 tradeable symbols using the liquidity filter described above, our starting list has to have 3,200+ symbols in it ... and our strategies have to be actively trading EVERY SINGLE ONE of them.

Clearly, this is absurd. And that's why, at the beginning of this post, I made the bold statement that the PortSim runs we very often use to select strat's, tune ATM's, etc. are USELESS ... misleading.

SOLUTION: Please look over my earlier post in this thread that suggests alternatives. The coolest, simplest and most flexible fix to PortSim modelling that would solve ALL of this is: allow the user to select more than one Allocation Method ... and give them a single new input that tells Port Sim to set the size of each trade based on the MINIMUM, Average (or maximum-bad) of the selected methods.

Doing this, we can use % Equity until its sizes are too big, and let Fixed $ take over above that. Or (my preferred choice by far), ALSO activate the Turtle Trader $-at-Risk method as well ... and tell PortSim to use the minimum of the three.
My guess is that making this change to OT would be a lot simpler than other possible approaches involving custom formulae, etc. that I've suggested earlier.

So, if you agree that this is a concern, and like the proposed solution, please drop an email to Ed or Jeff referencing this post ... the link to this post is: https://www.omnitrader.com/currentclients/otforum/thread-view.asp?threadid=15829#45287

[Edited by Jim Dean on 9/1/2018 9:25 AM]
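For readers who want to experiment with the liquidity-count idea outside OT, the two OmniScript formulas above (Avg(C,14) * Avg(V,14) and the -sum(... > 1e8, n) day counter) can be approximated in Python. This is a rough sketch: the function names are made up, and real OmniScript evaluates per bar against chart data rather than plain lists.

```python
import statistics

def dollar_liquidity(closes, volumes, window=14):
    """Approximate Avg(C,window) * Avg(V,window): average daily dollar
    value traded over the most recent `window` bars."""
    return statistics.mean(closes[-window:]) * statistics.mean(volumes[-window:])

def days_passing(closes, volumes, window=14, threshold=1e8):
    """Mirror -sum((Avg(C,14) * Avg(V,14) > 1e8), n): count how many
    bars pass the liquidity filter across the whole series."""
    count = 0
    for i in range(window, len(closes) + 1):
        if dollar_liquidity(closes[:i], volumes[:i], window) > threshold:
            count += 1
    return count

# A symbol trading $50 x 3M shares/day averages $150M/day and passes;
# the same symbol at 1M shares/day averages $50M/day and never does.
closes, volumes = [50.0] * 30, [3_000_000] * 30
print(days_passing(closes, volumes))            # every evaluable bar passes
print(days_passing(closes, [1_000_000] * 30))   # -> 0
```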
SteveL (Veteran) | Posts: 262 | Joined: 8/19/2005 | Location: Boulder, CO
Jim, I don't disagree with the points you make AFTER your bold statement. But I do disagree with your BOLD statement (which I assume is bold primarily to instigate a discussion). I think we differ in our view of what purpose is served by historical equity curve comparisons.

I think %Equity PortSim examples are useful for comparison of potential trading approaches, strategies, etc. The PortSim results and equity curve smoothness provide a perspective for the next year or so regarding how a chosen set of strategies MIGHT work based on their historical performance. If over time my chosen method (combination of strategies, symbols, market states, etc.) produces fantastic results, and I'm forced to apply discretion regarding trade size, then I'll deal with that problem.

Of course I don't expect to see my trading account grow to billions. For me, that isn't the point of these equity curves. Rather, the point of posting them is for comparison of one approach vs. another for near-future application. In my case, I do not view them as reasonable expectations of what I'll be doing in 20, 10 or even 5 years.
Buffalo Bill (Legend) | Posts: 539 | Joined: 10/3/2006 | Location: Stafford, VA
Steve,

First, I like Jim's idea - it may help should we ever get to the point where we need to worry about our trade sizes getting too big!

Second, like you I don't EVER think my results will match a $122B PS equity curve, and I use it for comparison only. I also take smaller chunks of time and look at Port Sim results there, especially the last year or so, and compare those for something more realistic.
mholstius (Veteran) | Posts: 174 | Joined: 1/13/2017
An excellent analysis of the reality of equity curves, Jim, but I respectfully point out that I see them as having a different, and very useful, purpose.

For me, equity curves are simply a tool to compare the relative performance and stability of various strategies or concepts. I use them as a visual compilation of many statistics: slope, stability, standard deviation, consistency, etc. They're an excellent way to verify that a trading strategy is valid: if this and this happens, would it be prudent to enter a trade?

I don't expect to be taking $1M trades. I just want to apply the same set of rules & criteria consistently over a long period of time, in a variety of conditions, with different types of instruments, to validate a theory. The equity curve is merely a graph of the results, another way to "see" the statistics - and an excellent way to compare the relative merits of various systems.

That said, you've presented a number of excellent ideas over the years for measuring risk and mitigating its effect that I'd like to see incorporated.

Mark
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
Hi Bill - for the record, my rant was not targeting your 122b post ;-) ... solely serendipitous. There are hundreds and hundreds of other relevant posts and brochures.

Hi Steve - I agree that the PortSim curves are only useful for comparing alternative strats and methods, not for projecting actual likely returns.

Hi Mark - I was not trying to negate the value of PortSim or of equity curves. Rather, I am trying to point out major methodological flaws and recommend using different allocation options.

========

I did fail to make one additional major point at the end of my earlier post (I was tired of typing). That is: using compounded %Equity models over a multi-year period *hugely* biases the results toward the most recent trades. In my example I pointed out that $100k to $1.5m took roughly the first 5 years, and the latter five years added $98.5m - effectively "squashing" the comparative performance differences between strategies related to the trades in the first half. And the last quarter of the test period is similarly hugely more influential than the third quarter of the period.

So - bottom line - the compounding effect makes the ten-year test absurd per se. Doing a test over one or two years, as I stated at the beginning, with the same strategy and trade frequency, is not as much of a problem.

For the purposes of *comparing the relative merits* of different strategies or methods, a fixed-dollar trade size is by far the best approach - it evens the playing field over time and gives each trade statistically the same influence as every other trade during the test period.

[Edited by Jim Dean on 9/1/2018 9:25 AM]
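The recency bias Jim describes can be shown with a toy calculation: give a strategy ten identical trades, each earning +2% on its allocated capital, and compare the dollar P&L each trade contributes under the two allocation models. The numbers ($100k account, 10% allocation, $10k fixed size) are illustrative only.

```python
# Ten identical trades, each +2% on the capital allocated to it.
returns = [0.02] * 10

# Fixed $10,000 per trade: every trade contributes the same $200,
# so early and late trades carry equal statistical weight.
fixed_pnl = [10_000 * r for r in returns]

# 10% of compounding equity: an identical +2% trade is worth more
# dollars the later it occurs, so averages tilt toward recent trades.
equity = 100_000.0
pct_pnl = []
for r in returns:
    pnl = equity * 0.10 * r
    pct_pnl.append(pnl)
    equity += pnl

print(fixed_pnl[0], fixed_pnl[-1])   # first and last trade: identical
print(pct_pnl[0], pct_pnl[-1])       # last trade contributes more dollars
```

Over ten trades the drift is small; over ten years of a steep equity curve, the final year dwarfs everything before it, which is exactly the "squashing" effect described above.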
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
Here's a challenge to the "thinkers and try-ers" in the group. The next time you decide to run some PortSim tests to determine which strategies are best, or to fine-tune parameters - *do it twice*.

The first time, use the good ole %Equity method with compounding. The second time, run the tests the same way but using Fixed $ allocation. Maybe start with a bit more money (to avoid killing the account early on) - say $200k instead of $100k - but this is not essential.

Then, separately evaluate the tests to see which strategies / methods / parameter-values bubble to the top and appear to be "the best". I am sure that in many cases, the answers will be DIFFERENT. In those cases, my point is that the "right choice" - the one that is most likely to hold up well in future trading - will be the one based on non-compounded Fixed $ allocation.

[Edited by Jim Dean on 8/30/2018 8:09 PM]
LSJ (Legend) | Posts: 515 | Joined: 8/17/2006 | Location: Citrus Springs, FL
Jim, I certainly understand your meticulous analysis of what is going on and appreciate the perspective. I maintain the view that as long as the "synthetic" history captures the character of the charts being traded, it is a reasonable approximation useful for drawing some conclusions about the future. It is interesting to me that if all identifying labels are removed, most people could not tell the difference between a 15m chart on a stock vs. a daily chart on crude oil (me included), or whatever. That is kind of thought-provoking when I think about just what my analysis is doing.

Weighing in on the % equity vs. dollar amount question (again), I am still in favor of an additional calculation based on risk. Sort of like a lottery ticket: I can only lose $2 even if the pot is multi-millions. Percent of equity does not apply the same as if I place a trade with a stop at a $200 loss - $200 is all I can lose no matter the notional value of the trade (notwithstanding the outlier event where stops don't work).

I'm no accountant, but I would like to see some parallel analysis in Port Sim of return on capital at risk. That could change PS results and amounts traded. I think this becomes more evident in futures trading. Taking the eMini S&P 500, with $6,350 I can control a $145,000 investment. That further reduces to the dollar risk to the stop. It is not unreasonable to find a trade where $500 risk is a good trade. So now, with $500 at risk on $145,000, what is the best way to calculate a meaningful measure? Just saying...
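LSJ's eMini figures make the "meaningful measure" question concrete: the same trade P&L looks wildly different depending on the denominator. A quick illustration using the numbers from the post (the $1,000 profit is hypothetical, added only to show the three ratios):

```python
# LSJ's eMini S&P example: one contract posts $6,350 margin and
# controls roughly $145,000 notional; a sensible stop risks $500.
margin = 6_350.0
notional = 145_000.0
risk = 500.0

profit = 1_000.0  # hypothetical trade P&L, for illustration only

print(round(100 * profit / notional, 2))  # return on notional: ~0.69%
print(round(100 * profit / margin, 2))    # return on margin: ~15.75%
print(round(100 * profit / risk, 2))      # return on capital at risk: 200%
```

The same $1,000 reads as a rounding error, a solid gain, or a double, which is why a parallel "return on capital at risk" statistic would tell a very different story from % of equity.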
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
Hi Larry,

I fully agree that dollars at risk in a trade (ie entry vs stop, times shares) is highly important - and for most active stock traders, likely more important to portfolio health than the capital tied up in the trade. The Turtle Trader plugin provides an allocation method based on volatility-risk, which is a close cousin.

My recent posts have been trying to encourage people to use a more mathematically rigorous and pragmatic choice for allocation than compounded % Equity. For simplicity's sake, using the tools we all have at hand now, fixed-$ seems the most justifiable and reliable selection when trying to choose between alternative strategies, methods or parameter settings.

However - I would also consider Fixed Risk $ to be a reasonable method, as long as that same method is used in actual trading (for those that own Turtle Trader). If the strategies being considered all use some form of volatility-based exit methods (such as trailing stops based on ATR multiples), this would be the superior choice. Since it's not compounded, it keeps a level playing field for all trades across a testing period.

It's my observation, though, that most strategies produced by Nirvana tend to gravitate to exit methods that are either time-based or %price-based (rather than ATR-volatility based). Insofar as that may be the case for a given set of tests, fixed $ would likely yield more statistically representative conclusions.

[Edited by Jim Dean on 8/30/2018 8:45 PM]
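Volatility-risk sizing in the spirit Jim describes (a fixed dollar risk budget with a stop placed at an ATR multiple) can be sketched as below. This is an illustration of the general technique, not the Turtle Trader plugin's actual formula; the function name, defaults, and example values are all assumptions.

```python
def atr_risk_size(entry, atr, risk_dollars=500.0, atr_mult=2.0):
    """Fixed-risk, volatility-based sizing: place the stop `atr_mult`
    ATRs below entry, then buy however many shares make the distance
    to that stop equal the fixed dollar risk budget."""
    stop_distance = atr_mult * atr
    shares = int(risk_dollars / stop_distance)
    notional = shares * entry
    return shares, notional

# $50 stock with a $1.25 ATR: the stop sits $2.50 away, so a $500
# budget buys 200 shares ($10,000 notional).
shares, notional = atr_risk_size(50.0, 1.25)
print(shares, notional)
```

Note that a more volatile symbol (larger ATR) automatically gets a smaller position for the same risk budget, which is the "level playing field" property Jim is after.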
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
I know this has been a lot of info to digest. So, here is the main point I'm trying to make, using the examples previously provided:

The KEY ISSUE in this allocation debate, *insofar* as it applies to testing and development A vs B vs C comparisons, hinges on COMPOUNDING. % Equity allocation naturally uses compounding. Fixed $ (whether measured by per-trade capital commitment or by per-trade risk exposure) does not suffer from the potentially horribly misleading decisions which compounding sets the stage for.

Using popular lingo, I would classify results and decision-making from compounded allocation while testing A vs B as "Fake News". This is true for stocks, futures, forex, options, mutual funds, bitcoin, Texas hold'em, and Ponzi schemes. It's just basic math and scientific method.

[Edited by Jim Dean on 8/31/2018 6:37 AM]
Vinay (Elite) | Posts: 640 | Joined: 12/9/2011 | Location: Planet Earth
While using the % of Equity method, I always use the "Max Amount" setting in the "Trading Parameters" tab. If we put some realistic value here, it should take care of many of the concerns regarding unrealistic trade sizes.

P.S. I request that members reduce the size of the images posted here so they fit on the screen. It disrupts the flow of reading if we need to continuously move the page left and right.

[Edited by Vinay on 8/31/2018 10:48 AM]
Jim Dean (Sage) | Posts: 3022 | Joined: 9/21/2006 | Location: L'ville, GA
Thanks, Vinay ... I'd totally forgotten about that input option. And yes, it would help with the "unreasonable sizes" aspect of my original post. I'd suggest that a good "generic" setting for the "average" OT active trader might be $100,000 (or maybe a lot less, if the strat/method being tested offers plenty of entry opportunities).

This may not *eliminate* the problem of COMPOUNDING during development testing comparisons ... unless the value it is set to is pretty small. For the statistically legitimate A vs B comparisons described in earlier posts, I still feel that Fixed $ (Capital or Risk) is the most proper method.

Thanks, Vinay!

[Edited by Jim Dean on 9/1/2018 9:26 AM]
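The effect of Vinay's "Max Amount" cap on a % of Equity rule is simply the minimum of the two sizes. A one-line sketch (the function name and the 10% / $100k defaults are illustrative, not OT's internals):

```python
def capped_pct_size(equity, pct=0.10, max_amount=100_000.0):
    """% of equity allocation with a 'Max Amount' ceiling: the percent
    rule binds while the account is small, the cap binds once it grows."""
    return min(equity * pct, max_amount)

print(capped_pct_size(100_000))      # -> 10000.0  (the 10% rule binds)
print(capped_pct_size(10_000_000))   # -> 100000.0 (the cap binds)
```

This also shows why the cap only mutes rather than eliminates compounding: below the $1M crossover point ($100k cap / 10%), trade sizes still scale with equity.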
mholstius (Veteran) | Posts: 174 | Joined: 1/13/2017
Hi Jim,

Your very detailed post #45287 above does an excellent job of pointing out the problems with lists: survivability, non-existent symbols in the past, changing members of indexes, etc. Those difficulties exist no matter what allocation we use, so they have to be taken into consideration and minimized as much as possible with dynamic lists, OmniScan, etc.

Our goal is to compare various systems and strategies as accurately as possible. We want to see the trades that would've been generated by the strategies / systems in diverse situations and market states, according to the rules built into each.

I took your advice and created 2 ATM methods that are identical, except that one uses % of equity set at 10% and the other uses a fixed $ amount set at $10,000 (also 10% when they both start with $100,000). You're correct that they give vastly different results, but I'll try to explain why % of equity is the best choice at the moment, and why using a fixed $ amount has a fatal flaw.

This is a log scale chart showing both results using non-concurrent ATM. It's correct that when using % of equity, the trade sizes and equity are MUCH larger toward the end due to compounding. We can adjust for that visually by using a log scale chart (above), but have to agree that the sizes of the trades are completely unrealistic. However, for our purposes size doesn't matter - we simply want the best representation of the stability and performance of the strategies and systems when they trade according to the rules of the method.

The following chart isolates the % Of Equity run in a normal (non-log) scale chart. Notice that the ATM allocating and ranking functions kick in and pick the trades according to the settings we've chosen, with a maximum of ten 10% trades at any time. This gives us a fairly stable avg % invested over the 15-year period.
The following chart isolates the Fixed $ Amount run in a normal (non-log) scale chart. The critical point to notice here is that there are almost 50% more trades taken (9,938 vs 6,689).

The 6,689 trades taken in the % of equity run is the number determined by the correct application of the ATM method's ranking and allocation settings. The 9,938 trades taken by the Fixed $ run include 3,249 extra trades, taken because the fixed dollar trade size as a percent of the available equity declines rapidly as the account grows. This allows it to take many more trades, thereby negating the settings in ATM that would normally allow only the 10 "best" trades to be taken before reaching the equity ceiling.

Below is a table showing how a change in account size, up or down, affects the % of the account assigned to each trade.

I built this ATM method to use 10% of equity per trade, which limits it to picking the top 10 trades available before running out of equity. When using a fixed trade size of $10,000, the only time it's trading according to those ATM rules is when the account size is $100,000. If the account grows to $200,000, the $10,000 trade size becomes 5% of equity (allowing 20 trades), and when the account reaches $1M the $10,000 trade size is only 1% of equity (allowing 100 trades). This allows the method to take virtually any number of trades that are available at that point, and negates the settings that should cause it to select the best trades.

This is the fatal flaw when using a fixed trade size: as soon as the account moves away from the starting equity, in either direction, the % of the account allotted to a trade varies and is no longer correct - and the results are therefore inaccurate. If the account increases in value, the % of the account allocated to a trade decreases, and the number of trades that could be taken increases.
The opposite occurs when the account size decreases, so simply setting a "Max # Of Trades" to 10 would not alleviate the error when the account had decreased below the starting equity. When using % Of Equity, as the account moves up and down the size, number, and ranking of the trades in ATM adjusts to follow the rules of the method. We want the best representation of what to expect from the system. I agree that it’s definitely unrealistic to think that we can make multimillion dollar trades, but we’re using the Port Sim to determine whether the specific allocations we’ve chosen in the method worked over a long period of time in the past. Currently, the most accurate way to do that is by using % of equity. Unrealistically high trade sizes as time goes by are a necessary byproduct of the process, but all the associated statistics (# of trades, HR, MDD, etc.) are accurate. That simply isn’t true when using a Fixed Trade Size. I realize this challenges what you’ve proposed, Jim, but I hope this explanation helps to clarify the difference in results when using a Fixed Trade Size vs % Of Equity… Mark [Edited by mholstius on 8/31/2018 7:47 PM] ![]() ![]() ![]() ![]() | |||
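Mark's dilution table reduces to a few lines of arithmetic. This sketch (function names are hypothetical; the dollar figures come from the posts above) shows how a fixed $10,000 trade shrinks as a slice of a growing account, and how many concurrent positions it therefore permits:

```python
# A fixed $10,000 trade size becomes a smaller fraction of the account as
# equity compounds, so the number of simultaneous positions the simulator
# can open keeps growing - the effect Mark's table illustrates.
FIXED_TRADE = 10_000

def allocation_pct(equity: float, trade_size: float = FIXED_TRADE) -> float:
    """Fraction of current equity that one fixed-size trade consumes."""
    return trade_size / equity

def max_concurrent_trades(equity: float, trade_size: float = FIXED_TRADE) -> int:
    """How many fixed-size trades fit before hitting the equity ceiling."""
    return int(equity // trade_size)

for equity in (100_000, 200_000, 1_000_000, 1_600_000):
    print(f"${equity:>9,}: {allocation_pct(equity):.2%} per trade, "
          f"up to {max_concurrent_trades(equity)} open positions")
```

At $100,000 this matches the ATM method's intended ten 10% slots; at $1.6M the same fixed amount is well under 1% of equity, with room for 160 positions.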
Jim Dean Sage Posts: 3022 Joined: 9/21/2006 Location: L'ville, GA |
Hi Mark

It’s pretty late and honestly I’m not able to absorb the details very well. Some of what you said sounds like the issue is with the ATM settings not allowing the Fixed $ case to run in a realistic manner. And I think you missed a crucial point I was making: I kept repeating that Fixed $ runs are best *for the purpose of comparing* A vs B during the process of choosing between strategies, comparing methods, or tuning parameters.

Your example does not do that, if I follow it. Your two A vs B runs seem to be identical in everything except the allocation method - which is the exact opposite of what I was talking about. Your point seems to be, at least in part, that ATM was not allowing all of the account to be in play.

Again, I’m a bit tired right now, but here is what I suggest: SIMPLIFY. Forget ATM for now and just pick out, say, five individual strategies, all of which you think are pretty good ones. Using the same focus list and 10 yr time frame etc, do a PortSim run on each separate strategy using Fixed $. Examine the results and make a decision from those runs only, using whatever criteria you want, about which of the five is best, second best, etc.

Now, repeat that process for the same five strats with the same FL and timeframe etc, this time using % Equity. Examine those results and use the same decision criteria to rank the strategies.

My prediction is that the order in which you ranked them will end up being different. And my assertion is that the decision made from the Fixed $ runs, regarding that ranking, will be the most reliable and representative one for future use.

Re ATM - it is a MUCH more complex playing field, and it’s very likely that several things need to be set differently to accomplish similar decision-making comparisons. But the comparisons always should be Fixed $ vs Fixed $.

I’ve tried to make it clear that all I was saying has to do with making choices. I’m not saying that Fixed $ is necessarily preferable to % Equity *trading* at the hard right edge.

I hope this makes better sense than my current awakeness-level merits. Thanks again for giving this a shot.

[Edited by Jim Dean on 9/1/2018 9:25 AM] | |||
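Jim's prediction - that the same five strategies can rank differently under the two sizing rules - can be shown with a toy example. The return streams and strategy labels below are invented for illustration (not OmniTrader output): Fixed $ weighs every trade's return equally, while % of equity weighs later trades by compounded equity, which penalizes volatile streams.

```python
# Toy demonstration that A-vs-B rankings can flip between sizing rules.
def final_equity_fixed(returns, start=100_000.0, stake=10_000.0):
    """Fixed $: every trade risks the same dollars, so order and
    compounding don't matter - only the sum of per-trade returns."""
    equity = start
    for r in returns:
        equity += stake * r
    return equity

def final_equity_pct(returns, start=100_000.0, frac=0.10):
    """% of equity: trade size compounds with the account, so the
    geometric (not arithmetic) quality of the stream dominates."""
    equity = start
    for r in returns:
        equity += equity * frac * r
    return equity

strategies = {
    "A (steady)":   [0.02] * 100,        # small consistent winners
    "B (volatile)": [0.50, -0.45] * 50,  # big swings, higher arithmetic sum
}

for name, sizer in (("Fixed $", final_equity_fixed),
                    ("% Equity", final_equity_pct)):
    ranking = sorted(strategies, key=lambda s: sizer(strategies[s]),
                     reverse=True)
    print(f"{name:8s}: best -> worst = {ranking}")
```

With these streams, B finishes ahead under Fixed $ (its per-trade returns sum higher), but A finishes ahead under % of equity (B's volatility drags down its compounded growth) - the ranking flips exactly as Jim predicts.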
mholstius Veteran Posts: 174 Joined: 1/13/2017 |
Good morning, Jim…

Before using anything to make choices, I want to know what it’s telling me. In order to do that, I intentionally used the same setup for both runs and only changed the method of allocation (% of equity vs fixed $ amount). That allowed me to isolate and quantify the effect that one variable was having. If it proved to be beneficial, then I’d go ahead and use it.

The comparison of the two made it apparent that using a fixed $ amount alters all of the pertinent statistics, because the size of the trades relative to the account changes as the account equity changes (up or down). It has nothing to do with ATM, other than the fact that ATM was my choice of vehicle for doing the comparison. ATM is working correctly, as intended.

Let me limit this discussion to the run using a fixed $ amount and ignore the other;

It starts by using a fixed allocation of $10,000 (10% of the starting equity). As the account grows, the trade size relative to the account equity decreases. When the account reaches $200,000, each trade uses only 5% of the available equity. At $1M, each trade uses only 1% of the available equity. And at the end, when the account is at $1.6M, each trade is using only 0.63% of the available equity. As a result, all the statistics (MDD, Avg Inv, ROI, etc.) are lowered - but those stats will always be totally dependent on how the account grows (or falls).

It’s true that trading an account in an increasingly conservative manner like this might be exactly what someone wants to accomplish. If so, then this is the way to go. Personally, I want to take advantage of the benefits of compounding - and this demonstrates that the correct allocation for me to use is % of equity, during testing and comparison of various strategies as well as in actual trading.

Hope that helps clarify what I was trying to demonstrate.

Happy Labor Day weekend,
Mark

[Edited by mholstius on 9/1/2018 8:14 AM] | |||
Jim Dean Sage Posts: 3022 Joined: 9/21/2006 Location: L'ville, GA |
Thanks Mark

With fresh eyes this morning, and your response to confirm it, it appears that my initial evaluation last night was correct: you and I are talking about entirely different things.

I have no argument or “dog in the fight” whatsoever about which allocation method is better to use for trades at the HRE (in the context of this thread, that is ... I *do* have opinions about it ;~). Full allocation of funds is crucial and I’ve always felt that way. And I understand the simple math re how the compounding operates, how allocation percentages are calculated, and why the bottom graph is different. But all of that is off the subject of what I was trying to talk about in this thread (which, btw, you asked me to separate from the other thread so as not to muddy the waters there). So, with due respect, back atcha my friend :-)

It might be less confusing, in *this* thread, to move those correct but off-topic posts elsewhere and stay focused on the important point I am trying to present here. If you can debunk it, fine. But not by changing the question itself, please. I’ve stated the “purpose” several times, and we need to stick to that playing field for any of this to make sense. I’ve laid out a specific test regimen, so it should be clear if you read it carefully and follow the steps.

Again - my point is relative to DOING RESEARCH - not about active live trading. The research I am speaking of is when a user is trying to answer a question like:

A. Of these five strategies, which is the best and which is the worst?
B. Within a range of possible parameters for a Block in a Strategy, what values are best?
C. (More complex) For some ATM feature such as the Ranking formula, what’s the best to use?

To answer those kinds of focused questions, I believe that my prior points have satisfactorily proven that Fixed $ is the allocation method that will most properly utilize the wide variety of market fluctuations across a ten year backtest, such that the resulting choices have the highest likelihood of providing robust future performance - regardless of the allocation method used in actual trading. I am absolutely confident of this, fwiw. It’s just math. (But of course I have been known to be wrong before … ;-)

PS: I have retroactively modified the thread title to clarify the focus.

[Edited by Jim Dean on 9/1/2018 10:02 AM] | |||
mholstius Veteran Posts: 174 Joined: 1/13/2017 |
Well, Jim… you’re definitely right, but using a Fixed $ amount was just part of the process. I’d appreciate your input on taking your excellent suggestion a bit further. (Along with anyone else’s ideas…)

What I found in my testing this weekend is that the goal can be simplified even more. Using a Fixed $ amount goes part way, but I think the real objective is to see all the trades the strategies can produce, in order to get the best picture of their potential performance (especially before using ATM to then find the best trades in each). To accomplish that, I used the basic parameters I’d posted in another thread to get all the trades;

1) To make sure the high priced stocks are included, set the starting equity to $1M
2) Then set either the Fixed $ at $10,000 -OR- the % Of Equity to 1%
3) Set the Margin leverage at 6X and “Use Leverage To Increase # Of Trades”

The results from a “normal” run comparing the 5 strategies starting at $100,000, with all at 10% of equity;

Notice that VBX3 is at the bottom and has 4,763 trades.

Here are the same 5 strategies using settings to get the max number of trades, and a Fixed $ of $10,000;

All of the strategies have a larger number of trades, but notice the difference in the curves - and especially that VBX3 now has 12,801 trades (vs 4,763) and has moved from the bottom to the top.

To demonstrate that showing all the trades is the deciding factor, as opposed to simply using a Fixed $ amount, here’s the same run of 5 strategies using % Of Equity at 1%;

The # of trades for each is identical to the Fixed $ run, and the curves are basically the same. I prefer % Of Equity, though, since you don’t get the slight “rolling over” effect produced as the account grows and the Fixed $ amount becomes a lower % of the account.

With the original 10% allocation run, I would have chosen NSP41 and CRT3 as the top 2 strategies, but now I’ll use VBX3 and NSP41 for the following example.

It gets even more interesting when combining those 2 and comparing their performance when using allocations of 10% of Equity and 1% of Equity;

Using a 1% vs 10% allocation, the number of trades increases to 19,193 from 6,310, and the MDD decreases to 14.3% from 49.9% (Avg Ann MDD to 9.0% from 26.5%). Interesting that decreasing the allocation, and thereby increasing the # of trades and diversification, can change the performance so radically.

There could be a concern that the increased # of trades at 1% would run up the commissions, so I included IB commissions in the test.

Just wanted to share the information. Maybe it will save someone some hours of testing, or spark an interest in a new area…?

Thanks again for getting me thinking, Jim.

Happy Labor Day weekend,
Mark

[Edited by mholstius on 9/2/2018 12:52 PM] | |||
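The drawdown reduction Mark reports from cutting the allocation to 1% can be sketched with synthetic data. Everything below (the daily-return model, seed, and resulting numbers) is invented for illustration - only the qualitative effect carries over: with the same total exposure, many small independent positions average out each other's noise, so the equity curve is smoother and the max drawdown shallower.

```python
# Why 1% x ~100 positions draws down less than 10% x ~10 positions:
# same total exposure, but more independent bets per day.
import random

def max_drawdown(curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = curve[0], 0.0
    for v in curve:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def simulate(frac, n_positions, n_days=2500, seed=42):
    """Equity curve where each open position earns an independent
    noisy daily return; frac is the per-position allocation."""
    rng = random.Random(seed)
    equity, curve = 100_000.0, []
    for _ in range(n_days):
        day = sum(frac * rng.gauss(0.001, 0.03) for _ in range(n_positions))
        equity *= 1 + day
        curve.append(equity)
    return curve

dd_concentrated = max_drawdown(simulate(frac=0.10, n_positions=10))
dd_diversified  = max_drawdown(simulate(frac=0.01, n_positions=100))
print(f"10% x 10 positions:  max drawdown {dd_concentrated:.1%}")
print(f" 1% x 100 positions: max drawdown {dd_diversified:.1%}")
```

Both portfolios are fully invested (allocation times position count is 100% in each case); the diversified one simply has a lower daily volatility for the same expected return, which is the mechanism behind the MDD drop Mark observed.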
Jim Dean Sage Posts: 3022 Joined: 9/21/2006 Location: L'ville, GA |
Thanks for putting in that work, Mark.

You're 100% right ... I should have mentioned re researching that the starting equity should be really high. But I was mostly focused on the ills of compounding ...

Re using 1% equity vs Fixed $ ... certainly the compounding is reduced ... but might I suggest that you set a cap on it, as Vinay pointed out - maybe $100,000 per trade (with a $1M start). The reason for this is that if trade sizes are allowed to grow to the degree that they do, it invalidates, or at least largely attenuates, the influence of the early half or so of the test period on the ultimate results ... thus making the A-vs-B conclusions drawn less robust/reliable for the future.

Your mention of commissions implies to me that you're still thinking of using the same methodology for backtest-research A-vs-B as for actual trading. I'd suggest that if you remove the actual-trading part from your thinking ... and therefore also remove any consideration at all of the net-allocation graph ... it will become clear that Fixed $ is the best apples-to-apples method.

To be clear ... *I* recommend using *neither* of those methods for actual trading ... I have a "SmartSizing" approach that melds capital tied up, risk being taken, hedging, and diversification. Which, obviously, isn't one of the PortSim tab options. ;~)

[Edited by Jim Dean on 9/2/2018 12:56 PM] | |||
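Jim's cap suggestion is simple to state precisely. A minimal sketch (function name and defaults are hypothetical, using the $100,000 cap and $1M start from his post):

```python
# %-of-equity sizing with a hard dollar ceiling: early trades compound
# normally, but late-backtest trades stop ballooning, so the early half
# of the test period still influences the A-vs-B comparison.
def trade_size(equity: float, frac: float = 0.10, cap: float = 100_000.0) -> float:
    """Dollar size of the next trade: frac of equity, capped at `cap`."""
    return min(frac * equity, cap)

for equity in (500_000, 1_000_000, 5_000_000):
    print(f"${equity:,}: trade size ${trade_size(equity):,.0f}")
```

Below $1M of equity this behaves exactly like 10%-of-equity sizing; above it, every trade is pinned at $100,000, which is what keeps late-period compounding from drowning out the earlier market regimes.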
mholstius Veteran Posts: 174 Joined: 1/13/2017 |
Good points, Jim... I'm gonna go back to working on my walk forward project for the rest of the weekend. Sure wish we had that "Smart Sizing"... Mark | |||
Jim Dean Sage Posts: 3022 Joined: 9/21/2006 Location: L'ville, GA |
Re Smart Sizing ... the simple version is just an OScript formula ... I sent it to Ed several years ago, when he said N was definitely going to do something to OT and OV to make it useable (either hardcoded, or an OScript input field for position sizing). Obviously, that never happened.

I conducted a Sat afternoon seminar on that at a Bash in 2013-14, and gave out the OScript formulae to attendees. It's currently usable in a focus list, so for manual HRE trading it works fine. But since it's not possible (yet) to do it mechanically, there's not much point in re-discussing it. I'm pretty sure that sometime in the past, somewhere on the N forums in my ~10,000 posts, I wrote it up.

The "robust" version's enhancements of Hedging and Diversification, interestingly enough, are "sorta" do-able in the ATM engine. But the core position sizing thing - calculating shares for each trade based on the symbol's price and its volatility - is just not yet do-able.

Otoh ... I'm still working on a huge project that dramatically enhances mechanical Trade Plan capabilities (true dynamic scaling in and out) ... and one of the benefits of that (a fairly simple aspect of it, in fact) is support for Smart Sizing. So ... there is hope for the future. | |||
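Jim's actual SmartSizing OScript formula isn't posted in this thread, so the sketch below is only a generic volatility-normalized sizing rule in the same spirit he describes - shares per trade derived from the symbol's price and volatility, melding risk taken with capital tied up. All names and default values here are assumptions, not his formula:

```python
# Generic volatility-based sizing (NOT Jim's unpublished OScript formula):
# take whichever share count is smaller -
#   - the count that risks `risk_frac` of equity over a stop of
#     `atr_mult` ATRs (risk being taken), or
#   - the count that ties up at most `max_frac` of equity (capital used).
def shares_for_trade(equity: float, price: float, atr: float,
                     risk_frac: float = 0.01, atr_mult: float = 2.0,
                     max_frac: float = 0.10) -> int:
    by_risk = (equity * risk_frac) / (atr_mult * atr)  # volatility limit
    by_capital = (equity * max_frac) / price           # capital limit
    return int(min(by_risk, by_capital))
```

For a $100k account, a quiet $50 stock with a $1 ATR is capital-limited (200 shares is 10% of equity), while a cheap $10 stock with a $2 ATR is risk-limited (250 shares keeps the 2-ATR stop at 1% of equity) - so volatile symbols automatically get smaller dollar positions.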
aztrix Veteran Posts: 116 Joined: 6/16/2004 Location: Sydney, NSW, Australia |
Howzit Jim?

An amazing thread with great points of view from many experts.

While I applaud the intention to get a more flexible/meaningful allocation methodology, I believe the Fixed $ allocation is also flawed (maybe not equally, but substantially flawed anyway), and this relates to inflation and the changing purchasing power of the $. What do I mean? Let me explain using an example: assuming your parameters of 7% on a $100k account, you could have bought ±250 AAPL shares 20 years ago, while today you wouldn't even be able to afford 50. This is anything but a level playing field; statistically, each trade doesn't have the same influence. So IMHO we're comparing apples (pun intended) with oranges - i.e. the Fixed $ allocation is flawed as well. Perhaps in a flat market over 20 years you could make a case …

This does not detract from your point that we need a more flexible/meaningful allocation methodology - quite the contrary, it reinforces it!

The reality is that we do have cash accounts associated with our trading accounts and transfer funds between the two, so it would be a welcome addition to the PortSim - e.g. withdraw 50% of profits/year. But I digress.

BTW, I find it interesting that we talk about seeding a trading account with $100k 20 years ago. Personally speaking, I would have dreamt of having an account of that size; in reality it would have been more like $5k-$10k, so for my testing that would be a more realistic starting point. That would also limit the compounding effect while magnifying the real cost of trading on a small account, and it lends itself more to a % of Equity allocation methodology.

Keep the good stuff coming Jim, I think we all learn something new from you every week, if not every day.

Cheers
Bruce
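Bruce's apples-with-oranges point in numbers: a fixed dollar slice buys very different share counts as a stock re-prices over decades, so trades don't carry equal statistical weight even under Fixed $ allocation. The prices below are illustrative round figures, not actual AAPL quotes:

```python
# 7% of a $100k account = $7,000 per trade; whole shares only.
# Prices are illustrative - the point is the share-count drift.
def shares(budget: float, price: float) -> int:
    """Whole shares a fixed dollar budget buys at a given price."""
    return int(budget // price)

print(shares(7_000, 28))    # at an illustrative $28/share
print(shares(7_000, 150))   # at an illustrative $150/share
```

The same $7,000 trade represents 250 shares at one end of the backtest and well under 50 at the other, so the fixed dollar amount equalizes capital per trade but not the trade's footprint in the underlying symbol.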