
    Hi Zest


    In my former career, my experience spanned 20 years of product development and manufacturing. I see EA development, launch and operation as very much the same thing, so I am simply taking my previous experience and applying it to EA/FX trading.


    Yes, apart from building the EA portfolio, continuous improvement is a key part of the process. Even though it is possible to do extensive backtesting, EA revision & optimisation, and demo testing, I know that a perfectly optimised and bug-free EA is not possible before live launch, even with all the testing. Hence the live launch with a graduated risk strategy and continuous improvement. The same happens in product development/manufacturing, which is why the concept of the beta phase/product exists.


    So one goal is to build the target number of EAs for the portfolio. A second goal is to improve each one over time to make it more efficient (not necessarily more optimised). This is the benefit of learning to code: it makes continuous improvement possible, whereas if you subscribe to a signal, it is what it is, and if you don't like it you try another one. Once the desired number of EAs is reached and they have been made as efficient as possible, the next step is to develop new EAs which can replace one or some of the current ones. But now there is an existing benchmark that can be used to decide whether to use a new one or not: does it perform better than one/some of the existing ones (either in efficiency or in correlation)?


    When I say efficiency, I mean things like:

    1. Minimising slippage & other transaction costs
    2. Minimising or maximising the number of trades included/excluded by the EA because of programming design (i.e. can a function be written a different way to catch some profitable category of trades that the EA is not designed to pick up?)


    When I refer to variance, I mean the classic actual-vs-budget definition. In this case, the budget is the chart-based theoretical performance of the model that the EA is based on (factoring in a budget for costs like slippage & brokerage): the number of setups in a period and therefore the number of trades that should be taken, the return, the drawdown, etc. The actual then means things like the real slippage cost vs budget (if greater, do I need to make the budget more realistic, or can I improve the real slippage?), so a comparison of budgeted entry and exit prices against the trades actually entered. I discovered a significant variance when I started trading my first model manually, because of errors (typos in entry and stop levels), incorrectly identified setups, or manual override decisions. I do the same for the EA, and so far it performs better on those variances than I do manually.
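    To make the budget-vs-actual idea concrete, here is a minimal Python sketch of a slippage variance check. All prices and the per-trade slippage allowance are made-up illustrative values, not real figures from the EA:

    ```python
    # Minimal budget-vs-actual slippage check. All prices and the budgeted
    # slippage allowance below are made-up illustrative values.

    def slippage_pips(budget_price, fill_price, pip=0.0001):
        """Slippage on a long entry in pips (positive = worse than budget)."""
        return (fill_price - budget_price) / pip

    # (budgeted entry, actual fill) for a handful of hypothetical long trades
    trades = [
        (1.09500, 1.09512),
        (1.10230, 1.10230),
        (1.08875, 1.08891),
    ]

    budget_allowance = 1.0  # pips of slippage budgeted per trade (assumption)

    actual = [slippage_pips(b, f) for b, f in trades]
    avg_actual = sum(actual) / len(actual)
    variance = avg_actual - budget_allowance  # positive = over budget

    print(f"avg slippage {avg_actual:.2f} pips, variance vs budget {variance:+.2f} pips")
    ```

    The same comparison extends to any budgeted line item (brokerage, trade count, return), with the sign convention telling you whether to fix the execution or make the budget more realistic.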


    So from the above, I have found that my manual variances result in about a 30% negative impact on what the chart-based model says I should achieve. During the same month that the EA has been running, its variances have cost about 20% compared to the chart model, so I have gained a 10% reduction in preventable losses, which goes straight to the bottom line (profit). I think with various quality improvements I can get the EA to within 10% of the chart-based model, and if I can achieve that I would consider it very good performance.
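    As a quick worked version of those numbers, indexing the chart model's theoretical result to 100 (the index itself is illustrative, not a real figure):

    ```python
    # Worked numbers from the post, with the chart model's theoretical
    # result indexed to 100 for illustration.

    chart_model = 100.0
    manual_variance = 0.30   # manual trading loses ~30% to variances
    ea_variance = 0.20       # the EA currently loses ~20%
    target_variance = 0.10   # the stated goal for the EA

    manual_result = chart_model * (1 - manual_variance)
    ea_result = chart_model * (1 - ea_variance)
    target_result = chart_model * (1 - target_variance)
    gain = ea_result - manual_result  # preventable losses recovered by the EA

    print(f"manual {manual_result:.0f}, EA {ea_result:.0f}, "
          f"target {target_result:.0f}, gained {gain:.0f} points")
    ```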


    A lot of the above is akin to product quality control, spoilage and cost control, and continuous improvement, in a manufacturing environment.


    In terms of backtesting, I still do two types: full auto using EAs on the MT4 tester, and semi-auto using spreadsheets. I basically file the results according to the EA/model. With the EA testing I save all the tests, some of which go into the EA analyser in EA Labs, but most are simply part of the screening process to select markets & test/optimise parameters. With spreadsheet testing I keep the spreadsheets for each market/variable tested and then compile results into master spreadsheets at the relevant point in the process.


    Specifically on your last question, I think this refers to using the optimiser in the MT4 tester. I will typically vary only 2 variables in each optimisation, which gives 100-200 results depending on the start, step and stop values selected. If you go to 3 variables, the number of results balloons to 1000+. Personally I think optimising 3 variables at the same time is essentially curve-fitting, which gives a false picture of the results that can be achieved.

    The other thing that makes a big difference is which mode you test in (Every tick, Control points, or Open prices only). At this stage my models are all based on completed bars, i.e. not on ticks, so I can use Open prices only mode to test my EAs. This makes a huge difference in how long a backtest takes: typically a backtest in Open prices only mode takes less than 30 seconds, whereas testing in Every tick mode can take hours. I guess which mode you use depends on what you want to do with your EA.

    Even though my models so far are all based on the H12 timeframe, I run the EAs on the M1 chart because I use M1 bars to trigger entries and exits. Having done the backtesting and optimisation, and knowing the EA works in this mode, I do want to test the EA in Every tick mode (to improve entry & exit pricing). That testing doesn't involve optimisation (and hence massive hours crunching backtests); instead it is focused on correct execution of an already backtested/optimised EA, to compare results between tick and M1 bar mode (if that makes sense).
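    For anyone curious why the third variable balloons the results, here is a rough Python sketch of the optimiser's grid arithmetic. The start/step/stop ranges are invented for illustration:

    ```python
    # Rough sketch of the MT4 optimiser's pass-count arithmetic: each swept
    # variable contributes (stop - start) // step + 1 values, and the total
    # number of passes is their product. The ranges below are invented.

    def pass_count(*ranges):
        """Total optimiser passes for (start, step, stop) integer ranges."""
        total = 1
        for start, step, stop in ranges:
            total *= (stop - start) // step + 1
        return total

    # Two variables at 12 values each: a manageable 144-pass grid
    two_vars = pass_count((10, 2, 32), (5, 1, 16))

    # Adding a third variable (11 values) balloons the grid past 1000
    three_vars = pass_count((10, 2, 32), (5, 1, 16), (20, 5, 70))

    print(two_vars, three_vars)
    ```

    The multiplicative growth is why holding the sweep to 2 variables keeps the result set small enough to inspect by hand, and why 3-variable sweeps make overfitting hard to spot.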


    Cheers, Sharks




 