One of those posts I'm going to have to read at least three times to let it sink in!
Your process makes perfect sense, mate, thanks for talking it through. There are a few things there I will adopt to make my approach more structured (and, importantly, to help keep my head on straight over the coming months).
I reckon I'm still going to stick to the store-bought EAs for now, just work through those and see how I go. Acknowledging the lack of 'control', I remain optimistic there must be some good ones out there with enough inputs available to tweak to my liking. Once I have a few stable ones running, that will be my trigger to start looking into developing my own, hopefully with a deeper knowledge of the optimisation process by then, so I'm not trying to master two things at once. What's working against me right now, or at least what deters me from biting into the code, is that I have never manually traded FX properly. While I understand most of the concepts from ASX trading, the spreads, pips, and position and risk management protocols are all so different that I don't want to bite off too much too early and face a very steep learning curve. I'm trying to flatten it a bit, and if that means I can make crude profits earlier in my journey, all the better. So I see the store EAs as low-hanging fruit, if that makes sense.
So, if you throw yours up on the store, be sure to shout out; I'll be your first buyer!
That last paragraph was very helpful, thank you. There are so many ways to tackle it, and how you describe it makes perfect sense. However, like Rick, I find your curve-fitting comment very interesting. Let's take two scenarios:
Option 1:
All 10 variables optimised in the tester in a single run, which spits out that settings XYZ are "optimal".
Option 2:
Test 2 variables at a time over 5 runs, leaving you with 5 'pairs' of 'optimal' variables.
Do you then take that same EA, adjust all 10 variables as per the results of your 5 runs, and consider that the EA to proceed to demo, or do you then cross-test the new values against different pairs and keep adjusting? (If so, to what end?)
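For anyone wanting to see the difference between the two options in numbers, here's a toy Python sketch of the combinatorics. The `backtest_score` function and variable names are entirely hypothetical stand-ins for a single strategy-tester pass, nothing MetaTrader-specific:

```python
import itertools

# Toy objective: a stand-in for one back-test pass over a fixed data set.
# Hypothetical and separable on purpose, just to show the combinatorics.
def backtest_score(params):
    return -sum((v - 3) ** 2 for v in params.values())

grid = {f"var{i}": range(1, 6) for i in range(1, 11)}  # 10 vars, 5 values each

# Option 1: one joint run over every combination.
joint_passes = 1
for values in grid.values():
    joint_passes *= len(values)
print(joint_passes)  # 9765625 passes (5**10)

# Option 2: optimise 2 variables at a time, holding the rest at their
# current best values -- 5 runs of 5*5 = 25 passes, 125 passes total.
best = {name: 1 for name in grid}            # start from defaults
names = list(grid)
for a, b in zip(names[0::2], names[1::2]):   # 5 fixed pairs
    def score(pair):
        trial = dict(best)
        trial[a], trial[b] = pair
        return backtest_score(trial)
    best[a], best[b] = max(itertools.product(grid[a], grid[b]), key=score)

print(best)  # every variable lands on 3 for this separable toy objective
```

The catch Option 2 hides: it only finds the joint optimum when the variables don't interact (as in this separable toy). If var1's best value depends on var7's, pairwise runs can converge somewhere the joint run never would.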
I think the point I still haven't got my head around is this (and no article has explained it to me yet): isn't back-testing, by its very nature, curve fitting? Given enough tweaks you will eventually arrive at the "perfect EA", but as we know, what constitutes a 'perfect EA' changes constantly in a live environment. So whether you test all variables at once for 10,000 passes, or run 100 back-tests each with only 100 passes, what's the difference if you are only ever refining against a fixed data set? The more you sharpen it, the more it becomes tweaked for a world that no longer exists.
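That instinct can actually be demonstrated on pure noise. The sketch below (entirely hypothetical, not any real EA) "optimises" a single threshold on a fixed set of random returns: the winning setting is flattered in-sample purely by selection bias, while the held-back data gives it no such help. Holding data out like this is the standard partial answer to the "isn't it all curve fitting?" problem:

```python
import random

random.seed(42)  # a fixed data set, just like a back-test

# Pure-noise "daily returns": by construction, no setting has a real edge.
returns = [random.gauss(0, 1) for _ in range(2000)]
in_sample, out_of_sample = returns[:1000], returns[1000:]

def strategy_pnl(data, threshold):
    # Hypothetical rule: be long the day after any move bigger than `threshold`.
    return sum(today for prev, today in zip(data, data[1:]) if prev > threshold)

# "Optimise" on the fixed in-sample data, i.e. repeated back-testing.
thresholds = [t / 10 for t in range(0, 30)]
best_t = max(thresholds, key=lambda t: strategy_pnl(in_sample, t))

print(strategy_pnl(in_sample, best_t))      # best of 30 tries: flattered upward
print(strategy_pnl(out_of_sample, best_t))  # no selection bias helping it here
```

The in-sample figure is the maximum of 30 attempts, so it is biased upward by construction; the out-of-sample figure has no such bias, which is exactly why out-of-sample testing is the answer to refining forever on one fixed data set.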
As a follow-on to that thought, I have considered the following: isn't it therefore probable that a strong (consistent) EA is not one that has many variables, but one that has only a few? That way (exactly as you have said) there is less chance of curve fitting and a higher probability of realising the back-test performance in a live environment, even if the back-test returns were never as high as the 'curve-fit' returns. And with only a few variables (take a very simple EMA cross, with just two variable EMA periods), while returns may be lower, it also becomes much easier to adjust over time: perhaps one month a 15 EMA proves to work better, and another month it's 14. I believe you can write optimisation into the code to test for this over time as well, but I may be mistaken.
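That rolling re-optimisation can indeed be scripted. Here's a minimal walk-forward sketch in Python; the EMA cross rule, window sizes, and the `walk_forward` helper are my own hypothetical choices (not MQL's built-in optimiser). Each "month" it re-picks the fast EMA period on trailing data, then trades it only on the next, unseen block:

```python
import math

def ema(prices, period):
    # standard exponential moving average
    k = 2 / (period + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(out[-1] + k * (p - out[-1]))
    return out

def cross_pnl(prices, fast, slow=30):
    # long whenever the fast EMA is above the slow EMA,
    # scored in units of next-bar price change
    f, s = ema(prices, fast), ema(prices, slow)
    return sum(prices[i + 1] - prices[i]
               for i in range(len(prices) - 1) if f[i] > s[i])

def walk_forward(prices, window=250, step=21):
    # every `step` bars: pick the fast period that did best on the trailing
    # `window`, then trade it ONLY on the next unseen `step` bars
    total, chosen = 0.0, []
    for start in range(window, len(prices) - step, step):
        hist = prices[start - window:start]
        fast = max(range(5, 21), key=lambda n: cross_pnl(hist, n))
        chosen.append(fast)
        total += cross_pnl(prices[start:start + step + 1], fast)
    return total, chosen

# Synthetic drifting, wavy price series (purely illustrative)
prices = [100 + i * 0.05 + 3 * math.sin(i / 15) for i in range(800)]
total, chosen = walk_forward(prices)
print(chosen)  # the 'best' fast period can drift from window to window
```

Note the discipline here: the period chosen on the trailing window is only ever traded on data the optimiser hasn't seen, which is what separates walk-forward re-optimisation from straight curve fitting.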
Keen to hear your thoughts on that too, and thanks again for punching out these long, informative posts! I'm sure there are plenty of HC viewers who will benefit, beyond just the few of us who post here.