What do you think about the option to backtest with modified/synthetic candles?
Here, for example, they add noise within the true range of the real candles:
or like this:
Could be useful against overfitting.
I think it could be very nice, precisely to avoid overfitting; it would be useful for stressing the strategies in a new way.
I have a question; I'm new to the practical side of ML. Would it be beneficial, as a long-term goal, to take our small sets of iterations and bring them together to get either a broader spectrum of outcomes or a more focused outcome? If so, does anyone have experience with this? I would start right away on trying to set up a database. EDIT: Or rather a concept ;-) /EDIT
I would love to give a crowd solution a try.
PleaseEducateMe Not sure if I understand your question. Right now there isn't any ML involved in Jesse.
markus Good choice of username then. I assumed that optimize_mode/genetics.py is the start of ML, since it uses training/testing with iterations.
So much to learn, such wonderful projects to follow. I love it!
Thank you all for your work!
PleaseEducateMe That's the upcoming optimize mode. It's hyperparameter optimization. It's currently not ready for use, but @Saleh is working on it. To prevent overfitting in that process, the backtest period is divided into two sets: training (85%) and testing (15%).
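The split works chronologically rather than by shuffling, so the testing segment is strictly out-of-sample. A toy sketch of that 85/15 idea (not Jesse's actual implementation):

```python
def split_candles(candles, training_ratio: float = 0.85):
    """Split chronologically ordered candles into training and testing
    segments without shuffling, so the test set stays out-of-sample.
    A sketch of the 85/15 split described above, not Jesse's actual code.
    """
    cut = int(len(candles) * training_ratio)
    return candles[:cut], candles[cut:]

# 1000 candles -> 850 for optimizing, 150 held back for validation
train, test = split_candles(list(range(1000)))
```

Parameters are evolved against the training segment only; a candidate that also scores well on the untouched testing segment is less likely to be overfit.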
PleaseEducateMe Jesse's optimize mode uses a genetic algorithm under the hood, which falls under the artificial-intelligence umbrella but isn't exactly machine learning (that's another field of AI).
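For intuition, here's what a genetic algorithm boils down to: a population of candidate hyperparameter sets is repeatedly scored, the fittest survive, and offspring are produced via crossover and random mutation. This is a generic, self-contained sketch of the technique only; the function and parameter names are made up and it is not Jesse's implementation:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=20, generations=30,
                     mutation_rate=0.2, seed=None):
    """Evolve integer hyperparameters within `bounds` (list of (lo, hi)
    pairs) to maximize `fitness`. A minimal genetic-algorithm sketch.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(bounds))      # single-point crossover
            child = a[:cut] + b[cut:]
            for i, (lo, hi) in enumerate(bounds):    # random mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randint(lo, hi)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: pretend the ideal "strategy" uses fast=10, slow=40;
# in real use, fitness would be a backtest score on the training set.
best = genetic_optimize(lambda g: -abs(g[0] - 10) - abs(g[1] - 40),
                        bounds=[(2, 50), (20, 200)], seed=7)
```

The key contrast with ML: nothing here learns a predictive model from data; the algorithm just searches the parameter space guided by a fitness score.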
I will most likely create more AI tools in Jesse. Even now, the strategy API fully supports training ML models. I will add tutorials for it in the future.
Saleh Wow, I did not expect ML models to be supported. I found out about Jesse a few days ago and I'm amazed at how much hard, high-quality work has been done. I'm gonna get my hands dirty with it this week.