Welcome to the second part of this two-part blog series on the bias-variance tradeoff and its application to trading in financial markets. In the first part, we tried to develop an intuition for bias-variance decomposition. In this part, we'll extend what we learned and develop a trading strategy.
Prerequisites
A reader with some basic knowledge of Python and ML should be able to read and follow this article. These are some prerequisites:
- Part 1 of this blog series on the bias-variance tradeoff and its application to trading in financial markets
- Linear algebra (basic to intermediate)
- Python programming (basic to intermediate)
- Machine learning (working knowledge of regression and regressor models)
- Time series analysis (basic to intermediate)
- Experience in working with market data and creating, backtesting, and evaluating trading strategies
Also, I've added some links for further reading at relevant places throughout the blog.
If you're new to Python or need a refresher on it, you can start with Basics of Python Programming and then move to Python for Trading: Basic on Quantra for trading-specific applications.
To familiarise yourself with machine learning, and with the concept of linear regression, you can go through Machine Learning for Trading and Predicting Stock Prices Using Regression.
Since this article also covers time series transformations and stationarity, you can familiarise yourself with Time Series Analysis. Knowledge of handling financial market data and hands-on experience in strategy creation, backtesting, and evaluation will help you apply this article's learnings to your own strategies.
In this blog, we'll cover the entire pipeline for using machine learning to build and backtest a trading strategy while using the bias-variance decomposition to select the appropriate prediction model. So, here goes…
The flow of this article is as follows:
As is ritual, the first step is to import the necessary libraries.
Importing Libraries
I've imported the libraries required by all the subsequent code here. If you don't have any of these installed, a 'pip install' command should do the trick (if you don't want to leave the Jupyter Notebook environment, or if you work on Google Colab).
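Here's a representative import block, a minimal sketch covering the code discussed in this article; yfinance, statsmodels, scikit-learn, and mlxtend are all pip-installable.

```python
# Core scientific stack
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Market data
import yfinance as yf

# Regressors compared in the bias-variance decomposition
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import (BaggingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)

# Feature extraction and preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Stationarity testing and bias-variance decomposition
from statsmodels.tsa.stattools import adfuller
from mlxtend.evaluate import bias_variance_decomp
```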
Downloading Data
Next, we define a function for downloading the data. We'll use the yfinance API here.
Notice the argument 'multi_level_index'. Recently (I'm writing this in April 2025), there have been some changes in the yfinance API. When downloading price and volume data for any security through this API, the ticker name of the security gets added as an extra column heading.
It looks like this when downloaded:
For people (like me!) who are used to not seeing this extra level of heading, removing it while downloading the data is a good idea. So we set the 'multi_level_index' argument to 'False', as in the sketch below.
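Here's a minimal sketch of such a download helper. The function name and the reset_index step are my choices; the 'multi_level_index=False' argument is the point made above.

```python
def download_data(ticker, period):
    """Download OHLCV data without the ticker-level column heading."""
    data = yf.download(ticker, period=period, multi_level_index=False)
    data = data.reset_index()  # keep 'Date' as an ordinary column
    return data
```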
Defining Technical Indicators as Predictor Variables
Next, since we're using machine learning to build a trading strategy, we must include some features (known as predictor variables) on which we train the machine learning model. Using technical indicators as predictor variables is a good idea when trading in the financial markets. Let's do that now.
Eventually, we'll see the full list of indicators when we call this function on the asset dataframe.
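Here's an abridged sketch of what such a 'create_technical_indicators' function can look like. Only a handful of the 21 indicators are shown; the others (RSI, VWAP, OBV, ADX, ATR, CCI, Williams %R, stochastic %K) follow the same pattern.

```python
def create_technical_indicators(df):
    """Compute a subset of the 21 technical indicators used as predictors."""
    indicators = pd.DataFrame(index=df.index)
    # Trend: simple and exponential moving averages
    indicators['sma_5'] = df['Close'].rolling(5).mean()
    indicators['sma_10'] = df['Close'].rolling(10).mean()
    indicators['ema_5'] = df['Close'].ewm(span=5, adjust=False).mean()
    indicators['ema_10'] = df['Close'].ewm(span=10, adjust=False).mean()
    # Momentum and rate of change
    indicators['momentum_5'] = df['Close'] - df['Close'].shift(5)
    indicators['momentum_10'] = df['Close'] - df['Close'].shift(10)
    indicators['roc_5'] = df['Close'].pct_change(5) * 100
    indicators['roc_10'] = df['Close'].pct_change(10) * 100
    # Volatility: rolling standard deviations and Bollinger bands
    indicators['std_5'] = df['Close'].rolling(5).std()
    indicators['std_10'] = df['Close'].rolling(10).std()
    sma_20 = df['Close'].rolling(20).mean()
    std_20 = df['Close'].rolling(20).std()
    indicators['bollinger_upper'] = sma_20 + 2 * std_20
    indicators['bollinger_lower'] = sma_20 - 2 * std_20
    # MACD: difference of the 12- and 26-period EMAs
    indicators['macd'] = (df['Close'].ewm(span=12, adjust=False).mean()
                          - df['Close'].ewm(span=26, adjust=False).mean())
    return indicators.dropna().reset_index(drop=True)
```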
Defining the Target Variable
The next chronological step is to define the target variable(s). In our case, we'll define a single target variable: the close-to-close 5-day percent return. Let's see what this means. Suppose today is a Monday, and there are no market holidays, barring the weekends, this week. Consider the percent change of tomorrow's (Tuesday's) closing price over today's closing price; this would be the close-to-close 1-day percent return. At Wednesday's close, it would be the 2-day percent return, and so on, until the following Monday, when it would be the 5-day percent return. Here's the Python implementation for the same:
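Assuming the downloaded dataframe is named 'data', a one-liner does the job:

```python
# Close-to-close 5-day percent return, shifted back by five rows so that
# each row holds the return realised over the NEXT five trading days
data['Target'] = data['Close'].pct_change(5).shift(-5)
```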
Why do we use shift(-5) here? Suppose the 5-day percent return, based on the closing price of the following Monday over today's closing price, is 1.2%. By using shift(-5), we place this value of 1.2% in the row for today's OHLC price levels, volume, and other technical indicators. Thus, when we feed the data to the ML model for training, it learns by treating the technical indicators as predictors and the value of 1.2% in the same row as the target variable.
Walk-Forward Optimisation with PCA and VIF
One essential consideration while training ML models is ensuring that they display robust generalisation. This means that the model should be able to extrapolate its performance on the training dataset (known as in-sample data) to the test dataset (known as out-of-sample data), and its good (or otherwise) performance should be attributable primarily to the inherent nature of the data and the model rather than to chance.
One approach towards this is combinatorial purged cross-validation with embargoing. You can read this to learn more.
Another approach is walk-forward optimisation, which is what we'll use (read more: 1 2).
Another essential consideration while building an ML pipeline is feature extraction. In our case, we have a total of 21 predictors. We need to extract the most important ones, and for this, we'll use Principal Component Analysis and the Variance Inflation Factor. The former extracts the top 4 (a value I chose to work with; you can change it and see how the backtest changes) combinations of features that explain the most variance within the dataset, while the latter addresses mutual information, also known as multicollinearity.
Here's the Python implementation of a function that does the above:
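The sketch below is one possible implementation. The ordering (drop high-VIF predictors first, then project the survivors onto the top four principal components) and the VIF threshold of 10 are my assumptions; the downloadable notebook may differ in detail.

```python
def extract_features(X_train, X_test, n_components=4, vif_threshold=10.0):
    """Reduce multicollinearity via the VIF, then compress with PCA."""
    X_train = X_train.copy()
    # Iteratively drop the predictor with the highest VIF until all
    # remaining VIFs fall below the threshold
    while X_train.shape[1] > n_components:
        vifs = pd.Series(
            [variance_inflation_factor(X_train.values, i)
             for i in range(X_train.shape[1])],
            index=X_train.columns)
        if vifs.max() <= vif_threshold:
            break
        X_train = X_train.drop(columns=[vifs.idxmax()])
    X_test = X_test[X_train.columns]

    # Standardise, then keep the top principal components
    scaler = StandardScaler().fit(X_train)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_train))
    return (pca.transform(scaler.transform(X_train)),
            pca.transform(scaler.transform(X_test)))
```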
Trading Strategy Formulation, Backtesting, and Evaluation
We now come to the meaty part: the strategy formulation. Here's the strategy outline:
Initial capital: ₹10,000.
Capital to be deployed per trade: 20% of the initial capital (₹2,000 in our case).
Long condition: when the 5-day close-to-close percent return prediction is positive.
Short condition: when the 5-day close-to-close percent return prediction is negative.
Entry point: open of day (N+1). Thus, if today is a Monday and the prediction for the 5-day close-to-close percent return is positive today, I'll go long at Tuesday's open; otherwise, I'll go short at Tuesday's open.
Exit point: close of day (N+5). Thus, when I get a positive (negative) prediction today and go long (short) at Tuesday's open, I'll square off at the closing price of the following Monday (provided there are no market holidays in between).
Capital compounding: no. This means that our profits (losses) from each trade don't get added to (subtracted from) the tradable capital, which stays fixed at ₹10,000.
Here's the Python code for this strategy:
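What follows is a vectorised sketch of the backtest under these rules. It assumes a dataframe with 'Date', 'Open', 'Close', and a 'Predicted' column holding the model's 5-day return forecasts; fractional position sizes and zero transaction costs are implicit simplifications that we revisit towards the end.

```python
def backtest_strategy(df, initial_capital=10_000, allocation=0.2):
    """Trade each daily signal from the next open to the close 5 days on."""
    capital_per_trade = allocation * initial_capital  # fixed, no compounding
    trades = []
    for i in range(len(df) - 5):
        direction = 1 if df['Predicted'].iloc[i] > 0 else -1  # long / short
        entry_price = df['Open'].iloc[i + 1]   # open of day N+1
        exit_price = df['Close'].iloc[i + 5]   # close of day N+5
        trade_return = direction * (exit_price - entry_price) / entry_price
        trades.append({
            'entry_date': df['Date'].iloc[i + 1],
            'exit_date': df['Date'].iloc[i + 5],
            'direction': 'long' if direction == 1 else 'short',
            'return': trade_return,
            'pnl': capital_per_trade * trade_return,
        })
    return pd.DataFrame(trades)
```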
Next, we define the functions to evaluate the Sharpe ratio and maximum drawdown of the strategy and of a buy-and-hold approach.
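Here are sketches of the two evaluation helpers, assuming a series of daily returns and an equity-curve series. The Sharpe ratio is annualised over 252 trading days, and the risk-free rate is taken as an annual percentage (matching the prompt you'll see later).

```python
def sharpe_ratio(daily_returns, risk_free_rate_pct):
    """Annualised Sharpe ratio from daily returns."""
    excess = daily_returns - (risk_free_rate_pct / 100) / 252
    return np.sqrt(252) * excess.mean() / excess.std()

def max_drawdown(equity_curve):
    """Largest peak-to-trough fall of an equity curve, as a fraction."""
    drawdown = equity_curve / equity_curve.cummax() - 1
    return drawdown.min()
```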
Calling the Functions Defined Previously
Now, we begin calling some of the functions defined above.
We'll start by downloading the data using the yfinance API. The ticker and period are user-driven; when running this code, you'll be prompted to enter them. I chose to work with 10 years of daily data for the NIFTY 50, the broad market index of the National Stock Exchange (NSE) of India. You can choose a shorter timeframe; the longer the timeframe, the longer the subsequent code will take to run. After downloading the data, we'll create the technical indicators by calling the 'create_technical_indicators' function we defined previously.
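The driver code can be as simple as the following sketch (the prompt strings match the output shown below; the helper names are the ones sketched earlier):

```python
ticker = input('Enter a valid yfinance API ticker: ')
period = input('Enter the number of years for downloading data '
               '(e.g., 1y, 2y, 5y, 10y): ')

data = download_data(ticker, period)
indicators = create_technical_indicators(data)
```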
Here's the output of the above code:
```
Enter a valid yfinance API ticker: ^NSEI
Enter the number of years for downloading data (e.g., 1y, 2y, 5y, 10y): 10y
YF.download() has changed argument auto_adjust default to True
[*********************100%***********************]  1 of 1 completed
```
Next, we align the data:
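Here's a minimal alignment sketch, assuming 'data' already carries the 'Target' column from earlier (whose last five rows are NaN because of the forward shift):

```python
# Trim the indicator warm-up rows from the price data, join the two
# frames, and drop the rows lost to the 5-day forward-shifted target
offset = len(data) - len(indicators)
data_merged = pd.concat(
    [data.iloc[offset:].reset_index(drop=True), indicators], axis=1)
data_merged = data_merged.dropna().reset_index(drop=True)
```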
Let's check the two dataframes, 'indicators' and 'data_merged'.
```
RangeIndex: 2443 entries, 0 to 2442
Data columns (total 21 columns):
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   sma_5            2443 non-null   float64
 1   sma_10           2443 non-null   float64
 2   ema_5            2443 non-null   float64
 3   ema_10           2443 non-null   float64
 4   momentum_5       2443 non-null   float64
 5   momentum_10      2443 non-null   float64
 6   roc_5            2443 non-null   float64
 7   roc_10           2443 non-null   float64
 8   std_5            2443 non-null   float64
 9   std_10           2443 non-null   float64
 10  rsi_14           2443 non-null   float64
 11  vwap             2443 non-null   float64
 12  obv              2443 non-null   int64
 13  adx_14           2443 non-null   float64
 14  atr_14           2443 non-null   float64
 15  bollinger_upper  2443 non-null   float64
 16  bollinger_lower  2443 non-null   float64
 17  macd             2443 non-null   float64
 18  cci_20           2443 non-null   float64
 19  williams_r       2443 non-null   float64
 20  stochastic_k     2443 non-null   float64
dtypes: float64(20), int64(1)
memory usage: 400.9 KB
```
```
Index: 2438 entries, 0 to 2437
Data columns (total 28 columns):
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   Date             2438 non-null   datetime64[ns]
 1   Close            2438 non-null   float64
 2   High             2438 non-null   float64
 3   Low              2438 non-null   float64
 4   Open             2438 non-null   float64
 5   Volume           2438 non-null   int64
 6   sma_5            2438 non-null   float64
 7   sma_10           2438 non-null   float64
 8   ema_5            2438 non-null   float64
 9   ema_10           2438 non-null   float64
 10  momentum_5       2438 non-null   float64
 11  momentum_10      2438 non-null   float64
 12  roc_5            2438 non-null   float64
 13  roc_10           2438 non-null   float64
 14  std_5            2438 non-null   float64
 15  std_10           2438 non-null   float64
 16  rsi_14           2438 non-null   float64
 17  vwap             2438 non-null   float64
 18  obv              2438 non-null   int64
 19  adx_14           2438 non-null   float64
 20  atr_14           2438 non-null   float64
 21  bollinger_upper  2438 non-null   float64
 22  bollinger_lower  2438 non-null   float64
 23  macd             2438 non-null   float64
 24  cci_20           2438 non-null   float64
 25  williams_r       2438 non-null   float64
 26  stochastic_k     2438 non-null   float64
 27  Target           2438 non-null   float64
dtypes: datetime64[ns](1), float64(25), int64(2)
memory usage: 552.4 KB
```
The dataframe 'indicators' contains all 21 technical indicators mentioned earlier.
Bias-Variance Decomposition
Now, the primary purpose of this blog is to demonstrate how the bias-variance decomposition can assist in developing an ML-based trading strategy. Of course, we aren't limiting ourselves just to that; we're also learning the entire pipeline of creating and backtesting an ML-based strategy with robustness. But let's talk about the bias-variance decomposition now.
We begin by defining six different regression models:
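Here's a sketch of the six regressors, keyed by the names you'll see in the results table; the hyperparameters are representative defaults, not tuned values.

```python
models = {
    'LinearRegression': LinearRegression(),
    'Ridge': Ridge(alpha=1.0),
    'DecisionTree': DecisionTreeRegressor(random_state=42),
    'Bagging': BaggingRegressor(n_estimators=100, random_state=42),
    'RandomForest': RandomForestRegressor(n_estimators=100, random_state=42),
    'GradientBoosting': GradientBoostingRegressor(n_estimators=100,
                                                  random_state=42),
}
```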
You can add more models or remove a couple from the above list. The more regressor models there are, the longer the subsequent code will take to run. Reducing the number of estimators in the relevant models will also result in faster execution.
In case you're wondering why I chose regressor models, it's because the nature of our target variable is continuous, not discrete. Although our trading strategy depends on the direction of the prediction (bullish or bearish), we're training the model to predict the 5-day return, a continuous random variable, rather than the market direction, which is a categorical variable.
After defining the models, we define a function for the bias-variance decomposition:
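A thin wrapper over mlxtend's bias_variance_decomp does the job (see the repository linked below); the inputs are expected as NumPy arrays.

```python
def decompose_bias_variance(model, X_train, y_train, X_test, y_test,
                            num_rounds=100):
    """Return total error, bias, variance, and the irreducible remainder."""
    avg_loss, avg_bias, avg_var = bias_variance_decomp(
        model, X_train, y_train, X_test, y_test,
        loss='mse', num_rounds=num_rounds, random_seed=42)
    # Whatever is left after subtracting bias and variance from the total
    # error is the irreducible error
    return avg_loss, avg_bias, avg_var, avg_loss - avg_bias - avg_var
```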
You can decrease the value of num_rounds to, say, 10, to make the following code run faster. However, a higher value gives a more robust estimate.
This is a good repository to refer to for the above code:
https://rasbt.github.io/mlxtend/user_guide/consider/bias_variance_decomp/
Finally, we run the bias-variance decomposition:
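Something along the following lines works, assuming X_train/X_test hold the extracted (PCA-reduced) predictors and y_train/y_test the target, all as NumPy arrays:

```python
results = {}
for name, model in models.items():
    # Each entry: (total error, bias, variance, irreducible error)
    results[name] = decompose_bias_variance(
        model, X_train, y_train, X_test, y_test, num_rounds=100)

results_df = pd.DataFrame(
    results,
    index=['Total Error', 'Bias', 'Variance', 'Irreducible Error']).T
print('Bias-Variance Decomposition for All Models:')
print(results_df)
```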
The output of this code is:
```
Bias-Variance Decomposition for All Models:
                  Total Error      Bias  Variance  Irreducible Error
LinearRegression     0.000773  0.000749  0.000024      -2.270048e-19
Ridge                0.000763  0.000743  0.000021       1.016440e-19
DecisionTree         0.000953  0.000585  0.000368      -2.710505e-19
Bagging              0.000605  0.000580  0.000025       7.792703e-20
RandomForest         0.000605  0.000580  0.000025       1.287490e-19
GradientBoosting     0.000536  0.000459  0.000077       9.486769e-20
```
Let's analyse the above table. We want to choose a model that balances bias and variance, meaning it neither underfits nor overfits. The decision tree regressor balances bias and variance best among all six models.
However, its total error is the highest. Bagging and RandomForest display similar total errors. GradientBoosting displays not just the lowest total error but also a higher degree of variance compared to Bagging and RandomForest; thus, its ability to generalise to unseen data should be better than that of the other two, since it would capture more complex patterns.
You might be tempted to think that, with values this close, such in-depth analysis isn't apt owing to a high noise-to-signal ratio. However, since we're running 100 rounds of the bias-variance decomposition, we can be confident in the resulting noise mitigation.
Long story cut short, we'll choose to train the GradientBoosting regressor and use it to predict the target variable. You can, of course, change the model and see how the strategy performs under the new one. Please note that we're treating the ML models as black boxes here, as exploring their underlying mechanisms is beyond the scope of this blog. However, when using ML models for any use case, we should always be aware of their inner workings and choose accordingly.
Having said all of the above, is there a way of reducing the errors of some or all of the above regressor models? Yes, and it's not a technique, but an integral part of working with time series. Let's discuss it.
Stationarising the Inputs
We're working with time series data (read more), and when performing financial modeling tasks, we need to check for stationarity (read more). In our case, we should check our input variables (the predictors) for stationarity, and apply differencing to the predictors that need it (read more).
Here's the code:
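Here's a sketch of the check: an Augmented Dickey-Fuller test on each predictor at the 5% significance level, collecting the ones that fail.

```python
def check_stationarity(df, predictors, significance=0.05):
    """ADF-test each predictor; return the ones that need differencing."""
    needs_differencing = []
    for col in predictors:
        p_value = adfuller(df[col].dropna())[1]
        status = 'stationary' if p_value < significance else 'NOT stationary'
        print(f'{col}: p-value = {p_value:.4f} ({status})')
        if p_value >= significance:
            needs_differencing.append(col)
    return needs_differencing
```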
Here's a snapshot of the output of the above code:

The above output indicates that 13 predictor variables don't require stationarisation, while 8 do. Let's stationarise them, as in the sketch below.
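First-order differencing of the flagged predictors is enough here; 'non_stationary' is the list returned by the check sketched above.

```python
non_stationary = check_stationarity(data_merged, indicators.columns)
# Replace each non-stationary predictor with its first difference
data_merged[non_stationary] = data_merged[non_stationary].diff()
data_merged = data_merged.dropna().reset_index(drop=True)
```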
Let's verify whether the stationarising was done as expected:

Yup, done!
Let's align the data again:
Let's check the bias-variance decomposition of the models with the stationarised predictors:
Here's the output:
```
Bias-Variance Decomposition for All Models with Stationarised Predictors:
                  Total Error      Bias  Variance  Irreducible Error
LinearRegression     0.000384  0.000369  0.000015       5.421011e-20
Ridge                0.000386  0.000373  0.000013      -3.726945e-20
DecisionTree         0.000888  0.000341  0.000546       2.168404e-19
Bagging              0.000362  0.000338  0.000024      -1.151965e-19
RandomForest         0.000363  0.000338  0.000024       7.453890e-20
GradientBoosting     0.000358  0.000324  0.000034      -3.388132e-20
```
There you go. Just by following Time Series 101, we could reduce the errors of all the models. For the same reason we discussed earlier, we'll choose to run the prediction and the backtest using the GradientBoosting regressor.
Running a Prediction Using the Chosen Model
Next, we run a walk-forward prediction using the chosen model:
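Here's a sketch of the loop: refit the model on a rolling in-sample window, predict the next observation out-of-sample, and roll forward. The 500-day window is an illustrative choice, and the feature extraction is refit inside every window to avoid lookahead bias.

```python
def walk_forward_predict(X, y, train_window=500, step=1):
    """Rolling-window walk-forward prediction with the chosen regressor."""
    predictions = pd.Series(index=y.index, dtype=float)
    for start in range(0, len(y) - train_window, step):
        end = start + train_window
        # Reduce the in-sample predictors and apply the same transform
        # to the out-of-sample slice
        X_tr, X_te = extract_features(X.iloc[start:end],
                                      X.iloc[end:end + step])
        model = GradientBoostingRegressor(n_estimators=100, random_state=42)
        model.fit(X_tr, y.iloc[start:end])
        predictions.iloc[end:end + step] = model.predict(X_te)
    return predictions.dropna()
```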
Now, we create a dataframe, 'final_data', that contains only the open prices, close prices, actual/realised 5-day returns, and the 5-day returns predicted by the model. We need the open and close prices for entering and exiting trades, and the predicted 5-day returns to determine the direction in which we take trades. We then call the 'backtest_strategy' function on this dataframe.
Checking the Trade Logs
The dataframe 'trades_df_differenced' contains the trade logs.
We'll round off the values in the dataframe for better readability:
Let's check the dataframe 'trades_df_differenced' now:
Here's a snapshot of the output of this code:

From the table above, it's apparent that we take a new trade every day and deploy 20% of our tradeable capital on each trade.
Equity Curves, Sharpe, Drawdown, Hit Ratio, Returns Distribution, Average Returns per Trade, and CAGR
Let's calculate the equity for the strategy and the buy-and-hold approach:
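A sketch under the no-compounding rule: the strategy equity is the fixed capital plus the cumulative PnL of the trade log, while buy-and-hold scales the same capital by the price of the underlying.

```python
initial_capital = 10_000
strategy_equity = initial_capital + trades_df_differenced['pnl'].cumsum()
buy_hold_equity = (initial_capital
                   * final_data['Close'] / final_data['Close'].iloc[0])
```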
Next, we calculate the Sharpe ratios and the maximum drawdowns:
The above code requires you to enter the risk-free rate of your choice. It's typically the government treasury yield; you can look it up online for your geography. I chose to work with a value of 6.6:

```
Enter the risk-free rate (e.g., for 5.3%, enter only 5.3): 6.6
```

Now, we'll reindex the dataframes to a datetime index.
We'll plot the equity curves next:
This is how the strategy and buy-and-hold equity curves look when plotted on the same chart:

The strategy equity and the underlying move almost in tandem, with the strategy underperforming before the COVID-19 pandemic and outperforming afterward. Towards the end, we'll discuss some realistic considerations around this relative performance.
Let's take a look at the drawdowns of the strategy and the buy-and-hold approach:

Let's look at the Sharpe ratios and the maximum drawdowns by calling the respective functions we defined earlier:
Output:
```
Sharpe Ratio (Strategy with Stationarised Predictors): 0.89
Sharpe Ratio (Buy & Hold): 0.42
Max Drawdown (Strategy with Stationarised Predictors): -11.28%
Max Drawdown (Buy & Hold): -38.44%
```
Here's the hit ratio:
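The hit ratio is simply the share of trades that closed with a positive return; a minimal sketch:

```python
hit_ratio = (trades_df_differenced['return'] > 0).mean() * 100
print(f'Hit Ratio of Strategy with Stationarised Predictors: '
      f'{hit_ratio:.2f}%')
```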
```
Hit Ratio of Strategy with Stationarised Predictors: 54.09%
```
This is how the distribution of the strategy returns looks:

Finally, let's calculate the average profits (losses) per winning (losing) trade:
```
Average Profit for Profitable Trades with Stationarised Predictors: 0.0171
Average Loss for Loss-Making Trades with Stationarised Predictors: -0.0146
```
Based on the above trade metrics, we profit more on average per trade than we lose. Also, the number of positive trades exceeds the number of negative trades. Therefore, our strategy is safe on both fronts. The maximum drawdown of the strategy is limited to 11.28%.
The reason: the holding period for any trade is 5 days, and we use only 20% of our available capital per trade. This also reduces the upside potential per trade. However, since the average profit per profitable trade is higher than the average loss per loss-making trade, and the number of profitable trades exceeds the number of loss-making trades, the chances of capturing more upside are higher than those of capturing more downside.
Let's calculate the compounded annual growth rate (CAGR):
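A sketch from the equity curves, assuming 'years' is the length of the backtest in years (about 10 here):

```python
years = 10
cagr_strategy = (strategy_equity.iloc[-1] / initial_capital) ** (1 / years) - 1
cagr_buy_hold = (buy_hold_equity.iloc[-1] / initial_capital) ** (1 / years) - 1
print(f'CAGR (Buy & Hold): {cagr_buy_hold:.4%}')
print(f'CAGR (Strategy with Stationarised Predictors): {cagr_strategy:.4%}')
```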
```
CAGR (Buy & Hold): 13.0078%
CAGR (Strategy with Stationarised Predictors): 13.3382%
```
Finally, we'll evaluate the regressor model's accuracy, precision, recall, and F1 scores (read more).
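Since the strategy only uses the sign of the forecast, we can score the model as a direction classifier: binarise the realised and predicted 5-day returns and apply the standard sklearn metrics. The 'Actual' column name is an assumption for the realised-return column of 'final_data'.

```python
from sklearn.metrics import (confusion_matrix, accuracy_score, recall_score,
                             precision_score, f1_score)

y_true = (final_data['Actual'] > 0).astype(int)     # realised direction
y_pred = (final_data['Predicted'] > 0).astype(int)  # predicted direction

print('Confusion Matrix (Stationarised Predictors):')
print(confusion_matrix(y_true, y_pred))
print(f'Accuracy (Stationarised Predictors): {accuracy_score(y_true, y_pred):.4f}')
print(f'Recall (Stationarised Predictors): {recall_score(y_true, y_pred):.4f}')
print(f'Precision (Stationarised Predictors): {precision_score(y_true, y_pred):.4f}')
print(f'F1-Score (Stationarised Predictors): {f1_score(y_true, y_pred):.4f}')
```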
```
Confusion Matrix (Stationarised Predictors):
[[387 508]
 [453 834]]
Accuracy (Stationarised Predictors): 0.5596
Recall (Stationarised Predictors): 0.6480
Precision (Stationarised Predictors): 0.6215
F1-Score (Stationarised Predictors): 0.6345
```
Some Realistic Considerations
Our strategy outperformed the underlying index during the post-COVID-19-crash period and marginally outperformed the overall market. However, if you're thinking of using the skeleton of this strategy to generate alpha, you'll need to peel off some assumptions and pay attention to some realistic considerations:
Transaction Costs: We enter and exit trades every day, as we saw earlier. This incurs transaction costs.
Asset Selection: We backtested using the broad market index, which isn't directly tradable. We'll need to choose ETFs or derivatives with this index as the underlying.
Slippage: We enter our trades at the market's open and exit at its close. Trading activity can be high during these periods, and we may encounter considerable slippage.
Availability of Partially Tradable Securities: Our backtest implicitly assumes the availability of fractional assets. For example, if our capital is ₹2,000 and the entry price is ₹20,000, we'd be able to buy or sell 0.1 units of the underlying, ignoring all other costs.
Taxes: Since we're entering and exiting trades within very short time frames, apart from transaction charges, we'd incur a significant amount of short-term capital gains tax (STCG) on the profits earned. This, of course, would depend on your local regulations.
Risk Management: In the backtest, we omitted stop-losses and take-profits. You're encouraged to try them out, see how the strategy changes, and let us know your findings.
Event-Driven Backtesting: The backtesting we performed above is vectorised. However, in real life, tomorrow comes only after today, and we must account for this when performing a backtest. You can explore Blueshift at https://blueshift.quantinsti.com/ and try backtesting the above strategy using an event-driven approach. An event-driven backtest would also account for slippage, transaction costs, implementation shortfalls, and risk management.
Strategy Performance: The hit ratio of the strategy and the model's accuracy are roughly 54% and 56%, respectively. These values are only marginally better than those of a coin toss. You should try this strategy on other asset classes and only select those assets for which these values are at least 60%. Only then should you perform an event-driven backtest using this strategy outline.
A Note on the Downloadable Python Notebook
The downloadable notebook includes backtesting the strategy and evaluating its performance and the model's performance parameters both in a scenario where the predictors are not stationarised and after stationarising them (as we saw above). In the former, the strategy significantly outperforms the underlying, and the model displays greater accuracy in its predictions despite the higher errors it displayed during the bias-variance decomposition. Thus, a well-performing model need not necessarily translate into a well-performing trading strategy, and vice versa.
The Sharpe ratio of the strategy without the predictors stationarised is 2.56, and the CAGR is almost 27% (versus 0.94 and 14%, respectively, when the predictors are stationarised). Since we used GradientBoosting, a tree-based model that doesn't necessarily need the predictor variables to be stationarised, we can work without stationarising the predictors and reap the benefits of the model's high performance on non-stationarised predictors.
Note that running the notebook will take some time. Also, the performance figures you obtain will differ a bit from what I've shown throughout the article.
There's no 'Good' in Goodbye…
…yet, I'll have to say it now 🙂. Try out the backtest with different assets and by altering some of the parameters mentioned in the blog, and let us know your findings. Also, as we always say, since we aren't a registered investment advisory, any strategy demonstrated as part of our content is for demonstrative, educational, and informational purposes only, and shouldn't be construed as trading or investment advice. However, if you're able to incorporate all the aforementioned realistic factors, extensively backtest and forward test the strategy (with or without some tweaks), generate significant alpha, and make substantial returns by deploying it in the markets, do share the good news with us in a comment below. We'll be glad for your success 🙂. Until next time…
Credits
Jose Carlos Tanaka and Vivek Krishnamoorthy, thank you for your meticulous feedback; it helped shape this article!
Chainika Thakar, thank you for rendering this and making it available to the world!
Next Steps
After going through the above, you can follow several structured learning paths if you want to broaden and/or deepen your understanding of trading model performance, ML strategy development, and backtesting workflows.
To master each component of this strategy, from Python and PCA to stationarity and backtesting, explore topic-specific Quantra courses like:
For those aiming to consolidate all of this knowledge into a structured, mentor-led format, the Executive Programme in Algorithmic Trading (EPAT) offers an ideal next step. EPAT covers everything from Python and statistics to machine learning, time series modeling, backtesting, and performance metrics evaluation, equipping you to build and deploy robust, data-driven strategies at scale.
File in the download:
- Bias Variance Decomposition – Python notebook
Feel free to make changes to the code as per your comfort.
All investments and trading in the stock market involve risk. Any decision to place trades in the financial markets, including trading in stocks or options or other financial instruments, is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article are for informational purposes only.