Regression
This notebook presents example usage of the package for solving a regression problem on the methane dataset. You can download the training dataset here and the test dataset here.
This tutorial will cover topics such as:
- training a model
- changing model hyperparameters
- hyperparameter tuning
- calculating metrics for a model
- retrieving RuleKit's built-in rule set statistics
Summary of the dataset
[ ]:
import pandas as pd
from rulekit.arff import read_arff
BASE_DATASET_URL: str = (
'https://raw.githubusercontent.com/'
'adaa-polsl/RuleKit/master/data/methane/'
)
TRAIN_DATASET_URL: str = BASE_DATASET_URL + 'methane-train.arff'
TEST_DATASET_URL: str = BASE_DATASET_URL + 'methane-test.arff'
train_df = read_arff(TRAIN_DATASET_URL)
test_df = read_arff(TEST_DATASET_URL)
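read_arff returns ordinary pandas DataFrames, so the usual pandas tooling applies to the loaded data. The following cell is a small optional sketch for a quick sanity check of what was just downloaded; it only reuses train_df and test_df from the cell above:
[ ]:
# Quick sanity check of the loaded frames: shapes and the first few rows.
print(train_df.shape, test_df.shape)
train_df.head()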
Train file
[2]:
print("Train file overview:")
print(f"Name: methane-train")
print(f"Objects number: {train_df.shape[0]}; Attributes number: {train_df.shape[1]}")
print("Basic attribute statistics:")
train_df.describe()
Train file overview:
Name: methane-train
Objects number: 13368; Attributes number: 8
Basic attribute statistics:
[2]:
|  | MM31 | MM116 | AS038 | PG072 | PD | BA13 | DMM116 | MM116_pred |
|---|---|---|---|---|---|---|---|---|
| count | 13368.000000 | 13368.000000 | 13368.000000 | 13368.000000 | 13368.000000 | 13368.000000 | 13368.000000 | 13368.00000 |
| mean | 0.363960 | 0.775007 | 2.294734 | 1.835600 | 0.308573 | 1073.443372 | -0.000007 | 0.79825 |
| std | 0.117105 | 0.269366 | 0.142504 | 0.106681 | 0.461922 | 3.162811 | 0.043566 | 0.28649 |
| min | 0.170000 | 0.200000 | 1.400000 | 1.100000 | 0.000000 | 1067.000000 | -1.800000 | 0.20000 |
| 25% | 0.260000 | 0.500000 | 2.300000 | 1.800000 | 0.000000 | 1070.000000 | 0.000000 | 0.50000 |
| 50% | 0.360000 | 0.800000 | 2.300000 | 1.800000 | 0.000000 | 1075.000000 | 0.000000 | 0.80000 |
| 75% | 0.450000 | 1.000000 | 2.400000 | 1.900000 | 1.000000 | 1076.000000 | 0.000000 | 1.00000 |
| max | 0.820000 | 2.200000 | 2.700000 | 2.600000 | 1.000000 | 1078.000000 | 0.800000 | 2.20000 |
Test file
[3]:
print("\nTest file overview:")
print(f"Name: methane-test")
print(f"Objects number: {test_df.shape[0]}; Attributes number: {test_df.shape[1]}")
print("Basic attribute statistics:")
test_df.describe()
Test file overview:
Name: methane-test
Objects number: 5728; Attributes number: 8
Basic attribute statistics:
[3]:
|  | MM31 | MM116 | AS038 | PG072 | PD | BA13 | DMM116 | MM116_pred |
|---|---|---|---|---|---|---|---|---|
| count | 5728.000000 | 5728.000000 | 5728.000000 | 5728.000000 | 5728.000000 | 5728.000000 | 5728.000000 | 5728.000000 |
| mean | 0.556652 | 1.006913 | 2.236627 | 1.819239 | 0.538408 | 1072.691690 | -0.000017 | 1.042458 |
| std | 0.114682 | 0.167983 | 0.104913 | 0.078865 | 0.498566 | 2.799559 | 0.046849 | 0.171393 |
| min | 0.350000 | 0.500000 | 1.800000 | 1.600000 | 0.000000 | 1067.000000 | -0.400000 | 0.600000 |
| 25% | 0.460000 | 0.900000 | 2.200000 | 1.800000 | 0.000000 | 1071.000000 | 0.000000 | 0.900000 |
| 50% | 0.550000 | 1.000000 | 2.200000 | 1.800000 | 1.000000 | 1073.000000 | 0.000000 | 1.000000 |
| 75% | 0.640000 | 1.100000 | 2.300000 | 1.900000 | 1.000000 | 1075.000000 | 0.000000 | 1.200000 |
| max | 0.980000 | 1.600000 | 2.700000 | 2.100000 | 1.000000 | 1078.000000 | 0.300000 | 1.600000 |
Helper functions for calculating metrics
[4]:
from math import sqrt

import numpy as np
import pandas as pd
from sklearn import metrics


def get_regression_metrics(measure: str, y_pred, y_true) -> pd.DataFrame:
    """Compute a set of regression error metrics and return them as a single-row DataFrame."""
    relative_error = 0
    squared_relative_error = 0
    relative_error_lenient = 0
    relative_error_strict = 0
    nae_denominator = 0
    avg = sum(y_true) / len(y_true)  # mean of the true values
    for i in range(0, len(y_pred)):
        true = y_true[i]
        predicted = y_pred[i]
        relative_error += abs((true - predicted) / true)
        squared_relative_error += (
            abs((true - predicted) / true) *
            abs((true - predicted) / true)
        )
        relative_error_lenient += abs((true - predicted) / max(true, predicted))
        relative_error_strict += abs((true - predicted) / min(true, predicted))
        nae_denominator += abs(avg - true)
    # average the accumulated per-example errors
    relative_error /= len(y_pred)
    squared_relative_error /= len(y_pred)
    relative_error_lenient /= len(y_pred)
    relative_error_strict /= len(y_pred)
    nae_denominator /= len(y_pred)
    # the mean of the 2x2 matrix returned by np.corrcoef is used here as the correlation value
    correlation = np.mean(np.corrcoef(y_true, y_pred))
    dictionary = {
        'Measure': measure,
        'absolute_error': metrics.mean_absolute_error(y_true, y_pred),
        'relative_error': relative_error,
        'relative_error_lenient': relative_error_lenient,
        'relative_error_strict': relative_error_strict,
        'normalized_absolute_error': metrics.mean_absolute_error(y_true, y_pred) / nae_denominator,
        'squared_error': metrics.mean_squared_error(y_true, y_pred),
        'root_mean_squared_error': metrics.mean_squared_error(y_true, y_pred, squared=False),
        'root_relative_squared_error': sqrt(squared_relative_error),
        'correlation': correlation,
        'squared_correlation': np.power(correlation, 2),
    }
    return pd.DataFrame.from_records([dictionary], index='Measure')


def get_ruleset_stats(measure: str, model) -> pd.DataFrame:
    """Collect the induction parameters and rule set statistics of a model into a single row."""
    tmp = model.parameters.__dict__
    del tmp['_java_object']  # drop the internal Java handle, keep only the plain parameters
    return pd.DataFrame.from_records(
        [{'Measure': measure, **tmp, **model.stats.__dict__}],
        index='Measure'
    )
Rule induction on training dataset
[5]:
X_train: pd.DataFrame = train_df.drop(['MM116_pred'], axis=1)
y_train: pd.Series = train_df['MM116_pred']
[ ]:
from rulekit.regression import RuleRegressor
from rulekit.rules import RuleSet, RegressionRule
from rulekit.params import Measures

# C2
c2_reg = RuleRegressor(
    induction_measure=Measures.C2,
    pruning_measure=Measures.C2,
    voting_measure=Measures.C2,
)
c2_reg.fit(X_train, y_train)
c2_ruleset: RuleSet[RegressionRule] = c2_reg.model
predictions: np.ndarray = c2_reg.predict(X_train)
regression_metrics = get_regression_metrics('C2', predictions, y_train)
ruleset_stats = get_ruleset_stats('C2', c2_ruleset)

# Correlation
corr_reg = RuleRegressor(
    induction_measure=Measures.Correlation,
    pruning_measure=Measures.Correlation,
    voting_measure=Measures.Correlation,
    mean_based_regression=True
)
corr_reg.fit(X_train, y_train)
corr_ruleset: RuleSet[RegressionRule] = corr_reg.model
predictions: np.ndarray = corr_reg.predict(X_train)
tmp = get_regression_metrics('Correlation', predictions, y_train)
regression_metrics = pd.concat([regression_metrics, tmp])
ruleset_stats = pd.concat([ruleset_stats, get_ruleset_stats('Correlation', corr_ruleset)])

# RSS
rss_reg = RuleRegressor(
    induction_measure=Measures.RSS,
    pruning_measure=Measures.RSS,
    voting_measure=Measures.RSS,
    mean_based_regression=True
)
rss_reg.fit(X_train, y_train)
rss_ruleset: RuleSet[RegressionRule] = rss_reg.model
predictions: np.ndarray = rss_reg.predict(X_train)
tmp = get_regression_metrics('RSS', predictions, y_train)
regression_metrics = pd.concat([regression_metrics, tmp])
ruleset_stats = pd.concat([ruleset_stats, get_ruleset_stats('RSS', rss_ruleset)])

display(ruleset_stats)
display(regression_metrics)
| Measure | minimum_covered | maximum_uncovered_fraction | ignore_missing | pruning_enabled | max_growing_condition | time_total_s | time_growing_s | time_pruning_s | rules_count | conditions_per_rule | induced_conditions_per_rule | avg_rule_coverage | avg_rule_precision | avg_rule_quality | pvalue | FDR_pvalue | FWER_pvalue | fraction_significant | fraction_FDR_significant | fraction_FWER_significant |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| C2 | 0.05 | 0.0 | False | True | 0.0 | 27.670397 | 1.978997 | 25.637769 | 11 | 3.272727 | 33.727273 | 0.345683 | 0.874767 | 0.732356 | 1.612636e-177 | 1.612636e-177 | 1.612636e-177 | 1.0 | 1.0 | 1.0 |
| Correlation | 0.05 | 0.0 | False | True | 0.0 | 17.739791 | 0.836199 | 16.871719 | 7 | 2.714286 | 35.285714 | 0.334990 | 0.862965 | 0.800819 | 3.046280e-37 | 3.046280e-37 | 3.046280e-37 | 1.0 | 1.0 | 1.0 |
| RSS | 0.05 | 0.0 | False | True | 0.0 | 34.929544 | 1.020750 | 33.894867 | 6 | 2.333333 | 38.166667 | 0.417440 | 0.855115 | 0.786208 | 6.242568e-40 | 6.242568e-40 | 6.242568e-40 | 1.0 | 1.0 | 1.0 |
| Measure | absolute_error | relative_error | relative_error_lenient | relative_error_strict | normalized_absolute_error | squared_error | root_mean_squared_error | root_relative_squared_error | correlation | squared_correlation |
|---|---|---|---|---|---|---|---|---|---|---|
| C2 | 0.089929 | 0.114526 | 0.101069 | 0.125935 | 0.382694 | 0.019753 | 0.140547 | 0.167429 | 0.937881 | 0.879620 |
| Correlation | 0.088561 | 0.112319 | 0.099635 | 0.125846 | 0.376872 | 0.020912 | 0.144609 | 0.184988 | 0.941044 | 0.885563 |
| RSS | 0.092552 | 0.111375 | 0.102026 | 0.124544 | 0.393860 | 0.020544 | 0.143331 | 0.153866 | 0.945779 | 0.894498 |
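C2, Correlation and RSS are just three of the quality measures exposed by the Measures enum; the same training pattern works for any of them. The cell below is a small sketch that lists the available options, assuming Measures is a standard Python Enum (as its repr in the tuning output later in this notebook suggests):
[ ]:
# List every quality measure available in the Measures enum (assumes a standard Enum).
print([m.name for m in Measures])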
Rules generated with the C2 measure
[7]:
for rule in c2_ruleset.rules:
print(rule)
IF MM116 = <0.35, 0.45) AND MM31 = (-inf, 0.24) AND DMM116 = <-0.05, inf) THEN MM116_pred = {0.40} [0.39,0.42]
IF MM116 = (-inf, 0.55) AND DMM116 = <-0.05, inf) THEN MM116_pred = {0.45} [0.39,0.52]
IF MM31 = <0.19, 0.30) AND MM116 = (-inf, 0.95) AND AS038 = (-inf, 2.45) AND PG072 = <1.55, inf) AND DMM116 = (-inf, 0.15) THEN MM116_pred = {0.50} [0.38,0.61]
IF MM116 = <1.05, 1.35) AND MM31 = <0.28, inf) THEN MM116_pred = {1.19} [1.08,1.31]
IF MM116 = <0.95, 1.25) AND DMM116 = (-inf, 0.40) THEN MM116_pred = {1.11} [0.99,1.22]
IF MM116 = <0.85, 1.15) AND DMM116 = <-0.35, 0.25) THEN MM116_pred = {1.00} [0.89,1.12]
IF MM31 = (-inf, 0.34) AND MM116 = (-inf, 0.85) AND DMM116 = <-0.05, inf) THEN MM116_pred = {0.53} [0.39,0.66]
IF MM31 = <0.18, 0.37) AND AS038 = <2.15, 2.55) AND DMM116 = <-0.15, 0.05) AND MM116 = <0.25, 0.85) AND PG072 = <1.55, inf) AND BA13 = <1070.50, inf) THEN MM116_pred = {0.55} [0.40,0.70]
IF MM116 = <0.75, 1.05) AND DMM116 = <-0.15, 0.15) AND PG072 = (-inf, 2.05) AND MM31 = (-inf, 0.53) AND BA13 = (-inf, 1073.50) THEN MM116_pred = {0.91} [0.80,1.02]
IF MM116 = <0.65, 1.45) AND MM31 = (-inf, 0.67) AND DMM116 = <-0.35, 0.25) AND PG072 = (-inf, 2.35) THEN MM116_pred = {0.96} [0.78,1.14]
IF MM31 = <0.28, 0.76) AND PG072 = (-inf, 2.35) THEN MM116_pred = {0.93} [0.70,1.16]
Rules generated with the Correlation measure
[8]:
for rule in corr_ruleset.rules:
print(rule)
IF MM116 = (-inf, 0.45) AND MM31 = <0.18, 0.24) AND DMM116 = <-0.05, inf) THEN MM116_pred = {0.40} [0.38,0.42]
IF MM116 = (-inf, 0.55) AND MM31 = (-inf, 0.32) THEN MM116_pred = {0.45} [0.39,0.51]
IF MM31 = <0.18, 0.31) AND MM116 = (-inf, 0.85) AND AS038 = (-inf, 2.55) AND PG072 = <1.55, inf) AND DMM116 = <-0.30, 0.15) THEN MM116_pred = {0.50} [0.39,0.60]
IF MM116 = <1.05, 1.35) THEN MM116_pred = {1.19} [1.08,1.31]
IF MM116 = <0.85, 1.15) AND DMM116 = <-0.35, inf) THEN MM116_pred = {1.00} [0.89,1.12]
IF MM116 = <0.45, 0.85) AND DMM116 = <-0.15, inf) AND PG072 = <1.55, inf) AND MM31 = <0.31, inf) THEN MM116_pred = {0.77} [0.66,0.88]
IF MM31 = <0.23, inf) AND PG072 = (-inf, 2.35) THEN MM116_pred = {0.85} [0.59,1.11]
Rules generated with the RSS measure
[9]:
for rule in rss_ruleset.rules:
print(rule)
IF MM116 = (-inf, 0.45) AND MM31 = <0.18, 0.25) AND PG072 = (-inf, 2.05) THEN MM116_pred = {0.40} [0.38,0.43]
IF MM116 = (-inf, 0.55) AND DMM116 = <-0.15, inf) THEN MM116_pred = {0.45} [0.39,0.52]
IF MM116 = <0.45, 0.75) THEN MM116_pred = {0.60} [0.49,0.71]
IF DMM116 = <-0.35, inf) AND MM31 = <0.31, inf) AND MM116 = (-inf, 1.05) THEN MM116_pred = {0.87} [0.72,1.02]
IF MM116 = <0.85, 1.45) AND DMM116 = <-0.50, inf) THEN MM116_pred = {1.05} [0.90,1.21]
IF MM31 = <0.23, inf) AND MM116 = <0.25, inf) AND PG072 = (-inf, 2.35) THEN MM116_pred = {0.85} [0.59,1.11]
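The three measures produce rule sets of noticeably different sizes. The following optional sketch compares them side by side, reusing the rulesets built above and assuming the rules attribute is a list-like collection (as the iteration above suggests):
[ ]:
# Compare the size of each induced rule set.
for name, rs in [('C2', c2_ruleset), ('Correlation', corr_ruleset), ('RSS', rss_ruleset)]:
    print(f'{name}: {len(list(rs.rules))} rules')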
Evaluation on a test set
[10]:
X_test = test_df.drop(['MM116_pred'], axis=1)
y_test = test_df['MM116_pred']
[11]:
# C2
c2_predictions = c2_reg.predict(X_test)
c2_regression_metrics = get_regression_metrics('C2', c2_predictions, y_test)
# Correlation
corr_predictions = corr_reg.predict(X_test)
corr_regression_metrics = get_regression_metrics('Correlation', corr_predictions, y_test)
# RSS
rss_predictions = rss_reg.predict(X_test)
rss_regression_metrics = get_regression_metrics('RSS', rss_predictions, y_test)
[12]:
display(pd.concat([c2_regression_metrics, corr_regression_metrics, rss_regression_metrics]))
| Measure | absolute_error | relative_error | relative_error_lenient | relative_error_strict | normalized_absolute_error | squared_error | root_mean_squared_error | root_relative_squared_error | correlation | squared_correlation |
|---|---|---|---|---|---|---|---|---|---|---|
| C2 | 0.107227 | 0.100574 | 0.094935 | 0.112747 | 0.739328 | 0.020326 | 0.142569 | 0.126236 | 0.835385 | 0.697868 |
| Correlation | 0.105350 | 0.091827 | 0.090950 | 0.109321 | 0.726385 | 0.021890 | 0.147951 | 0.119472 | 0.866898 | 0.751512 |
| RSS | 0.128302 | 0.113411 | 0.111947 | 0.134690 | 0.884639 | 0.027270 | 0.165136 | 0.134849 | 0.866442 | 0.750722 |
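Beyond the tabulated metrics, a predicted-vs-actual scatter plot often makes the differences between the models easier to see. This is a minimal sketch that reuses the test-set predictions computed above and assumes matplotlib is installed in the environment:
[ ]:
# Predicted vs. actual values on the test set for each measure (assumes matplotlib is available).
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(15, 4), sharex=True, sharey=True)
models = [('C2', c2_predictions), ('Correlation', corr_predictions), ('RSS', rss_predictions)]
for ax, (name, preds) in zip(axes, models):
    ax.scatter(y_test, preds, s=5, alpha=0.3)
    # red diagonal marks perfect predictions
    ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], color='red')
    ax.set_title(name)
    ax.set_xlabel('actual MM116_pred')
axes[0].set_ylabel('predicted MM116_pred')
plt.show()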
Hyperparameter tuning
This one is going to take a while…
[ ]:
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV

# define the model and parameters
model = RuleRegressor(mean_based_regression=True)
minsupp_new = range(5, 7)
measures_choice = [Measures.C2, Measures.Correlation, Measures.RSS]

# define grid search
grid = {
    'minsupp_new': minsupp_new,
    'induction_measure': measures_choice,
    'pruning_measure': measures_choice,
    'voting_measure': measures_choice
}
cv = KFold(n_splits=2)
# 'neg_root_mean_squared_error' returns negated RMSE, so scores closer to zero are better
grid_search = GridSearchCV(estimator=model, param_grid=grid, cv=cv, scoring='neg_root_mean_squared_error')
grid_result = grid_search.fit(X_train, y_train)

# summarize results
print("Best RMSE: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
Best RMSE: -0.191976 using {'induction_measure': <Measures.RSS: 'RSS'>, 'minsupp_new': 6, 'pruning_measure': <Measures.C2: 'C2'>, 'voting_measure': <Measures.C2: 'C2'>}
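Besides the single best configuration, the full cross-validation results can be inspected to see how close the other parameter combinations came. A short optional sketch using standard scikit-learn attributes of the fitted grid search:
[ ]:
# Inspect all evaluated parameter combinations, best first.
cv_results = pd.DataFrame(grid_result.cv_results_)
cv_results = cv_results.sort_values('rank_test_score')
display(cv_results[['params', 'mean_test_score', 'std_test_score', 'rank_test_score']].head())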
Prediction using the model selected during tuning
[18]:
reg: RuleRegressor = grid_result.best_estimator_
ruleset: RuleSet[RegressionRule] = reg.model
ruleset_stats = get_ruleset_stats('', ruleset)
Generated rules
[19]:
for rule in ruleset.rules:
print(rule)
IF MM31 = (-inf, 0.23) THEN MM116_pred = {0.40} [0.39,0.41]
IF MM116 = <0.35, 0.45) AND MM31 = (-inf, 0.24) AND DMM116 = <-0.05, inf) THEN MM116_pred = {0.40} [0.39,0.42]
IF MM116 = <0.35, 0.45) AND MM31 = (-inf, 0.24) THEN MM116_pred = {0.40} [0.38,0.42]
IF MM31 = <0.24, 0.25) AND AS038 = (-inf, 2.45) AND DMM116 = <-0.05, inf) AND PD = (-inf, 0.50) THEN MM116_pred = {0.50} [0.47,0.54]
IF MM116 = (-inf, 0.45) AND MM31 = <0.24, 0.25) AND PG072 = (-inf, 2.05) AND AS038 = (-inf, 2.45) AND PD = <0.50, inf) THEN MM116_pred = {0.41} [0.38,0.44]
IF MM31 = <0.24, 0.25) AND PD = (-inf, 0.50) THEN MM116_pred = {0.51} [0.47,0.54]
IF MM31 = (-inf, 0.26) AND DMM116 = <-0.05, 0.05) THEN MM116_pred = {0.46} [0.36,0.55]
IF MM116 = (-inf, 0.45) THEN MM116_pred = {0.40} [0.37,0.44]
IF MM31 = <0.23, 0.24) AND BA13 = (-inf, 1075.50) AND MM116 = <0.45, inf) THEN MM116_pred = {0.50} [0.48,0.52]
IF MM116 = <0.45, 0.55) AND PG072 = <1.65, inf) AND DMM116 = <-0.05, inf) AND PD = (-inf, 0.50) AND MM31 = <0.23, inf) THEN MM116_pred = {0.51} [0.48,0.53]
IF MM116 = <0.45, 0.55) AND PG072 = <1.65, inf) AND MM31 = <0.23, 0.29) AND DMM116 = <-0.05, inf) THEN MM116_pred = {0.51} [0.48,0.53]
IF MM116 = <0.35, 0.55) AND MM31 = (-inf, 0.26) AND BA13 = <1077.50, inf) AND DMM116 = (-inf, -0.05) THEN MM116_pred = {0.54} [0.48,0.60]
IF MM116 = <0.45, 0.55) AND PG072 = <1.65, inf) AND AS038 = (-inf, 2.45) AND PD = (-inf, 0.50) AND BA13 = (-inf, 1077.50) AND MM31 = <0.23, inf) THEN MM116_pred = {0.50} [0.48,0.53]
IF MM116 = <0.45, 0.55) AND MM31 = <0.28, 0.30) AND DMM116 = <-0.05, 0.05) AND AS038 = <2.25, 2.35) AND PG072 = <1.75, 1.95) AND BA13 = <1075.50, 1076.50) AND PD = <0.50, inf) THEN MM116_pred = {0.55} [0.50,0.60]
IF MM116 = (-inf, 0.55) AND MM31 = <0.29, 0.30) AND PG072 = (-inf, 1.95) AND BA13 = (-inf, 1076.50) AND PD = <0.50, inf) THEN MM116_pred = {0.55} [0.50,0.60]
IF MM116 = (-inf, 0.55) THEN MM116_pred = {0.45} [0.39,0.52]
IF MM31 = <0.26, 0.27) AND MM116 = <0.55, 0.65) AND PG072 = <1.75, 1.85) AND AS038 = <2.25, 2.45) AND DMM116 = <-0.05, 0.05) AND PD = (-inf, 0.50) AND BA13 = <1074.50, 1077.50) THEN MM116_pred = {0.60} [NaN,NaN]
IF MM116 = <0.45, 0.65) AND MM31 = <0.23, inf) THEN MM116_pred = {0.55} [0.49,0.61]
IF MM116 = <0.55, 0.75) THEN MM116_pred = {0.67} [0.58,0.77]
IF MM116 = <0.75, 0.85) THEN MM116_pred = {0.83} [0.76,0.90]
IF MM116 = <0.85, inf) THEN MM116_pred = {1.06} [0.88,1.24]
Ruleset evaluation
[20]:
display(ruleset_stats)
| Measure | minimum_covered | maximum_uncovered_fraction | ignore_missing | pruning_enabled | max_growing_condition | time_total_s | time_growing_s | time_pruning_s | rules_count | conditions_per_rule | induced_conditions_per_rule | avg_rule_coverage | avg_rule_precision | avg_rule_quality | pvalue | FDR_pvalue | FWER_pvalue | fraction_significant | fraction_FDR_significant | fraction_FWER_significant |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  | 6.0 | 0.0 | False | True | 0.0 | 12.151842 | 1.240354 | 10.883363 | 21 | 3.190476 | 29.809524 | 0.116152 | 0.849723 | NaN | NaN | NaN | NaN | 0.952381 | 0.952381 | 0.952381 |
Validate model on test dataset
[21]:
predictions = reg.predict(X_test)
regression_metrics = get_regression_metrics('', predictions, y_test)
display(regression_metrics.iloc[0])
absolute_error 0.111355
relative_error 0.103524
relative_error_lenient 0.097884
relative_error_strict 0.114888
normalized_absolute_error 0.767792
squared_error 0.019642
root_mean_squared_error 0.140148
root_relative_squared_error 0.125609
correlation 0.801204
squared_correlation 0.641927
Name: , dtype: float64
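To close the loop, the tuned model's test-set metrics can be placed next to the three fixed-measure models evaluated earlier. A minimal sketch reusing the DataFrames computed above; the 'Tuned' label is added here only for display:
[ ]:
# Side-by-side comparison of test-set metrics: fixed-measure models vs. the tuned model.
comparison = pd.concat([
    c2_regression_metrics,
    corr_regression_metrics,
    rss_regression_metrics,
    regression_metrics.rename(index={'': 'Tuned'}),
])
display(comparison)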