Changes from all commits
46 commits
7790888
.
SiriwatHuntra Aug 1, 2024
2fd5905
New nested exp
SiriwatHuntra Aug 1, 2024
1c0731f
Merge branch 'Nested_CV'
SiriwatHuntra Aug 1, 2024
3a4f8df
merge
SiriwatHuntra Aug 1, 2024
45c0897
rm
SiriwatHuntra Aug 1, 2024
f6a3714
add data
SiriwatHuntra Aug 1, 2024
7da5252
add noisy 80 sample
SiriwatHuntra Aug 1, 2024
600b071
add more data sample
SiriwatHuntra Aug 1, 2024
ff06aab
.
SiriwatHuntra Aug 1, 2024
addc4c7
Merge branch 'main' of https://github.com/SiriwatHuntra/Machine-Learn…
SiriwatHuntra Aug 3, 2024
1c40874
Sync fork
SiriwatHuntra Aug 3, 2024
85be5c6
Merge branch 'main' of https://github.com/SiriwatHuntra/Machine-Learn…
SiriwatHuntra Aug 3, 2024
07e4c6b
Delete Unused folder
SiriwatHuntra Aug 3, 2024
f17288b
Merge branch 'Kariusdi:main' into main
SiriwatHuntra Aug 3, 2024
26470de
Add ass2 py
SiriwatHuntra Aug 3, 2024
8bae9c6
ridge polynomial
SiriwatHuntra Aug 3, 2024
b64ba19
Merge branch 'Model_selection'
SiriwatHuntra Aug 3, 2024
3477b3f
Merge pull request #15 from SiriwatHuntra/main
Kariusdi Aug 3, 2024
bb188d7
:ambulance: : added a bias line.
Kariusdi Aug 3, 2024
c174283
:sparkles: : genaralization between none reg and reg.
Kariusdi Aug 3, 2024
232b495
:memo: : added comment to 5.1.
Kariusdi Aug 3, 2024
460197e
:memo: : added comment to 5.3.
Kariusdi Aug 3, 2024
c93a6a8
:memo: : added path to main README.
Kariusdi Aug 3, 2024
1830963
Fix E_Out
SiriwatHuntra Aug 3, 2024
0d77aa1
Merge branch 'Kariusdi:main' into main
SiriwatHuntra Aug 3, 2024
a33bd74
Add new version ridge RMSE
SiriwatHuntra Aug 3, 2024
2ecb851
Merge branch 'RidgeRMSEwithLamda'
SiriwatHuntra Aug 3, 2024
a9c3f4e
Merge pull request #17 from SiriwatHuntra/main
Kariusdi Aug 3, 2024
0828d3c
Delete
SiriwatHuntra Aug 3, 2024
56c3db7
:memo: added readme for regularization explanation.
Kariusdi Aug 3, 2024
690382c
:coffin: : remove unused code.
Kariusdi Aug 3, 2024
81e4ce4
:memo: : added all docs.
Kariusdi Aug 3, 2024
1b7f220
fix-coding-5.1
supakron-suk Aug 5, 2024
c209d63
fix RMSE
SiriwatHuntra Aug 5, 2024
6c9f1a8
Merge branch 'Kariusdi:main' into main
SiriwatHuntra Aug 5, 2024
9bd9a26
Chaange Ridge Test file
SiriwatHuntra Aug 5, 2024
a090693
fix-coding-5.1(coment)
supakron-suk Aug 5, 2024
89459d8
Merge branches 'main' and 'main' of https://github.com/SiriwatHuntra/…
SiriwatHuntra Aug 5, 2024
da938bb
Rerange code
SiriwatHuntra Aug 5, 2024
57bf9af
Merge branch 'main' of https://github.com/kong08555/Machine-Learning-…
supakron-suk Aug 5, 2024
caf3e20
.
SiriwatHuntra Aug 5, 2024
f896518
Merge pull request #19 from SiriwatHuntra/main
Kariusdi Aug 5, 2024
bb4dedc
:bug: : changed to rss.
Kariusdi Aug 5, 2024
d6da430
:bug: : changed path.
Kariusdi Aug 5, 2024
4d3b927
Update README.md
Kariusdi Aug 18, 2024
c22880f
Merge branch 'Kariusdi:main' into main
supakron-suk Aug 18, 2024
54 changes: 54 additions & 0 deletions Generalization/README.md
@@ -0,0 +1,54 @@
# Generalization 👻

Generalization refers to a model's ability to perform well on new, unseen data. In linear regression, achieving good generalization means that the model captures the underlying trend in the data without overfitting or underfitting.

## Need to know for this section 👨🏽‍💻

### Bias and Variance

- <mark>**_Bias_**:</mark>

Bias is the error introduced by approximating a real-world problem (which may be complex) by a simplified model. In linear regression, high bias often occurs when the model is too simple to capture the underlying patterns in the data.

> **_High bias_** can lead to underfitting, where the model fails to capture the underlying trend and performs poorly on both training and test data.

- <mark>**_Variance_**:</mark>

Variance is the error introduced by the model’s sensitivity to fluctuations in the training data. High variance occurs when the model is too complex and learns not only the underlying patterns but also the noise in the training data.

> **_High variance_** can lead to overfitting, where the model performs very well on training data but poorly on test data due to its excessive complexity.

</br>

![generalization](./assets/genaralize.png)
</br>

> The left image shows high bias but low variance (underfitting), while the right image shows low bias but high variance (overfitting).

### Overfitting vs. Underfitting

- Overfitting:

Occurs when a model learns the noise in the training data rather than the actual signal. This results in excellent performance on training data but poor performance on new, unseen data.

<mark>High variance, low bias: low training error, high test error.</mark>

- Underfitting:

Occurs when a model is too simplistic to capture the underlying patterns in the data. This results in poor performance on both training and test data.

<mark>High bias, high training error, high test error.</mark>

### Goal Representation

Bias vs. Variance Trade-off:

- **_High Bias (Underfitting)_** ⟶ Simple Model ⟶ High Training Error, High Test Error
- **_Low Bias, Low Variance_** ⟶ Optimal Model ⟶ Low Training Error, Low Test Error
- **_High Variance (Overfitting)_** ⟶ Complex Model ⟶ Low Training Error, High Test Error

> Training error is E_in and testing error is E_out.
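
A minimal sketch to make the trade-off concrete (assuming scikit-learn; the noisy sine data, noise level, and degrees are illustrative, mirroring the datasets used elsewhere in this repo):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = np.linspace(-1, 1, 80).reshape(-1, 1)
y = np.sin(np.pi * X).ravel() + rng.normal(0, 0.3, 80)  # noisy sine

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for degree in (1, 3, 15):  # underfit, near-optimal, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    e_in = np.sqrt(mean_squared_error(y_train, model.predict(X_train)))  # training error
    e_out = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))   # testing error
    print(f"degree={degree:2d}  E_in={e_in:.3f}  E_out={e_out:.3f}")
```

Degree 1 should show high E_in and E_out (underfitting), while degree 15 should show low E_in but a noticeably higher E_out (overfitting).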

### Note 🚨

We can visualize overfitting and underfitting by plotting a learning curve, as sketched below. A fuller example lives at https://github.com/Kariusdi/Machine-Learning-Class67/tree/main/LearningCurve.
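
A minimal learning-curve sketch (assuming scikit-learn; the data generation is an illustrative stand-in):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 80).reshape(-1, 1)
y = np.sin(np.pi * X).ravel() + rng.normal(0, 0.3, 80)

# Train on growing subsets and track training vs. validation error
sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 8),
    scoring="neg_mean_squared_error",
)
train_rmse = np.sqrt(-train_scores.mean(axis=1))
val_rmse = np.sqrt(-val_scores.mean(axis=1))
for n, tr, va in zip(sizes, train_rmse, val_rmse):
    print(f"n={n:3d}  train RMSE={tr:.3f}  validation RMSE={va:.3f}")
```

A persistent gap between the two curves suggests high variance (overfitting); curves that converge at a high error suggest high bias (underfitting).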
Binary file added Generalization/assets/genaralize.png
Binary file added LearningCurve.zip
Binary file not shown.
92 changes: 49 additions & 43 deletions LearningCurve/Noisy/LinearModel.ipynb

Large diffs are not rendered by default.

90 changes: 47 additions & 43 deletions LearningCurve/Noisy/LinearOriginModel.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion Linear-Regression/README.md
@@ -1,6 +1,6 @@
# Linear Regression 📈

Linear Regression is the **_Supervised Learning_** that can predict the output as a continuous. For example, if we have gender and heigth as features and weigth as a output. We can use linear regression to predict the weigth when new data has come.
Linear Regression is a **_Supervised Learning_** method that predicts a continuous output. For example, if we have gender and height as features and weight as the output, we can use linear regression to predict the weight when new data comes in.

## Need to know for this section 👨🏽‍💻

6 changes: 3 additions & 3 deletions Linear-Regression/model/LinearRegression.py
@@ -1,8 +1,8 @@
import numpy as np

def gradientDescent(n_samples, lr, X, y, y_pred):
d_weigths = (1 / n_samples) * np.dot(X.T, (y_pred - y)) # d_weights = 1 / n * ∑(y_prediction - y_actual) * X
d_bias = (1 / n_samples) * np.sum(y_pred - y) # d_bias = 1 / n * ∑(y_prediction - y_actual)
d_weigths = np.mean(np.dot(X.T, (y_pred - y))) # d_weights = 1 / n * ∑(y_prediction - y_actual) * X
d_bias = np.mean((1 / n_samples) * np.sum(y_pred - y)) # d_bias = 1 / n * ∑(y_prediction - y_actual)

weights_gradient = lr * d_weigths
bias_gradient = lr * d_bias
@@ -14,7 +14,7 @@ def normalEquation(X, y):
return new_weights

def costFunction(n_samples, y_pred, y): # Mean Squared Error (MSE)
return (1 / (2 * n_samples)) * np.sum((y_pred - y)**2)
return np.mean((y_pred - y) ** 2)

class LinearRegression:

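For reference, here is the update rule that the comments in this hunk describe, as a minimal self-contained NumPy sketch (a hypothetical standalone function, not the repository's exact class):

```python
import numpy as np

def gradient_descent_step(X, y, weights, bias, lr):
    """One batch gradient-descent step for linear regression with MSE loss."""
    n_samples = X.shape[0]
    y_pred = X @ weights + bias
    # d_weights = 1/n * X^T (y_pred - y): one gradient component per feature
    d_weights = (1 / n_samples) * (X.T @ (y_pred - y))
    # d_bias = 1/n * sum(y_pred - y)
    d_bias = (1 / n_samples) * np.sum(y_pred - y)
    return weights - lr * d_weights, bias - lr * d_bias
```

Note that the per-feature gradient is a vector; collapsing it to a scalar with a mean would lose the per-feature components whenever X has more than one column.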
Binary file added Performance-Estimation.zip
Binary file not shown.
@@ -33,7 +33,7 @@ def cross_validation(x, y, degree):
rmse_cv = np.sqrt(-cv_scores.mean())
return rmse_cv

data_dir = 'Performance-Estimation/Experiments-python/polynomial/datasets'
data_dir = './datasets'
files = os.listdir(data_dir)

degreeArray = [1, 2, 3, 4, 5, 6, 7, 8]
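A minimal sketch of the degree-selection loop this hunk feeds (assuming scikit-learn; the relative path matches the change above, but the exact CSV name is illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv('./datasets/sin_noisy_80sample.csv')  # illustrative dataset
x = data[['x']].to_numpy()
y = data['noisy_y'].to_numpy()

for degree in [1, 2, 3, 4, 5, 6, 7, 8]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    cv_scores = cross_val_score(model, x, y, cv=10, scoring='neg_mean_squared_error')
    rmse_cv = np.sqrt(-cv_scores.mean())  # same reduction as in the hunk above
    print(f"degree={degree}: CV RMSE = {rmse_cv:.4f}")
```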
@@ -26,16 +26,16 @@ def polynomial_regression(x, y, degree):


files = {
'Noiseless_10': 'Performance-Estimation/Experiments-python/polynomial/datasets/sin_noiseless_10sample.csv',
'Noisy_10': 'Performance-Estimation/Experiments-python/polynomial/datasets/sin_noisy_10sample.csv',
'Noiseless_10': 'Performance-Estimation/Experiments-python/polynomial/datasets/sin_noiseless_10sample.csv',
'Noisy_10': 'Performance-Estimation/Experiments-python/polynomial/datasets/sin_noisy_10sample.csv',
'Noiseless_10': './datasets/sin_noiseless_10sample.csv',
'Noisy_10': './datasets/sin_noisy_10sample.csv',
'Noiseless_10': './datasets/sin_noiseless_10sample.csv',
'Noisy_10': './datasets/sin_noisy_10sample.csv',
# 'Noiseless_20': 'Performance-Estimation/Experiments-python/polynomial/data/sin_noiseless_20sample.csv',
# 'Noisy_20': 'Performance-Estimation/Experiments-python/polynomial/data/sin_noisy_20sample.csv',
# 'Noiseless_40': 'Performance-Estimation/Experiments-python/polynomial/data/sin_noiseless_40sample.csv',
# 'Noisy_40': 'Performance-Estimation/Experiments-python/polynomial/data/sin_noisy_40sample.csv',
'Noiseless_80': 'Performance-Estimation/Experiments-python/polynomial/datasets/sin_noiseless_80sample.csv',
'Noisy_80': 'Performance-Estimation/Experiments-python/polynomial/datasets/sin_noisy_80sample.csv'
'Noiseless_80': './datasets/sin_noiseless_80sample.csv',
'Noisy_80': './datasets/sin_noisy_80sample.csv'
}

degree = 8
@@ -60,5 +60,5 @@ def polynomial_regression(x, y, degree):
df_results.at[f'w{i}', key] = coeffs[i]

print(df_results)
print(results)
# print(results)

50 changes: 22 additions & 28 deletions Performance-Estimation/Nested-CrossValidation/NestedCV.ipynb
@@ -15,38 +15,37 @@
},
{
"cell_type": "code",
"execution_count": 74,
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"# Load the dataset\n",
"#data = pd.read_csv('HeightWeight.csv')\n",
"\n",
"data = pd.read_csv('sin_noisy_80sample.csv')\n",
"\n",
"# Extract features (X) and target variable (y)\n",
"#X = data['Height'].values.reshape(-1, 1) # Reshape to 2D array\n",
"#y = data['Weight'].values\n",
"\n",
"X = data[['x', 'x^2', 'x^3', 'x^4', 'x^5', 'x^6', 'x^7', 'x^8']]\n",
"X = X.to_numpy()\n",
"y = data['noisy_y'].values\n",
"data = pd.read_csv('HeightWeight.csv')\n",
"X = data['Height'].values.reshape(-1, 1) # Reshape to 2D array\n",
"y = data['Weight'].values\n",
"\n",
"#data = pd.read_csv('sin_noisy_80sample.csv')\n",
"#X = data[['x', 'x^2', 'x^3', 'x^4', 'x^5', 'x^6', 'x^7', 'x^8']]\n",
"#X = X.to_numpy()\n",
"#y = data['noisy_y'].values\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 75,
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Resubstitution RMSE: 0.257766277865693\n",
"CV RMSE: 0.40\n"
"CV RMSE: 0.4852\n"
]
}
],
@@ -64,14 +64,11 @@
"from sklearn.model_selection import cross_val_score\n",
"\n",
"# Cross-validation\n",
"fold = 10\n",
"model = LinearRegression()\n",
"cv_scores = cross_val_score(model, X, y, cv=fold, scoring='neg_mean_squared_error')\n",
"cv_rmse = np.sqrt(-cv_scores).mean()\n",
"\n",
"cv_scores = cross_val_score(model, X, y, cv=10, scoring='neg_mean_squared_error')\n",
"mean_cv_rmse = np.sqrt(-np.mean(cv_scores))\n",
"#output\n",
"print(f\"Resubstitution RMSE: {resubstitution_rmse}\")\n",
"print(f\"CV RMSE: {cv_rmse:.2f}\")\n"
"print(f\"CV RMSE: {mean_cv_rmse:.4f}\")\n"
]
},
{
@@ -83,15 +83,15 @@
},
{
"cell_type": "code",
"execution_count": 76,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Mean RMSE (nested CV): 0.3088\n",
"Mean RMSE (regular CV): 15.1747\n",
"Mean RMSE (nested CV): 0.2898\n",
"Mean RMSE (regular CV): 0.4852\n",
"Mean RMSE (resubstution): 0.2578\n"
]
}
@@ -104,7 +100,7 @@
"\n",
"model = LinearRegression()\n",
"\n",
"outer_cv = KFold(n_splits=5, shuffle=True, random_state=42)\n",
"outer_cv = KFold(n_splits=10, shuffle=True, random_state=42)\n",
"\n",
"rmse_scores = []\n",
"\n",
@@ -114,7 +110,7 @@
" y_train, y_val = y[train_idx], y[val_idx]\n",
"\n",
" # Define the inner loop (for model evaluation)\n",
" inner_cv = KFold(n_splits=3, shuffle=True, random_state=42)\n",
" inner_cv = KFold(n_splits=10, shuffle=True, random_state=42)\n",
" inner_rmse_scores = []\n",
"\n",
" # Inner loop: perform k-fold cross-validation on the training set\n",
@@ -131,21 +127,19 @@
" rmse_scores.append(outer_rmse)\n",
"\n",
"\n",
"# Calculate the mean RMSE across outer folds\n",
"# Calculate the mean RMSE nested\n",
"mean_rmse = np.mean(rmse_scores)\n",
"print(f\"Mean RMSE (nested CV): {mean_rmse:.4f}\")\n",
"\n",
"cv_scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')\n",
"cv_scores = cross_val_score(model, X, y, cv=10, scoring='neg_mean_squared_error')\n",
"mean_cv_rmse = np.sqrt(-np.mean(cv_scores))\n",
"print(f\"Mean RMSE (regular CV): {mean_cv_rmse:.4f}\")\n",
"\n",
"#compare with resubstution\n",
"model.fit(X, y)\n",
"y_pred = model.predict(X)\n",
"resubstitution_rmse = np.sqrt(mean_squared_error(y, y_pred))\n",
"print(f\"Mean RMSE (resubstution): {resubstitution_rmse:.4f}\")\n",
"\n",
"\n"
"print(f\"Mean RMSE (resubstution): {resubstitution_rmse:.4f}\")"
]
}
],
35 changes: 34 additions & 1 deletion Performance-Estimation/Nested-CrossValidation/README.md
@@ -1 +1,34 @@
Use jupyternotebook
# Nested Cross-Validation 🧵

Nested cross-validation is a robust technique used to evaluate the performance of a machine learning model and to tune its hyperparameters. It involves two nested loops of cross-validation to avoid overfitting and to ensure that hyperparameter tuning does not bias the model evaluation.

![nested](./assets/nested.png)
</br>

> We use a Jupyter notebook for this section.

## Need to know for this section 👨🏽‍💻

### How It Works

1. <mark>**_Outer Cross-Validation Loop:_**</mark>

- Purpose: Estimates the generalization performance of the model.

- Process: The dataset is divided into several folds (e.g., 5 or 10). For each iteration, one fold is held out as the test set, while the remaining folds are used for training and hyperparameter tuning.

2. <mark>**_Inner Cross-Validation Loop:_**</mark>

- Purpose: Selects the best hyperparameters for the model.

- Process: Within each training set from the outer loop, the data is further split into training and validation sets. The model is trained on the training set with different hyperparameters and evaluated on the validation set to find the optimal hyperparameters (see the sketch below).
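
A minimal sketch of the two loops (assuming scikit-learn; ridge regression and the alpha grid are illustrative stand-ins for whatever model and hyperparameters you are tuning):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=42)

inner_cv = KFold(n_splits=3, shuffle=True, random_state=42)  # hyperparameter tuning
outer_cv = KFold(n_splits=5, shuffle=True, random_state=42)  # performance estimate

# Inner loop: GridSearchCV re-runs the alpha search on each outer-training split
tuner = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]},
                     cv=inner_cv, scoring="neg_mean_squared_error")

# Outer loop: evaluate the tuned model on held-out outer folds
scores = cross_val_score(tuner, X, y, cv=outer_cv, scoring="neg_mean_squared_error")
print(f"Nested CV RMSE: {np.sqrt(-scores.mean()):.4f}")
```

Passing the GridSearchCV object itself to cross_val_score is what makes the evaluation nested: each outer fold re-runs the inner search from scratch, so the held-out fold never influences hyperparameter selection.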

### Benefits

- **Unbiased Evaluation:** Provides an unbiased estimate of the model's performance by ensuring that hyperparameter tuning does not influence the test set performance.

- **Robustness:** Helps in selecting the best model and its hyperparameters while mitigating overfitting.

### Example Use

Nested cross-validation is particularly useful for models with many hyperparameters or when working with small datasets, as it provides a reliable estimate of model performance and parameter settings.
@@ -0,0 +1,81 @@
x,x^2,x^3,x^4,x^5,x^6,x^7,x^8,noisy_y
-1,1,-1,1,-1,1,-1,1,9.47E-02
-0.975,0.950625,-0.926859375,0.903687891,-0.881095693,0.859068301,-0.837591593,0.816651804,0.085315925
-0.95,0.9025,-0.857375,0.81450625,-0.773780938,0.735091891,-0.698337296,0.663420431,-0.217810744
-0.925,0.855625,-0.791453125,0.732094141,-0.67718708,0.626398049,-0.579418195,0.535961831,-0.216852518
-0.9,0.81,-0.729,0.6561,-0.59049,0.531441,-0.4782969,0.43046721,0.241125038
-0.875,0.765625,-0.669921875,0.586181641,-0.512908936,0.448795319,-0.392695904,0.343608916,-0.276597598
-0.85,0.7225,-0.614125,0.52200625,-0.443705313,0.377149516,-0.320577088,0.272490525,-0.43482938
-0.825,0.680625,-0.561515625,0.463250391,-0.382181572,0.315299797,-0.260122333,0.214600924,-0.447385008
-0.8,0.64,-0.512,0.4096,-0.32768,0.262144,-0.2097152,0.16777216,-0.892179357
-0.775,0.600625,-0.465484375,0.360750391,-0.279581553,0.216675703,-0.16792367,0.130140844,-0.94949321
-0.75,0.5625,-0.421875,0.31640625,-0.237304688,0.177978516,-0.133483887,0.100112915,-0.872879277
-0.725,0.525625,-0.381078125,0.276281641,-0.200304189,0.145220537,-0.10528489,0.076331545,-0.840621157
-0.7,0.49,-0.343,0.2401,-0.16807,0.117649,-0.0823543,0.05764801,-1.02369268
-0.675,0.455625,-0.307546875,0.207594141,-0.140126045,0.09458508,-0.063844929,0.043095327,-0.71621557
-0.65,0.4225,-0.274625,0.17850625,-0.116029063,0.075418891,-0.049022279,0.031864481,-0.603624075
-0.625,0.390625,-0.244140625,0.152587891,-0.095367432,0.059604645,-0.037252903,0.023283064,-1.291658542
-0.6,0.36,-0.216,0.1296,-0.07776,0.046656,-0.0279936,0.01679616,-0.48159965
-0.575,0.330625,-0.190109375,0.109312891,-0.062854912,0.036141574,-0.020781405,0.011949308,-0.718855416
-0.55,0.3025,-0.166375,0.09150625,-0.050328438,0.027680641,-0.015224352,0.008373394,-0.583672063
-0.525,0.275625,-0.144703125,0.075969141,-0.039883799,0.020938994,-0.010992972,0.00577131,-1.238732211
-0.5,0.25,-0.125,0.0625,-0.03125,0.015625,-0.0078125,0.00390625,-1.148663638
-0.475,0.225625,-0.107171875,0.050906641,-0.024180654,0.011485811,-0.00545576,0.002591486,-0.773880498
-0.45,0.2025,-0.091125,0.04100625,-0.018452813,0.008303766,-0.003736695,0.001681513,-0.977407995
-0.425,0.180625,-0.076765625,0.032625391,-0.013865791,0.005892961,-0.002504509,0.001064416,-0.467744889
-0.4,0.16,-0.064,0.0256,-0.01024,0.004096,-0.0016384,0.00065536,-1.264977152
-0.375,0.140625,-0.052734375,0.019775391,-0.007415771,0.002780914,-0.001042843,0.000391066,-1.164394703
-0.35,0.1225,-0.042875,0.01500625,-0.005252188,0.001838266,-0.000643393,0.000225188,-0.372671893
-0.325,0.105625,-0.034328125,0.011156641,-0.003625908,0.00117842,-0.000382987,0.000124471,-0.842357763
-0.3,0.09,-0.027,0.0081,-0.00243,0.000729,-0.0002187,0.00006561,-0.862768302
-0.275,0.075625,-0.020796875,0.005719141,-0.001572764,0.00043251,-0.00011894,3.27E-05,-1.004939807
-0.25,0.0625,-0.015625,0.00390625,-0.000976563,0.000244141,-6.10E-05,1.53E-05,-0.749409855
-0.225,0.050625,-0.011390625,0.002562891,-0.00057665,0.000129746,-2.92E-05,6.57E-06,-1.228917422
-0.2,0.04,-0.008,0.0016,-0.00032,0.000064,-0.0000128,0.00000256,-0.027554676
-0.175,0.030625,-0.005359375,0.000937891,-0.000164131,2.87E-05,-5.03E-06,8.80E-07,-0.583735182
-0.15,0.0225,-0.003375,0.00050625,-7.59E-05,1.14E-05,-1.71E-06,2.56E-07,-0.666022299
-0.125,0.015625,-0.001953125,0.000244141,-3.05E-05,3.81E-06,-4.77E-07,5.96E-08,-0.630671286
-0.1,0.01,-0.001,0.0001,-0.00001,0.000001,-0.0000001,0.00000001,-0.521238687
-0.075,0.005625,-0.000421875,3.16E-05,-2.37E-06,1.78E-07,-1.33E-08,1.00E-09,-0.396522381
-0.05,0.0025,-0.000125,6.25E-06,-3.13E-07,1.56E-08,-7.81E-10,3.91E-11,-0.33990062
-0.025,0.000625,-1.56E-05,3.91E-07,-9.77E-09,2.44E-10,-6.10E-12,1.53E-13,-0.207627022
0,0,0,0,0,0,0,0,-0.026636812
0.025,0.000625,1.56E-05,3.91E-07,9.77E-09,2.44E-10,6.10E-12,1.53E-13,0.25893599
0.05,0.0025,0.000125,6.25E-06,3.13E-07,1.56E-08,7.81E-10,3.91E-11,0.095868961
0.075,0.005625,0.000421875,3.16E-05,2.37E-06,1.78E-07,1.33E-08,1.00E-09,0.348533024
0.1,0.01,0.001,0.0001,0.00001,0.000001,0.0000001,0.00000001,0.093522053
0.125,0.015625,0.001953125,0.000244141,3.05E-05,3.81E-06,4.77E-07,5.96E-08,0.842587873
0.15,0.0225,0.003375,0.00050625,7.59E-05,1.14E-05,1.71E-06,2.56E-07,-0.189991721
0.175,0.030625,0.005359375,0.000937891,0.000164131,2.87E-05,5.03E-06,8.80E-07,0.468500083
0.2,0.04,0.008,0.0016,0.00032,0.000064,0.0000128,0.00000256,0.638736169
0.225,0.050625,0.011390625,0.002562891,0.00057665,0.000129746,2.92E-05,6.57E-06,0.637310758
0.25,0.0625,0.015625,0.00390625,0.000976563,0.000244141,6.10E-05,1.53E-05,0.151758758
0.275,0.075625,0.020796875,0.005719141,0.001572764,0.00043251,0.00011894,3.27E-05,0.571485735
0.3,0.09,0.027,0.0081,0.00243,0.000729,0.0002187,0.00006561,1.039489377
0.325,0.105625,0.034328125,0.011156641,0.003625908,0.00117842,0.000382987,0.000124471,0.539970183
0.35,0.1225,0.042875,0.01500625,0.005252188,0.001838266,0.000643393,0.000225188,0.768750431
0.375,0.140625,0.052734375,0.019775391,0.007415771,0.002780914,0.001042843,0.000391066,0.95712042
0.4,0.16,0.064,0.0256,0.01024,0.004096,0.0016384,0.00065536,0.906070705
0.425,0.180625,0.076765625,0.032625391,0.013865791,0.005892961,0.002504509,0.001064416,0.547611399
0.45,0.2025,0.091125,0.04100625,0.018452813,0.008303766,0.003736695,0.001681513,1.170230343
0.475,0.225625,0.107171875,0.050906641,0.024180654,0.011485811,0.00545576,0.002591486,1.129043883
0.5,0.25,0.125,0.0625,0.03125,0.015625,0.0078125,0.00390625,0.906775428
0.525,0.275625,0.144703125,0.075969141,0.039883799,0.020938994,0.010992972,0.00577131,0.755308219
0.55,0.3025,0.166375,0.09150625,0.050328438,0.027680641,0.015224352,0.008373394,0.713738156
0.575,0.330625,0.190109375,0.109312891,0.062854912,0.036141574,0.020781405,0.011949308,1.041836368
0.6,0.36,0.216,0.1296,0.07776,0.046656,0.0279936,0.01679616,0.731265455
0.625,0.390625,0.244140625,0.152587891,0.095367432,0.059604645,0.037252903,0.023283064,0.787968421
0.65,0.4225,0.274625,0.17850625,0.116029063,0.075418891,0.049022279,0.031864481,0.302864715
0.675,0.455625,0.307546875,0.207594141,0.140126045,0.09458508,0.063844929,0.043095327,0.811118522
0.7,0.49,0.343,0.2401,0.16807,0.117649,0.0823543,0.05764801,0.482005232
0.725,0.525625,0.381078125,0.276281641,0.200304189,0.145220537,0.10528489,0.076331545,0.604893022
0.75,0.5625,0.421875,0.31640625,0.237304688,0.177978516,0.133483887,0.100112915,0.640054223
0.775,0.600625,0.465484375,0.360750391,0.279581553,0.216675703,0.16792367,0.130140844,0.566782505
0.8,0.64,0.512,0.4096,0.32768,0.262144,0.2097152,0.16777216,0.137383159
0.825,0.680625,0.561515625,0.463250391,0.382181572,0.315299797,0.260122333,0.214600924,-0.043048417
0.85,0.7225,0.614125,0.52200625,0.443705313,0.377149516,0.320577088,0.272490525,0.824271132
0.875,0.765625,0.669921875,0.586181641,0.512908936,0.448795319,0.392695904,0.343608916,0.426637618
0.9,0.81,0.729,0.6561,0.59049,0.531441,0.4782969,0.43046721,-0.004979189
0.925,0.855625,0.791453125,0.732094141,0.67718708,0.626398049,0.579418195,0.535961831,-0.01728292
0.95,0.9025,0.857375,0.81450625,0.773780938,0.735091891,0.698337296,0.663420431,0.559308748
0.975,0.950625,0.926859375,0.903687891,0.881095693,0.859068301,0.837591593,0.816651804,-0.006631242