 ===============================
 Tutorial for tabular regression
 ===============================
+"""

-In this tutorial, we compare the prediction intervals estimated by MAPIE on a
-simple, one-dimensional, ground truth function :math:`f(x) = x \times sin(x)`.
+##############################################################################
+# In this tutorial, we compare the prediction intervals estimated by MAPIE on a
+# simple, one-dimensional, ground truth function
+# :math:`f(x) = x \times \sin(x)`.

-Throughout this tutorial, we will answer the following questions:
+# Throughout this tutorial, we will answer the following questions:

-- How well do the MAPIE strategies capture the aleatoric uncertainty
-  existing in the data?
+# - How well do the MAPIE strategies capture the aleatoric uncertainty
+#   existing in the data?

-- How do the prediction intervals estimated by the resampling strategies
-  evolve for new *out-of-distribution* data ?
+# - How do the prediction intervals estimated by the resampling strategies
+#   evolve for new *out-of-distribution* data?

-- How do the prediction intervals vary between regressor models ?
+# - How do the prediction intervals vary between regressor models?

-Throughout this tutorial, we estimate the prediction intervals first using
-a polynomial function, and then using a boosting model, and a simple neural
-network.
+# We estimate the prediction intervals first using a polynomial
+# function, then a boosting model, and finally a simple
+# neural network.

-**For practical problems, we advise using the faster CV+ or
-Jackknife+-after-Bootstrap strategies.
-For conservative prediction interval estimates, you can alternatively
-use the CV-minmax strategies.**
-"""
+# **For practical problems, we advise using the faster CV+ or
+# Jackknife+-after-Bootstrap strategies.
+# For conservative prediction interval estimates, you can alternatively
+# use the CV-minmax strategies.**

 import os
 import subprocess
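The recommendation above names three strategies that differ only in how MAPIE is configured. Below is a minimal sketch of how such a strategy dictionary is typically set up with `MapieRegressor`; the `LinearRegression` estimator, resampling count, and `alpha` level are illustrative assumptions, not this tutorial's exact `STRATEGIES` definition, and the toy data simply follows the ground truth f(x) = x * sin(x) described above.

    # Sketch: configuring the three recommended strategies in MAPIE.
    # Estimator and parameter values are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from mapie.regression import MapieRegressor
    from mapie.subsample import Subsample

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 1))
    y = X.ravel() * np.sin(X.ravel()) + rng.normal(0, 0.5, size=200)

    strategies = {
        "cv_plus": MapieRegressor(LinearRegression(), method="plus", cv=10),
        "jackknife_plus_ab": MapieRegressor(
            LinearRegression(), method="plus", cv=Subsample(n_resamplings=50)
        ),
        "cv_minmax": MapieRegressor(LinearRegression(), method="minmax", cv=10),
    }
    for name, mapie in strategies.items():
        mapie.fit(X, y)
        y_pred, y_pis = mapie.predict(X, alpha=0.05)  # 95% prediction intervals
        print(name, y_pis.shape)  # (n_samples, 2, n_alpha)

The Jackknife+-after-Bootstrap variant reuses `method="plus"` but swaps the cross-validation splitter for MAPIE's `Subsample` resampler, which is what makes it cheaper than the full jackknife.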
@@ -477,7 +479,7 @@ def get_heteroscedastic_coverage(y_test, y_pis, STRATEGIES, bins): |
 )

 # fig = plt.figure()
-heteroscedastic_coverage.T.plot.bar(figsize=(12, 4), alpha=0.7)
+heteroscedastic_coverage.T.plot.bar(figsize=(12, 5), alpha=0.7)
 plt.axhline(0.95, ls="--", color="k")
 plt.ylabel("Conditional coverage")
 plt.xlabel("x bins")
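For reference, the conditional coverage plotted here is just the empirical coverage computed separately on each x bin. A minimal sketch of that computation, assuming `y_pis` has MAPIE's `(n_samples, 2, n_alpha)` interval shape; the helper name and the `np.digitize` binning are assumptions, not the tutorial's `get_heteroscedastic_coverage` implementation:

    import numpy as np
    import pandas as pd

    def binned_coverage(X_test, y_test, y_pis, bins):
        # Flag each test point whose target lies inside its prediction interval.
        covered = (y_pis[:, 0, 0] <= y_test) & (y_test <= y_pis[:, 1, 0])
        # Average the indicator within each x bin: one coverage value per bin.
        bin_ids = np.digitize(X_test.ravel(), bins)
        return pd.Series(covered).groupby(bin_ids).mean()

Bins where this falls visibly below the dashed 0.95 line are regions where the marginal 95% guarantee does not hold conditionally, which is exactly what the bar plot makes visible.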
@@ -785,6 +787,7 @@ def mlp(): |
     ax=ax,
     title=name
 )
+plt.show()


 fig, ax = plt.subplots(1, 1, figsize=(7, 5))