diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 00000000..6e04fc31
Binary files /dev/null and b/.DS_Store differ
diff --git a/.vscode/settings.json b/.vscode/settings.json
new file mode 100644
index 00000000..e8ddd9c2
--- /dev/null
+++ b/.vscode/settings.json
@@ -0,0 +1,11 @@
+{
+    "python.pythonPath": "/home/ctr26/miniconda3/envs/dl4mic/bin/python",
+    "python.formatting.provider": "black",
+    "python.testing.pytestArgs": [
+        "tests"
+    ],
+    "python.testing.unittestEnabled": false,
+    "python.testing.nosetestsEnabled": false,
+    "python.testing.pytestEnabled": true
+}
+
diff --git a/ColabNotebooks/CARE_2D_ZeroCostDL4Mic.ipynb b/ColabNotebooks/CARE_2D_ZeroCostDL4Mic.ipynb
new file mode 100644
index 00000000..b66655e5
--- /dev/null
+++ b/ColabNotebooks/CARE_2D_ZeroCostDL4Mic.ipynb
@@ -0,0 +1,1984 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "V9zNGvape2-I"
+   },
+   "source": [
+    "# **CARE: Content-aware image restoration (2D)**\n",
+    "\n",
+    "---\n",
+    "\n",
+    "CARE is a neural network capable of restoring corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The CARE network uses a U-Net architecture and allows image restoration and resolution improvement in 2D and 3D images, in a supervised manner, using noisy images as input and low-noise images as targets for training. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising.\n",
+    "\n",
+    "**This particular notebook enables restoration of 2D datasets. If you are interested in restoring 3D datasets, you should use the CARE 3D notebook instead.**\n",
+    "\n",
+    "---\n",
+    "\n",
+    "*Disclaimer*:\n",
+    "\n",
+    "This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki).
Jointly developed by the Jacquemet (https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.\n",
+    "\n",
+    "This notebook is based on the following paper:\n",
+    "\n",
+    "**Content-aware image restoration: pushing the limits of fluorescence microscopy**, by Weigert *et al.*, published in Nature Methods in 2018 (https://www.nature.com/articles/s41592-018-0216-7)\n",
+    "\n",
+    "And source code found in: https://github.com/csbdeep/csbdeep\n",
+    "\n",
+    "For a more in-depth description of the features of the network, please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work.\n",
+    "\n",
+    "We provide a dataset for the training of this notebook as a way to test its functionalities, but the training and test data of the restoration experiments are also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).\n",
+    "\n",
+    "\n",
+    "**Please also cite the original paper when using or developing this notebook.**"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "jWAz2i7RdxUV"
+   },
+   "source": [
+    "# **How to use this notebook?**\n",
+    "\n",
+    "---\n",
+    "\n",
+    "Videos describing how to use our notebooks are available on YouTube:\n",
+    "  - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run-through of the workflow to obtain the notebooks and the provided test datasets, as well as a common use of the notebook\n",
+    "  - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook\n",
+    "\n",
+    "\n",
+    "---\n",
+    "###**Structure of a notebook**\n",
+    "\n",
+    "The notebook contains two types of cell:\n",
+    "\n",
+    "**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.\n",
+    "\n",
+    "**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor to the `[ ]` mark on the left side of the cell (a play button appears). Click it to execute the cell. Once execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.\n",
+    "\n",
+    "---\n",
+    "###**Table of contents, Code snippets** and **Files**\n",
+    "\n",
+    "On the top left side of the notebook you find three tabs which contain, from top to bottom:\n",
+    "\n",
+    "*Table of contents* = contains the structure of the notebook. Click an entry to move quickly between sections.\n",
+    "\n",
+    "*Code snippets* = contains examples of how to code certain tasks. You can ignore this tab when using this notebook.\n",
+    "\n",
+    "*Files* = contains all available files. After mounting your Google Drive (see section 1.) you will find your files and folders here.\n",
+    "\n",
+    "**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive button; your Google Drive is connected in section 1.2.\n",
+    "\n",
+    "**Note:** The \"sample data\" in \"Files\" contains default files. Do not upload anything there!\n",
+    "\n",
+    "---\n",
+    "###**Making changes to the notebook**\n",
+    "\n",
+    "**You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.\n",
+    "\n",
+    "To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).\n",
+    "You can use the `#` mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment.\n",
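+    "\n",
+    "For instance (a toy example of ours, not from the original guide):\n",
+    "\n",
+    "```python\n",
+    "# plt.imshow(x)   # commented out: this line stays in the cell but is not executed\n",
+    "print(\"only this line runs\")\n",
+    "```"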
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "vNMDQHm0Ah-Z"
+   },
+   "source": [
+    "#**0. Before getting started**\n",
+    "---\n",
+    "For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and that the correspondence between the two is indicated.\n",
+    "\n",
+    "Therefore, the data structure is important: all the input images must be in one folder and all the target images in a separate folder. The provided training dataset is already split into two folders called \"Training - Low SNR images\" (Training_source) and \"Training - high SNR images\" (Training_target). Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\n",
+    "\n",
+    "**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset).** The quality control assessment can be done directly in this notebook.\n",
+    "\n",
+    "**Additionally, the corresponding input and output files need to have the same name** (a quick way to check this is sketched at the end of this section).\n",
+    "\n",
+    "Please note that you currently can **only use .tif files!**\n",
+    "\n",
+    "\n",
+    "Here's a common data structure that can work:\n",
+    "* Experiment A\n",
+    "  - **Training dataset**\n",
+    "    - Low SNR images (Training_source)\n",
+    "      - img_1.tif, img_2.tif, ...\n",
+    "    - High SNR images (Training_target)\n",
+    "      - img_1.tif, img_2.tif, ...\n",
+    "  - **Quality control dataset**\n",
+    "    - Low SNR images\n",
+    "      - img_1.tif, img_2.tif\n",
+    "    - High SNR images\n",
+    "      - img_1.tif, img_2.tif\n",
+    "  - **Data to be predicted**\n",
+    "  - **Results**\n",
+    "\n",
+    "---\n",
+    "**Important note**\n",
+    "\n",
+    "- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.\n",
+    "\n",
+    "- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.\n",
+    "\n",
+    "- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.\n",
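+    "\n",
+    "Here is a minimal sanity check for the pairing; this is our addition, not part of the official workflow, and the two paths are examples to adapt:\n",
+    "\n",
+    "```python\n",
+    "import os\n",
+    "\n",
+    "Training_source = \"/content/gdrive/MyDrive/Experiment_A/Training_source\"  # example path\n",
+    "Training_target = \"/content/gdrive/MyDrive/Experiment_A/Training_target\"  # example path\n",
+    "\n",
+    "src = sorted(f for f in os.listdir(Training_source) if f.endswith(\".tif\"))\n",
+    "tgt = sorted(f for f in os.listdir(Training_target) if f.endswith(\".tif\"))\n",
+    "print(len(src), \"source images /\", len(tgt), \"target images\")\n",
+    "unpaired = sorted(set(src) ^ set(tgt))\n",
+    "if unpaired:\n",
+    "    print(\"Files without a matching pair:\", unpaired)\n",
+    "```\n",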
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "b4-r1gE7Iamv"
+   },
+   "source": [
+    "# **1. Initialise the Colab session**\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "DMNHVZfHmbKb"
+   },
+   "source": [
+    "\n",
+    "## **1.1. Check for GPU access**\n",
+    "---\n",
+    "\n",
+    "By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:\n",
+    "\n",
+    "Go to **Runtime -> Change the Runtime type**\n",
+    "\n",
+    "**Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)*\n",
+    "\n",
+    "**Accelerator: GPU** *(Graphics processing unit)*\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "BDhmUgqCStlm"
+   },
+   "outputs": [],
+   "source": [
+    "#@markdown ##Run this cell to check if you have GPU access\n",
+    "\n",
+    "%tensorflow_version 1.x\n",
+    "\n",
+    "import tensorflow as tf\n",
+    "if tf.test.gpu_device_name()=='':\n",
+    "  print('You do not have GPU access.')\n",
+    "  print('Did you change your runtime?')\n",
+    "  print('If the runtime setting is correct then Google did not allocate a GPU for your session')\n",
+    "  print('Expect slow performance. To access a GPU, try reconnecting later')\n",
+    "\n",
+    "else:\n",
+    "  print('You have GPU access')\n",
+    "  !nvidia-smi"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "-oqBTeLaImnU"
+   },
+   "source": [
+    "## **1.2. Mount your Google Drive**\n",
+    "---\n",
+    "To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.\n",
+    "\n",
+    "Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press enter. This will give Colab access to the data on the drive.\n",
+    "\n",
+    "Once this is done, your data are available in the **Files** tab on the top left of the notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "01Djr8v-5pPk"
+   },
+   "outputs": [],
+   "source": [
+    "\n",
+    "#@markdown ##Run this cell to connect your Google Drive to Colab\n",
+    "\n",
+    "#@markdown * Click on the URL.\n",
+    "\n",
+    "#@markdown * Sign in to your Google Account.\n",
+    "\n",
+    "#@markdown * Copy the authorization code.\n",
+    "\n",
+    "#@markdown * Enter the authorization code.\n",
+    "\n",
+    "#@markdown * Click on \"Files\" on the top left. Refresh the tab. Your Google Drive folder should now be available here as \"drive\".\n",
+    "\n",
+    "#mounts user's Google Drive to Google Colab.\n",
+    "\n",
+    "from google.colab import drive\n",
+    "drive.mount('/content/gdrive')\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "n4yWFoJNnoin"
+   },
+   "source": [
+    "# **2. Install CARE and dependencies**\n",
+    "---\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "3u2mXn3XsWzd"
+   },
+   "outputs": [],
+   "source": [
+    "Notebook_version = ['1.12']\n",
+    "\n",
+    "\n",
+    "#@markdown ##Install CARE and dependencies\n",
+    "\n",
+    "#Libraries contain tools for certain tasks.\n",
+    "#For example, the tifffile library contains tools to handle tif-files.\n",
+    "\n",
+    "#Here, we install libraries which are not already included in Colab.\n",
+    "\n",
+    "\n",
+    "!pip install tifffile # contains tools to operate tiff-files\n",
+    "!pip install csbdeep # contains tools for restoration of fluorescence microscopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.\n",
+    "!pip install wget\n",
+    "!pip install memory_profiler\n",
+    "!pip install fpdf\n",
+    "%load_ext memory_profiler\n",
+    "\n",
+    "#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.\n",
+    "%tensorflow_version 1.x\n",
+    "\n",
+    "from __future__ import print_function, unicode_literals, absolute_import, division\n",
+    "\n",
+    "import sys\n",
+    "before = [str(m) for m in sys.modules]\n",
+    "\n",
+    "import tensorflow\n",
+    "import tensorflow as tf\n",
+    "\n",
+    "print(tensorflow.__version__)\n",
+    "print(\"Tensorflow enabled.\")\n",
+    "\n",
+    "# ------- Variable specific to CARE -------\n",
+    "from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path\n",
+    "from csbdeep.data import RawData, create_patches\n",
+    "from csbdeep.io import load_training_data, save_tiff_imagej_compatible\n",
+    "from csbdeep.models import Config, CARE\n",
+    "from csbdeep import data\n",
+    "%matplotlib inline\n",
+    "%config InlineBackend.figure_format = 'retina'\n",
+    "\n",
+    "\n",
+    "\n",
+    "# ------- Common variable to all ZeroCostDL4Mic notebooks -------\n",
+    "import numpy as np\n",
+    "from matplotlib import pyplot as plt\n",
+    "import urllib\n",
+    "import os, random\n",
+    "import shutil\n",
+    "import zipfile\n",
+    "import csv  # used by the training and quality-control sections below\n",
+    "from tifffile import imread, imsave\n",
+    "import time\n",
+    "import sys\n",
+    "import wget\n",
+    "from pathlib import Path\n",
+    "import pandas as pd\n",
+    "from glob import glob\n",
+    "from scipy import signal\n",
+    "from scipy import ndimage\n",
+    "from skimage import io\n",
+    "from sklearn.linear_model import LinearRegression\n",
+    "from skimage.util import img_as_uint\n",
+    "import matplotlib as mpl\n",
+    "from skimage.metrics import structural_similarity\n",
+    "from skimage.metrics import peak_signal_noise_ratio as psnr\n",
+    "from astropy.visualization import simple_norm\n",
+    "from skimage import img_as_float32\n",
+    "from skimage.util import img_as_ubyte\n",
+    "from tqdm import tqdm\n",
+    "from fpdf import FPDF, HTMLMixin\n",
+    "from datetime import datetime\n",
+    "import subprocess\n",
+    "from pip._internal.operations.freeze import freeze\n",
+    "\n",
+    "# Colors for the warning messages\n",
+    "class bcolors:\n",
+    "  WARNING = '\\033[31m'\n",
+    "\n",
+    "W = '\\033[0m'  # white (normal)\n",
+    "R = '\\033[31m'  # red\n",
+    "\n",
+    "#Disable some of the tensorflow warnings\n",
+    "import warnings\n",
+    "warnings.filterwarnings(\"ignore\")\n",
+    "\n",
+    "print(\"Libraries installed\")\n",
+    "\n",
+    "\n",
+    "# Check if this is the latest version of the notebook\n",
+    "Latest_notebook_version = pd.read_csv(\"https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv\")\n",
+    "\n",
+    "if Notebook_version == list(Latest_notebook_version.columns):\n",
+    "  print(\"This notebook is up-to-date.\")\n",
+    "\n",
+    "if not Notebook_version == list(Latest_notebook_version.columns):\n",
+    "  print(bcolors.WARNING +\"A new version of this notebook has been released.
We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\")\n", + "\n", + "!pip freeze > requirements.txt\n", + "\n", + "#Create a pdf document with training summary\n", + "\n", + "def pdf_export(trained = False, augmentation = False, pretrained_model = False):\n", + " # save FPDF() class into a \n", + " # variable pdf \n", + " #from datetime import datetime\n", + "\n", + " class MyFPDF(FPDF, HTMLMixin):\n", + " pass\n", + "\n", + " pdf = MyFPDF()\n", + " pdf.add_page()\n", + " pdf.set_right_margin(-1)\n", + " pdf.set_font(\"Arial\", size = 11, style='B') \n", + "\n", + " Network = 'CARE 2D'\n", + " day = datetime.now()\n", + " datetime_str = str(day)[0:10]\n", + "\n", + " Header = 'Training report for '+Network+' model ('+model_name+')\\nDate: '+datetime_str\n", + " pdf.multi_cell(180, 5, txt = Header, align = 'L') \n", + "\n", + " # add another cell \n", + " if trained:\n", + " training_time = \"Training time: \"+str(hour)+ \"hour(s) \"+str(mins)+\"min(s) \"+str(round(sec))+\"sec(s)\"\n", + " pdf.cell(190, 5, txt = training_time, ln = 1, align='L')\n", + " pdf.ln(1)\n", + "\n", + " Header_2 = 'Information for your materials and methods:'\n", + " pdf.cell(190, 5, txt=Header_2, ln=1, align='L')\n", + "\n", + " all_packages = ''\n", + " for requirement in freeze(local_only=True):\n", + " all_packages = all_packages+requirement+', '\n", + " #print(all_packages)\n", + "\n", + " #Main Packages\n", + " main_packages = ''\n", + " version_numbers = []\n", + " for name in ['tensorflow','numpy','Keras','csbdeep']:\n", + " find_name=all_packages.find(name)\n", + " main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '\n", + " #Version numbers only here:\n", + " version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])\n", + "\n", + " cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)\n", + " cuda_version = cuda_version.stdout.decode('utf-8')\n", + " cuda_version = cuda_version[cuda_version.find(', V')+3:-1]\n", + " gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)\n", + " gpu_name = gpu_name.stdout.decode('utf-8')\n", + " gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]\n", + " #print(cuda_version[cuda_version.find(', V')+3:-1])\n", + " #print(gpu_name)\n", + "\n", + " shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape\n", + " dataset_size = len(os.listdir(Training_source))\n", + "\n", + " text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). 
The training was accelerated using a '+gpu_name+' GPU.'\n",
+    "\n",
+    "  if pretrained_model:\n",
+    "    text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'\n",
+    "\n",
+    "  pdf.set_font('')\n",
+    "  pdf.set_font_size(10.)\n",
+    "  pdf.multi_cell(190, 5, txt = text, align='L')\n",
+    "  pdf.set_font('')\n",
+    "  pdf.set_font('Arial', size = 10, style = 'B')\n",
+    "  pdf.ln(1)\n",
+    "  pdf.cell(28, 5, txt='Augmentation: ', ln=0)\n",
+    "  pdf.set_font('')\n",
+    "  if augmentation:\n",
+    "    aug_text = 'The dataset was augmented by a factor of '+str(Multiply_dataset_by)+' by'\n",
+    "    if rotate_270_degrees != 0 or rotate_90_degrees != 0:\n",
+    "      aug_text = aug_text+'\\n- rotation'\n",
+    "    if flip_left_right != 0 or flip_top_bottom != 0:\n",
+    "      aug_text = aug_text+'\\n- flipping'\n",
+    "    if random_zoom_magnification != 0:\n",
+    "      aug_text = aug_text+'\\n- random zoom magnification'\n",
+    "    if random_distortion != 0:\n",
+    "      aug_text = aug_text+'\\n- random distortion'\n",
+    "    if image_shear != 0:\n",
+    "      aug_text = aug_text+'\\n- image shearing'\n",
+    "    if skew_image != 0:\n",
+    "      aug_text = aug_text+'\\n- image skewing'\n",
+    "  else:\n",
+    "    aug_text = 'No augmentation was used for training.'\n",
+    "  pdf.multi_cell(190, 5, txt=aug_text, align='L')\n",
+    "  pdf.set_font('Arial', size = 11, style = 'B')\n",
+    "  pdf.ln(1)\n",
+    "  pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)\n",
+    "  pdf.set_font('')\n",
+    "  pdf.set_font_size(10.)\n",
+    "  if Use_Default_Advanced_Parameters:\n",
+    "    pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')\n",
+    "  pdf.cell(200, 5, txt='The following parameters were used for training:')\n",
+    "  pdf.ln(1)\n",
+    "  html = \"\"\"\n",
+    "  <table width=40% style=\"margin-left:0px;\">\n",
+    "    <tr>\n",
+    "      <th align=\"left\">Parameter</th>\n",
+    "      <th align=\"left\">Value</th>\n",
+    "    </tr>\n",
+    "    <tr><td>number_of_epochs</td><td>{0}</td></tr>\n",
+    "    <tr><td>patch_size</td><td>{1}</td></tr>\n",
+    "    <tr><td>number_of_patches</td><td>{2}</td></tr>\n",
+    "    <tr><td>batch_size</td><td>{3}</td></tr>\n",
+    "    <tr><td>number_of_steps</td><td>{4}</td></tr>\n",
+    "    <tr><td>percentage_validation</td><td>{5}</td></tr>\n",
+    "    <tr><td>initial_learning_rate</td><td>{6}</td></tr>\n",
+    "  </table>
\n", + " \"\"\".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),number_of_patches,batch_size,number_of_steps,percentage_validation,initial_learning_rate)\n", + " pdf.write_html(html)\n", + "\n", + " #pdf.multi_cell(190, 5, txt = text_2, align='L')\n", + " pdf.set_font(\"Arial\", size = 11, style='B')\n", + " pdf.ln(1)\n", + " pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size = 10, style = 'B')\n", + " pdf.cell(29, 5, txt= 'Training_source:', align = 'L', ln=0)\n", + " pdf.set_font('')\n", + " pdf.multi_cell(170, 5, txt = Training_source, align = 'L')\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size = 10, style = 'B')\n", + " pdf.cell(27, 5, txt= 'Training_target:', align = 'L', ln=0)\n", + " pdf.set_font('')\n", + " pdf.multi_cell(170, 5, txt = Training_target, align = 'L')\n", + " #pdf.cell(190, 5, txt=aug_text, align='L', ln=1)\n", + " pdf.ln(1)\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size = 10, style = 'B')\n", + " pdf.cell(22, 5, txt= 'Model Path:', align = 'L', ln=0)\n", + " pdf.set_font('')\n", + " pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')\n", + " pdf.ln(1)\n", + " pdf.cell(60, 5, txt = 'Example Training pair', ln=1)\n", + " pdf.ln(1)\n", + " exp_size = io.imread('/content/TrainingDataExample_CARE2D.png').shape\n", + " pdf.image('/content/TrainingDataExample_CARE2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))\n", + " pdf.ln(1)\n", + " ref_1 = 'References:\\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. \"ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy.\" BioRxiv (2020).'\n", + " pdf.multi_cell(190, 5, txt = ref_1, align='L')\n", + " ref_2 = '- CARE: Weigert, Martin, et al. \"Content-aware image restoration: pushing the limits of fluorescence microscopy.\" Nature methods 15.12 (2018): 1090-1097.'\n", + " pdf.multi_cell(190, 5, txt = ref_2, align='L')\n", + " if augmentation:\n", + " ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. 
\"Augmentor: an image augmentation library for machine learning.\" arXiv preprint arXiv:1708.04680 (2017).'\n", + " pdf.multi_cell(190, 5, txt = ref_3, align='L')\n", + " pdf.ln(3)\n", + " reminder = 'Important:\\nRemember to perform the quality control step on all newly trained models\\nPlease consider depositing your training dataset on Zenodo'\n", + " pdf.set_font('Arial', size = 11, style='B')\n", + " pdf.multi_cell(190, 5, txt=reminder, align='C')\n", + "\n", + " pdf.output(model_path+'/'+model_name+'/'+model_name+\"_training_report.pdf\")\n", + "\n", + "\n", + "#Make a pdf summary of the QC results\n", + "\n", + "def qc_pdf_export():\n", + " class MyFPDF(FPDF, HTMLMixin):\n", + " pass\n", + "\n", + " pdf = MyFPDF()\n", + " pdf.add_page()\n", + " pdf.set_right_margin(-1)\n", + " pdf.set_font(\"Arial\", size = 11, style='B') \n", + "\n", + " Network = 'CARE 2D'\n", + " #model_name = os.path.basename(full_QC_model_path)\n", + " day = datetime.now()\n", + " datetime_str = str(day)[0:10]\n", + "\n", + " Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\\nDate: '+datetime_str\n", + " pdf.multi_cell(180, 5, txt = Header, align = 'L') \n", + "\n", + " all_packages = ''\n", + " for requirement in freeze(local_only=True):\n", + " all_packages = all_packages+requirement+', '\n", + "\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size = 11, style = 'B')\n", + " pdf.ln(2)\n", + " pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')\n", + " pdf.ln(1)\n", + " exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape\n", + " if os.path.exists(full_QC_model_path+'Quality Control/lossCurvePlots.png'):\n", + " pdf.image(full_QC_model_path+'Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/13))\n", + " else:\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size=10)\n", + " pdf.multi_cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.', align='L')\n", + " pdf.ln(2)\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size = 10, style = 'B')\n", + " pdf.ln(3)\n", + " pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)\n", + " pdf.ln(1)\n", + " exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape\n", + " pdf.image(full_QC_model_path+'Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))\n", + " pdf.ln(1)\n", + " pdf.set_font('')\n", + " pdf.set_font('Arial', size = 11, style = 'B')\n", + " pdf.ln(1)\n", + " pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)\n", + " pdf.set_font('')\n", + " pdf.set_font_size(10.)\n", + "\n", + " pdf.ln(1)\n", + " html = \"\"\"\n", + " \n", + " \n", + " \"\"\"\n", + " with open(full_QC_model_path+'Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:\n", + " metrics = csv.reader(csvfile)\n", + " header = next(metrics)\n", + " image = header[0]\n", + " mSSIM_PvsGT = header[1]\n", + " mSSIM_SvsGT = header[2]\n", + " NRMSE_PvsGT = header[3]\n", + " NRMSE_SvsGT = header[4]\n", + " PSNR_PvsGT = header[5]\n", + " PSNR_SvsGT = header[6]\n", + " header = \"\"\"\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \"\"\".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)\n", + " html = html+header\n", + " for row in metrics:\n", + " image = 
row[0]\n", + " mSSIM_PvsGT = row[1]\n", + " mSSIM_SvsGT = row[2]\n", + " NRMSE_PvsGT = row[3]\n", + " NRMSE_SvsGT = row[4]\n", + " PSNR_PvsGT = row[5]\n", + " PSNR_SvsGT = row[6]\n", + " cells = \"\"\"\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \"\"\".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))\n", + " html = html+cells\n", + " html = html+\"\"\"
{0}{1}{2}{3}{4}{5}{6}
{0}{1}{2}{3}{4}{5}{6}
\"\"\"\n", + "\n", + " pdf.write_html(html)\n", + "\n", + " pdf.ln(1)\n", + " pdf.set_font('')\n", + " pdf.set_font_size(10.)\n", + " ref_1 = 'References:\\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. \"ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy.\" BioRxiv (2020).'\n", + " pdf.multi_cell(190, 5, txt = ref_1, align='L')\n", + " ref_2 = '- CARE: Weigert, Martin, et al. \"Content-aware image restoration: pushing the limits of fluorescence microscopy.\" Nature methods 15.12 (2018): 1090-1097.'\n", + " pdf.multi_cell(190, 5, txt = ref_2, align='L')\n", + "\n", + " pdf.ln(3)\n", + " reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'\n", + "\n", + " pdf.set_font('Arial', size = 11, style='B')\n", + " pdf.multi_cell(190, 5, txt=reminder, align='C')\n", + "\n", + " pdf.output(full_QC_model_path+'Quality Control/'+QC_model_name+'_QC_report.pdf')\n", + "\n", + "\n", + "# Exporting requirements.txt for local run\n", + "!pip freeze > requirements.txt\n", + "\n", + "after = [str(m) for m in sys.modules]\n", + "# Get minimum requirements file\n", + "\n", + "#Add the following lines before all imports: \n", + "# import sys\n", + "# before = [str(m) for m in sys.modules]\n", + "\n", + "#Add the following line after the imports:\n", + "# after = [str(m) for m in sys.modules]\n", + "\n", + "from builtins import any as b_any\n", + "\n", + "def filter_files(file_list, filter_list):\n", + " filtered_list = []\n", + " for fname in file_list:\n", + " if b_any(fname.split('==')[0] in s for s in filter_list):\n", + " filtered_list.append(fname)\n", + " return filtered_list\n", + "\n", + "df = pd.read_csv('requirements.txt', delimiter = \"\\n\")\n", + "mod_list = [m.split('.')[0] for m in after if not m in before]\n", + "req_list_temp = df.values.tolist()\n", + "req_list = [x[0] for x in req_list_temp]\n", + "\n", + "# Replace with package name \n", + "mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]\n", + "mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list] \n", + "filtered_list = filter_files(req_list, mod_replace_list)\n", + "\n", + "\n", + "file=open('CARE_2D_requirements_simple.txt','w')\n", + "for item in filtered_list:\n", + " file.writelines(item + '\\n')\n", + "\n", + "file.close()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Fw0kkTU6CsU4" + }, + "source": [ + "# **3. Select your parameters and paths**\n", + "\n", + "---\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WzYAA-MuaYrT" + }, + "source": [ + "## **3.1. Setting main training parameters**\n", + "---\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CB6acvUFtWqd" + }, + "source": [ + " **Paths for training, predictions and results**\n", + "\n", + "**`Training_source:`, `Training_target`:** These are the paths to your folders containing the Training_source (Low SNR images) and Training_target (High SNR images or ground truth) training data respecively. 
To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path**, and pasting it into the right box below.\n",
+    "\n",
+    "**`model_name`:** Use only my_model-style names, not my-model (use \"_\" instead of \"-\"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder), as it will be overwritten.\n",
+    "\n",
+    "**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).\n",
+    "\n",
+    "**Training Parameters**\n",
+    "\n",
+    "**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see 5.). **Default value: 50**\n",
+    "\n",
+    "**`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 80**\n",
+    "\n",
+    "**When choosing the patch_size, the value should be i) large enough that it will enclose many instances, ii) small enough that the resulting patches fit into RAM.**\n",
+    "\n",
+    "**`number_of_patches`:** Input the number of patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 100**\n",
+    "\n",
+    "**Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.**\n",
+    "\n",
+    "**Advanced Parameters - experienced users only**\n",
+    "\n",
+    "**`batch_size`:** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16**\n",
+    "\n",
+    "**`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each patch is seen at least once per epoch. **Default value: number of patches / batch_size**\n",
+    "\n",
+    "**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10**\n",
+    "\n",
+    "**`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0004**\n",
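+    "\n",
+    "_Worked example (ours, not from the original text): with 40 training images, the defaults above give 40 × 100 / 16 = 250 steps per epoch._"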
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "ewpNJ_I0Mv47"
+   },
+   "outputs": [],
+   "source": [
+    "#@markdown ###Path to training images:\n",
+    "\n",
+    "Training_source = \"\" #@param {type:\"string\"}\n",
+    "InputFile = Training_source+\"/*.tif\"\n",
+    "\n",
+    "Training_target = \"\" #@param {type:\"string\"}\n",
+    "OutputFile = Training_target+\"/*.tif\"\n",
+    "\n",
+    "#Define where the patch file will be saved\n",
+    "base = \"/content\"\n",
+    "\n",
+    "\n",
+    "# model name and path\n",
+    "#@markdown ###Name of the model and path to model folder:\n",
+    "model_name = \"\" #@param {type:\"string\"}\n",
+    "model_path = \"\" #@param {type:\"string\"}\n",
+    "\n",
+    "# other parameters for training.\n",
+    "#@markdown ###Training Parameters\n",
+    "#@markdown Number of epochs:\n",
+    "number_of_epochs = 80#@param {type:\"number\"}\n",
+    "\n",
+    "#@markdown Patch size (pixels) and number\n",
+    "patch_size = 80#@param {type:\"number\"} # in pixels\n",
+    "number_of_patches = 100#@param {type:\"number\"}\n",
+    "\n",
+    "#@markdown ###Advanced Parameters\n",
+    "\n",
+    "Use_Default_Advanced_Parameters = True #@param {type:\"boolean\"}\n",
+    "#@markdown ###If not, please input:\n",
+    "\n",
+    "batch_size = 16#@param {type:\"number\"}\n",
+    "number_of_steps = 400#@param {type:\"number\"}\n",
+    "percentage_validation = 10 #@param {type:\"number\"}\n",
+    "initial_learning_rate = 0.0004 #@param {type:\"number\"}\n",
+    "\n",
+    "\n",
+    "if (Use_Default_Advanced_Parameters):\n",
+    "  print(\"Default advanced parameters enabled\")\n",
+    "  batch_size = 16\n",
+    "  percentage_validation = 10\n",
+    "  initial_learning_rate = 0.0004\n",
+    "\n",
+    "#Here we define the percentage to use for validation\n",
+    "percentage = percentage_validation/100\n",
+    "\n",
+    "\n",
+    "#here we check that no model with the same name already exists; if so, print a warning\n",
+    "if os.path.exists(model_path+'/'+model_name):\n",
+    "  print(bcolors.WARNING +\"!! WARNING: \"+model_name+\" already exists and will be deleted in the following cell !!\")\n",
+    "  print(bcolors.WARNING +\"To continue training \"+model_name+\", choose a new model_name here, and load \"+model_name+\" in section 3.3\"+W)\n",
+    "\n",
+    "\n",
+    "# Here we disable pre-trained model by default (in case the cell is not run)\n",
+    "Use_pretrained_model = False\n",
+    "\n",
+    "# Here we disable data augmentation by default (in case the cell is not run)\n",
+    "\n",
+    "Use_Data_augmentation = False\n",
+    "\n",
+    "# The shape of the images.\n",
+    "x = imread(InputFile)\n",
+    "y = imread(OutputFile)\n",
+    "\n",
+    "print('Loaded Input images (number, width, length) =', x.shape)\n",
+    "print('Loaded Output images (number, width, length) =', y.shape)\n",
+    "print(\"Parameters initiated.\")\n",
+    "\n",
+    "# This will display a randomly chosen dataset input and output\n",
+    "random_choice = random.choice(os.listdir(Training_source))\n",
+    "x = imread(Training_source+\"/\"+random_choice)\n",
+    "\n",
+    "\n",
+    "# Here we check that the input images contain the expected dimensions\n",
+    "if len(x.shape) == 2:\n",
+    "  print(\"Image dimensions (y,x)\",x.shape)\n",
+    "\n",
+    "if not len(x.shape) == 2:\n",
+    "  print(bcolors.WARNING +\"Your images appear to have the wrong dimensions. Image dimension\",x.shape)\n",
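+    "\n",
+    "# Note (added): patch sides must be divisible by 8 because the U-Net configured\n",
+    "# in section 4.1 uses unet_n_depth=3, i.e. three 2x downsamplings (2**3 = 8).\n",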
Image dimension\",x.shape)\n", + "\n", + "\n", + "#Find image XY dimension\n", + "Image_Y = x.shape[0]\n", + "Image_X = x.shape[1]\n", + "\n", + "#Hyperparameters failsafes\n", + "\n", + "# Here we check that patch_size is smaller than the smallest xy dimension of the image \n", + "\n", + "if patch_size > min(Image_Y, Image_X):\n", + " patch_size = min(Image_Y, Image_X)\n", + " print (bcolors.WARNING + \" Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:\",patch_size)\n", + "\n", + "# Here we check that patch_size is divisible by 8\n", + "if not patch_size % 8 == 0:\n", + " patch_size = ((int(patch_size / 8)-1) * 8)\n", + " print (bcolors.WARNING + \" Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:\",patch_size)\n", + "\n", + "\n", + "os.chdir(Training_target)\n", + "y = imread(Training_target+\"/\"+random_choice)\n", + "\n", + "f=plt.figure(figsize=(16,8))\n", + "plt.subplot(1,2,1)\n", + "plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')\n", + "plt.title('Training source')\n", + "plt.axis('off');\n", + "\n", + "plt.subplot(1,2,2)\n", + "plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')\n", + "plt.title('Training target')\n", + "plt.axis('off');\n", + "plt.savefig('/content/TrainingDataExample_CARE2D.png',bbox_inches='tight',pad_inches=0)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xGcl7WGP4WHt" + }, + "source": [ + "## **3.2. Data augmentation**\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "5Lio8hpZ4PJ1" + }, + "source": [ + "Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting), without augmentation. Augmentation is not necessary for training and if your training dataset is large you should disable it.\n", + "\n", + " **However, data augmentation is not a magic solution and may also introduce issues. 
Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performance.**\n",
+    "\n",
+    "Data augmentation is performed here by [Augmentor](https://github.com/mdbloice/Augmentor).\n",
+    "\n",
+    "[Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article:\n",
+    "\n",
+    "Marcus D Bloice, Peter M Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259\n",
+    "\n",
+    "**Please also cite this original paper when publishing results obtained using this notebook with augmentation enabled.**"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "htqjkJWt5J_8"
+   },
+   "outputs": [],
+   "source": [
+    "#Data augmentation\n",
+    "\n",
+    "Use_Data_augmentation = False #@param {type:\"boolean\"}\n",
+    "\n",
+    "if Use_Data_augmentation:\n",
+    "  !pip install Augmentor\n",
+    "  import Augmentor\n",
+    "\n",
+    "\n",
+    "#@markdown ####Choose a factor by which you want to multiply your original dataset\n",
+    "\n",
+    "Multiply_dataset_by = 2 #@param {type:\"slider\", min:1, max:30, step:1}\n",
+    "\n",
+    "Save_augmented_images = False #@param {type:\"boolean\"}\n",
+    "\n",
+    "Saving_path = \"\" #@param {type:\"string\"}\n",
+    "\n",
+    "\n",
+    "Use_Default_Augmentation_Parameters = True #@param {type:\"boolean\"}\n",
+    "#@markdown ###If not, please choose the probability of the following image manipulations to be used to augment your dataset (1 = always used; 0 = disabled):\n",
+    "\n",
+    "#@markdown ####Mirror and rotate images\n",
+    "rotate_90_degrees = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "rotate_270_degrees = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "flip_left_right = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "flip_top_bottom = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "#@markdown ####Random image Zoom\n",
+    "\n",
+    "random_zoom = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "random_zoom_magnification = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "#@markdown ####Random image distortion\n",
+    "\n",
+    "random_distortion = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "\n",
+    "#@markdown ####Image shearing and skewing\n",
+    "\n",
+    "image_shear = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "max_image_shear = 1 #@param {type:\"slider\", min:1, max:25, step:1}\n",
+    "\n",
+    "skew_image = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "skew_image_magnitude = 0 #@param {type:\"slider\", min:0, max:1, step:0.1}\n",
+    "\n",
+    "\n",
+    "if Use_Default_Augmentation_Parameters:\n",
+    "  rotate_90_degrees = 0.5\n",
+    "  rotate_270_degrees = 0.5\n",
+    "  flip_left_right = 0.5\n",
+    "  flip_top_bottom = 0.5\n",
+    "\n",
+    "  if not Multiply_dataset_by >5:\n",
+    "    random_zoom = 0\n",
+    "    random_zoom_magnification = 0.9\n",
+    "    random_distortion = 0\n",
+    "    image_shear = 0\n",
+    "    max_image_shear = 10\n",
+    "    skew_image = 0\n",
+    "    skew_image_magnitude = 0\n",
+    "\n",
+    "  if Multiply_dataset_by >5:\n",
+    "    random_zoom = 0.1\n",
+    "    random_zoom_magnification = 0.9\n",
+    "    random_distortion = 0.5\n",
+    "    image_shear = 0.2\n",
+    "    max_image_shear = 5\n",
+    "    skew_image = 0.2\n",
+    "    skew_image_magnitude = 0.4\n",
+    "\n",
+    "  if Multiply_dataset_by >25:\n",
+    "    random_zoom = 0.5\n",
+    "    
random_zoom_magnification = 0.8\n",
+    "    random_distortion = 0.5\n",
+    "    image_shear = 0.5\n",
+    "    max_image_shear = 20\n",
+    "    skew_image = 0.5\n",
+    "    skew_image_magnitude = 0.6\n",
+    "\n",
+    "\n",
+    "list_files = os.listdir(Training_source)\n",
+    "Nb_files = len(list_files)\n",
+    "\n",
+    "Nb_augmented_files = (Nb_files * Multiply_dataset_by)\n",
+    "\n",
+    "\n",
+    "if Use_Data_augmentation:\n",
+    "  print(\"Data augmentation enabled\")\n",
+    "# Here we set the paths for the various folders where the augmented images will be loaded\n",
+    "\n",
+    "# All images are first saved into the augmented folder\n",
+    "  #Augmented_folder = \"/content/Augmented_Folder\"\n",
+    "\n",
+    "  if not Save_augmented_images:\n",
+    "    Saving_path= \"/content\"\n",
+    "\n",
+    "  Augmented_folder = Saving_path+\"/Augmented_Folder\"\n",
+    "  if os.path.exists(Augmented_folder):\n",
+    "    shutil.rmtree(Augmented_folder)\n",
+    "  os.makedirs(Augmented_folder)\n",
+    "\n",
+    "  #Training_source_augmented = \"/content/Training_source_augmented\"\n",
+    "  Training_source_augmented = Saving_path+\"/Training_source_augmented\"\n",
+    "\n",
+    "  if os.path.exists(Training_source_augmented):\n",
+    "    shutil.rmtree(Training_source_augmented)\n",
+    "  os.makedirs(Training_source_augmented)\n",
+    "\n",
+    "  #Training_target_augmented = \"/content/Training_target_augmented\"\n",
+    "  Training_target_augmented = Saving_path+\"/Training_target_augmented\"\n",
+    "\n",
+    "  if os.path.exists(Training_target_augmented):\n",
+    "    shutil.rmtree(Training_target_augmented)\n",
+    "  os.makedirs(Training_target_augmented)\n",
+    "\n",
+    "\n",
+    "# Here we generate the augmented images\n",
+    "#Load the images\n",
+    "  p = Augmentor.Pipeline(Training_source, Augmented_folder)\n",
+    "\n",
+    "#Define the matching images\n",
+    "  p.ground_truth(Training_target)\n",
+    "#Define the augmentation possibilities\n",
+    "  if not rotate_90_degrees == 0:\n",
+    "    p.rotate90(probability=rotate_90_degrees)\n",
+    "\n",
+    "  if not rotate_270_degrees == 0:\n",
+    "    p.rotate270(probability=rotate_270_degrees)\n",
+    "\n",
+    "  if not flip_left_right == 0:\n",
+    "    p.flip_left_right(probability=flip_left_right)\n",
+    "\n",
+    "  if not flip_top_bottom == 0:\n",
+    "    p.flip_top_bottom(probability=flip_top_bottom)\n",
+    "\n",
+    "  if not random_zoom == 0:\n",
+    "    p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)\n",
+    "\n",
+    "  if not random_distortion == 0:\n",
+    "    p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)\n",
+    "\n",
+    "  if not image_shear == 0:\n",
+    "    p.shear(probability=image_shear,max_shear_left=20,max_shear_right=20)\n",
+    "\n",
+    "  if not skew_image == 0:\n",
+    "    p.skew(probability=skew_image,magnitude=skew_image_magnitude)\n",
+    "\n",
+    "  p.sample(int(Nb_augmented_files))\n",
+    "\n",
+    "  print(int(Nb_augmented_files),\"matching images generated\")\n",
+    "\n",
+    "# Here we sort through the images and move them back to the augmented training source and target folders\n",
+    "\n",
+    "  augmented_files = os.listdir(Augmented_folder)\n",
+    "\n",
+    "  for f in augmented_files:\n",
+    "\n",
+    "    if (f.startswith(\"_groundtruth_(1)_\")):\n",
+    "      shortname_noprefix = f[17:]\n",
+    "      shutil.copyfile(Augmented_folder+\"/\"+f, Training_target_augmented+\"/\"+shortname_noprefix)\n",
+    "    if not (f.startswith(\"_groundtruth_(1)_\")):\n",
+    "      shutil.copyfile(Augmented_folder+\"/\"+f, Training_source_augmented+\"/\"+f)\n",
+    "\n",
+    "\n",
+    "  for filename in os.listdir(Training_source_augmented):\n",
+    "    
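# Note (added): Augmentor includes an '_original' tag in the source filenames;\n",
+    "    # removing it below makes the source and target names match again.\n",
+    "    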
os.chdir(Training_source_augmented)\n",
+    "    os.rename(filename, filename.replace('_original', ''))\n",
+    "\n",
+    "  #Here we clean up the extra files\n",
+    "  shutil.rmtree(Augmented_folder)\n",
+    "\n",
+    "if not Use_Data_augmentation:\n",
+    "  print(bcolors.WARNING+\"Data augmentation disabled\")\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "bQDuybvyadKU"
+   },
+   "source": [
+    "\n",
+    "## **3.3. Using weights from a pre-trained model as initial weights**\n",
+    "---\n",
+    "Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a CARE 2D model.**\n",
+    "\n",
+    "This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch.**\n",
+    "\n",
+    "In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "8vPkzEBNamE4"
+   },
+   "outputs": [],
+   "source": [
+    "# @markdown ##Loading weights from a pre-trained network\n",
+    "\n",
+    "Use_pretrained_model = False #@param {type:\"boolean\"}\n",
+    "\n",
+    "pretrained_model_choice = \"Model_from_file\" #@param [\"Model_from_file\"]\n",
+    "\n",
+    "Weights_choice = \"best\" #@param [\"last\", \"best\"]\n",
+    "\n",
+    "\n",
+    "#@markdown ###If you chose \"Model_from_file\", please provide the path to the model folder:\n",
+    "pretrained_model_path = \"\" #@param {type:\"string\"}\n",
+    "\n",
+    "# --------------------- Check if we load a previously trained model ------------------------\n",
+    "if Use_pretrained_model:\n",
+    "\n",
+    "# --------------------- Load the model from the chosen path ------------------------\n",
+    "  if pretrained_model_choice == \"Model_from_file\":\n",
+    "    h5_file_path = os.path.join(pretrained_model_path, \"weights_\"+Weights_choice+\".h5\")\n",
+    "\n",
+    "# --------------------- Download a model provided in the XXX ------------------------\n",
+    "\n",
+    "  if pretrained_model_choice == \"Model_name\":\n",
+    "    pretrained_model_name = \"Model_name\"\n",
+    "    pretrained_model_path = \"/content/\"+pretrained_model_name\n",
+    "    print(\"Downloading the 2D_Demo_Model_from_Stardist_2D_paper\")\n",
+    "    if os.path.exists(pretrained_model_path):\n",
+    "      shutil.rmtree(pretrained_model_path)\n",
+    "    os.makedirs(pretrained_model_path)\n",
+    "    wget.download(\"\", pretrained_model_path)\n",
+    "    wget.download(\"\", pretrained_model_path)\n",
+    "    wget.download(\"\", pretrained_model_path)\n",
+    "    wget.download(\"\", pretrained_model_path)\n",
+    "    h5_file_path = os.path.join(pretrained_model_path, \"weights_\"+Weights_choice+\".h5\")\n",
+    "\n",
+    "# --------------------- Add additional pre-trained models here ------------------------\n",
+    "\n",
+    "\n",
+    "\n",
+    "# --------------------- Check the model exists ------------------------\n",
+    "# If the chosen model path does not contain a pretrained model, then use_pretrained_model is disabled,\n",
+    "  if not os.path.exists(h5_file_path):\n",
+    "    
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist')\n",
+    "    Use_pretrained_model = False\n",
+    "\n",
+    "\n",
+    "# If the model path contains a pretrained model, we load the learning rate,\n",
+    "  if os.path.exists(h5_file_path):\n",
+    "#Here we check if the learning rate can be loaded from the quality control folder\n",
+    "    if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):\n",
+    "      with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:\n",
+    "        csvRead = pd.read_csv(csvfile, sep=',')\n",
+    "        #print(csvRead)\n",
+    "        if \"learning rate\" in csvRead.columns: #Here we check that the learning rate column exists (for compatibility with models trained in ZeroCostDL4Mic below 1.4)\n",
+    "          print(\"pretrained network learning rate found\")\n",
+    "          #find the last learning rate\n",
+    "          lastLearningRate = csvRead[\"learning rate\"].iloc[-1]\n",
+    "          #Find the learning rate corresponding to the lowest validation loss\n",
+    "          min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]\n",
+    "          #print(min_val_loss)\n",
+    "          bestLearningRate = min_val_loss['learning rate'].iloc[-1]\n",
+    "          if Weights_choice == \"last\":\n",
+    "            print('Last learning rate: '+str(lastLearningRate))\n",
+    "          if Weights_choice == \"best\":\n",
+    "            print('Learning rate of best validation loss: '+str(bestLearningRate))\n",
+    "        if not \"learning rate\" in csvRead.columns: #if the column does not exist, then the initial learning rate is used instead\n",
+    "          bestLearningRate = initial_learning_rate\n",
+    "          lastLearningRate = initial_learning_rate\n",
+    "          print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead')\n",
+    "\n",
+    "#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used\n",
+    "    if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):\n",
+    "      print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead')\n",
+    "      bestLearningRate = initial_learning_rate\n",
+    "      lastLearningRate = initial_learning_rate\n",
+    "\n",
+    "\n",
+    "# Display info about the pretrained model to be loaded (or not)\n",
+    "if Use_pretrained_model:\n",
+    "  print('Weights found in:')\n",
+    "  print(h5_file_path)\n",
+    "  print('will be loaded prior to training.')\n",
+    "\n",
+    "else:\n",
+    "  print(bcolors.WARNING+'No pretrained network will be used.')\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "rQndJj70FzfL"
+   },
+   "source": [
+    "# **4. Train the network**\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "tGW2iaU6X5zi"
+   },
+   "source": [
+    "## **4.1. Prepare the training data and model for training**\n",
+    "---\n",
+    "Here, we use the information from section 3 to build the model and convert the training data into a suitable format for training."
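,
+    "\n",
+    "*Aside (ours, hedged):* once the cell below has run, `create_patches` has produced patch arrays whose first axis indexes the patches (axes `SCYX`), and `load_training_data` has split off the validation set. A quick way to confirm the shapes:\n",
+    "\n",
+    "```python\n",
+    "# hypothetical check after running the cell below:\n",
+    "print(axes)              # 'SCYX'\n",
+    "print(X.shape, Y.shape)  # e.g. (3600, 1, 80, 80) for 40 images x 100 patches with a 10% validation split\n",
+    "```"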
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "WMJnGJpCMa4y"
+   },
+   "outputs": [],
+   "source": [
+    "#@markdown ##Create the model and dataset objects\n",
+    "\n",
+    "# --------------------- Here we delete the model folder if it already exists ------------------------\n",
+    "\n",
+    "if os.path.exists(model_path+'/'+model_name):\n",
+    "  print(bcolors.WARNING +\"!! WARNING: Model folder already exists and has been removed !!\"+W)\n",
+    "  shutil.rmtree(model_path+'/'+model_name)\n",
+    "\n",
+    "\n",
+    "\n",
+    "# --------------------- Here we load the augmented data or the raw data ------------------------\n",
+    "\n",
+    "if Use_Data_augmentation:\n",
+    "  Training_source_dir = Training_source_augmented\n",
+    "  Training_target_dir = Training_target_augmented\n",
+    "\n",
+    "if not Use_Data_augmentation:\n",
+    "  Training_source_dir = Training_source\n",
+    "  Training_target_dir = Training_target\n",
+    "# --------------------- ------------------------------------------------\n",
+    "\n",
+    "# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.\n",
+    "# This file is saved in .npz format and later called when loading the training data.\n",
+    "\n",
+    "\n",
+    "raw_data = data.RawData.from_folder(\n",
+    "    basepath=base,\n",
+    "    source_dirs=[Training_source_dir],\n",
+    "    target_dir=Training_target_dir,\n",
+    "    axes='CYX',\n",
+    "    pattern='*.tif*')\n",
+    "\n",
+    "X, Y, XY_axes = data.create_patches(\n",
+    "    raw_data,\n",
+    "    patch_filter=None,\n",
+    "    patch_size=(patch_size,patch_size),\n",
+    "    n_patches_per_image=number_of_patches)\n",
+    "\n",
+    "print('Creating 2D training dataset')\n",
+    "training_path = model_path+\"/rawdata\"\n",
+    "rawdata1 = training_path+\".npz\"\n",
+    "np.savez(training_path,X=X, Y=Y, axes=XY_axes)\n",
+    "\n",
+    "# Load Training Data\n",
+    "(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)\n",
+    "c = axes_dict(axes)['C']\n",
+    "n_channel_in, n_channel_out = X.shape[c], Y.shape[c]\n",
+    "\n",
+    "%memit\n",
+    "\n",
+    "#plot of training patches.\n",
+    "plt.figure(figsize=(12,5))\n",
+    "plot_some(X[:5],Y[:5])\n",
+    "plt.suptitle('5 example training patches (top row: source, bottom row: target)');\n",
+    "\n",
+    "#plot of validation patches\n",
+    "plt.figure(figsize=(12,5))\n",
+    "plot_some(X_val[:5],Y_val[:5])\n",
+    "plt.suptitle('5 example validation patches (top row: source, bottom row: target)');\n",
+    "\n",
+    "\n",
+    "#Here we automatically define number_of_steps as a function of the training data and batch size\n",
+    "if (Use_Default_Advanced_Parameters):\n",
+    "  number_of_steps= int(X.shape[0]/batch_size)+1\n",
+    "\n",
+    "# --------------------- Using pretrained model ------------------------\n",
+    "#Here we ensure that the learning rate is set correctly when using pre-trained models\n",
+    "if Use_pretrained_model:\n",
+    "  if Weights_choice == \"last\":\n",
+    "    initial_learning_rate = lastLearningRate\n",
+    "\n",
+    "  if Weights_choice == \"best\":\n",
+    "    initial_learning_rate = bestLearningRate\n",
+    "# --------------------- ---------------------- ------------------------\n",
+    "\n",
+    "\n",
+    "#Here we create the configuration file\n",
+    "\n",
+    "config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=initial_learning_rate)\n",
+    "\n",
+    "print(config)\n",
+
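"# Note (added): probabilistic=True makes CARE predict a per-pixel Laplace mean and scale,\n",
+    "# which is what enables the uncertainty estimates offered at prediction time.\n",
+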
"vars(config)\n", + "\n", + "# Compile the CARE model for network training\n", + "model_training= CARE(config, model_name, basedir=model_path)\n", + "\n", + "\n", + "# --------------------- Using pretrained model ------------------------\n", + "# Load the pretrained weights \n", + "if Use_pretrained_model:\n", + " model_training.load_weights(h5_file_path)\n", + "# --------------------- ---------------------- ------------------------\n", + "\n", + "pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "wQPz0F6JlvJR" + }, + "source": [ + "## **4.2. Start Training**\n", + "---\n", + "When playing the cell below you should see updates after each epoch (round). Network training can take some time.\n", + "\n", + "* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches.\n", + "\n", + "Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder from Google Drive as all data can be erased at the next training if using the same folder.\n", + "\n", + "**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, Make sure to choose the right version of tensorflow. You can check at: Edit-- Options-- Tensorflow. Choose the version 1.4 (CPU or GPU depending on your system)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "j_Qm5JBmlvJg" + }, + "outputs": [], + "source": [ + "#@markdown ##Start training\n", + "\n", + "start = time.time()\n", + "\n", + "# Start Training\n", + "history = model_training.train(X,Y, validation_data=(X_val,Y_val))\n", + "\n", + "print(\"Training, done.\")\n", + "\n", + "# convert the history.history dict to a pandas DataFrame: \n", + "lossData = pd.DataFrame(history.history) \n", + "\n", + "if os.path.exists(model_path+\"/\"+model_name+\"/Quality Control\"):\n", + " shutil.rmtree(model_path+\"/\"+model_name+\"/Quality Control\")\n", + "\n", + "os.makedirs(model_path+\"/\"+model_name+\"/Quality Control\")\n", + "\n", + "# The training evaluation.csv is saved (overwrites the Files if needed). 
\n", + "lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'\n", + "with open(lossDataCSVpath, 'w') as f:\n", + " writer = csv.writer(f)\n", + " writer.writerow(['loss','val_loss', 'learning rate'])\n", + " for i in range(len(history.history['loss'])):\n", + " writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])\n", + "\n", + "\n", + "# Displaying the time elapsed for training\n", + "dt = time.time() - start\n", + "mins, sec = divmod(dt, 60) \n", + "hour, mins = divmod(mins, 60) \n", + "print(\"Time elapsed:\",hour, \"hour(s)\",mins,\"min(s)\",round(sec),\"sec(s)\")\n", + "\n", + "model_training.export_TF()\n", + "\n", + "print(\"Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin\")\n", + "\n", + "pdf_export(trained = True, augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "QYuIOWQ3imuU" + }, + "source": [ + "# **5. Evaluate your model**\n", + "---\n", + "\n", + "This section allows the user to perform important quality checks on the validity and generalisability of the trained model. \n", + "\n", + "**We highly recommend to perform quality control on all newly trained models.**\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "zazOZ3wDx0zQ" + }, + "outputs": [], + "source": [ + "# model name and path\n", + "#@markdown ###Do you want to assess the model you just trained ?\n", + "Use_the_current_trained_model = True #@param {type:\"boolean\"}\n", + "\n", + "#@markdown ###If not, please provide the path to the model folder:\n", + "\n", + "QC_model_folder = \"\" #@param {type:\"string\"}\n", + "\n", + "#Here we define the loaded model name and path\n", + "QC_model_name = os.path.basename(QC_model_folder)\n", + "QC_model_path = os.path.dirname(QC_model_folder)\n", + "\n", + "if (Use_the_current_trained_model): \n", + " QC_model_name = model_name\n", + " QC_model_path = model_path\n", + "\n", + "full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'\n", + "if os.path.exists(full_QC_model_path):\n", + " print(\"The \"+QC_model_name+\" network will be evaluated\")\n", + "else:\n", + " W = '\\033[0m' # white (normal)\n", + " R = '\\033[31m' # red\n", + " print(R+'!! WARNING: The chosen model does not exist !!'+W)\n", + " print('Please make sure you provide a valid model path and model name before proceeding further.')\n", + "\n", + "loss_displayed = False" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yDY9dtzdUTLh" + }, + "source": [ + "## **5.1. Inspection of the loss function**\n", + "---\n", + "\n", + "First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. 
For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*\n", + "\n", + "**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.\n", + "\n", + "**Validation loss** describes the same error value, but computed between the model's prediction on a validation image and its target.\n", + "\n", + "During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.\n", + "\n", + "Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.\n", + "\n", + "**Note: Plots of the losses will be shown in a linear and in a log scale. This can help visualise changes in the losses at different magnitudes. However, note that if the losses are negative the plot on the log scale will be empty. This is not an error.**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "vMzSP50kMv5p" + }, + "outputs": [], + "source": [ + "#@markdown ##Play the cell to show a plot of training errors vs. epoch number\n", + "loss_displayed = True\n", + "lossDataFromCSV = []\n", + "vallossDataFromCSV = []\n", + "\n", + "with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:\n", + " csvRead = csv.reader(csvfile, delimiter=',')\n", + " next(csvRead)\n", + " for row in csvRead:\n", + " lossDataFromCSV.append(float(row[0]))\n", + " vallossDataFromCSV.append(float(row[1]))\n", + "\n", + "epochNumber = range(len(lossDataFromCSV))\n", + "plt.figure(figsize=(15,10))\n", + "\n", + "plt.subplot(2,1,1)\n", + "plt.plot(epochNumber,lossDataFromCSV, label='Training loss')\n", + "plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')\n", + "plt.title('Training loss and validation loss vs. epoch number (linear scale)')\n", + "plt.ylabel('Loss')\n", + "plt.xlabel('Epoch number')\n", + "plt.legend()\n", + "\n", + "plt.subplot(2,1,2)\n", + "plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')\n", + "plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')\n", + "plt.title('Training loss and validation loss vs. epoch number (log scale)')\n", + "plt.ylabel('Loss')\n", + "plt.xlabel('Epoch number')\n", + "plt.legend()\n", + "plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png',bbox_inches='tight',pad_inches=0)\n", + "plt.show()\n", + "\n" + ] + },
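+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@markdown ##(Optional) Report the epoch with the lowest validation loss\n", + "# A minimal sketch added for convenience, not part of the original workflow: it reuses the\n", + "# vallossDataFromCSV list loaded above to point at the best checkpoint epoch.\n", + "best_epoch = int(np.argmin(vallossDataFromCSV))\n", + "print('Lowest validation loss: '+str(round(vallossDataFromCSV[best_epoch],5))+' at epoch '+str(best_epoch+1))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "biT9FI9Ri77_" + }, + "source": [ + "## **5.2. 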
Error mapping and quality metrics estimation**\n", + "---\n", + "\n", + "This section will display SSIM maps and RSE maps, as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the \"Source_QC_folder\" and \"Target_QC_folder\".\n", + "\n", + "**1. The SSIM (structural similarity) map** \n", + "\n", + "The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric at each pixel, considering the structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels with a Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). \n", + "\n", + "**mSSIM** is the mean SSIM value calculated across the entire image.\n", + "\n", + "**The output below shows the SSIM maps with the mSSIM values.**\n", + "\n", + "**2. The RSE (Root Squared Error) map** \n", + "\n", + "This map displays the root of the squared difference between the normalized prediction and target, or between the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).\n", + "\n", + "\n", + "**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.\n", + "\n", + "**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel value and the MSE between the images (here the images are normalised to a data range of 1.0, so PSNR = 10 log10(1/MSE)). The higher the score, the better the agreement.\n", + "\n", + "**The output below shows the RSE maps with the NRMSE and PSNR values.**\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "nAs4Wni7VYbq" + }, + "outputs": [], + "source": [ + "#@markdown ##Choose the folders that contain your Quality Control dataset\n", + "\n", + "Source_QC_folder = \"\" #@param{type:\"string\"}\n", + "Target_QC_folder = \"\" #@param{type:\"string\"}\n", + "\n", + "# Create a quality control/Prediction Folder\n", + "if os.path.exists(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\"):\n", + " shutil.rmtree(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n", + "\n", + "os.makedirs(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n", + "\n",
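+ "# Added note: with config=None, CARE loads the saved configuration and trained weights from\n", + "# basedir/name, so the model is restored here for inference only (no retraining happens).\n", + "# Activate the pretrained model. 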
\n", + "model_training = CARE(config=None, name=QC_model_name, basedir=QC_model_path)\n", + "\n", + "# List Tif images in Source_QC_folder\n", + "Source_QC_folder_tif = Source_QC_folder+\"/*.tif\"\n", + "Z = sorted(glob(Source_QC_folder_tif))\n", + "Z = list(map(imread,Z))\n", + "print('Number of test dataset found in the folder: '+str(len(Z)))\n", + "\n", + "\n", + "# Perform prediction on all datasets in the Source_QC folder\n", + "for filename in os.listdir(Source_QC_folder):\n", + " img = imread(os.path.join(Source_QC_folder, filename))\n", + " predicted = model_training.predict(img, axes='YX')\n", + " os.chdir(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n", + " imsave(filename, predicted)\n", + "\n", + "\n", + "def ssim(img1, img2):\n", + " return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)\n", + "\n", + "\n", + "def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):\n", + " \"\"\"This function is adapted from Martin Weigert\"\"\"\n", + " \"\"\"Percentile-based image normalization.\"\"\"\n", + "\n", + " mi = np.percentile(x,pmin,axis=axis,keepdims=True)\n", + " ma = np.percentile(x,pmax,axis=axis,keepdims=True)\n", + " return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)\n", + "\n", + "\n", + "def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32\n", + " \"\"\"This function is adapted from Martin Weigert\"\"\"\n", + " if dtype is not None:\n", + " x = x.astype(dtype,copy=False)\n", + " mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)\n", + " ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)\n", + " eps = dtype(eps)\n", + "\n", + " try:\n", + " import numexpr\n", + " x = numexpr.evaluate(\"(x - mi) / ( ma - mi + eps )\")\n", + " except ImportError:\n", + " x = (x - mi) / ( ma - mi + eps )\n", + "\n", + " if clip:\n", + " x = np.clip(x,0,1)\n", + "\n", + " return x\n", + "\n", + "def norm_minmse(gt, x, normalize_gt=True):\n", + " \"\"\"This function is adapted from Martin Weigert\"\"\"\n", + "\n", + " \"\"\"\n", + " normalizes and affinely scales an image pair such that the MSE is minimized \n", + " \n", + " Parameters\n", + " ----------\n", + " gt: ndarray\n", + " the ground truth image \n", + " x: ndarray\n", + " the image that will be affinely scaled \n", + " normalize_gt: bool\n", + " set to True of gt image should be normalized (default)\n", + " Returns\n", + " -------\n", + " gt_scaled, x_scaled \n", + " \"\"\"\n", + " if normalize_gt:\n", + " gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)\n", + " x = x.astype(np.float32, copy=False) - np.mean(x)\n", + " #x = x - np.mean(x)\n", + " gt = gt.astype(np.float32, copy=False) - np.mean(gt)\n", + " #gt = gt - np.mean(gt)\n", + " scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())\n", + " return gt, scale * x\n", + "\n", + "# Open and create the csv file that will contain all the QC metrics\n", + "with open(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/QC_metrics_\"+QC_model_name+\".csv\", \"w\", newline='') as file:\n", + " writer = csv.writer(file)\n", + "\n", + " # Write the header in the csv file\n", + " writer.writerow([\"image #\",\"Prediction v. GT mSSIM\",\"Input v. GT mSSIM\", \"Prediction v. GT NRMSE\", \"Input v. GT NRMSE\", \"Prediction v. GT PSNR\", \"Input v. 
GT PSNR\"]) \n", + "\n", + " # Let's loop through the provided dataset in the QC folders\n", + "\n", + "\n", + " for i in os.listdir(Source_QC_folder):\n", + " if not os.path.isdir(os.path.join(Source_QC_folder,i)):\n", + " print('Running QC on: '+i)\n", + " # -------------------------------- Target test data (Ground truth) --------------------------------\n", + " test_GT = io.imread(os.path.join(Target_QC_folder, i))\n", + "\n", + " # -------------------------------- Source test data --------------------------------\n", + " test_source = io.imread(os.path.join(Source_QC_folder,i))\n", + "\n", + " # Normalize the images wrt each other by minimizing the MSE between GT and Source image\n", + " test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)\n", + "\n", + " # -------------------------------- Prediction --------------------------------\n", + " test_prediction = io.imread(os.path.join(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\",i))\n", + "\n", + " # Normalize the images wrt each other by minimizing the MSE between GT and prediction\n", + " test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True) \n", + "\n", + "\n", + " # -------------------------------- Calculate the metric maps and save them --------------------------------\n", + "\n", + " # Calculate the SSIM maps\n", + " index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)\n", + " index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)\n", + "\n", + " #Save ssim_maps\n", + " img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)\n", + " io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)\n", + " img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)\n", + " io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)\n", + " \n", + " # Calculate the Root Squared Error (RSE) maps\n", + " img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))\n", + " img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))\n", + "\n", + " # Save SE maps\n", + " img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)\n", + " img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)\n", + " io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)\n", + " io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)\n", + "\n", + "\n", + " # -------------------------------- Calculate the RSE metrics and save them --------------------------------\n", + "\n", + " # Normalised Root Mean Squared Error (here it's valid to take the mean of the image)\n", + " NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))\n", + " NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))\n", + " \n", + " # We can also measure the peak signal to noise ratio between the images\n", + " PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)\n", + " PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)\n", + "\n", + " writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])\n", + "\n", + "\n", + "# All data is now processed saved\n", + "Test_FileList = os.listdir(Source_QC_folder) # this 
assumes, as it should, that both source and target are named the same\n", + "\n", + "plt.figure(figsize=(20,20))\n", + "# Currently only displays the last computed set, from memory\n", + "# Target (Ground-truth)\n", + "plt.subplot(3,3,1)\n", + "plt.axis('off')\n", + "img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))\n", + "plt.imshow(img_GT, norm=simple_norm(img_GT, percent = 99))\n", + "plt.title('Target',fontsize=15)\n", + "\n", + "# Source\n", + "plt.subplot(3,3,2)\n", + "plt.axis('off')\n", + "img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))\n", + "plt.imshow(img_Source, norm=simple_norm(img_Source, percent = 99))\n", + "plt.title('Source',fontsize=15)\n", + "\n", + "#Prediction\n", + "plt.subplot(3,3,3)\n", + "plt.axis('off')\n", + "img_Prediction = io.imread(os.path.join(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction/\", Test_FileList[-1]))\n", + "plt.imshow(img_Prediction, norm=simple_norm(img_Prediction, percent = 99))\n", + "plt.title('Prediction',fontsize=15)\n", + "\n", + "#Setting up colours\n", + "cmap = plt.cm.CMRmap\n", + "\n", + "#SSIM between GT and Source\n", + "plt.subplot(3,3,5)\n", + "#plt.axis('off')\n", + "plt.tick_params(\n", + " axis='both', # changes apply to the x-axis and y-axis\n", + " which='both', # both major and minor ticks are affected\n", + " bottom=False, # ticks along the bottom edge are off\n", + " top=False, # ticks along the top edge are off\n", + " left=False, # ticks along the left edge are off\n", + " right=False, # ticks along the right edge are off\n", + " labelbottom=False,\n", + " labelleft=False) \n", + "imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)\n", + "plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)\n", + "plt.title('Target vs. Source',fontsize=15)\n", + "plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)\n", + "plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)\n", + "\n", + "#SSIM between GT and Prediction\n", + "plt.subplot(3,3,6)\n", + "#plt.axis('off')\n", + "plt.tick_params(\n", + " axis='both', # changes apply to the x-axis and y-axis\n", + " which='both', # both major and minor ticks are affected\n", + " bottom=False, # ticks along the bottom edge are off\n", + " top=False, # ticks along the top edge are off\n", + " left=False, # ticks along the left edge are off\n", + " right=False, # ticks along the right edge are off\n", + " labelbottom=False,\n", + " labelleft=False) \n", + "imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)\n", + "plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)\n", + "plt.title('Target vs. Prediction',fontsize=15)\n", + "plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)\n", + "\n", + "#Root Squared Error between GT and Source\n", + "plt.subplot(3,3,8)\n", + "#plt.axis('off')\n", + "plt.tick_params(\n", + " axis='both', # changes apply to the x-axis and y-axis\n", + " which='both', # both major and minor ticks are affected\n", + " bottom=False, # ticks along the bottom edge are off\n", + " top=False, # ticks along the top edge are off\n", + " left=False, # ticks along the left edge are off\n", + " right=False, # ticks along the right edge are off\n", + " labelbottom=False,\n", + " labelleft=False) \n", + "imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)\n", + "plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)\n", + "plt.title('Target vs. 
Source',fontsize=15)\n", + "plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)\n", + "plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)\n", + "\n", + "# Root Squared Error between GT and Prediction\n", + "plt.subplot(3,3,9)\n", + "plt.tick_params(\n", + " axis='both', # changes apply to the x-axis and y-axis\n", + " which='both', # both major and minor ticks are affected\n", + " bottom=False, # ticks along the bottom edge are off\n", + " top=False, # ticks along the top edge are off\n", + " left=False, # ticks along the left edge are off\n", + " right=False, # ticks along the right edge are off\n", + " labelbottom=False,\n", + " labelleft=False) \n", + "imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)\n", + "plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)\n", + "plt.title('Target vs. Prediction',fontsize=15)\n", + "plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)\n", + "plt.savefig(full_QC_model_path+'Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)\n", + "\n", + "qc_pdf_export()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "69aJVFfsqXbY" + }, + "source": [ + "# **6. Using the trained model**\n", + "\n", + "---\n", + "\n", + "In this section, unseen data is processed using the model trained in section 4. First, your unseen images are uploaded and prepared for prediction. After that, your trained model is loaded and the restored images are saved into your Google Drive." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tcPNRq1TrMPB" + }, + "source": [ + "## **6.1. Generate prediction(s) from unseen dataset**\n", + "---\n", + "\n", + "The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).\n", + "\n", + "**`Data_folder`:** This folder should contain the images that you want to process with your trained network.\n", + "\n", + "**`Result_folder`:** This folder will contain the predicted output images."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "Am2JSmpC0frj" + }, + "outputs": [], + "source": [ + "#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.\n", + "\n", + "Data_folder = \"\" #@param {type:\"string\"}\n", + "Result_folder = \"\" #@param {type:\"string\"}\n", + "\n", + "# model name and path\n", + "#@markdown ###Do you want to use the current trained model?\n", + "Use_the_current_trained_model = True #@param {type:\"boolean\"}\n", + "\n", + "#@markdown ###If not, please provide the path to the model folder:\n", + "\n", + "Prediction_model_folder = \"\" #@param {type:\"string\"}\n", + "\n", + "#Here we find the loaded model name and parent path\n", + "Prediction_model_name = os.path.basename(Prediction_model_folder)\n", + "Prediction_model_path = os.path.dirname(Prediction_model_folder)\n", + "\n", + "if (Use_the_current_trained_model): \n", + " print(\"Using current trained network\")\n", + " Prediction_model_name = model_name\n", + " Prediction_model_path = model_path\n", + "\n", + "full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)\n", + "\n", + "\n", + "if os.path.exists(full_Prediction_model_path):\n", + " print(\"The \"+Prediction_model_name+\" network will be used.\")\n", + "else:\n", + " W = '\\033[0m' # white (normal)\n", + " R = '\\033[31m' # red\n", + " print(R+'!! WARNING: The chosen model does not exist !!'+W)\n", + " print('Please make sure you provide a valid model path and model name before proceeding further.')\n", + "\n", + "\n", + "\n", + "#Activate the pretrained model. \n", + "model_training = CARE(config=None, name=Prediction_model_name, basedir=Prediction_model_path)\n", + "\n", + "\n", + "# creates a loop, creating filenames and saving them\n", + "for filename in os.listdir(Data_folder):\n", + " img = imread(os.path.join(Data_folder,filename))\n", + " restored = model_training.predict(img, axes='YX')\n", + " os.chdir(Result_folder)\n", + " imsave(filename,restored)\n", + "\n", + "print(\"Images saved into folder:\", Result_folder)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "bShxBHY4vFFd" + }, + "source": [ + "## **6.2. Inspect the predicted output**\n", + "---\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "6b2t6SLQvIBO" + }, + "outputs": [], + "source": [ + "# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.\n", + "\n", + "# This will display a randomly chosen dataset input and predicted output\n", + "random_choice = random.choice(os.listdir(Data_folder))\n", + "x = imread(Data_folder+\"/\"+random_choice)\n", + "\n", + "os.chdir(Result_folder)\n", + "y = imread(Result_folder+\"/\"+random_choice)\n", + "\n", + "plt.figure(figsize=(16,8))\n", + "\n", + "plt.subplot(1,2,1)\n", + "plt.axis('off')\n", + "plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')\n", + "plt.title('Input')\n", + "\n", + "plt.subplot(1,2,2)\n", + "plt.axis('off')\n", + "plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')\n", + "plt.title('Predicted output');\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hvkd66PldsXB" + }, + "source": [ + "## **6.3. 
Download your predictions**\n", + "---\n", + "\n", + "**Store your data** and ALL its results elsewhere by downloading them from Google Drive, and then clean the original folder tree (datasets, results, trained model etc.) if you plan to train or use new networks. Otherwise the notebook will **OVERWRITE** all files which have the same name." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "u4pcBe8Z3T2J" + }, + "source": [ + "#**Thank you for using CARE 2D!**" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "collapsed_sections": [], + "machine_shape": "hm", + "name": "CARE_2D_ZeroCostDL4Mic.ipynb", + "provenance": [], + "toc_visible": true + }, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.4" + } + }, + "nbformat": 4, + "nbformat_minor": 1 +} \ No newline at end of file diff --git a/ColabNotebooks/N2V.ipynb b/ColabNotebooks/N2V.ipynb new file mode 100644 index 00000000..91785139 --- /dev/null +++ b/ColabNotebooks/N2V.ipynb @@ -0,0 +1,528 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Noise2Void - 2D Example for BSD68 Data\n", + "\n", + "The data used in this notebook is the same as presented in the paper." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@markdown ##Run this cell to check if you have GPU access\n", + "# %tensorflow_version 1.x\n", + "\n", + "\n", + "import tensorflow as tf\n", + "if tf.test.gpu_device_name()=='':\n", + " print('You do not have GPU access.') \n", + " print('Did you change your runtime?') \n", + " print('If the runtime setting is correct then Google did not allocate a GPU for your session')\n", + " print('Expect slow performance. 
To access GPU try reconnecting later')\n", + "\n", + "else:\n", + " print('You have GPU access')\n", + " !nvidia-smi" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Using TensorFlow backend.\n" + ] + } + ], + "source": [ + "# We import all our dependencies.\n", + "from n2v.models import N2VConfig, N2V\n", + "import numpy as np\n", + "from csbdeep.utils import plot_history\n", + "from n2v.utils.n2v_utils import manipulate_val_data\n", + "from n2v.internals.N2V_DataGenerator import N2V_DataGenerator\n", + "from matplotlib import pyplot as plt\n", + "import urllib\n", + "import os\n", + "import zipfile" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Training Data Preparation" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# create a folder for our data\n", + "if not os.path.isdir('./data'):\n", + " os.mkdir('data')\n", + "\n", + "# check if data has been downloaded already\n", + "zipPath=\"data/BSD68_reproducibility.zip\"\n", + "if not os.path.exists(zipPath):\n", + " # download and unzip data\n", + " data = urllib.request.urlretrieve('https://cloud.mpi-cbg.de/index.php/s/pbj89sV6n6SyM29/download', zipPath)\n", + " with zipfile.ZipFile(zipPath, 'r') as zip_ref:\n", + " zip_ref.extractall(\"data\")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(3168, 180, 180, 1)\n", + "(4, 180, 180, 1)\n" + ] + } + ], + "source": [ + "X = np.load('data/BSD68_reproducibility_data/train/DCNN400_train_gaussian25.npy')\n", + "X_val = np.load('data/BSD68_reproducibility_data/val/DCNN400_validation_gaussian25.npy')\n", + "# Note that we do not round or clip the noisy data to [0,255]\n", + "# If you want to enable clipping and rounding to emulate an 8 bit image format,\n", + "# uncomment the following lines.\n", + "# X = np.round(np.clip(X, 0, 255.))\n", + "# X_val = np.round(np.clip(X_val, 0, 255.))\n", + "\n", + "# Adding channel dimension\n", + "X = X[..., np.newaxis]\n", + "print(X.shape)\n", + "X_val = X_val[..., np.newaxis]\n", + "print(X_val.shape)" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAzUAAAGSCAYAAADaau+TAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOzdedSeVXkv/u9OCATCGBEQiIaCAw60VmrVImpFxQG0iuLU49DTSa1t7XH9eur6naE9p7+61ulwqqvawaqII4I4IQ6IVqoiOEuRqkAIMwiEAEkgcP/+eN7P/Tzv9SaRsSSwr7Wy3jzPc997X/ua9t7faw9tGIZ06tSpU6dOnTp16tSp07ZKi+5tBjp16tSpU6dOnTp16tTprlCf1HTq1KlTp06dOnXq1Gmbpj6p6dSpU6dOnTp16tSp0zZNfVLTqVOnTp06derUqVOnbZr6pKZTp06dOnXq1KlTp07bNPVJTadOnTp16tSpU6dOnbZp6pOaTvcbaq0tbq3d0Fp78N357NZKrbXjW2v/497mo1OnTp06Ja21la21obW23dznz7TWXnV7nr0Tdf1Ja+2f7gq/9ya11v5Ha+34e5uPTtsW9UlNp62W5iYV/t3WWls38/kVd7S8YRhuHYZh52EYLro7n72j1Fr7X621W+bacV1r7V9ba798O9+9uLX21Lubp06dOnXqtGVqrZ3aWvvTTXz//Nba5Xd0AjIMw7OHYXjv3cDXU1trF5ey/3wYhv98V8veRF2vbq3dOtd/Xd9a+05r7Xm3890vtdbudp46dUJ9UtNpq6W5ScXOwzDsnOSiJEfNfPf++vydRbTuJXr/XLv2SnJmkhPvZX46derUqdOW6b1JXtlaa+X7X88kpm+8F3i6N+hrc/3X7kneleQjrbU97mWeOnXqk5pO2y7NZTw+3Fr7YGttbSadzRNba1+fy4Bc1lr729bakrnnt5tL56+c+3z83O+faa2tba19rbV2wB19du73Z7fW/r21tqa19ra57Murf1YbhmG4OZOOcr/W2u6ttYe21k5vrV3TWru6tfa+1tpuc3V8MMm+ST4zh5K9ae77w+favKa1trq19uszVSzfHM+dOnXq1OkO0clJHpDkyb6YG8w/L8lxc5+f21r79lwWY/WWlgDPZi7mljz/n7m4f36S55ZnX9NaO3culp/fWvvtue+XJflMkn1nVjLsW5dvtdaObq2dM9c3fqm1dvDMbxe21v5La+17c/3Ih1trS3+WMIZhuC3JPyfZMcmBrbU9Wmufaq1d1Vq7du7/+8/V8b/n5Pb2OR7fPvf9o1prn5/r865orf3JTBXbt9aOm2vzOa21Q38WT53u39QnNZ22dfq1JB9IsluSDyfZmOT3k+yZ5FeSHJnkt7fw/suT/L9JlmeSDfqzO/psa22vJB9J8ua5ei9I8vjbw3xrbYckr05y4TAM1yVpSf5Xkn2SPDLJz83VmWEYXpbk0iTPnstW/dXcJOWUJH+VSWf72CTfv5Pt69SpU6dOm6FhGNZlEuv/08zXL0nyw2EYvjv3+ca533fPZGLyu621F9yO4n8zk8nRY5McmuSY8vuVc7/vmuQ1Sf66tfaLwzDcmOTZSS6dWclw6eyLrbWHJflgkj9I8sBM+oxPtta2L+04MskBSQ7JpF/aIs2tjvjPSW5I8qNMxpTvTvKQJA9Osi7J25NkGIa3JPlKkjfM8fiG1touSb6Q5NRMALuDkpw2U8XRST6UiSw/oaxOnTZHfVLTaVunM4Zh+OQwDLcNw7BuGIazhmE4cxiGjcMwnJ/kH5I8ZQvvf3QYhrOHYbglyfuT/MKdePZ5Sb4zDMPH53776yRX/wy+X95auy7J6iSPyWRylmEY/n0YhtOGYbh5GIYr58raEv+vTPKZYRg+Mtfmq4dh+M6dbF+nTp06ddoyvTfJMTOZjP80912SZBiGLw3D8P25Pul7mUwmthTD0UuS/M0wDKuHYbgmyf83++MwDJ8ehuEnw4S+nORzmckY/Qw6Nsmnh2H4/Fxf8H8yya48aeaZvx2G4dK5uj+ZLfcVT5jrvy5P8rIkvzYMw5phGH46DMOJwzDcNAzD2iT/O1tu+/OSXD4Mw18Ow7B+GIa1wzCcOfP7GcMwnDIMw61J3pfk529nezvdT2lb2oPQqdOmaPXsh9baI5L8ZZLHJdkpExs/cxPvoctn/n9Tkp3vxLP7zvIxDMPQyqbNTdAHhmF4df2ytbZPkr/NJMu0SybAw1VbKGdFkp/cCZ47derUqdMdpGEYzmitXZ3kBa21szLJyr/Q721y6MtfJHl0ku2T7JDkhNtR9Lx+JMmq2R9ba89O8t+TPCyTfmGnzM/K/6yyx/KGYbittbY6yX4zz9S+Yt8tlPf1YRgOq1+21nbKBIg7Mok9Nru01hbPTUwq3dH+a2lrbbv70d6lTneQeqam07ZOQ/n890l+kOSgYRh2TfLfMlnSdU/SZUn296G11jK/s7gj9NYkG5I8Zo7/V2c+/7W9q5MceCfr6tSpU6dOd5yOyyRD88oknx2G4YqZ3z6QyVKpFcMw7Jbknbl9fdBlmQzy0XidwNwy5RMzybDsPQzD7pksIVNu7RcqXZrJkjDltbm6LrkdfN0R+qMkD0/yy3P91+Gq3AyfqzNZYt2p091CfVLT6b5GuyRZk+TGuY2QW9pPc3fRp5L8YmvtqLk1xr+fybrlO0O7ZLIme01rbUWS/1J+vyLzO4HjkxzZWnvR3OEGe7bWeoq+U6dOne45Oi7JEZnsg6lHMu+S5JphGNa31h6fyb7G20MfSfLG1tr+c4cP/PHMbzI+VyXZOJe1eebM71ckeYBDZTZT9nNba09vk4Nz/igT8Oyrt5O320u7ZLKP5rrW2vJMMkuzVPuvTyV5UGvtD1prO7TWdmm383qDTp02RX1S0+m+Rn+U5FVJ1maStfnwPV3hHEp3bCab9X+aSebk25l0GneU/nsmyxnWZIL21aOe/zzJ/5w7weYPhmG4IMlRSf6fJNck+VYme3Q6derUqdM9QMMwXJjJhGBZJnF6ll6X5E/b5ETO/5bJhOL20D8m+WyS72YSx0+aqW9tkjfOlXVtJhOlT8z8/sNM9u6cP9c3zFs6NgzDeZlkld6WyX7PozK5IuHm28nb7aW/yWSvztVJvp7JAQCz9H8z2Y90bWvtb+fa9Yw5fi7P5LCBp93NPHW6H1Ebhp+VtezUqdMdodba4kzS/ccMw/CVe5ufTp06derUqVOn+zr1TE2nTncDtdaObJN7ZnbI5AjlW5J8415mq1OnTp06derU6X5BfVLTqdPdQ4clOT+TNc/PyuSIyzuz/KxTp06dOnXq1KnTHaR7bPlZa+3ITNZPLk7yT8Mw/MU9UlGnTp06dep0J6j3U506dep036F7ZFIzt6fg3zPZAHZxkrOSvGwYhn+72yvr1KlTp06d7iD1fqpTp06d7lt0Ty0/e3ySHw/DcP7c6RofSvL8e6iuTp06derU6Y5S76c6derU6T5E291D5e6X+TfjXpxks2eP77jjjsOuu+
6aHXbYIUmyYcOG+nuSZO3atUmSJUuWJEm22267eX9vuOGGJMnkXqmM5d12223zvpedWrRoMqdbt25dkmSnnXaa9/zGjfMvrVVva238TZnKQrfccsu872+9dXKZ7vbbbz/vPeUsXrx43vfr16+f93zlZeedJxfDk5X6tNlnPOND2/Gj3qVLlyZJrr322nnvkQUZe6/y43f8qx/53nM33XTTvOe0B23cuHGBPeAVVZlrg7L9rk4y9Zy6letv1QlZ3HjjjQt4TKYyXbZs2bxyq06q7JWrHHxqL/6r3W7O5pTre+3VDnz4fePGjeM76uYD119/fZKp/eFVm2obasYXj+q++ebJyaHVF6u9bq5N/pJxtQltnfXR2b+o+rZ2e6/qBN/apz7vsQntqH69aNGiBWXtttvkKgnxTHxDVQaV8FBlwO5rbEA1BtUYdvPNN+fGG2/M+vXr7+nLarcWusP91C677DLKr+qBDdffEf2TN1ustu25amPiFluvtqsfU++iRYsWxGV2jGe8qhMv6mIjeN11113ntUk56sEDf8Cbz2y+ttVf5Vf+xCN8VdvGp99rXMA/Pmps3FyM9Vd94qPvr7766uyzzz7z2lBjhc9kU2Mff8dj7Teqn2/OPjxPtupTvnJqf6ScKkNkXEWX4gx+xTPla1+NpZ4X77TH99dcc8283+l+/fr1C8YxNc5ubtxU9VdjH97Z/XXXXTePd7LCq/evuuqqJNNYyz4953sy9pfskHrotMYM9dUxzO67755koW58pkPtVw5b2WWXXZJMx3tsZpZH71Z7ILsqU3rzHl+v4yv0s3yt2vtsm66//vqsW7duk/3UPTWp+ZnUWvutJL+VJHvssUf+63/9r6NBnXbaaUmSF7/4xUmSH//4x0mSn/u5yZ1NP/zhD5NMDe6KKyaX+T7kIZMLcxn+mjVrkiTLly9PMlXWRRddlCR55CMfmWQSmJLkEY94xLz6GHwd0K9ZsyYPfODkbkUK8Pk73/lOkuSyyy5Lkrz0pS9NMnWCn/zkJ0mS73//+0mSffedd5z8qOif//nJ/YmchdGp51vf+ta8NgtgOgU8CxTaQsZ77bVXkuScc86ZJzPOqB6Giu+f/vSn8/j2/tOeNjlanpOsWrUqydQQDz744HlyEMjIr040N27cODqwIP+4xz0uSXLJJZNLkK+88sokUzt4+MMfnmSqZ22mX06y9957J0kuvPDCJFMZP+ABD5hXH2etHSteBZILLrggydTe2BFdXHzxxUmm9sb+OLF6/SVjwaPyzza+9rWvJZl2tAcddNC895VPJzXQbdiwYXwXz9p4+eWXJ5naEdpjjz2STGVOB3jV1hUrJhdjsxM+yo7ZIXvYf//9kyy0C74rQKqfDSi/dkLsnuxWrlw5j0/BvQbU2llqj/fZxvnnn58k+aVf+qUkU/8jN7a1YsWK7LnnnkmmvsT3H/OYyXVCYsWTnvSkJNP443uDJr6hTeKRoE+Wv/zLvzyvnl/5lV9JknziE5NrLWrcY3e33npr3vWud6XTlGb7qe233z4HHnhgHvvYx8bnJHn605+eZCpfeuKPtdM/44wzkiQHHHBAkmncOu+885JMbVMMZoPf/va3533/4AdPLpynb7b56U9/OkmyevXqvPnNb04ytY3PfvazSabxX5/69a9/PcnU//idGKzNbA4ZRPHLs846a54M8HrIIYfMq4ef6L/EA7YuRj75yU9OMo1xZ555ZpKF8YKM8eOz/piPHHbYYUmSL3zhC/NkduihhyaZ9h1iN5krz5hE/7Z48eKRN3GXv+Plq1/96rx31f3Qhz40SfKUpzwlSfLNb34zydQu2IvxEB2I5/x39erJvJys6fbSSy9NshBMq2MD9kvmZIx/sVCMZXd0+OhHPzpJ8sQnPjFJ8pWvfGWeTN/73vfOayd+xWry+8EPfpBkGlP1Gd/4xjfGmMVe9SP6RP2D2Ck+sx/2+I1vTA4kfdjDHjbvfT551FFHzeOJffJtPq0vFaPJkqz322+/efV6nuzpEL+vfe1r5/HNlg488MAkCydT3qtgoXEY3bERtsbG2Ap/u/HGG/Pd7343s6TMZz/72UmmNu+5Rz3qUUmm4x4TLXZmbKAv/8Vf/MUk0zEG36ILbTG+N/FSLj523333vOENb8jm6J5afnZJkhUzn/ef+26kYRj+YRiGQ4dhOHR2ltipU6dOnTr9B9Ad6qcM6jp16tSp09ZJ91Sm5qwkD22tHZBJJ/HSTG7A3STdeuutWbNmzThrhkiYTUJlKxJuNgtt8bv3amrNTPCpT31qkuTf/m2yHxSyAnmATkFrIAbo+uuvH1G1X/iFX0gyzViYVZrpy3CYqUN0IdlmqTU9D9GCSEHrfvSjH82TgWzEv/7rvyZJXvOa18xrw7//+78nmaKzyq1LsCBU+NKuL37xi0mmaA0Ezyz6137t1+bJUnvqUirIQk05Q6XOPffceeVeddVVYxtlVtRBxsqCNENN6BtSVZf3QFO8xy6gjWQBFaETMqvLH8mKbityAbFgX35nd+wbv7IP+KjLWSBa6mcj0CKyJS/vq0+79ttvv/HZz33uc0mmiGrN0KiDb9En+6EjsmTPUDkoD9l7Hk/Kh6ySMZnQKR3ThbZAYKHUdC4bAoWsKCb7oxMxBSLos3bRid9lZOqylBe84AVJklNOOWVE2zyr7TJ8yoRoykjSN9/gm+xKbPE+hFOMQN/73vfm8SxmPP7xj0+SvOc970kyiYts5X5Cd6if2mGHHXLQQQeN6CfbqhlWMUzsExe+/OUvJ5naovdlN8je7xB96GvNnqhHX8Rn+fCVV145xgqoKOSaTcgKibU1bospsg3Pe97zkkxtrC5HY9vQWv5ONrXfkwHl15/5zGeSTGXKxo0JtP1BD3rQPD7J2HvihfgmbonF4gv02PeIjz33uc9NMs1wkZv4s/vuu4+yIHf9hbbjnf+LEWKdurzH3+vyenYh5rGXF77whUmmmXuyFg/YnSwDu5GdOPnkk5NM+zFZBnFLtoGd6VcQvo1BPK99FXHHj7hFp894xjOSTHXDth7wgAeMMUvZddm6sR5e6NXYzfgJD+xCDNb/kB1Z07P+ha7rUm861b/oD/SDsuYve9nLkkyzWVawfPSjH50nq7rET+aFTYnlxg512ZmMTx1baB/Zig+///u/P/oQvZG57/kaWesTxcOaaZNx4euyZPp84y2yqtsB+Au71Lc/5znPWZAxnqV7ZFIzDMPG1tobknw2k6My/3kYhnPuibo6derUqVOnO0q9n+rUqVOn+xbdY3tqhmE4Jckpt+fZxYsXZ9dddx3RFDNA6InZq9kmtMcs2EzRDBBKYy2598wMzTTNsq3n9bvZrlm+5yEYD37wg8fMR80WmCFbd2pGbvYKzYGQQQq873trIOvmKegJRML6REiCGb4Zej38gIzM+KHCdY/ESSedlGSKJtIJWUK2za7xDRGRlYCEa19FnWrmB0pz0EEHLdgoBmWxDpQdVOQAiiebQL/qqhmYuqEN6uGzdbVQDrohC0grvujA2lO2QEZ+h+rQhQwNm9Beq
ErdZK89dc8MZAWfyoEAzu4v0Ta2T3/WQ0PzIFnQEzLSZvpjP8qV4eOT2k62eCVbbeKb6iEbOuZn6oUK4ZOs2SG74sNiBiSaHbOduqGS30G88U9OYgV++d/KlSvz+c9/PknyrGc9ax7vdfPpMccck2SqA/EQ73zd92QLCWOPeJWVJWPIm6wyHsWEL37xi2Md9xe6I/3UMAzZuHHjKHexnR7J0R4D/g1hlNUWL6C+4gO9yDKywWovbPj0009PMrUPtiouPvCBDxz1qUz7Jdi5TD775T/awu9k7KHqMj78XXwXS9mw/qfuJSAb6K1YhV97G2R+xAWrLMQRcUN84e/8+ZnPfGaSacZYLLS3gA48r7+re/7Ig5zsQ9l+++3H39gFPYqzeCMbfbXYIwMiC6SN9izUTeh4etGLXjTv/bq/hCzEF5k/fTIdaKt+BQrOnmTLZNM8J/bLANUxB5tgc3VPH1uQddBfs2/PH3744eM+0k996lNJpnahTyRjZdTxjjplJOldvLbfSb8jlvori6Qt9M+X+egRRxwxT2ZkJSNjRQQfZU90wZ/4A/8Uw/GpPuM27axZepks/R1Z8wdyveqqq0Z7MY4QM045ZRIe9b36Mb4n/rBbbVc2PZOBeuwl5NtVh7KkdSXWqaeeOv5/U3RP7anp1KlTp06dOnXq1KlTp/8QutdOP5uldevW5fvf//54igvkCrIMPYIIQLLNgs1qzd7MdiEUdR0/lMb39UQn9UE81AulWrVq1YgEzJ7WlUzXz0JToBl1f0dFKczYZ09YmyXlQMrNzOv+iXrUZT1SEPpj9msdLPTGbNrM/tRTT00yPZkJAg9lgpBDV8gMf3QEvYJIQAOgkTWLcskll4w8QdfoRZaH7BCZkqEy6xGBTo5jNxAlyLb3ofCQJzqo+z6gLtAU/JI53UPY2SXUX/10y5YgdNolYwOlZJeQNn7jcz0SVP32txx44IELEEl1O3HHvgt65kNkA6liv9BJMoG+sBfoH5QH4d3vZIc/OiBLCCz0URvJSPn+Qo28X/d0QYfYY82ikU893ptc6Lwe0blkyZLx1B8+JZMCbasnJ2qLeEOP7EDGhX3zMT6HN+WwI2uTtZWuxMN99933/ran5g7R4sWLs2zZsjGGirl1Lx90tNqIGCoOQebJn43qv+i5Zjf5IF15j+/yNT4zS/bniIH6I6sa2Kh3IcV4Vxde9FMy/uK/98mCzbNlsZZ/iX36ETFeW8gS4v6KV7wiSXLCCSckmdq4+uxRwHeNhb4XQ/mWrJqYjB/9Zc3GLF++fKzTd/UkNnr0fT2yHQ8yrmIeXRif4AWvdMP/6UDMIztxoY4NxEL2+/znT65oYsd04nfv1xNbZeXEyLrPSpwSI8lS1uH4449PMkX4xWS2dtttt412on8wzmLHZOUEtrpHU1tkCZTznOc8J8k0Y0hnxjkyL3xUW3xmD+yS/fNhYwYylJ3wPr8zdhCT2b+MyrHHHpskCzLp/Jeu/U6GdVVSPW2WXFatWjXKRDbLM3UvM17JQt9frw2pe9tltulGP8SXP/axjyWZjkH1f+xKvDzssMO2eO1Bz9R06tSpU6dOnTp16tRpm6atIlOzaNGiLF26dESQ6z0j9bQoM0Hoz9lnn51kijxAPGQDIO9mfGbDZpwyQtBis2/PK8+se+eddx5RGetjoStm5N6FspihQ4ihM1AOM/KaVdrc3hi82LdR96ZAB31vtl33Y1j/XWfTkGZ8km3db/Krv/qrSabreqFWZAg1ogvtJA/l4huq/a1vfWvBJZj0RU/WaUMr/IWSkBn9a/vLXz454EgWio7qHhbooHqrPfnMfiBYvqcrn9kdNKfeiQJ5kwVhG3RNp1AqRFeQQEgLvtQPzZKFuPrqq0eZQXBkGqB+UBjPkXnNlkKWtNEJN1A/9lwvJmV3fNyadnZNNhClul8JOe1Geeojs3qCHVk5lYZN1fW79h5AwOpeIrJnK2yDTZ133nmjjaubXcrUQf7riU/qwJv9TeKguvmY52r8hOLRpedqdrheVNppIS1atGiM0fU0OvKsmTCIs3jjeTFQTJR5Fpv1O/RZ+8F65xNb5TN77LHHgrX9+iexyrt4t2fAHlB+Yi8o/2DT9nXUjK/66qlM/Mj7/FTWHLFp2cx6iuhxxx2XZBoH7Fcie5+106lqfEGMFGPJhS8+4QlPSDLtD+nEe+LGxRdfvOASTTKrJ0Tah1FXgPzzP/9zkmkM03ayEXvxonyIt5j3zne+M8k0o6PPt//JXghZeG1Vjpiun4C06/+Mt5TPnuteTTqmU9+zb9l9fYS4V7Pm5LlkyZJ5Y69kOi6SfRYb9RNkJcPBTtgd3tiVWOp92QKZHjFb22V28C6msndtrSsbPvnJT857TvZB++rpfPp6GSayplOEXyeQkZ1Yww9kiGSWZu9j41vswJhA/6AMcUbcIrN6ib02Ko9+yUQbP/KRjyRZmPE2JnTyoj07D3zgA7Ol4/V7pqZTp06dOnXq1KlTp07bNG0VmZrtt98+K1asGGfRZrVQGjNCs2F/raeEaFkHCSE364V0QB7MHCEekARoDGSsImBQ6ksvvXREcsyMIT517woEoJ4KVk91gWzJEvhrRopXCADUDloImYD+aCvUxCzczLzuBdLGeja4mb69N5AJCLj3IHQIogEFggTaC0F3kBfyka075JBDxmcgW2RR70YhO0gCZMHfus/HeuuKrtCZ9d5kDgVhT3RvDSiiGwgZe8EXWbOBenKY5yB/7Jau8OuEE4gIWyFD7yuXjCH3kMDW2ijLupdLdswaZZkYvEMs69p579U16AgPvq/7gKCIyme3Mh9kgh86qtkxqBE0XD10U09FU07N0vJfMYKc6p1EMrayL/hZtGjRmA2FgIo/0DZt4yN1HxK7I1MZvXpzs7hVs6V4IzP11DXL+++//xbXKt/faenSpXnYwx422g5Z1X0XdC+bLw7QK1ujB/0d9JltsWUxl1+zYTGeTyhPn3DOOeeMvOhn2J7YI/aJITK06uY/+hd1yiroFyDhEOCa8dEv1n1B/Fzc1wa3h9dstqyE7+354xNsny6c3EQmMk74E4/0c2SNn7p6g07ZwNFHHz36KdlA0yHTvoeSazP7kcGxP9DzdT8sPbMfmZJqV29605uSJH/5l385r43iDL+v2S/9JFuo/Qb7FE/Ux7bqHWfijffJVP9TT+BTbh3bXHrppWOMgtqzMydLvvKVrxyfTaYrUGpd+hkyE6/ZkewEfdf9sni12qH6kbb7bDxljKoPOProo5MsvHOIbuqeY2MDsqEL/Gr3v/zLvySZ6sL39e4fYxVj5ne84x1j3/uSl7xkXlvEgnryKLsUv+jGPiEyV4c6xRL9HzurdwXRkffUd/rpp2/xlM7eg3Xq1KlTp06dOnXq1Gmbpq0iU3PzzTdn9erVC25YN4ODlsgOmLXWU88gA94zKzXrhXT5axYLTYJ0+gsxMXs2S7/55ptHVKOeUAMNMcuFouIBQmUNqD0kSF3WndpjgKB0voccmfVCIsyqIWNQkrrPhGzJECJd19dDJiADEAwo
jXp8Tx7WSZp9yyZAJjxHV7P34tAzNKNmBegRSkgnUBooCZlCSejb71AMuoIQQO3sFyIbaGe9cbne9QAtlM1TD/7JTvn1Jnntw7/vISD4heZARCriS7dsQgZnjz32GMvGi7LUpa31hmR1kznZ8BG+XO+bUb56yaieMMcu6n4p30M72duJJ544r172yg+hmtBAMmDndV2w96FU2qVefLJj8mCz7lN4/OMfP+75IyN6gjZ5F/rLPiC9fJ694QW67DnxzPdkz9fJhsysFxeD6KTTpmm77bbLnnvuOa75J1c2KXPLxvgKm0f6Kf2L2MdH6h6oww8/PMk0DipXnLO+n5+zo9WrV4+2hEd2K9tb/RBP7P4d73hHkumdEVBaaKxYw+7ZOhuX/eZfVl2wWW3iAzX76Pl6Vxki87qnRgzX/0D2tdtJq8oXU+s+JT7ie3HR/pTLLrts1F+CVQYAACAASURBVJdYoo8nc+g5WbAHMYtfv/a1r00y7SPpv6760EfWrJW+3L6NeuLkySefPO+zMYMMUd2b6TkxXv9Jl2QqMyX7Ze9DzTR5Tkxmx2ytjt9mxxLs0h4ZPqN/oncZRHZYPxuHGceI84h9eZ5/sEefa1aL3cpCsEd25Z4ntvHhD384ydRv+LLPypcppUt+5x5BWQw2YGyDL3yzd2MRfQV/3nPPPUdZKEsmzxiXDvDCvtkrnxBLyKLeI1hPa2R/2mosoHz9pHHSgQceOO5p2xT1TE2nTp06derUqVOnTp22adoqMjXJZDZvRlfP14fmzK4VThZmWCAVEEnIl+yJkybMEKG/Zn1mqOo3I4WwQxZWrly54MZZ78gmmSnjEXpi7Wa9lwUSZo1yRaQgyE7DmD3hZrZ+KG49qcss3H4QMjQLrms5zbYhXRA5soBYQwehON6Tdaini5D57OlQyRTZg4wtXrx4RJohN+qiV7xAY9RFJ1A1dZM1tE6d9TZ4GRr2CEmD1kFG6+3T+CUz9lrPZbcGX1tnbwCfLQ8/kHoyJTv2ClGTsVIfG2GTTqpT3m677Ta+w07x4hlU19VCb/ga36r31FhHXu83ggZC2uzd0SYIErutd3XwD6gh/usN6p63pvlDH/rQPH7YEFnTwea+Z4tsTDn8TEaU7G+44YaRh3pbtbLFFfp1UhM79Lfumak3cYsRfJBdkKX6oJQQYDrZe++9R147LaRbb701N9xww6hb93qwQfsFZ+/aSqZoaT0pSfaazfJ7z3lfTIeE8mfx0F0bH/zgB5NMfetxj3vcGM8rD2yCLYh1Mjn15nT9mf0a7BzvYg5kusZ/CLH+p2aAa1xgw1ZjkDFZiD/6Id/zR1kBWQK+4TMfqv2d7/HFFyHd+PP3zDPPHDNzYoD+hGzEjnpH1zHHHJNk2h/hUTzwPt7qCVbQfTLXn9Dt5vY1aosMEoRcBondQeDrCXbikBhv7CALoZ0+y0jpL9WDD30KuyYHcfKCCy4Y+1SoPr3LMrMP/RAZypDUWMsfyFibxP06HmEXskzGb2RCZvbKyADas6PPrntEjV3IzLiMf9QT8fBnDKEftLJHP1dP78RfvXfNfuFvf/vbYxbo9NNPTzLtX+rKD75Odp6zB9p+OG0xfqIDeqZDfTe7YCfslu43de/Wpqhnajp16tSpU6dOnTp16rRN01aRqWmtZcmSJSOyYJYL9fXXbByyADmHMkFnIRlmv2alZq/eg6zU02XqGnjPm3nuuOOOI7IEdcODTApezfStBTWjtp4VTx//+Mfn1WkGDXWBDEDMob5m/BVxx5+2QYOsqyVD5UDEIQLag5xwISNTb6CH+kKp6t0Lvlc+GVeylnSfffYZ5Q4dIQvoBCQHcuB7ddTMC2QJygfhridoQRygJBACSAW+6n4U70O7rW2v94dYO4wfKKjz2tlGtal6X0Xdx1EJAlYR3tk1/OxMhgQqg9gNtIXsILx0A5Wjg3r/BLuB3EIZycz3dFZPgyErqA1U2/s+1xgiJtCZjFBFNyFxbMx6cfLhdxA0NkYO9eRFtrN48eLx/57l4zLE4g50uWYQyZwunEwoxvAxsqn7omTT6k3zkFg+98Mf/nD8f6eFtH79+pxzzjmjX7OdijSLlWy/3uNBb2xXLIXCsnX64QN1X4B4BFkVL8S7r33ta6PtiUH8QsbEu9rAftkIZFf/AHlmJ37nf/yebOppemIcRFv/VffQ8TPIc937IwbWfSWIf/Mh7ZIdqKd8kr2sgnhhBYXMFbn4e/PNN4+xTbaoxkKx7UlPelKSaayRBVI3vdZsgPL+/u//Psl0jxX/1g+I0eIA+6un7ZGV7+lI3LB6QpbOqVZiHt3XvX51xQQ+IPLeY8faRdb4Ev/Y1IMf/OB86Utfmtem2dUGs7Ki59q36tPphL3rW9mROrVFOWTHX37jN34jyXTVBmKnsnNIPWTsd2PKuorD+Iy903U9BVB7/FU+OXje37onyBjqoIMOGt91gq6xgM+I79ATe9W/aQvZiW+ywOwcL/bY8Om3ve1tSZI//MM/TDLt32Td9tlnn1F/m6KeqenUqVOnTp06derUqdM2TQ3Scm/SnnvuOTz3uc9dcMqTPQDW71qXC6mABpkxQpPMPs3e694YSBaEo56AVE/dMrP0/TXXXLPg7pRKZuL1dnAID7S+nupiVgyJgBjU88ahK9A6bYIcQPshc2RaT5JTP9lBqo844ogkU5SGndQ7VCAC2ocv7aYD5UOBkPbgA3K3atWqEbWoZ71DEGRoIKd4ghjQmzbJDpA9JBRqCdXxe81eQZagHuwKn+xXBhB/7NTvUBjoD93Uu4kgvPVuFrKGYJAPWUNx2BK0B7/ev/7660eUjEyPPPLIJNPTwtQBjeQD7Eo2FFpX12GrGylH9sD7/rJzsoDEkRXZa5uYUG9fZ0/44he+Vz5kjk74BTl4T710xb/slat74PjZokWLRgSXXbLxugeG79R7JyBY4pE2QY/5pP16n/vc55JMETCIaZW1+CeD8N3vfjenn356rr322vmwd6ckybJly4aDDz549Hf9Btt5wQtekCQ54YQTkkxPgXISnmy2uCJrwS4g42xOH8Je+A7bZQd/93d/l2TaV8zuD+CPeNNvsRkxxed6TxXb1ed6jm1Cmj1PJvxBnPC9TAnbh77yAaeibe6eDqeGQoH1S2Jt3dtaT3z0e73zzt93vvOdSab9qawb9HnWV5JJ36JMcZre6/4fvKtL1gFvs6c7JdM9L+pUj/5B7CNr/Q5iT3jXf1nJUu/Ksx9F+forsZYuyFTWTgaK7chE4ZudGyv4Hl/Kt4qF/dordumll47v4JFvyBLoo8nWc+yp7qvlw8aY7jMiy3qyKB/nP2THf7RBH6tefuE9+1DogEysCDDuqzL1O38zNvC9GKIefUk9JZcNkRc7v/7660f71Ta8s4O6WkE/Zazod7zwbbJXF/uyUsH32lr3eJEZn9+wYUNOOeWU/PSnP91kP9UzNZ06derUqVOnTp06ddqmaavYU7Nx48asWbNmRGPM5KwhNws2y4REm3VXRBpCWU8cc1M8dKeeIAa1rad3QJUgCNddd92
IttW7HZStDZADCBcUw5pfPNR1rdA4WQizYLwhMvGe2TMUxKwZOu90NW2DmHvOzP/d7353kinChm9oo3qhhxAOSIFZtywIBEK7tcssXLbBiUJr164ds0S+q+tRoWXaRk8141ZPuoKQ1f0e0EeykW3CI1nQsTaTNd2QISSCrOha5hE/7JktKb/eVE63dAZJqae1sfd6ch3EEB9Lly5dcFs9dAWxc3p3+kzNlkKOyJCsIKjsv6KKvudbkN16mhlEzk3JUFCoDpTJ+2TOFqDY9VS3uvb5xS9+cZJpdg9qClkjQyiqjA1bhdBBZR/60IeOvqwN5E9W0D5ZMft+IPyIfbFLsYXdI7pSn/fIhn2Spe+XLFmyYG9CpyktXrw4u+666xi7IIdshb9Dvt///vcnmfrCe97zniTJ6173uiTTfgqKKjZDNsUH/s5X9GPqlVWU5RT3vve97y1AjNkCf9Pv1Puf2Ki6IbtipZgrhkJv9RP2HtizIgbVcpQvHtR9JHzkH//xH5NM7+mw5+3LX/5ykmn8EjOVq0/Xv5Gp+usJk2KsjJI+p+4xJZ9DDjlk1J866x0aeNDGOkbwvN/rqgqyZUdO/HLKlLZ8+tOfTjK1K3FB7Kt8iv10rz91x4r+iEzdraKPVy7+2BrdkFkd45CHGK0e/bQsnZi+aNGiBSe6sRN/vSvesxPZJDLBa70fj8/oV+peGuWyb7rQDxrvkM1pp52WZGo/yhOT6ymdMov6cnZI9t6XxTUeVD5dkyX/xA9Z1/vo7PU5++yzx76t3k9DBmIHX3bCobZqgzEfHtiVfsm+blnQureTbMUi5c7edVezkrPUMzWdOnXq1KlTp06dOnXapmmr2FOz3377Db/7u787zobNVpF1ijIykO56ihmEE0pU75CAiCoPygtRqAiEGWpFtdatW7fgnPt6azc0QpsgBXgwA683lmtjvaPFe2b29XQaz2mb+qF7FRn3V7ar7hEy4683LFvbCuGAxNc7Fuo+j5otgJzVE73M2q+99trx1C4IZj3nHtoh+1MzFfYWyDaoWx1kiaCD9dZh9eGVLukMEkWGfkfQRDolI4icbIH363pyKI16tJtM630mbACyQS50RycXXXTRWDc7gpLUk+XqGl3oYLVfPJA5ZEpbahtlCTxPN+wOfxA3aFG9VZtu2ZGMEBlAsvBdT0XT/iozfkpm/AKfkC7l8lPlPulJT1qwVpydqcP6aRkAqCQ917s4ZjNts3WJi/V0NMg/3vyuLbOo5XHHHZfLL7+8p2s2Qbvvvvvw1Kc+dbS1uncNicWQyHe9611Jppk48uYT9T41f9mkjCt7sA9AlpQti+ls88YbbxzjOZ7Zs89OorInra5KEKO0lR/U/WCyCtriZC+ZTDw94xnPmFcfP5X5FNP0L3Wfo6yU7Hg9MZBvaac4xtbVBz3Gh2yJz7IYdGH/Gj8X5/bYY48FmXj+TF/82XP13j1INGSbv4qddQWC7LF+T0x0whc7UA4dieX+aqMsFYT+937v95JMY6kxzWc/+9kk0ywW25BNU68s+9vf/vYk07GHMQJdKZ8tfu1rX5vXLra02267jTLQT4h9+jR2QM+1P1Kn/qeuQHB/Gbvhe/Yt6dPruIg/8BfvkYnMpH7OPiFjGvxon36W7vFT95yyS7Hn85//fJKpDdXTSf0lL8/J/Oy0007juMEKKfFL3exZxs/3+hNt8P1XvvKVJFNdvOY1r0kyHZ+wf++xD3ZBtmSA10MOOSTvete7ctlll/U9NZ06derUqVOnTp06dbrv0VaTqXn9618/zlqhQpAGSKVZcj0v3awYYgXdMbuGYEIsIAJmirN3oyRTNAbiZq2oGecee+wxzvydWALhxruZNLQB2gF18zvezaR9X9FViDA0CKprdus9SBMUTzbKaSDaqj4E0SMraAtkAB/W4ZNVve2VzCD4EK56y73P9Z6RWaRFHZVHMj/qqKOSTE/qgspUZMrz1n+zE0hVXTvse+9DVOm8onr2PsgUQqYgTpAH63bJ1O/qg3BAJCBy0Mt6+zU/oHtrXGUGIB1+p0PIyBOe8IRxfX69G4CsIVL0K3MCzeFrZAQp9Zf+656weu4+n4OgQaz4FRmyW7LAt/rppK7DhezaTyLu8SN2y0+1k26sxa6n4NSz/aGy6IYbbhhlJDaogx3igT3Y12TNPH1CuPigfTt4dbofWbvbQeyoe2ue9rSnJZlmwZYvX553vOMdueSSS3qmZhO01157Dcccc8yoz4rAy1ZUlJ9+IZr2Z0E87ZEQs9mwPkAcFHf0h8rhK3yWLS9ZsmSMAVB4MUXGwj6MP/mTP0kyXR+vLtklNgt9dbIW+693VXheTBR78Ox9WQv9jrZCvsVmWQr7PU466aQk06yIeFL3PogTZKlc6/4h/OqvGST7ovg3H5J5OvTQQ8d9hvaS0D8ZkBF9i3F0QW/6BQi52CtGkw2Z1DtHanZcP6GPNh6SUanZCP1nzSSTiefEHzITY/Ejk2OlhHEW+fzFX/xFkmn8Ea/oRkbA54c85CFjnyXW8Sl9YM1KifNkT8ZiHR3JJOLBWE8/SIf1JEr18Fm+zm/YqX3d/NAYgI6sOKh3lckgWXXEHsUYfGif8tm91SHGNPpJ8hMz7AVdtWrV2B/pR9i4Ppjtk5UxgPhHBtqmLuMXv7NHsjBG0GbEvupdWQ984ANz0kkn5aqrruqZmk6dOnXq1KlTp06dOt33aKs4/ey2227L2rVrF6xPNGNH0CYzPgiX2Tckw0zSul4zTYgCxAyaC8k0M4W0e8/3ZqBXXXXVgrOzPQt5hhyYjSoTj3Xtv/fMSqEdkCwIgOyB2S/EwCy23ntTT7KANJvJm8GbbUOw6i3IdGOfEVmQKf61EyplDWdF0sgLOlD3ArXWxpk+ZFmblAnBrrfLQqSgjs5Rrzeuk1k9jQpCBpGFXEAmIFV0A92BUtb7ltiK76EtZAzFpFv1s2N8aD9bcPKKLAeq2RBIovLZwg9+8INRllBC9kGvlRcyoGeyZx9VBuy/nlxX93xBAfkT/yBbPq9c/OIHQgWBg+RB6CDG1o9rJ3SSvUL46mlu0FO65Kc+Q7DxQcc33njj6Ft1nXTdxwSZggqTJVSNHfqer0Jq2RF0scaEeqoSf/D9ddddt8VTZe7vtP3222fFihUjcl5jF1uhVwgkEvvokT3UfWVirM/qEavpre5DY8t844ADDhj7Ebwogx/JXn/sYx9LMvX7V77ylUmmflBPaHSCkeyi2FL32NiHwUbFHrLQH/Jj/mdfpLigXDFdXJBB1aez5XpvlIyNfvN973tfkmkmCtGluGf/Ev7qqVnLli0bYxbe9D/2TUCgxVonc7nPqPbB9KlfkqUQy+oJWv5qM13JAihXG+o9WdpUT9EkS/uu2Js4Zfwk4yR2yNCIU2K+fVPaS9b1pD32673169cv2Mulf+ALYqR36t0o2kQ3ZFb3FtMZ/Vqdo41In1tPo60nFLIjfbXy9Ysf//jHk0z9B5E5kp2le7LCJ9vTr9EdudGVFThWA+gvt9tuuwX7t+mHXbFLhAfxrp5SjMQUMm
ZH9Y5F9WkTPzFO4j+ttbF9m6KeqenUqVOnTp06derUqdM2TVtFpqa1lqVLl46zbrNds2ozP7Mz6BB0FIrsOchXPdnJTM/zZscVyYQqew6SoPwHPehBI/oCcTKbrHeVmJmb7UKYIGAQLWVDHKyHr7fEW5Nsxm22669ZcT2No94DAAmrN0u/4x3vSJK86lWvSjKdnc+e/JYs3ANUz9zXXmgR3R522GFJpmik5+z5mb3XoCJG9ADl0Ma6X6migNBISAH0xfuQNkR3yoNgQP+to1U+ftgdVNBeHjKHOvoeSlhl6H1oEmSDDmezAMkURZK50h7trnuG2PuVV1654G4e9sIu2HPdU8Ie8FhPGVM3VM/39rbQmVOIjj766CQL71EiK59lNcQAsqID9kqWZI8PMq9+UU8E4o/8FHmPbWqn/RX4hORdd911Y5nQNrIQM8hCW5TBTsUt6GTN6tY9iDIBZFtv0WYnkDWZgNbaqNdOC2njxo259tprRx/gz2Kh2Cc7In7oKyCR9G9PAd9hQ57j1xBvCDY78Vd89Jc9rVy5cuwv2L3+p95rwQa0xV4bsQcPYpX4sLm68VZP3OIXkGIZVJkcGVOrLMhOrGPz9lLIoPIlmRgyFZ/snRAn+KJYrp1kbB+JflIc4mPk+d3vfne0A6g8+1BWPVlR1um1r31tkmmfDrEmo9pnq0f8J2NxgQ7o2O/a6Hdt1Z+JYU7QspdPjK2xXzxi7/WkVn35McccM69+/sC+tUvWnu7IC2K/4447jnUYN8j2VFnp++iCzPCs/6mnxXoeyYIbY9I7Wehfav/JLtij8Q0dGzPoE+pJdU7eNZawKoRskP5Je9gOG6M772nfW9/61iRTv6KDyy+/fPR9ZckW8dG6J1RfS7baQif6L6t9+LCsEZ+sY9A6zrG6goyWLVu2xfvUeg/WqVOnTp06derUqVOnbZq2ikxNMkUfk4UnHEE8zE7rDb2QBjPJelO7WTnUFeJg5ggFgEaZvULszRDxuGbNmjEbUNdR46m2wcwSSgPxqqclQZagqGavvtdWyBE0r6K4ZvqQLGitmX3d+wD5kgHCr3rIQnaj3uVidm/2DRWyVhsytjmUCfIyixaYocv2QDfYAwSgrimnL4iCrBc7gCz47C80RZt8rz4oDJ1BQZzqAqElMwgEXTz3uc9NMkUjlQM1gWhBE71fEV66wS90xnNkz67tGbPulnx+9Vd/ddS7te72p0F6oHrul4B8yeB4r2byoID19myIliwqhJhdKreeusY+oD9kU09P47v8k85kXcmOTsi63prseXJA7FV7683otZ4nPOEJ41p2smCvfIpvendz+5S8XzOX7B7v7lyQsYHcarN9AnjnV1deeeWov04LacOGDfnxj388+q/YKzbX/VxiKduCPNMzv6U3+pQl4DP11Dr65hvKEw/4+6pVq8bYRK94wSsbrDZaM+jaqr948YtfnGQasyrKC6kWPzynTTIg6nnBC16QZJqpFXuhumSiD/eeeIWvejJkPUES8m4/rtjref0U/3b6mX6dLmazKDIbyhCHZadqn15jZd1TQJ/6Kfuh6p4Yf5UvboipYixZWQWhPnEI/8oTu+iCbGUpfC8rwq7pXMz1HLkYK7EJ7ZEVqadtsYEf/OAHoyzr/kNtow+yxbP3tE3/ww7Juu4BtSem7pEUMz3PHrSRD8vIsHu6YI/22NBtvR+ObMm03oVkHIYfsYQc/NUH+F3sJz+x6thjjx33v4kRVkrV088QX9O2uneYHfJhRDbVJ/VT6nG/lzGsvvirX/3qFvupnqnp1KlTp06dOnXq1KnTNk1bRaZmw4YNWbVq1YjOWldo1gqhgjRALqGmfrdmvN50C81RvhkidKme4W99b72teHa9PdQC6lDPM5eh0Ja6FhnCDeWA4kArIM0QArPg2RNBkulsWfZKFgpa4j0zcjLQdsgGfvAPZYEW1duzIRPqkZ3AL77cAO1375nF1xNSIAizCJiMmjark15lcsiMLqAasyd8JFMUo64BZj9kot56PwA0j6zqfRS+l0ly+hgdVWJndT9GRUi0x300bAmfZM+mZAD4A11Cj775zW+OZcxmFpLpeuma/VSH3ytaA03hQxClijTxweoPdKNc5R133HFJpggvv6v3BkCZZMXoVvvICqpJJ2SEX+Wjehpc9Ssokzuw2P3ZZ589lsn+ZGD4BJ8i44r61TsNnJgDwaq8WYetDdBACK2bx6usDzjggFEfnRaS08/4P/9ig2I9W2fjbFicYCP2F55yyilJFt7Rwsf4pLgGQa2nEomHMklPfOIT84lPfCLJtG9j52KOOMz/xMR6I7vYyK/db8HWZBm0pe5RIRMZF++Ltfh61rOeNa/tZKtN+GCzkHbIbb03RywVt9THzvl/PWnSnh4rD+iSzvX/K1asWHCHiX6orhSp98doMyRb3fUkObxpQ0XZrRCgqzreYXd0XftTbcMP5JyslSOLx55lWup9S/bssWdxpmbxtIcOyEvs1G+uXLlytDuyISsrWciI3RpXWNkii0nv+sBnPvOZSab+IDbi3d9qx/YHicl8Uf117w9Z15PB2AYZaQcZ6Wf4r/f5mX5XX+A5Kyrck+N3pFy2+6EPfWiUjfFFPYXTfXzekW2qd2GJX3Vfm5hAh9oss1xP/yMzuibTa665Zt7Krko9U9OpU6dOnTp16tSpU6dtmraKTM1OO+2Un//5nx9nqWbRkGXrHCEK0JZ6CkNd0wwpl70wa4fq1BMozOYhY3X2b53j0qVLF6yTxpsylWG26ndoCYKCeA6aYzZr9grFN+uFIMhSWTtpnwa0Fxqi/IqgmfGSkbPkX/Oa1ySZImpQKGgy9OfEE09MMkWLzK7pBlJo1m8WX2+2xQd5LlmyZETDyIwsZlGyZJpVsk8HAgB1ccoQdB7iZB8HO2M3ZAzdgHzVs+9ljOp9H3QIvYTWKJ89Q+hkIOs6Uagm5KuejKI8Oianmk3AHxtymsl55503rqlXR0UToXV8oJ7cBqGiT8/RBV7oXUaDPdIBe/a8eusdVNXuq0+TCT+spyZBpKGN6ncmvvfYu+fr6Tj8i51Dt/gPBO7Rj370qJ8zzjgjyVRPEH+ER8gXWfKJU089NcnUnqDG9dZsdsvHvU926iEr6F6976nTfFqyZEn22Wef0a/EdDGPTYnBvucb+gx+yB7svaEncc2acrbGV5H3643dbPKKK64Y/ahmt8VrZTvpSHZcrBVDtUGchqrzn3pvhBgpnuhj+VG9uV1MEvNkC6DCbFms1J+87W1vSzL1KfHIje72xECun/e85yWZZlNkcvQBvsc3nZGH8un4sY997PidtoglssRkYzwhxshykam/9c4t+43oSHaCbOhAzHrzm9+cZHpHEDt0upmYyZ7YBJ3WVRuyEWK4+MTO2Ijxk9/xT7aITVqBoJ/Elzv0+EMy7cPpQ/zHs/GSsmXanACpLr6rbOWwA7IXS41P6jjHeIndyWLI7NV9uWTFRr7xjW8kmcpaTDC28Xw9/ZYu+GldQePUQrb2xje+MclUd+TBn2dXo4g77JC9iC/GL
dXXZdGMw/UvytFPyYY5Wdc4jQy0XXn8SP187sgjj1ywF32WeqamU6dOnTp16tSpU6dO2zTd6UxNa21FkuOS7J1kSPIPwzD839ba8iQfTrIyyYVJXjIMw7VbKmvdunU555xzRrSm7gWAJpkpmoWaBcuK1Jtv6/n+/pqV15ORzLrriUZQJc/dcsstIyJgpguNMKs0I4fGQFfqqR0QLOtTIQObOwlJG3yuJx5BjCENZGlmbhbsPXwq9yUveUmSKVJW0Zl6ggkZaReEDkoDHVaP56EzPtcTuxYvXrxgDwLeIQrqoGe/2z8BCYe+kbUsAgRLG+u9NW5GVg6E3Glq0J66HwhSIfsB1aFL9gxZhVT4vZYDlYFOQESglmwQWkXn9SQY8oKsPeIRjxjRE74AubJuX2YGgkoGUMG6J4eP4oFO8OBEOzKlO7xDHSFQZKweGST+U+96qTeU0zXfZr+QV/ziq647pwuxyf4U2RC6gJpD3Pj7brvtNsqUvsWtk046KclUv2RhP4G21HjGPskAj3jmW3hjV/Wv8ry/2267jXHnvkJ3Zz91880356KLLhptv554J/bX068QP4cWsz0+Zm05PxdLxcZ6L5EMrDgpDon9P/rRj8ZTC8U22SQ2wO5l+ityXU94Y7v2T9T7M/BQ7Yhtkh0/dJeJvptf6p/EEZldMhVbIfh1NQXZ1AwTOeiv+Wu9X03/Jx6IP3wYYn/BBReM/iPOsgzgSwAAIABJREFUH3/88UmmMUJbtEFMgoTjxXNOqazjHDIT90844YQk0z5f5tXKBTGZbPSj9U4vNiDG+d1+DPFHdoHMxHT1iFfsHD8yQU7e4h8148TG6gmy11xzzRh3jbu8oy+lV7wYsxl/sTunmtEz2TrNr57Uxdf0f1YEaCMd6C/5Cx+s+11lkMhQ/6seMVnmhc1or76bHPTHMjBikn6UjMlSNqSeQrrTTjuNWSB7/urKATLjWzKRvqc/PiPz5j6mP//zP5/3uzGh7K2xgn111af5/E033bTFfuquZGo2JvmjYRgemeQJSV7fWntkkj9OctowDA9Nctrc506dOnXq1Ok/mno/1alTp073E2pmY3e5oNY+nuTtc/+eOgzDZa21ByX50jAMD9/Suw95yEOGt7zlLSNC8dKXvjTJdH26tZ/22jz1qU9NMl3DWU+OMPODrvhrPaZZr/cg+Wbj0CCzerN/s+JbbrllnInX883NkBH0AqKABygOlE7ZEDRtwWs9NUq5ZuaQjHqfAL600ezY7NpzZAzRqOi+cvBn5j97O30yRXvqLcvaB3mod8vUuzfWrVs3ykAWQYakZn+gZmRQESnoI1lCL6AUZAgVZAfKrydneb6eTlNPRYKeyDZAk+qae+tw/Q4xsZ+J7KAr7FZ7ICdQTmiT3+saWP5w0003jW3Eu3ggAwLpqifBsXMyxDvfwXvNjqG614YM6/0ydElWEGTl1exZvTNidl32LD8QNzqEvNWMI1mzc4gaP4SYKVe9dLX33nsv8Fkn17BPqCLfc7dIRVzrnUDsrN5pQAf0rr56sz2eoYjnnXdevvSlL+W6667b/HXN2zjdlX5q7733Ho499thxzb+9MLLcPos3Yhl9sQ2xjz3ULID4Uu+ggOhDMOlXhke2g28tW7Zs7G+goDI1TqwSO8VEflZPdqwneWmzuAGlRdB9/aS+W2bF79BXNsjGxaXTTz89ydSfINiyIrIHsg36P+1xWiIfwzffkeXQnzlZsGaO8SU2i7Hnn3/+GCPJuGafPeu0O0i18Q7e6I2dkKlYV2O1uCIDglc6FLPJXjaBvZGBGEy34o94os3KF5PZlvLYo36I7o2j2FS9KV4M917tZx/xiEeMZdRVFsYdvvcX7/TLZ8RYdsGu+Y6+WL8my1TvkWGP2uRz1Q2Zsfs61vS5xhQyIFv9WB2PaSc/1l9pF1vSbn7H3mdPELbiyfhEWfSMZ+Nu/Yk4Ryf+1r3CxkOvfvWrk0yzXHWvsrjoPX4lHj784Q/Phz70oVxxxRWb7Kfulj01rbWVSR6b5Mwkew/DcNncT5dnkvbf1Du/1Vo7u7V2tqDdqVOnTp063RN0V/upfjFpp06dOm3ddJdPP2ut7ZzkxCR/MAzD9Wa0STIMw9Ba22QqaBiGf0jyD0my7777Dpdccsk4O7amz4zerLWeBW4mCRGoCIJZKIQDSmV2bTZrdvzFL34xyRT5Vq9T1Kybf9SjHjWiLfW8dAgSNM4sFu8mcNA76A2EGNIAEfCembhZdN0j4KSIetMsNAbi8KIXvWgef5C4igBA8qA3EHwzfPxBo6BN6kFm7QYEkAOyhaz53bric889d2w71IOe8ArN9xmapmzrVfHIHpRbT51hP+wJUqY8SAbZQLyPOOKIJNPsFLtCdV0uGSMIXT1ZpyJuZFQRvHrKmuwDXdU7jJyQsnz58rHNMocf/ehHk0wRLv7MN6B20Ejo8cknn5xkat9QNp/5MmQWisN/+DSqJxuSBUSMLv3Fh8/stt7eTXfaa10wdBLaCEXkH74Xc2oGlu7oEt8XX3zx+BtfhEjRV5Ul/XseksWOa9aVPbHbuqfjZS97WZKFa9ZlZ/F62GGHjSjcfY3ujn5qr732Gm677bYRoWS7YjC91Ewbv6s+Rt9iNjtgF/Z1eM7v7Ie+a3aUDZ511lljjKx3t/EnCLE21dvB3eMhq1Dv0KqnYiq/7rWxZwdaq421j8dH3bOqHv4qY0MW+NZPQczFQLKjM/1n3YfrdDSxXByiO1k48nna0542ovGySs95znOSTPfM6DfoU9vdx1JP8LJCRXyXCdKfVRmSeb0XhE5e97rXJZnGTtkoOmE34gJ7rHeekCmdkIU4Iu5A1lHN9rM99deVFWK09nzjG99YsK9ZnKp7VvR9xjnqcs+ZLIF+SJbbPg77gGTX6komdl9PB9VPGAfWE+mUV0/5ZJ/GNghfYr9+TFaWHPyV0TF2JS/9KpsQm5Ax0w033DDKxDhF5gyvr3/965Mkb33rW+fJwu98kI/yTSfg4tVeZfZl/MaOxCp2x9fJYvXq1Vs8qfMuZWpaa0sy6SjePwzDSXNfXzGXzs/c3yvvSh2dOnXq1KnTnaXeT3Xq1KnT/YPuyulnLcm7kpw7DMNfzfz0iSSvSvIXc38/fjvLG2e7kK2KiJvZQwKgK2Z6ZnZmfGbBMjRmgE5GMsOsKLDMjOwJ5ATietFFF42/mW3ivSI8Zsh1vSHUBTphxqxNEGBozXvf+94k0xm5mbxbd6E2CHqjXHtdlGtWrU1kTSZm6dDiui8JulLXXUIV1TOLtiTTdb0yPZAW9UJ/jjnmmBE1MfOHOkCg6BXJuGibuusJXqjeTu19aAuiA3YANfG+tkBqoUbWu0JXZCDxXW8Xnj2FalYmdFlPrIPCQkR8D13SLu9DJfG96667jkiSc/bZrYwb9NAdOewX+kY3eOWr7AJB//DKnmr2QAZHW/gwPyFrfMgyQFhlWDxPpvxFvXRM51AnNkfn7BHCxvYgvjXbAaGGDO+zzz4LUHLv8Bk+hiCx9FTXkdOJ8shW3KQDBNWqa+v5Hrr+
+usX6G1bp7uznxqGIRs2bFhwnwL/ZquymtBaSKOYCC2F0NMblNX9IvZK+Kz/k4WUNXFKFX6cOvTwhz98tFd/a0xiazKZ+gP9mmxTPXUMssxe9MH8SH0yL+oViz2v33z605+eZLovkr/JDH/wgx9MMl3ZIG5Af2U3ncqmfLKmE3EAf+KXu2DU7zn9nLGB+0qsjFi/fv1Ylr2+9Aj956d174nYIg6Qnfgthvid3mtsUz4dygqI0eqRbav7d2Vg9CfqFyOVS8a+d7KemKm/0b/V/S51747PdVWLfk77Fy9ePLZBP0NWsmBkpO83ntFWqynqKYDe09/wMTGU3tmjttYT3Pgie9MW4y6yrvfF+H72xMJk4eoU9SL2zz5l22RojAtRPe1PP/xP//RPSSZ+5F1xim/XcbO24UGblE2/9bTNmp0lOzGn2rlxFtn4fu3atZnNtFe6K8vPfiXJryf5fmvtO3Pf/UkmncRHWmu/kWRVkpf8rILWr1+fc845Z8EgUWDRWAN+nQUhMUidR73Ek0EaFNRNj3VTmQEdoTIAm8j+7d/+bezApLEZHcOo6XMdTr0g0vcUy3DqQQTayqkNULVtVpbJ1JDIUqASMOpGabJ15KBBrPbRgQE3JzGJ027Bgg58L8Dhi2HjV/mzS7x0Djo+ZBBQL0gjc+8ZLNaBKH1zqsMPPzzJdEMjWeBR+fUSV52ENtl8yNnJgoy0w+TKMoDagQsidMRW2D2d6tR0SiZxOicduYGFSZd6LrnkknGAWw+4qMfP0qMyBXV2i0d145HM6MJEzmCpblzkL2SnY1OPchHZ0ZHBmM8GAPX4yXqcpKWlArbYgy+2w191mrObF2fbqV0HHHDAKHc+WI9bN/gx4NN2dZIZO6sHYLA7bajHVZuMs2cyxYc2Xn755WM8uA/R3dZP3XrrrVm7du2CZcv8Tow2wKhLhPUn+goXHLMlz/mdbRuoO3aYfsVOvkr/swdF1CWtlkaxNQNfZdZN6fy8DoD9bhBZl6CoD5jFH8QeVwc4/lj/wG/5ijayUXzXC2XFMfz7HVhDN3RnIGYJmHKBKvph/g4sEef43sqVK8dJLlmTv7a+853vTLLwyG68ijWzx+sm08mJ2Ewm+lbv1YuC6yWJ+juxyt+6RI9OjS2Up12OEwbKOJxFf+jCU+UrT7ypy+zq9RLapZ8Us3fZZZexDjypm13RF3tBeBZraxz3HiDbEsLf/M3fnMeTyaw4z1fZrTEiO6lLpsTgo446al47qu/TKZkYZ+nrjcuMVfgXP9BOY1v8G7vWi5/p5BnPeMa4pE9b+L4JFttnDzUm1GOWPa9P5nPKrZN75eiD6dZ4nEyTaSzdFN3pSc0wDGck2dx06el3ttxOnTp16tTp7qDeT3Xq1KnT/YfutiOd7wrtt99+w2//9m+PCAZkGUJhdgmltUTGLBa6qy1mgvUiL7NqCEK9VKgeUYiUA/HYaaedxrrNdKHqUFYzbsepQpKgGHXjseehftosFQghqAgzJMIsVvYAsmEGX4+VlKUw44VgQG0gGWRZN7bhv6bMPVcRLQgwtAqaSdfKM/u/8MILRyTZTB/ivblLM6WYyYje2JG20nfd6Aih8jueIGoQJm3UJtkLaCCEAZ/4qJtSyRyqwoZknOrSPtmSuhGP7ciqQMZkrOiSDeHrAQ94wKgH9szuLPWDDEEjZZnq8kK60la+SiZ48RkaA4VRf13+VY+7ZX+zKN4sX8qFWEHgyIYtyCiJOXSgfuXxr5oppXsoKduiW3ytWrVqROXwZIkDVI/e1UlflmWww3oMNrutyG9FnbWpLkn1vPpWr16dk08+OVddddV99kjnu0K777778JSnPGW01bqUSTypy1drZkcfoS+QtfyzP/uzJMnf/M3fJJnaWF0aI2ba5E+/bH32OFkxi/3yZxkMfZqlI1B+qKn4UFcA6Ffq9Qhs2sb6epyvzAg/5pd4f9/73pdkKlP9GptVHpmIgXynLiurF3jrS2RF9G/aV69O4Mf1qoPZy7PFNvoiw7oRuvqdJegOARK32ZHMqphswzXe2RUUnizEZP2G54ybxFJxg4xlDTxnubQ+nr3TvXKNLRynz07rgU/aV/sOv/tee2eXN+kHxEA2zxe1tfbxeDDu8J4MDVnWg4rYo8/8hN3WTIH6/a5eK0/09fVgKZ+Nyxwiwn9kP8QIfumvDA7d1b7lFa94RZJpn0Cm7Fu/fvDBB4/60K94R8yovNSrA/Q3xg51Wb8+Vt10qg1ikDbUi0mNi5YsWXLPH+ncqVOnTp06derUqVOnTvcW3eUjne8Ouvnmm3PRRReNMzfIoZm7GVq95Mds29pVSL5ZtFkohKGuYYVsmKXXS6DMvu2JmF0j7R3P2NRp3TP0Rh11tmnWCnHCKxROG6EodU2m7JRyzPyhQfUyQ2iQz2bX9RIrsoEWkQld+B6fEAGIQz0atO4xqHuIIC/QHrP/Bz3oQaO8oYPQ9XooAWQACg9ZUiZ91uO1yQT6R4faQocQc2hNXbetzdpmLxfZQnugL3TNJuqR0lBHGSAIFzRUuyErEF/vQynJ3AZWtkmOF1xwwYj40COZkJF3IE7WMkM4tQlP1vXXy1nViWdtt5GSP7EX+wj4h3rJ3oZMbeYH7FR9YgH/Yc/sWPbL73SFxBb+Vi+Ys3H6xS9+cZIpCiVT9ZjHPGZEByH07ImM8MR+oGkQVbohE+VB6tmL78lGfKzIKqTNcb2yrDvssMMWN2De3+mmm27K2WefPe5LYbP8XbxhMxBvMVec8X7de+fYWXqsew3YVL2QVj2QTgj8xo0bx9jxsY99LMnUr9kClBzvvleWPlmZ+ge2ys/ECW3j9/xSLJdllCl65StfmWQaF6DCdfMxGcpm/c7v/E6SaQaFjMRomSGyImOyFTO9L7vC/uuF25DwemnismXLRpnQhxgqa6Qf8Zy668W/EGqxTj8nDvBTcR1vMvB0WK9VIHtjEX9lE/DLHumOHevLtVncwpf+T6wVd+oF4/iwikSsrKtCjAH8fvDBB4/9Dx/Aq36n9lf6DW0Rp8VtvJFVHROSjRhZD9JQrxisP6EL77EJ/LBv9oYvfke3Yr5xmfbbQyM2sEu6qpvyycOYQP105bj/r3/96wtWAfFlvuX3ujqiXkciVpARu6R3faj3yYi+/RXXZDzR+vXrFxy1PUs9U9OpU6dOnTp16tSpU6dtmraKPTUPechDhre85S3jLNzMHuJoNg3VNauF6kIMoD32r5hFm/GZ9ZqlmkmagZoxms3aA1GPUV6zZs2Ctf7QF8gY9AUqbwbuOWgEnusxdtB7iBXEqV60RQZms9AfyIIsBmRAW7XFc9AY6xjZhfZB9GsWAp/4Vg/ZmY1DOBDEDAJuPaZ2rVixYpQVxEpmBK+QASeWkJUTffDEXsgOsqAuuiAbKIc2QrohCH5Xfr1UTD3QPsgFGbINKCFZ15Px6gkm/rIl9l0v5tIuKA25kNvspW30Cclk1/WiUhfH8UG8qoOdqIvPsdu6Vp3M6ZbfsEuIGhvAO5QI35Diul5cPfUy0Hqcu3KhqmKEzCcd8K9
6BGe9CJXOZ7OCMjNkqu1kI7sE0YVO8xFl0ZHn2BH7qT6uXm2nm3q8PJ6XL1+ed7/73bnssst6umYTtGzZsuHggw8e17+TJ/9jG2JuzU7LJMtOsH0x23PKlUXxHhvkv45tZYv2Acjw7rrrrnna056WJDnhhBOSTGMQm6F7CHBF3fUPjlyGDPMTdSuPjeu72aAYab0+ZLueYMmGnejH/yHoniNj/Qt/1af7XDO59VLEmr3Ap5h99NFHJ0ne/e53zytfn/CjH/1ojMtk4OJHn2WlZIX1R8qif7GXTvw+mxWalQES4+peLu+TMRID2RNZyyTKCMuciPHaLNbhiy75BdmJazJBZM4marwSj/A9ey2Gk2n1cey57hsS/+nEnkh2gMRtstGvGINaUVMzeMhJYVZ3GFNos+OuZUhlhuqlkWI42Rl/yRTJwBjjWLlDJ/jzlwyNIfgznZNP3ct97rnnLtgfTebs0mWyfEkfLRY4Fp0d6bdkZuo1DezB72QobtaTUZW3bt26nHjiibnyyiv7nppOnTp16tSpU6dOnTrd92ir2FNzyy235OKLLx5nYlAeSAEURcbGrNesuN61InsBWYA2QzQgoWaYEDKn08gUmcVCp9CGDRvGsq3hhUBBiM0+obJmyGanEAGzYW3BqzZBRSAL0FvIg3LxbGYPuYCSaIM1n2bF9WJAM3jrH/FrXxE0SLvqBZK+hxhAp6DLEJSK6OODLq6++uoRhYMUQC/qJXZ0AXW3dleWTNvwDo1zgRrZQDG1WbmeZ19kR8b15Dr80VldxyorUU/eqqdksUeoTEUhrcGmG+tz2QTdW9NK9uq/9NJLR3uqaL82Q0pdblnvUoHyIfrTNkgWHsmMT7MfdkfX/tIBf4AC2ScEKYacQbfrpZ4yO+yzXoZW7/whQ3bMJpTD9iCB/Ao/7PqGG25YsAeKfUDGZCDJnn6gh1Bp8YoOrHnWRm2ARvMfMYT9VsSMn1x44YULkMROU1q6dGke9rCHjTLSP5Er26Y/SLj+htz1GfoA+uOf/L5mguu+SX0F/dK/9fhPfvKTxz6TP7JT/l73O/AbxJ/Yfc1IaqsY57Q0PHhObPY7W6/7N9g4X9EvkIF+z95SMc/73qtZep9lCcQTPsRn+Ka44zQ2MZrOyOvGG28c7QDaLoOmjHqZZL0jhT6Vrc+sYwbl1vvMXORoH4cxCHuRzWBXsnf6RdkBtlAvakT6EasqkBgpFuoPtUe2j87rSXKy/z7Tob1F+++/f/76r/86SfKnf/qn83irJ8/xJXq0F4VvsV+ZSf0B++SznhN7awxlp5/+9KeTJC9/+cvntY3O2Q1d1bsP632Bp5566rz2iRn6W/70hje8YV59vrc39I1vfGOS6cW8+mH+Z8zDFvfff/9R/vpqMUIMkb2qPkUWfEi/YrWOsWQ9BZJMnOpHl+yfr+ORn5x11lmj3DZFPVPTqVOnTp06derUqVOnbZq2ikxNMpm1malBQSAVEA7IB7QWYgHBt6bYGk8okNm79YgQhOc///lJkg984ANJFp4m4znrEs3+ly9fPs5Goepm3mbmZvD2d0BFrIs1CzULhiBBCpw973dt1zYzeffgQK6Vg4964hVkHpqCfzI6/PDDk0wRMOV4riIOZATxgDTUNdUQBwSZgGhAzCH0t9xyy7g2nF7ppeqRLpQJ+ZZlIDtl47XeMgxpgIrIHLI/CBJUhF1AIPyF6pBRXTMM8bI2HrFXCAlEt56ixO6VD8kgc3yyHbZJFz4vX758RDydjsL3NncfC97ViQcykbGEtkBhyJw/1D1nkF6yUr5s1Oc+97kkU7uFPvIjGR86gGIrR5t9FlsqEkYH2gX15G9sj5/SFX5keiDj++2334hksSO2zk7IQsyAXNY9gHio2ZS6r8H7kHv2A+0Wi8gU8rpu3bp++tkWaKeddsqhhx46+hU9QH0hkGyk7sfgG2yNrdb7TfzO5+hHLIek8gVosr0c+tE999xzjN9sgA0i+yeg9eydn3qfn7J7/ZCYAwGW6eH/YjQ/4x8yq+KCfknc4T/qY7t8QsaJ7GQ9/E6GfOL9739/kmlmWbyjO8/xJb7J5yDb9QTC7bffPi984QuTLLw/DS98SswRs9RV72lhB3jznFgohsk+1FvnydR7yiOruldP/6mf1S9C5rVZtq6eguWvfST63RqnyBw/bE2f85nPfCbJ1Kbwd+ONN+Z1r3tdkmk2iD3o2/V5ZCPzwt7r8/Srn/C7tuOJ74nBxjfGSXTOP+oeUnaKr3pXi/GULJxsm/GYPh9f9Z4pGYu61+vjH/94kmmWRDYPP/pLz3/hC18YeWdXfJpetRWvfN3Ykl69j9iLVUlkxZ7Jst4lJXbwJ+OeK664Yswqbop6pqZTp06dOnXq1KlTp07bNG0VmZpFixZl6dKl4wkrMidm0RWZr2sz7bExW9/c7fZ1Pb6ZqDWxEAb1Wj8JSTUr/ulPfzrOsM1SzXIhCWbcZp9QOagNZADZ/4AXqAjEAMoO7TODt5YTUmU2LfuAZ8iXk+GgOtCYesswGVobDSGAGFQkn4z8NTuHYEM8tL/ui6JbKM2ll146Ikh4gGhBCaEhECn2AAmAXkCIIAXW6moDHZJBvfsHolD3/ajP53p6TT1tDNJKFvUEIiSrUE/po3soi3ZBMKCTUBmIB13UDNJll1022qU284mXvvSl83iG7h1xxBFJpnairHq3hjrxDD2GDMkwQgHJ1nPQG7qXFaN7aBDd44NOoNiIH9AxdBu/TtVRPmSW/3kOf3Utv/I3tReIHatDRrDePwLpxyOCNtZbq8UQMQeqXO9Fki2jC7LmT3Q/DMOCk5I6TWkYhtxyyy2jv/InuqdH/q4fI28xu6671++I/b4/88wzk0xtWb/HxtiFz3x59pQsGRixgK6h6GxR3FW33/mBWCVLJAbVfWBiEH9kg/oNcQJirJ/yvL+yYZ7Xh7PPeqcdGSL+J07ZP6lvt0qjnkjGx8RWPiL7AZ323NKlSxfsQySzulfO79pS7xqpJ0CSnXiubmMNCLZy8WQMoK8nM/W4p0o/pI3iiKye7BZd1ZhLlsYexlfazU/wrb31ZEntMN6SSTbuW79+/Win3qmnl7ET4yE+IJ6TDd9zP4uMIbsmc3G+Zq3sLdUv0A2d8ovTTjttHn9W3rAz9kqW2qxcfqsvMAaho3oqYM2y+J4MyUs7ZWLJ7ZBDDhnjDXulL/YlPtmnhAd/yZKsjjzyyCTTuEjv7EMm0xiw7uEiy7rffM8999xiP9UzNZ06derUqVOnTp06ddqmaauA5davX5/zzjtvnI1DDszgzWahMmaSZrF1HW1dkyqbYK252beZJxTLLLeexAR1xs8BBxwwom5QOWVCWSAKZrlmmxAfSIKZvpmnWay6ZSPwJhsF1ZF1MDs227UfxQxfOZACMoJgkGFFefELlSQziAekS3sgbcqp68Pr6Tb4Mit3esfee+89/gZtgyD5LGtQ17VCFc3syRRaAbXTJmiFcvAI7WAPsmX4InMZGUhVRV/qfiJUT1fDZ7UV9coo2WvDHrUXUlJPTvEXP+rZsG
HDaMfQF75iTS6kR5vICI/QNSfqsA/oNZ+lbzqClELEfC/z4Q4eOpBZrOvSK8rJbyBS7J/d86Oa3YUmiQUVma2ZGzqHPpExG4U0P+c5zxnX/1akqu7bqXdUaYvv8SpzRyfsEA/skO9ZK0/XZOHkJzo9//zzx7o6LaSbbrop3/zmN0cUlW3WPUtsrN4nQu/VhsQhMVJ8Ex/0LWJtPYWRjbrLQtzbb7/9Rn+UJbTG3koAPOGBf/Fvz7MtbZPJqXcrITH0jDPOmFe/PS3iAxsWg+vdLfVOFP2K3/W/+JN9wD/ZOf1Jv8dHlItfsiMHsZWP4Vu8vOaaa0be7TUQO+ivZhXqChIZVCi8OE13ZCRmiW1kQyZisLbjVb/Bt2v/R4bilP6GLMhAbNRXk6EMsxjMFnyuK2fYPT7ZPd2wTbp75CMfueBOFHalv9AW4wf6wSO7qJkUK1Cq/bJvGXkxufou2dW7w2pWXl+t3Drukc2V9ZBJwb8YrU+oJ1vqv5RDTvpJsmRL2q9/X7ly5Sj/esehz3UPL98x1q2rIOpJpOxc/ET8hP3XvcE1jq5du3b0nU1R78E6derUqVOnTp06deq0TdNWkalZvHhxdt5553FmbzZp9ux0M7NiiAHEwozO+nuzcmtBIZNmjGbTdQ2z58xMIWOyKZ478MADx5kz8hkqetJJJyWZIsV4Nus0I3e2vZk2tM8M3ix4c/tL6i32UCN7ELQZwkYm9jRAc+1h0FYoIDTSLJxO1IdPN87XW4IhE1AZ7ykXgqBc31999dWj/LUJQlBPbqvnpkOEyAbiBCGA4iiX7PFW7wqqN36ToXPYIbXqr6ev1ffpxF/fsy+oIH4hGSeffPK8+rTDumF8awebqWv+lbfbbruN+4xkEfig9dj11nOoGnTOe3WNPh/TNr+rj52xE+im09W0hT1po4wOHUN/IF8QNPWpny1BDaFJPtfMDRQKIubUJggRNJ0t+l5MggBecMEFo56hd5BI6B0f1HYh1DpYAAAgAElEQVRr0dUNbauoW12zDHUjK+Q9baKDenrgkiVL+ulnW6Bbb701a9asGW0M1Swh23vVq16VJPnoRz+aZBovrMuX/bOCwHviBh/ir06rE1/YJN8Uf2QlLrzwwtGv2J4bzvkp5BniXe/Igt7LDrFzdqKvZnP6Df5GNnVPG575fz2d0/fiQD0BjJ/63t489UPOyZSs6KKeSKl/hnT7XnwQZ/Rj+uFjjz12HEfM3k2VTPtGZRmX1HtsxFQZeLogI6g6HvWR2mTfY41JdKseOhSb2ADdGbvITmizfZZkWfdxsFd9hjGLGF33vOLT78Y8dC076O+qVavGGOoZMQz98R//cZLpSYN1347YKrNBBjIaZEUWZMau6Jjvs2ffawvdq48NaLM+ni3wN/2UcmXL6jhPX2G/nsyn57yv3d6rY2GxyljjJz/5yWgP9pzXu3rYlf6Lfur9f2TLB8lELPL98ccfn2Taj+nr1ed5Otd//ax+qmdqOnXq1KlTp06dOnXqtE3TVpGpaa1lxx13HGdf1tVCnM0YIRXW25vJmW2brdZbYCvqCz1Sn9m2WTDko954C5FYsWLFiAybdUJBfIayQp5QPQnEjByyVU+UgADg2XpDyAK0FyJgVlzrIzv7LsjEe9osG2amjz+zcDLx1+wZf1AccoDOKI9OIe7Qrbq34tGPfvSIZNYT1SAEnsULfZIZREEd7MMeBkiT32WzZEiOPfbYJFMkgk6gG/X0KugL1IXM2AJdQT9nT8hKNn/jsvKhRsq1rth72k9ObISc6u3K119//Yi+QSjJBCLLbthBXefqeTyzt3riD9k7KYfdeR/V04+0xRpj9gXVrHcTQcjYG4S63gRO55A5OvccBEyWmA1CUdm79tb16/wZKjvbdmgZ1E/d7IrM2K94qEyyqZlCvogXurNWuZ7WVbOxe+2112gHnTZNrbVRXrLsYpc9nGyA/3q+7v9405velCQ58cQTk0wRbfHCaUT0W38XHyGofIFNXnzxxSOizcbYlnf0fWKCfVb6BzEP0qwt9aQ179vXIQY60dT77KvuG6n7ItmumEVmYrcsgnKtPJAd8767n/ix+unCX9lK/SMZihfinuetLFi7du3YVntV6h4aVFdhiFlii+yYsQWZiGXGGMYj2sK/jV/qqo66F89dK+pF+KZzMd5YxXhLP0YWbKKebIcv9kq24p76rCrxl12zhUc96lFjm2QT9E/1pM/Zk/9medP/yHqRcV1ZQobaqN8xPuLrfJEsZRvwZ4xST5qjU/XqV32vPxSryZzuxHjZFDbEX2VgfMaPLD7+fT+7T0+bjHfoiW+LL+zD6gZU744yDtfHfvazn00yHa87YVUs4MuyV/Uun9m7GusdSLPUMzWdOnXq1KlTp06dOnXapmmrydQsXrx4nBFCss3GrfWs6InZdJ0hWrdrVmrGad8H9LYi/3X9fb3zRf3nnnvuOHusa3vNICEG9YQaCJkZtXWx1ur6C3U3y4XSQBj8hbqqz/eQczN/a4OhKRAABNFwmhqk2nuQPigkWWuH+tULyap3vqjXrBxqZVZP1+vXrx/rhtJA8emlrhnGI5QDr0id9E1X0EKIAR7ort4I73kyoFMoCiQD31AjNgA1qplJuqmnIqkPKmXNKp15Xv10QTfKVw7bffzjHz+erkKGfAfyaQ072cgm8E264Af1tD12IEuBZ4gYvUOWPA+hJTO6IJOqe37ATzwHjYT68S/f03XNmDpJCsKmfGirv2QMKYSO1hvJk4Wn4tXTZMSZencQ5IrvsB8y4HO+h4zyBzqB+Nd7UfBzww03LLgNutN8aq0t2AvABiDtMqgyavwSwk5PkEs2W+9g4VNskh/zLf0f/1evfV2P+f/Zu/fgPavybvTXQwiEBAgJhxASqignpfZtO7R2xnbwBOEkZxAEscpUOgP2nXG2e+/3H9t/9ozVTndxnB6stmAFihzkKBCUgTotOCB2kKlYrESCCUjknBCSkGf/AZ91P7/vE+h+gdYE1zWT+eV5nvte91rX8V7fa61rveMdzcfkuneZDvbOp8gOyjrwwVB5dsk/GCOfmudwsIfzzjuvqgYfJtMKsYbSsxdZSLwSu/laSDPeyZpllULxVRz1veezSePj09k9/+N7z4PcP/nkk1N7F/kkPpKd84F4ThZidMZ8PtGY2TXe6DO9Yrv0Tftk6Ln0RT9dDxnnd/RLjDZOz+GvjAuvZcXYAx3ga/k7vJTNwCfP9w7yrW99q+kdm5AhYRu511J8IYPc1yoGZoVUdoLXPnsH4CvFPXrPZtkk+3I9H84H0z+ZTLoj28su8r0wY4Nx5B4cKxas7PE7XcpYcMghh7R4wSbImVzEC3qUZynmygEZPn31/mRs9A+Rjfcgz5fRmayip8rdlqhnajp16tSpU6dOnTp16rRN01aRqdm8eXM999xzbTYM9ZRpMZs0C5XJMUP02e9mr5B4s3izZ7Nms3nPQRCE3BthNr7jjju22eQ73/nOqhqQZGgGtAxaIUsADYGeJrLrmXiQ63I9V1/Mms3Ec/+R9o855piqm
t4DYeavXWsvIR55ZgoZQUDMvp0Z43lm9+7XX7N7PCUj48Sn8Xjc0PKTTz65qgbUI+v0+0ue9ALKBmGAvuSeK+gK1AZPoDH0KSuKQMqgOXiWSHqewaI9eki/yZIOIM+FcJCBdugevcVb3+OxvxCQH//4x22s0EE8yrXEkFxjxFNyhTLKgOAJkuHJcy/oB6TVfjnInHbpZ65Lp/f0jV7Sb/fjGSQ49yPIIOVaY3/pNR5rj++QLT788MNntPP444+3MUPjfNZ3PMt9Osaob76fPGm7atgHZCy5dj1PhMZ76GXu8em0ZXrhhRfq6aefbvKSGeE/ZIbznDN2S17kIBaoVkaXZE/dJ0she+I+/gtCSv7s/Fd/9Vfbs6CwUFQVrtgZfYWuZgXJzJRC99mTOMi36oOx8WnskU5Ckvkb/eLzPQeqy0/or3acEG8PDX+W+z98b5yJPmd2X8yRubHKY7J6mzag706PNxaxV5/5HvHG3hM851vEQrzN/R7smD/Aczx2PYLAI8g3Ek/oEX/Et5IBncgz7/ho/krc0h+rQMhCbHE936+KmufttNNOjUdil/cbvBefXLds2bIZYzMGtku/rr/++qoasp/sw1iy4hw91p6x4y2ZysLy+fbA5HmDl1566Yx+eT9j02RPNnwLGckwyZiSSe4TprfeNcRHPuf5559vtkHn2Xae40jfxAsywAv+iZ7KwuZ+arb9gQ98oKoGX6AfeMJe3H/IIYdM6fIk9UxNp06dOnXq1KlTp06dtmnaKjI14/G41q9f36op5HkgUF+zabNmazkhIWZ4ZutmzRAHM0iojFmr7yEEWQnD/ZPnQeR+iTxd1yxTn83sofh5LojZqHXT+gDlMBbrVmWntJ+16c2CIWvav/fee2fcB0E3s4fCyCJYm0kWWWUNmnLddddV1TCrT2RP+5BuCEMicVCcRx99tCFeeb4QlMFfM3+oOpnglRm+tb4yJpAtiJKx4RXURN/IhN5A++gj1JCe+N44oCnazdr3uQcMygMZ0W5WOfIc+g/FoHPaY1dk8uijjzYeyX5lBTU2AAHCW+gLG9RmnvOAR/Rcpk6GRmYjqxKlPXg+GUGe9OPyyy+vquG8JSi6/tMdMnCdfvIBdAXPIM8QMvpKd+g1vkEbVaqbP39+01Py1DY5Qof9DqGFzLMZMqE3kDA27Htyx/vcoyjraqyQ4Z133rnd22maZs2aVbvuumvTQfyHEpMj3bbvhE5bKQAFlhVlr+RHt9gUX81++SM+GuLud/Ft1apVzTd6JjTUGBAdcj3dZDf6LrsIleWD9MH3k2cf4V3VoJuQdPdD7WW38hwZqDEfL/PE96kU5znskl/jGz1HnOb78Y6/kZllK5B25/xMnsguNqscx54g2zI0fGdmv/JcPu8G4hR5ZmafPfOp3gGMQTvsXHwhA+16Z+AD+Y3US+PCSzqC6LkMFR+uIp53JFk571vsxX4S9uRctl//9V9v9+Cda/GeHFUhU1HQnhVjNlbPdB9Z+Zzvb2zd72TKFsUZMvYuQH/wRDVC9qf/9g9lxV8xQTveu+i7DCJZ8vXGRyd8ztVHkz5FH9gu+eKBMXgfoSf0jD7yAeQv3tGPPA8Sz+gxHljRQDZ81P333z91TuQk9QjWqVOnTp06derUqVOnbZpGua7yF0G77777+KijjmrIlRmfNaZ5/oLvc60yBAGqZDZqhmemaRbsOdAdSIbZr/b8bsa5aNGihpqbRZrZa8MzIE9mymahEKesJGL9tMwIRADyRF7ayaoXnp/nCZgN6w/EAiKW62kTaTejh9qYvUMu9Asikuix2T8UybjylFnP/7d/+7cmX22Y6UOsrr322hljx6M8tyirqBlznsljTbRsExkbCyTL9/Q1zwaC3kBHc30umUDgkXFlxTF6CMmDxshSkEVWJJM9gPzn6dyTZyiRQ66lp5fG7l7IFJ5pm/zpJb2CYieqrO94mFkoqFDuLdMu1FI/te8+aGiey4QnWUHKuNJe2Y3n5n687A/du/fee9sz+QRt0qPcl5d6T5+1Q1bJY9kwvPIc+p1na03amt+vvPLKeuyxx17+uOZfYtp1113Hv/3bv13HHntsVQ1ZPDKXdaRbMnTkyAbYPx9Ml2655Zaqmq7mSJ7Qav5LDGHH4iM9e9Ob3tQyqVkNShv0lP2wPzrj2XRN3/NMFc9kx3iinawgyc+wS1XRIOtsw/1QX8/1Ga/4J3GPD7XHRuUtvjR9pew5WyAb2RLx3bsIv/e2t72t+Q6xMvcv5ZluWX1VdizP5bDfCS9k1bOqIRnL7OIlP2JMdICM7Vm1ksAKF+PJKo10Jc8k0w/X0Ufjwjvtui7PM2EfeZ7b/vvv33wfueTZJvSAvrrOShP67Vn0SGxXgY1s6Bcbo/+e8y//8i9VNdgkEh/ZCaLnxmjfo5iee34Q2WX8y+y+dwG883ztsUt88f7lXXk0GjVbI5fc0+n9hJ4YK7nK2OQqI+899NT3eC/rRL/xRMaarbGjBQsW1IUXXlirV6/eYpzqmZpOnTp16tSpU6dOnTpt07TV7KmB4FQNM7JcZ2vGKJMC/TEbzfMXIJpmpVmTPk+8hSxkNQeospnjAw880Nb7QX7cYxaqr9qEFGjDDFvfrSfUt6zolog4NCTREegrBAMiZS21mTykA7oEHYJ0Z0Uw1+GJvTmQi0STzdqhOPhDRmbj+gkZx9f999+/8TYzcRAvyBQEISt9QCf0OXkK8YLGGLO/uecFGknG/sq40GH9o2945DMeQSvxhJ7RGc9LGctAWi8O0dNvSDyUEToDvZlct8uWZAjpX54CrQ19hsbgDfTOM/McpUSiIKP0U5+MTV/xMiu7JQoEhdTfb37zm1U12KG/UM88R4ae0xn6rn/I2mXPp5t8FN1jRwcddFDrM6Qq9QFv2RDbYrMqPrEVfafXmXXCWzbNtmR/Iay5B+Oggw6q5cuXV6ctkz01qbNZFZH94TNbEI/oNp8vey2bALkkF0i6ikh0kx+BcGa8+9GPftRkDBVF/Cx7YT/sjH7TqUSq2bFMCB0V5/CCfVop4Hf3Q85lNeiq++xhYaeQd/5Gf/RbfMVjdp9nl/ErfDweZoU64+aT2arxPvPMMzMynVWDXPhn1/I9MiniFpQ9fZXYyY6Nwe98TJ5FZ99G+hE844v1S4YYQp/n5uAJXy5e0beUiUwmUgHviiuuqKrB1+oXmYjH3h2Mc+7cuU1v6Ae9pBd8KdtwnYwMecumkq9slTF7J2QveJp7fsnYdfTbdddcc01VTZ+hKL6wN7z1GY/x3t5PsV/8zT063o38xVP2ZZ9S9puOLly4sOm276wgwEvZLGPi//TFuyc9xQt+zljoP/8oZtIfdmGsZKp/Bx10UHvGlqhnajp16tSpU6dOnTp16rRN01aRqZk9e3YtXbq0IZqTJ55XTVdOsvYUMgJpN+u1/jDPTMl9KIn+QOCgAlm7Hwq07777tupKnpXr4BHkWrUUn43JrNRMHgpiJppVMpD7zaYhAGbm0EOooSyGGT9eyVpAB6GFrpctg8gn2pynALuPbDzPePIzyvNA
Zs+e3fgua+Be6CP54am+ejZ9giToM/QFQpb7kCBGEASoEDQoq8JAnPKEdrLDa/2nM54LlfScPINIVsS46Jr73QfdgdLkeQLQUUjJZGUuPNV3NihDyCZUrKGveG5trupiiaTiGSSJ3pMhMlYIW54EjqeZCcrKcvQL2oNHuXfIuI2HD4BOZf8gcq5jF2Sb50XNmjWrZQid/O0aSCk5siFVlPRVJsaY3e/ZbIe+43XuTYRKQrX5URnPVatWNR/XaZo2b95c69evb1Wc+OpPfepTVTVUZMJDGRif2RQd5zudjk3n+T02B3FnY7leX5ziF9js448/3vSbvXmmeEJ36Awds6cAKks38yw4VcEg3HxWVkfLFQTsxRghzp7rd/avn6pbOQdE3BMDZFT4AXt02Ao+8IF8bZ44f+qpp1bV4G/Eb/avP6tWrWr2JfaKzVnJVB/xCiJO3uKSzAn/7Xsyy4yL59Ej2fesYiYDgkfQe9fxB7IG+u13/UL8kExiVpiTmcrsh3GpRirOus87CPsaj8ft2fSJnhm7+OC9DPkez8TAPOeGD6ZfmanHO88XH/LMQzz0HD5ZJtKY8Eo/yFacEtfoI5vWH77Ac9mNfnuOdslStpf+spN777136t31wgsvrKrhPdx7RZ79Rs/FPO83+uJ+to83bA9P+QLv4x/5yEeqasgQ+X3OnDmtD1uinqnp1KlTp06dOnXq1KnTNk2vOVMzGo1mVdXdVfXT8Xh87Gg02q+q/rGqdq+q71bVh8fj8SvCf5s2baqf//znDZmE0mQdfqivv2a7ZnpHHXVUVQ0IgNk3JABCAHWCoPoMlTIDNcM0+5aFmDVrVpsBaxu6bzZphg29geYjs1izWlkCM3Eohxk2xAHhiRm5PRFZJQQqC9XxV9UOKOBpp502g2cQu9z3gUe5j8nMWf+h0Nay+j6RO3sfyA56tWTJkoYGyg5BMbRNHv7mXqs8TwLhLdQxke2sUZ/7oOgpZNb3zlnSjn7kuRT02vpcKKF2yGyy2kfVgMZAvOggJAOSAkGDoNAV6KTxLl26tCE3Tli+7LLLqmrguT6/XMZC3/WFPlizT4bsI2vO64sMEfuxVt/z8Qqa5C8e4TlZ03+yo/f0D7ot+3vEEUdU1SAzyC7952vYu70neY5PynLevHkNoWULmR2DNkK32QA9YXO5XyB5kRk6YzUGPNG3rD40Z86cqVPG3yj0esSp7bffvnbfffemI/yDk9Lxk87Tacgj+dIDmTtoqbjFBsSlrCokswZJdQ6IikyT51TJrEJ6rWu3h0TWSd/pqmfac+P+zKiz+3PPPbeqhrN56LTfJ08DrxqyBfwEJF4ckYGRMYF0a1/8wEv+CPLtfr7aSgQ+334VNma84jIb009+C8kqjEaj9hu7Ez/EtkSqxdgPfvCDVTXIlbzzPDR6JjswWVmxaniH4Ct9n2cP5bln4gP9lCHBAz7TfTKJdMF1uZIBH7LapnbIDlJPtnm+Dr/2W7/1W1N7msTIXKXAV5JrrjTgc8mAz8y9k+ICm+WzvafluXvioziBx1kpTv+tsBGHrCYR53IFjWxYrsjJaqLuy1UheI0P7I3vWbRoUXsnxJt8F/Q+lVVh2TqbyVVISfou22SfU55NJcuVtnn77be3a7dEr0em5n9W1Q8mPv9pVf2/4/F4/6p6oqrOeR2e0alTp06dOr1a6nGqU6dOnd7g9JoyNaPRaGlVHVNV/09VfXL04nT5vVX1oZcuuaiq/qSq/uqV2tluu+1qxx13bLNWiIOMDCQhT3OFzJu1qTih+gakw+w368Kb1eceGrPbPKHbrPi73/1uQ4TMwPU11/ibXZp9msX6HupnVgzV8CwImrHKDrgOCgtBgxLhGZTf9XgIzYEyWfdqhm9mb1yQLVkriEOiOcaRZ8ZAtCEJeRq66yAc++67b0Ph8MBYZRzIT5+NVRvWXcug4IlMD57QC7K0vjv3T0Bg8VDVsdyvhEeQJ4gDHTBmyFuem+N7+nvCCSdU1aAz0CJ7JCBiiWiQifYhhlCd9evXT2Vx8pyI1Ft6IZPBDmQRIFd4lntVUo+y74ny0TN6r32op8wJPcyqg5Ao+si2PY8uQI+ycpz+01vjzD05kMJEXefPn9/0io7L2kLWjUFbKE/wZtv0BwoJJXQ9PXJdVvhxvYwQPXv3u9/9ilVltlV6veLU/Pnz6+ijj25Z8ayGKCvAH+T5IvzTKaecUlWD/FxHvnSYDvFHWZ2IPNkm+bP/X/u1X2s+CIlpfJI2s4oT1NbZOccff/yM691Pd/7+7/++qoZYyv5yJQK7paOyzDIyMkDimX0fMrfQYr6Yr8cLY+eftOd36DHeakd8JVP952+g1bmv8tlnn21tQvVlv88444yqGniOd8aUsZCfyPghBssq0Bc8hd7rs+sPO+ywqhpkyOedeOKJMz7bF/WVr3xlBk/IOKvA4h0faRz2s2QFSeP2TiS+4aEVM7nfi1099dRTLc6wLZk/vJf5oL/0xHV4J0PhWeKSLFSeR0a/xQ8xXsaR7Xku/59jwmOy9i7C7vK8QPGFruSZiWTi+XgpTrqPThkXXcxKe0899VTTqzw3yL3kkT6F3PFIH/OsKbI46aSTqmpYeWLsssZ5BhxfYfXIAQccMLV3apJea6bmL6rq/6wqudndq+rJ8Xi86aXPD1fVki3d2KlTp06dOv03UI9TnTp16vRLQK86UzMajY6tqp+Nx+Pvjkajd7+K+z9eVR+vehHBOuSQQxpykKfFmrmblUMAoDdQFutzIQGQBRkYqGvW809U2MzUbDpPa12wYEGbgUO4ofaeCXWDOOWJx55ldmomD4WzhlL7ECdrpG+//faqGmavEOSs6AbZgAp6PmQDj/U71yTjDYTDukmoo1m4fphVm2WbjUMMoDuQFP2FiOj33Llzp+rvuyZPYIdqoFy761lQDVmpPJsBr1wPZYTiXH311VU1vdcBKpNZJ/2Q/YBI2I+h0orn+p0eQz1zjTGeZhUt/IBSIgiL59KhhQsXNvmRP72ybj+zRlAYvIFg0lvIae7hos/WxOMxGdNf7eId/Wez9AgShTfao3/QykTG+JasqMeHQNHJ1PitjWYf/rqfXuuv5993331t7wQ0jbytKfa9sZEJIht6bszuN8Y8L4Uesnk+Kc8RMIY77rij2f0bhV7POLXrrrvW97///Ybu4iMUlS6TFznzmfwGW9IOlJdfE3/YzBe+8IWqGmxLHLMPjD/g7/je//iP/2g+hYyzIp69brLZ4oHYeMwxx8zokzX+shDGKquOcgWA9vke/sLYZS2h/nTY/Zllt+fBmPkNcRZPZKfptb95Hpzr2bnsGBkbH39g/8mDDz7YeMu38E323ZBHvgtYLcF+Id3G5LM+8y3eR/gsfiPjmuvFN2MWNzxXhocP864gxkPu/Y7X9J4O2Jd5/fXXz7jPfhH9pNf0GMkMuI49/ehHP2q+jX4ed9xxVTV9nlKeRXf66adX1ZCtEkPpM/3DEzFcfGAn9paJrRkX6Ku/fLpsF9v3vif203vvJuJc7pERd+m98XpHICv2Kj79+Z/
/eVUN75t4btza32WXXZqOkz8eic147V59tPLAO6Axiy95dpX4pV166r2HzWmHnmlv1apVU7ozSa9l+dm7quq40Wh0dFXNqapdq+qCqtptNBpt/xIKtrSqfrqlm8fj8Rer6otVVfvuu+8bc3dqp06dOnX6RdLrFqcWL17c41SnTp06bcX0qic14/H4f1XV/6qqegkB+z/G4/GZo9Ho8qo6pV6sLPORqrrm/0db9dxzzzV09uVQ3Jc7GwWSASkxg4QOmVma+bnPukRolVm3rAtkBQJuFr/XXntNnUj+cicPQ7ygD8bisyyE64wNkgVZg7ZA+fQx16Va741XeJknLpsdy8ToNzQQQpGVeCAV0EftQpXM5iEqsmr6k9k3CAX0B7K2fv36hjRDGyFYuW5WtgCZ2esLlE8fyTPXJOe+CKiOk5AnMxxVAzJBBtATSBrZXnvttTOeLwsCwbNGFbKlXWtSIWS5Z4i+G6+MEtSRzKA/W6qyk6f2GgO0BrrnmbfeemtVDVWU2I59P9a6QjqhdBAmCFLubaFXZAE1IuOs+y9zST/9tX+EXUKb8MZZDuwKz7PqUmZMc+8axI3uQbCzst1b3/rW1ifoX1bVy3XOvqdXaYPkqK8QTlk2Y/Y922Nz1rbjOVQd+vhGotczTr3wwgv15JNPNv9BTvhKZyGS4hAEkm2JM+Qk48sW6G5mt+mo69gQOXseP/HII4802UJTxSl2KOayA21mtS++yooCFd/4FrGRbtM52QOEd7meXj/4E/bDx4kzfD/ey4rgLZtgM643HrwSb/WTbUGAxSlxHr/4fvSWt7ylXastvIKCGwN5ei+R4XEfuemDvTlkR75k5C+fRg/FZvZsf5TsEl7zpZ7nOfTXZxl+Y+c/yDwzLPyW7IJ4iOdZUdW7UmaA2MkBBxzQeJy+7r3vfW9VDZkY+4iM8R/+4R+qaro6Jt7z53ib+7BdZ++X/SDsRbaUzO1pyQyKsbNteoVXfL94J77QO+9n3iFSV/hy8ffiiy+uqiFu6o+4LssnW3jQQQe19ys25F7xiQzIS9/ZtPcjz7IqA2/pg3dcek8fVAs0Zr4FTe4J9R6yJfqvOKfm/6oXN2P+qF5cu/zl/4JndOrUqVOnTq+Wepzq1KlTpzcYveZzaqqqxuPxbVV120v//3FV/fb/zv2bN2+uDRs2tJmhWagZHvQHAgF5z9PNIQVm5a5PNNpsFxIAdYacmfVDGMzWzYZ/8IMfNPQM4qSqE1TOjD/rg0MazPTd5xkQcWs4zezxAE+6+GQAACAASURBVOIEzchTg/UR+geZgxZN1n43lqphdqw9KA5kA4Jglg3NyawFXkNQIOqyLonS5HkG2hmNRu1aM3b64XvIqD5lXf48nyP3HclWWacN1YMYQDMg7PQOj/EAimgMeK5fWWEIQWvyDCL9o6dkDp2QPcg1+XQRMofH/moH0rtx48bWd7ZC/uQNgYIS0xPoDP3Vl0TbZKHYS67txxN2kogpfSU7/ZKpTFSPbLTPh0DaZM/oI3u0zl0WDWol80km+AVlyj10/rLv7bbbrumRvuZ5FhBRyKwxkz+ek6O+uZ5sZJ5li/gQts+3aDdPcr7rrruaPb4R6bXGqe23374WLVo0JfP0vfwEOUPs8wRuOu56VdIQtFk7/BA9EhP8ldmd3CuoT9riK/KcqclqXu6tGnSRH+ezcq+mPmlv2bJlVTWgs+xX+74Xw/kk+4rwjF3Sbc8Vm7WXZ/lk1gFP+AF+jg3JYog1OU78InOo9dve9raWHcjqqWKoZ4tt4kJWr+MD9Vm2gM8iSzLBe7x0v795rhnfmZXv+APvPXyb57qfT8sz7PQDr/J8NrLLPaz8Ft0ie+24b+XKlS224gF9IpfMhr5c5k1GRBYr2yU7faMnViLgGX2zAkF7ZOY5bJLM8UZ8kllRkU479J6+sl/9yiwcXy9W5JlY+CG+0TH9vOyyy9p+HPEiK5p6L7/xxhurauC5Z2lL9kdf+Eu84+cy88gX4JV3D/3x9zd+4zeaLm+J/isyNZ06derUqVOnTp06der030avS6bmtdLGjRvr4YcfbrPiXOOdKK11kHmCrZl+niif1YMg6VAXiAK0NxE0s9vJk6ChFmapZvqQAgiA32UFIAvOpvAZ4mRGbtYKeTJWM3PoRlaNympWUBj9lUGBGuXvUFu8gvbguVk0sl9AP/QPwoDH0GX9ghJDNiAQ9gYtXLhwqlIPuWX2y/f64Hv6Q6+0Q96qR+V+Is81VvdD7yBlUL08NwIi5XeoDl46Adw+DEiI9eEqokA+ZEVk12QN6CPZQ/L/8R//saoGRJg9ZRWS3XffvaEv0H1tWTfNpnyPR1BJ67nZmmfQZ32C+umz7IU192xWFkp79ALv2QfZeA6003NkYIyL/qXPyCpu3/rWt6pqsIs8M4H+QtKgnkh/IM333Xdf67s+ZnUjPJBdhbrxHWwFcgWZTb12HZkmr4ydnzMGGYD3ve99Te6dpmndunV11113NWSePNkAf8Le2DPkkm3QJTpCD8SQrJ6GrJN3Ev3Xv/71qpo+r0v82rRpU9MBmRRop9jGp7hHHHCdNumwzD7do2t0CXprH0Zmh/WD/bMvcVNmVCYmM8Z8Wp4bwidC5BEfDDHPlQ38hWyDGIIf/FVmjjx/7dq1rY+y0fZ76CsZaMO7AF/KR3oXoFeqJuIZHkC8xU56RQ+08+lPf7qqBv3jq3J/In/Ad9EJKxbodZ5bIn7hqfvpPR7jJb+nv2TierrnM1077LDDWkwUJ7yPeM+wF9eKA+9H5EVGMnLaI3/PJjPtsAPvS2I7PRHTVWMTR/LdhV7meTv64zl46Nwmdopn2qczPntHyHN58h2aDGTlfH/IIYe0Z6jaJ0On7+LIUUcdVVWDPtJ3PoJNi6XeCY1Nn+kpGdBH7z3atyoDPfDAAy1+b4l6pqZTp06dOnXq1KlTp07bNG0VmZrRaFRz5sxpMzhovVkzZAGaA+kyk4N8Q8wgIGZzEBQIOzJDNMs1k4SQQAWgTJPV2CBNkADoirbMVj3DOkJoqlmp2a+xZs16M/o77rijqgZkSnbC2mHoLYTBrBmPIAzQPesnIRZ5NguEAc9yHwDEwffWllojmueEZH/N0qFOZI6vVQP/jcVf6IIxawMSBl0jd9dDhg4//PCqGlAWJCsGPYGgkyXECaJB1niiHzIvibhCtKBCrsv14NBP6KNsAxQU8kcG0BgIGmSNvuJtZvM2bNjQ9MNv5G2NL/nhnaosWaHQWCGjsl8QNcgZO0heyhbQH3t12A800V/6A5HKczLom/5BzvDS75ArdojnZO55+OM5WcEq1+J73sKFC5u86Ynf8HJy7XjVUB2JzeNRVjfiI2TN8mypPOMgzxLyO/3J8546zaTxeFwbN25sugcZ57shzPa+4XOeFk53cu9f7qXiU9kWm1FREKKZFcfY96xZs6bOfWK3dBDSzU5zT5W2oLd8Czu+9NJLq6rqnHPOmTEmPo996IcVCuxa/JBNYIfio6yGLBZbYKfuy31l2oEy595QPpH/MB5ItzipiiPb4nNlNJ
cuXdp8lzbTB2hTFop9sjvXy3yIW7nCxBizciQfRkbGKH7RP/EK7/hYeuYzP6RfkHK6IqvFf+iXeOp7+qzd3LdFb+kA2eVeoccff7zpr+yBtp0Bh9xjrPwy+cu4OGeJnpEVXnlnoEd5Zgu9oD/GxLb1Q0ZF3LF/W9w1Hv31Lut9Tdaefep/7jm1okI/6KJ3BjJlR95VZFFuu+229i6LB/rAr4iZ5IM3/J6xeq+xV9kY6Y/rMnvEVumR9yB2Qn8eeuihdu+WqGdqOnXq1KlTp06dOnXqtE3TVpGpmTVrVs2fP38KTck1yxAxM3wIiZr5sgC5rhGyZdau4gkEIWuBQ0L0AzJnDeu6devatdBP61A9w4zaLNOMPqszybwYa675NeM3VmuNoT8QAH9lIa6++uqqGhAzs17tQN4hb+43Kzebhp5ABozHddAhqCS+QPj01+weCmB2DzGACkEmDj744KnzF+gHJEjffJ91zWV69AUvLrnkkqoakAhjh3xCByERxkyvshIYFIdMoUr+GkeenWDMWd0GkpYVuug5nYCIQJvoK9QnT6bPPRf3339/Q3IgPvoCNYS2kBMeQtfIHQ/00Z4wPIXW0FtIEoTUGKA27AfP8AZP9EO/2T6ek0VWuclKVXSLPmcGESKUZ1HgIR2hg/rn++22264h+vqsLXqc1YQgVVmRkE3qg/vYJv8oK0a/8QQvXE9/6NNzzz03tY+j00zabrvtmmzpJl+WZ3+xEb5WBobd+0xu/JBYkOdsWUFAXvRIO1lh7Fd+5VeaTolP0Fb6Letn/4Isge/576zEpm9ZsS0rMoovdJ39iRsyHq7zHLE2405mRcQ7tsEOr7vuuqqqOv7446tqsHO8Zzsyr84bSb9HJmyRbfFra9asmTrbLc/voB9QcTHf+0dWeRXX+ERj57NyT5Y+koX3HVkBGSJovswxXtMXhOdiAb2m78cee2xVDZkg2Qnjk/nhf7TzF3/xF1VVdeSRR1bVgNxbNaI/+EiHRqPR1N5hPMNDPpJNiAc+26vpmZnJYdNiZJ7zl9VB+XnZd3uLyYwPIBPXO/NOPCQ7/aCPdCT1T8aGHfMFWVUQn2T7yZ4vMV58mOSxvllJwL+QD9+hT3/9139dVcMZVvbYvP/976+qIZZbNZF7udgsPfdO4rnGih588MGeqenUqVOnTp06derUqdMbl0Zm0b9I2muvvcannXZam/mZTZqlQjrN8MwYoSVm42axkAaVviAVZqBmfpAEs2PPM/uGWub6xtmzZ09V2oIYaMNs1mdIthmzGTKCOENPEwmAGMgymc3KgED3tWPm7joIk35AvIzRekpIRK7PTvQHEpZVoqAs1nD6TKZQKO1Cn8gA0rB27drWR6f06hPUUdbHmBNN0QeoRZ7jQa6ZCcr9SKecckpVDeu38Yzt4KksFXTTmPU7M0tZKcv3+mN8UExZDvugjJPe0x3oZ6JO+mPck6gzeVinTS54Rb6eide55pgsyBFqCYVRSVC7bNmYoTn0n43jBR5kxS96COXO82LwRn+gojIueMpePBfPUJ6enZkrMoYc77zzzk3/6EeeQ6QvxkieeXIz2fAd1nXrAx6Spfv11fdp6567ZMmS+uxnP1sPPfTQqDpN0cKFC8fLli1rSLlspCyH7AKb4KvtE0zUN/d/0GlIN78gnvFnbIvu+pvnY+2zzz6tzazIJZbmvjvPzMwo+6bX+qR9usyPs1M6LxNCV9mlseg738nO+Fj7xvhIY/Y7e2db/I3+sGvov+eRoQxxnjjPhvyFXkPWd91118YrY839SylfvMtsszGRCZIFkAWD/mfGn5/nq32f5/XlvhG+TpxRtcrzxD+rP/QPj/GcnsvWey/zO1nLXGqfbMgyffymTZua/tEb73L8Ll7YY8NW2KbqZLJOsgR4RXbak13z7uG9TT8yu8Zm06fKGHlPwyM+GW/yfTHPfWOvqSP6S8/zjB/98XyyNl6Z2aeeeqo90/sBObEJmeKLLrqoqoZVGuRlLGIw/eIv8ZxcxSN6xLfQN7aGJ/bjPfTQQ/X5z3++Hn744S3GqZ6p6dSpU6dOnTp16tSp0zZNW8Wemp122qne/va3t/Wsqm3IDpgpQmPyvAazXzNEM3/rJ83qIQFmoBCTnH1D3MzCIepmv6PRqM2UoSNmpWbcZupZ7xxSkLNaaCwEyLOgHr7PvQNmt2bgUBwz8aweY32jE5+tbTZr1o88T8bveJdVrMgGmmzW73vrLX0vcwRpgCpBt5YuXdrQF7zKPSl56i+0MJEufTSWrMKiHTKib7JvMn0Qg0Q59Svr70NZIONkBY2EZOG1TKV9WlCUHCedQ+yGjkBb6R5KlEZt/6oBTcR/SKXPeKYv0BxIlzHLMpAndAaRJRnaP8LmIUrsSFaCTWo39ZTs3Mc+rPOWZaOPeT3Zeb7ssHFCsvHF74nYkZ1xek7V9B4aPiMr1eR5WdBG+uGvsVjbTr+zIh1f4jqZTvrt+2eeeabvqXkF2rRpU61Zs6bZH17leWjslu8Vf1DuvyKvRKjpAV8py5IVwiCd7B9qPHv27OaP6TdiF/Q9z1WD/PKR7Nt94oY9KzKdeOM+vi6rBeqPOJl7aGRWrQjI6k4vl8nlFyDak+dFVQ3xRr+zqqHP7JdNyezoh+eNx+P2f/LPvZxiHqTZCgTvI57hPhk+PPnkJz9ZVcNKEXEj97jIKuChscoE4Q3f6Pn6SW/EUfrovBrt0gU+0PuYfusX/czzTvAwz0JCdJHvx9+qIb7ghWyBd8A8M+rl9idlzKYvbJfeWe1Dhtoha/os00ePTjrppKoa7MR1fsd7/fdeZh/KjTfeWFWDfeoXuyBDKwnS95CR54o53pnJePJ8OnrIv5CDe77xjW9U1SBPsdH7h/vEaH4Oz8R8vORL9MF9ztmTrTVW7/OzZ89uurgl6pmaTp06derUqVOnTp06bdO0VeypWbx48fijH/1om3VCNCDb1kda55g1wc1Szex8D0kzM4RcmO2riOFsDOv9rR3VruucwvrMM880tNPMGRJs9pqVSvw1g9YXs1ufzaihNGb2PmvH7BVB86Af0D88NLs2c4fKWCcJUYACQxmdXp0VU6DHiQTqp+for1m/50Kp8zRm6PO9997beAFJ8jcRHggUuWYVKTpOD/BCJhBaAinTJ0gUlM791mGTDaQdpawgFXgCBcQDvHc+Dl0wPogH2cj8QG5zLTd01HPJhqzweOedd24onmtzra2+QjrpNfQGTyBi9Dszl3lODBm4jkzZrvsgWJAwRK+hPZ7LZlUvMi6yYq/W6GdVGevJ8ZwdQfLoJPuACEK+ch35/fff33wCnkGsPDtPrabPfINsLnkbm2cZO7+Yla30TaYPskYfye7AAw+sz3zmM/WTn/yk76nZAu25557jE044ofkyOsOuEqnkB2Qv2BZ50iXfy9TRHftg2Ir76DR5inPsnx87+eST23krbJ4e83E33XRTVQ26oA98IqTY9VD5PCWcLiI6zBfb06DvkGh2nXvx2CkeZdUn4+BP+MxcuUD3+YWs+JX72mQvtUOm3kH03/Xvf
e97W5/8xp/zHdrwLHEsz5XK/bH8wdFHH11VQ5ZBPHM934qX+u53GXm+mwz5B+8y4pixZgU8fiv3evFv9M64/G7cMsL8jXHQa36LDOjEI4880uIC+YlPdF+fPDvjDxnhjRhOb8nEO+aVV145o328oIcyeog+iNlWptABFVftSfY+xafnnhz3kZl3ET7B9+IYncEH79L4Znx55p8VNw899NCUPsqGeX+gB3wJfScn+oH34hQeiq2emZVHly9fXlUDj8lYP9j0ypUr66qrrqrHHnus76np1KlTp06dOnXq1KnTG4+2ij01s2bNql122aUhCtb25SmyN9xwQ1UNdc2z8kmup5U9MTOEvppRysiYEV522WVVNaxNhnSo9W02/Pa3v721ZS2xWS6ECUqf9dDN+I0pT2CWpfI9tAM6D1EzwzcjN/OGuhiLmbv78di6Xv3LNdf27kB1tZ8nK0PiMvMCVZqsxFM17J3QDhmTifbmz5/f2tJ310J2rLW1RhdaB4mCABgDvYBs4Q0kKZEnfYKKXHPNNVU1IGLGgHfQFXpjLJAPep26Qn/pEl2BAJMZHrseyuS5kGH35fkoeZ7PDjvs0HSfHrknMziQ2dyrJdOHx3juL/lDsPA09xFAlNgwWeKh9mRc2AlZ4g0eQHC1a3y5nwUv8F71HMgZHtMBKGau9829ZZPooz4iekBv0gbzbJ7cPwfxJCN2QM/YDWSLz3FdVtyBHl599dVN3p2madasWbVw4cLGv8z80wHyzrMhVPOBRN96661VNfhs6LE4R550js1ATNmC51lpoPrhU0891WyeXWiTXYih7HvyFPeqIdaKpewz97LI7PAD7CMrU0JdE4k2BvYJWcdb9owXeMbO+BG8xUNxh+/PM4LEU59lR8Qc48zVH5OyZa/GcPrpp1fVsAKEvycLGRHvCIceemhVTWfkrBC5+eabq2rwWeyVX8AjaDwZiMHaMzZ6Yu+Cd4U/+IM/mPEcvOGD6RCEnT8ia7zOvZw+i6uIbpAZXcqKqHvssUeTh3iTz8j9q3jpXYBPJEf7f+itmKnSXNo0wnN66nneMe3tEefwmMzsT/L+ZcVLZnfpWWb/jIc+izfiGPsWx72LeNdwHz7g+apVq9qz6QveWPFi7PQMb3Ifq3c8e3DyPYud6IOx4xFZeNc444wzqmrYL/WfUc/UdOrUqVOnTp06derUaZumrWJPzZIlS8bnnXdeQ2EgVhCurLyUqImZnlkoVBcSYEaZa8yzupl2tOu5ZrtmrrNnz27f6RNkADKNzDpzD4BMB0QqT5GHPEBL8uRYYzObNTvWnuugKtZKQ56N1bpHs+fcG+Q5eVK0MzLstSATaE1WBoICuF570CuIGHSmakCW8B0KQWc9A9oA8SJXvMcb1+UZCeSc66g9DxKRqKAx4xV9hYroPwSCftJzvPEZ2oMHdAJShrKaDaQfSgTdkRGgQxCWyZODoTB5ojL50BOfs1oRudEbem2s9DmzVJCqPFXY86CMUGk8Yru5dpq9kFn+7j79oae55piP0I/c60YX+BR8I2v2rT8rV65sqDR9xBt6Yk+M7JMsmGws3tNXqDXeQO/oBV7zo7n+Ghkj3j755JN1ww031Jo1a/qemi3Q7rvvPl62bFmTG4RevMmzlCCUdEimhi6xf7rpej4W8pl77GRX+f7cxzhZOZDvoKd8p7bYIZ/hL3uUkch9IpkJtbeMHbDfPO8Koswu+TA6zqfjof6Kr+wqMz6Q9TzHxjjZGF6Rnevx0nXGrf+yamKEbMe8efOaXbEnvGCPYjDe8OsIbw4//PCqGvSCv8hqn3iT9q7djLV8Wz6XP+ETZYZ8lhHOk+bpufF6F7Cixb4U7XguXcnz54yTXeGfLMbq1avbGMjlsMMOq6qqv/qrv6qqQc5iKB65/uKLL57RJ3ZhTN7nrMKQTctVHXwq/y/O0Ef2I77gje+vuuqqGf0nA3pFj42dzHM/S56z452BLzJOmS36S+/x2IqE7bbbruldnuHmmrRBvJYZxittey8RXzKG2tPs3VR7eZ2YisdPPPFEXXjhhbV69eq+p6ZTp06dOnXq1KlTp05vPNoq9tRs3ry5nnnmmanzYszg8ywY6IrZu4oRiTwgM06za+hOroeHiEGPoLYyABCTlStXTp1Or4/67Hszb9VfZAkg5JAhn40RYiALkKcRQ94gX76HqLkPigu5sFcGkgahgLCb+Rs7JN71kAOfoTWuwwezdigSdMcsHh/yHITJ828gPPqCt8YC2Z48C6RqWC8LXXEfvdJX16kND4kguxwz+Z922mlVVa2yUFatyfOSoDC5byR5l/puvS6ZQY/ICFJ31FFHVdVwng5USD+gPOzAuL797W835IdNyIrZb0YeaTNJskSq8GXWKrOk0E28Tv2h77mvie3zFfRJ+5nt9Rw+IvUND9k4vcU7yBYkMNcB5x436JTfDz300CY3e6NkS+kBP5ZnLOgbXuRZC/4aE19gbBAvz6NfeIJHZLvbbrtN2VKngWbNmlW77bZb8yt8GyQTAk2X+XI64/o8N40NkqOsAZ3NGJPZ+jzhPc80qxp0QQWi3L+g8iI0NCvoyQ5ZJ8+niFv2VegL+8GTPGPHGOg2u9V3n7UDidae5+TJ63m/lQv2bWjHO0JWB819H4mA81988DPPPDNVqc0ZJ3kejQyGmHzWWWdV1ZARwZscK/3hg+gXfy4DxBel3tCLPH9GPNQfvBB38NA5OHwaf0I2fB6/5ffMKHkXkkEyHrrBb2V/DzzwwNY3vMETcslMY543hrf0xl4xeykzE6LP4o13Q+SzdwO89g5An7IKoFUjrsP7XAnjPe7888+fMW76LHshPpG5bAs91i+yMF79p7uPPfbY1PuJe7MqGr0hZ/5QJlqf2INsmbGxB2cAZYyn18ZKlmx2zz33bO/6W6KeqenUqVOnTp06derUqdM2TVtFpmY8HtfmzZsbkmUWaZZtlmnmaAY/uW69api1Q5cgAFm3Pdei5qm00CdolRmoGemCBQvajD730JhdQsmgG1D0RHLN1M34IQmebWzIjBzKYQ1ynshsTNARCDsU0Awdr6E9eGO2bRatXUgApAGPoTsQQM8zrjw9Wf/IlIz8PfjggxtaAeHyOVH3rMQDUZiswV41oCX0Q3Yq9xrgDSQKgqA9+0fwAMpDv8jW2OlI1qSHKnqO7B8kBNoDXYGW0gnXywSwB/d5Xu6/0v6b3/zmJn9/b7vtthm8gDYba1ZA0bbvfWZL7AGSqyKQ6kV4p09ODc79QYje0AX9hu6QNbQJj9kdn+I6lOcRkKHn5xkk7AGvyRq/2Oett97a+kDP+BFt45HMCcRVX6BSeGSPFpvNvrExPiqr3kGA2Rpbh9R12jJt3LixHnnkkaZb5ILvkEn+QhxBdI8vJH98h0TK0vNDuQIhVy4gz+VPvve977VYST8huHRQ3JBZpQuZDYKeQ13TJ+GJv7kiQNyge6qT0fmM8cbuvkRt6bbPWZXQvjR+wf4N/oON2POQFSplDfQrz7HRvzvvvLPJAQ/12d+sUog3fJAYLS7gIf+uXft4
+D7PhfLzA+KlMaWfES+NRZzIvar6YUWDlRH8CR2h17L0+iXeel5W0ZLZkbmRJcTbI444oqpeXNVBL2UH+FJ7gjNznnIWv/D0oosuqqrBl2YVMrJje2yOHdEv9qD9D33oQ1U1ZN/wxHseGbJ5tsou6dvHPvaxqhqqsWUlOuPiw43LviayxEN2smzZsqoadM975rp165p+ZrVV8jUGfec79Jnt0c88x0sfrdLJlVPeY8Rq72f0cXKvfFYUnaSeqenUqVOnTp06derUqdM2TVtFpmY0GtWsWbPabNfsFrKY69ghEtAUs0+zaCiSmaZsA9TGzNBs1uz3q1/96oz2oFyJdK5bt661Ca2AGFsnCKmFQshkQNON1feQa7Nl7efp87nm1/XagYRBO2QVZDe0B4Ux+4ZEuQ7Kg8zWjQfv8Yhscv8KJC8zNBAIs3H7T9TYX7NmzcueVKzvCO89U5v0BkJFftAXKLsx40Wu1yYLCCoEQtYBoi57AVWBWEBQ9Q9ChvAecgF9gYBk1kx7ibCTkb0ZZEJWebbK5P4J1yJo4mTGoWpASyCluRfK75Asz6SfkFHoi/shWHiIBz6TGd/AlpF2rDWmZ77XDnuA+J1wwglVNV1R0bjxOivI+QwBy9r+ZL3LLrs0vbVvB/JJXnkGFJQw90r4nR7wJexDe/TrlltuaX2Y7Fui0pPZKD600zTtuOOOtf/++zc+kr0qPnQD38UVups6lfsLyZ0cIZbkxU/w+fY6QJnZhvYff/zx5vPIOPdf8N98qFjnWXSQXz7++ONnjJUP0p6MCTvyuz7bq5dVptgL5Dj3rtB9dms8xq4/eMBGUJ575R2CL8yMsFgAXRY3jdPvGzZsmNqHprJV7pHJylV4IWazZ+2lrPQdb7VDb/jQrMClHffJnGhXnCMrvFINjQ7xF1n9j/7KEHl+rj7xvYwNH+89TNbk3e9+d1UN73H33HNPG6M4Qw/oDV7L/LNJfTMm8cDY3G+vC/3lq//yL/+yqgYZki1ZeBfx/ia+eCfI/Y3a8X2eTWZVENnhlYwU+/S7z+zLOxNe8z0ZL/WbDu24444t+2WPMT2RPfUepS3yNVb65/3oT//0T6tq4DVZkK93WvHLO8JHP/rRqhpkevvtt1fVzD1geX7QJPVMTadOnTp16tSpU6dOnbZp2ioyNZs3b65nn322zabNECFVWcveTDErMZnVug5CCWmAAmUd7qxRbv0tRMKs2Ex2PB63WSQEJysSmUHLbECsITxQElVnzNyhFmavUJtcQ2hmD93RLgQh0WCoSJ4Ca1Zttm08Zvx4bF1vnjeQJ9xCVLTvudBoSBqeGkdWPHn88cenqp9BoM3w81wIqEQi0ckDY8Y76Eqi7a5H2s+9EfSHvmW/8FC/teM6em/sslb6B3WBVNAR6KS1yHSSLOgUXUOyMN/5znfaGMiFPJD14BAwcsr14MYIFbROO9dn00+8oldOWqafEDNZU+3jQWaI8MR1audDxthZIlR0JPfR0TnryP1OR/Be+84DSKR6/vz5TS58Afm4NzMomaUiI8+WmVWn8QAAIABJREFUNYMa5hk7UGqZSH5MOzIGeR7A3Llzp/YIdhpo++23rz322KNlI3PtOLnSUbYintFNNkZnyYfP548yqy8DY+8F5JTe0D1r3e+8887mj9mb39inv3THXgB2mv4+s4TsPfeQ8htIPORH3J9nLfGN/Eue0SQzc/bZZ1fV4NuhxvbkibeeY5z6kVkU8cg7h/HLpkGR2Z7xrl+/vrXJV2QVL/LWtn1E7FFckNHI7Jn9hfbl4gkEnD3znfRH5kN77BzJHNENle3sqbAHU2zIipB4wv9kxbr0/eJdZgkh/PpNxvT76aefbvLUFp9pVYYxyODom1jq3Y6N2jvKpm666aaqGs4Kok+599PY7FXxHHGM/pMl2bBR+phnVKUvzgqJ9JPtq2JIh4xP/8QQ17ufXdId7y4LFiyoI488sqoGefNjfAA583fGJKOW78DGKmtm1Q59NwYykrHEs6xcys7+s9UEPYJ16tSpU6dOnTp16tRpm6atIlOzadOmevLJJ9tMMNc5QoP8bnZslnvBBRdU1bCfBSpj9gtd8tls2kzTX2gURAGKC7UxQ62quvnmm6tqQGjf8573VFXVjTfeWFUDKve+972vqobZLnQVqpJrQiEH0DlojbFDgSBZZrcQKzNzaIo1yxBtvyNIAIIqmsFbywxByBOd9RfSZ1zGnXsxjBMqmWs+IQmrVq2a2oeDJ3nSLQQMWgFVh0pmJTYIgLElgu25EKU8hZo+0gu8dx+UJk+xzopc1s8iGRn6TWZ4op8+Q7Q8F+qIT2SWyLLnz5kzZ2r/hjGxFaTv0EKoDoR1ss3JMZAJ3uOxMeCx76GLxiRjSGa5H4oMZV7oY2Z16Yo9PcZLb7O6DVunp+wvqxLSzdwj5Dlz5sxpPKI/+sR/QbbYYiJeMpQyf9ZX46H7jBESlmf+QOD4FDyGgq5cubIhfZ2madasWTV//vyWBYTW594HNiADg/98qu8z60mnZAfIhbzIFTqN6DId9LyDDz64/QZ9t5dx+fLlVTXoiBjIV+U+PGOG3rN39/P77M/1WXWTzuOBzNBkhn5yTHgkbkCYtSfbwF8Zu7955kWeuJ5nkLlP1oVPhhbnWTMvvPBCi/18lHjEJ9mfkfaOt/SCLzE2vtP+RDKSubGnk8z0lU/PTD9e4Xmei2a/VFah9U7Ap/KZeMKv8c3Giefeceiv++jAlVdeWVVDlsW5a3TgN3/zN5vfpWf2eGUmkr65Hi/5Natw8JYMjNkeGvHtzDPPrKph/5M+2echTnhXZTdsURzAQzGafpGRdxb66D1RNbMrrriiqgbfn+fR0Mc8+8rnzNJ5FxATDj300LYqQtaJ3smwuBbvxOw8i877N0pfIma7zjus+OX5+s7mrXRYv3791HvrJPVMTadOnTp16tSpU6dOnbZpGr3SjOe/i/bYY4/xMccc01AeqIyMidlsnrBudmoWmiijGSCCWMiaWKMH2YY4ZJ11z588Y8YM3Mwb6qHPuVYScmVtJlTD/Wb8UD6oSK7HztPoc/+G2XNmQIw1T6CFUOVsW7883++QBKgkRALCBpXJDBQUKs+EgUpCJCCKa9eunaoAYkz67Pc8nwOK7z7rZM30rTelb8aGR1A8cve9MUASIBj0QtbOGlKygu5A1qypzqpnZOk59BkPXU+PyRpSRwZZRYeMIG9QnNWrVzc9zap7eMsW6Geedo0XuedGe2yL7PCebdFrKKTn6ytU2nrrzIrJZlkjTQZ+z/NnoJIymHhNn/WHb6H/2sHbPAcDCkU2k5X76BPdpof01JjxgK9QyY1+5JlCbJic8xwlepcZgjxfBa8WLFhQf/M3f1M//elPewm0LdCiRYvGZ5xxxtQeKPJjj3SNb/OZbbAlSKVstpjCRugL/yPLYg9CngTPVvm1uXPntuwuX8gn8c9iKZ+gQhHKykcyNEh2SSaTffITEGX9yKy6vort4hPUlp/gM9kd0i9+Rn/xSDYbz8mK7eAd9Jgd86Vk4h1D/41v551
3bmPkU9gdnvAZiD9wxklmPtgrWbF/9us6MRMPxSNj5heMmczZPR9Fz6D3+Q5Db/NcK1krqzaySi2eir9Z1Y1e20/JruyxIMv169e3zAZ54w3ea5P+4Rlfx2fqo5jqfrFZVoCM2Cjbwgt2IgtHn7XPt+c5fO77wAc+UFVVl19+eVUNvl6/8Mp+FVkUPPWXTrBD9px7T9mH/okR9gxt2LChtSm2Z/aHDdLXzJbmGY1sWDyTYVT97O/+7u+qavAp9DEz3vwqmcyfP78uvPDCWr169RbjVM/UdOrUqVOnTp06derUaZumrWJPzWg0qh122KGhMBACBF0124b2Q1/MahNJ0J6Zo89QI0haVkuDgEA+IeSQl9Fo1BBqM3ionb5nNgCCYKZvhp2nrkJ67S3I9dtmrWa3kGToihk/Xum7WTAEA4KQpw1DezzHemG8cz9kDSIBOYF0m937HXKQiAH+QQf0d6+99moZD2OFJEDhtOnZEDGfjcX+J7yBqjjPBW8gENAbiAFEyppfY6Avxprn5Divguz8JavMNEFCrF2FgOQZR4lSkR2ki+xyvbHxTaKxnk1OkB3oPf2FykCuIJmeKWMB7dNXtqU9eg4hkvljD/QCEkr29APv6E3uC4FK5tpiiC/9db3voVJZQU7/8qRpPokvoRMQcajtm9/85uYbZHugjomW8xmyTVlBKuWZ5ziRER5Bl40pz1ZwHR/S6ZVp48aN9dOf/rQh8ZD2q666qqoGP8Em6BSbyj0O/pJHxhDVgdzne5k+2VGoq/hEz55//vmmU+yKD82sIXScz6N74pwYm/t66BY7l13O68Qx7fCl+sqv8B/2kfF94pUshrHjOR5mhUAyyOpvrscf2Q12jOeyY2IJX4t///7v/96eIa54/yB3fj0rbrFXcsNz9iw+sHs8xiM+SmVHPpMvh5TzibJOMiv01PuOMdNDcUK2wzjoCl9HNvyZzAzZqUjHlxufrBc/lhXCXP/II480/eGH8UZbYrs2yInNfP3rX6+q4X3LOyQfiIdZ6Yv/F0/oXfpSdiXj492Sb81qbPbosC+ZGz7b3mxxEW/ZiXcEv7ND+6Xw1H3ioH7iLfufN29esxExEi/ZrL7Sa/qYWVbvuGKgZ+FJ9j0r2eFNxjdjXLJkydRZhZPUMzWdOnXq1KlTp06dOnXapuk17akZjUa7VdWXqupXq2pcVR+rqh9W1WVV9eaqWlFVp43H4ydeqZ299957fPbZZ7eZIsQia8qb9WbteUiJdZUIGmumB4WCxuRs3mwWsmHW7vnaG41Gba2j2WWud89MhkpXUD4zdrPaXOcqY5HVYqByEGqnv2bFL2g9XkFtzIohHdAjGRk8zHMJ8EL7Zunah7zRpzxjBqqJ56qQQNqMf3IfCLlB/aB0KqvlOSxQlKyv7nd9wTtIBOTMsyEM+Xy8hmBB2GUf8kTmRJy0A1HzXHoOfcAjGSnoj+utn9Uevc29M3TMuHON6oYNG6aejSdsMFF+5Dryo0d4S2+hfFl5UOYvqyexD5+NDRIHiaJ/ZEDWfAQUXDv0kUzpOZsmQzrkd+3IFJFlVnfCc3blubNnz25jxGMoYWagocz0FxKrD5kFdT+/Sd/4FPrCN8g885epT3Pnzn3FtcrbKr1ecWrBggXj97znPfX7v//7VTX4MPyWXc/9iHRO5tn99AA6CzVma2wIoi2m0FF6INvArt2/aNGiZm9kzJ6ych69paN0Kn1g7o1hf3lOiLHQMdWi9M3vxxxzTFUN9iyLwM74bnHI8/hqMYD/OeWUU6pq+owV48vzSvgN2YrjjjuuqobsAts8//zzq6rqsssuq6oh07xy5cqWOWGP/DA58bHkJAvFL8uAyCZ4r4GM24uQ70OeZ0+K6q/ihXjk3YG+4oXsHR9IN7KaGb31Oc9Vks3Qrthgv8nXvva1Ge3IcuU5Pfqjshl72nHHHZu+5j42vHCvDIs4og/ef6zOkJkhA3rkfY7t+pwx2F/64Z2B/ehXnieI2CU9tZKHHvpde+xUpoUdkYX+0DHZPXGWvdEl76O+P+uss1rbL6d3+kAPnO1Db2Ri2Kz3NbGfjeMZ+bN1tq89fpS+aOe5556rr3zlK/XII4/8l+ypuaCqbhqPxwdX1f+oqh9U1f9dVd8aj8cHVNW3XvrcqVOnTp06/SKox6lOnTp1+iWgV52pGY1G86vqX6vqLeOJRkaj0Q+r6t3j8Xj1aDRaXFW3jcfjg16prb333nt81llnNaTJLBlyaaYoS2AmaTZurajZuNm1z2bNEJGsNgRlMnuFyuYaZrPayTZlYiC9Mi1mvVBSbUFxoHO5Hjf3j2QlCc9N5AKPsoIcRCurpEGSPQ+v/Q4VhiBAyM3CXYd3sgvuwwcIB2QMugS5MBuHaJiVr1ixoiECmWnL81r8npV+VNmQWcnsl+uhmHhpfTUUhHpDc6AoEDVIOvQEEpFVyzJ7BpHAE/32GcpoDwa9J2N6Co2BVpFpVmmDhMiubNy4scmLvqVe0xu/Q4shYPSDDIzRdRCjrKLGBnNPGF7Rr+QpvSUTaJB13HigP7ITEDmImjXMMi55doj+QWQ9lz7jMVlCfKGeZHfAAQe0MbAxf90DwfIs35MN29EHKDlUkj57Dn+V5xTRWzwyZra7ww471EUXXfSyCNi2SK9nnFq8ePH4nHPOaTp+9NFHV9X02WHkRTfoIh8H2c5MTp4RJSMtO8//3XDDDVU16DbkXzyEdN9zzz117LHHVtXgW+henm1CV9irseijDCz75VPprmfTMTooLvETfJJYTIfxQBzM7BUEGk89h27bvyg7bVx4xmZy70RWh2LXshqypmIFn41Pq1evbv/Hq8y0ifHaEtfoA57oa/Ydb/lEYxc7xS16JyvGh9M37x4QeDqB13nOmr/0HQ/Ek9znZJz8GB+YFcjEGJW/6ADZ02uZzDlz5rTf2AZ/jofiE33U1zzdnozwnL+mB3iIF86F8ZzPfe5zM3iQKxLyXCVx8u67754xprRZMqB3qrCJV9rNPc65kkHmJ3083rMj/Z08g4Ytea+SJRLL+Q6/k4UVJTKWeCau0Gfv5c7IyiwVvcY7MvGeQ7abNm2qSy65pB599NHXPVOzX1U9VlV/PxqNvjcajb40Go3mVdWi8Xi8+qVrHqmqRVu6eTQafXw0Gt09Go3u5vg6derUqVOn15F6nOrUqVOnXxJ6LZmaQ6vqzqp613g8/s5oNLqgqp6uqk+Mx+PdJq57YjweL3iltpYuXTo+//zzpyovIdWAsv66WXee8wE5EYQgFWaMUBmzZrPVPJPCTBICMXnaOMQWgiBjYWYNJTFjhghnRQmzXQiTZ5jBmz1DS5DZMYSLHM2W81was2CUJ99CbxCU2OwYT83e3a/fUCP8sLaaLCEKZvuQFOtsIYf6uXbt2oau4BH0HMogU6KPEAToCPnhBR5CgqAjeEh/oJEQNyhJ7m0gY4iadqA3EFP6pv2s9qI9SB5URTu+9xy8dJ1xp6zxKZFASMiGDRtaX/SN/qHcY4Nn7iNfmRj3y35BWXLNM2SNPrAfY4XEkYF+aB8PPN
/6dXaV2bNc628cdMNfekkm+sVXyJzSuTwrK6u5rVmzpsnJdzJmUEJIFX3VNn1KveKnyDsz2niI53kiuPuyKtumTZvqS1/6Uq1ateqNlKl53eLUPvvsMz733HPb2nE+L/dn0HH+xl9+gK+FZCK6mifLa5f8IeP8y4c//GH9q6qqb3/721X14innmUWic3SD7vDr7Ir+59463/vM/vhiPodvlNGhm3SOHWonM7eJgOONFQvsUpx0Cr29NZ6rH94RjIedZzU48UiWDI/xR/aBn3n22Webv7VvVJt4zP75Ckg2ubo/q0flPlP30y/7L7wL8O+ZqfFOYs8NPeHT+EqyyT2lVjzQHb4UT8Vu8Zq+Z6VUOpJngn31q1+tqmEFgSyG+0ejUdPTzH6Rv7F4T+EDjc2zxS19/vznPz+jndxfxAZlFsnUGDO+0Q/6ytbpD5n5XgbR83JvJ17be0a/9TP319INdsV+vEd6Hh2dfK8jP7aZ52hZDUFu3mk9W8z2GU/xWmzkC/IcJPe7zh7ma665ZsZY1q1bV7fccks9/vjjr3um5uGqeng8Hn/npc9XVNVvVtWjL6Xz66W/P3sNz+jUqVOnTp1eLfU41alTp06/JPSqz6kZj8ePjEajlaPR6KDxePzDqnpfVf3bS/8+UlWfeenvNf9ZW08//XQtX768zdwg89YVmv0mGmxdoVkvpMMsGhIGCTWrzvW70JusOgSJkwWxbrFqGpmCFGjLzBjKknX9PSv3xGSFKu0k2m926z7PcX2iMShr0FsXbt02xFl7UJXJ9YxVw6w90WXIAZmoIgNZkdnJaiEQlMn1molk5h6CPJdIX/XJOQ9HHnnkjPshDdqHDsrgQF1cB+2xdwtyAT2in36XUTz11FOrqqbOX7InRgaFPkL06AqeQmVU3DvssMOqakA3IXIQEdk6z8nqNFCa+fPnt77lORQqz0B62Qo50h8IFh4ZEwQXL1FWTUr0hl3QE/aFd/SGjboOmoMX2qdP0EL6S+aycvrJ1nMfjP5BtjKzRRbQKbq1YcOGxtvcGwbxV72In9JHtgq1swaeXuobX0R/6AFyXZ5HA+3W3rx581pf3yj0esapzZs319q1a9veS/sz7B8hc76PH6KjMjBsis2ICZBSPpOuQaz5fD6ajvNr9MZz169f3/Qb8QH8tDbZiYwFXclKivZBsEcxlX2xe32ke8YiHvEzf/iHf1hVw54YPPFccRBqK17JdrJTVa3yPCy+E2/0i52zQXse+Eb8SN9JNvaNzJkzp2Vc9Jlv4lPIVUY0z4bjp8kifaQ+4bm/uWdOu3yy/Rn0Dm9k+nJ/LL0hexkl+kdmxplngkHa+Ub91w49917lupNPPrmqBt8unsvKvelNb5qqskqfnCNETnhJ771PkZExiNnaQfT5yiuvrKrp/bv6wcbFK3/FSZ/FQ/bjPYes8VYGJ8+gI2P2lTEks4F8eVYBNX6xgl3i08MPP9z4LmskLukLXrIt8Yl+5hl23hG8M9ivRh9U+8v9sy935p37v//979crrTB7rYdvfqKqLh6NRjtU1Y+r6qP1Yvbna6PR6Jyq+klVnfYan9GpU6dOnTq9WupxqlOnTp1+Ceg1TWrG4/G/VtWhW/jpff877ey00071jne8o83CzWrzHAcVuKCxZooQiJwtZyUyM0wzUSiw53qe/TJmiHnOxNq1axuCBFV3roXZqiwChChPdjV7hUC5HkGYshKEPkOmzGqhdWawMi+TJx9XDbNoa4az0gWeGZ/+ZpUzKBB0BqKhHUgaZMRsHGJg9p9V1CZPmibfyZOFJ/sE7cBbBIU4/vjjq2pAZSBc+pR7DSBkECtVx+yFyP1SyHXayX1F0BS81R/Ix2QGcHJ8ZEb/8Sgr6hn/5Bk/k9/juWwflHbRokVNv+gJxBKKxrb0AUKlTWMxNnKll/SBfpIZ5AmSxMYy0+EzREk77E3/rMdO1Jvt4x2e05E8UwgSlp/xh47QCciY9fRQyMkTj/NsG3pHn9g6ucoAQKr0EY/dpw/GBAX3PM9hN1k9UIaB/u68886veFLztkqvV5x67rnn6r777mvIMjQXmpq6xFbYf1ZfpPt0EcJOXuTpewh7njhPB8VH/mSPPfZoOscu7bdh3/y/uCVusKesvEcHkZgKZYWU+z51mZ3zcc59yQpfeMCeL7jggqoafDqesW92J6tplUfuHcIz17M9/caPzILil2wqf3TPPfe0eKDv5Mb/ijdHHHFEVQ2ZfIj19ddfX1XD3l4xnv3m/hE+V3yUwc+zvHxm39/4xjdqksiS7PhWY8MTsdz+IrzRXz7DuwCdgcTbkyMTIMOD18ZHV2SaL7zwwqp6sVIl/vP74gc95dtkd/TJ2OiBjAh7EBP5d/rP5uznFov1Q7bWu4J9bfQWz/CSrcuaicFI/Mjzavh48Yuu8SG5J1Z84tvZi3da8dI7Ah16xzveUZdeemlVDXuS6ecll1xSVUPszqqL9M0Klqw0Jz5ltVn74NgivcqMpj7Sk/PPP7/ZwJbotZ5T06lTp06dOnXq1KlTp06/UHqty89eFxqPx7Vhw4aW9YAI5CzXrNtsOE8pN5OEmEAsIQdmyWaE1uGb/UOXsoY39Efliblz57bZqhk7BAqCnTXls8oShMiM2uxWlgpl1SeIAV7l6b9mw5AxyIWZvZm/2bGZu/tljKA0kAE8gqJAOvQfeghZM268h2bKMOGf66CQPi9cuLDJWV+1LRuV2SEzec9EZKHvkCJ9kmGBTMhWQJLIlH5Bd6A5UCLoEcQKKgj5MEayg+ziPWSPjPUXGkkPycg6dGhSVirSXygQHdLO2rVrm5zz/ArPgATpo3W2xk5e9Bfal+cH+MtmoTyQJbyAOLEDsoKI0Vc8hayxk6zW56/x0R39gBpBE2WIoEX4IesmC4eXEOGsagOpe+c739n8l3uNAfqNh9A4ek82fAUe2e9ELyG1bFyf6aOzDdgwe+IfJ+3h1VbD/GWgOXPm1IEHHji1IiD9iyx2+r70rYh8yZOuq2pFX+gqv6c9WXz+jw7uscceU9lo9qnPEGw+lJ5D99mxWMw3shN9pJtINoKd6hPe2LuQZ9PlGV2ew1+wETzhP9i5z3kWmPEbHx8pcyUuiwHeQfhOPObDyfb3fu/3mv3oG7vOVRnske/QJt7l2Sjkm3sS+EZ95M/F4NNOe3ElpXiQ+zr4Qrogk5Txiz/Rvs/ek+iOdvWfzshy4QM+kbF9uTJV7uc76cSqVasa38nJ+wN99NnKF77Os8Vivpj+synvcfYBuU7WjT7bg5V70MS3PGOR/nhPwlvtqkhHf+1xJct8LyMD+qvSneeKy94xjIvOkY322Ndll102VeHW+y/bnXw3m+xr7jfCEzzMeMP2ZXbI1PWyS0h1PHr2hS98ofFnS9QzNZ06derUqVOnTp06ddqmaavI1MyePbsWL17cUNoTTjihqgbU1hpyCJfZLwQiK7CY0UFOoDSQArNkaKx1jGbFEDgICuTAdZs3b26IkBm5zzIbZvSebaYMxTErNRbIgRm4MUGW/E0E3CzZ+tkbb7yxJgnKY
t01gs7IOni+WTpeIUgHBETWzOwbggA1xlv9g/RBgs20c2/C5PkG0Dl6AHXJdc+yCZ7hOkiWDB1kKTMrUAtIRFZFM/ZEO/EIIoWywh0ECo897+XOH0hd8Dt9hIR4vnHlPi7jlLFhDxCYFStWtLZcQ8/oDT3RFkQMkgp1mzyVfrLPbIrt4iV7gXjm3qrMohmT9qCFEClInv7gPST2jDPOqKohc8kO2VOiRvRZ/9hv7uPSb3ZkD4P77r777qmTkvEYKsiWPMN1fAR995etsD2ygogZA+QsT/aGtEEN6fmPf/zjZlOdpmmHHXaopUuXNp1Ueenss8+uqkH3+Bu+jRxzv5YYQZeuu+66qhp8/0033TTjs+tU3qJHMrv0jJ6sX79+quqXbBB7PuWUU2a0cdJJJ81oi2+RETV29p7nhkC+TzzxxKoa9omJuexEHIEsa8+Y+DZZST796quvrqphD4PYjDdsQ+aWjrOVrDKa8dS+E9kwWTO+mR/jR+bPn998iWu8P0DJ3ZN+nnxls8QRGZw8r8ZKET7bfhL34YXn4Cl9yzGTJR9Gr+yR8Du9pDP2vvA3dIfvNr48rFY/+Lubb765qob4rR1xf3J/CR/Fd7lHNTw+U4yk5/Teu6L3JLbI5xqTvSd86hVXXFFVg+/M/UZsXt9lHvn2PH8JD/2V4devfP8Tf/FMjCd770t8jEwou1u2bNmM3/VPu/R4u+22a8+mz1aCeCeUiRYjxGDvUWyZ/Nkkf8Qm8dgqB76B/upjVnf17rBkyZI2zi1Rz9R06tSpU6dOnTp16tRpm6atIlPz/PPP14oVK6aqJuRaO2iL80fOO++8qhoQALNos3RIA6QS4m82DxGBDJipIgi77ycrdEFRoB7WIEO0PEPbZub6Cqk2G4aofeITn5jRZ4gAZMIsOs+YgDyY1ZrR55pqPMjT7PHYukroDxlAf6BE1nIal/7iBxnoD8TM/g+zfCiA8eP5unXrWl/1CYIk2wOFyQo3eCwjYWzQt0SufQ/J0kd9yrr5+g6RtbYUigRhgt5AGIzD86GS9jz4PtdM6w+0JvevkJXf87TgXOste7FixYq2phda5zOkx/dZSSv3AdC/XN/tM5mRVZ45Rc/wVl/dlygomUJ3jBEqBFWCwOGt++kSntFv5HlkTo/ZPd2BTGuHLIzzsccea3pKH11jPTj9Iq/UGzbBF+BNZrHoIx5kxTcEQcN7sjr44IPfkNXPXi+aNWtWLViwoGUZ7An4+Mc/XlVDNTS2wx/hNznJLmYFH+3x3fZ5sXe+WsUwWRV6Y48o1PfBBx9sdgL1FKegq/ogXtEJOstXsBvoPFQ2T5Hnu9ixuEWv+CY6LBOif2KtWC/zr9/Gtnz58qqa3ucIIZdFYwP2NMj8GJ9VIGyEX5KB4tPJJKugbdy4sa1KcK82xVgycI/3DmPxvcyK3/lEWWjyFq/4GGPDK/pF38hMtssY6JN2+Q2ZI77X88Wp3BtEBnQEb+mvLLishmwYf2Wc2rnrrrtmjPMHP/hB84XeN8ROvpQ+8ZWQfLEzq9vRd37dOyVbwisZEVk1cQLvjUmcUOEOj/hyuqFd9uD5ZMJexBX6mxVNZZ6cjUfn6Cm+2AuU+zXFK7TTTjs1HpGje9mWmMdnWGWkr2xY7PTXmFPOeJhnLLIDMqdPfMcTTzzR+Lkl6pmaTp06derUqVOnTp06bdO0VWRqql5EHbNGt1kz9MfMHWqcVRfMGKEseTK82SkkxJpns9fcMwFpgSZBLLbffvs2s8664WbykB+zV7NUyDWUA/oBOfa9vQtmyxCIrK4BjcEjCIRZtFmvGbwx6b8ri+xLAAAgAElEQVS1m1AZPCUDz4OEyLiYVWsn906QHWRB+67PCl1koH+Tz4RkQwyM2ZggPihPwMWrRKg8C+/ISLYNMgDtp3+yE2QNFaF/9Ise0Ru8xCtZPIi936Gc+pUVxPIkZ3xgP/QaWmm8Wa3m2Wefbfpx7LHHVtV05ZnMDrGRPA09q/ZBeyAqeI8gUmRDz6GXkC+ydD+EDhoJwWWjEC1j9hztQX2MD6KGZz57Lt2DTkGC86T0PDWcbPbcc8+pc448mz5DJvU1z9whd/olSybDwoaMWbUa9/EpmaGhx67/2c9+1myu0zRt2LChVqxY0XSNzl988cVVNSCOMnJ8qdhAPuyf/8jqhfzQVVddVVWDXZM73bJ/i13Tm8mqWHxQIrr8rXvoMbuVbZKFlkXyu+qaeWo4HvCtdE6GxPr5rPAnO6D9T3/601U1IPN4x6dnxhZvoML8Arvnw/GWbbFX/WHH4q/1/+xe1sP1c+fObfvnZErI09iNOWMeX5rndBiDGItH7pMpFPeMga/m0/mT3N/hM17yve7P/rmODPjQzDTiCb/E93snEQfZDV8jK5f+0f3vete7WkzNal++z5UmnpFZHzZDT9kc/Rb/ZG74RnGJntsPgvCWLMVwY2DD9JJte3fA43x3MT5xiD3gNV+e78QyOfYcWRGU8Yn9/+QnP2m6T1/x+tRTT53xbLzk5+hb7iGkx/RQ1jPPsjIG+kzvrbbg39C6deta9m9LNNoaSnjuvffe47POOqsxRfDmLDgYqXAvNAbPUflMsdxvQxpjvvzyy6tqOIBSELCpitMwYWHMnM7ee+/d+kiADDf/cmyUU7k6YznyyCOraggip59+elVV3XDDDVU1pPkpaZYc1Df94ewFsyzTl5tVKZ6Xudy4bTLlpZLx5FI97XvB1r9Mo3I2jIuxchaT5bu9dOWBjL/zO78zY+y5PIdT9ozcnG5s5KkPnLXnmDy5P2V73HHHVdVwyJh2jZEe6icjzZLPJh8mOWRHVtrlWDkbL8XGxdkYP4eJP3kQ2Zo1a5o86Qmn6MWLU+eA6DHe4jk9z8NYcylVvmjnAbf0itPO5VDu089c2uclzEtOlnKmZ3idgT0PsxV4tZ8vVxx2Fp/ApxdeeKHpPjkCEgAHeODl15j5OfJDrqe3gpDr2DQ50w9LRwXss846q6oGfVm9enV98YtfrFWrVo2q0xQtXLhwvGzZsvZCg690hJ/yksYW6Dzd4V/IKUGVlHceTWDjNttji9oV13bfffem1/rIzuiWl7xc9pKFZfTBWPhm9mXpt+exS5Mb9pIH8rHz9OWewxaAgeKQ8WTBjgQskXbFT+BILidSojqXoxmH6/B6v/32az7JBIrfzWMdckM8OzYJUi7fs/w1eRBPxBG80H5OxMggi7AYU5Yk5+Nzub1+mpTkYbNZAGUSmKwaJgB8cx72yXd6x6GbxrfffvtNFbzgZ9kgW9BnekGP6GkWgtEX7x2eybeyB6Cf3+l5yk7cNCbPNakhO3b4t3/7t1U1HWfpvUnJ1772taoafIB4RObeu7wz4NfHPvaxqqr65Cc/WVVDuXA6Q1YbN25svGFDYqYxsXlt4E0eCquPxsgPZqELMstCAd6rclk2oP6OO+6oP/mTP6kHH3xwi3GqLz/r1KlTp06dOnXq1KnTNk1bRaZmn332GZ977rltxmbWapabZfAgBygP2IOiIGiQZUVQJbNhs3uzZDNFyIOZ5z//
8z9X1YszTM+A0iBIDoQAYmSGL2UHLYHSaUcfoCCWp+GNvpqJ56GDeONvlpuEluQhmdrJEoiu8zztug7ikZsFpZKh0tCjRBYgZVBLiOBk3/Aky4xCKD0TOgNJyGWDeEj+npUHkuYSOX8hD/QQMgcRy2yDsWkfWijDkptP6elhhx02o7/0W7v0Vj/cTwaQkclDNqsGlIosFy1a1PQUD/XJX3KmR3iHp5AkpVbxIDN9+i7FzBbxHJKFN1mMATJGZlAc+piH/+WSLDywzAWihefQJhkmv+Ot8eB5LgHUT3bs+ocffngqS6rvbAGCpW+WoiZimQUf+BJ6godZAjwPd80xWILwzDPP1Gc/+9l66KGHeqZmC7R48eLxOeec02xGfLAhmm7SKfZGt3LJo+9dl34nMzv8Hl9Jx1LH6cHSpUtbVgdCzGdaBqNtcYjusFvPlH1gt/QeUo0n/Lz2+I/csC/GG7O+Z1ELZMx0l43I2kN5ZVp8Nm6IvHag0T7z5WyQzbItqLKYwrfed999bWyQbKsryNUz/KUfeGhMeCue0C8rRmR68NpmePpnSaB3gfRlxiQe8flZkMc4ZPHwhG/zl18R6+mM59Fvm+fzcE7vQHQGP/AW8n/uuee2rLbDNa+99toZz9Z38ldO2zNkFeid9vTN/WwzN7m7Xmz2rqmP/mqPfhoLXpBBLh/z/pUrfMjcKg16qNgCWWfGSTveXfkess5M1v7779/eG8g/ixrwM2KzEuD0RCwWC/OdlO8xJtktvOZL9PmCCy6oquGdY1KfL7744nr00Ud7pqZTp06dOnXq1KlTp05vPNoqMjV77rnn+KSTTmoIRW5ozQOvIAzWpv7RH/1RVQ17Y4wJGmMmCZ2CGECfzGoTyYaQmJWbUX7zm99saBpUwkw4Z9pm6pBvz1A61+wY8m32C7E2o8+9D1AcCECiOhAMv0MwIGXagZTnAYFZohbi9eUvf7mqBkTe7Nt9UKfcv2HcZu/u83wbSaE2b3nLW9q99joZk5m+rBPkh/5ATeyrgNpDNvE8UYxcWwz1gEhNrj+tGpAMOpB7ZfDU9foD+cgsFZm7HpKe68X1N9dY22dC12TR9AcK5TnLly9vKBw9sd+CrRgbG4TiQHjwCJIEbaT3eMfm9I1N0VfPpx94kAem4h3epm+wIRiv8ET70MxEinPTPp7pp3EYJyQQKs7nGA8U9bnnnmty5LfIy3pwn/NgWzZKT6HGsp94n4UAfI/oAd5C0jwfT2fPnl1/9md/1jM1L0N77733+MMf/nDTFWipfZB8sBggkyaesBm6KC7RGbosrrEZGTvZc7ZFr2Rb0scffPDBU6WbPYMv02cIdGb+/eVD2UuW+WU3/Dh7pNt035ihsmI2O6LDfLsDSvkVMVi7+mtfIzRX//iB0047raoGu+bjZZJcx88gGSeZH7KSZd1+++3bGNgt+eiz7I57xEpjzKI++kJ/xAlIeWYyxA8yEmfEWHoEMad/ZCluaf9DH/pQVQ0+TPv6S5aIT9evf/qnf6qqwU9B2vliOiezicdKlYs9fO59993XMi32iHim1TPeU/LA0iz25Hs2lSXBySaPkUB4y8bxVJwUBzJr7v1Hu2I4nvPF7A1P8Iiss6BH7lnla7SrH8bvd7oh5syfP7/FNn5G7JWJpL/0SZlrbZK3PcL29vldnFIYyRhlDvEk/Zs+2otz0UUX1Xe+8516+umne6amU6dOnTp16tSpU6dObzzaKjI1b33rW8ef+cxnGrKRlYbMyqEz0NZzzz23qoZqUGbh0B5oaq6nhZiZxUMSzNLNGD3HrHyy5CLUwuzTrBLSAw2xpjOrwZjh6ysEAEINSTKL9b0shewENAM6m/stoHbGAM0zcyd/s3TojVl6HlZFNp6XGSXIQq6/hfBnhTv3QU5Ugdt///2nqprhJd5ABvBEW5AF6AnkwRiUN80DSaGOMiMQJfqQB9HhgevxBpqon3iUmZfco5MZQXpqPC93iJrn4g/UCiqaek83H3vssak9WvpAT/XZ75BQaGJm9iBD9M7YIEPGJntgDNpHWXYbr/QHTyFtyQPoI31jX8quahdP6C1fQkfoPzQVAs0nsUNIG3563rx585reGKM18FliHLosMwOlhhLjHTScPtDDPHxY36Hi2qH3eEPmP/vZz+rLX/5yr372MrRkyZLxeeed12RP58nPXr4sA05X+Xw+jq5kJk/c4dcyM53ZbrpG99nCQw891OwgD/uTwaHnmfUzBj6Vv55su2rYj+UzH6rkqr7iTZY39jxjZ5eZ/eZHxC37VozZ9XSanbIxKyD4Hf1gU3nAMls9+eSTq2pA7F0vJnz3u99tPITKH3XUUVU17PswBpkXyDM7z+yCPueRE/y4zJA+4yVZGCP9EyeR7FQeXJyV6egEf+IdgV+hU9qjn77PvX/4o9+eL77zQ/ZJuf6HP/xhW1GAF1Z0uMczZBPsgRFr2Yi/3g3JgH55nyL/zGplOW2ywGO+ID+LL+zjU5/6VFUNvGYHnpdHerBbpaDFAPbofY0d4SkZqvpntYvYkfv+Jnkh/rAJ8lWx1zuALC19pR/kJ2OTR11k1Ud9wgvvGlnq/O67764bb7yxfv7zn/dMTadOnTp16tSpU6dOnd54tFVkavbbb7/xH//xHzdEA0HWIe1m+JArs2HXXXHFFVU1IAfOp7HmGXJhhghpcz8Eznp7SIJZMLTguuuua0gRgrZAKaAb0LRcP2pWajYM0fKMPBckqzQhz8vzafIQQYgFFNC6Rf3TXwgF9AaSrT8yPZA7s+esUe++PBhM/xK9yspQq1ataqic74wV4glRsBY50Tu6jRfQCnqgPWMwdu1BcbKySCINkNc8DwCPXJcHkPpMRpB2sobgJmKb5/BkNTZoVO7FoSPu+973vtfGBu2DtvgeL2UUk0fGmhV3IF7QP1k37dDj1FPIlb9ko38QVpkT90GmIGdkRYaywPQM6ug6ssgD46Cenk/WxxxzTFUNyBk9pv/WFb///e9vvCFPn8kjKwvyQ3gE8cRDvIUWZsbOdWSR50PIeFuPDRV84IEH6vOf/3w9/PDDPVOzBbKnht/I9fNswu90f/ny5VVV9cEPfrCqps90ojN5Zgpd9dkBgeR3zTXXVNWgV2eeeWZVzazMRCe0xSep0kS3xFgrCuigDKlsHx8z6UOqBt+KF1ldkB3KRrJvOogn1s1n9prPlhXhA/kPmX72y0bSV4rtMrYqnSKyY6O5j5e/YKvr1q1rdgahZmf6Lo6RY54jRgbOzsp9qDJ9eMWfi7F4Tlb57mH/rncOWQpj5RtPPPHEGfcZj3iHt/wFmflL/z1fdsF7Gj3nn8SzCy+8sKqGM/zcTxfvvPPOht5nZdyskCuGyqzleUgIL3KvKFny91YcqLpGdmK69yK8Ihtj9xePZWp8pmd4SK9kN1TGFJezqidZ0Xe+gYz4BvZnXOybbv7u7/7ulN7lwe/0gN54hr1XYqC4xdaz2p+sqbHRe7aVh796J/b9ihUr6vrrr681a9b0TE2nTp06derUqVOnTp3eeLRVZGr22muv8SmnnNJ
mbNAks96sOmXmn6eFQ19yzSZ0xwwUqgR1NQs2qzbrTqTc8zdt2jSV+XBtZpOQGXSitp6Z9fnNpLUDJclzYfLUVTNvyJUZfSJe7jP7hopAwiF50KCs8KJdaGP2H5KhP1llxO+5rwBCvnjx4ikeQDcQBAoSbSyuT6RTVoEepFzJ0hjpI8RCVg2iKiNiz4TPUBMojqwCpE2/6TnZ5Lr2lzu5Wf+zH3gsYwXFhAJBo+jKvHnzGm8gRFmBMFG7PFlctgBiCzGjD+wEmgOlYbNQHHpNZjI0eK7dPOMHimQdtn7jNVnSHcic8eSpyL6H2NkrQ4b0VT8hy3SE7KBZVdPrtMkh/ZG+kyt0GC8gX+TNZiC+0HZopSoz9jPoM/SRLNCaNWvqc5/7XK9+9jK0aNGi8Zlnntl0gG6SD130l25DMHO/Grvm29kgv+SMGXYvhiB6xX9Bn6GyRx555P/H3r0H312V9+J/NknIBQKRcA3GQKCWIrYz1mm1VqsVkJuAInBAWlSO2trxdDwywh/WttPLnOP8rNPam+BPpIpFQBQE1FKq0uqAVasG2ypKCAG5BIhBiBCS7PNHeK3P/r53gj1ITxNYz0xm57v357Muz3Wt97MuLYbmmnt+294DdpyrF/gUvoj90SV9ss7e6ZhiqpOsxAG6L4vFj+Q9GvqUWU2+XqZVuXgjWyDmq097cy8cG+I/2LH+iwmvfOUrZ7z36U9/uqq2IPh4zIelb5Q1YP/sXIyePH2wanoFgn0c9uqIxeSfp7rqk8yNcRBfbJVGrkjQTj7M+3xm7n8kc3FLdprey6rgh5ggTm/rZFdxi7+7/fbbW51/8Ad/UFVDPFGGNvLnbMoYD4/YEr3CE/plDOk0QOMl97GJh9rmd3rF9+b+I7xmR8rBG3pqzIH3maXnI+ia3+kIezWuZIf0WTZMBpJd3HfffS3GKUPftIX87aHJFSniSbbRfTQyO3lHT2YsfebY0vf33HNPfelLX6p169b1TE2nTp06derUqVOnTp2eejT7v7oBVVtQ240bN7bZKqTB7NtMzUw+Z6dm+hAHs1pIgdmoEyMglVAnyIFZu5mm3yFgZqwrV65ss1mzUQgOBBpyBC2FKFlTCd0zezWjh0j4XdugQUh9UA31Kkc2C0IH1TGDhwbpG1QJigiJgiblCXB4BPmAWEOhzMYhIpk5khGC8miv9+bOndv4bUaf+z3UpU3khdf0xSceQku8r8/0xxpOqI02qle2Au/wlE5oJxl5D7IBZcI7CAikDq/ILvWarJJ3TjaRlcBr9WeWY/ny5S2bBNn0Dp5DoyfPs68aUBT6lvsK2DIbdn8M+dJz7+krXiI8ZTd5j04iZ2zYp3ZDWqGcZCjr4X0ZGpknviXvXoCw0WO6JOui3YsXL25orzq1jU36W5ZsMstTNaCObJZdQNu0VUZAW2Ts+E/6Z18cVBpS9uCDDzZ+dpqmefPm1cEHH9zkQffJmt2KN3lnGLklQs1/eY6fgvbKTpArfUFkRr7se9WqVU1n6Kln2TFfKGsIhWX/dJav4vO05eSTT66qISbzSXygLEGemiku5P6k3IOKx+wtTxeVXfA9//WBD3ygqqre9KY3VdWAoF966aVVNfhUccvKCTwkU3sb9NepW/zED37wg+ZDjCuQ2Mz+xExtEafUqc98lrEBfy8Tm6fmkR2U3z6/3IOXcSKzBnlyIxnRBXFKu4xN+Pr0Y3il/94Tb/VTOXiMD+LVihUrWhtlg/g8NqcvyhLz6KEy8cIeYKR8bWQveJunbIpzbJmtaw+9Z2ey5eII3uYdcul/fZ/3wqHcCytes1PlKUc/ZKa8t3LlynbvjLErP+JZbRBXrrrqqqoa9BlPxGb6LstF392DRJ+tqsjsbN63N7l3jU5ujXqmplOnTp06derUqVOnTjs0bReZmtmzZ9eee+7Z1uZBUcw2zQzznH+zVoi4WbW/ZReUhxLBz5OVzLbdWWHmaqY6Z86cNiOGWEMpIEtmkq94xSuqquov//Ivq2pYw2w2a/28srUF8gDRhgRAS8zwoTFm05B3J5xAwO1hydPN8pQMs2OIh9k0dAXSod94AyHzfGZV1CsjADHXLogHVOr++++fuqUdGgdp0iZ9gEpADqASZGLNp7ZAVCHeZCq7BiHwvTbqoxOBoPzKtSaePkJJ8A4iRU+dwEPf9BuPPZ/rY+kKhEQ7IPH+zlN5XvziF1fVlkxUIltQN2iJNmlrntQFZdE2MvJJD/NUNLzP+yVkDSC4UGw2Txae106+gn7RGc9fffXVVTW9J005eE2HnDTFdzhJUbvyviVIHFQdQjcej5uN4B1bk1mB8ELm/c5HIHKXQYT8syEoOx6mntOXtF31L1++vMmn0zTNnj27Fi9e3OxZxkY2UoYlT/gjV/5AdlKmjA0hKC9/5f6RvNXemnfy5o/44s997nPNF7JHcYaPSXtlL5BwMRLKz6fQKb5SXIPysxN9tkfzr/7qr2bwTBxTH7/C3yDthubSWX5FxsUN806Cu/jii6uq6td//deratB5WQA2JPbnHRnilL+NRfiJRYsWNXvEE/5fXHFSIt8jg8G30Af2LYMhu5D3wZBBno6Y2Qvf20OT+wYh5/5GuV920pdVDZkg+m+MIT5qJzJGIetcHaKcvDONfq9evbrt9xCv2IY2khMesym+Ubac/DPDb++kNuEZmbJpPONr804xz/tbnDFu44P1EU/ZsOytrAnd0E92QZbK0z5/Gxsr17gOnzxvVcpee+3V5EzfxEDjLHquTfySNrNJ72sLfecLvKc+mbzJlVBVQyzXZzG4aoj7W6OeqenUqVOnTp06derUqdMOTdvF6WeLFy8ev+IVr2gojdkvxMDs1wzRrNqemXe9611VVfUbv/EbVTUgJU6lSiTAZ2ZLzO7N5qFUmcVYtmxZQx3UYeao7erUB2gdfkNloBpmrZ5XXmaRrJvVpry3Ju+BgUiZJWdGJ88xh1RldgsKBNGHWEHozL6hMlAW7csTMczOzcrzVLWddtqp8QpiBXnCW9/LnED/8vZ4baAXeT8HVAMCKzvgPbLCQ7LynOwGlAd6Tz9R7lfxHh3Iu1vwCqqUSBe0B2plnTDe2rtDlsrz3pw5c5qOe5fesRF9h/hCeCCibAjad8opp1RV1Uc/+tGqGvSe/OkxW6LPeS8MPSIjmRH14pVsB7LnBg/ov37lmmQZ1jwbX/ugj3QOmgSdgvSxO4ju5OlokyfMVA0+wN/kSs7QRXqa9x+pm+2wC2g2/0W/ILf6zt9BI2Vnbr311vrjP/7jWrVqVT/9bCt0wAEHjN/5znc2+4Wop0/3mXcskRtbyNOr3Ntg3wBdk1mVoaaL9IX/4P98f9dddzXd0JY8iY++sms+js5qGzRe3OA36GTemcXX5S31sk10ME8CQ3RV9kkWjC/Dc32HDvON3uej/c7fiI/8Ed+tPlk0WXBElmRw2GGHtSwA4hP4Pmi5PpCTOjLLzffYnwfl9yne4QV9Y//KI6v00WSWe3T4UuMsyDgZkhXKDHDqte+NMexrzP3BdMW+y/PPP7+qhhUHz33uc+
sf/uEfqmoYbxjrXXPNNVU18JSc1cGXsgN9yX1v/LzMo9jNr+Nx3h9ItvSNzYqX7EKm84Mf/GBVDVkQdpj3RanXCgV7Sum9fuK1+o09tMtYeXIfbdWgS/R+PB63tmaflXHGGWfM4JG2OTnQKoVcdSR+0Xf71LSBvtJD4zY+i82qd/PmzfWJT3yi1qxZ008/69SpU6dOnTp16tSp01OPtotMzZ577jk+9thjG5JghnfiiSdW1YAKQXkg9Nbq+d26Wudim42bDXvP82aITkVzalTuAzEbnlzzavapDrNTyLBnIVCQZWgMpGgSJZ3sg7qtZ4TKQtK0kfygHeq3dtL7UBgoEjIrh7JA7L2HyASKY5YN6fCedusfZIRs87Q4zyH8WbRoUUMttB0KoW3QCG3Ju0u0jbwhat7L23m1HeE1dAM6A3WRjcobm7UHsgVFgirqs3bhEWQCGoRXEAyyUz4EzveQE+iU+vQPGjR5Br53oftQaG3ybt60jYcyePQIz8gCmqgPmZGDkOUJQJDdPN8/7UJ9UFEoj6wc3tMl/dBe6JF2kbH32aesh/bpFxlDbiG/si/j8bi1lc8gZ3Ulsm9tMj1Tt77nrdqZicZ7vFOvPuMt3zGprxdccEHdeeedPVOzFVq6dOn47W9/e7NPMpbFIHvZD76YDvHhbC6zjJnJJRd+gL+RFb/22murativInPEvr/zne+0/9MxbbVeXdv4VnWzB30RG9mZ/SJ8EB3jH3zywZ/85CdnPM/30nlosPiSe+jYCh6Jt9qj3XjL18n0an+eRMav5KmK9hqIR8oV5yDcmzdvbmMAPpFPY79iXd6Yzucqk2+RkdMndgupNqagb3iW4xcZfHqU/kK71Gs8xCeqR3vwQEaFTLRfO3MPq4xz7reie3x0+iu+dfbs2VP7o/OUPPoibtG/zKiIA/okG8ZO2KTf+VwypT85TvIcW/S++vSdz3UH1Rve8IaqmrY/epk8ITsrZcQ/74n9xhayJ9ptZZHf7fl+8MEHmxzwTpvoN1tiE/yZrKu24KU285dkNZlxqRp4buWAfbhkg9j017/+9br22mvr/vvv75maTp06derUqVOnTp06PfVouzn9bK+99mrIAcTRXSnWJ5q9mumZlfvdudlmnImQmnFCEiAXUCQzUOgSJMFz3tt///0bsgPNgHJoi5m72at1pX7PddHQFoiAbAP0A/oKQdYXGSDIEVRG3/HU/gqniJg9q8dzeQKG+iFpUGOICVlAUrQj9wy99KUvraoBTcJTs3RZDu0Zj8dtxg+5hhRAO7QFSqEPEB59gDxBCvAcL/BSG/LGY7KFvFnfqxzrfKEhZJRILZlAS3P/FN7Qf7L0HNSUTkGD8BIahA95N4T2sI8rrrii6aM6oS/QRDyAdCmDTPKOKMiQ08DwlF5Bm7WVHUB+c/9BZmT0LffkQIvwnszxTjm5hwiyRZfIXju8n2uv2bPy6AJ79PyaNWum7nuQ3YJk5u3skzcnVw36br2/k3f0lV5k33Jvos+8P4ne5pr5TjNp48aNde+99zbfRpfpCl/GvvlCJ3CRn/fzLi86ww9ALmVqrEWnq5npQbIQS5cubWgn+4DwXnbZZVU1+AJto5t0S5ygG7LMnqdz7t9gT+KZOKjteUJYrhxg93hkxYF6MksvW+U9JFbYgyeW4A3ip/BB/8R1+0r4KbFA3Nt///3bPtXM3MmckRd7y/1NniejvFeNr9Rn/iAz/e7/sDdLveybLsjw4LGYi4xztJ9+nnnmmVU1yFq2DE+tMpE14YPtCdPPPEkv73HCF3ydM2dO4z/i48Qhesomrb7Rx7zXRIZPjNWHvJtu8q6cqkG/8/RAepQrd/BOO4xBZEr5bjzIVUJ+N47SD+XTV7xFuQKHvRnz6B9dOfDAA5vfybsSJ+9kqhrilCwqvyX7xHbIyMoC8qRv4pzYaSWWevIuIXqx++67P+4pnT1T06lTp06dOnXq1KlTpx2afqJMzWg0eltV/feqGlfViqp6fVXtV1UXV9XiqvpqVf3aeDzesM1CasuM7fvf//7Uba1m02a5ZvbQJjNJ6xahM1BnM7888UHWwKwa4mUvjpmnNYU+zcGSLbgAACAASURBVLoPOeSQhtjIYEBpIAVQEetJ7bswE1eWGTcyw/a9v83Ecz9I3h6vr4nay1ZBIiAMEDh9zvt2PKe/efMytFF/za7zxlpIBPQRIggVgERA4HfeeedWd958DFWD3uCFzAx5QaogQBBPeiB7BckiG3po7bpMDN5bd537gfKuE6iKevGAPmu/3+m17AaCzEO4vA+1zH1x+JX3aGSGZ/ny5VPrX+kN+asTuqJveJZ9td46s1N5Yh3Z0BtoELugz9qetw3To1z3m+vDc4+P9uZpfNrDDrRX+9kxfqWM9JcOTvoQeqyNnslsmDbJ9lhzD7GFGub+JTbp1Dx911dthqzxc3kb9ty5cx/3puYdlZ6sODVv3rx69rOf3eIKP8E28JPPzbXhfHju1+ALIdqQdL/zkeROJzMDB32VOVq1alUdfvjhVTXsPxUvZAn5fygrHwj9V6ZYq038f8ZGf7OPvFdKvBTb2bF4IVvBD7EFmdzc66a9nvM+ZF22kh/h+8gMryD9CF+8zz8pV3vvu+++1tYrr7yyqoY4wbfwx3xT7msw/mDHskD0CNLte7yTMeFH1GOsQa+85zm+Trl5KiYf7q4UsUEc4ktl32TTcgwkTuZJln6nQ3nfiXb6+5/+6Z+a/H1Hnnio7X7Xd6eGkSc526/N333ta1+rqumVL56fzLxPlicjRFYyOfQz44IsB33K7Bx7EX+MbfEcD32vHGNmKy2McbUTz9mb00mVt2LFijrhhBOqatDPjEP8Fz0TR2Rt7c/xO19A7/lNPuioo46qqkHv6dExxxxTVUO8pJ+/8iu/UlVVF1xwwX/OPTWj0Wj/qvofVfX88Xh8WFXNqqr/VlX/u6reOx6PD66qtVV11hOto1OnTp06dXqi1ONUp06dOj196AmffvZYsLihqn6uqh6oqk9W1fuq6qKq2nc8Hm8cjUYvrKrfG4/Hr3i8svbee+/xSSed1Gap1mSavUJjIFTQFQiA2bnZOjQIZXlQX+sa8wQoM9I8Cx8682//9m9tZqwsaAs0DdqmTu8iCIE9DeqEkKsTGqvP6lNuns4CVck1mbITEAV/K0e9kDD1Qh7yHHTIhdk0WUB5ZEGgxJAGPM+TzLTLOtxbbrml7ZfAKwiOmbusEIQIsqDvuZdGn/URCoe3UA59IkvIKiQ89xkhPIOO5El2UB480L5E7jNjhPeQLeiRbInvoaEQ4ryDhX5DoW644YaWiZENyKwWuWmDtpEXxIjt5p4seky/6Dke+qRP2q6NELfc58YO8BoPcm2x+vFSlgtils8pH4LFd5ChjI3+aj/dSbu88847G6qkbdZfW69/0kknVdWAAvMleA+Jx3t6jyd5cps28VGZ8WRzfJQM0fr1659yp589mXFqv/32G5955plNB2Q7IOZQW/y0v8xz7JO8ZN7sz5TpYUN0js7myXzsWmxRL9u77rrrGuqJ+
MC8s4Su8IniUq7x59e953mZGevp6VjeRUEXZY4gxz7tkWED1vzLfvDV2ouHfDPeiVv8GZSYv8j7S8QYvpWt5sll6tO/e+65p/VJGZnVdo+HMmWLoOdOzJLp50v4YPuTyFub8ZQP9jc7RxB1J21dfvnlM8rTJ+Mo+iS+0EvPIdkE/s3v4jBfTkdkJfhYcZAf0x6ZJfV/61vfmsHvqkHX6ed73vOeqhqyZXwkf2zli3GS/UHiC97TC/r23ve+t6qGvZu5KkT9uYKGLPh0maU8oTVXFlgVQsbGNvRYe2Wa2DHeep7O0Q2+iGy8J5t2zz33TMUZ9wp96EMfqqrh3iByft/73ldVwwoTtmI1EL2QEeS/yMD3eKFN7vBhJ/bF0c/58+fX1VdfXffdd9+Te/rZeDy+o6r+v6q6rarurKp1tSWN/4PxeOxs3Nurav8nWkenTp06der0RKnHqU6dOnV6+tAT3lMzGo2eUVUnVNWBVfWDqrq0qo76v3j/TVX1pqotM8OHH364nWRhFg5NhZYm0jh5a3fVsObOrBZyAmkwE/U79AXiYCYJOYAKQZugOP/yL//S1gOaRZohWx9oLafZpjJyxg+tyT0t0A1ohb54HsJu5o8nmXHxPiTNelzthXhBgyAWZt156lneZwLBIwPvQd7wGEIAvSI75ZCx53bZZZfWJu9Az62vVQcUQlsh1ZACepWoOnQkz75XD5QnbxFG9AmPPU+foII+oZ/QFt+TlU+oo8+8PyczUmQuswh9wQ+In3ZN7i+hl2yCHkE88cQ7UDz64FPmgs1CiJRr3TkkFyKlfH0lC0gVGUNsE6VmB3iiP/pOX6FB9Jss2DhdIBPrfukQu8w7GiB2MqJ47abrZzzjGc3WoHL00Ek0edu1U4fyJB1tzuw6NJA+4pE+kbs+ZMYS0rxu3brWn6cKPZlxavHixXXooYdO3XkEhYVM4yudZCN0hC0px3uvfvWrZzxPF8UUKDLdYyuQUz4UIr9mzZrmk8gcoqtu9iMzSlcg3XwbO2LHymEfvqfb7Dlvu8cDPpad0l32YwygXXQ0T1njy/P0Mj6b33jNa15TVUP8zBUJMkf8gX7wZ3yqk1L5sdmzZzcfd/XVV1fVkMXKO3aURS/wAm+MDfJUQn6BHiiPfeeJkXkCGD9j3wgyFhCjja+MRXJFAFnLVNor5jn98rvn6SlSvuzZBz7wgaoaeKr9fPxuu+3WfJ9n7PvQ57zdXjaInuapfh/5yEeqavCxxx9/fFUNevcnf/InVVV17LHHVtWgZ+rjS4138o5FfVOf9+mRlSp0xxiGXtNn8ZHs+X7jTjogxtA9PkU8T19Bv8lqt912a3Vpm+yXvsmeslU80FeZtzzlL/f/kCE/xzfRR7HamDX3nK1fv761dWv0k5x+dnhVrRyPx2vG4/GjVXV5Vb2oqhaNRiOTpWdW1R1be3k8Hp83Ho+fPx6Pn89oOnXq1KlTpyeRnrQ4ZcDQqVOnTp22T/pJTj+7rapeMBqNFlTVj6rq5VX1lar6XFW9pracLHNmVV3xHylsPB63WTDk0cwOkpDr7yEbiQ6bHUNVZAOgRmbjZoTKN/szw8w9DmamBxxwQFv3CWWFbMkKQfHypConPCCIVZ4iAwFQLhQEYpHIueeUZ5arPOsU86Qcv+f9AX7PU64gDrIok+vxJ99DfocUQNLM6qEyEAjIw1577dX4b4YOwT7jjDOqarhrAZpHD/RJW6EhiN5AvDyvHCgJFBFiat0s3uJ1noVPT+ijNacQD/pk7SmEgn7THe1XrvqzX3gH5YT8QY3oXp60csghh0xl6iCW0D3yQNaFQ0/w0v4gbfKpb/RZ3+gZZOoLX/hCVQ32pF367Dk2S58gdAhCB5HSVzI2MPW7dmfWQjnsxXsQP3aRGVq6QpeWLVs2dfN73o3gb31TJnkpSxvUbU+MPTrQcX3F+/PPP7+qhhNxEL2S+Vy4cOHjniqzg9KTFqdmz55de+yxR8tK0A2+lxzITZxiM/hM3uwTuuy0KT4UogkJZ0P8CZ1jA+r7sz/7s6raok+QYXbHnvk+aDukmM+1EoHdOfmKb+Pz+GS+x2fu1czTzegw3yS+8a0yndbdq0/GSLneg7Sfd955VTX4Zn7owx/+cFUNtiQOiT9I+XlvCduyIgG/jj766CZPvNI344iPf/zjVTWcFsYuxQE+LzN44hCZZTb4kksuqaphVYh7/ZRLZuJirqqgj3lDPP3mj5TH3/BDkHZ7LYwB9CdXRvjdPW98u/bx+bLhdHHt2rUtoygrgIds0P5Ed/XIsBhL5uljeCh7RZ7eYzf8ffpiPMMDWS7P6Wue/okXr33ta6tqGKOK4fQux3P0WNyyAijvVcND5ZAF3THOpJtOLPvmN785tc9Om/RBnTmexhNtZLP0T7l5f6DVEGwYb5yGh6f0AA/XrFkztXplkn6SPTU3VtVlVfW12nJM5k5VdV5VnVNV/3M0Gn23thyX+f8/0To6derUqVOnJ0o9TnXq1KnT04ee8OlnTybtuuuu48MOO6yhO2Z+mSExK4VQQUhkLyCcZpKyIpAys1T3ATjJBepsdmsW63mzYOj1N7/5zYYoQc+8YzZpdmqGrA3W1Wtr7ikw2817PPJGZ7+bVeMVlC9PQYNMyIZB6vCKHpgVk4FZOvQfWuNEH8hHnnalv+pXX+7lyXPZtd/vk33TBxmUvH8FigMxwwN9hkzlbfbQDMiD9/QJ72Uzco8OGSoH0kBvMiOkfOil00BkPdSHB7leN0+Swx+oKXuAfEBw816dTZs2NSSIHD1D9/UNQpSnh5EfGeRadG3Le2P0iX7gCR6y8TzpzvO5P8Xv6mNPZMZXeJ/dsnn15x41OpJ3XfFVeas8e2QHDzzwQOMFHuCx79m4v+kDW4R8aYtPz5ONv7WN7PIODggsG9Xm448/vk477bT61re+9ZQ5/ezJpGXLlo3PPffcZmd02V0wUN+MY+IMv0Ae/BffCsE+7rjjqmrYd0bXc18iXaX7/MdkBgjqz2fyBewgT8ASM/3Nd6Vv87d9rLJF/LcVC+IUf4JXfFye4OV7WQ0nd7ERyDs/JPuJN5Po/mT94pysmHs6+C8y0x8oc55mKpaQ1TnnnNP6Kv4oM0/LzFMo2XneQ8Un+R3P+Ex9pwd4pm/Ks+9Ce7TTPsXMdtHnPElVNkS2JN+TzaDvxhJ8MV9ON4yd6KAsOZ3ij/Rv48aNLQ7wcfqIN+SFh0cffXRVDeMN+ve5z32uJolvtfdKm9gSf483PmWKjAVkPOhn7km2p8vqEmNOvCBz+kqWxiB4mCeuOpnV73y6TJOVCHTGuE05YsH+++/f/Bp9JX/+JfeAsQm8ILfcq2WvmUyPzF5m/qwWyqzSb/3Wb1VV1aWXXtra/NnPfrbuv//+J/f0s06dOnXq1KlTp06dOnXaHugn2VPzpNEuu+xSv/RLv9QQB7NyaIjZp9k3xAPy7RPKJNsBKbAG2Ro+iFqiw8pRvrWxZv9m+fvss09DW8yQrcE3s/c7NA7aAXHKey0S9YCuaEsizXlWvVmtdkA/8ib1RJMSKfcJ
7bMfQPZD+z2Xd6iY+ZMlxFs9eSutepSjnwcddNDU3hFlQcfJB3INlVGmPpO7Pts3gnfW9kKgvGffBnTGellZOSgJlBHvoSMyJtBRBH2B0mj/3/3d31XVcD68fuI9HuofXtJb/MA35UJg8NiN0ytWrGhr1+kLuUJLoHv0OPefaVNmfPOeGVklemv/geyZded4ClHiAyBN7CXXnUNi/U1XILb0Uzlsmn2wY/0mY+2FoOmvdtARCKLy0eRJLdA4dWprylMGj37SF+XoK3lDLbUxb9nO9d70UbmydTfddFPzU522TrNmzWr2yFfZC5O3gudt5OyZfMmFjrFBugTRZM/vf//7q2oahT3rrC33hvK9/MGXv/zl5pNkGnP/Arv8i7/4ixltolu5L49+yFjYc+IUJd+LuQiirW15DxR0l8+yZ9Vz+oyn+iE+yWaxP3bJt2uXvXtWa+R9W37n58QEtoXH/NP3v//91gfySp+jjDwtj/6IW/xC3ucC0SaDRNDFR22DiIvV9n/I0PB1eCJW54oF7cz7aPJUv1zBoN/azy5yLzM/lqeRiiX816JFixqP+UB7qKD7skjGXfRb3fRDptF4iQ3pi7glG8EO8E5GiCzIXPuMLbWPHpEJniqH/qX+ap+9PvRVhsf7fAp70B8ywy/9+NSnPlVVQ5yjO+vWrWt6QV/zFGF6JGbjtVgps0zuntNHNkPP2ajfjfv9bbxtbKAvz3nOc9opiFujnqnp1KlTp06dOnXq1KnTDk3bRaZm9uzZtWjRojZLhfpAb8zwzdTydlbICKTTmlDn/nvP7DvP/YcQmK2rL/eTQCquv/76hq6YrULzIVk+tSnPnjcDN/OG7kFLoBTWPuZZ3ZDmRKath4SQe157zJZ9n+tZtRcPIBF5r4zyndADdYKQmG3jmXLyjHtIX95gvXbt2tYWZUI78JCcoRLezX1D9Eg5UAroCMTLuljP4RnkQduhnniIJ1Ah2TXv4w2EDeJBZ7T3hS98YVUNqCEesQvPQ1Mhf9AZss2T7/LG5sm1zU6a8Sxk0zn30BM2pi8QXjwgE4iRtcXWJkPx9B3CBNXLm5zzNmr6qV77FCBa6s97kfDOJ70kG3ZExnSOT5DZZA8Q6ESwJ/eATfJh6dKlDWnNPYCQLGWQO55pK/9FBlBBbfd88gDRr8zKZkZh7dq1jc+dpulHP/pRrVixYgoB53/oMN8oG2kFQcYAdgq5pvuyDyeffHJVDZmdk046qaoGvWEL6uPX+KPFixc3/8rOofL0ExrqHb6CbukT9FbbM+POF9FZ5cv00km6L66JH+wpeckvsUP2pzx2iexjEa+0S6bnuuuuq6oBXZYVUT8/JeZD7jPj5cSx1atXN1/CLvN0PPaNx+KYPuA1H8KPe55fIG8ZfX3DAxl5PEV+xxN99zdE3vdknXex8Ol8q/rFJWMU/ipPHFM+HbCCgi5oT+5f3Hnnndt3/DLbo8/iCF+G9/w+faIHxnoyHvZLi5F8JX30HL23F0cfyNIYIu95YqP0l63n3Xee1x8yx2OZILxOX09f047t5ZFVy33Dd999d/NXYi9/xT+Rj7ihLHrLZoyr8BLP8tQ0f+eprXmPm++1a999921t2hr1TE2nTp06derUqVOnTp12aNouMjXr1q2rT3/60209PWTATNEntMbfeYt4Pm+GZ1b6ute9rqqG22S9Z+Yo8wNJMOtPZHXPPfdss1GoBrTUrBQ6kScfuTPCTBxqk5mNvK0eCqONZrmJhOu7tcgob7Y1o4cu5bnsMlF4aOaPV1AXSAM00axbu/O+HgiI7II1rp6Hah544IENHYHKQNPyjPK8vyj3MUFzyIzcfUJeIaS5hwfaAS2CTJBFIhJkCU2gp9qTN0L7pF8IigiNUY92QLjw3mk3soeyHLkvRvuWL1/eULxESZDvyRVPIJd5pxMeQ5bUhVfKxxt6kWfVW1fte/Zi3S59oT/eU78sXt5RlBkldqE+9oxn7nbRb++5F0G7yEh/3AXxve99b2otO322t4nN5gmC0EQIGfk58YfvoAd8ApnQxzxtSDn6OGnbbKzTNC1YsKB+9md/tukKOWWWUbzIvW6Z9cR38qJL2zpRjC3mGnTl83OTiDsdsEeFjmgjO2VHThuTleab+Bhxwtp8GVd2lrfc594BdimrlKcW5gmkfJ3MrJvkocTsEooLCYewv+QlL6mqwYb4EzZobOB3/IC4s9m8D0Q5N998c+OxmOod8pc9wnN6gyf8PP1h99qIF2K89/FMZoQMyRqPnOxF9nwvmZGBfuR+Xu204oEvxwO/iwn00Cf99WlchffGMOIjX44fs2fPnrrfjLy0id/WB23h85RJX/DU+56jr/ap+Z5essnLL798xvdkzbZPPfXUqhoyofbd0XurOoyzjAl8b6zLtska740HZfXEI5l2e4zojJiRe61l33beeeem+8YNbIi/oz+5dwbPjDk9h3f0hM+wgkrcyhOA+UGy8rt9RfPmzWuxb2vUI1inTp06derUqVOnTp12aNouMjXz58+v5z73uW3ml7NXyDhky/d5nrbZu9mt9Y6QNLNf6xTdyOu5PD0Kwpk3Nh900EFtlmmmbMYLjYGyQA6shVSGrBRUBuEBRNj6WEhArmGGDJuZ4wXkAmqfewycAAatwbs81Q06KFuhP+rXXrNrs3R7Hczyc02sfuOxzI3n7rjjjrbHBGIgswJJkGHxO95AHLQFMmSNJkRIG/EOr7UFUgXN0Xd6oi95TwieQBjoBJn4zPuU/A0NTaQESiibBtGla2TobygShMw6XMjcrrvu2vqIZxdddFFVDXrgk95AzyC56oC2eU4fIbNQF3aAJx/60IdmvA8lJOvkuU96zlb1zXt8AzQbGkqftQuRIXTTc7JfytvWaVJ47nf6fccdd7SyoGP8Eb+mzVDmPH0RAqxMN4ezQeicvTN5Szv00O9598/k7dfa32ma5syZU/vuu++UffEnEHI+1e+yh/wQXYeuQo3Ji3/zvTsrPvzhD88onz845ZRTqmqwJTq52267NR9JByHF9Pyqq66qqgHJddqg3yHEsnqQX/s8oKviIN9E15TDt4q16j3iiCOqavB9uacLIivziYf0VJbADe1kwD+I3SjvzVGeeGzPDR7aO2GswP8p96CDDmr2ihfKJnf2CIUXy/l3dmzvXe5RwTuIuD6kL0zkW1bL7/wAX5b7UfDA++IOWeY9Jnw9PyID7Hd84G/yxD3vyxzpv37jw2233dYy5sZ++uQZd6HYF0KP/O5vdkCvtRXv3vjGN1bVoM/2PhsTsD1ZMeMkGUt9xVu6YU8xmcue2etGd3Jso594k3uPM4thtYl+yrC64+Xss8+uqiHm0L177rmnxV4x1V1RbC3v/dMXvMi9Up437rYywfgeb/COLdMb5dF/+rzLLrs87oqCnqnp1KlTp06dOnXq1KnTDk3bRaamasuME7oL4YJcmR1DxCAOEDPIl9m4maVZLyQB6mKWa/YMOYE+mTFCzBFEYd68eQ3dhzx7B1qhTG2Atl555ZVVNaAe2pCnSEExIGMQr9zDAoVXf66
/zxPdrHN0jjokA0EYZA+gJhAtCLX6IBjab+ZPlvYTQH3MzvOWZbN/KMGSJUtamXkePh5B3/UB2kHO9Mfz0AkZHYgB1N73+gytky3AA7wnQ0gcFJSeQCqcsy7rgfL2X/2EimYmEkGtEJmTiXKUS0egPdDUZcuWNRQk9wnIKtBj+kQv6Qn0RcYNL9hsrhvPE1NyX4m2ZnZChhMvyJqMyYbsvEc3ZEfyDgTto2vsSzu9p1/6ba00hCz3wOHHAQccMHXPUN5ZgOf6BjmFCjvBikwgqvwdtI6+0AdEz/BGltZ76vF3p63T+vXr6xvf+EbzbeKPbAP+0UU2QN78DR3kG8lHXGMDUGK6nHcvvepVr5rxu/cmb6inc7LK9JMvUNYFF1xQVUN2QRvFTs9BxOkMO+TH82Q9vlCcYyf+Vl7e1aM+cVY/6L5y1QvdzRMp/S6zrFx+7JhjjqmqIT5mFlU9eaKXdm/atGkqc8+XpT2LlXjL9+QpYcrh52Xg9IUv1Vb1iUu570dfvGdPA39iHxUfKwYrnx4qn17LCqgn93TlCXayGHlipd/ZE12YPKnS2ExmI08ElVnRdzxVJr2QcaNH2pwnl8pseB/JKiBjCpkRNmgPDf059thjZ5Sr7+yBnosr+rOtk3jxlk6xW6tbEF4bm9i3y/4nVwNYAcB/Gas6dRFvcu86Holjxrz0S2YuTz419iBLfouesgPZV+P6Rx99dOpevEnqmZpOnTp16tSpU6dOnTrt0DR6vBnP/yvaZ599xqeeemqbfZnh5bpAM3hr+SAGvrevBQJmTR4kzKw+b6KHoMtO5MleZtNQ5c9//vNtBu7EBzPpvJldm/QBsuV9s9HMSuSM2ywXTyAM0BW/e187/G5GD0GGGprxO9Ei+5O3oENVzNLxFpIma5bZATLCB5+yBtAks/f169e3OmWHoC/QGL8jyI4+QvvJU1v8DdnG+0R/oDc+yQoSnrfc+x6K6W+8hoBBMvSVrOgfZE/7IBf+xmv6qXxIBmQNWqndfsefY489tq3ThrZ5x9/2mqgTz6GKThciPzbFZvRVG+gZ26QX9Arvc1235yCt9Cf3EUHR8VY/nJ7EHiBs2kUG7MsnGdM9dodPUCc+BL8m76iQecuTldRB7nkPl7LojTZYzy+bRC9kYPRReTI07rfwe6Lpd911V733ve+t1atXb+lcpxm07777js8444zmj9hlnnzJP7FP8oK+ph+jK9aw28dlfwv7ZoN03/p/d1CwhcnMLt2AouZeNbrpe4ivfTx+Z3/iCJ057rjjqmrY72FfGJ8KIeYT6ewHP/jBGW22p4EdeV5flEdnrZDwnPaIAfyRT/Eq74aCkOepjfybdrFF8dGN5suXL2/y913eq8fv2ntA/srm7+kB+7QSIO+qe//7319Vw4oEPMlb6D2f9+XwhdqNl0iGH8/s79CPzGpY6SDm58ljmU3Hw1x5ww/KfuPjww8/3HjmM0/2NKajB/aI4Z2YJz7x3zIoTs9jB3hA3+lt3vuX9zPJCMpOqD/vNaNnmaXLu8TwmizEnxy75Ml29DXLx7cvfOELVVX1+te/vqq27CUzdmTrGR/06ZprrqmqIXbK5vo9V46okz/UdzGfP02/mae72lc3e/bsete73lW33HLLVuNUz9R06tSpU6dOnTp16tRph6btYk/NaDSqefPmTe0JMGPPe0XM+M3SnQJ15JFHVtWQHcmbdKG5ZohQJOs1zW6h0RAU+z3MWF/ykpfUxz72saoaZp1m7hAfyAE0BFqnbdaGQnMgaW4qNlvVFjNuyBJ0w/f+Vg7UJtepQtLylldriz2nXKgOXphV5/0iEARIOZ7LmmmfdvmEQOCx9u66666Nl5AAfYNm4J2ZPJI9gE7QG6gHRACCABGDPEDK9IkMIOjQeWhQngCkPKQ+5WoXpBcvIFzaAcmHFkF5oJmeJ0syUk/uKaJT3r/uuuuavCFRThihh3nKn75B2/zOFmXB8FY2lf7Te/sMrLNmP3jsd+XyDdrOhvPmZ9+TmZNYoE70E/qtfLpGr/kO/cBjsiEz6BRdYd+T2Tnv8B/8DaSTfsqaahP/RK5QZ8gpXvFB9EU9+kBGTvHLO7ZkS0844YRmj52mad68eXXIIYc0eckE010orewBXRKXvEdX2DMUWSYNSkteyqHjfKr36VHuC/nCF74w5Z/ZI98hEyou8GW554Z/9lzarxhsf4Z66Cy/Yp8IH8if0GW+js/KPS5WMOAx3rN3z/kdop2nqOG5/vJLsg78ovLxia2xsR/84AetrLwjB+/EUO+KtZ7jw8R2SDh94Uv4HG3Nu+D4Gb5ZuWJ83m3Hr/CtfGTef2Q8Jl7xP3mypfgqTpMxXvHd+EUX3F9CVvyTIwnhoQAAIABJREFU75cvX95sgE+UpZL5l23KrKk+alPeacfG9Nk+Entb6HHex5RjCX6TDHOfruczoyQW5KmanqfP4qrVQjItypMlJPOjjz66qoZsr3YoL8elP/3TP93qJB/P8h0yf3nnIdvFa/pKfvSK/usLfVSfWJp7t4wJyfruu+9usWtr1DM1nTp16tSpU6dOnTp12qFpu8jUjMfjeuSRR9qMzMkUkCszQ4hU3pZqnWzemA1tMaOEYJgVKw8aAxWC6kCFrFuE1qxZs6bNRqEKkAF0/vnnV9WQbdIWCJJZK/QjTwiBupiRQmPM8CEW+uY0GX2CAnkez3LfidkyNDf3h0Ay1JdrUyFr0MU8kQVSAGHQDwgZmanXeuAFCxa036AR21p7nCd24Vne6A0BgI4k8gllJwPv41GimplhwRuIqTXU7j7AQxkm/YPOQEchXJB3qAq7gAjTS7pCx/yNX3krNp09+OCDGy/zZDe8gZA57SfvYcIz9wNA/el33kQOAcML5dBj6Az7yr056scrPKQDeX8MVBFv6KnsH95Aor3HjvLUGfygI9rNrtiZ9+fOndv0hx7iGT+lbqgffZKxgULyRxArn/ZinHzyyTPKZwdsm9y1Oe/IWrlyZet/p2nauHFjrV27tiGYfBadIgd3UPCNp59+elUNWQwxgM7miUZskSzooHr5H3tv+MwTTzyxqoaTl17wghdM3dVF5vQ4T+CjI+wuM/riGR377d/+7aqqeve7311Vg/3Zk8Mf5KlV6hNfoMFiNP/hLh9+xHtI/OQfIOR8t30efHeeioXw0Ht4r14+W3+0e/78+W3/nraw49yLmdlwvCf/K664oqqGOCL7xW+QBd9onMLPazsflJmT3JvHh73hDW+Y0UeypyvGAnxzntKJt/yN7KDfxVfly7LIZPHR9J9v168FCxa0OvHKHi5xREzEI3omPtDb3IfLLui97NF5551XVUMmMU/CpVfqybEAXihXdsL3xq76rh66Qz/JUL/s/cl7d9gbHmuP1UdkZDyo/XTw/vvvbzaobPFG27zjM/fc8An0y++5b002TNtynw//Ro/0fXIlF3+0NeqZmk6dOnXq1KlTp06dOu3QtF1kalCeRw2hgs6a3VoXK4NjhulWYfc65GzdzB9iARGAhENQlG/2DwExI33ggQfaHgMzb8gsNM
7MGbohMyGj4sSaPF8dkiALAcVFeAChgMJAMuwhMAs2kzdzR8o365ZdgARoPx5efPHFVTUgbtpr9g09gh5ab+73PFtfvbIHeI+fhxxySOO/ZzMTB30gT8gWFCfX9iaCAFmAFEDEEpnSRsi4G7yd6y4jQzbeg5CrTzv8nifs4Fmii9AavM32yqrkDdP4AHGRPZvMrshYQB7pEd6xGbyDkNifo83WHOMVGbExbaZneUIXFIaM8QCSZX25tutr3iwMZVIOlIf9QY302+k3UNQ87Qlph/bqL2Q6ZU/Pr7/++qm7E9gmnbcHLPflJWKa+3qgefTC8/xX3ulDNrkXy1rpuXPnPu5NzU932rBhQ61atarpFp+fGWCosLgkyynTCmFPO3fzdt7Yzn/QvY9+9KNVNSDebBCiyjZWrVrV9JleqpMOyQrQHTHOCX1p93TJ7+yZbxQfxBu/y6CIA/ZXaLu+irWQbbyC6ouPeKvPfLJ4x4YQ/0FGspp4zodCiX2PX0cddVRVDXFXNm7dunWtTbkfNU95IguU+4G0EW9kkWXB9Q2Crc14i9Tzyle+ckb5Z555ZlUNe3bwXD3imLhGr+mf8Rm/IwNDRnmfmuwGGSlPXPK3fskcZaZr48aNzf/KrOibEx/zbkP6Sr/oOb3Pk9t8r81vfetbq2pYLWQsSsZiqRhuXCcusCO+Vrvxjo8mKzFAn2Uo9Ut8NjbhY8QGuoN36tNOvsPf+s/3/MzP/ExbEaCv+mAM4Fm80lb6LZ6dcMIJVTWM5fA4V0SJ5XwTuuqqq6pqsEXjf77jOc95Thunbo16BOvUqVOnTp06derUqdMOTdvFPTUHHHDA+Hd+53faOeWQKrNYM0WzTKioWaksAXQYggHNMeODEPgdUmk2DQ0w2zebhSBAd5/97Gc31MJM3Uxam81S8Reia7aqL2a7UDenUViPLRuVfZWNyrstIFrWbZt1Q64gaGbXkADrghE00vOQe0hFrnP0PITNc9AgMoVeeQ8ShrR/PB7P+P9km/EEgqQtUHg8sZ8HAqUcyBA0Rx/zDhS/Q7LwWPm+l4XwiaCa0KL83mkfeKU8n/qZyHDemUJfZT+gUpA0z+ETtOjQQw9tPKV30Da8xAMoCdSP3sqSqYttsRn73egfHmojvcpT/OhL3uED5bH+HILq/Vx7DVnNO4q0xz4F75O9ctiH/vNFmdWDamk3Pj7vec9rtpp+Rd8gWfRUFkyb2BD5aguEyykzbMpz/B1UD0qtHcrnm2bNmlXveMc76rvf/W6/p2YrtP/++4/f/OY3Nx+cpwThI6LLCAJPN/lClHszoL6/+Iu/OOM5WXRorHpkF2QC99prr1aWeAXh5gOsLIC+8m18E9/I7tkHnyJ2qvtNb3pTVQ0xOu8X0Z48zQkyLS7ygfwMJNpqDAi6+qG+/AGbkuXgP/BUvCVD9pzr+CHCb3vb26pqsGs2eddddzU5iEfGCXwnn8gX6pP9SHyZNuQdWmQlVkLU8YoP5zfSB4kveGFlCv+DB5llwwP10Ct7Iowt7OHiy7O9eUobO9E+q0v0mx9Du+22W/OvdD33n+WqHc/7XnxSl7rFo9e85jUz6sRj5VhNoY94zhfQBz6WrctSqZ/+Kxev0q74FCt1ctWHdtNzscA4kR0bh5Kh+jKLuNNOO7X/syny9on3fIYT1tg+uZEjuSvP32KqcsUpPkH80jf+cnL/0dVXX1333ntvv6emU6dOnTp16tSpU6dOTz3aLjI1S5YsGZ911lkN0ch1iBAsqCjkGhJvHaTZrxMsvG/2qq+yJcqDnEA0zCChzWaeZpj33ntvQ2zN0DObBOnNuwjMqK2vhto6VcWMHWIEQccDs2FovyyUG5qt7YQUmJHnTfD64nf1yfDgRaI/OYv2HkQEGmPWD5nIvTuQiDzDXPlr165ta3U9q29QC2gEOWWftQ1CZm0m9EPfoCv0gJ74VD7e+ZusoTHQRfs06GmeJJSIBTSFDuVeLwiG76GMUCe8l92DYiGIofrYy3e+850plFmm0d/2a5A/Hqd+Q5gS7ZElgJBmps/z+k5P6JF66a9PmZa8g4jMlG+vjPKhPmw/b4Zm+8pLH8JeoZrsM++GoffPf/7zpzLEUEFtoIf6pE6kjkS88v4s3+Nh3vGRdyX4JNNvfOMb9e53v7tuu+22nqnZCi1btmx8zjnnNLQTig9B5KPpGL9DJ+gOW8nb0fk+8vae9fd02f4V8mVbuX5+9erVzd7YaSK7uQeBPfGheX9TnuAoA8In5c3lfDUf5j3r793hwz59j0fudMlYLUOcqD/d56PFVXEG77Qj9+YpB7/4XP0mc/d4/ehHP2ptytPIkBPd8EbfUK4YoT/kT9/ojxidd3l5n2zFA3GTnlpVQs9kH/KUxtxbzE8gvPnzP//zqhrGJnRHdoJd4F3qlHayB/2kz9/+9rfb3mNy4POMgzJbxv/KLpx22mlVNWT03DGX9/rRY3GPzGTN7D/UZvFLO+iC9/RRvPR9ZrWUo/14TlfUjyfiFh2h7/RazJFxFUeNdazEkME99NBDp+6xy5UdPvOEUHFIn9gO/+c9vFPnq171qqoa9nCRPx7SG7zlmxYtWlTve9/76vbbb++Zmk6dOnXq1KlTp06dOj31aLvI1Oy9997jk046aQopMHM0izUzhDSYdZqtQlugQhATM0drz2VyzCQh3RAECATkIfdW7Lrrru20CjNqM/HJ9cxV0+tr85QXaJvv9d2MO+/ogbxB3KC/Zslmw9bpaqe2m9GrDxIAJZQdgHB4Pt+H8kACyCrPWfeJL/oBWYG84NPkPSKyUHgi85GnzMh6mdFDEvIOEwiYOuiJ370P7cu9V2yFbJQDIbP2XQYlT6lxYpDTPZSnP8qhdz7xXn0QOjzNG8y1l0wgHtDTyZvPlQkFyfsclOlvz0O81J22lidv5e3WieqwYXpBNngpu5T3A+QpSRA97+kzPdQO/SFL5eg/X6E8CBk+0TE+Qz+zP/PmzWtthdhC+fKWa/7MKX76BJVTN5SSLfE17MVdQHidJzpBULV58kby3/u936uVK1f2TM1W6JnPfOb4rW99a+NfnrzE3vgrmWF8h3zLnMm8yGaIBenbxSN7rnwPbXUPFpSZH3nggQeaX5UNdhKieJU+jC7xLTIb3tcnmUo6Kpba7yOjyt7S9+IRexd30g/ZQ4OH1vPnyodXv/rVVTX4StkrPpSO65/TyzyvX3jLvsWz3L/p982bNzc54iF5OGENr8VMmRNZJDyQdcK7PPFLTOarIN5QfHoD6ebzfvd3f3cGD2Qx8o447adv+kwX8CIzT+9///uratjn6/fcC6vd6km70T88Fj/nz58/pZd8X+6V5Pc9n3d/sTG+lj3QE/qMUi/JiG+VqWTTnld+7hsx3rEiR9/ZMtmQnXLxQpzVXrxVj9VLaYfsTbuMmdjRTTfdNLW/VVtzL7NMDt8iPsl2eV8b6WfeOaeP9t/JONqDLOZqu/1LCxYsqLPPPnubez97pqZTp06dOnXq1KlTp047NG0X99TMmTOnli5d2tbNQjbMqs3gcq+L2S6EA
oJmFu89SCnkI09wMhM0KzaDNKM0S5f1WLNmTZthQ13VYQYMxYOSQ5rMcs069RmiAIU144fiJ/LleX2aXBdfNcz8IQ9QQoia2TAeKQ9Spc94CY2CJMjw4L1ZdZ5Vn6gjZBwa7XeIt/W4hx9++NSdIVC1vOvG+ld9Jk9IER5AxPFQXyCl9sJAJ6Hu+uZvegMlhMDhCcSDbOmTE36gLZCp1EOoZt6PRJfwLk+E8TxiH5ASyAo72Wmnnabuk8ibuempNimL/pAXdEYGR+aPPkKZIU7shSyhlmTtUzn0MPeu5Fp7yJP24gHZ5Yk8EDwokuyFk13w1lp/vFO/96BY2g3J27hxY0O18YJ+0EN+jW1AG/GefrL1vDOED2JjyPvsSJv9zY/i3S/8wi80Xe+0dRqPxw2dzVOq6LL4JQbwSzKkeUKgPXk+vU+nZEP5G7GHHPlF+0wgrLvsssuMG8OrBntJnchVEHmnij7Sb3fl2BPjd7rKF9JZSLq+84l0WNygi07q0tcPf/jDVVV1xBFHVNVgAz75YO3mZ/JuDTzCe39ff/31VVX1xje+cUY/ochiBz7yQ6effnq7Cd13eUqhzKlMvr+h/nwK/8A3ep+v0ib6J5MiK4V3UHr6iIdkoXz+RrzTbvpj1QeZ4iUEXkZKNls7xRk8Q3mfT2a16bt20Y399tuvjW/YFN/Gn4s72qiPyswsED2UXTK+yZPayNt7xk/iHv8v4yOe0Gc+ng/I9rI3voGdsGntEaf9bh8Lfc9TePHOmEActK9J/dq9cOHCFvPzFDK8wzPjGP5IW+gXWyIzJxXaQ5N3vdF/vNEX+swfksXs2bOnTo6cpJ6p6dSpU6dOnTp16tSp0w5N20WmZtasWbVw4cI268wbeK3RNJPMtaTOpve9mT8EE8JtZmnWbNaaa9TNaiElEIXJ09cgANbymvlCNaAckGOzWKdBQTnMTiEFkCcIggzGl7/85aqaRmHy5BV9yBOOfA8lNks2y86bo8kA4oD3noPU40netGtW7/nMRMmKyVpAOCb31pBv3hsD8VGH5yBSUHq8Igv6lCeHyK5BSsnI+m081DdrnH1qRyLuUKNcaw/pypNX8DTvN4BKKVe99krQMeiFTygt3cBj9e21116Nt+qwF4vN6DM0Mm9mViZkTN+gK9qoHjzOE1TyVBh6kNmxXK8LvWGPvmc31hCrRz9RZq/0AzqF5yeeeGJVTd8YnQg0dBLytnr16oYK+6Sf2oInbC1vUM47qPSFv7NH0PtsGs/Yi7biVe4j+uIXv9ja1mmaNm/eXA8//HDzH+yKjrLLvBeLHPn4vBeLX6A7bIpeyJ7ytT75L4i/ctjokiVLWnzJPTCQXCsCIMrQWf6fDtELft49Hnmim9jKP7ALsZg9icX8gj7bF6Re5apfZkU5Mj65MoE/US4/IYPDz/EXTlkz1sB7e0bzJFbx62//9m+bvWoT+9N2ZRgb6CO94Cv1RZbqk5/8ZFUNchUr8TyzTuyfj9IXWXB6K9tgTOI5PJIlyPtG6IixDiRefNNu2RR2oD10gq9V3jXXXDODX3ytOPjv//7vTefVgXfik/uOcl+2VRzigYweXoqV/Pmv/dqvVdWQTeMj2RQeW7Wj7Yi95eoLdiUukIWsBhvWTv1ip3yJLBpSvvqUm/uKrRQyblTf5B18eZoeXuONzKC4dfnll1fVEJPJjV7lPnPlKF8c8r2+s2Wx2/di9ZIlS9p4cmvUMzWdOnXq1KlTp06dOnXaoWm7yNQ89NBDdeONN7bZKRTHrNbsNJF7MzczQjN9yJb1t5BKKI3Zudm+PQqyJ5AQs0FIB9R5zz33rKuvvrqqBjQCapJr9yE7ZsHWF2ozJCBPyDHD1nZrJfUtbxnP/ScQbyiOPuOdcqzjhaa4QyOzZHir3dqnv/oJfcw9RuqBIEDCvJ/IyUMPPdTkAE3Ik2n0RZ0QAWgFnpKzGb++61Pu2dJWCDmEjV7mXhc8hnRpF6RBfdbG531LecpeooxQGL+rV/YM+qLdkJPcu4HXELg777yztY0t6TNkij7Rvzy9zPOQVHXJItBna9O9p1y8yPtvtDH3ekFDyRSvrXnOW7DzXiUIK7SJTNlZrtWn19vKwJKF56FPL3/5ywuRo7bhGblri77rK3+V+sivQQ21JTM5bFomhz6yD0gc37XLLrs8LgL2dKdNmzbVunXrmk5ALvkR8qTrdIMc6YGMjJOKyJHu5Kl4bAXCzk+lTdB5/mLz5s1Nr9kTvcx9HLmXJvcf8h3s2j4KugZJVp79I/pCx5Wrrbk/hC7LbjhZK08Jzft0tEPf8V58JiM2pL9s7qSTTqqqIWboh/L0x4oJY5Jly5a1zEXeP0auuYfXChH6YHyBJ3xNrqJQj3L5bjEaL+mDcqHyftc+J4fhCX3yOz3mE8TPPKkyx132aYjb28qK8P0yXeKnLImVC4sWLWo2QM/oo0yG3+kRf0xvcm+luvCcL7SfkczYNN6I/bKt9MA4Tlxiy3hFrz3PJ8igip/6lyfZ4X3e8yS7RzdOOeWUGe3ED3aD15dddllVDZnapUuXNv2V9bKPja0YHysjVwKwJT6E/mZmj96wWe8bR+GZjI2+6DuftC36sRFsNBp9cDQa3TMajW6a+G6P0Wh07Wg0uvmxz2c89v1oNBr92Wg0+u5oNPrmaDR63o8rv1OnTp06dfpJqceqTp06dXp6038kU/Ohqvrzqvqbie/OrarrxuPx/xqNRuc+9vc5VXV0Vf3UY/9+sar+6rHPH0uj0ajNtiGOPrd1wpjZsJkehAAalDf8mi1DHjx37bXXVtWAypolmz1DaSANjz76aJs5T55yVDV9Ky5EwPfQCaiGNkAQIEWnnnrqjDZDVfJuCagrdCdvZTW7hQz4HUoEOTD7dp66dnrfenA80G4IHLQXgqedkC08J2PoERnj0+TtycqAjuMZRMC7Zvaez9PHoC14BJmCBFgTDFHAK22GGKnH73gPOdWePFHO7/ooO5Dn+ufNzpANujR5S/3k85AP/bfe13piGU714ssee+zR+gAhgjh5BtpCvz2Hl+pEkC93NPidHpE/3rA5yChZswef1vBDXvFaViv32OR+K5lB75M5n6KcXJOM9/SdPfI5/rZOXj0Qwvvuu6/5jzyVMU80tBaZXkCu8Ag6Tf/piWyUzAwb1Ee2SZ/ypDYyve+++34sCrad04fqPzFW7b777nXMMce0G7DJmn1DOulO7sUkT8R32vtHTuSbcYzPpHPuvbJPgK3R4YULFzbUVVvFNr6U/4eqQ0XZp7J8z06tWhBH2L2+W5FgrwrfCjH3vEyuPssYQ8xzjxvfi5fsXtxEuWdA+/Ubz/kxNif+yaoj7RL32dC6deum9kpqc+7xxEt2h/hYPivvrMOryVMrqwaZ4qHYTKb0TpxTrlUk4qDvPac+MoHUy7BoL/2Wgfa+vUDq1y++nT6Lq7IeeOpvuvbLv/zLbfyBp+IUXqqb7/vDP/zDqqp629veVlXDOCRXTdADv+deL76anttvK+Mv9uJhxlgyMb6iv/SZ75Xt0me+WlaOzPh8f/MR
L3vZy6pqiJO5t5XuGf+lb1m/fn2Ts7bJlnpWn/yujX7HE/ons0z+5O20PTbl1NlcxbMtfb355pvbb1ujH5upGY/H11fV/fH1CVV14WP/v7CqTpz4/m/GW+iGqlo0Go32+3F1dOrUqVOnTj8J9VjVqVOnTk9veqJ7avYZj8d3Pvb/u6pqn8f+v39VrZ547vbHvruzHoc2btxYa9asabNYe2nMeiEHTtXIG3FlaMxSIQfQU7NUqA1kE/KRp2lBY8yi1TOZJYHEmnXm2nzIAhQNSgtBsl4VOgv1MAu+8MItcThPYkv03rpZ5UDEzJ710awZyueGdmh/Zj0g2pCKT3ziE1U1nDQHuci1p3jsLHPtwtNct4uPkydyVW3RAeu/cx9F7kfKM+XN4vVRlksb1YXnECi/y4Q48QvKAqnIO1Doa+4vmbzZu2r6Vno8wTPoIxlDn6xjhzJBa1Deyo1fkDEnrECNyHT58uVT9774jQ3RC2VDV/JeCnpGBlAX6A0bhbSplyzZPhvLk3DwDpKFd8l7iCxZsk99135oo/bIlmkP3lvfzWew38wkQU/JEOJ79NFHt77wIzI35EgfIFR46T36Td89J6sJpYZ0Qt/YNrTP9/SYf+S7HnjggaYHTyF60mLVhg0batWqVU1XDz/88KoaMmR0lLzoYsqTT6SLYoFYkfdEsB22QcesQOAXEjVevHhxiwd5whAfKMuk7XnvBbvJW+4z+6RtdJyd8Nl8Ue4n9L4+s7+/+ZstyTZ2Zc+DVRv8yhlnnFFVQ0boqquumlGufvOxUGRZsdNPP72qhlOsMkOTWRV7FGTB586d22KaLBh90Ha+RmaOfPhOz4np/IG26APCa7GT3I1z6Fnea+M5vM67X6zu8BzZkRk/kasq6Lk4Y9yUPpH/2dZppnyp8vRv/vz5rQ/awGZe+9rXVtXgp/GAvuVYkdzJ0TiF3uMNyn0/9E7WQWw2XvJ+7ovNu33wJscueMVnZAaTT1CPLJ14w+5ydYf2yoDSDTr27W9/u9Wd42Gxld+T0SEDeoCMMfGMD1K3NrELMtNHsZd/Uz7er1y5svF3a/QT7wodb9Hg8f/te6PR6E2j0egro9HoK3lpXKdOnTp16vRk0hOJVZNxymCxU6dOnTptn/REYbm7R6PRfuPx+M7HUvb3PPb9HVW1dOK5Zz723RSNx+Pzquq8qqrFixeP99prrzbTz9Ogcp0glNiM0gzQDN/s2kzPrBgqC52aRKyrhll8ogGQb8/98Ic/bGiIjIhZJ7RUGyBgZqNmobIHUA0n4eQpMSZ8yoXyQDugQtBY2QnfKx9SZWaeZ97L+OQJWZB6yAfEBLIAdcYPyJ5TOPQTr8kUv6BOZAJBeNazntXWUjq1iRysm4UqbwuZ1iZ6AvnBU23LfRa+pwfaSJbQPrI44YQTqmrQU32BxMl6JOKep/shyJn+5OlGdCvXvZMdfsgYQTHJDrrz9a9/vaGA5EhPMrOI13hE/5A1yb6XGdQH6A3e4Ck9p6f+ZsuXXnrpDN5BpfFaBgkv07YhW1AmvMQj69a1F6IlG8Ie9SP3ZdElz+MjRHzjxo3tGfKCSOkjm8nMTd6LpO25Vl2Wid7miW30mwz0WT36vHnz5qfi6Wc/UayajFP77LPP+KabbppCV+lY+ma2RD7+Fq8SybeGnb/gf8SGvEvm85///JaGx51M/v7nf/7ndrJnovfszWdmZvOkyNxjwH7f8IY3VFXVZz7zmaoadIxu8kF0jt3kLeHqVR+fDelOHyyrIC5Cf/kpOs0m2Dvfqh6ZHX6Nv+MX7bWRMfY+Go1GLa6IuXwQPcn7wvKUNPoh+3PkkUfOKA8qnft4fS97ZS8lZFyM52+0k3+gL74XHyZ9V9Wgr+rxuxvijZuMLehzZvcyEwV5J1OfZDo5FsEzflgMxbM84dFz2oqX9nrmiXLGTeKIeGHco036Kv7IRvidHuEdmfPFVoFoP57zBWKBFQk5Hst73rSXPmfswDf2pjxkjHvggQe239iM1Ub2HTm1jz6JQ+rIk0y1UTZVW88888yqqrrooouqapAV/RHn+FO/q+e4445rbdwaPdEIdmVVnfnY/8+sqismvv/1x06WeUFVrZtI/Xfq1KlTp07/L6nHqk6dOnV6mtCPzdSMRqO/raqXVtWeo9Ho9qr63ar6X1V1yWg0OquqVlXVKY89fk1VHVNV362q9VX1+v9oQzZt2tRQTugR1AfSCKEwIzSTM0uVBYFKQXPMEJVjjR5UJs+mh/KYVZtlT96anDecI7NYs1VohdmmmTYUzic0A1qnj57PPQVOtYG8QUHyFldthjxAEPAAagMJgzhYH5zrthOptg5TtsBabQgc5EG2wBpUfDH7hvRP7lkgf6gGXlmXKnNHXlBEzznL3dp3yAFUUsZPG8nIc9BG9UCgPA8VIRO/5x4eekuGUEDIBASXTjlZKG+xhhLJRJEhu4Eq4hekRX+sI8ann/qpn2p1QOHyhm7ZMqjicccdN6NtMiW5b0Tf806gXGdND170ohdV1aA/9Mm+D3tj+AZ6Asl13r/yIVDKIRO8kyWBTkFLZWzyVnCoIp/DvmSW2HveRXPjjTdOrUlm03kDvAwkG87bpOmNvtM75ZCVT3bB/+V6b/rtNK9DDz20oWQUkukYAAAgAElEQVQ7Iv1nx6qNGzfW/fff33SSPOgUuZAf/kOJ6RyfbT/HJZdcUlXTe5zoHB3NZdp0n267Y2zyVnM+xX6dvL+FD8tTOe1RgHRDtPkH+s8v8APsj655Xtsh2/RMhkbcQfyI99iI7IB2Q+rVy29B2CHn4jQfSCZiCd8sjrIZvpU/kflS/yOPPNLkCe3PE0HFCz5C35RhX4dsmD6lnfID4hQ/jqdis77k7fT0giz5NrKHyGsfnytOyrrpj322v/mbv1lVgz5mxlI/fIp3SDvFFOXTrSVLljQeyHxM3i5fNchd3+zH1Tf6iAfGIbJU5CvW0qdcsZIncokb7IYs7TvxOx+Bh/Qy7/yxf069vhcf8YFe4oNxGh6zc7GBvdFr2RNx+K677mr6wG8pkz7KEJNfnormd+Pp3C9nPEKG9g3pgxjO73lPn+n3oYce+rgrCn7spGY8Hp+2jZ9enl88tmb5t35cmZ06derUqdOTST1WderUqdPTm0Zm//+VtHDhwvHP//zPN5TE7BIaZJYL/Te7ht5AYSDckAazYDNL5Zll5xnekDQzVEjL0UcfXVXDbP2GG25oM2rPQl8hRBDfzBrJgGgrNBbi4HmzWDP7PAnJHhizWwTlyIyKrADeaA+eQmX0C8IG5TFbhg5BRqwthSbiR87itQNPoTiQEbw3ux+NRq0P9EGbtAFBx6Efxx9/fFUNqL81wdYcQwIgUFAW6A1ETdvxSt8gqVAbe8HILPVV++kGPcoTSPBGPXmzuKxG7sWAztAh+p5rpaGU1p/fcccdre/kwKYgu/RLJg0iq494zI+QX64bzxNPvKc8stMXfYR0ufcCUgqp0WcIHaI7EC08hwrhea7RpxN8hDX3bhy3pl676F7eGcNedt9996Z/eIIH9ExdfIE
y84Q1PoC+Qe/YA9vm5yCnuZdGXxE9XLFiRV100UV19913zzxer1NVbdlTc+qppzY+sxm6JI7kenv+htyhsEcccURVDbbGhqCzbCBPoqRbsg38ACSUnnz/+99vvooOQOOVRSdkiWR9+He+gw6KO/YRvvKVr6yqqo985CMz3mcPedcJRFx5UPpcP88Xind555e9E+yQr+ZjxU22l1lyvM19HcYg7Jht4CPfLOu6YcOGVheeyTo5wTT3RvLjypQlZ/f0RN/Vmb6GHavX2ELcwXvtydUjfB3k3Cefmvf+8aH0VXwhA3e45P4s9WkvnWIvfmdHmYl6znOe0/TDd8YP5KUN6qAveVoZedJD8Srv5zJeEUfEDTGUTMQ37+O1vrMvMuHjZSGy3hyLipPiNDvSbvFJlla9svj4kHEtTyn8/Oc/3/rG//BTnrFvifzwnv4ad+krG84s0VlnnTXj7zzhl/7ba8zfTe4tvfDCC+vOO+/capx6yu0K7dSpU6dOnTp16tSp09OLtotMzfz588cHHHBAm02aoZlNmvmZhZvFmhWbVUOszMahQGatyjd7V37upcgTYcwczbrvuOOOhl44ScoM3G26UBFlq9N5+zI7ZtZm/nm7PBTDzFybIEWeMwO/9tprq2rYmwOBc+eK2TTEDaoHCYD2mX1D/fFa+5yEATmzh8LsHHLmPbN6s2714Aeee/4b3/hGKxMihLfkpG/0AU/wDFIFSYNaQMiQNuGl+shUeRAy73sPj/IEFQgExCNPAEpELREKqFSuPyer1Mvcf6VcdgC5n0SNfOcTD3O/mj7JMCoDskRueQ+R8/Nl2eiPNlp/m3ZEf9gsdBAv2Jf6ZXzUD4mDMtERdktmdCL3ZXlPBoh9aQ99zhvVyQ5ivWzZsiY3qBu/41m27zmIJZ5C2ckzb5THQ+/LELBhfcvbqOkNnjzyyCP1zne+s2655ZaeqdkKPetZzxq//e1vb/zLrCQdcCoZXSHvvO+KXPgD2UhypqMycrIMnqd70Fp6Q9fH43HLqLJr8cN+UX746quvrqpBh/RNTOWf9ZEuybiwB/aZJ0TKrOiTrDkkXN9kWsQnvNUe77N/76f/0m92mKcX8jOJ+OeJguKcW89llCezrer0G1Rf27WVL8QjPNUGsdOnOsjMvls8JWd9Fafom/JlXnxPZjKK2pn36PieLI1hcv/t2WefXVWDjDwnS+Yz77ajA2KL+pDn99577zYWyNNeZYeM8dRNJmK/uEVGeMomcyWM+KUcNieWswu+WN+1SzvYlwy/PrGf3Nc72eeqIc7SRxkeJyKyK2OPXPFCVnwNfuCX7MzatWunTl6zb5v+il+51w8Pcv+1cTh550mk9EwGhj55XnvIxp7TWbNm1QUXXNAzNZ06derUqVOnTp06dXpq0naRqdl9993HL3rRixpCZQZnVqqNEA5rPaEqZqPW23vfLNdsNk+cMHuFApuNQzTyXhGo7kMPPdRmwmal3oEwmaVCkKFz6lZ29s26Qutk9cVnnq5kduwTAmA2LHNjlqseyDlkDlqbNzGb2XsOkYHsAKQLEmY2D01WLhRK/ZACs3T1/uqv/mrLLmWWyswdT2Rucs8CGXgfogBhyBttc98TWfnMvQjKg2TRX+3I9bKQMNmtPAmFTKFSUCH9hHTkLdlkDo1MZNj+Lkgfe3nkkUda2fqQp5TRK9+zhdyDxQ7YEplAIfM+GHpPT/WZjPWRDNkPxE27oHyQrUQxrUGGMGfWJLN0ZKxef8vO5f1P9ijoN9nSqfXr1zeky0lLeAKRIke2QE/YFj1F9Eyb8V7fZJlyL496oYn25vBRDz30UP3+7/9+3XrrrT1TsxVaunTp+O1vf3uLK+SXSL09mHniHnvO0w5lq/kdqOzll19eVYPvVi9fm7eQsx0+fsmSJS32kT1/DI3l22QkIcXsl69UB53TBnZkP+HkvRdVg65qk0/2BI31vXbyP+wt9xn65E/IAO/sV3KyH3t1qlvuhVBv7lPMPa2y3lYJrFu3rmV3ZcrEBRmbyVO8qgZ7FAv1lc/TVv7amMJz2qTPSLl8Dz/AL/AHxh5OsqSnMpDqpQvisPrINu9BE8PpN6T/Pe95T1VVfepTn6qqwUfak8bXim/8JXt69NFHmzy0zWoK2U3P+h2v/c4mZa+Ns/BUHKPnfC99YKtkTLb0J7N0xpZ8s7ikXXhLr72PjMfU732yU69y9IvsxUky0l92JzsyeeIw/rM5Nkk+Yr+sExsQQ+mzsYRssKySGM2WxPzch8e3OMGOXmv7HnvsUeecc05973vf65maTp06derUqVOnTp06PfVou8jUOP3MDB1ibaYI4YKIQF2tV4QGWZfvXG3vJ6Jh9goBhXhBusx2zVhf9rKXVdUw43zxi1/cZpNm9nmLvPWqeUqSmbVMDBRElkm5CFJmZg3NVU7uF4L6QQz0zfNmydrtxBN7cCBRZv6e8/7f//3fV9WAbEGXIAOQOr9DICAWyjXrhqRDJCZRSTN3pC95whwEQVvpg74qh65DWyBo0MY8eU59/rYmGnIA/aCH9I1+0lf1klGekU8Pc9+HcjwPcXO7tewAmfldf2XLICdkg/e33HJLaxM0RpvYgKwORCszK+RKj9SlT1BL6J7MYd7rAiVSD95BuaFA0CH6CEHTt0S+ZAS1C1rJLqFEkDr2RC+VkzdZs0t2D51U3uQ9VnwC/aQvicCiXLOcd+pAxvBen+kxf0nO9Ei5UE+ZBu2577776k//9E9r9erVPVOzFdpnn33Gp59+etMpOsvu2D8fLfuB/2yJrxQj8qQtcqZTbJQt0Kf0wfwSPRiNRs2v5x1vZM+HJgINZbVnQZZRHIKI571s7FQfZAG1Ce/oqPfxhN1kpihthF/B+8xu0n1+DJEBnuWeUDGfTNz9JevAhpX/pS99qflrPkbb9AXv1Z32y2dsa78gdN7YQIbD32TCN4pbuWJBO/LuIXEQT/hk9SgvVwLghXaLHd6ne/YjyQyRmTh96qmnzugv/uDnt771rXank2cyuyVjb+xojCdOKEsf2BY9Q97zPHuht96j/04Co9+yY3yy9vIJnsv9RfSfXdCpjKM+ycxzYgmiM3wH+zY2ps9kWzXYAj8ik5/yNA4SC9N3eN+4iYwymyTG8zXsAK/1Xdyyt/D444+vt7zlLfWd73ynZ2o6derUqVOnTp06der01KPtIlOzbNmy8bnnnttmsWb+kAjrY60PzvtlzAwTETXDhMKYMZphKidPgELKtX8AcjEajdqsNtE6s02of970midkJQpnza9MDkTI+9CfnO1Cd6D37hHIW3ohz9qvHZAHKA7kPW/CzdM6oLxkpz1m25AEa0Lxhd7hnwwNGb70pS9tpwjZt6CMzApAnKzphWxnNkDGA5oDuYIMQQL0FWoCZaEveAqVgYQlukh/oSreh7DKGljXCiXUbgjGFVdcUVXTtyZDQnKNMxla+zp5Z8ok7bzzzk0fcl8P1I28ZGjY0OGHH15VVeeff35VDagKBI0Na7MMn2wqBIx+5F0q9CNPJPQ7e5O1o8dQI/3J27WhpbkPgp
7zFRA55UG08UU7fOoPFFO/RqNRQ7AgXbKz9AdP2TQ0kVyhedA38sz7kpRDZnitj3mimz5D3ubMmfO4p8o83Wm//fYbn3XWWY3/+C0bwV/wN+z/yCOPrKohNvidnBHdUz75kDOdp2P2QHjOe3T75ptvbvZBJ/KeqLzrLVF7vgxayu7/8R//saoGH+okK3ZqfxB/oHz2wr7yFnI+ik8VL9mQ9otf6XvV87GPfayqpu+0Y5d8JfvGQ7wVQ5SL19rhzo6vf/3rrS14Jy6xW3qgjty7gGd5Bx3faQyBZ+wdui/ma7s+64N4Ke5oJ3/jb/pLL/PuvMwOKJfM+FgyV/8555xTVUO8EwvE0TxFkGxkEebOnVtvectbqmo4pU9M1ncrVJxWl3rsvjXjFjGXXI0tZU/z5EJyJxOyRWRLNnkymPENnrHRzNblHh8yMe4TZ/IUP79rr5UIVpPQIfWTMZ1bunRp46VxMPllJlib2Dw9Nk7zHt76zDiUe5XFWLxj2/RNJvIZz3hG/dEf/VGtWrWqZ2o6derUqVOnTp06der01KPtIlOz//77j9/85je3WSWEwEwtT/7KvQrQIrcb2/viPWQ2nffcmIkiiBdkDKJhhrpw4cKWibD2XlnQCKiKNb4QITNmM3Lohtluzk49D72A2pnlmrFDSyAIRx11VFVNr5m2p0HWCaImS2EGj3dm/pAFMsIjs3c81D5oMFlBs6BYZIfH6lPPPvvs0+QqIwGlwFu8I091bOvkNrKyDhZypXxtsvbTGmDoDwTBc5AkbYZ+IrzJ+2EgaZARqJ+TeuzhSnSSrKBIdCDvjICM0DFoI50lw/nz5zedlllJ9EQmTtvJiX7ivTJlmfQZ+pz7C8iOTOknxIrs+AJIMVuUUfI7mcluQAHpgqwFHrIzCJ/68nf80A+8zDus2JOsLiTxa1/7WkPNZM62dvN71YAmXnnllVU1yMBt0VA5epU3jkMZoYlkq3y8Yj/0hczuuOOOete73tXvqdkGPetZzxq/4x3vaDpFrpD1tD+6zla8lysC2C8fnuvq6QE/xEd7Tjugwmzrq1/9aruXTNZ7Wyd+0kX2nHsDxLHcm6l8foE95y3i3lN/njYGQWZ36hfPZNn53rzZnW+TDWcrGX8zY5X3bLH/zN6fd955VTWsaCDbDRs2tL5rQ96pQ37skv3mfS70QJ8QXpENe7Z/KVd7aI969dXzYnTurxUnZZLIJrP0uYKC7OidfpCFjJKsoX5ccsklVTXos+wEWfBvt912W+OZMsV6fTD2yywPW0z9zH1x9Ewf09fiobbioT2YZOS9PDkMj8UTYwD10lM81W5xLPdVko2/815DcUr/6AbZKY+OzJo1q/VVNkyflEGubNN+M7YtI23MJ97RZ20lQxluMmNzeEaGZGKlw+rVq+uGG26odevW9UxNp06dOnXq1KlTp06dnno0+7+6AVVbZrUPP/xwW18IPTEDzHX6vodYQGed1gCRMPOH4kJXoC1miHn+tufyfglIw4oVKxqKZmZuTSZEQBkQJTNjM2/oBbRGtkB5UJO8L8Pv+qSPZr/qsY7X7DjRP7yEcDg9yj0GiezhNaQMOmMWDoEwu5fhyRui8ccaU7Iw65/MekA3oDFQFogXlASqAYnO+2j0DdIAtfC3tudpHO6JwAMypo/e9wkNhJbgcZ7m51PmCEKh/VCkzFRCKqw/1w98ImOyt25Y9kB5eLx58+amL2yPLUB6lJkIrYybjIX1rpP3s1RVfeYzn5nRt1xfjpeQWCgd1IjeqS/vVfIe5I7ta5f36OtkVqJq0CGZJjJUPz7IrsiM5n02smqJwq9atarpT2aFtM3nl7/85aoasj7QQKi6PkG4/E1/oIn6Sv65fjtPWKTXmzZtmrpbqdM0sTd2Ticz68EW6Dq++55vzDsqZNf5Zr49b0mH9iL64LmDDz646QR9h+qzO3qc+3le97rXVVXVX//1X1fVYM+ZnebX+WanpWmzPvse6iqryP/k3jS85ZvV72927ns+mM3wW+zTc3jHjslMvbLtTiNle/wXfil/48aNUzwVd/gCvkqf8sQq9i8u5F4E3+cdcDIjKHkh0yFjpF48Ebv5Jz5MVj2z8p6DyOOZTKFyER3RLnHY98YSxkTsRvnsY4899mg25+RP+mzlCb297LLLqmrIIBoLyBjKaPDnbA+PtE0cpG/0iv7iref5TTboefGMvhpj+FQPHSBz8chYgJ7SP3b1vOc9r6qGva1kKpaIFfbCiS3XXHNNVQ3jtQ0bNkztY0PG1zKKxslsyRiUHMmEvtAn453cI0Zf+QBjZ7y0P1x5K1eubLq0NeqZmk6dOnXq1KlTp06dOu3QtF1kajZs2FC33nprQ0nzdBez7Dz5yAzRrbFmcmbJefOyGafMC0QFIuCUNadz5A3RZpQrV65sKEaenJaoad4dAS2FZEEr/J59lTWQmdEGyLT3M1OiXu3wnswR9DD3VOW67dyDYxatH2bbkAZ7g6BXEJFck61eWQSzeGjUkiVLWp8gl9BEfcMjSJCy1JV37+R+CDLITA+UA+/wCtJGT/EU6pJ7uCBc2qMdZA3Z02fIrOfzNDOoUyL22kPPoTz2p0CT2I3+zJ07d+o0PX1XZ56wJNMBpVOXtehQHrYHTfa3jE6e108fyBJP3CoMMZVVgHCRESQPz/0NyaIb2o33dGpbewv5CPuwsr3QLXavXrI7+eST290A/I93oX4QMH3Z1i3nbJdf4rfodd6xATmjT8qzbtvaaVniW265paHCnaZp1qxZtXDhwobSyvKRAxvKE7ToemaI2TG7PuWUU6pq0B17DviFzBLwyeqlP/zmC1/4wvr4xz9eVYPueYavY7fkTpfOOuusqhr2lEDG+X/r6tkHNDbjFN7ILugbXuS+IbGW/WvvaaedVlVDNoGd8U958ql+yeqzTz6cLeAl/uSpomTN7vPuu7Vr1zYe8P/KFKPZJd7xXVZFsFO8yfuOMosgNvuU4cBT7VEfX8eP4ClZy3arX1ZBxoT+kmHyjD+TlcBj/JDJ2VY2g52QlfaKz+vWrWtl5cly2iIusAk+kn4pW1/I2Sd9ICuywxNt8imWGoOIvd4jQ7zVbjGbbPRDe/GEfYoRsnpW+OSqEc+xO3agXeKUsZT2oJ/7uZ9rp2CybX3AM/qSdwHJvLA5beQz6BEfwZatrDKWVC9b43vsMVXPjTfe2Ox+a9QzNZ06derUqVOnTp06ddqhabvI1MybN68OO+ywdvY9lB96k+eXQyzs+5Bh8T50GAICiYS+QDIhBIlw5mkyZpJQ7IcffrjNIs1ePQuFgUzLHkGI80x7aybtPbDHBBJgZm0PilkzFAP6AlnXZ2iI2bZZtNkudB9KLKthdm2Gb1auX1AZn5B7vMOHvBNDxgaSAgEz44aYQ3922mmnxoM82cNMHjKV+x38DZ3BI7LCO23MbFquG4cUQLitKYWSyKZBsugLWeYN0pAw7SEL7YfayETmaVV4RcaJVukfXsq+Jd/+9V//tfHI/UhsDSU6qC4ILHnnvrbcL0S/2Bykil1AsMhY+WwTusgOf
I+H7AlPoIJsVp/JVj3+1j5r/SFm+ke/IXvaz66s+4V2Qdp22WWXqX0GbMIzbAyq6FMfyR1KnfuKoHj8HD/JRukhG7744ouratAjerN48eKpPUGdBtq0aVP98Ic/bDrrtDp8Zr903z5B/oFvJU9+hO/OuzLsiYKyZibPJ/8AsRc/P/GJT8xo++SnLK4y+EA+h25YOaAt0Hi+zN/iAITYSY50HYLMF/MX+q4dfKpYLs6yezrPBvgXcZQM+Cn1451++F578xQrtpqnbqkXf5YvX97GFcYrfBA5k4e2+95zysTL3BekLnFBTOczZYLxQmwVj8R8fkZ79A2vyUJ99Fz2PU+dlaUnOzzK7Lnn6IzTaflsvlWWz9/GTvvtt18rm834jV7gpTLpLV+oD/w3W8ETbTVuwxvP88X0w3M+yZb/1B5xgt2JN+xABsoYgCzsv9QPvt1+WjpgfGgcJU6L2/ohdogB7EX8vPnmm5vtGz+560kbyTFX6/ierVrdg6fGpE551XZ6gxdibd6jZLzEbhYsWND8xNaoZ2o6derUqVOnTp06deq0Q9N2cU/NHnvsMT7yyCPbqT1QHLNKs2kzO4gXJME6fTNAiIJZdK4thdqaQUKrE5XNm5rNGMfj8dTZ8doISbBm0d/Q0jzVBXLkewiE2a+Zdp6Ikn2DGspemdWiSXS+akDCM2ulr5AGqE+ejgM9VE/evZF7bSAt0CZ7HaBR0AB8/MpXvtJ+gwJCISCa7h7J+x/wiB7kHhzZIkhDns4BIchb5SFq0ECoyzHHHDODJ1AQfYe8kq120mflyRzlenN8IKO8I0V50Fey1W82Tr8hd1/96lenbi7Ok9vYVu7ZwpvcS8Km8BL6Z20x3kI1826FRLy0Sx/YMCRLudqFl4jstFc57IaO0Am6AwG0F0eWzvt5ag47IAM+5bbbbpu6B4KNqDN5qOzUZxno5DW5Ko+t5X4p5UHG2AObu/fee+vcc8+t733ve/2emq3Q3nvvPT7ppJOaz+SPMttMp3LvAj+V+7jI09/QV3sqvEe+/B/EPW8VnzzVMdFQ9qaNTpPiy/gqOnrsscdW1eCr8h4MbWNPiC+ke+yaffALdFB92sF38aUyofyE59WvfP3CE5ljYwq+j7/g58R2f/O1/FjeQYYWLFgwZU/aRK78Pp7JSJBnZgHURa/yJEjjInqoj9qsPlkKYwU3wNNLPJHRMf6SMcpxEv8lvkDk+SXxi08UX/O02jxZU3/oED5M/i3+6JOy6Zk+5j4QfaDXsuO5f4ee0W9/0wuxnG2pJ8cIYrPvxSl91S4yF1foTN6fk/1jN+rHU5lGZNzGJyjPmJkMtO/CCy9se1/oQ95PpCzvsiVtkjm0x51+pX/SF7wkAzaMB+Ka97Rr06ZN9cUvfnGb99RsF8vPNm/eXA8++GBTDArHwdnoynkQEGOiQBTfIIEx5ab+XMry2c9+tqoGpqbzUB7mz5kzpzGYw5EeZETaaukJ5WQ03mfA+iyVnANbxiy4WC4jPalcPDG402cOwiYu9VFygzLGoM94mk4EaQ+HmEd0Snvqp35pX17WyOgPOuigqUvJ/g979xp8d1neC/9aOUJCI4eYAKGgVFFAmXIYHp3p4AEKKueDRmEoD7bT0tk9vOiMM/Z54QvHTqd72l1399hxq5RRQTmDgA5oFexhQ0fa8hTSaQhECQQEJIQEkpCQtV/A5/6t/3clwaptE7mvmcw/a63f7z5cx/v+XveBczWZkabP43oNGjgOwUXd/qqLcRoMpkOwSV192sqJW95FL8kWb+nfTTfdVFWDIxOY8fTWW2+tqiFNqz1kIWBnPzIl7a+lUPilfZZkHXvssU1eGYg5qtz0KSBbRokcrmGyimcCrnIQWdFvNknP6Q/HqI95lDQe5AWTArAN/pYfeF5/9Uv9eUQz/df+vGQ0fY92s4uDDjqo2R6dB3CQj8ENfVA2/fc8fcUjbTLxNBj2Pj3Oiwk9L9jgQT8kYNc0Go1q/vz5zZcJ7vjoezqdE+dJ31Y1+ErP049crsYWTXpyUiUGeM5kZ9OmTW0JnAmSmCoumWT43tJrPs3AJg9V0Ra6qA/azs4cNcuv4EUeh8+X5pJzNsE3ek6c40s/+clPznhfu8iEf/AeW8lNypaTiv0ALyQGmKxt3769jU/4kNSPvO6BfPBcPOFb9Jl8latNjrMFRtAnxLfxQXx1Lqvmy302mSEjn03s6Cu/4gAL5HuTJ8u2+eKc3NFJYyUTj8mlfX6nP/w735mHomRMN67Jy8/T7+uzCZtDVHKyjJdsk0zy8nFjRT6aLzAmASbgjXboBz3VH7zQ7rwkPS+zNmZQr/FeXvJuudsxxxzT5Gb8wjfoOz0Qe8Vkkw+8YkNiNx74izfGzIB/AL8xsL7RMzw/8MADm67tiPrys06dOnXq1KlTp06dOu3RtFtkasbjcb300ksN4TDTk86CukAKIB2QRSgSBCAvmfIeBDOXm0mxQ1QgBRBOSNjkrNhs1bO5eddM0kzZTDwR8LwwMS/iMuPOpVP6DMGA8pkdQzCgOblBM5fB5cEA+gHN0XflQBIgd/jxwQ9+sKoGpN7vZv2eg3iYpUPSJzMBufTNEjdLjsjvPe95T1VVO74U+gbVUSbkCq/pER4g5UIj6SEkjZ5CMLxPz7QXj/UVL2QPLBkku1w2p/5MpSfPc3mT9/AUEuh3yP8+++zTeEwvIKfQQvpKD/PocWXlEZF55CKkCW/Q5EWgVdMyglxpF5uW4sYTlEel54WseSEdnSITNp9LPPgGSLbN+Z7PozTRD37wg9ZnKCB/hnfQPcgmfcvjo7Ull0RACR1/zX/yPXhLxpBeMp3cfN0v39w5zZ49u/bbb7+G+rMNcoUKQ1HTT+Ux6XyiY4wd1uE56LL32QL9YCP8HwTU9+vWrWuy5UP4BO/1bpMAACAASURBVDpF3nSU/5dx/fznP19VVZdeemlVDQh5LutUJx3OC4mh/urN5WsyInxoXlhqZQJ70152mxcfsxFxE2/yqN/sP9uU8eEHyMTvsg73339/i4WZeWN/5MZ3ibX0ILPU4gXUXbl8m0w+3oljeey88siUv/A9nvND2uOzmC+++at8PCMr7SQDfo3P9T2eZ6bK9/pDVzZs2NDiUS71xNs8kjyXbPOJshGWFeOJFSx8ou/ZdC7jVL+/eZS0WIuHZC+zKMbn5erqVU4uSWdX9E/G0CoQdkYH8ZJOqY89kOnBBx/c9IneqVOb8ZSfUpc2GhdlNlNGTttzLGgc5Hu+RzlWmoitf/Znf9Z0eEfUMzWdOnXq1KlTp06dOnXao2m3yNTMmjWrFi5c2GZ20BmokNmk2SfkAXoKGTPjs2cgj5E1yzYLtm4eMgE1NqOE0qgX6rN169b2fyiDzERuqDYbzf0VZvLehzhABPACMgUJyM3z1hrnfg/le07frNOHOEPwtNv6YGtP81JEyLb3oFTQSkgdxAMv1Y+XPpM1gkJ+//vfb3KF6EB89M3lZdbi5qY96AyEOzfyIihQooS5bjyzWYkial8euQodceThtddeW1UD8oBHZAwVxGOo
DB2hE3ilP2ecccaM9/ApN/zJZD3wwAOtTRCwvLQuN5crU1tzL07uQ4NCyg7kmna2Ry/pcV4epl77pzLLBZHyGUoKTcqj1qFLnqPvEDN2msduyyzSpUSQHc2O57NmzWrytZ4Zb/LoVnJmI9C03PdE/1Ifcw10ZhD9jrd5ge6iRYv6kc67oPF4XJs2bWqyz8wtHw9pzH0muZ+SXPBfFoV+KJ9u0UHl0mUIfV4COWfOnOb/2SH7pnvaIotHd3xPJ8VacY8d8EGy4bkiQBvzIsbPfOYzVTXEp9w/JMOaKwv4arFbeWyFDNg9v8bG+B/2zhbygsy8csAxxHnh5Zve9KbWZ23CO30WM/kBlMfno+RZrgCQucv9TbknThzgY3P/rENifK/8PAjHXgdxkI7wR2RgL4SVL2SQxxmLr2QOyXc1h70bZLBs2bKWGccbcQDlcfj0HU+Mw4xvxA88ynLZsOfpj78yKsZ96pMh5FszntBjvM5jsMlU3/nwyf3cVYMvNyZgJznGEb+0I68GUd+cOXNameRMz2Sx6Am5X3bZZVU1HF1vH16O15THDshEuXwSu7Hn0/jpz//8z6uq6r3vfW9VvawnVuXsiHqmplOnTp06derUqVOnTns07RaZmtmzZ9eiRYsaMoXyQjxrh61Vhq5AT6wRzaNSzZLtAzD7he6agcrgQMYgL2bJk2vU8xSmREfzqFozZqheojTqyGN4IQ/e10Zth3KYBZvdQjvyGGyItVlxon8QMoiDWbR6ZIY8D4WBgEGNIQaQMKdgaQ8kO4+BVM6GDRva2k1IEyQBr6BnUHzZBXWREVnImEDWoNLaiGdkBr2B1lhbSq8gq3lCilM9IBdQGyjkhz70oRn9sN6VTkBz8FLGErIhi5eX1Oo3mdJj/cM/CF3VoBd4kxfCsS22lu9puzaok37l8Y95ep++OE4ScoX3eCHbRnaOsNQ37YF4QZD1mX7RhTw5jK7wBfgASYZs0xG+yvMQL7LWjrlz5za/kftvIFmQLXWwQdksbaYfiUbrO71kU9DkRIjJhMyh7GeccUZrY6dp2r59e23evHnq5Ds24iShRKYh2XSfzvKxkM5TTjmlqoYYAeGk8ztD4j3Hpib3e6jDaWfazB4h0nmkMT1wwhG0nr1qkwxOHpGM9FnszYyJ530WF9gXu+QLrcLIPTDKpdPqzbX3fHT6A/xgO2TCxsgMX9ja2rVrG7/5SGg4+8MrcSKvDFCmNmuTjIo2s399pXcIWi+rIV4pRz3iSV4G7ZQpz+k7n8w/aafxkXbQMTzXHnwhW/HphhtuqKqB93RsR9c8yBqwPX3gW3OfYl62LB4Y/5CBzykjezbFB75azNYHMskTd/P6kLwwVR/FWdkP8SUvijfuYwfGe+xOnKNzueIhjyRXvzHJs88+22I5vWBjO7vegY0qC4+Mf+iF+JOnxubF8jlmwFuxVFysql3GqZ6p6dSpU6dOnTp16tSp0x5Nu0WmZtu2bfXMM8+02S/0B0JgZgiRNIOTPTAbNps3QzSbg/rK4ChP+blu36zdDBKiYsa6ePHiqYujMisACTDbhXooC9qqHIiDWW2u6YQEmEUrf2dIFR6Y9ZodJ3ku14mb8ZMJ5MC6Ru8pF9qTl2TZK3HuuedW1YDuaJc1rGbpkyimTAQ55ZnwUAYIA57QD4iV02LyskEIgO/xGtIAHcm1oWQIFdJ2+uMzVEY77SvBI2uXIRCZSZGh1B4IGR2RRYOAyGR+4xvfqKpBVp5T7+Q58JAeZeIFxApBUvVNZgYv2Io1vmQCWYUa6iNZQelkOuhvZjTZaF7SmsgXHdFHMmE3ysHzvHcAwsUelSsLBk3kCxKRg9JPXqCpTeTkwrMrr7yyqoaskdPLEtVmM7kvzfdsJnlINniP55Moc9Ugu+9///vtt07TRJZQ/Dxxiw9l53RLRvfqq6+e8X3uW0u/xafyP2IMG5DNZyv0wd8FCxa00yjpjIuCZTpl6cidPYql9h6wG3ZB584777yqGrLmSLzjJ1Ln8h4R8YSO+gthtvcPsTM+zb4lcYUt+F788t7kyVpVQ9xjq/otY6s8foPdb9y4sfUJkq2uvPvNuEId/DlfZrxCH8gGWq+veZl4Zlyh/dqlPZ6nd3liavLAPlm+zgoZerh8+fKqGmK4eOO5PKVTvKUT4hUZ+6wf2nHvvfe2sYA+kU+eAktv6YExnz7oY16ajMf0XMw21vCXvhi/KQcvyVS59F79dMDzfAGZ5x43v7N55XpOf/geuiHu+R6/8JZOTMYA4257otQh05xZ0rvuumtGm9iG38VUfpKeWsGSekje2iYDajxkHLNx48ZdntLZMzWdOnXq1KlTp06dOnXao2m3yNTMmzevDjnkkDb7gtqYdZptQ4HNyiEgZnAQDygP9FR5ZqKQS+i0Eywyk2NWa+YJxX3ooYca6gClgGpAWyAGkASkLeryvjrNimWX7KPwGZJgNquveAcd8Vc9ubcGOuLELLyVjcjbz/OuBSii98ymlasef6GUypfhgSpBFyfPYVc2FAXqAX2D+uujs+chnlB1KKWMB4QBzyfXlVYNMpncF1E16EGeTpUnk+A9dMXnvBEaagptyfsrIBvaob15zw6d026ZAMgKGbInMpk3b14rQ58gUuqW8bBeP0/HgiSRKxthU5dccklVDXqhHntoZKPIkC3jifbRd+XLHKlHJsY6XWuOcx8KBA+SRafwVjshZZA56CIe5/09qTP4ccwxx7S+qOPGG2+sqgGpIh9yxCO2zzbUBRHTFxlGmetEaPkgfcnMH56uWLFiak9Ep4FGo1HNnTu3ZczYOz7TycyI/fVf/3VVDXKhk3RKObfccktVDVmOvOuMLdhPoD62yt94f/bs2e2UMj6Kb7H/h455h67xRbJS2uo5fYTKsxsx2Pu+10Yx2V8+CjoMydZnPlMsEHf4HfFSf+zfECPELZTZzLyHhP9jM2zXygn+kA1t3ry5+R5lsyH7T/nj9Fl4pW6+lX/PvTa5IoC9y4S4zyNP4dNX9TmhUVyFpOf+o9tvv72qhvGSMU7eDSPrxXcbo/B3fDHeyhLKSNqLY1WHMYB+v+1tb6s777yzqgYdJx+2RK/ZmLaKtWI3XsgCkK+2s7m8FwbJGGoHXvDJZKftsrh8Ot7wIekLtF8cy5Pp+Hr6n9/rl0woYre33XZbVQ26Jl7Nnj27+Rd707WJvrJ5PCNX43Hyzb1gCM/4HFlTMsr7cJBy+IJVq1Y1Oe6IeqamU6dOnTp16tSpU6dOezS9aqZmNBpdXlVnVNWT4/H4ba9899+r6syqerGqHqqqS8fj8bOv/Pbxqvr1qnqpqn5vPB7f/mp1vPTSS20GWzV9qpNZqVm02aVZrNm0mSKEAPqbpwyZcUK8oMYQEDNP9UOvzBiXLFkydSeJumVioChmo1AQM+pcT6suyJf1uWby0PXJfT1VA/JgFmuGbl0+BMO6a+tWZSu0J9Fd7YKe5Fn7eaqVtaV4DL2CeEMKc68G5AuyAnE78cQTGxpOH6B3TuSBMKkT8pQn+OQJJHnfUZ624Xm8gcLkbcbQHbL
wPl7kLb4QEMg4meIhJCPRSJT7j9RnzxCZ2iNh7bOTXKCaZLPPPvs0G4Cu4DmeuAsIMqZN9ITe0n96Ro8hV/QHekiG7EX57IRsIa94To987z0IFZnLAOUeANkKOpO2DU2FmtuzQ9Z8CruCUsq2QJkguStXrmzyhXDpGwQMwgoVf9e73lVVg3+jr2yJfumb8vhFNqoN9EXb9ZU98IebN2/eYzM1/xlxavv27fXCCy80mee6eTYjNiB2SHfpKh2178Ueq1xrbk8O/0AnIen2MliJwIZmz57ddOeCCy6oqgEBzhPZfE9H6RadsmfGXSKyA+xQn+mStufdFnnfGnujwzI1kOU8yZL/4ac8L+brD2TfZ89P6nrVIDv8YCtiDqRdfFKe96uGuMAfe8ZKAT7n5ptvrqrBB+beAT5JLGeLMhriEl/IfyhP/fSFj6VP4o8TxLSTLOizeEWvxItcaaAdYoT+8U/0Pe8fQWKD9tEhYxH2tGbNmlY3feMLyYG8tJ0t6GveK4jEL3WJE2zT++zBmEDf/eVz+QakndpDj8UTvCWblIF26yc70W4xw8oXdmlsY6who0RXrrrqqqoa4vWWLVtaLKUXxsVWLMmw4UmeGszX5P1M2WfjbOMUNklPZLGMYT3n87HHHlvf+ta3amf042Rqrqiq98V336yqt43H42OqamVVfbyqajQaHVVVH66qo1955zOj0ajf5tapU6dOnf4j6YrqcapTp06dXtP0qpma8Xj83dFo9Ib47o6Jj3dX1QWv/P/sqvrqeDzeUlWrR6PRqqo6sar+z67q2Lp1az399NMNLTGbth7X7Bka4zZR+zjMbiEmkBGf86Z4M1IzSfUkOgNBMHP13EEHHTS1plPGwZpLKIRMB0Ra2fYoaBMUXl/NXiEDkCuzZOVDusymzfyVixdmz1B761jtQ8Ej6Ip2WA+uHMgdnkME/uqv/mrGZ+Xn+l98gChAlXIt7NKlS5u8oSiQrLwjB6+ggDIrKSNIAdSFnsmYQC3Ikvyh//qMR3mKGbTE7/qatwIj7YCSql//tJ9dkMU555xTVQNq5X39zzsn8q4H68y3bt3aeKguCBFEFKIDIaJ3smKQKNkIGQzvn3XWWVU16D2esyk8Uy89wDvonfKhQxCxvNeCrug7FBQS5j227j3tgfzKnkA7894Lv0Mloa6QO5mqY445ZupkQL/RL22gt/oI3WP7kFm8hX6TCX2gL2zJe9oOnYZeT96TQif3NPrPiFPj8bheeumlhn7yqXSM7uIzO/a8LDQ/RrdkC8k5Tw60L4Z954ll/ALdg8qed9557dSjRNEh3xBfvsdnfoGvkIkUf9izOOEvO6CTdDRvE7d+nh16XhYAqi/DxMfiJTtlz7lHQNzhH8iEXxE38Sr3GLEl9dkTKl5NIvR8Dh/jGXXyXXlCG9+Jx3gndhpvkInP9qH63ntkQrb8jnpy/xFZKodeGjeddNJJVTWMKWT1xRvx0O8yzPwWYgeQdvrNP4k9eReLv5s3b259VBdUn7x9Jl9ZJjaWp7TmqWNipLaQQe7vYRfawbfirXLwSjxUv+9lW/N0PWONPNHQ98YQPstiaBd+0Fuxhp2RMXvmuyazy7kyStt973RVPKLfbNfYlp7fc889M9quzdrmPTGVj+Bf9cnfe+65p/FrR/Sz2FPz0ar6xiv/X1ZVk7f8PPrKd1M0Go1+czQafW80Gn1vV5t+OnXq1KlTp5+Sfuo4BTzp1KlTp067J41yDeAOH3oZAbvVWuWJ7/+/qjqhqs4bj8fj0Wj0v6rq7vF4/OVXfv9CVX1jPB5ft6vyFy1aND7hhBPqfe97efUAhEO2wSzc7NSM7Zvf/GZVDTfh5ilm1uNOzvirhllw7n0wi3bju/KhQrIV8+bNa8i1k0R+67d+q6oGZMBM2LpnKKzZr1ksBEAmA0rrr5k1FA/qYZYMrYM44ZV1spADyIJZtewEtEamCLJlogm5zbP2IQPqVQ/ZQX+gOdZAkoX2IAMG/dywYUPjiWe1GUoGBcnTW6AUUBZ1W0/qe/qQe3AgTZAk2TJ9hyJCnvCIzCB1kDP1yBjlXT54C+0kc4gclFI7oRR45nuyJUO8Jivt9/ehhx5qfYakQkzVST8gSpAtCCwECOIDxWMrUJ48xQx6mKfE4CFb9pw+JEJD9jvbr4Tn+pe3fPN/fA20MG8OZ3eeh0riF32HFPJRmzZtmtpzAUnFM+9CbMmRnihTRkdf6Qv9okdQaPXpkzbhNeSMnW3cuLH+4A/+oFatWjXzyJ89hP6j49Rhhx02/vjHPz51miEd5vvpGjmyZ/GG3fLhvk9kn7zzJCO2l9luNsvPPfzww83e6L0MhkyJk/jYKZ3SJ3FLHFO3FQnsQebWunvxw3O+Z8fs6eyzz66qwY7wgh07rYkvVj+e8n3s1/4TfkGMSN+b7eBvyCT3n5C5scOkDSnLd3yWuCDDIm7t7FQzCHdmpZRj1Qe7tfJA3zwnAysO4A3eIX1Kf8MvaS//sjN/onyUq0Nk24yn9Jtf5PdOOeWUqhqyE8Zl3/3ud5sc9JncxQtlGvuJfbk6QyZGefSXrdFj5cp0ytx5L+87yqwCXdBO7ck9i8Y77MP7eKJfeGGfFp2hI2TCJ9lPyU60V6aKvRv3LVu2rJX1qU99qqqGk0vpmzrETjb70Y9+tKoGfaK//BxeKJ89iGsyzXwBGfFV6jH2OOuss+oTn/hErV69eodx6ic+0nk0Gv2/9fLGzJPHw8zosar6xYnHDnnlu06dOnXq1Ok/lXqc6tSpU6fXDv1Ek5rRaPS+qvpYVb1rPB6/MPHT16rqqtFo9GdVdXBVvbmq/uFVGzFnTi1evLghFRBws0rIg9msG0nNSqGyZulmpcpLJMw+EqhMnsxllpwnTnznO9+pqpdnuWalv/Zrv1ZVA5IFpTGrzdMq9Mm6QqfKaDMUB3oiAwKlMcOGqpjVWuco22VmD8WRUTF79tzXv/71qhpm9LIQnoPW+F25UEJ/oTNk4vm8WwOCl6dpmbWT9dVXXz3VVnIhV7yBBOQJJ56HYFuLri6IFN7KCEG47K3xu1OKlKNcfcj9ULJldELf6VvyCmlHokr2Q3lP/Z5PJAQyR+dkS3x/3HHHtTq0GUJF/+hb3kSsDdCUXLtLj7Qx1/LjrXq0g75AsOgf9I4N0yc8ZtNQI8iYU2zoAtlPnl5UNdgT3YHwej5P1nO6EiTa92QsG7ho0aKGbGkTPZURtNZeX3J9NYQ0b3jOe2dS3/Upb2/3nj09eCTD8/NCP+s49cILL9Q//uM/Nltx6hO+sz9yy5UDmYmFktJFqCz9IA+6zg+JJfSAjZIn3Xz44YebbpH9dde9nIxib7LBeReOPtBFPo79spNEtn2235UuitGyWXmPiLYrR6ZEnMRL81K8zAwv3oq7uW+W71Mfm3KqG1nmXRuel5XQ/8l7TMgBb/0mu8On5GmXfJ0+8Kl8jzbIJuAdv5/7/fgVGTtjDHtkMuued9LxtfSLDhiX0XO6or/q5ef0y94wvlH/vKefsmzGQtqzcuXKqfvxyEdspRfKJEdt1F
env1577bVVNWSpMwOSp6bx83mfjXglm2BMwUeIm9qbd1PlGJMvxkPtyfsK2SHyu2xY7vul/8ZOeY/geDxutsEWxOg8RZa88NZftu0UVvuGjKdku/iSXCViZZW9NWwcj8n44YcfbjFxR/TjHOn8lap6d1UtHo1Gj1bVJ+rlU2TmV9U3XzHcu8fj8WXj8fiB0Wh0TVWtqKptVfXfxuPxS69WR6dOnTp16vSTUo9TnTp16tTpxzn97CM7+PoLu3j+U1X1qX9PIxYuXFgnnnji1ElbeboVZAI6YyYICYVoeA+6AzGBkEA2oFZm72bn1hn7bAbrJKcf/ehHbfZpv4UZNYRKpgYqAR1Rphm8bIO2qctMG2oCAYDe5RrS888/fwYvlAMJwENIg88Q9by1HCKinkTSvJ/37agXsgKJ8zvEgqytISUzezKOP/74xhPf4SGeqluGLc/1h5BBGPIMeMg4RAlv6Ys2QTM9r295HwUkDurpr/fyTHy/66fnoEzaQ09lFiEZZAAJYxcQNiiN00egQDJfELqqweaUCSWD+lqDjye5Rt8eE2giVIbe6Jv38QK6J9sAicoz7dkDnuQpgGQtmwv94TO0VzvwJn+XGfI9JE87tFe/kHJ9z46ee+65ZuP0Lm/c5jsgWNBGeqwvmdmjt3wJPYKkQs7YXt5voz3sZuPGjXvsPTX/GXFq9uzZtWjRoma/Mi10kl+AmMus4LPvyRtyDgGHeCbRA3LPbAH/AK2dzObzZXxV3luhzRBvvoKv5GvoM3+t7+IBnyOeXHPNNVU1xJ+8OwkKi8RcvORP7Fll32xDO8SEPOGU/4HuylqQgZiSt6bzX07P0m8ylLWYvPtOX/BUG/WF78AL9ihWs0fIdZ5i5nc88b0YymfjsbFC3uPmd+g8xN2eLHqNB3w+HuE1WcuOa7fv+aHc98KH82diDJ9Jj5VDF9/61rc2nmizbJX9sjIq9Jmvo8f6zifKANJX+oA39IUsxYXcD4TUq830TBzJOxXFG1lY7SRrvMIjYwtjXvpIJqkzZOk540Sf8ZFuHH744e1dY0GxL7OcbA+PxB+0fPnyqhr0k18zjpKR+ZVf+ZWqGniJ98Zz6jXWFtuPO+64qfuGJulncfpZp06dOnXq1KlTp06dOv2X0U98UMDPmkajUUMqoSwQgjw5wqzT7BbKYpbrOTM8iHQiWWak0F5/zY4h5tYJQ2m2b9/eZvxmztCGRJzMuM2QzUrNrKEqZs7aoM9m9tquPAiU2bDyIVbKy3Xe0F13H0AKlAvpgrbgnfZAzKCHeAPBkKEiAzKEWCgPf/RfPyeRMG1WFoQTCgKth6ZAQzxPL+iB8vKseb/bFwIFwAs8o29OydO3RLchUVCQPD1EOda/QibyjHntkP3zHBRKP/BDtkA5+AKJg5TQ/29/+9st+4PwiH6xRfLVdvqhDXhHnyBR0Evv523ZeKzteReDz9A7yJT32BFUSYbRe3iY9yRltgxvreXXXvvo6Ou5555bVQPqmicD0Q3PP/jgg8028d2zmalBEFzrn/3OVnI/HYII0w/oHp55HqqIdz5v2bJl6kbsTgPNmzevDj300HZKE39jjyfiT/gnus52IOB8Pd1nG5m1Vg85sS16YQUBnZcNmTdv3tQ9S/YG0FN1yoyKZ3wGNJ7O0k0+DqpPl7SN7iXS/e53v3tG38VNfoM9WQHBR3ufL8SLL33pS1U1xA37nNix1Rv2jkL4yYbNiGv+Kk//xYD0Fxs2bGi8ZZd8Ff/Pp8om8FHK5Eu1ZWcZ/D/6oz+qqsGXkQm/4j3jmkT76YcMoT0TyscrvhdynvGQDtBn7RADxK+LL754Rnvch6W8jF/4RubGBD/84Q9n7FGsGrIGOTajP3hBL/3uL/22EkGd2kb/9FG9eMUXy9zLRuQeMDzDA/Zl7JAn+4pPfAYZpj3kfhT8Mb5i1+wKjxFZiDEbN25scUIb8Io8kX16fAueyVjKPNOntG16z15kdvgKYwbkOW1ds2bNVKZsknqmplOnTp06derUqVOnTns07RaZmrlz59bSpUvbTbeQAEiFtaFOVbC2FBqTp/1AncyS/TX7NYtNJFu96jOjhLBP7kGAAFg3DR0zY4cUmOGbxdrXAGkyG4bGaauZvFmwWXGejpY3gCfKB5GDmuS9Mma8eALN0fdED+3f8BlCANE//fTTZ7QXYr0zxH1yPX/VIMMf/ehHbc2mk92gY/pMHyBleEYGeA85zduxyRWyrfxcP6tc+qFP1uPqG72D5kBZIHTIuluIG1loH12SDaPfUPTsN30lC2gO3moPVIvd7LXXXk2e0BRyw7NTTz11RpnWtdJTvMh9T/RW3fQ7MypQIL9DUnPdt75Y+4/neIxnZKh8MiIbKBMdktWCLmm/crWP/cq60Pc86Y7uQYJnzZrVfsMjbdUH6DY54i3bo3cQUQhwrsmXTc0TB/XFCT8QVf7P+/fdd98ub2p+rdOWLVvqoYceanZI1yDlUF2xAZrMv1h77hRFKG3egSRbkJlXfoT8oML0KPcwKKdqsIfMRNJJqyTowle/+tWqGpBle2jYpzjGPsUjmU06aP+etuWpmOwOr/BSH6C9fDBkXYZZ+6DAkHM+k60lgs8fZHtkgmSgZDW8p38yTEceeWR7l5z5rrz7Cw9yzyXiM/GY/Nk5XqqPv1cO/6APfJeTWfFERojv9r6TI40R+CW/K08s0A730Hzyk5+sqsF/yQYg9SIxhuzppCwf3zp79uw27tEWe7ZkPdken2l8pQ0+e449yLhkhkbbjBHYqHtrjFfoF79J1ngn64Wn7Ezmhk2Tpbjhd5/Zl/iU+oyM25Qniyx+GV+dccYZVTVklPbee++pbFLqs74aj/Fz+shP0TOxUp/5mjzVjIyMu7SdbJXDP95xxx1NV3ZEPVPTqVOnTp06derUqVOnPZp2i0zNpk2basWKFVM3oZvtmp2aHZtNQ3nMEHNfiQyMWZ3foU6QLOiVWfwtt9xSVQOyiaDEjz32WEP7IT0QJWU49cusF0oH+fEZipqIF9TerFVGSJuhumbsEC4Ik5MlIFxOCYEemfVCKiBjma3IE3zIiGzM5rXja1/72oz2+R1qTFbKVw6UAH82b97cUDeZOXJDeRKNMtUNuZIdgqJA2SFKeJxn2suUkCkZ0UPtgaRByJzYA2WhI9pJhhALaA3kgj6SCWQs92awLtg8kQAAIABJREFUEygW9Iks8AMiTzdkW44++uimt56hH2xIG5QJ2SFHfT/zzDOrakBw8ZDctTlvQqb/+qIeekYv9NH37ATvoEnQUBkhvsNzUEA6cuWVV1bVgHBpR+5zsiZff6CI9rpBG/MOhlmzZjU9ev/7319VVTfccENVDbanbcr0fJ5YQ9/pbWY3IbtQQXpN7/GeP4T0+/7oo49uOt5pmubMmVOvf/3rG/ItBtA5GVfoLt9GN+lSxgzy5JPFN/7IvRF8MbmSN51lq5DUQw45pNkNXSJ7OpL7fPgG9kBHvccXITGRz0LsVvzxHJ9Il+k2e7RHhj9hG+zRHhe85Q/8lbXiX6yuyLuB7C2i+3mfD
v/lef2AVuPf7NmzGwKdGTd/+VJZBBlzKDr75FtyNYW+5SoPY4Tcu8An54mmdECmT3y0TzD3rELiIe1+z3q/8IWXDxnkA/PUNP0W7+gpvtlnBbGn35N3vJCHtskO5PgqV4CIgT5ntij3evKlxi3ilt+1kR7ro/rzpFQywXvPsUvtE6/EGfqtf/RSfKS/4gxdyZU/dI0uit/GVpO6oY2yUZ///OdnlCWzl/cF4o3xBd753fvalnvL+Bg+RB/ZJPuhf6eddlrbD7Yj6pmaTp06derUqVOnTp067dG0W2RqxuNxbdu2rc3C84ZmyFXumcm7L6BL0MacFUPI84xr5ZuJQl8hHmaaZrfHHntsm+FCKyDOZsZQC3XrmwwO5AAKJ5MCFTHLvfnmm6uq6uyzz57BGzN3SIBZrNk2Xnlee/VNX+ypkL1AypOtMsvGc7Nys271ahfkAQoFfTKLt1/G91Ac769cubJlbSCfeeqTGT0eK8PJbtB8M34y0XZIqzrJDjpn/0WeyIMnkDO8gQpCZKEj0BxIHn2CmtBbqI12QGpzz0+ipX6nr/qb+yPoIPTpmWeeqQ984ANVNWQw8JL8oWZsBk+UDWG96qqrqmpAliC+ZKYvUOk8PUb5bN6Nz/SLTZONeiBadEQf8RwSTQ/pKbsgSzwjQ7xUDt+hP9pDZnlvAL/wb//2b/XhD394Rtm5x4pcZWpkJhPhVzY0mn7rc6KQfIk200e+B+/5rqeeemqXa5Vf67Rp06Z64IEHWsYNQvnZz362qgabOO+886pq8A/kx87p8CWXXFJVwwmWdI0c6Sz0lo7JwvPF9ILf8Ptzzz1XV199dVUNmRL2ygfKYueeMnogK8XnynjQQVkD8YUd2RcCtbdXT7wSs/NkUnFOPdqhPn0Uq/EAz2Sa9UMWK9fl8wvij/aoh33zlXmaG7+4evXq9gwfpu3KVjf5+8wX8i15Fxfe+J69514sPo0f8Jxy7f+RobGCBc+8n7HYfkQyEP9kq/IeHbLFY3wxxmAf+qMd9v3ij1NB6eqLL77Y6lIG+cjc6XNmO73neRkNWU/vkbs2i830GW/w0jjOOCf3N9n/RD/F6Lz3jH7qB9kbq7AbPoSPyBP0xE2+RX9kP2Q+jfPyFMA5c+a0Pe0yaMYZTnv0uzbm/jq8pP/8V/JGW40J9MWYV7ziU8iMD3vqqaeaDe2IeqamU6dOnTp16tSpU6dOezSNdod7CX7hF35hfMIJJzQE02wYQu4GUigNtATSDnUyyzbrhTSYDVtrbIYI2bBmFbJhlmxWDEmADs+fP78hO2ajZtLaDLUxuzSDNutEUDgIuHLyFBeoDJQDmmImDwGAuEFfkL5AkPEyT1IyO/a7fkJr8qQS7cNjvLJeX398n3sd/G42D71atGhRQ5a0gdwg0NAYfUsUTlvznhh6kZk+bYEgqR+SAJXEQ7J3Eg/9hVxAJPzNu3zwXNaCjkDiIK7aRwfIWLusSYaUQBXJMD/j8RNPPNEQI/c7QNXwxl916ivbwTOyYZsyKBAytqPuRHdQnoaHV9Ah9ekTniXqaV8K5DYRLjyD5GkHuyPbvIMCv8hav6Cq9Fp7V61aVe95z3uqashe8VuQSG0iZ77De/wZP6Xv9AZBD6FvkNO8a0j9ZEAvRqNR/emf/mmtWbNmVJ2m6M1vfvP405/+9NT+DHsT+Hpy5Osyo0JX+QW2Ah0mJ3/pYp4oKJaQs3Kt61+5cmXTLc9AjNmVeAXR1WZ9yT0JfK5y+Up1K5fv4veh7763JyD9AQSWHeojXWev/BH0ly/lv/DWSZNiRp5oKk7yF3gnrpGJdolzylmzZk3zOdrC59h/4V3jFH5czOMr6BUfB22XGfziF79YVYMs+Fw+yWf1yMKzb77YXhl9zDvD+DK8oW/2AF122WVVNegMfc8TvvIusbw/0NjF9/xhZm7mzJkztc9I1gC6n9lKfcjsqCyTFQlI3WI+m5SNyMwemeu7LDt7Qd5L+2HL4oi4KfYbAxuP5eoMsuHLZbDonHZaEUNHPGcMNJn1Y3PaRA/ZhLL5iLS5k08+uaqG2O15MZIe0EMy0xf6cuutt87gARmw2ccff7xuvPHGeuqpp3YYp3qmplOnTp06derUqVOnTns07RZ7ahYsWFDHHHNMQyom79GoGtbb+h06ZG+CWbaZIBQWcgm5SOTcbBZiYYYJ4TfrhqRM3gAOLVWXtuSZ3JAxa5fNNs1uIWLQHAiDGba244kMSJ5Ok/sttB0PzLYhTfYleV/f9cP7+qMd0EP98j7ZQADMziEQUCS8Ntv3u+8hLQ8++GBDCqAr0DnoBV5AG532Qq70J+95yZvX8STREkgC/YKuQF+UD3nynt/t26A30B2/56l7ZAodTFQHr5UDbYIU++t3/NDfvPdk69atTX/ppc+XX355VQ0noUCacl133npNFr7H429/+9utzqoBsYVWskkIMJlNtrVq4KnMizX7+uo99yXhnXroM5+C1+rRXnaZ+xzoAn7JIssS01/8OeSQQ1rZ9ISN0TdyhEJ63qlCsmN8TqLubNL3/KDv2S4kmX3kaX6bNm3qe2p2QVu3bq21a9c22eIz/okTUGFy5ZuhujJn4gy7h0xCiaGq/JPYYI+W7AbU2tr366+/vqpe9nfs17vsmL3IKvAZstH0nq9ld+ybHUB3xTdx4C/+4i+qasg26gOfyRdDuD3HJ+IZopd4In7kPhZ2Lr5BeWXryUqcYr9p13wl8jvZ8NmzZs1qdqct4gafkaeg4aH9fDIy7JIM/JWp4z/olefpj3r4NvZtjykkPfcnXnTRRVU1rBqBnMuk6I8+qx8v8JYOsAuy4OtlQWTPZOv0g3/jo8X7o446qpWJ6DVb1Ibc45z6mnss/W7clDGdrxZ/xKmsV4zna8VR9dFH5SjXGEbfydx+SuMoMhAb1Kd8zxt3Gjvx+XyO/Zrshc4+//zzUyfvGm9cd911VTXIUZ3kT9/wUF8yA5MZbTwwduCb3HskM2gMq31HHnlkW3W1I+qZmk6dOnXq1KlTp06dOu3RtFtkal588cVas2bN1O2oiQRALM16fW8GiNwT4sQH6EuiOXkag3LMgpF6oAXr1q1r61UhTZAuCADUDXoy+W7VgNKYuZu9miVDNSC/uaY/T52BbEGilKtdEGn1mPUmIgeF8T002MwfgpV3vEDYoD7Wxeat5pA9vMc/CJz+Pfroow1tzxm7E+Mg3XmGO3k7gQTvIUJ5bwxUBkoDFfG8U9jUg9eeh4SRlWwDHms3XVAftBAPoEzqc0ILXkErPW+tNYQND+mWfVoQXWe7Q5S3b98+tc+M3uW+Jcio5+64446qqjr11FOrakDV6Ic2QsjUDYWRbXWvEduDKOExFAeiDM1xYpTshvaSLb3zOwQr77Ggj2QDafMZkkam0HXl574H5bKzefPmtTrznqU8hSj3YEBg6R0k1Pp/ddBXMvCc93LfG9Q+93otXbq0Pdtpml566aVav359Qxgh3uTLlvIkMD7R/UR8K//FV0M66bb18nwrf8LG/K48euH3bdu2Nd8k262tEFu+kp931xZ7
5UP4YvbhtE52qpy814PfhziLb3wne1ZOfuYTIfN8spjAl4pHslXqvemmm6pq2Gem/LyRPbPqZKLdbJKfMjY58sgjm/2wb8h13jGiT8o2jrGKQuzVFvEh79rhU/NOKfrh1FfxyLhJHyDw/jrhTr30in8ig8xaIzqScY5PzH1JytnVKYxVgy4deOCBTU5WkOTJWsqSDTMOyr3Afsc7f9kO+fOlslvq13af9dV+SXGKLhgHsj+ZEfbmr/LEcHptHEVH8o4y5ZERuzLmoTt0yfd0YXKlQ94npE9iKPnmXVf0SF+Ug1fqykyfPhhzkDcZvfe9762q6YzOs88+257ZEfVMTadOnTp16tSpU6dOnfZo2i0yNXPmzKn999+/oam5NyBPQUOQMAiG2W2iUn43U5y8qbZqGsmH8Oc6SH+POuqoNrNHUFWzVW3zvdkmxEjbIAzWIZoVe96sFroOXb3llluqquriiy+e8dmMHOoC2YBo+B5BXaC72g2hh4BDebUTIuCvU0jw1nMQe0SmEAkyxyftWLJkSUMKvHPiiSdWVdWXv/zlGbzyDrTCulN/IVh4sbO9OtAV6IusRJajHncq7AyhhS5CIpSPp8qj79AavIM6QiXxUnvuvPPOqhr0GEJCj7VTf6Gw1gO/5S1vmVqrDoGEBiLyIV8oXt6ZkvdXOAFF9kh2wbpqbZNdJUuotaxV7nOSKfF+7jOhM2Sj/Z6D0KU+Qou0n517T//IABrKjqGMUNn77ruvIbb0AQ/oPqQTD+kduWuTOiHBbBcSDH0nf+ge/6d8+/nwAG9WrVrVfG2naRqNRjV//vwWZ/gTqCGEW6aF7jq9jn1Cpv0uC8jfnXPOOVU1oMvkBp22Hyb3OWoPG1m6dGnz+3lKk9gGSRaX6Fietpn3TWkT3RQbobyZeWFf7AiiTYdleibbXjX4GXaWmX79Ewugv2TgPeXiAzvl53zWT/GWTLRH/+wHeeKJJ5qvEJvz5nSZirxhPWWS2QPyF6fYKX1SvkxePgfF57O1i56QkXpkivMuvfS1eA45z7039jEh2QY+nc7QY/WKCdoprq1evbrFSDbkMx6KdfSYz8VrMjL+odeyCeTNLtgyfZONyDjHdv3Of+beNLEaT/E8s7XGEOxCrMhy8drYVfs9pzz10AWyoHP488Y3vrHZIHnb1420Nfeo591ubJvepzy1Ec9kn/LOKCtQ6JOxrRUsO6OeqenUqVOnTp06derUqdMeTbtFpmbz5s21atWqqbPgzeCgKWaVZopmoZ4zU4S0+97sN0+OMKOEAkM4vvOd71TVgM54DzLy+te/viEFyoaWmVlDMSA7Zqdmo2aveV45JALypK8Qo3PPPbeqhlmv7BXeQaqhKWb8Zt+eg+5CvNRrdq29kDlobyJcZAC9hCA4zSZn9ZAQ5UCTIWrqX7BgQUPp8p4NJ4tACMhvZyfSQb4yo4cHeX8EXkEz8mx6CAOEDK88DwFzepX9V3Ql13Pj5Y033lhVAwoI2ZLlUL5200vlQZ/oMR2RdVOfz/PmzWuoHX3DAzYoUwhhksmBEOGF7+mzcsgQwkQG9B9y5mQVWS68oj/65jknPOERvWJHdAW6SO9lhOgtZBfRhUSe7R3SXghe3lxOJpMnK+ozXtlHZG8YnYeaZwbHnqjcd0aP+TH7HNiUPRx8Ru7R8hcytnjx4qn9hJ1m0ng8bjrObukIOYsbdPCuu+6qqgHlhQLTJbaVN8lDSvkZcodK02H2Tx8m7xDjv9mFbG3uL6W37El885mP5VNlzdldxhE+my/ls/KEMP5nZ+grH8Zu1Zt72fTPc3kiHR6Jj9qVeyzw2L06bBYfxAAyO+yww1obtFUZfBekGVpPJuQrhufeBJ+Vo4853snTEfkDvluGOE/rk20gC7IRP3J/igzvGWecMaNfeGyfpTjFZ/PleScZndAOz4tF/N1oNGoZfH7Zni+xkh7JSskC5d5IekMG5O69PKkyT/qy8sAYQ1vZB3vSTvUY/9AV9qJc9bFp9icmaFfqmuxKjj2Uy5cYX33kIx+ZUf/kaaJitEwd/RMTfPa7MZwxL32j12SQqyDImQ0h7/Nv+kjPjFv+9V//dZcrCnqmplOnTp06derUqVOnTns0jcxk/ytpv/32G5988sltdmq267N1iRAxSCbEy8wu74oxI4RwQE8hFtZf5rpjiILZLnQGArP33nu337xj5guVyJOw1GEWm7fO57paM1GzWeVABiABZsFZb579bRYMhUnCI8hb9guCbuZv9gwhgS5BMCBdZvl4nu0gAygPpGH//fdviJM2QcOVlUiBNkOUrNW1Bvm0007bYR+1WV8g6dqmXjKwpwGSoO+53wi6R0bWDFsbSs8gYeqDcJAphCNP6KEzELa0H/2ht8qZROq0FQKER05NgvpDstSZyCV901eIE33GYzySDYX6yVZYC61PysFr5UOaydB6cu0jk1yzn+vU+RZ2yO7wPk8+5HPo0M5Oypvci8NX5N0CieaxVTbM9qDYfLXftZ3cyTWzaFA+v7MLfhMvFy9eXJ/+9Kfr0Ucf3eFNza912n///cennnpqQ1XpNB2RGcNv2YC8N4qt0SU+9UMf+lBVVV1zzTVVNfhYexbEDvqTcYofnDwFjR1pEzvmI/gy79ANZYq57J2Oil90iG9hX5lV4Pv8zk4SsYaka58sgAxM3uUjcyv2sxX2zxbYisytfuCH9ssa4HXeMaN+ceqZZ55pbeRv7Z1UNt+or+qgB+wX7/Eufa37iPDIWIA+aKvxj3aJuU6TUq8MUd4ZxG+IydonS0YnZFr0mw+l5+r1HF+MH+wm70xhL2S3YcOG5jvzbi36lVkh+kBOed/LzvaO6ise5x5SPMdDNmiMoH6UY1D3tfHhsrhkr58728vmMxmrj73Qe/t4ZemNXWT/6SDZH3300Y3feOdznlSLN8ZfPludQ0b2KpO7e2tknslEHKQvsl5kY6WKU/ruu++++pu/+Zt69tlndxineqamU6dOnTp16tSpU6dOezTtFguox+Nxbdu2rc16oShmn9BZM0AImBm+v1AY6FGuF7aeOJF+WQgzSd9nBkh2Yc2aNW1mDdmdPEWiakC+zOChG9YjQuX1zbpZs1LIEPQC0pQzbChNZlLMgiEKEIu8hRzPfXZCiz08+gUpsGYa+gJ5g5hBkyDjsi2JyOUdL2SHT3Pnzm0zfDyGztADfXaiB16TEyQK8qMvCLIk86dvkCd/9dHv0BXoHlloHwRW38hIu/CSHkLiIHxkjqCPyqHv9Jq+0in9xR/ok99loh577LEmb3pHf+gffaVP6oa2kQ09IxvlQWvsQfGedeD04qMf/WhVDQgVtC/vXYIiQqLIJDOiOzt9z9psvCMLslGfPQHQIwiZv9Zoe+7888+vqkGG2rlmzZrGCzZOLsrSV3JLBJOvIec8tY9PoHf0kT0gstUOsuNbHn/88cbnTtM0b968OvTQQ5uOywAnasyvsG++0Jp1OsvuocfkYp+APRhQWQgmm8l9jr6f1I88cY3+T+67qRpiap5+5nd7GGRuZavwQuzk/+kyRJpd8EW
eVw9dFnfOOuusqhrW0TvZj13ru3iCh4gP5fPFFTySGcY7Yw42KfOa9/ogMl28eHGLP9rE7vCI/DyHp1B/dbJHf+lDxqUcH/FReCsjQz/V87nPfa6qBp5D+fn6jM3eyxO/ZKnTX/EviC6RtTFRZpa0n2xy/LZp06bGs5Rj7g3Ju5yMK8Qn+mScJhNDZpmVYEvel/3SRnbE1sUT7aSHeEZv2L44yP7s2SEjPGC/MohkNXknVdUQd9kl/cdrcTJXJNx5552tLPqY+mD8IB7deuutVTVkVOiHctg8fbW3C8+Vz2/iqdUbsrPK43u2bdvW4v+OqGdqOnXq1KlTp06dOnXqtEfTbpGp2b59e73wwgttxmbGZ0ZoppcINaQAemP2met7zb6hLxBN9UBp/W52bU2z2bVMz3XXXddmlRAEM3qojJl97nmBeH/lK1+pqmGtZt4KboYNaTBbNuN20y3EKbNWMjKQCzN3qIosghtl8c6M3ywaD6E2EAiIhPWREH2IBT7oF0QMiumzdvmerBcuXDh1CzDa2Vnw2gRV1HdIUZ4WBI2BhuRJWPoGKYDm5Okx1qVDVyBPEFf1WoeeSLm+Q1HdTyIbgOgE3fO8v4kuum9JfZA77Tv99NObjUHR8l6l5AUesz28oN+Qfu/rm+/xyPps+o7+5E/+pKqGNff0HwpIBuwIr/CYTmRGRbvxgN3lqX/0nv3a/0QHZfXYo37miXrnnXdeVb2MvE2ehFY12B69Ubbn8JKt0VP+kY1BJ8lZ3bnHLLOmsmVkYq3yCSec0E8/2wVZUcDHInZMN8mLL+PjZDGgvb4Xn2Ri6BIbY5N00x4bnydvXp8sZ968ec1u+fXMaNAlpG+QY7FV3FFO3rEi2wi1zdOo9BXlnSy5bxaizg7xkm2IWzfffHNVDf4EAo6HbEZM11/xTL15hwa/xLeKJcqbvBvIPgUrA9i5vZ18UiLSsliZmcE7dpz3y/AT9IhPVX7e9SMe8NV8Yq76sCdCLMi9y5ByPlf2gP7liXZkx8/oN33ma/LETFkN7T/iiCOm9iEpEw/oI7nwz+TNvxsL0Ne894ZNkRmfinIlDJKVoHf6IDuhfnqqXO3kk+kOuzDWtWpFu/GKbNiFfuTYRgb0oosumlE+ma1YsaLZDttXNhsmJ3e/iaF4qo3uiNJXPiT3tebYgN/yPtv0HhtctmxZ49OOqGdqOnXq1KlTp06dOnXqtEfTbnH62f777z8++eSTp+5Qef/7319VA8JtlmxWa7YMxdEXM0hrkiFonjPjhJD4bPZnlmsWa/bt+eeff76hE9bUm0Vqg1lmoib6aG2j01TMjvOEHGsk1eP9PHUKCqPePPVM5gYqaLYMSZBp8T6SgYLKQNK0H68hfGQEQYEOaz/kEIKAL3gMcXjDG94wdRO6tuFNooyQb2uXyQtv9R1ChteJlGtb3uTtPTKgJ3nyiWwYJAKvZR0gYZA49UI5E3GH+PlMZvbaQG3wPk+/SuQQsvjGN76xoW4yI/ouA8E29BHP8n4iaIw+kxX9JXeoC72EDmUfZFb0IW/lhuZAnNgRhBev8Uw/yVA/ZQwhZurVX2uc9SdvoiZzds8vQHrnz5/f/s9mPcOG8sRDCKa2s2280kZthzrnfiT6BL0kU2ujtQsv3vKWt9Tv/u7v1sqVK/vpZzugQw45ZPz7v//7TV74DNGcPB2zakBvyQvf6TJfys8ojy5m5le5dBzKrH7rzPnNydPP6BIklz36XubD3Vp0VVsyEyu7564l9sa+oPJ5rxU7F7v5kVyhkHe05ElwuZfAPhLthC67+0u/lYuH+vPhD3+4qobTqdgOW/VXrCDTpUuXtniTt9VDnhE7NZ7hO2UhMvbmvXn2U+gz/aEXeCL25n1Wed8Nv6Ld9BOv8BBPofx8qTiEh2SjHj5ZjE/9FvcSqddev7/zne9sepF3xuX+DTbDf9N/eqiPfCdbwWttoG/8unEWnubJp+yITNgBPTH+8T6fq3zjxNzTk/u0kPGddmkvXhvP5f2EuY9yckWCGJ4rAshLGfogc6cPMnRiuCynvTH2D+XpreyFTebeYP6ST/nYxz5Wy5cvrwceeKCfftapU6dOnTp16tSpU6efP9otFlBv2bKlHnnkkYb2QHet3TPbNPODUJvB5W3kEAYzQDNIMz2zeagLZMAM1OzVTNHs2yz+da97XZspQ6ate7VHIU85g1Dri3WIKPfcmKGbueedBMj3uZ8oT3XRF6iMNaN4CQU2K892eR/aj6cyONaQQiZ8hmRAk5ybblaOpxAY5d51110to+BZfdXW3IMCrYEU5bpUiAIEItdZ4wmkCtErsshsXGYR8BT6CAXSDjK87bbbqmpASSH3CJJhrbN63AGkPPpPbyFw2p2nsk2eqKJt0EYIlAxbImCySPQcMsQOtAlyShZ4jYdsHVKr7/QVj7yf9dMB6Gfu2YKM4QFdgnqSle/xlp2qx3PaQUfwVH/5CHo/eXoUnkLAEpnkE6DIEDOIFv/EtjIDSR9znTa0MO8X4PfYFR7+3d/93dS+sk4Dbd68uVasWNFsht2xc36EHUKR8Z+P89keGysS6KL7Qezn4rPZSuoFpJ+vlnW9//77m6y1JTP++uJ7dqetfBxUlm7y43TP83SUvdNx2SY66XsxVzl4wI7ZI59M53P/CPvDK3EHr40ptFNsYM9OtNQ+8dz72ku29rHdcsstUxnWPNWJnoidSexVH9PX2ZuZp+jpc2YI+TyxlH4ac8gqXHnllVU1fe+RfYe5t5i+kZFyjatkbvDY8/ojU0PnyE67jZXy7qAnn3xyKrYpk83RF+R3e0mMX2TJ7Cviv3NMoB5/83494zPlivmyEWSJF3m/GpmLe+KojJR4k3eNITL0fJ4QlifG0gXf58l1S5YsaXpLT+k+myMPddNHcuZTPG/v6Omnnz7jeX6NnvIpOV679tprq2qwNTLae++9W7t3RD1T06lTp06dOnXq1KlTpz2adotMzcKFC+u4445rMznIgbWhZoDWD0K2oDLeM/O3tyLXBub9DpAVyLXZca4t9Zx2HHbYYe2kk7w93ozbfh4IArQdcgARMKu1ZlI5Zs1msWbcZsXW+kMcoCl4l3sIzOStgYbGQF2se4Sgm9Erz/sXXHBBVQ1oilm15xA0CQIHUcgbeO3Jufzyy6tqmMXvtddeLduVCBR5WBcOQcAzMsj1sOQJ2aIfshGXXXZZVQ1ojraSGR4gyLcMiHZBVyBYyvG7cvRLxgm6CN1BsmEQdTKCVrAD5Tt1BNLi5nMIDLRo/fr1jTd5OtHO7qMhN/LUZt/TZwgUFFmf6AWCBEN/yE57lANxhnixefaHp2w5b16mrzJMMkYyK/qZJ7LQrauuuqqqqs4888wZ9VqDb/04/VX/E0880WzfngeoOL2hX3jhZEM2z7agheQPjcwMAJvXtlxPDvGXFSbDt7/97VN60GmghQsX1jvf+c6W1bzLp79sAAAgAElEQVTiiiuqaog3kHVyoev4S0f5TDpPvmxK3JN9pFtOo6JjKE/N4/sXLFjQfAeZ84EysfoizkD/1clXek
9Gk0/T1rz3wr5BPpvdaZsYzP+LY35nGzIkyUs2JavF1/LleAyhZu95pwZ/oX15n4/+kC27/ta3vlVVL/tkWQS/Td5hUzWMS5SNd9pqXKKt4orxiAwHX+R3vCErfXT6p9iufcYW99xzT1UNPl4cxEs88T7yXu7j1T7P6x/7yD2E/J9sgXbRf/Zk3Pbss89OnQIrHuT9MWwh91qpG4/oNd9IzuIYPSNLMhJH2E/ajZUG4g6Za29mStRvH4r+iQX0Wv/5eGMb3+MpvRfHZOONN3Ofsuzdhg0bWpnkL16Qjz7IwPBfymLr+njhhRdW1bCvjc2RK73NfWq5/8f+PeV//OMfnzqVbpJeNVMzGo0uH41GT45Go/t38NsfjEaj8Wg0WvzK59FoNPqfo9Fo1Wg0+v9Ho9Fxr1Z+p06dOnXq9NNSj1WdOnXq9NqmHydTc0VV/a+q+uLkl6PR6Ber6tSqmoSV319Vb37l3/9TVX/5yt9d0rZt2+rpp59us1br3s3UoChm0bkuFqKV9zNAdSER0CGzcrPXvL/EbBfi6VQP5d9///1T59ybOV5//fUz3oXCQa7Mfs2goXpm6jljh0BAAG655ZaqGlBbSEKeYgdZ0CezX/VCaWVsZHoSycArnyEYPkNXrDXNta3Wmpq1Qy7yJA3rjsn0gAMOaMjXzlAVdUIW1A3dgIx6T98g5pAl62ChhHkzeJ4oB6GCpOVdJt63vtUeGLLXfhmURDnpDvSFnkKfZGb0TzmQXKiUdtM9iOHk6TnkAznSFplAKJu+Q1kgq8qUNSJfeqENTu6h1xA2fyFfkCTl5Alg+gbVwQNIrvaox2c64jmoD55BwBKZhYbyPcqT9fJ93m+hvs2bNzd+6xP0ENor+4pX0MA8UTHvNcosW96VpU/u15KVQtrOzn74wx82u9xD6Yr6D4xVGzdurL/927+d2tOUJy7xC2wI6i/zzB+xkckTH6sGn+xmbbYJgbcKgM3QHzrNprZs2dKyAeQKmaUz4gsfI5aqy/N0Uuy99NJLq2rQd7rIPvI0T23Dq9xjKe6xgWuuuaaqhlUZ7Fk2g13mvTj8B5tjO9BeNmJswE71i+3kXWDiI/7ox9atW6d8jXfIwaoO3+dJkbJgZMXXiSN8lrr5BW0gG3HonHPOmdHnzLLZc0VPtdNne3iMe8QxccfYB6/x3g3zxiR0i36+613vqqpBJnSHPagH7yf3htIvbeT/1Y03yrDawvdWT5CRcZCYzXbpIduix3yk7AJesA9jC7IypqQbZI7EFzzSvi9+8WXXRd/Urx95pxEe5UmKOcbFU+2h/8rbvHlzG9uJ1fqAp+IXfyQ2s2WZvDxxkZ8zZjX+ypUzfFTeU8RmyeDee++durtwkl41UzMej79bVc/s4Kf/UVUfq6rJ0fTZVfXF8ct0d1XtOxqNDnq1Ojp16tSpU6efhnqs6tSpU6fXNv1Ee2pGo9HZVfXYeDy+D9LxCi2rqjUTnx995bvHd1Xe888/X9/73vcaomg2CY0xC05kQQYHYp+3JpuFKs8M0ppNM0azW7NjaLAsi3ogJ9u3b29tMfvM9ex5Vr1Za56Moq3WNluz77PZK0TKCSNm1erxnCxT3gNjNmxmDyWG8kNCIA2QCDyEiORpNXif93NAGLyXe4PIxJpQsrF+c9OmTQ29UCa50xMZCEgBXpjFQ1GUo4/2LNiTA0lSNx7KGHpfGyFsEAtopN8hWXhIduqhb9AkPEz0UD36Q9fyVmuIMJ5qPyTE++pV38EHH9zWpjtpEHKExxAiKLM9WXiP59bLshnlQm+gkH73ns+pF9oskwStwRP2hmcQXjKgjxAxvGO3kGgylHmi33RB//Ha72n3UEuINHRq2bJlrSz7ByCZMnf6ylbYIDkm6qZs/op+sRO8Rp5jy3krN2Tu/vvvb3X8vNDPMlaNRqMZp+7QVZkVyCbUlPzoDn6zFX/pHp/+27/921U1nMhlzxWdZse59y4zsfvuu29DoGWj+Sp2w0fhDX9MJz3Pd+b+IXFNnZBuOsYH0012zr94ni+mo2K+duSdLVBjmVrtwWNyyj0zst98aMapPHWU7bIpsiWTl156qWX8+QLZ37zDSptkSiDSucoi9wfm/R+pT3wo34YXZIZ3fJd68TBPlhOXxFVjkbzbh757j0+kl/rB3/nLV9Nn/WcP2iUTtWnTppYhIbfUE+/iDb30PJRf/Egf6n3l6gN/nnti6K/xnZhv7Gic5j0+ml2wYTLTbv3BUysjZFrwzD1tsijep79I/crNsfTkfvVTTz21qoaxGt+hzfqu7TLG+kZe/B4ei930jizoNT2Sycy7sfTNapElS5Y0ueyI/t2TmtFotKCq/rBeTuf/xDQajX6zqn6zqn7uAmmnTp06dfqvpZ9FrJqMUwYEnTp16tRp96SfJFPzS1X1xqqCfB1SVf84Go1OrKrHquoXJ5495JXvpmg8Hv/vqvrfVVX77rvv+JhjjmkoSJ7mAlWCHJjZQSQmUdGqAemEOMiOQD6UZ8ZpL4UZp5kiVMfsF1rwyCOPNHQGygB19a5ZrvW0ZpvqhHxD1yDGZtb6AFmAxngPKpTov1m27IKZuImjWXKixmbBMjgQjLz91WwdKqN90BUIAx6bpeeN7GQLCYRCQfZe97rXtTK0wVpfCJR9QXhin8P5558/gyeQKG2HeuijuiEH2pYnuPndqVJk7C6TPOs+USWIGZTF+xAMsqBnfk/0lF5bo+x9egrxkGXIu10mT+eRSYGyKMO7kCHoi/XbMoB5+gyealOiPfSTPrC5PEnHAJLdQAvz5nLtYNtkoK/azw75ltx/gugcRBqvIHnqTdn+3u/9XlUNdz+cdtpprf15qiPUGI+1mY3hHfSPr4EA8xX0Fto+eTpj1YAq54lxylUv1Hnx4sXN1n5O6KeOVZNxatmyZeMjjjiiZUbxn89D/A0+0xlxy8le7qdic75X/s7u05LRYVNsly5D1J977rm2zyJ9Fv3Vh8hitbroEGSUPdEpGRixmf7oc+7J1FY+WGzOE7DouHigr3mPmjiU97nxefyKzHHeE6IecTjvAMKvvN9tcs+fkxG1RV/EJz7CPg0oOT/gviLxQYzkB/BIW+kVfyEe0i+ZQzKQ8ePT+GCyoQv66m/ef5T7IpP3slniD97jKb+Ue2PFljw5U0yZO3fulNzFIXV95CMfqapBH7UtTyHj/91blHfPiV/GQzkG9ZwVBN5nc/b66JN9TOwDT9i0eozX3Omi/TfccMOM97XnrLPOqqpB9tqhXHGSvtJnYwc65u/ChQtbW/gC8YK8ckUU/WYb2qjteOYz/cx9qMZP9E05ufpDH5YtW7bLTM2/+56a8Xj8L+PxeMl4PH7DeDx+Q72ctj9uPB4/UVVfq6pfe+VkmXdU1frxeLzLpWedOnXq1KnTz5p6rOrUqVOn1xa9Kiw3Go2+UlXvrqrFo9Ho0ar6xHg8/sJOHv96VX2gqlZV1QtVdemP04iFCxfW8ccf39AbM7k84QTCkHdZnHLKKVU1oDDW3X7wgx+sqgHh9B50SrlmolAkyIJZrFk9ZPXkk
0+eWqMPNcm1i5CmRD9yf4PfE/nCA8iZ56E71m5C3M34IQlIX82qUaL6kDntyFtiza4hW3k6FCQBgoCHUCYyzv0e+gfBf+qpp5o8oRd56zV55clWUBIItLbjkQzH17/+9aoaeJhZqbxFXhut14W06SseQpWUo1z6Cw3CC4gH5DbXRssqkCnkXj1kBHGje3isPv2dvM0476vQR+gJfbGGGPKkbb4nd/pNjlAd5XkPKuR7vGEv7CvvwSFTewTolz6xcfXaN0eGeIFnZOCeJr4kz8FnF5Au64/pP9lA6uj1vffe22zdGnN9hWySm7bSd7YJ5ZYVzQwk9NBfbaUHZEvv+T/IGQTssMMOa23ZE+k/OlatX7++brnllsZndu20Kfcx5Ml89p+RI7SYPPkN8iQPui+Dw+eyDXsDP/vZz1bVcIIkm9iyZUtDOdmPvSvknHsO/GU//H3ejUV32Qs/oTx995y1//wL3403sg/sRvadfebeAfYrO48nYjWe6b/shWy7Ewjx5Utf+lJVDfHVc3iuHbIIdGD9+vUtDmQWipz9rg1kQP7iHN7zncoTXzLr5Xl7siDr4h9eQu9lbPCKvvET6tF+Ps732gmBx1v+hF+SFTEW0Q5jAPVrF17nnSxk/L3vfW9qRQofJ/bnShgxNfdliBtkQp6Qf7LJu6bsH9EXMZk+i1uyWU6pTVnLvuEFmYlb4qv62RG7wiPxycoG/LCahV4bMymXb+Jb2O+KFStaNjfHrnlPkqwpOcvAIbIy7pZxswKFflshxXfILBr7snm2Khu2evXqJs8d0atOasbj8Ude5fc3TPx/XFX/7dXK7NSpU6dOnX6W1GNVp06dOr22abdYQL1p06Z64IEHpm5wzxNazPzNAM3GIRDWsprl+t7+EahTnu9u1pzr/dUPhfXe3Xff3VB1aLyZOBQ/0RSohj6alUJ+IVof+MAHqmrIxEA18h4Y2QUzebNoqAfkIddu5qku1nhChaEySbIHZs1kYlbullkzaLN3s3ME4YJaQRSgyJM3AuOlGb2+6AOkgB7k2kwoiLaeeOKJVTWgK05+0zeonDbjucyQvkGSEKTV+/ogowIRVy7Egd4hvMcTqBB9J1P6qR66lWfh02+6KGMF7ZkzZ87UPiP6ddNNN1XV9N4ra/RlKKB3iXhZv00f8y4NfWVb7Igs6ZO+Ql7xnv6yL4gYVIjdyEThBX3XHjb9vve9r6oGdIoPwh/toKe5X89z7Ax/3vSmNzV0DtoGgdQXqKF3ncwm+yTLindQ7Myy+l19focC6iseKJfebty4cZfn/3d62QdBENmE+zz4NrrHP/HNnufX6FTuvckTLe1zZFOQ0sxQ8zNsYu+9956KaexXbOPT6JA2sA8+N/eZ5ulMuZ+EPUOuZQvZJ58lq+A5vl4WAXmOb7Rnjd/hUz0HdWZj/JAYkHta9Y9teM9n/OEf+MuDDjqoxRe81XcxLjMykGhtRfrMr/O94p04oG5EZsYWkHU8oScpW3qpvtxPp510iM/m65TDb/EndMMYRj14qj798J528kGyLG9729uan1UGXypjIY7wpfQ693F4n8zyfbITM/NUS5kOPNBmGUO+Vp+Vm/tLtCdP81Q+vaQT+kFnjE2Q8pUnE0Ov6S0fZSzCLtauXdvazB+pyzP8WWaEyd/z+pynt/IdxkXaQF/4Btlb4zjvscUFCxbMOIUy6d+9p6ZTp06dOnXq1KlTp06ddica5Rnp/xW0aNGi8Yknnthm32aGTkLRRsgVRCFPc0oEG7IFzbHu0KwXUu6zWbIMj/X3UCWo1ezZs9sM3Uw/UQhoHnTEHhWzYLNYt/BCFrTBZwiz2SwkInmTp0XpE6Qa73JfCgRCu6FN1smSCXRQtgyiDmHQzjzZxe/QyVyPDq0hc7I64ogjmjxlsSABylIGNCXLUnfeA+Jv3p2Cp3naGMQqCZoDEYeuQEnUA0EjG7KERFgbr71kBGXBM0g7HpMhWdgXkqgNFJTuqXf9+vVTJ65pK8RJHRBZPJKx87625ukvEDG81gblqi8RUpkQNuo55eKJDE6eqINXUHS+AhqUe77UT8+h3/rnfbqQJ46pX/9kcv/pn/6p6ZG6IZ+yrWxNXQgSSr/Upc95DwW9YPMQLzZL7pBRGR19/Yd/+If64z/+4/rBD34w8yisTlVVtXTp0vFFF13UfDm+4yM037p2Ppgd0rE87Y69X3jhhTPKI09+jc3JItK1RLon95CKWRBfqCqkWUykm+xKluCKK66Y0eY8/Uwb2Km9BJBouivO4Emefshe+TB2DtXlO5UjCyHbbj+T59hxnrapXvbLrrXf87lnlF9SDvt22mHV4Gfxmi/Mk9i0kR1nJjfvf6EH7F099CdXEohL9EHfZMsy2+59siA7MZ3v1R7tk3Xw3m233VZVgw7xO1aT4D3+iBm+157UpTlz5rQYR/7KELvJhU2KX/SfbapDOeQtuyAO4jWfzKeK7Xihreqxv8nYBA/pN70yZtEeuuG5XFWBR+yC7rBD/NAOukFvxRL8kUWczMC6gw5P+Dk8Efv5An3Mu3BkQem/topf2owH9uV6T5vYDX1EL7zwQt1www311FNP7TBO9UxNp06dOnXq1KlTp06d9mjaLTI1S5YsGS9fvrzN8M2SoS1ONIH2WqtpBgcpyLPCJ0+BqRpm25BRZAYJhTUTzbXoTic69NBD20waEiwrZEYs4wCtMIuFEEBTIAwQB0gAFCTvkdHnRIjxzHn8UJtrr722qobMCV5AXyAH0KC8IwVPkewV5MOsGyrk91zTjIf6oZ5E1vBnNBpN3YwMYYLgQEDVaYbvlDOZEs9BHiALZAYpxXP3yCxfvryqBpl4374nbc4Tg+gABM4pMniJt3RE+8iSbsjg3HHHHVU16FDeaQSBI1OygYAkyqmeJUuWNHQwb3emD9AUv0OOcq2+zCQb+sY3vjGjHBnLPFlFWyFMkGN6qj59ZSf0KW8CR3iKl3wHu4V2Q774jNyrRl+tU5/cKzPZXnt68Brydvjhhzd/wzahaPQtT0tS5iWXXFJVQ8YlbQdimvuMdrS3omrQa35QO/iSI488ss4777z6l3/5l56p2QHJ1NAZ+wZlPXLtOTnQcXbPTukMm+C/xDvxhp74PbP4F1xwwYzyxJTjjz+++XffsQO+QNbbZ5kROigDwh7FI4g0u1SOOCDu0DW+Fi/4XDponT47U35mRsWbiy66qKqqbr/99qoa/BGe82v67fQnfohvZjNkwI/wa3maJz5BvB988MGpfQx4yG7xCE/pT96XJ1uWqyrEE3pmvIN32gRJz5iqj76nR/ag8ml4SNZ8nPGQ9/kf7VSPcvxOBng+uV+2ahif5f4lOkSW69ata2XhYe6pyP0abI2+yQKIO7nHmO2SDd7SB8/LOuirOKFdKRtjmNwzo/3GsvTMmJNtay8dwcNLL335sEYyz5UR2in2aF/uCaKzy5Yta/w3nmCDmUk0/qB3eMPvsRU8ZUsZk2V58UJMPumkk6pqsNnc9zYej+umm27qmZpOnTp16tSpU6dOnTr9fNJucfrZiy++WKtX
[... base64-encoded PNG output (preview of a training and a validation patch) truncated ...]\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "# Let's look at one of our training and validation patches.\n", + "plt.figure(figsize=(14,7))\n", + "plt.subplot(1,2,1)\n", + "plt.imshow(X[0,...,0], cmap='gray')\n", + "plt.title('Training Patch');\n", + "plt.subplot(1,2,2)\n", + "plt.imshow(X_val[0,...,0], cmap='gray')\n", + "plt.title('Validation Patch');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Configure" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'means': ['110.72957232412905'],\n", + " 'stds': ['63.656060106500874'],\n", + " 'n_dim': 2,\n", + " 'axes': 'YXC',\n", + " 'n_channel_in': 1,\n", + " 'n_channel_out': 1,\n", + " 'unet_residual': True,\n", + " 'unet_n_depth': 2,\n", + " 'unet_kern_size': 3,\n", + " 'unet_n_first': 96,\n", + " 'unet_last_activation': 'linear',\n", + " 'unet_input_shape': (None, None, 1),\n", + " 'train_loss': 'mse',\n", + " 'train_epochs': 200,\n", + " 'train_steps_per_epoch': 400,\n", + " 'train_learning_rate': 0.0004,\n", + " 'train_batch_size': 128,\n", + " 'train_tensorboard': True,\n", + " 'train_checkpoint': 'weights_best.h5',\n", + " 'train_reduce_lr': {'factor': 0.5, 'patience': 10},\n", + " 'batch_norm': True,\n", + " 'n2v_perc_pix': 0.198,\n", + " 'n2v_patch_shape': (64, 64),\n", + " 'n2v_manipulator': 'uniform_withCP',\n", + " 'n2v_neighborhood_radius': 2,\n", + " 'single_net_per_channel': False,\n", + " 'structN2Vmask': None,\n", + " 'probabilistic': False}" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "config = N2VConfig(X, unet_kern_size=3, \n", + " train_steps_per_epoch=400, train_epochs=200, train_loss='mse', batch_norm=True, \n", + " train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64), \n", + " unet_n_first = 96,\n", + " unet_residual = True,\n", + " n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=2,\n", + " single_net_per_channel=False)\n", + "\n", + "# Let's look at the parameters stored in the config-object.\n", + "vars(config)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/tbuchhol/Gitrepos/n2v/n2v/models/n2v_standard.py:428: UserWarning: output path for model already exists, files may be overwritten: /home/tbuchhol/Gitrepos/n2v/examples/2D/denoising2D_BSD68/models/BSD68_reproducability_5x5\n", + " warnings.warn('output path for model already exists, files may be overwritten: %s' % str(self.logdir.resolve()))\n" + ] + } + ], + "source": [ + "# a name used to identify the model\n", + "model_name = 'BSD68_reproducability_5x5'\n", + "# the base directory in which our model will live\n", + "basedir = 'models'\n", + "# We are now creating our network model.\n", + "model = N2V(config, model_name, basedir=basedir)\n", + "model.prepare_for_training(metrics=())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Training\n", + "\n", + "Training the model will likely take some time. We recommend to monitor the progress with TensorBoard, which allows you to inspect the losses during training. 
Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.\n", + "\n", + "You can start TensorBoard in a terminal from the current working directory with `tensorboard --logdir=.` Then connect to http://localhost:6006/ with your browser." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/tbuchhol/Gitrepos/n2v/n2v/models/n2v_standard.py:188: UserWarning: small number of validation images (only 0.1% of all images)\n", + "  warnings.warn(\"small number of validation images (only %.1f%% of all images)\" % (100*frac_val))\n", + "Preparing validation data: 100%|██████████| 4/4 [00:00<00:00, 350.06it/s]" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "8 blind-spots will be generated per training patch of size (64, 64).\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Epoch 1/10\n", + "20/20 [==============================] - 14s 679ms/step - loss: 0.4832 - val_loss: 2.5607\n", + "Epoch 2/10\n", + "20/20 [==============================] - 7s 341ms/step - loss: 0.2406 - val_loss: 0.5315\n", + "Epoch 3/10\n", + "20/20 [==============================] - 7s 335ms/step - loss: 0.2304 - val_loss: 0.3726\n", + "Epoch 4/10\n", + "20/20 [==============================] - 7s 340ms/step - loss: 0.2264 - val_loss: 0.3239\n", + "Epoch 5/10\n", + "20/20 [==============================] - 7s 334ms/step - loss: 0.2223 - val_loss: 0.2887\n", + "Epoch 6/10\n", + "20/20 [==============================] - 7s 338ms/step - loss: 0.2157 - val_loss: 0.2965\n", + "Epoch 7/10\n", + "20/20 [==============================] - 7s 330ms/step - loss: 0.2154 - val_loss: 0.2835\n", + "Epoch 8/10\n", + "20/20 [==============================] - 7s 333ms/step - loss: 0.2150 - val_loss: 0.2841\n", + "Epoch 9/10\n", + "20/20 [==============================] - 7s 338ms/step - loss: 0.2156 - val_loss: 0.2842\n", + "Epoch 10/10\n", + "20/20 [==============================] - 7s 333ms/step - loss: 0.2151 - val_loss: 0.2769\n", + "\n", + "Loading network weights from 'weights_best.h5'.\n" + ] + } + ], + "source": [ + "# We are ready to start training now.\n", + "history = model.train(X, X_val, 10, 20)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### After training, let's plot training and validation loss." 
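+ "\n", + "\n", + "A minimal sketch, assuming `history` is the Keras-style `History` object returned by `model.train` above: the same curves can also be drawn directly from `history.history`, without the `plot_history` helper.\n", + "\n", + "```python\n", + "import matplotlib.pyplot as plt\n", + "\n", + "# history.history maps metric names (here 'loss', 'lr' and 'val_loss')\n", + "# to a list with one value per epoch.\n", + "plt.plot(history.history['loss'], label='loss')\n", + "plt.plot(history.history['val_loss'], label='val_loss')\n", + "plt.xlabel('epoch')\n", + "plt.legend();\n", + "```"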
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['loss', 'lr', 'val_loss']\n" + ] + }, + { + "data": { + "image/png": "[... base64-encoded PNG output (training and validation loss curves) truncated ...]\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "print(sorted(list(history.history.keys())))\n", + "plt.figure(figsize=(16,5))\n", + "plot_history(history,['loss','val_loss']);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Compute PSNR to GT" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], + "source": [ + "groundtruth_data = np.load('data/BSD68_reproducibility_data/test/bsd68_groundtruth.npy', allow_pickle=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [], + "source": [ + "test_data = np.load('data/BSD68_reproducibility_data/test/bsd68_gaussian25.npy', allow_pickle=True)\n", + "# Note that we do not round or clip the noisy data to [0,255]\n", + "# If you want to enable clipping and rounding to emulate an 8 bit image format,\n", + "# uncomment the following line.\n", + "# test_data = np.round(np.clip(test_data, 0, 255.))" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [], + "source": [ + "def PSNR(gt, img):\n", + " mse = np.mean(np.square(gt - img))\n", + " return 20 * np.log10(255) - 10 * np.log10(mse)" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "# Weights corresponding to the smallest validation loss\n", + "# Smallest validation loss does not necessarily correspond to best performance, \n", + "# because the loss is computed to noisy target pixels.\n", + "model.load_weights('weights_best.h5')" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "pred = []\n", + "psnrs = []\n", + "for gt, img in zip(groundtruth_data, test_data):\n", + " p_ = model.predict(img.astype(np.float32), 'YX');\n", + " pred.append(p_)\n", + " psnrs.append(PSNR(gt, p_))\n", + "\n", + "psnrs = np.array(psnrs)" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "PSNR: 23.81\n" + ] + } + ], + "source": [ + "print(\"PSNR:\", np.round(np.mean(psnrs), 2))" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [], + "source": [ + "# The weights of the converged network. 
\n", + "model.load_weights('weights_last.h5')" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [], + "source": [ + "pred = []\n", + "psnrs = []\n", + "for gt, img in zip(groundtruth_data, test_data):\n", + " p_ = model.predict(img.astype(np.float32), 'YX')\n", + " pred.append(p_)\n", + " psnrs.append(PSNR(gt, p_))\n", + "\n", + "psnrs = np.array(psnrs)" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "PSNR: 23.81\n" + ] + } + ], + "source": [ + "print(\"PSNR:\", np.round(np.mean(psnrs), 2))" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "n2v", + "language": "python", + "name": "n2v" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/ColabNotebooks/Noise2Void_2D_ZeroCostDL4Mic copy.ipynb b/ColabNotebooks/Noise2Void_2D_ZeroCostDL4Mic copy.ipynb new file mode 100644 index 00000000..558dfeda --- /dev/null +++ b/ColabNotebooks/Noise2Void_2D_ZeroCostDL4Mic copy.ipynb @@ -0,0 +1 @@ +{"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Noise2Void_2D_ZeroCostDL4Mic.ipynb","provenance":[{"file_id":"1hMjEc-Ex7j-jeYGclaPw2x3OgbkeC6Bl","timestamp":1610626439596},{"file_id":"1_W4q9V1ExGFldTUBvGK91E0LG5QMc7K6","timestamp":1602523405636},{"file_id":"1t9a-44km730bI7F4I08-6Xh7wEZuL98p","timestamp":1591013189418},{"file_id":"11TigzvLl4FSSwFHUNwLzZKI2IAix4Nmu","timestamp":1586415689249},{"file_id":"1_dSnxUg_qtNWjrPc7D6RWDWlCanEL4Ve","timestamp":1585153449937},{"file_id":"1bKo8jYVZPPgXPa_-Gdu1KhDnNN4vYfLx","timestamp":1583200150464}],"collapsed_sections":[],"toc_visible":true,"machine_shape":"hm"},"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.4"}},"cells":[{"cell_type":"markdown","metadata":{"id":"V9zNGvape2-I"},"source":["# **Noise2Void (2D)**\n","\n","---\n","\n"," Noise2Void is a deep-learning method that can be used to denoise many types of images, including microscopy images and which was originally published by [Krull *et al.* on arXiv](https://arxiv.org/abs/1811.10980). It allows denoising of image data in a self-supervised manner, therefore high-quality, low noise equivalent images are not necessary to train this network. This is performed by \"masking\" a random subset of pixels in the noisy image and training the network to predict the values in these pixels. The resulting output is a denoised version of the image. Noise2Void is based on the popular U-Net network architecture, adapted from [CARE](https://www.nature.com/articles/s41592-018-0216-7).\n","\n"," **This particular notebook enables self-supervised denoised of 2D dataset. If you are interested in 3D dataset, you should use the Noise2Void 3D notebook instead.**\n","\n","---\n","\n","*Disclaimer*:\n","\n","This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). 
Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.\n","\n","This notebook is largely based on the following paper:\n","\n","**Noise2Void - Learning Denoising from Single Noisy Images**\n","from Krull *et al.* published on arXiv in 2018 (https://arxiv.org/abs/1811.10980)\n","\n","And source code found in: https://github.com/juglab/n2v\n","\n","**Please also cite this original paper when using or developing this notebook.**\n"]},{"cell_type":"markdown","metadata":{"id":"jWAz2i7RdxUV"},"source":["# **How to use this notebook?**\n","\n","---\n","\n","Video describing how to use our notebooks are available on youtube:\n"," - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook\n"," - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook\n","\n","\n","---\n","###**Structure of a notebook**\n","\n","The notebook contains two types of cell: \n","\n","**Text cells** provide information and can be modified by douple-clicking the cell. You are currently reading the text cell. You can create a new text by clicking `+ Text`.\n","\n","**Code cells** contain code and the code can be modfied by selecting the cell. To execute the cell, move your cursor on the `[ ]`-mark on the left side of the cell (play button appears). Click to execute the cell. After execution is done the animation of play button stops. You can create a new coding cell by clicking `+ Code`.\n","\n","---\n","###**Table of contents, Code snippets** and **Files**\n","\n","On the top left side of the notebook you find three tabs which contain from top to bottom:\n","\n","*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.\n","\n","*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.\n","\n","*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here. \n","\n","**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.\n","\n","**Note:** The \"sample data\" in \"Files\" contains default files. Do not upload anything in here!\n","\n","---\n","###**Making changes to the notebook**\n","\n","**You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.\n","\n","To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).\n","You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment."]},{"cell_type":"markdown","metadata":{"id":"vNMDQHm0Ah-Z"},"source":["# **0. Before getting started**\n","---\n","\n","Before you run the notebook, please ensure that you are logged into your Google account and have the training and/or data to process in your Google Drive.\n","\n","For Noise2Void to train, it only requires a single noisy image but multiple images can be used. 
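The self-supervised trick described in the introduction, masking a few pixels and asking the network to predict them, can be pictured with a short numpy sketch. This is an illustration of the idea only, not the actual `n2v` implementation; N2V masks roughly 0.2% of the pixels in each patch and replaces each masked pixel with a random neighbour so the network cannot simply copy its input:

```python
import numpy as np

rng = np.random.default_rng(0)
noisy = rng.normal(100, 10, size=(64, 64)).astype(np.float32)  # stand-in noisy patch

n_mask = max(1, int(0.002 * noisy.size))   # ~0.2% of pixels per patch
ys = rng.integers(0, 64, n_mask)
xs = rng.integers(0, 64, n_mask)

targets = noisy[ys, xs].copy()             # values the network must predict
masked = noisy.copy()
dy = rng.integers(-2, 3, n_mask)           # random offsets within a
dx = rng.integers(-2, 3, n_mask)           # small neighbourhood
masked[ys, xs] = noisy[np.clip(ys + dy, 0, 63), np.clip(xs + dx, 0, 63)]

# Training minimizes the MSE between the network output and `targets`
# at the masked coordinates only; no clean image is ever needed.
```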
Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\n","\n","Please note that you currently can **only use .tif files!**\n","\n","**We strongly recommend that you generate high signal to noise ration version of your noisy images (Quality control dataset). These images can be used to assess the quality of your trained model**. The quality control assessment can be done directly in this notebook.\n","\n"," You can also provide a folder that contains the data that you wish to analyse with the trained network once all training has been performed.\n","\n","Here is a common data structure that can work:\n","\n","* Data\n"," - **Training dataset**\n"," - **Quality control dataset** (Optional but recomended)\n"," - Low SNR images\n"," - img_1.tif, img_2.tif\n"," - High SNR images\n"," - img_1.tif, img_2.tif \n"," - **Data to be predicted** \n"," - Results\n","\n","\n","The **Results** folder will contain the processed images, trained model and network parameters as csv file. Your original images remain unmodified.\n","\n","---\n","**Important note**\n","\n","- If you wish to **train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.\n","\n","- If you wish to **evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.\n","\n","- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.\n","---\n"]},{"cell_type":"markdown","metadata":{"id":"b4-r1gE7Iamv"},"source":["# **1. Initialise the Colab session**\n","---"]},{"cell_type":"markdown","metadata":{"id":"DMNHVZfHmbKb"},"source":["\n","## **1.1. Check for GPU access**\n","---\n","\n","By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:\n","\n","Go to **Runtime -> Change the Runtime type**\n","\n","**Runtime type: Python 3** *(Python 3 is programming language in which this program is written)*\n","\n","**Accelator: GPU** *(Graphics processing unit)*\n"]},{"cell_type":"code","metadata":{"id":"BDhmUgqCStlm","cellView":"form"},"source":["#@markdown ##Run this cell to check if you have GPU access\n","%tensorflow_version 1.x\n","\n","\n","import tensorflow as tf\n","if tf.test.gpu_device_name()=='':\n"," print('You do not have GPU access.') \n"," print('Did you change your runtime ?') \n"," print('If the runtime setting is correct then Google did not allocate a GPU for your session')\n"," print('Expect slow performance. To access GPU try reconnecting later')\n","\n","else:\n"," print('You have GPU access')\n"," !nvidia-smi"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"-oqBTeLaImnU"},"source":["## **1.2. Mount your Google Drive**\n","---\n"," To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.\n","\n"," Play the cell below to mount your Google Drive and follow the link. 
In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive. \n","\n"," Once this is done, your data are available in the **Files** tab on the top left of notebook."]},{"cell_type":"code","metadata":{"id":"01Djr8v-5pPk","cellView":"form"},"source":["#@markdown ##Play the cell to connect your Google Drive to Colab\n","\n","#@markdown * Click on the URL. \n","\n","#@markdown * Sign in your Google Account. \n","\n","#@markdown * Copy the authorization code. \n","\n","#@markdown * Enter the authorization code. \n","\n","#@markdown * Click on \"Files\" site on the right. Refresh the site. Your Google Drive folder should now be available here as \"drive\". \n","\n","# mount user's Google Drive to Google Colab.\n","from google.colab import drive\n","drive.mount('/content/gdrive')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"n4yWFoJNnoin"},"source":["# **2. Install Noise2Void and dependencies**\n","---"]},{"cell_type":"code","metadata":{"id":"3u2mXn3XsWzd","cellView":"form"},"source":["Notebook_version = ['1.12']\n","\n","\n","#@markdown ##Install Noise2Void and dependencies\n","\n","# Here we enable Tensorflow 1.\n","!pip install q keras==2.2.5\n","\n","%tensorflow_version 1.x\n","import tensorflow\n","print(tensorflow.__version__)\n","print(\"Tensorflow enabled.\")\n","\n","\n","# Here we install Noise2Void and other required packages\n","!pip install n2v\n","!pip install wget\n","!pip install fpdf\n","!pip install memory_profiler\n","%load_ext memory_profiler\n","\n","print(\"Noise2Void installed.\")\n","\n","# Here we install all libraries and other depencies to run the notebook.\n","\n","# ------- Variable specific to N2V -------\n","from n2v.models import N2VConfig, N2V\n","from csbdeep.utils import plot_history\n","from n2v.utils.n2v_utils import manipulate_val_data\n","from n2v.internals.N2V_DataGenerator import N2V_DataGenerator\n","from csbdeep.io import save_tiff_imagej_compatible\n","\n","# ------- Common variable to all ZeroCostDL4Mic notebooks -------\n","import numpy as np\n","from matplotlib import pyplot as plt\n","import urllib\n","import os, random\n","import shutil \n","import zipfile\n","from tifffile import imread, imsave\n","import time\n","import sys\n","import wget\n","from pathlib import Path\n","import pandas as pd\n","import csv\n","from glob import glob\n","from scipy import signal\n","from scipy import ndimage\n","from skimage import io\n","from sklearn.linear_model import LinearRegression\n","from skimage.util import img_as_uint\n","import matplotlib as mpl\n","from skimage.metrics import structural_similarity\n","from skimage.metrics import peak_signal_noise_ratio as psnr\n","from astropy.visualization import simple_norm\n","from skimage import img_as_float32\n","from fpdf import FPDF, HTMLMixin\n","from datetime import datetime\n","from pip._internal.operations.freeze import freeze\n","import subprocess\n","from datetime import datetime\n","\n","# Colors for the warning messages\n","class bcolors:\n"," WARNING = '\\033[31m'\n","W = '\\033[0m' # white (normal)\n","R = '\\033[31m' # red\n","\n","#Disable some of the tensorflow warnings\n","import warnings\n","warnings.filterwarnings(\"ignore\")\n","\n","print(\"Libraries installed\")\n","\n","\n","# Check if this is the latest version of the notebook\n","Latest_notebook_version = 
pd.read_csv(\"https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv\")\n","print('Notebook version: '+Notebook_version[0])\n","strlist = Notebook_version[0].split('.')\n","Notebook_version_main = strlist[0]+'.'+strlist[1]\n","if Notebook_version_main == Latest_notebook_version.columns:\n"," print(\"This notebook is up-to-date.\")\n","else:\n"," print(bcolors.WARNING +\"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\")\n","\n","def pdf_export(trained = False, augmentation = False, pretrained_model = False):\n"," class MyFPDF(FPDF, HTMLMixin):\n"," pass\n","\n"," pdf = MyFPDF()\n"," pdf.add_page()\n"," pdf.set_right_margin(-1)\n"," pdf.set_font(\"Arial\", size = 11, style='B') \n","\n"," Network = 'Noise2Void 2D'\n"," day = datetime.now()\n"," datetime_str = str(day)[0:10]\n","\n"," Header = 'Training report for '+Network+' model ('+model_name+')\\nDate: '+datetime_str\n"," pdf.multi_cell(180, 5, txt = Header, align = 'L') \n","\n"," # add another cell \n"," if trained:\n"," training_time = \"Training time: \"+str(hour)+ \"hour(s) \"+str(mins)+\"min(s) \"+str(round(sec))+\"sec(s)\"\n"," pdf.cell(190, 5, txt = training_time, ln = 1, align='L')\n"," pdf.ln(1)\n","\n"," Header_2 = 'Information for your materials and method:'\n"," pdf.cell(190, 5, txt=Header_2, ln=1, align='L')\n","\n"," all_packages = ''\n"," for requirement in freeze(local_only=True):\n"," all_packages = all_packages+requirement+', '\n"," #print(all_packages)\n","\n"," #Main Packages\n"," main_packages = ''\n"," version_numbers = []\n"," for name in ['tensorflow','numpy','Keras','csbdeep']:\n"," find_name=all_packages.find(name)\n"," main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '\n"," #Version numbers only here:\n"," version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])\n","\n"," cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)\n"," cuda_version = cuda_version.stdout.decode('utf-8')\n"," cuda_version = cuda_version[cuda_version.find(', V')+3:-1]\n"," gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)\n"," gpu_name = gpu_name.stdout.decode('utf-8')\n"," gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]\n"," #print(cuda_version[cuda_version.find(', V')+3:-1])\n"," #print(gpu_name)\n","\n"," shape = io.imread(Training_source+'/'+os.listdir(Training_source)[0]).shape\n"," dataset_size = len(os.listdir(Training_source))\n","\n"," text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(Xdata.shape[0])+' image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). 
The training was accelerated using a '+gpu_name+'GPU.'\n","\n"," if pretrained_model:\n"," text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(Xdata.shape[0])+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'\n","\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n"," pdf.multi_cell(190, 5, txt = text, align='L')\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.ln(1)\n"," pdf.cell(26, 5, txt='Augmentation: ', ln=0)\n"," pdf.set_font('')\n"," if augmentation:\n"," aug_text = 'The dataset was augmented by default.'\n"," else:\n"," aug_text = 'No augmentation was used for training.'\n"," pdf.multi_cell(190, 5, txt=aug_text, align='L')\n"," pdf.set_font('Arial', size = 11, style = 'B')\n"," pdf.ln(1)\n"," pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n"," if Use_Default_Advanced_Parameters:\n"," pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')\n"," pdf.cell(200, 5, txt='The following parameters were used for training:')\n"," pdf.ln(1)\n"," html = \"\"\" \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n","
<table width=40% style=\"margin-left:0px;\">\n"," <tr><th align=\"left\">Parameter</th><th align=\"left\">Value</th></tr>\n"," <tr><td>number_of_epochs</td><td>{0}</td></tr>\n"," <tr><td>patch_size</td><td>{1}</td></tr>\n"," <tr><td>batch_size</td><td>{2}</td></tr>\n"," <tr><td>number_of_steps</td><td>{3}</td></tr>\n"," <tr><td>percentage_validation</td><td>{4}</td></tr>\n"," <tr><td>initial_learning_rate</td><td>{5}</td></tr>\n"," </table>
\n"," \"\"\".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),batch_size,number_of_steps,percentage_validation,initial_learning_rate)\n"," pdf.write_html(html)\n","\n"," #pdf.multi_cell(190, 5, txt = text_2, align='L')\n"," pdf.set_font(\"Arial\", size = 11, style='B')\n"," pdf.ln(1)\n"," pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.cell(28, 5, txt= 'Training_source:', align = 'L', ln=0)\n"," pdf.set_font('')\n"," pdf.multi_cell(170, 5, txt = Training_source, align = 'L')\n"," # pdf.set_font('')\n"," # pdf.set_font('Arial', size = 10, style = 'B')\n"," # pdf.cell(28, 5, txt= 'Training_target:', align = 'L', ln=0)\n"," # pdf.set_font('')\n"," # pdf.multi_cell(170, 5, txt = Training_target, align = 'L')\n"," #pdf.cell(190, 5, txt=aug_text, align='L', ln=1)\n"," pdf.ln(1)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)\n"," pdf.set_font('')\n"," pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')\n"," pdf.ln(1)\n"," pdf.cell(60, 5, txt = 'Example Training Image', ln=1)\n"," pdf.ln(1)\n"," exp_size = io.imread('/content/TrainingDataExample_N2V2D.png').shape\n"," pdf.image('/content/TrainingDataExample_N2V2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))\n"," pdf.ln(1)\n"," ref_1 = 'References:\\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. \"ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy.\" BioRxiv (2020).'\n"," pdf.multi_cell(190, 5, txt = ref_1, align='L')\n"," ref_2 = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. \"Noise2void-learning denoising from single noisy images.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
2019.'\n"," pdf.multi_cell(190, 5, txt = ref_2, align='L')\n"," pdf.ln(3)\n"," reminder = 'Important:\\nRemember to perform the quality control step on all newly trained models\\nPlease consider depositing your training dataset on Zenodo'\n"," pdf.set_font('Arial', size = 11, style='B')\n"," pdf.multi_cell(190, 5, txt=reminder, align='C')\n","\n"," pdf.output(model_path+'/'+model_name+'/'+model_name+\"_training_report.pdf\")\n","\n","\n","\n"," #Make a pdf summary of the QC results\n","\n","def qc_pdf_export():\n"," class MyFPDF(FPDF, HTMLMixin):\n"," pass\n","\n"," pdf = MyFPDF()\n"," pdf.add_page()\n"," pdf.set_right_margin(-1)\n"," pdf.set_font(\"Arial\", size = 11, style='B') \n","\n"," Network = 'Noise2Void 2D'\n","\n"," day = datetime.now()\n"," datetime_str = str(day)[0:10]\n","\n"," Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\\nDate: '+datetime_str\n"," pdf.multi_cell(180, 5, txt = Header, align = 'L') \n","\n"," all_packages = ''\n"," for requirement in freeze(local_only=True):\n"," all_packages = all_packages+requirement+', '\n","\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 11, style = 'B')\n"," pdf.ln(2)\n"," pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')\n"," pdf.ln(1)\n"," exp_size = io.imread(full_QC_model_path+'/Quality Control/lossCurvePlots.png').shape\n"," if os.path.exists(full_QC_model_path+'/Quality Control/lossCurvePlots.png'):\n"," pdf.image(full_QC_model_path+'/Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))\n"," else:\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size=10)\n"," pdf.cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.')\n"," pdf.ln(2)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.ln(3)\n"," pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)\n"," pdf.ln(1)\n"," exp_size = io.imread(full_QC_model_path+'/Quality Control/QC_example_data.png').shape\n"," pdf.image(full_QC_model_path+'/Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))\n"," pdf.ln(1)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 11, style = 'B')\n"," pdf.ln(1)\n"," pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n","\n"," pdf.ln(1)\n"," html = \"\"\"\n"," \n"," \n"," \"\"\"\n"," with open(full_QC_model_path+'/Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:\n"," metrics = csv.reader(csvfile)\n"," header = next(metrics)\n"," image = header[0]\n"," mSSIM_PvsGT = header[1]\n"," mSSIM_SvsGT = header[2]\n"," NRMSE_PvsGT = header[3]\n"," NRMSE_SvsGT = header[4]\n"," PSNR_PvsGT = header[5]\n"," PSNR_SvsGT = header[6]\n"," header = \"\"\"\n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \"\"\".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)\n"," html = html+header\n"," for row in metrics:\n"," image = row[0]\n"," mSSIM_PvsGT = row[1]\n"," mSSIM_SvsGT = row[2]\n"," NRMSE_PvsGT = row[3]\n"," NRMSE_SvsGT = row[4]\n"," PSNR_PvsGT = row[5]\n"," PSNR_SvsGT = row[6]\n"," cells = \"\"\"\n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," \n"," 
\"\"\".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))\n"," html = html+cells\n"," html = html+\"\"\"
<tr><th align=\"left\">{0}</th><th align=\"left\">{1}</th><th align=\"left\">{2}</th><th align=\"left\">{3}</th><th align=\"left\">{4}</th><th align=\"left\">{5}</th><th align=\"left\">{6}</th></tr>\n"," <tr><td align=\"left\">{0}</td><td align=\"left\">{1}</td><td align=\"left\">{2}</td><td align=\"left\">{3}</td><td align=\"left\">{4}</td><td align=\"left\">{5}</td><td align=\"left\">{6}</td></tr>
\"\"\"\n"," \n"," pdf.write_html(html)\n","\n"," pdf.ln(1)\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n"," ref_1 = 'References:\\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. \"ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy.\" BioRxiv (2020).'\n"," pdf.multi_cell(190, 5, txt = ref_1, align='L')\n"," ref_2 = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. \"Noise2void-learning denoising from single noisy images.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.'\n"," pdf.multi_cell(190, 5, txt = ref_2, align='L')\n"," pdf.ln(3)\n"," reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'\n","\n"," pdf.set_font('Arial', size = 11, style='B')\n"," pdf.multi_cell(190, 5, txt=reminder, align='C')\n","\n"," pdf.output(full_QC_model_path+'/Quality Control/'+QC_model_name+'_QC_report.pdf')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Fw0kkTU6CsU4"},"source":["# **3. Select your parameters and paths**\n","\n","---\n"]},{"cell_type":"markdown","metadata":{"id":"WzYAA-MuaYrT"},"source":["## **3.1. Setting main training parameters**\n","---\n","\n","\n"]},{"cell_type":"markdown","metadata":{"id":"CB6acvUFtWqd"},"source":[" **Paths for training, predictions and results**\n","\n","**`Training_source:`:** These is the path to your folders containing the Training_source (noisy images). To find the path of the folder containing your datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.\n","\n","**`model_name`:** Use only my_model -style, not my-model (Use \"_\" not \"-\"). Do not use spaces in the name. Do not re-use the name of an existing model (saved in the same folder), otherwise it will be overwritten.\n","\n","**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).\n","\n","\n","**Training Parameters**\n","\n","**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-200 epochs. Evaluate the performance after training (see 4.3.). **Default value: 100**\n"," \n","**`patch_size`:** Noise2Void divides the image into patches for training. Input the size of the patches (length of a side). The value should be between 64 and the dimensions of the image and divisible by 8. **Default value: 64**\n","\n","**Advanced Parameters - experienced users only**\n","\n","**`batch_size:`** This parameter defines the number of patches seen in each training step. Noise2Void requires a large batch size for stable training. Reduce this parameter if your GPU runs out of memory. **Default value: 128**\n","\n","**`number_of_steps`:** Define the number of training steps by epoch. By default this parameter is calculated so that each image / patch is seen at least once per epoch. **Default value: Number of patch / batch_size**\n","\n","**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during the training. **Default value: 10**\n","\n","**`initial_learning_rate`:** Input the initial value to be used as learning rate. 
**Default value: 0.0004**\n"]},{"cell_type":"code","metadata":{"id":"ewpNJ_I0Mv47","cellView":"form"},"source":["# create DataGenerator-object.\n","\n","datagen = N2V_DataGenerator()\n","\n","#@markdown ###Path to training image(s): \n","Training_source = \"\" #@param {type:\"string\"}\n","\n","#compatibility to easily change the name of the parameters\n","training_images = Training_source \n","imgs = datagen.load_imgs_from_directory(directory = Training_source)\n","\n","#@markdown ### Model name and path:\n","model_name = \"\" #@param {type:\"string\"}\n","model_path = \"\" #@param {type:\"string\"}\n","\n","full_model_path = model_path+'/'+model_name+'/'\n","\n","#@markdown ###Training Parameters\n","#@markdown Number of epochs:\n","number_of_epochs = 100#@param {type:\"number\"}\n","\n","#@markdown Patch size (pixels)\n","patch_size = 64#@param {type:\"number\"}\n","\n","#@markdown ###Advanced Parameters\n","\n","Use_Default_Advanced_Parameters = True#@param {type:\"boolean\"}\n","\n","#@markdown ###If not, please input:\n","batch_size = 128#@param {type:\"number\"}\n","number_of_steps = 100#@param {type:\"number\"}\n","percentage_validation = 10#@param {type:\"number\"}\n","initial_learning_rate = 0.0004 #@param {type:\"number\"}\n","\n","\n","if (Use_Default_Advanced_Parameters): \n"," print(\"Default advanced parameters enabled\")\n"," # number_of_steps is defined in the following cell in this case\n"," batch_size = 128\n"," percentage_validation = 10\n"," initial_learning_rate = 0.0004\n"," \n","\n","#here we check that no model with the same name already exist, if so print a warning\n","\n","if os.path.exists(model_path+'/'+model_name):\n"," print(bcolors.WARNING +\"!! WARNING: \"+model_name+\" already exists and will be deleted in the following cell !!\")\n"," print(bcolors.WARNING +\"To continue training \"+model_name+\", choose a new model_name here, and load \"+model_name+\" in section 3.3\"+W)\n"," \n","\n","# This will open a randomly chosen dataset input image\n","random_choice = random.choice(os.listdir(Training_source))\n","x = imread(Training_source+\"/\"+random_choice)\n","\n","# Here we check that the input images contains the expected dimensions\n","if len(x.shape) == 2:\n"," print(\"Image dimensions (y,x)\",x.shape)\n","\n","if not len(x.shape) == 2:\n"," print(bcolors.WARNING +\"Your images appear to have the wrong dimensions. 
Image dimension\",x.shape)\n","\n","\n","#Find image XY dimension\n","Image_Y = x.shape[0]\n","Image_X = x.shape[1]\n","\n","#Hyperparameters failsafes\n","\n","# Here we check that patch_size is smaller than the smallest xy dimension of the image \n","if patch_size > min(Image_Y, Image_X):\n"," patch_size = min(Image_Y, Image_X)\n"," print (bcolors.WARNING + \" Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:\",patch_size)\n","\n","# Here we check that patch_size is divisible by 8\n","if not patch_size % 8 == 0:\n"," patch_size = ((int(patch_size / 8)-1) * 8)\n"," print (bcolors.WARNING + \" Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:\",patch_size)\n","\n","# Here we disable pre-trained model by default (in case the next cell is not run)\n","Use_pretrained_model = False\n","\n","# Here we enable data augmentation by default (in case the cell is not ran)\n","Use_Data_augmentation = True\n","\n","print(\"Parameters initiated.\")\n","\n","#Here we display one image\n","norm = simple_norm(x, percent = 99)\n","\n","f=plt.figure(figsize=(16,8))\n","plt.subplot(1,2,1)\n","plt.imshow(x, interpolation='nearest', norm=norm, cmap='magma')\n","plt.title('Training source')\n","plt.axis('off');\n","plt.savefig('/content/TrainingDataExample_N2V2D.png',bbox_inches='tight',pad_inches=0)\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"xGcl7WGP4WHt"},"source":["## **3.2. Data augmentation**\n","---"]},{"cell_type":"markdown","metadata":{"id":"5Lio8hpZ4PJ1"},"source":["Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting), without augmentation. Augmentation is not necessary for training and if your training dataset is large you should disable it.\n","\n","Data augmentation is performed here by rotating the patches in XY-Plane and flip them along X-Axis. This only works if the patches are square in XY.\n","\n"," **By default data augmentation is enabled. Disable this option is you run out of RAM during the training**.\n"," "]},{"cell_type":"code","metadata":{"id":"htqjkJWt5J_8","cellView":"form"},"source":["#Data augmentation\n","\n","#@markdown ##Play this cell to enable or disable data augmentation: \n","\n","Use_Data_augmentation = True #@param {type:\"boolean\"}\n","\n","if Use_Data_augmentation:\n"," print(\"Data augmentation enabled\")\n","\n","if not Use_Data_augmentation:\n"," print(\"Data augmentation disabled\")"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"bQDuybvyadKU"},"source":["\n","## **3.3. Using weights from a pre-trained model as initial weights**\n","---\n"," Here, you can set the the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a N2V 2D model**. \n","\n"," This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.\n","\n"," In order to continue training from the point where the pre-trained model left off, it is adviseable to also **load the learning rate** that was used when the training ended. 
This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used. "]},{"cell_type":"code","metadata":{"id":"8vPkzEBNamE4","cellView":"form"},"source":["# @markdown ##Loading weights from a pre-trained network\n","\n","Use_pretrained_model = False #@param {type:\"boolean\"}\n","\n","pretrained_model_choice = \"Model_from_file\" #@param [\"Model_from_file\"]\n","\n","Weights_choice = \"best\" #@param [\"last\", \"best\"]\n","\n","\n","#@markdown ###If you chose \"Model_from_file\", please provide the path to the model folder:\n","pretrained_model_path = \"\" #@param {type:\"string\"}\n","\n","# --------------------- Check if we load a previously trained model ------------------------\n","if Use_pretrained_model:\n","\n","# --------------------- Load the model from the choosen path ------------------------\n"," if pretrained_model_choice == \"Model_from_file\":\n"," h5_file_path = os.path.join(pretrained_model_path, \"weights_\"+Weights_choice+\".h5\")\n","\n","\n","# --------------------- Download the a model provided in the XXX ------------------------\n","\n"," if pretrained_model_choice == \"Model_name\":\n"," pretrained_model_name = \"Model_name\"\n"," pretrained_model_path = \"/content/\"+pretrained_model_name\n"," print(\"Downloading the 2D_Demo_Model_from_Stardist_2D_paper\")\n"," if os.path.exists(pretrained_model_path):\n"," shutil.rmtree(pretrained_model_path)\n"," os.makedirs(pretrained_model_path)\n"," wget.download(\"\", pretrained_model_path)\n"," wget.download(\"\", pretrained_model_path)\n"," wget.download(\"\", pretrained_model_path) \n"," wget.download(\"\", pretrained_model_path)\n"," h5_file_path = os.path.join(pretrained_model_path, \"weights_\"+Weights_choice+\".h5\")\n","\n","# --------------------- Add additional pre-trained models here ------------------------\n","\n","\n","\n","# --------------------- Check the model exist ------------------------\n","# If the model path chosen does not contain a pretrain model then use_pretrained_model is disabled, \n"," if not os.path.exists(h5_file_path):\n"," print(bcolors.WARNING+'WARNING: weights_last.h5 pretrained model does not exist')\n"," Use_pretrained_model = False\n","\n"," \n","# If the model path contains a pretrain model, we load the training rate, \n"," if os.path.exists(h5_file_path):\n","#Here we check if the learning rate can be loaded from the quality control folder\n"," if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):\n","\n"," with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:\n"," csvRead = pd.read_csv(csvfile, sep=',')\n"," #print(csvRead)\n"," \n"," if \"learning rate\" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)\n"," print(\"pretrained network learning rate found\")\n"," #find the last learning rate\n"," lastLearningRate = csvRead[\"learning rate\"].iloc[-1]\n"," #Find the learning rate corresponding to the lowest validation loss\n"," min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]\n"," #print(min_val_loss)\n"," bestLearningRate = min_val_loss['learning rate'].iloc[-1]\n","\n"," if Weights_choice == \"last\":\n"," print('Last learning rate: '+str(lastLearningRate))\n","\n"," if Weights_choice == \"best\":\n"," print('Learning rate of best validation loss: 
'+str(bestLearningRate))\n","\n"," if not \"learning rate\" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead\n"," bestLearningRate = initial_learning_rate\n"," lastLearningRate = initial_learning_rate\n"," print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead' + W)\n","\n","#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used\n"," if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):\n"," print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+ W)\n"," bestLearningRate = initial_learning_rate\n"," lastLearningRate = initial_learning_rate\n","\n","\n","# Display info about the pretrained model to be loaded (or not)\n","if Use_pretrained_model:\n"," print('Weights found in:')\n"," print(h5_file_path)\n"," print('will be loaded prior to training.')\n","\n","else:\n"," print(bcolors.WARNING+'No pretrained nerwork will be used.')\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"rQndJj70FzfL"},"source":["# **4. Train the network**\n","---"]},{"cell_type":"markdown","metadata":{"id":"tGW2iaU6X5zi"},"source":["## **4.1. Prepare the training data and model for training**\n","---\n","Here, we use the information from 3. to build the model and convert the training data into a suitable format for training."]},{"cell_type":"code","metadata":{"id":"WMJnGJpCMa4y","cellView":"form"},"source":["#@markdown ##Create the model and dataset objects\n","\n","# --------------------- Here we delete the model folder if it already exist ------------------------\n","\n","if os.path.exists(model_path+'/'+model_name):\n"," print(bcolors.WARNING +\"!! 
WARNING: Model folder already exists and has been removed !!\" + W)\n"," shutil.rmtree(model_path+'/'+model_name)\n","\n","\n","# split patches from the training images\n","Xdata = datagen.generate_patches_from_list(imgs, shape=(patch_size,patch_size), augment=Use_Data_augmentation)\n","shape_of_Xdata = Xdata.shape\n","# create a threshold (10 % patches for the validation)\n","threshold = int(shape_of_Xdata[0]*(percentage_validation/100))\n","# split the patches into training patches and validation patches\n","X = Xdata[threshold:]\n","X_val = Xdata[:threshold]\n","print(Xdata.shape[0],\"patches created.\")\n","print(threshold,\"patch images for validation (\",percentage_validation,\"%).\")\n","print(Xdata.shape[0]-threshold,\"patch images for training.\")\n","%memit\n","\n","#Here we automatically define number_of_steps as a function of training data and batch size\n","if (Use_Default_Advanced_Parameters): \n"," number_of_steps= int(X.shape[0]/batch_size)+1\n","\n","\n","# --------------------- Using pretrained model ------------------------\n","#Here we ensure that the learning rate is set correctly when using pre-trained models\n","if Use_pretrained_model:\n"," if Weights_choice == \"last\":\n"," initial_learning_rate = lastLearningRate\n","\n"," if Weights_choice == \"best\": \n"," initial_learning_rate = bestLearningRate\n","# --------------------- ---------------------- ------------------------\n","\n","# create a Config object\n","config = N2VConfig(X, unet_kern_size=3, \n"," train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, \n"," train_loss='mse', batch_norm=True, train_batch_size=batch_size, n2v_perc_pix=0.198, \n"," n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, train_learning_rate = initial_learning_rate)\n","\n","# Let's look at the parameters stored in the config-object.\n","vars(config)\n"," \n"," \n","# create network model.\n","model = N2V(config=config, name=model_name, basedir=model_path)\n","\n","# --------------------- Using pretrained model ------------------------\n","# Load the pretrained weights \n","if Use_pretrained_model:\n"," model.load_weights(h5_file_path)\n","# --------------------- ---------------------- ------------------------\n","\n","\n","print(\"Setup done.\")\n","print(config)\n","\n","\n","# creates a plot and shows one training patch and one validation patch.\n","plt.figure(figsize=(16,8))\n","plt.subplot(1,2,1)\n","plt.imshow(X[0,...,0], cmap='magma')\n","plt.axis('off')\n","plt.title('Training Patch');\n","plt.subplot(1,2,2)\n","plt.imshow(X_val[0,...,0], cmap='magma')\n","plt.axis('off')\n","plt.title('Validation Patch');\n","\n","pdf_export(pretrained_model = Use_pretrained_model)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"wQPz0F6JlvJR"},"source":["## **4.2. Start Training**\n","---\n","When playing the cell below you should see updates after each epoch (round). Network training can take some time.\n","\n","* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches. Another way to circumvent this is to save the parameters of the model after training and start training again from this point.\n","\n","Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. 
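Before starting, it can help to sanity-check the patch bookkeeping performed by the cell in section 4.1; a worked example with hypothetical numbers:

```python
# Hypothetical: 500 patches generated, default advanced parameters.
n_patches = 500
percentage_validation = 10
batch_size = 128

threshold = int(n_patches * (percentage_validation / 100))  # 50 patches for validation
n_train = n_patches - threshold                             # 450 patches for training
number_of_steps = int(n_train / batch_size) + 1             # 4 steps per epoch
```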
It is however wise to download the folder from Google Drive as all data can be erased at the next training if using the same folder.\n","\n","**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of tensorflow. You can check at: Edit > Options > Tensorflow. Choose the version 1.4 (CPU or GPU depending on your system).\n"]},{"cell_type":"code","metadata":{"id":"j_Qm5JBmlvJg","cellView":"form"},"source":["start = time.time()\n","\n","#@markdown ##Start training\n","%memit\n","\n","history = model.train(X, X_val)\n","print(\"Training done.\")\n","%memit\n","\n","# convert the history.history dict to a pandas DataFrame: \n","lossData = pd.DataFrame(history.history) \n","\n","if os.path.exists(model_path+\"/\"+model_name+\"/Quality Control\"):\n"," shutil.rmtree(model_path+\"/\"+model_name+\"/Quality Control\")\n","\n","os.makedirs(model_path+\"/\"+model_name+\"/Quality Control\")\n","\n","# The training_evaluation.csv is saved (overwrites the file if needed). \n","lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'\n","with open(lossDataCSVpath, 'w') as f:\n"," writer = csv.writer(f)\n"," writer.writerow(['loss','val_loss', 'learning rate'])\n"," for i in range(len(history.history['loss'])):\n"," writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])\n","\n","\n","# Displaying the time elapsed for training\n","dt = time.time() - start\n","mins, sec = divmod(dt, 60) \n","hour, mins = divmod(mins, 60) \n","print(\"Time elapsed:\",hour, \"hour(s)\",mins,\"min(s)\",round(sec),\"sec(s)\")\n","\n","model.export_TF(name='Noise2Void', \n"," description='Noise2Void 2D trained using ZeroCostDL4Mic.', \n"," authors=[\"You\"],\n"," test_img=X_val[0,...,0], axes='YX',\n"," patch_shape=(patch_size, patch_size))\n","\n","print(\"Your model has been successfully exported and can now also be used in the CSBDeep Fiji plugin\")\n","\n","pdf_export(trained = True, pretrained_model = Use_pretrained_model)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"QYuIOWQ3imuU"},"source":["# **5. Evaluate your model**\n","---\n","\n","This section allows the user to perform important quality checks on the validity and generalisability of the trained model. \n","\n","**We highly recommend performing quality control on all newly trained models.**\n","\n"]},{"cell_type":"code","metadata":{"id":"zazOZ3wDx0zQ","cellView":"form"},"source":["# model name and path\n","#@markdown ###Do you want to assess the model you just trained?\n","Use_the_current_trained_model = True #@param {type:\"boolean\"}\n","\n","#@markdown ###If not, please provide the path to the model folder:\n","\n","QC_model_folder = \"\" #@param {type:\"string\"}\n","\n","#Here we define the loaded model name and path\n","QC_model_name = os.path.basename(QC_model_folder)\n","QC_model_path = os.path.dirname(QC_model_folder)\n","\n","if (Use_the_current_trained_model): \n"," QC_model_name = model_name\n"," QC_model_path = model_path\n","\n","full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'\n","if os.path.exists(full_QC_model_path):\n"," print(\"The \"+QC_model_name+\" network will be evaluated\")\n","else:\n"," print(bcolors.WARNING + '!! 
WARNING: The chosen model does not exist !!')\n"," print('Please make sure you provide a valid model path and model name before proceeding further.')\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yDY9dtzdUTLh"},"source":["## **5.1. Inspection of the loss function**\n","---\n","\n","It is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*\n","\n","**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.\n","\n","**Validation loss** describes the same error value between the model's prediction on a validation image and compared to it's target.\n","\n","During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.\n","\n","Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again an the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words the network is remembering the exact noise patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased."]},{"cell_type":"code","metadata":{"id":"vMzSP50kMv5p","cellView":"form"},"source":["#@markdown ##Play the cell to show a plot of training errors vs. epoch number\n","\n","lossDataFromCSV = []\n","vallossDataFromCSV = []\n","\n","with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:\n"," csvRead = csv.reader(csvfile, delimiter=',')\n"," next(csvRead)\n"," for row in csvRead:\n"," lossDataFromCSV.append(float(row[0]))\n"," vallossDataFromCSV.append(float(row[1]))\n","\n","epochNumber = range(len(lossDataFromCSV))\n","plt.figure(figsize=(15,10))\n","\n","plt.subplot(2,1,1)\n","plt.plot(epochNumber,lossDataFromCSV, label='Training loss')\n","plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')\n","plt.title('Training loss and validation loss vs. epoch number (linear scale)')\n","plt.ylabel('Loss')\n","plt.xlabel('Epoch number')\n","plt.legend()\n","\n","plt.subplot(2,1,2)\n","plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')\n","plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')\n","plt.title('Training loss and validation loss vs. epoch number (log scale)')\n","plt.ylabel('Loss')\n","plt.xlabel('Epoch number')\n","plt.legend()\n","plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png')\n","plt.show()\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"biT9FI9Ri77_"},"source":["## **5.2. 
Error mapping and quality metrics estimation**\n","---\n","\n","This section will display SSIM maps and RSE maps as well as calculating total SSIM, NRMSE and PSNR metrics for all the images provided in the \"Source_QC_folder\" and \"Target_QC_folder\" !\n","\n","**1. The SSIM (structural similarity) map** \n","\n","The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as window of 11 pixels and with Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). \n","\n","**mSSIM** is the SSIM value calculated across the entire window of both images.\n","\n","**The output below shows the SSIM maps with the mSSIM**\n","\n","**2. The RSE (Root Squared Error) map** \n","\n","This is a display of the root of the squared difference between the normalized predicted and target or the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).\n","\n","\n","**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.\n","\n","**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.\n","\n","**The output below shows the RSE maps with the NRMSE and PSNR values.**\n"]},{"cell_type":"code","metadata":{"id":"nAs4Wni7VYbq","cellView":"form"},"source":["#@markdown ##Choose the folders that contain your Quality Control dataset\n","\n","Source_QC_folder = \"\" #@param{type:\"string\"}\n","Target_QC_folder = \"\" #@param{type:\"string\"}\n","\n","# Create a quality control/Prediction Folder\n","if os.path.exists(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\"):\n"," shutil.rmtree(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n","\n","os.makedirs(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n","\n","# Activate the pretrained model. 
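The metrics above can be reproduced on a toy image pair with the same scikit-image calls used in this QC section; a minimal sketch with synthetic data (the arrays are made up, only the function calls mirror the notebook):

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr

rng = np.random.default_rng(0)
gt = rng.random((64, 64)).astype(np.float32)     # synthetic "target", already in [0, 1]
pred = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1).astype(np.float32)

# mSSIM plus the per-pixel SSIM map (same parameters as the QC cell)
mssim, ssim_map = structural_similarity(
    gt, pred, data_range=1., full=True,
    gaussian_weights=True, use_sample_covariance=False, sigma=1.5)

rse_map = np.abs(gt - pred)              # root of the squared error, pixel-wise
nrmse = np.sqrt(np.mean(rse_map))        # NRMSE as defined in this notebook
print(mssim, nrmse, psnr(gt, pred, data_range=1.0))
```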
\n","model_training = N2V(config=None, name=QC_model_name, basedir=QC_model_path)\n","\n","\n","# List Tif images in Source_QC_folder\n","Source_QC_folder_tif = Source_QC_folder+\"/*.tif\"\n","Z = sorted(glob(Source_QC_folder_tif))\n","Z = list(map(imread,Z))\n","\n","print('Number of test dataset found in the folder: '+str(len(Z)))\n","\n","\n","# Perform prediction on all datasets in the Source_QC folder\n","for filename in os.listdir(Source_QC_folder):\n"," img = imread(os.path.join(Source_QC_folder, filename))\n"," predicted = model_training.predict(img, axes='YX', n_tiles=(2,1))\n"," os.chdir(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n"," imsave(filename, predicted)\n","\n","def ssim(img1, img2):\n"," return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)\n","\n","\n","def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):\n"," \"\"\"This function is adapted from Martin Weigert\"\"\"\n"," \"\"\"Percentile-based image normalization.\"\"\"\n","\n"," mi = np.percentile(x,pmin,axis=axis,keepdims=True)\n"," ma = np.percentile(x,pmax,axis=axis,keepdims=True)\n"," return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)\n","\n","\n","def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32\n"," \"\"\"This function is adapted from Martin Weigert\"\"\"\n"," if dtype is not None:\n"," x = x.astype(dtype,copy=False)\n"," mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)\n"," ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)\n"," eps = dtype(eps)\n","\n"," try:\n"," import numexpr\n"," x = numexpr.evaluate(\"(x - mi) / ( ma - mi + eps )\")\n"," except ImportError:\n"," x = (x - mi) / ( ma - mi + eps )\n","\n"," if clip:\n"," x = np.clip(x,0,1)\n","\n"," return x\n","\n","def norm_minmse(gt, x, normalize_gt=True):\n"," \"\"\"This function is adapted from Martin Weigert\"\"\"\n","\n"," \"\"\"\n"," normalizes and affinely scales an image pair such that the MSE is minimized \n"," \n"," Parameters\n"," ----------\n"," gt: ndarray\n"," the ground truth image \n"," x: ndarray\n"," the image that will be affinely scaled \n"," normalize_gt: bool\n"," set to True of gt image should be normalized (default)\n"," Returns\n"," -------\n"," gt_scaled, x_scaled \n"," \"\"\"\n"," if normalize_gt:\n"," gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)\n"," x = x.astype(np.float32, copy=False) - np.mean(x)\n"," #x = x - np.mean(x)\n"," gt = gt.astype(np.float32, copy=False) - np.mean(gt)\n"," #gt = gt - np.mean(gt)\n"," scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())\n"," return gt, scale * x\n","\n","# Open and create the csv file that will contain all the QC metrics\n","with open(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/QC_metrics_\"+QC_model_name+\".csv\", \"w\", newline='') as file:\n"," writer = csv.writer(file)\n","\n"," # Write the header in the csv file\n"," writer.writerow([\"image #\",\"Prediction v. GT mSSIM\",\"Input v. GT mSSIM\", \"Prediction v. GT NRMSE\", \"Input v. GT NRMSE\", \"Prediction v. GT PSNR\", \"Input v. 
GT PSNR\"]) \n","\n"," # Let's loop through the provided dataset in the QC folders\n","\n","\n"," for i in os.listdir(Source_QC_folder):\n"," if not os.path.isdir(os.path.join(Source_QC_folder,i)):\n"," print('Running QC on: '+i)\n"," # -------------------------------- Target test data (Ground truth) --------------------------------\n"," test_GT = io.imread(os.path.join(Target_QC_folder, i))\n","\n"," # -------------------------------- Source test data --------------------------------\n"," test_source = io.imread(os.path.join(Source_QC_folder,i))\n","\n"," # Normalize the images wrt each other by minimizing the MSE between GT and Source image\n"," test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)\n","\n"," # -------------------------------- Prediction --------------------------------\n"," test_prediction = io.imread(os.path.join(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\",i))\n","\n"," # Normalize the images wrt each other by minimizing the MSE between GT and prediction\n"," test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True) \n","\n","\n"," # -------------------------------- Calculate the metric maps and save them --------------------------------\n","\n"," # Calculate the SSIM maps\n"," index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)\n"," index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)\n","\n"," #Save ssim_maps\n"," img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)\n"," img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)\n"," \n"," # Calculate the Root Squared Error (RSE) maps\n"," img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))\n"," img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))\n","\n"," # Save SE maps\n"," img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)\n"," img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)\n","\n","\n"," # -------------------------------- Calculate the RSE metrics and save them --------------------------------\n","\n"," # Normalised Root Mean Squared Error (here it's valid to take the mean of the image)\n"," NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))\n"," NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))\n"," \n"," # We can also measure the peak signal to noise ratio between the images\n"," PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)\n"," PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)\n","\n"," writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])\n","\n","\n","# All data is now processed saved\n","Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same\n","\n","plt.figure(figsize=(15,15))\n","# Currently only displays the last computed set, from memory\n","# Target 
(Ground-truth)\n","plt.subplot(3,3,1)\n","plt.axis('off')\n","img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))\n","plt.imshow(img_GT)\n","plt.title('Target',fontsize=15)\n","\n","# Source\n","plt.subplot(3,3,2)\n","plt.axis('off')\n","img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))\n","plt.imshow(img_Source)\n","plt.title('Source',fontsize=15)\n","\n","#Prediction\n","plt.subplot(3,3,3)\n","plt.axis('off')\n","img_Prediction = io.imread(os.path.join(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction/\", Test_FileList[-1]))\n","plt.imshow(img_Prediction)\n","plt.title('Prediction',fontsize=15)\n","\n","#Setting up colours\n","cmap = plt.cm.CMRmap\n","\n","# Helper that hides the ticks and tick labels of the metric-map subplots\n","def hide_ticks():\n"," plt.tick_params(axis='both', which='both', bottom=False, top=False,\n"," left=False, right=False, labelbottom=False, labelleft=False)\n","\n","#SSIM between GT and Source\n","plt.subplot(3,3,5)\n","hide_ticks()\n","imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)\n","plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)\n","plt.title('Target vs. Source',fontsize=15)\n","plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)\n","plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)\n","\n","#SSIM between GT and Prediction\n","plt.subplot(3,3,6)\n","hide_ticks()\n","imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)\n","plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)\n","plt.title('Target vs. Prediction',fontsize=15)\n","plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)\n","\n","#Root Squared Error between GT and Source\n","plt.subplot(3,3,8)\n","hide_ticks()\n","imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)\n","plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)\n","plt.title('Target vs. Source',fontsize=15)\n","plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)\n","
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)\n","\n","#Root Squared Error between GT and Prediction\n","plt.subplot(3,3,9)\n","hide_ticks()\n","imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)\n","plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)\n","plt.title('Target vs. Prediction',fontsize=15)\n","plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)\n","plt.savefig(full_QC_model_path+'/Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)\n","\n","qc_pdf_export()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"69aJVFfsqXbY"},"source":["# **6. Using the trained model**\n","\n","---\n","\n","In this section, unseen data is processed using the trained model (from section 4). First, your unseen images are uploaded and prepared for prediction. After that, the trained model is used to process the images and the results are saved into your Google Drive."]},{"cell_type":"markdown","metadata":{"id":"tcPNRq1TrMPB"},"source":["## **6.1. Generate prediction(s) from unseen dataset**\n","---\n","\n","The current trained model (from section 4.2) can now be used to process images. If an older model needs to be used, please untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. 
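In practice, pointing the notebook at an older model simply means telling N2V which folder the model lives in, which is what the cell below automates. A minimal sketch, where the model name and basedir are hypothetical placeholders:

```python
# Minimal sketch: load a previously trained N2V model from its folder.
# 'my_model' and the basedir path are hypothetical placeholders.
from n2v.models import N2V

model = N2V(config=None, name='my_model',
            basedir='/content/gdrive/My Drive/models')
```

Passing `config=None` makes N2V read the configuration saved alongside the trained weights instead of building a new one.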
Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).\n","\n","**`Data_folder`:** This folder should contain the images that you want to process with the trained network.\n","\n","**`Result_folder`:** This folder will contain the predicted output images.\n","\n","**`Data_type`:** Please indicate if the images you want to process are single images or stacks"]},{"cell_type":"code","metadata":{"id":"Am2JSmpC0frj","cellView":"form"},"source":["Single_Images = 1\n","Stacks = 2\n","\n","#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved, then play the cell to predict output on your unseen images.\n","\n","#@markdown ###Path to data to analyse and where predicted output should be saved:\n","Data_folder = \"\" #@param {type:\"string\"}\n","Result_folder = \"\" #@param {type:\"string\"}\n","\n","#@markdown ###Are your data single images or stacks?\n","\n","Data_type = Single_Images #@param [\"Single_Images\", \"Stacks\"] {type:\"raw\"}\n","\n","# model name and path\n","#@markdown ###Do you want to use the current trained model?\n","Use_the_current_trained_model = True #@param {type:\"boolean\"}\n","\n","#@markdown ###If not, please provide the path to the model folder:\n","\n","Prediction_model_folder = \"\" #@param {type:\"string\"}\n","\n","#Here we find the loaded model name and parent path\n","Prediction_model_name = os.path.basename(Prediction_model_folder)\n","Prediction_model_path = os.path.dirname(Prediction_model_folder)\n","\n","if (Use_the_current_trained_model): \n"," print(\"Using current trained network\")\n"," Prediction_model_name = model_name\n"," Prediction_model_path = model_path\n","\n","full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'\n","if os.path.exists(full_Prediction_model_path):\n"," print(\"The \"+Prediction_model_name+\" network will be used.\")\n","else:\n"," print(bcolors.WARNING +'!! WARNING: The chosen model does not exist !!')\n"," print('Please make sure you provide a valid model path and model name before proceeding further.')\n","\n","\n","#Activate the trained model. 
\n","config = None\n","model = N2V(config, Prediction_model_name, basedir=Prediction_model_path)\n","\n","thisdir = Path(Data_folder)\n","outputdir = Path(Result_folder)\n","\n"," # r=root, d=directories, f = files\n","for r, d, f in os.walk(thisdir):\n"," for file in f:\n"," if \".tif\" in file:\n"," print(os.path.join(r, file))\n","\n","if Data_type == 1 :\n"," print(\"Single images are now beeing predicted\")\n","\n","# Loop through the files\n"," for r, d, f in os.walk(thisdir):\n"," for file in f:\n"," base_filename = os.path.basename(file)\n"," input_train = imread(os.path.join(r, file))\n"," pred_train = model.predict(input_train, axes='YX', n_tiles=(2,1))\n"," save_tiff_imagej_compatible(os.path.join(outputdir, base_filename), pred_train, axes='YX') \n","\n"," print(\"Images saved into folder:\", Result_folder)\n","\n","if Data_type == 2 :\n"," print(\"Stacks are now beeing predicted\")\n"," for r, d, f in os.walk(thisdir):\n"," for file in f:\n"," base_filename = os.path.basename(file)\n"," timelapse = imread(os.path.join(r, file))\n"," n_timepoint = timelapse.shape[0]\n"," prediction_stack = np.zeros((n_timepoint, timelapse.shape[1], timelapse.shape[2]))\n","\n"," for t in range(n_timepoint):\n"," img_t = timelapse[t]\n"," prediction_stack[t] = model.predict(img_t, axes='YX', n_tiles=(2,1))\n","\n"," prediction_stack_32 = img_as_float32(prediction_stack, force_copy=False)\n"," imsave(os.path.join(outputdir, base_filename), prediction_stack_32) \n"," \n"," \n","\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"67_8rEKp8C-z"},"source":["## **6.2. Assess predicted output**\n","---\n","\n","\n"]},{"cell_type":"code","metadata":{"cellView":"form","id":"n-stU-f08Cae"},"source":["# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.\n","\n","# This will display a randomly chosen dataset input and predicted output\n","\n","\n","random_choice = random.choice(os.listdir(Data_folder))\n","x = imread(Data_folder+\"/\"+random_choice)\n","\n","os.chdir(Result_folder)\n","y = imread(Result_folder+\"/\"+random_choice)\n","\n","if Data_type == 1 :\n","\n"," f=plt.figure(figsize=(16,8))\n"," plt.subplot(1,2,1)\n"," plt.imshow(x, interpolation='nearest')\n"," plt.title('Input')\n"," plt.axis('off');\n"," plt.subplot(1,2,2)\n"," plt.imshow(y, interpolation='nearest')\n"," plt.title('Predicted output')\n"," plt.axis('off');\n","\n","if Data_type == 2 :\n","\n"," f=plt.figure(figsize=(16,8))\n"," plt.subplot(1,2,1)\n"," plt.imshow(x[1], interpolation='nearest')\n"," plt.title('Input')\n"," plt.axis('off');\n"," plt.subplot(1,2,2)\n"," plt.imshow(y[1], interpolation='nearest')\n"," plt.title('Predicted output')\n"," plt.axis('off');\n","\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"hvkd66PldsXB"},"source":["## **6.3. Download your predictions**\n","---\n","\n","**Store your data** and ALL its results elsewhere by downloading it from Google Drive and after that clean the original folder tree (datasets, results, trained model etc.) if you plan to train or use new networks. 
Please note that the notebook will otherwise **OVERWRITE** all files which have the same name."]},{"cell_type":"markdown","metadata":{"id":"u4pcBe8Z3T2J"},"source":["#**Thank you for using Noise2Void 2D!**"]}]} \ No newline at end of file diff --git a/ColabNotebooks/Noise2Void_2D_ZeroCostDL4Mic.ipynb b/ColabNotebooks/Noise2Void_2D_ZeroCostDL4Mic.ipynb new file mode 100644 index 00000000..100d661c --- /dev/null +++ b/ColabNotebooks/Noise2Void_2D_ZeroCostDL4Mic.ipynb @@ -0,0 +1 @@ +{"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Noise2Void_2D_ZeroCostDL4Mic.ipynb","provenance":[{"file_id":"1hMjEc-Ex7j-jeYGclaPw2x3OgbkeC6Bl","timestamp":1610626439596},{"file_id":"1_W4q9V1ExGFldTUBvGK91E0LG5QMc7K6","timestamp":1602523405636},{"file_id":"1t9a-44km730bI7F4I08-6Xh7wEZuL98p","timestamp":1591013189418},{"file_id":"11TigzvLl4FSSwFHUNwLzZKI2IAix4Nmu","timestamp":1586415689249},{"file_id":"1_dSnxUg_qtNWjrPc7D6RWDWlCanEL4Ve","timestamp":1585153449937},{"file_id":"1bKo8jYVZPPgXPa_-Gdu1KhDnNN4vYfLx","timestamp":1583200150464}],"collapsed_sections":[],"toc_visible":true,"machine_shape":"hm"},"kernelspec":{"name":"python3","display_name":"Python 3.8.8 64-bit ('base': conda)"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.8.8"}},"cells":[{"cell_type":"markdown","metadata":{"id":"V9zNGvape2-I"},"source":["# **Noise2Void (2D)**\n","\n","---\n","\n"," Noise2Void is a deep-learning method that can be used to denoise many types of images, including microscopy images; it was originally published by [Krull *et al.* on arXiv](https://arxiv.org/abs/1811.10980). It allows denoising of image data in a self-supervised manner, so high-quality, low-noise equivalent images are not necessary to train this network. This is performed by \"masking\" a random subset of pixels in the noisy image and training the network to predict the values in these pixels. The resulting output is a denoised version of the image. Noise2Void is based on the popular U-Net network architecture, adapted from [CARE](https://www.nature.com/articles/s41592-018-0216-7).\n","\n"," **This particular notebook enables self-supervised denoising of 2D datasets. If you are interested in 3D datasets, you should use the Noise2Void 3D notebook instead.**\n","\n","---\n","\n","*Disclaimer*:\n","\n","This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). 
Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.\n","\n","This notebook is largely based on the following paper:\n","\n","**Noise2Void - Learning Denoising from Single Noisy Images**\n","from Krull *et al.* published on arXiv in 2018 (https://arxiv.org/abs/1811.10980)\n","\n","And source code found in: https://github.com/juglab/n2v\n","\n","**Please also cite this original paper when using or developing this notebook.**\n"]},{"cell_type":"markdown","metadata":{"id":"jWAz2i7RdxUV"},"source":["# **How to use this notebook?**\n","\n","---\n","\n","Videos describing how to use our notebooks are available on YouTube:\n"," - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run-through of the workflow to obtain the notebooks and the provided test datasets, as well as a common use of the notebook\n"," - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook\n","\n","\n","---\n","###**Structure of a notebook**\n","\n","The notebook contains two types of cell: \n","\n","**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.\n","\n","**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor to the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.\n","\n","---\n","###**Table of contents, Code snippets** and **Files**\n","\n","On the top left side of the notebook you find three tabs which contain, from top to bottom:\n","\n","*Table of contents* = contains the structure of the notebook. Click the content to move quickly between sections.\n","\n","*Code snippets* = contains examples of how to code certain tasks. You can ignore this when using this notebook.\n","\n","*Files* = contains all available files. After mounting your Google Drive (see section 1.) you will find your files and folders here. \n","\n","**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.\n","\n","**Note:** The \"sample data\" in \"Files\" contains default files. Do not upload anything in here!\n","\n","---\n","###**Making changes to the notebook**\n","\n","**You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.\n","\n","To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).\n","You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment."]},{"cell_type":"markdown","metadata":{"id":"vNMDQHm0Ah-Z"},"source":["# **0. Before getting started**\n","---\n","\n","Before you run the notebook, please ensure that you are logged into your Google account and have the training and/or data to process in your Google Drive.\n","\n","Noise2Void requires only a single noisy image to train, but multiple images can be used. 
Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\n","\n","Please note that you currently can **only use .tif files!**\n","\n","**We strongly recommend that you generate high signal-to-noise ratio versions of your noisy images (Quality control dataset). These images can be used to assess the quality of your trained model**. The quality control assessment can be done directly in this notebook.\n","\n"," You can also provide a folder that contains the data that you wish to analyse with the trained network once all training has been performed.\n","\n","Here is a common data structure that can work:\n","\n","* Data\n"," - **Training dataset**\n"," - **Quality control dataset** (Optional but recommended)\n"," - Low SNR images\n"," - img_1.tif, img_2.tif\n"," - High SNR images\n"," - img_1.tif, img_2.tif \n"," - **Data to be predicted** \n"," - Results\n","\n","\n","The **Results** folder will contain the processed images, trained model and network parameters as a csv file. Your original images remain unmodified.\n","\n","---\n","**Important note**\n","\n","- If you wish to **train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.\n","\n","- If you wish to **evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.\n","\n","- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.\n","---\n"]},{"cell_type":"markdown","metadata":{"id":"b4-r1gE7Iamv"},"source":["# **1. Initialise the Colab session**\n","---"]},{"cell_type":"markdown","metadata":{"id":"DMNHVZfHmbKb"},"source":["\n","## **1.1. Check for GPU access**\n","---\n","\n","By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:\n","\n","Go to **Runtime -> Change the Runtime type**\n","\n","**Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)*\n","\n","**Accelerator: GPU** *(Graphics processing unit)*\n"]},{"cell_type":"code","metadata":{"id":"BDhmUgqCStlm","cellView":"form"},"source":["#@markdown ##Run this cell to check if you have GPU access\n","# %tensorflow_version 1.x\n","\n","\n","import tensorflow as tf\n","if tf.test.gpu_device_name()=='':\n"," print('You do not have GPU access.') \n"," print('Did you change your runtime?') \n"," print('If the runtime setting is correct then Google did not allocate a GPU for your session')\n"," print('Expect slow performance. 
To access GPU try reconnecting later')\n","\n","else:\n"," print('You have GPU access')\n"," !nvidia-smi"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"-oqBTeLaImnU"},"source":["## **1.2. Mount your Google Drive**\n","---\n"," To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.\n","\n"," Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press enter. This will give Colab access to the data on the drive. \n","\n"," Once this is done, your data are available in the **Files** tab on the top left of the notebook."]},{"cell_type":"code","metadata":{"id":"01Djr8v-5pPk","cellView":"form"},"source":["#@markdown ##Play the cell to connect your Google Drive to Colab\n","\n","#@markdown * Click on the URL. \n","\n","#@markdown * Sign in to your Google Account. \n","\n","#@markdown * Copy the authorization code. \n","\n","#@markdown * Enter the authorization code. \n","\n","#@markdown * Click on the \"Files\" tab on the left. Refresh it. Your Google Drive folder should now be available here as \"drive\". \n","\n","# mount user's Google Drive to Google Colab.\n","from google.colab import drive\n","drive.mount('/content/gdrive')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"n4yWFoJNnoin"},"source":["# **2. 
Install Noise2Void and dependencies**\n","---"]},{"cell_type":"code","metadata":{"id":"3u2mXn3XsWzd","cellView":"form"},"source":["Notebook_version = ['1.12']\n","\n","\n","#@markdown ##Install Noise2Void and dependencies\n","\n","# Here we enable Tensorflow 1.\n","!pip install -q keras==2.2.5\n","\n","%tensorflow_version 1.x\n","import tensorflow\n","print(tensorflow.__version__)\n","print(\"Tensorflow enabled.\")\n","\n","\n","# Here we install Noise2Void and other required packages\n","!pip install n2v\n","!pip install wget\n","!pip install fpdf\n","!pip install memory_profiler\n","%load_ext memory_profiler\n","\n","print(\"Noise2Void installed.\")\n","\n","# Here we install all libraries and other dependencies to run the notebook.\n","\n","# ------- Variable specific to N2V -------\n","from n2v.models import N2VConfig, N2V\n","from csbdeep.utils import plot_history\n","from n2v.utils.n2v_utils import manipulate_val_data\n","from n2v.internals.N2V_DataGenerator import N2V_DataGenerator\n","from csbdeep.io import save_tiff_imagej_compatible\n","\n","# ------- Common variable to all ZeroCostDL4Mic notebooks -------\n","import numpy as np\n","from matplotlib import pyplot as plt\n","import urllib\n","import os, random\n","import shutil \n","import zipfile\n","from tifffile import imread, imsave\n","import time\n","import sys\n","import wget\n","from pathlib import Path\n","import pandas as pd\n","import csv\n","from glob import glob\n","from scipy import signal\n","from scipy import ndimage\n","from skimage import io\n","from sklearn.linear_model import LinearRegression\n","from skimage.util import img_as_uint\n","import matplotlib as mpl\n","from skimage.metrics import structural_similarity\n","from skimage.metrics import peak_signal_noise_ratio as psnr\n","from astropy.visualization import simple_norm\n","from skimage import img_as_float32\n","from fpdf import FPDF, HTMLMixin\n","from datetime import datetime\n","from pip._internal.operations.freeze import freeze\n","import subprocess\n","\n","# Colors for the warning messages\n","class bcolors:\n"," WARNING = '\\033[31m'\n","W = '\\033[0m' # white (normal)\n","R = '\\033[31m' # red\n","\n","#Disable some of the tensorflow warnings\n","import warnings\n","warnings.filterwarnings(\"ignore\")\n","\n","print(\"Libraries installed\")\n","\n","\n","# Check if this is the latest version of the notebook\n","Latest_notebook_version = pd.read_csv(\"https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv\")\n","print('Notebook version: '+Notebook_version[0])\n","strlist = Notebook_version[0].split('.')\n","Notebook_version_main = strlist[0]+'.'+strlist[1]\n","if Notebook_version_main in Latest_notebook_version.columns:\n"," print(\"This notebook is up-to-date.\")\n","else:\n"," print(bcolors.WARNING +\"A new version of this notebook has been released. 
We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\")\n","\n","def pdf_export(trained = False, augmentation = False, pretrained_model = False):\n"," class MyFPDF(FPDF, HTMLMixin):\n"," pass\n","\n"," pdf = MyFPDF()\n"," pdf.add_page()\n"," pdf.set_right_margin(-1)\n"," pdf.set_font(\"Arial\", size = 11, style='B') \n","\n"," Network = 'Noise2Void 2D'\n"," day = datetime.now()\n"," datetime_str = str(day)[0:10]\n","\n"," Header = 'Training report for '+Network+' model ('+model_name+')\\nDate: '+datetime_str\n"," pdf.multi_cell(180, 5, txt = Header, align = 'L') \n","\n"," # add another cell \n"," if trained:\n"," training_time = \"Training time: \"+str(hour)+ \"hour(s) \"+str(mins)+\"min(s) \"+str(round(sec))+\"sec(s)\"\n"," pdf.cell(190, 5, txt = training_time, ln = 1, align='L')\n"," pdf.ln(1)\n","\n"," Header_2 = 'Information for your materials and method:'\n"," pdf.cell(190, 5, txt=Header_2, ln=1, align='L')\n","\n"," all_packages = ''\n"," for requirement in freeze(local_only=True):\n"," all_packages = all_packages+requirement+', '\n"," #print(all_packages)\n","\n"," #Main Packages\n"," main_packages = ''\n"," version_numbers = []\n"," for name in ['tensorflow','numpy','Keras','csbdeep']:\n"," find_name=all_packages.find(name)\n"," main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '\n"," #Version numbers only here:\n"," version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])\n","\n"," cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)\n"," cuda_version = cuda_version.stdout.decode('utf-8')\n"," cuda_version = cuda_version[cuda_version.find(', V')+3:-1]\n"," gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)\n"," gpu_name = gpu_name.stdout.decode('utf-8')\n"," gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]\n"," #print(cuda_version[cuda_version.find(', V')+3:-1])\n"," #print(gpu_name)\n","\n"," shape = io.imread(Training_source+'/'+os.listdir(Training_source)[0]).shape\n"," dataset_size = len(os.listdir(Training_source))\n","\n"," text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(Xdata.shape[0])+' image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'\n","\n"," if pretrained_model:\n"," text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(Xdata.shape[0])+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). 
The training was accelerated using a '+gpu_name+'GPU.'\n","\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n"," pdf.multi_cell(190, 5, txt = text, align='L')\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.ln(1)\n"," pdf.cell(26, 5, txt='Augmentation: ', ln=0)\n"," pdf.set_font('')\n"," if augmentation:\n"," aug_text = 'The dataset was augmented by default.'\n"," else:\n"," aug_text = 'No augmentation was used for training.'\n"," pdf.multi_cell(190, 5, txt=aug_text, align='L')\n"," pdf.set_font('Arial', size = 11, style = 'B')\n"," pdf.ln(1)\n"," pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n"," if Use_Default_Advanced_Parameters:\n"," pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')\n"," pdf.cell(200, 5, txt='The following parameters were used for training:')\n"," pdf.ln(1)\n"," html = \"\"\" \n"," <table width=40% style=\"margin-left:0px;\">\n"," <tr>\n"," <th width = 50% align=\"left\">Parameter</th>\n"," <th width = 50% align=\"left\">Value</th>\n"," </tr>\n"," <tr>\n"," <td width = 50%>number_of_epochs</td>\n"," <td width = 50%>{0}</td>\n"," </tr>\n"," <tr>\n"," <td width = 50%>patch_size</td>\n"," <td width = 50%>{1}</td>\n"," </tr>\n"," <tr>\n"," <td width = 50%>batch_size</td>\n"," <td width = 50%>{2}</td>\n"," </tr>\n"," <tr>\n"," <td width = 50%>number_of_steps</td>\n"," <td width = 50%>{3}</td>\n"," </tr>\n"," <tr>\n"," <td width = 50%>percentage_validation</td>\n"," <td width = 50%>{4}</td>\n"," </tr>\n"," <tr>\n"," <td width = 50%>initial_learning_rate</td>\n"," <td width = 50%>{5}</td>\n"," </tr>\n"," </table>\n"," \"\"\".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),batch_size,number_of_steps,percentage_validation,initial_learning_rate)\n"," pdf.write_html(html)\n","\n"," #pdf.multi_cell(190, 5, txt = text_2, align='L')\n"," pdf.set_font(\"Arial\", size = 11, style='B')\n"," pdf.ln(1)\n"," pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.cell(28, 5, txt= 'Training_source:', align = 'L', ln=0)\n"," pdf.set_font('')\n"," pdf.multi_cell(170, 5, txt = Training_source, align = 'L')\n"," # pdf.set_font('')\n"," # pdf.set_font('Arial', size = 10, style = 'B')\n"," # pdf.cell(28, 5, txt= 'Training_target:', align = 'L', ln=0)\n"," # pdf.set_font('')\n"," # pdf.multi_cell(170, 5, txt = Training_target, align = 'L')\n"," #pdf.cell(190, 5, txt=aug_text, align='L', ln=1)\n"," pdf.ln(1)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)\n"," pdf.set_font('')\n"," pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')\n"," pdf.ln(1)\n"," pdf.cell(60, 5, txt = 'Example Training Image', ln=1)\n"," pdf.ln(1)\n"," exp_size = io.imread('/content/TrainingDataExample_N2V2D.png').shape\n"," pdf.image('/content/TrainingDataExample_N2V2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))\n"," pdf.ln(1)\n"," ref_1 = 'References:\\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. \"ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy.\" BioRxiv (2020).'\n"," pdf.multi_cell(190, 5, txt = ref_1, align='L')\n"," ref_2 = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. \"Noise2void-learning denoising from single noisy images.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
2019.'\n"," pdf.multi_cell(190, 5, txt = ref_2, align='L')\n"," pdf.ln(3)\n"," reminder = 'Important:\\nRemember to perform the quality control step on all newly trained models\\nPlease consider depositing your training dataset on Zenodo'\n"," pdf.set_font('Arial', size = 11, style='B')\n"," pdf.multi_cell(190, 5, txt=reminder, align='C')\n","\n"," pdf.output(model_path+'/'+model_name+'/'+model_name+\"_training_report.pdf\")\n","\n","\n","\n"," #Make a pdf summary of the QC results\n","\n","def qc_pdf_export():\n"," class MyFPDF(FPDF, HTMLMixin):\n"," pass\n","\n"," pdf = MyFPDF()\n"," pdf.add_page()\n"," pdf.set_right_margin(-1)\n"," pdf.set_font(\"Arial\", size = 11, style='B') \n","\n"," Network = 'Noise2Void 2D'\n","\n"," day = datetime.now()\n"," datetime_str = str(day)[0:10]\n","\n"," Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\\nDate: '+datetime_str\n"," pdf.multi_cell(180, 5, txt = Header, align = 'L') \n","\n"," all_packages = ''\n"," for requirement in freeze(local_only=True):\n"," all_packages = all_packages+requirement+', '\n","\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 11, style = 'B')\n"," pdf.ln(2)\n"," pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')\n"," pdf.ln(1)\n"," exp_size = io.imread(full_QC_model_path+'/Quality Control/lossCurvePlots.png').shape\n"," if os.path.exists(full_QC_model_path+'/Quality Control/lossCurvePlots.png'):\n"," pdf.image(full_QC_model_path+'/Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))\n"," else:\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size=10)\n"," pdf.cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.')\n"," pdf.ln(2)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 10, style = 'B')\n"," pdf.ln(3)\n"," pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)\n"," pdf.ln(1)\n"," exp_size = io.imread(full_QC_model_path+'/Quality Control/QC_example_data.png').shape\n"," pdf.image(full_QC_model_path+'/Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))\n"," pdf.ln(1)\n"," pdf.set_font('')\n"," pdf.set_font('Arial', size = 11, style = 'B')\n"," pdf.ln(1)\n"," pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n","\n"," pdf.ln(1)\n"," html = \"\"\"\n"," <body>\n"," <font size=\"10\" face=\"Courier New\" >\n"," <table width=94% style=\"margin-left:0px;\">\"\"\"\n"," with open(full_QC_model_path+'/Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:\n"," metrics = csv.reader(csvfile)\n"," header = next(metrics)\n"," image = header[0]\n"," mSSIM_PvsGT = header[1]\n"," mSSIM_SvsGT = header[2]\n"," NRMSE_PvsGT = header[3]\n"," NRMSE_SvsGT = header[4]\n"," PSNR_PvsGT = header[5]\n"," PSNR_SvsGT = header[6]\n"," header = \"\"\"\n"," <tr>\n"," <th width = 10% align=\"left\">{0}</th>\n"," <th width = 15% align=\"left\">{1}</th>\n"," <th width = 15% align=\"left\">{2}</th>\n"," <th width = 15% align=\"left\">{3}</th>\n"," <th width = 15% align=\"left\">{4}</th>\n"," <th width = 15% align=\"left\">{5}</th>\n"," <th width = 15% align=\"left\">{6}</th>\n"," </tr>\"\"\".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)\n"," html = html+header\n"," for row in metrics:\n"," image = row[0]\n"," mSSIM_PvsGT = row[1]\n"," mSSIM_SvsGT = row[2]\n"," NRMSE_PvsGT = row[3]\n"," NRMSE_SvsGT = row[4]\n"," PSNR_PvsGT = row[5]\n"," PSNR_SvsGT = row[6]\n"," cells = \"\"\"\n"," <tr>\n"," <td width = 10% align=\"left\">{0}</td>\n"," <td width = 15% align=\"left\">{1}</td>\n"," <td width = 15% align=\"left\">{2}</td>\n"," <td width = 15% align=\"left\">{3}</td>\n"," <td width = 15% align=\"left\">{4}</td>\n"," <td width = 15% align=\"left\">{5}</td>\n"," <td width = 15% align=\"left\">{6}</td>\n"," </tr>\"\"\".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))\n"," html = html+cells\n"," html = html+\"\"\"</table></font></body>\"\"\"\n","\n"," pdf.write_html(html)\n","\n"," pdf.ln(1)\n"," pdf.set_font('')\n"," pdf.set_font_size(10.)\n"," ref_1 = 'References:\\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. \"ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy.\" BioRxiv (2020).'\n"," pdf.multi_cell(190, 5, txt = ref_1, align='L')\n"," ref_2 = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. \"Noise2void-learning denoising from single noisy images.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.'\n"," pdf.multi_cell(190, 5, txt = ref_2, align='L')\n"," pdf.ln(3)\n"," reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'\n","\n"," pdf.set_font('Arial', size = 11, style='B')\n"," pdf.multi_cell(190, 5, txt=reminder, align='C')\n","\n"," pdf.output(full_QC_model_path+'/Quality Control/'+QC_model_name+'_QC_report.pdf')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Fw0kkTU6CsU4"},"source":["# **3. Select your parameters and paths**\n","\n","---\n"]},{"cell_type":"markdown","metadata":{"id":"WzYAA-MuaYrT"},"source":["## **3.1. Setting main training parameters**\n","---\n","\n","\n"]},{"cell_type":"markdown","metadata":{"id":"CB6acvUFtWqd"},"source":[" **Paths for training, predictions and results**\n","\n","**`Training_source`:** This is the path to the folder containing your Training_source (noisy images). To find the path of the folder containing your datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.\n","\n","**`model_name`:** Use only my_model -style, not my-model (Use \"_\" not \"-\"). Do not use spaces in the name. Do not re-use the name of an existing model (saved in the same folder), otherwise it will be overwritten.\n","\n","**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).\n","\n","\n","**Training Parameters**\n","\n","**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-200 epochs. Evaluate the performance after training (see section 5). **Default value: 100**\n"," \n","**`patch_size`:** Noise2Void divides the image into patches for training. Input the size of the patches (length of a side). The value should be between 64 and the dimensions of the image and divisible by 8. **Default value: 64**\n","\n","**Advanced Parameters - experienced users only**\n","\n","**`batch_size`:** This parameter defines the number of patches seen in each training step. Noise2Void requires a large batch size for stable training. Reduce this parameter if your GPU runs out of memory. **Default value: 128**\n","\n","**`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each patch is seen at least once per epoch. **Default value: Number of patches / batch_size** (e.g. 1,000 patches with a batch size of 128 give 8 steps per epoch)\n","\n","**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during the training. **Default value: 10**\n","\n","**`initial_learning_rate`:** Input the initial value to be used as learning rate. 
**Default value: 0.0004**\n"]},{"cell_type":"code","metadata":{"id":"ewpNJ_I0Mv47","cellView":"form"},"source":["# create DataGenerator-object.\n","\n","datagen = N2V_DataGenerator()\n","\n","#@markdown ###Path to training image(s): \n","Training_source = \"\" #@param {type:\"string\"}\n","\n","#compatibility to easily change the name of the parameters\n","training_images = Training_source \n","imgs = datagen.load_imgs_from_directory(directory = Training_source)\n","\n","#@markdown ### Model name and path:\n","model_name = \"\" #@param {type:\"string\"}\n","model_path = \"\" #@param {type:\"string\"}\n","\n","full_model_path = model_path+'/'+model_name+'/'\n","\n","#@markdown ###Training Parameters\n","#@markdown Number of epochs:\n","number_of_epochs = 100#@param {type:\"number\"}\n","\n","#@markdown Patch size (pixels)\n","patch_size = 64#@param {type:\"number\"}\n","\n","#@markdown ###Advanced Parameters\n","\n","Use_Default_Advanced_Parameters = True#@param {type:\"boolean\"}\n","\n","#@markdown ###If not, please input:\n","batch_size = 128#@param {type:\"number\"}\n","number_of_steps = 100#@param {type:\"number\"}\n","percentage_validation = 10#@param {type:\"number\"}\n","initial_learning_rate = 0.0004 #@param {type:\"number\"}\n","\n","\n","if (Use_Default_Advanced_Parameters): \n"," print(\"Default advanced parameters enabled\")\n"," # number_of_steps is defined in the following cell in this case\n"," batch_size = 128\n"," percentage_validation = 10\n"," initial_learning_rate = 0.0004\n"," \n","\n","# Here we check that no model with the same name already exists; if so, print a warning\n","\n","if os.path.exists(model_path+'/'+model_name):\n"," print(bcolors.WARNING +\"!! WARNING: \"+model_name+\" already exists and will be deleted in the following cell !!\")\n"," print(bcolors.WARNING +\"To continue training \"+model_name+\", choose a new model_name here, and load \"+model_name+\" in section 3.3\"+W)\n"," \n","\n","# This will open a randomly chosen dataset input image\n","random_choice = random.choice(os.listdir(Training_source))\n","x = imread(Training_source+\"/\"+random_choice)\n","\n","# Here we check that the input images contain the expected dimensions\n","if len(x.shape) == 2:\n"," print(\"Image dimensions (y,x)\",x.shape)\n","\n","if not len(x.shape) == 2:\n"," print(bcolors.WARNING +\"Your images appear to have the wrong dimensions. 
Image dimension\",x.shape)\n","\n","\n","#Find image XY dimension\n","Image_Y = x.shape[0]\n","Image_X = x.shape[1]\n","\n","#Hyperparameters failsafes\n","\n","# Here we check that patch_size is smaller than the smallest xy dimension of the image \n","if patch_size > min(Image_Y, Image_X):\n"," patch_size = min(Image_Y, Image_X)\n"," print (bcolors.WARNING + \" Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:\",patch_size)\n","\n","# Here we check that patch_size is divisible by 8\n","if not patch_size % 8 == 0:\n"," patch_size = ((int(patch_size / 8)-1) * 8)\n"," print (bcolors.WARNING + \" Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:\",patch_size)\n","\n","# Here we disable pre-trained model by default (in case the next cell is not run)\n","Use_pretrained_model = False\n","\n","# Here we enable data augmentation by default (in case the cell is not ran)\n","Use_Data_augmentation = True\n","\n","print(\"Parameters initiated.\")\n","\n","#Here we display one image\n","norm = simple_norm(x, percent = 99)\n","\n","f=plt.figure(figsize=(16,8))\n","plt.subplot(1,2,1)\n","plt.imshow(x, interpolation='nearest', norm=norm, cmap='magma')\n","plt.title('Training source')\n","plt.axis('off');\n","plt.savefig('/content/TrainingDataExample_N2V2D.png',bbox_inches='tight',pad_inches=0)\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"xGcl7WGP4WHt"},"source":["## **3.2. Data augmentation**\n","---"]},{"cell_type":"markdown","metadata":{"id":"5Lio8hpZ4PJ1"},"source":["Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting), without augmentation. Augmentation is not necessary for training and if your training dataset is large you should disable it.\n","\n","Data augmentation is performed here by rotating the patches in XY-Plane and flip them along X-Axis. This only works if the patches are square in XY.\n","\n"," **By default data augmentation is enabled. Disable this option is you run out of RAM during the training**.\n"," "]},{"cell_type":"code","metadata":{"id":"htqjkJWt5J_8","cellView":"form"},"source":["#Data augmentation\n","\n","#@markdown ##Play this cell to enable or disable data augmentation: \n","\n","Use_Data_augmentation = True #@param {type:\"boolean\"}\n","\n","if Use_Data_augmentation:\n"," print(\"Data augmentation enabled\")\n","\n","if not Use_Data_augmentation:\n"," print(\"Data augmentation disabled\")"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"bQDuybvyadKU"},"source":["\n","## **3.3. Using weights from a pre-trained model as initial weights**\n","---\n"," Here, you can set the the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a N2V 2D model**. \n","\n"," This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.\n","\n"," In order to continue training from the point where the pre-trained model left off, it is adviseable to also **load the learning rate** that was used when the training ended. 
The learning rate is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used. "]},{"cell_type":"code","metadata":{"id":"8vPkzEBNamE4","cellView":"form"},"source":["# @markdown ##Loading weights from a pre-trained network\n","\n","Use_pretrained_model = False #@param {type:\"boolean\"}\n","\n","pretrained_model_choice = \"Model_from_file\" #@param [\"Model_from_file\"]\n","\n","Weights_choice = \"best\" #@param [\"last\", \"best\"]\n","\n","\n","#@markdown ###If you chose \"Model_from_file\", please provide the path to the model folder:\n","pretrained_model_path = \"\" #@param {type:\"string\"}\n","\n","# --------------------- Check if we load a previously trained model ------------------------\n","if Use_pretrained_model:\n","\n","# --------------------- Load the model from the chosen path ------------------------\n"," if pretrained_model_choice == \"Model_from_file\":\n"," h5_file_path = os.path.join(pretrained_model_path, \"weights_\"+Weights_choice+\".h5\")\n","\n","\n","# --------------------- Download a model provided in the XXX ------------------------\n","\n"," if pretrained_model_choice == \"Model_name\":\n"," pretrained_model_name = \"Model_name\"\n"," pretrained_model_path = \"/content/\"+pretrained_model_name\n"," print(\"Downloading the 2D_Demo_Model_from_Stardist_2D_paper\")\n"," if os.path.exists(pretrained_model_path):\n"," shutil.rmtree(pretrained_model_path)\n"," os.makedirs(pretrained_model_path)\n"," wget.download(\"\", pretrained_model_path)\n"," wget.download(\"\", pretrained_model_path)\n"," wget.download(\"\", pretrained_model_path) \n"," wget.download(\"\", pretrained_model_path)\n"," h5_file_path = os.path.join(pretrained_model_path, \"weights_\"+Weights_choice+\".h5\")\n","\n","# --------------------- Add additional pre-trained models here ------------------------\n","\n","\n","\n","# --------------------- Check that the model exists ------------------------\n","# If the model path chosen does not contain a pretrained model then use_pretrained_model is disabled, \n"," if not os.path.exists(h5_file_path):\n"," print(bcolors.WARNING+'WARNING: weights_last.h5 pretrained model does not exist')\n"," Use_pretrained_model = False\n","\n"," \n","# If the model path contains a pretrained model, we load the learning rate, \n"," if os.path.exists(h5_file_path):\n","#Here we check if the learning rate can be loaded from the quality control folder\n"," if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):\n","\n"," with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:\n"," csvRead = pd.read_csv(csvfile, sep=',')\n"," #print(csvRead)\n"," \n"," if \"learning rate\" in csvRead.columns: #Here we check that the learning rate column exists (compatibility with models trained on ZeroCostDL4Mic below 1.4)\n"," print(\"pretrained network learning rate found\")\n"," #find the last learning rate\n"," lastLearningRate = csvRead[\"learning rate\"].iloc[-1]\n"," #Find the learning rate corresponding to the lowest validation loss\n"," min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]\n"," #print(min_val_loss)\n"," bestLearningRate = min_val_loss['learning rate'].iloc[-1]\n","\n"," if Weights_choice == \"last\":\n"," print('Last learning rate: '+str(lastLearningRate))\n","\n"," if Weights_choice == \"best\":\n"," print('Learning rate of best validation loss: 
'+str(bestLearningRate))\n","\n"," if \"learning rate\" not in csvRead.columns: #if the column does not exist, then the initial learning rate is used instead\n"," bestLearningRate = initial_learning_rate\n"," lastLearningRate = initial_learning_rate\n"," print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead' + W)\n","\n","#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used\n"," if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):\n"," print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+ W)\n"," bestLearningRate = initial_learning_rate\n"," lastLearningRate = initial_learning_rate\n","\n","\n","# Display info about the pretrained model to be loaded (or not)\n","if Use_pretrained_model:\n"," print('Weights found in:')\n"," print(h5_file_path)\n"," print('will be loaded prior to training.')\n","\n","else:\n"," print(bcolors.WARNING+'No pretrained network will be used.')\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"rQndJj70FzfL"},"source":["# **4. Train the network**\n","---"]},{"cell_type":"markdown","metadata":{"id":"tGW2iaU6X5zi"},"source":["## **4.1. Prepare the training data and model for training**\n","---\n","Here, we use the information from section 3 to build the model and convert the training data into a suitable format for training."]},{"cell_type":"code","metadata":{"id":"WMJnGJpCMa4y","cellView":"form"},"source":["#@markdown ##Create the model and dataset objects\n","\n","# --------------------- Here we delete the model folder if it already exists ------------------------\n","\n","if os.path.exists(model_path+'/'+model_name):\n"," print(bcolors.WARNING +\"!! 
WARNING: Model folder already exists and has been removed !!\" + W)\n"," shutil.rmtree(model_path+'/'+model_name)\n","\n","\n","# split patches from the training images\n","Xdata = datagen.generate_patches_from_list(imgs, shape=(patch_size,patch_size), augment=Use_Data_augmentation)\n","shape_of_Xdata = Xdata.shape\n","# create a threshold (10 % of patches for the validation)\n","threshold = int(shape_of_Xdata[0]*(percentage_validation/100))\n","# split the patches into training patches and validation patches\n","X = Xdata[threshold:]\n","X_val = Xdata[:threshold]\n","print(Xdata.shape[0],\"patches created.\")\n","print(threshold,\"patch images for validation (\",percentage_validation,\"%).\")\n","print(Xdata.shape[0]-threshold,\"patch images for training.\")\n","%memit\n","\n","#Here we automatically define number_of_steps as a function of the amount of training data and the batch size\n","if (Use_Default_Advanced_Parameters): \n"," number_of_steps= int(X.shape[0]/batch_size)+1\n","\n","\n","# --------------------- Using pretrained model ------------------------\n","#Here we ensure that the learning rate is set correctly when using pre-trained models\n","if Use_pretrained_model:\n"," if Weights_choice == \"last\":\n"," initial_learning_rate = lastLearningRate\n","\n"," if Weights_choice == \"best\": \n"," initial_learning_rate = bestLearningRate\n","# --------------------- ---------------------- ------------------------\n","\n","# create a Config object\n","config = N2VConfig(X, unet_kern_size=3, \n"," train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, \n"," train_loss='mse', batch_norm=True, train_batch_size=batch_size, n2v_perc_pix=0.198, \n"," n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, train_learning_rate = initial_learning_rate)\n","\n","# Let's look at the parameters stored in the config-object.\n","vars(config)\n"," \n"," \n","# create network model.\n","model = N2V(config=config, name=model_name, basedir=model_path)\n","\n","# --------------------- Using pretrained model ------------------------\n","# Load the pretrained weights \n","if Use_pretrained_model:\n"," model.load_weights(h5_file_path)\n","# --------------------- ---------------------- ------------------------\n","\n","\n","print(\"Setup done.\")\n","print(config)\n","\n","\n","# creates a plot and shows one training patch and one validation patch.\n","plt.figure(figsize=(16,8))\n","plt.subplot(1,2,1)\n","plt.imshow(X[0,...,0], cmap='magma')\n","plt.axis('off')\n","plt.title('Training Patch');\n","plt.subplot(1,2,2)\n","plt.imshow(X_val[0,...,0], cmap='magma')\n","plt.axis('off')\n","plt.title('Validation Patch');\n","\n","pdf_export(pretrained_model = Use_pretrained_model)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"wQPz0F6JlvJR"},"source":["## **4.2. Start Training**\n","---\n","When playing the cell below you should see updates after each epoch (round). Network training can take some time.\n","\n","* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for data mining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches. Another way to circumvent this is to save the parameters of the model after training and start training again from this point.\n","\n","Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. 
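Resuming an interrupted run in a later session is essentially what section 3.3 automates; a minimal sketch, with hypothetical folder names, assuming `model`, `X` and `X_val` have been rebuilt as in section 4.1:

```python
# Sketch: resume training from the weights saved by a previous session.
# The model folder path is a hypothetical placeholder.
import os

h5_file_path = os.path.join('/content/gdrive/My Drive/models/my_model',
                            'weights_last.h5')
model.load_weights(h5_file_path)   # 'model' built as in section 4.1
history = model.train(X, X_val)    # continue training from the loaded weights
```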
It is however wise to download the folder from Google Drive, as all data can be erased at the next training if using the same folder.\n","\n","**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSB Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of tensorflow. You can check at: Edit -> Options -> Tensorflow. Choose the version 1.4 (CPU or GPU depending on your system).\n"]},{"cell_type":"code","metadata":{"id":"j_Qm5JBmlvJg","cellView":"form"},"source":["start = time.time()\n","\n","#@markdown ##Start training\n","%memit\n","\n","history = model.train(X, X_val)\n","print(\"Training done.\")\n","%memit\n","\n","\n","# convert the history.history dict to a pandas DataFrame: \n","lossData = pd.DataFrame(history.history) \n","\n","if os.path.exists(model_path+\"/\"+model_name+\"/Quality Control\"):\n"," shutil.rmtree(model_path+\"/\"+model_name+\"/Quality Control\")\n","\n","os.makedirs(model_path+\"/\"+model_name+\"/Quality Control\")\n","\n","# The training evaluation.csv is saved (overwrites the files if needed). \n","lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'\n","with open(lossDataCSVpath, 'w') as f:\n"," writer = csv.writer(f)\n"," writer.writerow(['loss','val_loss', 'learning rate'])\n"," for i in range(len(history.history['loss'])):\n"," writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])\n","\n","\n","# Displaying the time elapsed for training\n","dt = time.time() - start\n","mins, sec = divmod(dt, 60) \n","hour, mins = divmod(mins, 60) \n","print(\"Time elapsed:\",hour, \"hour(s)\",mins,\"min(s)\",round(sec),\"sec(s)\")\n","\n","model.export_TF(name='Noise2Void', \n"," description='Noise2Void 2D trained using ZeroCostDL4Mic.', \n"," authors=[\"You\"],\n"," test_img=X_val[0,...,0], axes='YX',\n"," patch_shape=(patch_size, patch_size))\n","\n","print(\"Your model has been successfully exported and can now also be used in the CSBdeep Fiji plugin\")\n","\n","pdf_export(trained = True, pretrained_model = Use_pretrained_model)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"QYuIOWQ3imuU"},"source":["# **5. Evaluate your model**\n","---\n","\n","This section allows the user to perform important quality checks on the validity and generalisability of the trained model. \n","\n","**We highly recommend performing quality control on all newly trained models.**\n","\n"]},{"cell_type":"code","metadata":{"id":"zazOZ3wDx0zQ","cellView":"form"},"source":["# model name and path\n","#@markdown ###Do you want to assess the model you just trained?\n","Use_the_current_trained_model = True #@param {type:\"boolean\"}\n","\n","#@markdown ###If not, please provide the path to the model folder:\n","\n","QC_model_folder = \"\" #@param {type:\"string\"}\n","\n","#Here we define the loaded model name and path\n","QC_model_name = os.path.basename(QC_model_folder)\n","QC_model_path = os.path.dirname(QC_model_folder)\n","\n","if (Use_the_current_trained_model): \n"," QC_model_name = model_name\n"," QC_model_path = model_path\n","\n","full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'\n","if os.path.exists(full_QC_model_path):\n"," print(\"The \"+QC_model_name+\" network will be evaluated\")\n","else:\n"," \n"," print(bcolors.WARNING + '!! 
WARNING: The chosen model does not exist !!')\n"," print('Please make sure you provide a valid model path and model name before proceeding further.')\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yDY9dtzdUTLh"},"source":["## **5.1. Inspection of the loss function**\n","---\n","\n","It is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*\n","\n","**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.\n","\n","**Validation loss** describes the same error value, measured between the model's prediction on a validation image and its target.\n","\n","During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.\n","\n","Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact noise patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased."]},{"cell_type":"code","metadata":{"id":"vMzSP50kMv5p","cellView":"form"},"source":["#@markdown ##Play the cell to show a plot of training errors vs. epoch number\n","\n","lossDataFromCSV = []\n","vallossDataFromCSV = []\n","\n","with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:\n"," csvRead = csv.reader(csvfile, delimiter=',')\n"," next(csvRead)\n"," for row in csvRead:\n"," lossDataFromCSV.append(float(row[0]))\n"," vallossDataFromCSV.append(float(row[1]))\n","\n","epochNumber = range(len(lossDataFromCSV))\n","plt.figure(figsize=(15,10))\n","\n","plt.subplot(2,1,1)\n","plt.plot(epochNumber,lossDataFromCSV, label='Training loss')\n","plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')\n","plt.title('Training loss and validation loss vs. epoch number (linear scale)')\n","plt.ylabel('Loss')\n","plt.xlabel('Epoch number')\n","plt.legend()\n","\n","plt.subplot(2,1,2)\n","plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')\n","plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')\n","plt.title('Training loss and validation loss vs. epoch number (log scale)')\n","plt.ylabel('Loss')\n","plt.xlabel('Epoch number')\n","plt.legend()\n","plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png')\n","plt.show()\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"biT9FI9Ri77_"},"source":["## **5.2. 
Error mapping and quality metrics estimation**\n","---\n","\n","This section will display SSIM maps and RSE maps, as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the \"Source_QC_folder\" and \"Target_QC_folder\"!\n","\n","**1. The SSIM (structural similarity) map** \n","\n","The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels with a Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). \n","\n","**mSSIM** is the mean SSIM value calculated across the entire image.\n","\n","**The output below shows the SSIM maps with the mSSIM**\n","\n","**2. The RSE (Root Squared Error) map** \n","\n","This is a display of the root of the squared difference between the normalized prediction and the target, or between the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).\n","\n","\n","**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.\n","\n","**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.\n","\n","**The output below shows the RSE maps with the NRMSE and PSNR values.**\n"]},{"cell_type":"code","metadata":{"id":"nAs4Wni7VYbq","cellView":"form"},"source":["#@markdown ##Choose the folders that contain your Quality Control dataset\n","\n","Source_QC_folder = \"\" #@param{type:\"string\"}\n","Target_QC_folder = \"\" #@param{type:\"string\"}\n","\n","# Create a quality control/Prediction Folder\n","if os.path.exists(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\"):\n"," shutil.rmtree(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n","\n","os.makedirs(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n","\n","# Activate the pretrained model. 
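\n","# Passing config=None makes N2V reload the configuration and trained weights stored under basedir, so the saved QC model is used as-is rather than retrained.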
\n","model_training = N2V(config=None, name=QC_model_name, basedir=QC_model_path)\n","\n","\n","# List Tif images in Source_QC_folder\n","Source_QC_folder_tif = Source_QC_folder+\"/*.tif\"\n","Z = sorted(glob(Source_QC_folder_tif))\n","Z = list(map(imread,Z))\n","\n","print('Number of test dataset found in the folder: '+str(len(Z)))\n","\n","\n","# Perform prediction on all datasets in the Source_QC folder\n","for filename in os.listdir(Source_QC_folder):\n"," img = imread(os.path.join(Source_QC_folder, filename))\n"," predicted = model_training.predict(img, axes='YX', n_tiles=(2,1))\n"," os.chdir(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\")\n"," imsave(filename, predicted)\n","\n","def ssim(img1, img2):\n"," return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)\n","\n","\n","def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):\n"," \"\"\"This function is adapted from Martin Weigert\"\"\"\n"," \"\"\"Percentile-based image normalization.\"\"\"\n","\n"," mi = np.percentile(x,pmin,axis=axis,keepdims=True)\n"," ma = np.percentile(x,pmax,axis=axis,keepdims=True)\n"," return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)\n","\n","\n","def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32\n"," \"\"\"This function is adapted from Martin Weigert\"\"\"\n"," if dtype is not None:\n"," x = x.astype(dtype,copy=False)\n"," mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)\n"," ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)\n"," eps = dtype(eps)\n","\n"," try:\n"," import numexpr\n"," x = numexpr.evaluate(\"(x - mi) / ( ma - mi + eps )\")\n"," except ImportError:\n"," x = (x - mi) / ( ma - mi + eps )\n","\n"," if clip:\n"," x = np.clip(x,0,1)\n","\n"," return x\n","\n","def norm_minmse(gt, x, normalize_gt=True):\n"," \"\"\"This function is adapted from Martin Weigert\"\"\"\n","\n"," \"\"\"\n"," normalizes and affinely scales an image pair such that the MSE is minimized \n"," \n"," Parameters\n"," ----------\n"," gt: ndarray\n"," the ground truth image \n"," x: ndarray\n"," the image that will be affinely scaled \n"," normalize_gt: bool\n"," set to True of gt image should be normalized (default)\n"," Returns\n"," -------\n"," gt_scaled, x_scaled \n"," \"\"\"\n"," if normalize_gt:\n"," gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)\n"," x = x.astype(np.float32, copy=False) - np.mean(x)\n"," #x = x - np.mean(x)\n"," gt = gt.astype(np.float32, copy=False) - np.mean(gt)\n"," #gt = gt - np.mean(gt)\n"," scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())\n"," return gt, scale * x\n","\n","# Open and create the csv file that will contain all the QC metrics\n","with open(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/QC_metrics_\"+QC_model_name+\".csv\", \"w\", newline='') as file:\n"," writer = csv.writer(file)\n","\n"," # Write the header in the csv file\n"," writer.writerow([\"image #\",\"Prediction v. GT mSSIM\",\"Input v. GT mSSIM\", \"Prediction v. GT NRMSE\", \"Input v. GT NRMSE\", \"Prediction v. GT PSNR\", \"Input v. 
GT PSNR\"]) \n","\n"," # Let's loop through the provided dataset in the QC folders\n","\n","\n"," for i in os.listdir(Source_QC_folder):\n"," if not os.path.isdir(os.path.join(Source_QC_folder,i)):\n"," print('Running QC on: '+i)\n"," # -------------------------------- Target test data (Ground truth) --------------------------------\n"," test_GT = io.imread(os.path.join(Target_QC_folder, i))\n","\n"," # -------------------------------- Source test data --------------------------------\n"," test_source = io.imread(os.path.join(Source_QC_folder,i))\n","\n"," # Normalize the images wrt each other by minimizing the MSE between GT and Source image\n"," test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)\n","\n"," # -------------------------------- Prediction --------------------------------\n"," test_prediction = io.imread(os.path.join(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction\",i))\n","\n"," # Normalize the images wrt each other by minimizing the MSE between GT and prediction\n"," test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True) \n","\n","\n"," # -------------------------------- Calculate the metric maps and save them --------------------------------\n","\n"," # Calculate the SSIM maps\n"," index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)\n"," index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)\n","\n"," #Save ssim_maps\n"," img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)\n"," img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)\n"," \n"," # Calculate the Root Squared Error (RSE) maps\n"," img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))\n"," img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))\n","\n"," # Save SE maps\n"," img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)\n"," img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)\n"," io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)\n","\n","\n"," # -------------------------------- Calculate the RSE metrics and save them --------------------------------\n","\n"," # Normalised Root Mean Squared Error (here it's valid to take the mean of the image)\n"," NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))\n"," NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))\n"," \n"," # We can also measure the peak signal to noise ratio between the images\n"," PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)\n"," PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)\n","\n"," writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])\n","\n","\n","# All data is now processed saved\n","Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same\n","\n","plt.figure(figsize=(15,15))\n","# Currently only displays the last computed set, from memory\n","# Target 
(Ground-truth)\n","plt.subplot(3,3,1)\n","plt.axis('off')\n","img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))\n","plt.imshow(img_GT)\n","plt.title('Target',fontsize=15)\n","\n","# Source\n","plt.subplot(3,3,2)\n","plt.axis('off')\n","img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))\n","plt.imshow(img_Source)\n","plt.title('Source',fontsize=15)\n","\n","#Prediction\n","plt.subplot(3,3,3)\n","plt.axis('off')\n","img_Prediction = io.imread(os.path.join(QC_model_path+\"/\"+QC_model_name+\"/Quality Control/Prediction/\", Test_FileList[-1]))\n","plt.imshow(img_Prediction)\n","plt.title('Prediction',fontsize=15)\n","\n","#Setting up colours\n","cmap = plt.cm.CMRmap\n","\n","#SSIM between GT and Source\n","plt.subplot(3,3,5)\n","#plt.axis('off')\n","plt.tick_params(\n"," axis='both', # changes apply to the x-axis and y-axis\n"," which='both', # both major and minor ticks are affected\n"," bottom=False, # ticks along the bottom edge are off\n"," top=False, # ticks along the top edge are off\n"," left=False, # ticks along the left edge are off\n"," right=False, # ticks along the right edge are off\n"," labelbottom=False,\n"," labelleft=False) \n","imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)\n","plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)\n","plt.title('Target vs. Source',fontsize=15)\n","plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)\n","plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)\n","\n","#SSIM between GT and Prediction\n","plt.subplot(3,3,6)\n","#plt.axis('off')\n","plt.tick_params(\n"," axis='both', # changes apply to the x-axis and y-axis\n"," which='both', # both major and minor ticks are affected\n"," bottom=False, # ticks along the bottom edge are off\n"," top=False, # ticks along the top edge are off\n"," left=False, # ticks along the left edge are off\n"," right=False, # ticks along the right edge are off\n"," labelbottom=False,\n"," labelleft=False) \n","imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)\n","plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)\n","plt.title('Target vs. Prediction',fontsize=15)\n","plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)\n","\n","#Root Squared Error between GT and Source\n","plt.subplot(3,3,8)\n","#plt.axis('off')\n","plt.tick_params(\n"," axis='both', # changes apply to the x-axis and y-axis\n"," which='both', # both major and minor ticks are affected\n"," bottom=False, # ticks along the bottom edge are off\n"," top=False, # ticks along the top edge are off\n"," left=False, # ticks along the left edge are off\n"," right=False, # ticks along the right edge are off\n"," labelbottom=False,\n"," labelleft=False) \n","imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)\n","plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)\n","plt.title('Target vs. Source',fontsize=15)\n","plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)\n","#plt.title('Target vs. 
Source PSNR: '+str(round(PSNR_GTvsSource,3)))\n","plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)\n","\n","#Root Squared Error between GT and Prediction\n","plt.subplot(3,3,9)\n","#plt.axis('off')\n","plt.tick_params(\n"," axis='both', # changes apply to the x-axis and y-axis\n"," which='both', # both major and minor ticks are affected\n"," bottom=False, # ticks along the bottom edge are off\n"," top=False, # ticks along the top edge are off\n"," left=False, # ticks along the left edge are off\n"," right=False, # ticks along the right edge are off\n"," labelbottom=False,\n"," labelleft=False) \n","imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)\n","plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)\n","plt.title('Target vs. Prediction',fontsize=15)\n","plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)\n","plt.savefig(full_QC_model_path+'/Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)\n","\n","qc_pdf_export()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"69aJVFfsqXbY"},"source":["# **6. Using the trained model**\n","\n","---\n","\n","In this section, unseen data is processed using the model trained in section 4. First, your unseen images are loaded and prepared for prediction. After that, your trained model from section 4 is activated and used to process them, and the resulting predictions are saved into your Google Drive."]},{"cell_type":"markdown","metadata":{"id":"tcPNRq1TrMPB"},"source":["## **6.1. Generate prediction(s) from unseen dataset**\n","---\n","\n","The currently trained model (from section 4.2) can now be used to process images. If an older model needs to be used, please untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. 
Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).\n","\n","**`Data_folder`:** This folder should contain the images that you want to predict using the network that you trained.\n","\n","**`Result_folder`:** This folder will contain the predicted output images.\n","\n","**`Data_type`:** Please indicate if the images you want to predict are single images or stacks."]},{"cell_type":"code","metadata":{"id":"Am2JSmpC0frj","cellView":"form"},"source":["Single_Images = 1\n","Stacks = 2\n","\n","#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved, then play the cell to predict output on your unseen images.\n","\n","#@markdown ###Path to data to analyse and where predicted output should be saved:\n","Data_folder = \"\" #@param {type:\"string\"}\n","Result_folder = \"\" #@param {type:\"string\"}\n","\n","#@markdown ###Are your data single images or stacks?\n","\n","Data_type = Single_Images #@param [\"Single_Images\", \"Stacks\"] {type:\"raw\"}\n","\n","# model name and path\n","#@markdown ###Do you want to use the current trained model?\n","Use_the_current_trained_model = True #@param {type:\"boolean\"}\n","\n","#@markdown ###If not, please provide the path to the model folder:\n","\n","Prediction_model_folder = \"\" #@param {type:\"string\"}\n","\n","#Here we find the loaded model name and parent path\n","Prediction_model_name = os.path.basename(Prediction_model_folder)\n","Prediction_model_path = os.path.dirname(Prediction_model_folder)\n","\n","if (Use_the_current_trained_model): \n"," print(\"Using current trained network\")\n"," Prediction_model_name = model_name\n"," Prediction_model_path = model_path\n","\n","full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'\n","if os.path.exists(full_Prediction_model_path):\n"," print(\"The \"+Prediction_model_name+\" network will be used.\")\n","else:\n"," print(bcolors.WARNING +'!! WARNING: The chosen model does not exist !!')\n"," print('Please make sure you provide a valid model path and model name before proceeding further.')\n","\n","\n","#Activate the pretrained model. 
\n","config = None\n","model = N2V(config, Prediction_model_name, basedir=Prediction_model_path)\n","\n","thisdir = Path(Data_folder)\n","outputdir = Path(Result_folder)\n","\n"," # r=root, d=directories, f = files\n","for r, d, f in os.walk(thisdir):\n"," for file in f:\n"," if \".tif\" in file:\n"," print(os.path.join(r, file))\n","\n","if Data_type == 1 :\n"," print(\"Single images are now beeing predicted\")\n","\n","# Loop through the files\n"," for r, d, f in os.walk(thisdir):\n"," for file in f:\n"," base_filename = os.path.basename(file)\n"," input_train = imread(os.path.join(r, file))\n"," pred_train = model.predict(input_train, axes='YX', n_tiles=(2,1))\n"," save_tiff_imagej_compatible(os.path.join(outputdir, base_filename), pred_train, axes='YX') \n","\n"," print(\"Images saved into folder:\", Result_folder)\n","\n","if Data_type == 2 :\n"," print(\"Stacks are now beeing predicted\")\n"," for r, d, f in os.walk(thisdir):\n"," for file in f:\n"," base_filename = os.path.basename(file)\n"," timelapse = imread(os.path.join(r, file))\n"," n_timepoint = timelapse.shape[0]\n"," prediction_stack = np.zeros((n_timepoint, timelapse.shape[1], timelapse.shape[2]))\n","\n"," for t in range(n_timepoint):\n"," img_t = timelapse[t]\n"," prediction_stack[t] = model.predict(img_t, axes='YX', n_tiles=(2,1))\n","\n"," prediction_stack_32 = img_as_float32(prediction_stack, force_copy=False)\n"," imsave(os.path.join(outputdir, base_filename), prediction_stack_32) \n"," \n"," \n","\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"67_8rEKp8C-z"},"source":["## **6.2. Assess predicted output**\n","---\n","\n","\n"]},{"cell_type":"code","metadata":{"cellView":"form","id":"n-stU-f08Cae"},"source":["# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.\n","\n","# This will display a randomly chosen dataset input and predicted output\n","\n","\n","random_choice = random.choice(os.listdir(Data_folder))\n","x = imread(Data_folder+\"/\"+random_choice)\n","\n","os.chdir(Result_folder)\n","y = imread(Result_folder+\"/\"+random_choice)\n","\n","if Data_type == 1 :\n","\n"," f=plt.figure(figsize=(16,8))\n"," plt.subplot(1,2,1)\n"," plt.imshow(x, interpolation='nearest')\n"," plt.title('Input')\n"," plt.axis('off');\n"," plt.subplot(1,2,2)\n"," plt.imshow(y, interpolation='nearest')\n"," plt.title('Predicted output')\n"," plt.axis('off');\n","\n","if Data_type == 2 :\n","\n"," f=plt.figure(figsize=(16,8))\n"," plt.subplot(1,2,1)\n"," plt.imshow(x[1], interpolation='nearest')\n"," plt.title('Input')\n"," plt.axis('off');\n"," plt.subplot(1,2,2)\n"," plt.imshow(y[1], interpolation='nearest')\n"," plt.title('Predicted output')\n"," plt.axis('off');\n","\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"hvkd66PldsXB"},"source":["## **6.3. Download your predictions**\n","---\n","\n","**Store your data** and ALL its results elsewhere by downloading it from Google Drive and after that clean the original folder tree (datasets, results, trained model etc.) if you plan to train or use new networks. 
Please note that the notebook will otherwise **OVERWRITE** all files which have the same name."]},{"cell_type":"markdown","metadata":{"id":"u4pcBe8Z3T2J"},"source":["#**Thank you for using Noise2Void 2D!**"]}]} \ No newline at end of file diff --git a/ColabNotebooks/minimal.ipynb b/ColabNotebooks/minimal.ipynb new file mode 100644 index 00000000..ba4b399b --- /dev/null +++ b/ColabNotebooks/minimal.ipynb @@ -0,0 +1,121 @@ +{ + "metadata": { + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10-final" + }, + "orig_nbformat": 2, + "kernelspec": { + "name": "python3", + "display_name": "Python 3.7.10 64-bit ('tf': conda)", + "metadata": { + "interpreter": { + "hash": "01a8c3e581587ef845b14c27476cc2daada005a8e900d0cc550301b789c363ba" + } + } + } + }, + "nbformat": 4, + "nbformat_minor": 2, + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#@markdown ##Run this cell to check if you have GPU access\n", + "# %tensorflow_version 1.x\n", + "\n", + "import tensorflow as tf\n", + "if tf.test.gpu_device_name()=='':\n", + " print('You do not have GPU access.') \n", + " print('Did you change your runtime?') \n", + " print('If the runtime setting is correct, then Google did not allocate a GPU for your session')\n", + " print('Expect slow performance. To access a GPU, try reconnecting later')\n", + "\n", + "else:\n", + " print('You have GPU access')\n", + " !nvidia-smi" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Notebook_version = ['1.12']\n", + "import tensorflow\n", + "# ------- Variable specific to N2V -------\n", + "from n2v.models import N2VConfig, N2V\n", + "from csbdeep.utils import plot_history\n", + "from n2v.utils.n2v_utils import manipulate_val_data\n", + "from n2v.internals.N2V_DataGenerator import N2V_DataGenerator\n", + "from csbdeep.io import save_tiff_imagej_compatible\n", + "\n", + "# ------- Common variable to all ZeroCostDL4Mic notebooks -------\n", + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "import urllib\n", + "import os, random\n", + "import shutil \n", + "import zipfile\n", + "from tifffile import imread, imsave\n", + "import time\n", + "import sys\n", + "import wget\n", + "from pathlib import Path\n", + "import pandas as pd\n", + "import csv\n", + "from glob import glob\n", + "from scipy import signal\n", + "from scipy import ndimage\n", + "from skimage import io\n", + "from sklearn.linear_model import LinearRegression\n", + "from skimage.util import img_as_uint\n", + "import matplotlib as mpl\n", + "from skimage.metrics import structural_similarity\n", + "from skimage.metrics import peak_signal_noise_ratio as psnr\n", + "from astropy.visualization import simple_norm\n", + "from skimage import img_as_float32\n", + "from fpdf import FPDF, HTMLMixin\n", + "from datetime import datetime\n", + "from pip._internal.operations.freeze import freeze\n", + "import subprocess\n", + "from datetime import datetime\n", + "\n", + "# Colors for the warning messages\n", + "class bcolors:\n", + " WARNING = '\\033[31m'\n", + "W = '\\033[0m' # white (normal)\n", + "R = '\\033[31m' # red\n", + "\n", + "#Disable some of the tensorflow warnings\n", + "import warnings\n", + "warnings.filterwarnings(\"ignore\")\n", + "\n", + "print(\"Libraries 
installed\")\n", + "\n", + "\n", + "# Check if this is the latest version of the notebook\n", + "Latest_notebook_version = pd.read_csv(\"https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv\")\n", + "print('Notebook version: '+Notebook_version[0])\n", + "strlist = Notebook_version[0].split('.')\n", + "Notebook_version_main = strlist[0]+'.'+strlist[1]\n", + "if Notebook_version_main == Latest_notebook_version.columns:\n", + " print(\"This notebook is up-to-date.\")\n", + "else:\n", + " print(bcolors.WARNING +\"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki\")\n" + ] + } + ] +} \ No newline at end of file diff --git a/LICENSE b/LICENSE index 7b1d5bcd..2b1d891b 100644 --- a/LICENSE +++ b/LICENSE @@ -1,6 +1,7 @@ MIT License Copyright (c) 2020 Quantitative Imaging and Nanobiophysics Group +======= Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index 00c0c882..dfbf4724 100644 --- a/README.md +++ b/README.md @@ -40,3 +40,34 @@ DOI: [https://doi.org/10.1038/s41467-021-22518-0](https://www.nature.com/article [8]: https://github.com/HenriquesLab/ZeroCostDL4Mic/blob/master/Wiki_files/VideoDemoScreenshot1.png [wikiPage]: https://github.com/HenriquesLab/DeepLearning_Collab/wiki [wikiPageContributors]: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki#contributors + +======= + +# dl4mic + +Packaged form of [ZeroCostDl4Mic](https://github.com/HenriquesLab/ZeroCostDL4Mic) to make the process more platform agnostic. +Attempts to bundle reusable code and structure model training and prediction into a no-code config file toolset. + + + pip install git+https://github.com/ctr26/dl4mic + + +Currently working with Noise2Void and Care2D + +## Build and test + +This project uses poetry to build, test and manage dependnecies: + +Quick start: + peotry build + poetry install + poetry run pytest + +Note that testing is (rightly) slow due to running model epochs for testing + +## Todo: + +- Find all the bugs +- Implement the full roster of ZeroCostDL4Mic models. +- Command line interface +- Implement lazy loading of large/uninstalled packages (looking at you pyTorch) diff --git a/_Dockerfile b/_Dockerfile new file mode 100644 index 00000000..bbc44130 --- /dev/null +++ b/_Dockerfile @@ -0,0 +1,26 @@ +FROM tensorflow/tensorflow:1.15.5-gpu-jupyter + +# --- Jupyter + +# install the notebook package +RUN pip install --no-cache --upgrade pip && \ + pip install --no-cache notebook + +# create user with a home directory +ARG NB_USER +ARG NB_UID +ENV USER ${NB_USER} +ENV HOME /home/${NB_USER} + +RUN adduser --disabled-password \ + --gecos "Default user" \ + --uid ${NB_UID} \ + ${NB_USER} +WORKDIR ${HOME} +USER ${USER} + +# RUN conda install pip --yes + +COPY . . 
+ +RUN pip install --no-cache-dir -r requirements.txt diff --git a/dl4mic/__init__.py b/dl4mic/__init__.py new file mode 100644 index 00000000..2b550b4c --- /dev/null +++ b/dl4mic/__init__.py @@ -0,0 +1,53 @@ +# import tensorflow as tf
+# ------- Common variable to all ZeroCostDL4Mic notebooks -------
+
+import numpy as np
+from matplotlib import pyplot as plt
+import urllib
+import os, random
+import shutil
+import zipfile
+from tifffile import imread, imsave
+import time
+import sys
+import wget
+from pathlib import Path
+import pandas as pd
+import csv
+from glob import glob
+from scipy import signal
+from scipy import ndimage
+from skimage import io
+from sklearn.linear_model import LinearRegression
+from skimage.util import img_as_uint
+import matplotlib as mpl
+from skimage.metrics import structural_similarity
+from skimage.metrics import peak_signal_noise_ratio as psnr
+from astropy.visualization import simple_norm
+from skimage import img_as_float32
+from fpdf import FPDF, HTMLMixin
+from datetime import datetime
+from pip._internal.operations.freeze import freeze
+import subprocess
+
+class bcolors:
+    WARNING = "\033[31m"
+
+
+W = "\033[0m"  # white (normal)
+R = "\033[31m"  # red
+
+ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
+
+
+from . import models
+
+
+# def __main__():
+#     read_latest_notebook_version()
+
+
+
+
+ diff --git a/dl4mic/assess.py b/dl4mic/assess.py new file mode 100644 index 00000000..ad4f7490 --- /dev/null +++ b/dl4mic/assess.py @@ -0,0 +1,52 @@ +import matplotlib.pyplot as plt
+from tifffile.tifffile import imread
+from . import models
+import random
+import os
+
+
+def full(Data_folder, Result_folder, Data_type):
+    if not ((Data_folder is None) or (Result_folder is None) or (Data_type is None)):
+        display_random_image(Data_folder, Result_folder, Data_type)
+    pass
+
+
+def display_random_image(Data_folder, Result_folder, Data_type):
+    try:
+        # if Data_folder is not None:
+        file_list = os.listdir(Data_folder)
+        random_choice = random.choice(file_list)
+        x = imread(os.path.join(Data_folder, random_choice))
+
+        os.chdir(Result_folder)
+        y = imread(os.path.join(Result_folder, random_choice))
+
+        if Data_type == models.params.Data_type.SINGLE_IMAGES:
+
+            f = plt.figure(figsize=(16, 8))
+            plt.subplot(1, 2, 1)
+            plt.imshow(x, interpolation="nearest")
+            plt.title("Input")
+            plt.axis("off")
+            plt.subplot(1, 2, 2)
+            plt.imshow(y, interpolation="nearest")
+            plt.title("Predicted output")
+            plt.axis("off")
+            plt.show()
+
+        if Data_type == models.params.Data_type.STACKS:
+
+            f = plt.figure(figsize=(16, 8))
+            plt.subplot(1, 2, 1)
+            plt.imshow(x[1], interpolation="nearest")
+            plt.title("Input")
+            plt.axis("off")
+            plt.subplot(1, 2, 2)
+            plt.imshow(y[1], interpolation="nearest")
+            plt.title("Predicted output")
+            plt.axis("off")
+            plt.show()
+    except FileNotFoundError:
+        print("Couldn't find a random image")
+    except IndexError:
+        print("Couldn't find a random image") \ No newline at end of file diff --git a/dl4mic/augment.py b/dl4mic/augment.py new file mode 100644 index 00000000..354544ea --- /dev/null +++ b/dl4mic/augment.py @@ -0,0 +1,194 @@ +import Augmentor
+from dataclasses import dataclass
+import os
+import shutil
+from . 
import bcolors
+
+
+@dataclass
+class AugmentParams:
+    rotate_90_degrees: float = 0
+    rotate_270_degrees: float = 0
+    flip_left_right: float = 0
+    flip_top_bottom: float = 0
+    random_zoom: float = 0
+    random_zoom_magnification: float = 0
+    random_distortion: float = 0
+    image_shear: float = 0
+    max_image_shear: float = 1
+    skew_image: float = 0
+    skew_image_magnitude: float = 0
+    Use_Default_Augmentation_Parameters: bool = True
+    Multiply_dataset_by: int = 2
+
+    def __init__(self, Use_Default_Augmentation_Parameters=True, Multiply_dataset_by=2):
+        # self is passed implicitly; folder handling lives in the module-level
+        # folder_management() function below
+        self.default_params(Use_Default_Augmentation_Parameters, Multiply_dataset_by)
+
+    def default_params(self, Use_Default_Augmentation_Parameters, Multiply_dataset_by):
+        self.Use_Default_Augmentation_Parameters = Use_Default_Augmentation_Parameters
+        self.Multiply_dataset_by = Multiply_dataset_by
+
+        if Use_Default_Augmentation_Parameters:
+            self.rotate_90_degrees = 0.5
+            self.rotate_270_degrees = 0.5
+            self.flip_left_right = 0.5
+            self.flip_top_bottom = 0.5
+
+            if not Multiply_dataset_by > 5:
+                self.random_zoom = 0
+                self.random_zoom_magnification = 0.9
+                self.random_distortion = 0
+                self.image_shear = 0
+                self.max_image_shear = 10
+                self.skew_image = 0
+                self.skew_image_magnitude = 0
+
+            if Multiply_dataset_by > 5:
+                self.random_zoom = 0.1
+                self.random_zoom_magnification = 0.9
+                self.random_distortion = 0.5
+                self.image_shear = 0.2
+                self.max_image_shear = 5
+                self.skew_image = 0.2
+                self.skew_image_magnitude = 0.4
+
+            if Multiply_dataset_by > 25:
+                self.random_zoom = 0.5
+                self.random_zoom_magnification = 0.8
+                self.random_distortion = 0.5
+                self.image_shear = 0.5
+                self.max_image_shear = 20
+                self.skew_image = 0.5
+                self.skew_image_magnitude = 0.6
+
+
+def get_nb_augmented_files(Training_source, Multiply_dataset_by):
+    list_files = os.listdir(Training_source)
+    Nb_files = len(list_files)
+
+    Nb_augmented_files = Nb_files * Multiply_dataset_by
+    return Nb_augmented_files
+
+
+def folder_management(Use_Data_augmentation, Save_augmented_images, Saving_path="./content"):
+
+    if Use_Data_augmentation:
+        print("Data augmentation enabled")
+        # Here we set the paths for the various folders where the augmented images will be saved
+
+        # All images are first saved into the augmented folder
+        # Augmented_folder = "/content/Augmented_Folder"
+
+        if not Save_augmented_images:
+            Saving_path = "./content"
+
+        Augmented_folder = Saving_path + "/Augmented_Folder"
+        if os.path.exists(Augmented_folder):
+            shutil.rmtree(Augmented_folder)
+        os.makedirs(Augmented_folder)
+
+        # Training_source_augmented = "/content/Training_source_augmented"
+        Training_source_augmented = Saving_path + "/Training_source_augmented"
+
+        if os.path.exists(Training_source_augmented):
+            shutil.rmtree(Training_source_augmented)
+        os.makedirs(Training_source_augmented)
+
+        # Training_target_augmented = "/content/Training_target_augmented"
+        Training_target_augmented = Saving_path + "/Training_target_augmented"
+
+        if os.path.exists(Training_target_augmented):
+            shutil.rmtree(Training_target_augmented)
+        os.makedirs(Training_target_augmented)
+
+        # Return the created paths so callers can hand them to
+        # generate_augmented_images() below
+        return Augmented_folder, Training_source_augmented, Training_target_augmented
+
+
+def generate_augmented_images(
+    Training_source,
+    Augmented_folder,
+    Training_target,
+    rotate_90_degrees,
+    rotate_270_degrees,
+    flip_left_right,
+    flip_top_bottom,
+    random_zoom,
+    random_zoom_magnification,
+    random_distortion,
+    image_shear,
+    skew_image,
+    skew_image_magnitude,
+    Nb_augmented_files,
+    Training_target_augmented,
+    Training_source_augmented,
+):
+    # Here we generate the augmented images
+    # Load the images
+    p = Augmentor.Pipeline(Training_source, Augmented_folder)
+
+    # Define the matching 
images
+    p.ground_truth(Training_target)
+    # Define the augmentation possibilities
+    if not rotate_90_degrees == 0:
+        p.rotate90(probability=rotate_90_degrees)
+
+    if not rotate_270_degrees == 0:
+        p.rotate270(probability=rotate_270_degrees)
+
+    if not flip_left_right == 0:
+        p.flip_left_right(probability=flip_left_right)
+
+    if not flip_top_bottom == 0:
+        p.flip_top_bottom(probability=flip_top_bottom)
+
+    if not random_zoom == 0:
+        p.zoom_random(
+            probability=random_zoom, percentage_area=random_zoom_magnification
+        )
+
+    if not random_distortion == 0:
+        p.random_distortion(
+            probability=random_distortion, grid_width=4, grid_height=4, magnitude=8
+        )
+
+    if not image_shear == 0:
+        p.shear(probability=image_shear, max_shear_left=20, max_shear_right=20)
+
+    if not skew_image == 0:
+        p.skew(probability=skew_image, magnitude=skew_image_magnitude)
+
+    p.sample(int(Nb_augmented_files))
+
+    print(int(Nb_augmented_files), "matching images generated")
+
+    # Here we sort through the images and move them back to the augmented training source and target folders
+
+    augmented_files = os.listdir(Augmented_folder)
+
+    for f in augmented_files:
+
+        if f.startswith("_groundtruth_(1)_"):
+            shortname_noprefix = f[17:]
+            shutil.copyfile(
+                Augmented_folder + "/" + f,
+                Training_target_augmented + "/" + shortname_noprefix,
+            )
+        else:
+            shutil.copyfile(
+                Augmented_folder + "/" + f, Training_source_augmented + "/" + f
+            )
+
+    for filename in os.listdir(Training_source_augmented):
+        os.chdir(Training_source_augmented)
+        os.rename(filename, filename.replace("_original", ""))
+
+    # Here we clean up the extra files
+    shutil.rmtree(Augmented_folder)
+
+
+def warning(Use_Data_augmentation):
+    if not Use_Data_augmentation:
+        print(bcolors.WARNING + "Data augmentation disabled") diff --git a/dl4mic/checks.py b/dl4mic/checks.py new file mode 100644 index 00000000..6764d8fb --- /dev/null +++ b/dl4mic/checks.py @@ -0,0 +1,88 @@ +import os, random
+from tifffile import imread, imsave
+import matplotlib.pyplot as plt
+from astropy.visualization import simple_norm
+import wget
+import shutil
+from enum import Enum
+import pandas as pd
+
+from . import models
+from . 
import bcolors + +def full(Training_source,output_folder,patch_size,show_image): + image = get_random_image(Training_source) + check_data(image) + filename = os.path.join(output_folder, "TrainingDataExample.png") + # if show_image: + display_image(image, filename,show_image) + check_image_dims(image,patch_size) + return image + +def check_image_dims(image,patch_size): + # This will open a randomly chosen dataset input image + x = image + Image_Y = x.shape[0] + Image_X = x.shape[1] + if patch_size > min(Image_Y, Image_X): + patch_size = min(Image_Y, Image_X) + print (bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size) + + # Here we check that patch_size is divisible by 8 + if not patch_size % 8 == 0: + patch_size = ((int(patch_size / 8)-1) * 8) + print (bcolors.WARNING + " Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:",patch_size) + + return patch_size + +def display_image(image,filename=None,show_images=False): + + # '/content/TrainingDataExample_N2V2D.png' + norm = simple_norm(image, percent = 99) + + f=plt.figure(figsize=(16,8)) + plt.subplot(1,2,1) + plt.imshow(image, interpolation='nearest', norm=norm, cmap='magma') + plt.title('Training source') + plt.axis('off') + if filename != None: + plt.savefig(filename,bbox_inches='tight',pad_inches=0) + if show_images: + plt.show() + else: + plt.close() + +def check_model_exists(h5_file_path): + if not os.path.exists(h5_file_path): + print(bcolors.WARNING+'WARNING: weights_last.h5 pretrained model does not exist') + return os.path.exists(h5_file_path) + # If the model path contains a pretrain model, we load the training rate, + + +#here we check that no model with the same name already exist, if so print a warning +def check_for_prexisiting_model(model_path,model_name): + check_model = os.path.exists(model_path+'/'+model_name) + if check_model: + print(bcolors.WARNING +"!! WARNING: "+model_name+" already exists and will be deleted in the following cell !!") + print(bcolors.WARNING +"To continue training "+model_name+", choose a new model_name here, and load "+model_name+" in section 3.3") + assert not(check_model) + return check_model + +def check_data(image): + # This will open a randomly chosen dataset input image + x = image + len_dims = len(x.shape) + if not len_dims == 2: + print(bcolors.WARNING + "Your images appear to have the wrong dimensions. Image dimension", x.shape) + assert len_dims == 2 + # Here we check that the input images contains the expected dimensions + if len(x.shape) == 2: + print("Image dimensions (y,x)",x.shape) + return len_dims + +def get_random_image(Training_source): + random_choice = get_random_image_path(Training_source) + return imread(os.path.join(Training_source,random_choice)) + +def get_random_image_path(Training_source): + return random.choice(os.listdir(Training_source)) \ No newline at end of file diff --git a/dl4mic/models/CARE.py b/dl4mic/models/CARE.py new file mode 100644 index 00000000..5d2e2135 --- /dev/null +++ b/dl4mic/models/CARE.py @@ -0,0 +1,332 @@ +# from __future__ import print_function, unicode_literals, absolute_import, division + +import os +from random import triangular +import shutil +from dl4mic.reporting import pdf_export +import time +import numpy as np +import csv +import pandas as pd +from .. 
import models + + + +# ------- Variable specific to CARE ------- +from csbdeep.utils import ( + download_and_extract_zip_file, + plot_some, + axes_dict, + plot_history, + Path, + download_and_extract_zip_file, +) +from csbdeep.data import RawData, create_patches +from csbdeep.io import load_training_data, save_tiff_imagej_compatible +# from csbdeep.models import Config, CARE +from csbdeep import data +import csbdeep.models + +from typing import List + +# def __init__(self): +# return self.N2V + +# from models import params + +# default_params = { +# "model": "CARE", +# "model_name": None, +# "model_path": None, +# "ref_str": None, +# "Notebook_version": 1.12, +# "initial_learning_rate": 0.0004, +# "number_of_steps": 400, +# "number_of_patches": 100, +# "percentage_validation": 10, +# "image_patches": None, +# "loss_function": None, +# "batch_size": 16, +# "patch_size": 80, +# "Training_source": None, +# "number_of_epochs": 100, +# "Use_Default_Advanced_Parameters": True, +# "trained": False, +# "augmentation": False, +# # "pretrained_model": False, +# "Pretrained_model_choice": models.params.Pretrained_model_choice.MODEL_NAME, +# "Weights_choice": models.params.Weights_choice.BEST, +# # "QC_model_path": os.path.join(".dl4mic", "qc"), +# "QC_model_path": "", +# "QC_model_name": None, +# "Multiply_dataset_by": 2, +# "Save_augmented_images": False, +# "Saving_path": "", +# "Use_Default_Augmentation_Parameters": True, +# "rotate_90_degrees": 0.5, +# "rotate_270_degrees": 0.5, +# "flip_left_right": 0.5, +# "flip_top_bottom": 0.5, +# "random_zoom": 0, +# "random_zoom_magnification": 0.9, +# "random_distortion": 0, +# "image_shear": 0, +# "max_image_shear": 10, +# "skew_image": 0, +# "skew_image_magnitude": 0, +# } + + +class CARE(models.DL4MicModelTF): + + # model: str ="CARE" + # model_name: str = None + # model_path: str = None + # Notebook_version": 1.12, + initial_learning_rate: float = 0.0004 + number_of_steps : float = 400 + number_of_patches: float = 100 + percentage_validation: int = 10 + # image_patches": None, + # loss_function": None, + batch_size: int = 16 + patch_size: int = 80 + # Training_source": None, + number_of_epochs: int = 100 + Use_Default_Advanced_Parameters: bool = True + # trained": False, + # augmentation": False, + # "pretrained_model": False, + Pretrained_model_choice: str = models.params.Pretrained_model_choice.MODEL_NAME + Weights_choice: str = models.params.Weights_choice.BEST + model_name: str = "care" + network: str = "CARE 2D" + description: str = "CARE 2D trained using ZeroCostDL4Mic." + ref_str: str = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.' + # authors: List[str] = ["You"] + + # "QC_model_path": os.path.join(".dl4mic", "qc"), + # QC_model_path": "", + # QC_model_name": None, + + + # import N2V + # config=None + # self.dl4mic_model_config={} + + # def init(self): + # self.network = "CARE 2D" + # self.model_name = "CARE" + # self.description = "Noise2Void 2D trained using ZeroCostDL4Mic.'" + # self.authors = ["You"] + # self.ref_str = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.' 
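+    # A hypothetical usage sketch of this wrapper (an illustration, not code from
+    # the original notebooks; the names and values are assumptions):
+    #   care = CARE(model_name="my_care", number_of_epochs=50)
+    #   care.run()  # get_model() -> pre_training() -> train_model() -> post_training()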
+
+    def get_data(self):
+        (self.X_train, self.Y_train), (self.X_test, self.Y_test), self.axes = get_data(
+            self.folders.Training_source,
+            self.folders.Training_target,
+            self.patch_size,
+            "",
+            self.number_of_patches,
+            self.percentage_validation,
+            self.folders.model_path,
+        )
+
+    def get_config(self):
+        self.get_data()
+        self.get_channels()
+        self.config = get_care_config(
+            self.X_train,
+            self.Use_Default_Advanced_Parameters,
+            self.batch_size,
+            self.Use_pretrained_model,
+            self.Weights_choice,
+            self.initial_learning_rate,
+            self.lastLearningRate,
+            self.bestLearningRate,
+            self.number_of_epochs,
+            self.axes,
+            self.n_channel_in,
+            self.n_channel_out,
+        )
+
+    def get_channels(self):
+        (self.n_channel_in, self.n_channel_out) = get_channels(
+            self.X_train, self.Y_train, self.axes
+        )
+
+    def get_model(self):
+        self.get_config()
+        self.model = get_care_model(
+            self.config,
+            self.model_name,
+            self.folders.model_path,
+            self.Use_pretrained_model,
+            self.folders.h5_file_path,
+        )
+        return self.model
+
+    def train_model(self):
+        return train_model(
+            self.X_train,
+            self.Y_train,
+            self.X_test,
+            self.Y_test,
+            self.model,
+            self.folders.model_path,
+            self.model_name,
+        )
+
+    def run(self):
+        self.model = self.get_model()
+        self.pre_training(self.X_train)
+        self.history = self.train_model()
+        self.post_training(self.history)
+
+    def gleen_data(self, *args, **kwargs):
+        # get_model() builds the data, channels and config on the way
+        self.get_model()
+
+    def split_data(self, Xdata):
+        pass
+
+
+def train_model(X, Y, X_val, Y_val, model_training, model_path, model_name):
+    start = time.time()
+
+    # Start Training
+    history = model_training.train(X, Y, validation_data=(X_val, Y_val))
+
+    print("Training done.")
+
+    # convert the history.history dict to a pandas DataFrame:
+    lossData = pd.DataFrame(history.history)
+    qc_path = os.path.join(model_path, model_name, "Quality Control")
+    if os.path.exists(qc_path):
+        shutil.rmtree(qc_path)
+
+    os.makedirs(qc_path)
+
+    # The training evaluation.csv is saved (overwrites the file if needed). 
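+    # Note: the "learning rate" column assumes Keras recorded an 'lr' series in
+    # history.history (csbdeep logs it when its learning-rate-reduction callback
+    # is enabled); if the key is absent, the loop below raises a KeyError.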
+    lossDataCSVpath = os.path.join(qc_path, "training_evaluation.csv")
+    with open(lossDataCSVpath, "w") as f:
+        writer = csv.writer(f)
+        writer.writerow(["loss", "val_loss", "learning rate"])
+        for i in range(len(history.history["loss"])):
+            writer.writerow(
+                [
+                    history.history["loss"][i],
+                    history.history["val_loss"][i],
+                    history.history["lr"][i],
+                ]
+            )
+
+    # Displaying the time elapsed for training
+    dt = time.time() - start
+    mins, sec = divmod(dt, 60)
+    hour, mins = divmod(mins, 60)
+    print("Time elapsed:", hour, "hour(s)", mins, "min(s)", round(sec), "sec(s)")
+
+    model_training.export_TF()
+
+    print(
+        "Your model has been successfully exported and can now also be used in the CSBdeep Fiji plugin"
+    )
+    return history
+    # pass
+
+
+def get_data(
+    Training_source,
+    Training_target,
+    patch_size,
+    base_path,
+    number_of_patches,
+    percentage_validation,
+    model_path,
+):
+    percentage = percentage_validation / 100
+    # def get_data():
+    raw_data = data.RawData.from_folder(
+        basepath=base_path,
+        source_dirs=[Training_source],
+        target_dir=Training_target,
+        axes="CYX",
+        pattern="*.tif*",
+    )
+
+    X, Y, XY_axes = data.create_patches(
+        raw_data,
+        patch_filter=None,
+        patch_size=(patch_size, patch_size),
+        n_patches_per_image=number_of_patches,
+    )
+
+    print("Creating 2D training dataset")
+    training_path = os.path.join(model_path, "rawdata")
+    rawdata1 = training_path + ".npz"
+    np.savez(training_path, X=X, Y=Y, axes=XY_axes)
+
+    # Load Training Data
+    return load_training_data(rawdata1, validation_split=percentage, verbose=True)
+
+
+def get_channels(X, Y, axes):
+    c = axes_dict(axes)["C"]
+    n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
+    return (n_channel_in, n_channel_out)
+
+
+def get_care_config(
+    X,
+    Use_Default_Advanced_Parameters,
+    batch_size,
+    Use_pretrained_model,
+    Weights_choice,
+    initial_learning_rate,
+    lastLearningRate,
+    bestLearningRate,
+    number_of_epochs,
+    axes,
+    n_channel_in,
+    n_channel_out,
+    number_of_steps=400,  # fallback when Use_Default_Advanced_Parameters is False (previously unbound)
+):
+    # Here we automatically define number_of_steps as a function of the training data and batch size
+
+    if Use_Default_Advanced_Parameters:
+        number_of_steps = int(X.shape[0] / batch_size) + 1
+
+    # --------------------- Using pretrained model ------------------------
+    # Here we ensure that the learning rate set correctly when using pre-trained models
+    if Use_pretrained_model:
+        if Weights_choice == "last":
+            initial_learning_rate = lastLearningRate
+
+        if Weights_choice == "best":
+            initial_learning_rate = bestLearningRate
+    # --------------------- ---------------------- ------------------------
+
+    # Here we create the configuration file
+
+    config = csbdeep.models.Config(
+        axes,
+        n_channel_in,
+        n_channel_out,
+        probabilistic=True,
+        train_steps_per_epoch=number_of_steps,
+        train_epochs=number_of_epochs,
+        unet_kern_size=5,
+        unet_n_depth=3,
+        train_batch_size=batch_size,
+        train_learning_rate=initial_learning_rate,
+    )
+    return config
+
+
+def get_care_model(config, model_name, model_path, Use_pretrained_model, h5_file_path):
+    model_training = csbdeep.models.CARE(config, model_name, basedir=model_path)
+    # --------------------- Using pretrained model ------------------------
+    # Load the pretrained weights
+    if Use_pretrained_model:
+        model_training.load_weights(h5_file_path)
+    # --------------------- ---------------------- ------------------------
+    return model_training
+    # pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model) diff --git a/dl4mic/models/N2V.py b/dl4mic/models/N2V.py new file mode 100644 index 
00000000..2aacec45 --- /dev/null +++ b/dl4mic/models/N2V.py @@ -0,0 +1,343 @@ +from pathlib import Path +import os +from tifffile import imread, imsave +from tifffile.tifffile import read_uic1tag +from .. import predict, quality, checks, utils, prepare, reporting, assess +import time +from skimage import img_as_float32 +import numpy as np +from csbdeep.io import save_tiff_imagej_compatible + +from n2v.models import N2VConfig, N2V +from csbdeep.utils import plot_history +from n2v.utils.n2v_utils import manipulate_val_data +from n2v.internals.N2V_DataGenerator import N2V_DataGenerator +from csbdeep.io import save_tiff_imagej_compatible + +from .. import models + +from typing import List + + +# def __init__(self): +# return self.N2V +# defaults = { +# # "model":"N2V", +# "model_name": None, +# "model_path": None, +# # "ref_str"=, +# "Notebook_version": 1.12, +# "initial_learning_rate": 0.0004, +# "number_of_steps": 100, +# "percentage_validation": 10, +# # "image_patches"=, +# # "loss_function"=, +# "batch_size": 128, +# "patch_size": 64, +# "Training_source": None, +# "number_of_epochs": 100, +# "Use_Default_Advanced_Parameters": False, +# "trained": False, +# "augmentation": False, +# "pretrained_model": False, +# "Pretrained_model_choice": models.params.Pretrained_model_choice.MODEL_NAME, +# "Weights_choice": models.params.Pretrained_model_choice.BEST, +# } + + +class N2V(models.DL4MicModelTF): + # model_name: str = None + # model_path: str = None + ref_str = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. "Noise2void-learning denoising from single noisy images." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.' + initial_learning_rate: float = 0.0004 + number_of_steps: int = 100 + percentage_validation: int = 10 + # image_patches= + loss_function: str = "mse" + batch_size: int = 128 + patch_size: int = 64 + # Training_source: None + number_of_epochs: int = 100 + Use_Default_Advanced_Parameters: bool = False + trained: bool = False + augmentation: bool = False + pretrained_model: bool = False + Pretrained_model_choice: str = models.params.Pretrained_model_choice.MODEL_NAME + Weights_choice: str = models.params.Weights_choice.BEST + network: str = "Noise2Void" + model_name: str = "n2v" + description: str = "Noise2Void 2D trained using ZeroCostDL4Mic.'" + # authors: List[str] = ["You"] + + # import N2V + # config=None + # super().__init__(**model_config) + # self.dl4mic_model_config={} + # def init(self): + # self.network = "Noise2Void" + # self.model_name = "n2v" + # self.description = "Noise2Void 2D trained using ZeroCostDL4Mic.'" + # self.authors = ["You"] + # self.ref_str = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. "Noise2void-learning denoising from single noisy images." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.' 
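+    # A hypothetical usage sketch (an illustration; the names are assumptions).
+    # run() below drives the whole flow: patch generation, training, QC report:
+    #   n2v = N2V(model_name="my_n2v", number_of_epochs=30)
+    #   n2v.run()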
+ # pass + + def set_model_config(self): + self.model_config = [ + "train_steps_per_epoch", + "train_epochs", + "train_batch_size", + ] + + def set_model_params(self): + self.model_params = ["model_name", "model_path"] + + def interface(self): + # self.full_config = self.dl4mic_model_config + interface_dict = { + "name": self.model_name, + "basedir": self.model_path, + "train_steps_per_epoch": self.number_of_steps, + "train_epochs": self.number_of_epochs, + "train_batch_size": self.batch_size, + "directory": self.Training_source, + } + self.append_config(interface_dict) + + def model_specifics(self): + pass + + def gleen_data(self, Xdata): + self.shape_of_Xdata = Xdata.shape + + # self.shape_of_Xdata = shape_of_Xdata + + self.get_threshold(self.shape_of_Xdata) + self.get_image_patches(self.shape_of_Xdata) + if self.Use_Default_Advanced_Parameters: + self.number_of_steps = self.get_default_steps(self.shape_of_Xdata) + + def get_threshold(self, shape_of_Xdata): + self.threshold = int(shape_of_Xdata[0] * (self.percentage_validation / 100)) + return self.threshold + + def get_image_patches(self, shape_of_Xdata): + self.image_patches = int(shape_of_Xdata[0]) + return self.image_patches + + def get_default_steps(self, shape_of_Xdata): + self.number_of_steps = int(shape_of_Xdata[0] / self.batch_size) + 1 + return self.number_of_steps + + def save_model(model): + pass + + def quality_extra(self, history=None): + # history = self.history + # model_path = self.model_path + # model_name = self.model_name + # QC_model_name = self.QC_model_name + # QC_model_path = self.QC_model_path + + if self.data.history is not None: + history = self.data.history + if history is None: + return + + quality.quality_tf( + history, + self.model_path, + self.model_name, + self.QC_model_name, + self.QC_model_path, + ) + + def get_model(self): + return get_model( + self.threshold, + self.image_patches, + self.shape_of_Xdata, + self.X_train, + self.percentage_validation, + self.number_of_steps, + self.number_of_epochs, + self.initial_learning_rate, + self.loss_function, + self.batch_size, + self.model_name, + ) + + def run(self): + # import os + # TF1 Hack + import tensorflow.compat.v1 as tf + + tf.disable_v2_behavior() + tf.__version__ = 1.14 + os.environ["KERAS_BACKEND"] = "tensorflow" + + # from n2v.internals.N2V_DataGenerator import N2V_DataGenerator + + # dl4mic_model = self.dl4mic_model_config + + # datagen = N2V_DataGenerator() + + imgs = get_imgs( + self.Training_source, self.patch_size, self.Use_Data_augmentation + ) + + Xdata = get_Xdata(imgs, self.patch_size, self.Use_Data_augmentation) + + self.pre_training(Xdata) + + self.start = time.time() + + model = self.get_model() + # threshold = self.threshold + + X = Xdata[self.threshold :] + X_val = Xdata[: self.threshold] + + self.data.X_train = X + self.data.X_test = X_val + + self.data.history = model.train(X, X_val) + print("Training done.") + + pdf_post = self.post_report(self.data.history) + return self + + +def predict_on_folder( + Prediction_model_name, Prediction_model_path, Data_folder, Result_folder, Data_type +): + + # Activate the pretrained model. 
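+    # NOTE: the N2V wrapper class defined above shadows n2v.models.N2V at module
+    # scope; the library class is therefore re-imported locally just below.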
+    config = None
+    from n2v.models import N2V as n2v_N2V  # local re-import of the library class
+    model = n2v_N2V(config, Prediction_model_name, basedir=Prediction_model_path)
+
+    thisdir = Path(Data_folder)
+    outputdir = Path(Result_folder)
+
+    # r=root, d=directories, f = files
+    for r, d, f in os.walk(thisdir):
+        for file in f:
+            if ".tif" in file:
+                print(os.path.join(r, file))
+
+    if Data_type == models.params.Data_type.SINGLE_IMAGES:
+        print("Single images are now being predicted")
+
+        # Loop through the files
+        for r, d, f in os.walk(thisdir):
+            for file in f:
+                base_filename = os.path.basename(file)
+                input_train = imread(os.path.join(r, file))
+                pred_train = model.predict(input_train, axes="YX", n_tiles=(2, 1))
+                save_tiff_imagej_compatible(
+                    os.path.join(outputdir, base_filename), pred_train, axes="YX"
+                )
+
+        print("Images saved into folder:", Result_folder)
+
+    if Data_type == models.params.Data_type.STACKS:
+        print("Stacks are now being predicted")
+        for r, d, f in os.walk(thisdir):
+            for file in f:
+                base_filename = os.path.basename(file)
+                timelapse = imread(os.path.join(r, file))
+                n_timepoint = timelapse.shape[0]
+                prediction_stack = np.zeros(
+                    (n_timepoint, timelapse.shape[1], timelapse.shape[2])
+                )
+
+                for t in range(n_timepoint):
+                    img_t = timelapse[t]
+                    prediction_stack[t] = model.predict(img_t, axes="YX", n_tiles=(2, 1))
+
+                prediction_stack_32 = img_as_float32(prediction_stack, force_copy=False)
+                imsave(os.path.join(outputdir, base_filename), prediction_stack_32)
+
+
+def get_model(
+    threshold,
+    image_patches,
+    shape_of_Xdata,
+    X_train,
+    percentage_validation,
+    number_of_steps,
+    number_of_epochs,
+    initial_learning_rate,
+    loss_function,
+    batch_size,
+    model_name,
+):
+
+    # dl4mic_model = self.dl4mic_model_config
+    # def n2v_get_model(dl4mic_model, Xdata):
+
+    ################ N2V ######################
+
+    from n2v.models import N2VConfig, N2V
+    from csbdeep.utils import plot_history
+    from n2v.utils.n2v_utils import manipulate_val_data
+    from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
+    from csbdeep.io import save_tiff_imagej_compatible
+
+    # threshold = self.threshold
+    # image_patches = self.image_patches
+    # shape_of_Xdata = self.shape_of_Xdata
+
+    print(shape_of_Xdata[0], "patches created.")
+    print(
+        threshold,
+        "patch images for validation (",
+        percentage_validation,
+        "%).",
+    )
+    print(image_patches - threshold, "patch images for training.")
+
+    config = N2VConfig(
+        X_train,
+        unet_kern_size=3,
+        train_steps_per_epoch=number_of_steps,
+        train_epochs=number_of_epochs,
+        train_loss=loss_function,
+        batch_norm=True,
+        train_batch_size=batch_size,
+        n2v_perc_pix=0.198,
+        n2v_manipulator="uniform_withCP",
+        n2v_neighborhood_radius=5,
+        train_learning_rate=initial_learning_rate,
+    )
+
+    model = N2V(
+        config=config,
+        name=model_name,
+        basedir="tests",
+    )
+
+    print("Setup done.")
+    print(config)
+    return model
+
+
+def get_Xdata(imgs, patch_size, Use_Data_augmentation):
+    from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
+
+    datagen = N2V_DataGenerator()
+
+    Xdata = datagen.generate_patches_from_list(
+        imgs,
+        shape=(patch_size, patch_size),
+        augment=Use_Data_augmentation,
+    )
+    return Xdata
+
+
+def get_imgs(Training_source, patch_size, Use_Data_augmentation):
+
+    # dl4mic_model = self.dl4mic_model_config
+
+    datagen = N2V_DataGenerator()
+    imgs = datagen.load_imgs_from_directory(directory=Training_source)
+    return imgs diff --git a/dl4mic/models/__init__.py b/dl4mic/models/__init__.py new file mode 100644 index 00000000..37b364dd --- /dev/null +++ b/dl4mic/models/__init__.py @@ -0,0 +1,957 @@ 
+import numpy as np +from .. import predict, quality, checks, utils, prepare, reporting, assess +import os, random +from tifffile import imread, imsave +import matplotlib.pyplot as plt +from astropy.visualization import simple_norm +import wget +import shutil +from enum import Enum +import pandas as pd +import time + +from mashumaro import DataClassDictMixin +from collections.abc import Mapping + +from pathlib import Path +from dataclasses import dataclass + +from typing import List + +class params: + class Weights_choice(Enum): + BEST = "best" + LAST = "last" + + class Pretrained_model_choice(Enum): + MODEL_NAME = "Model_name" + MODEL_FROM_FILE = "Model_from_file" + + class Data_type(Enum): + SINGLE_IMAGES = "Single_Images" + STACKS = "Stacks" + + # Defaults should be loaded in per submodule + # def get_defaults(): + # # default_params(): + # return { + # # "model":"N2V", + # "model_name": None, + # "model_path": None, + # "ref_str": None, + # "Notebook_version": 1.12, + # "initial_learning_rate": 0.0004, + # "number_of_steps": 100, + # "percentage_validation": 10, + # "image_patches": None, + # "loss_function": None, + # "batch_size": 128, + # "patch_size": 64, + # "Training_source": None, + # "number_of_epochs": 100, + # "Use_Default_Advanced_Parameters": False, + # "trained": False, + # "augmentation": False, + # # "pretrained_model": False, + # "Pretrained_model_choice": params.Pretrained_model_choice.MODEL_NAME, + # "Weights_choice": params.Weights_choice.BEST, + # # "QC_model_path": os.path.join(".dl4mic", "qc"), + # "QC_model_path": "", + # "QC_model_name": None, + # } + + +# if (Use_Default_Advanced_Parameters): +# print("Default advanced parameters enabled") +# # number_of_steps is defined in the following cell in this case +# batch_size = 128 +# percentage_validation = 10 +# initial_learning_rate = 0.0004 + + +class DictLike(object): + def __iter__(self): + return iter(self.__dict__) + + def __len__(self): + return len(self.__dict__) + + def __getitem__(self, arg): + # return getattr(self,arg) #Move away from bloody dict + return getattr(self, arg) + + def __setitem__(self, key, value): + setattr(self, key, value) + # return + + pass + + +@dataclass +class Folders(DataClassDictMixin, DictLike): + """ + Extends DataClassDictMixin and DictLike (probably better alternative + availiable) so that it can be initialised with a dict easy + """ + + # model_name: str + base_out_folder: str = ".dl4mic" + output_folder: str = base_out_folder + QC_model_path: str = None + Training_source: str = None + Training_target: str = None + model_path: str = None + pretrained_model_path: str = None + Source_QC_folder: str = None + Target_QC_folder: str = None + Prediction_model_folder: str = None + Prediction_model_path: str = None + Data_folder: str = None + h5_file_path: str = None + Saving_path: str = None + + def __post_init__(self): + defaults = { + "QC_model_path": "qc", + "Training_source": "training", + "Training_target": "target", + "model_path": "model", + "pretrained_model_path": "pretrained_model", + "Prediction_model_path": "prediction_model", + "Source_QC_folder": "qc_source", + "Target_QC_folder": "qc_target", + "Prediction_model_folder": "pred", + "Data_folder": "data", + "h5_file_path": "weights", + "Saving_path": "augment" + } + for key in defaults: + if self[key] is None: + self[key] = Path(os.path.join(self.output_folder, defaults[key])) + self[key].mkdir(parents=True, exist_ok=True) + + # self.QC_model_path = os.path.join(output_folder, "qc") + # self.Training_source = 
os.path.join(output_folder, "training") + # self.Training_target= os.path.join(output_folder, "target") + # self.model_path = os.path.join(output_folder, "model") + # self.pretrained_model_path = os.path.join(output_folder, "pretrained_model") + # self.Source_QC_folder = os.path.join(output_folder, "qc_source") + # self.Target_QC_folder = os.path.join(output_folder, "qc_target") + # self.Prediction_model_folder = os.path.join(output_folder, "pred") + # self.Data_folder = os.path.join(output_folder, "data") + # self.h5_file_path = os.path.join(output_folder, "weights") + + # # self.model_name = model_name + # self.output_folder = os.path.join(self.base_out_folder, self.model_name) + # self.QC_model_path = os.path.join(self.output_folder, "qc") + # self.Training_source = os.path.join(self.output_folder, "training") + # self.Training_target = os.path.join(self.output_folder, "target") + # self.model_path = os.path.join(self.output_folder, "model") + # self.pretrained_model_path = os.path.join(self.output_folder, "pretrained_model") + # self.Source_QC_folder = os.path.join(self.output_folder, "qc_source") + # self.Target_QC_folder = os.path.join(self.output_folder, "qc_target") + # self.Prediction_model_folder = os.path.join(self.output_folder, "pred") + # self.Data_folder = os.path.join(self.output_folder, "data") + # self.h5_file_path = os.path.join(self.output_folder, "weights") + + +@dataclass +class DL4MicModelParams(DataClassDictMixin, DictLike): + # folders: dataclass + # folders.base_out_folder: str = ".dl4mic" + # X_train: np.array = None + # X_test: np.array = None + # example_image: np.array = None + # TODO make all of these None type and then default in submodule + # May have solved this? + # folders: Folders = Folders() + model_name: str = "temp" + folders: Folders = Folders() + model: str = "dl4mic" + image_patches: int = 100 + ref_str: str = "ref" + loss_function: str = "loss" + pretrained_model_choice: bool = False + Use_pretrained_model: bool = False + Use_the_current_trained_model: bool = False + Use_Data_augmentation: bool = False + Notebook_version: float = 1.12 + initial_learning_rate: float = 0.0004 + number_of_steps: int = 100 + number_of_patches: int = 100 + percentage_validation: int = 10 + batch_size: int = 128 + patch_size: int = 64 + number_of_epochs: int = 100 + Use_Default_Advanced_Parameters: bool = False + trained: bool = False + augmentation: bool = False + # pretrained_model: bool = False + Pretrained_model_choice: str = params.Pretrained_model_choice.MODEL_NAME + Weights_choice: str = params.Weights_choice.BEST + base_out_folder: str = ".dl4mic" + # QC_model_path: str = os.path.join(base_out_folder, "qc") + # Training_source: str = os.path.join(base_out_folder, "training") + # Training_target: str = os.path.join(base_out_folder, "target") + # model_path: str = base_out_folder + # pretrained_model_path: str = os.path.join(base_out_folder, "model") + pretrained_model_name: str = "model" + Source_QC_folder: str = None + Target_QC_folder: str = None + # Prediction_model_folder: str = os.path.join(base_out_folder, "pred") + Prediction_model_name: str = "pred" + # Prediction_model_path: str = Prediction_model_folder + QC_model_name: str = None + Data_type: str = "" + ref_aug: str = str( + '- Augmentor: Bloice, Marcus D., Christof Stocker,' + 'and Andreas Holzinger. "Augmentor: an image augmentation ' + 'library for machine learning." arXiv ' + 'preprint arXiv:1708.04680 (2017).' 
+    )
+
+    bestLearningRate: float = initial_learning_rate
+    lastLearningRate: float = initial_learning_rate
+    Multiply_dataset_by: int = 2
+    Save_augmented_images: bool = False
+    Use_Default_Augmentation_Parameters: bool = True
+    # Augmentation probabilities/magnitudes (consumed by Augmentor).
+    rotate_90_degrees: float = 0.5
+    rotate_270_degrees: float = 0.5
+    flip_left_right: float = 0.5
+    flip_top_bottom: float = 0.5
+    random_zoom: float = 0
+    random_zoom_magnification: float = 0.9
+    random_distortion: float = 0
+    image_shear: float = 0
+    max_image_shear: float = 10
+    skew_image: float = 0
+    skew_image_magnitude: float = 0
+
+    def __post_init__(self):
+        self.folders.output_folder = os.path.join(
+            self.base_out_folder, self.model_name
+        )
+        if self.QC_model_name is not None:
+            self.folders.QC_dir = Path(
+                os.path.join(self.folders.QC_model_path, self.QC_model_name)
+            )
+        # Fill in any folder defaults that are still unset.
+        self.folders.__post_init__()
+
+
+class DL4MicModel(DL4MicModelParams):
+
+    class data(DictLike):
+        # Plain attribute container for run-time state, kept separate from
+        # the configuration fields of the dataclass above.
+        example_image: np.array = None
+        X_train: np.array = None
+        Y_train: np.array = None
+        X_test: np.array = None
+        Y_test: np.array = None
+        time_start: float = None
+        trained: bool = False
+        history: np.array = None
+
+    def __post_init__(self):
+        self.init()
+        self.paths_and_dirs()
+        self.model_specifics()
+        self.interface()
+
+    def paths_and_dirs(self):
+        # Create the folder tree and merge the resulting paths into the
+        # instance attributes (this is what makes e.g. self.Training_source
+        # available on the model itself).
+        self.append_config(utils.make_folders(self.folders.__dict__))
+
+    def init(self):
+        self.authors = ["You"]
+
+    def step_3(self):
+        self.step_3_1()
+        self.step_3_2()
+
+    def step_3_1(self):
+        '''
+        Check the data
+        '''
+        self.checks()
+
+    def step_3_2(self):
+        '''
+        Data augmentation
+        '''
+        self.augmentation()
+
+    def step_3_3(self):
+        '''
+        Load pretrained model
+        '''
+        self.load_pretrained_model()
+
+    def step_4(self):
+        '''
+        Train the network
+        '''
+        self.step_4_1()
+        self.step_4_2()
+
+    def step_4_1(self):
+        '''
+        Prepare the training data and model for training
+        '''
+        self.prepare()
+
+    def step_4_2(self):
+        '''
+        Start Training
+        '''
+        self.train_model()
+
+    def step_5(self):
+        '''
+        Evaluate your model
+        '''
+        self.step_5_1()
+        self.step_5_2()
+
+    def step_5_1(self):
+        '''
+        Inspection of the loss function
+        '''
+        pass
+
+    def step_5_2(self):
+        '''
+        Error mapping and quality metrics estimation
+        '''
+        self.quality()
+
+    def step_6(self):
+        '''
+        Using the trained model
+        '''
+        self.step_6_1()
+        self.step_6_2()
+
+    def step_6_1(self):
+        '''
+        Generate prediction(s) from unseen dataset
+        '''
+        self.predict()
+    def step_6_2(self):
+        '''
+        Assess predicted output
+        '''
+        self.assess()
+
+    def model_specifics(self):
+        pass
+
+    def import_checks(self):
+        pass
+
+    def interface(self):
+        pass
+
+    def set_model_config(self):
+        pass
+
+    def set_model_params(self):
+        pass
+
+    def check_model_params(self):
+        self.check_model_specific_params()
+
+    def check_model_specific_params(self):
+        pass
+
+    def get_ref(self):
+        return self.ref_str
+
+    def append_config(self, config_dict):
+        self.__dict__.update(config_dict)
+
+    def get_config(self):
+        return self.__dict__
+
+    def get_config_df(self):
+        return pd.DataFrame(self.__dict__)
+
+    def get_h5_path(self):
+        self.h5_file_path = utils.get_h5_path(
+            self.pretrained_model_path, self.Weights_choice
+        )
+        return self.h5_file_path
+
+    def use_pretrained_model(self):
+        pass
+
+    def train_model(self):
+        pass
+
+    def get_model_params(self):
+        # Subset of the config that parameterises the model itself.
+        return {key: self[key] for key in self.model_params}
+
+    def get_config_params(self):
+        # Subset of the config that parameterises training.
+        return {key: self[key] for key in self.model_config}
+
+    def model_export_tf(self, model, X_val):
+        model.export_TF(
+            name=self.model_name,
+            description=self.model_description,
+            authors=self.authors,
+            test_img=X_val[0, ..., 0],
+            axes="YX",
+            patch_shape=(
+                self.patch_size,
+                self.patch_size,
+            ),
+        )
+
+    def data_checks(self, show_image=False):
+        # checks.check_for_preexisting_model()
+        # image = checks.get_random_image(self.)
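+        # The folder and patch-size validation itself lives in dl4mic/checks.py;
+        # checks.full(...) below bundles it into a single call.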
+ # Training_source = self.Training_source + Training_source = self.folders.Training_source + output_folder = self.folders.output_folder + patch_size = self.patch_size + + # checks.check_data(image) + + # filename = os.path.join(self.output_folder, "TrainingDataExample.png") + # if show_image: + # checks.display_image(image, filename) + + # checks.check_image_dims(image, self.patch_size) + + return checks.full(Training_source, output_folder, patch_size, show_image) + + def data_augmentation(self): + pass + + def load_pretrained_model(self): + if self.Use_pretrained_model: + + self.h5_file_path = utils.download_model( + self.pretrained_model_path, + self.pretrained_model_choice, + self.pretrained_model_name, + self.Weights_choice, + self.model_path, + ) + + learning_rates_dict = utils.load_model( + self.h5_file_path, + self.pretrained_model_path, + self.Weights_choice, + self.initial_learning_rate, + ) + + self.append_config(learning_rates_dict) + return self.h5_file_path + else: + pass + def prepare(self): + pass + + def train(self): + pass + + def augment(self): + pass + + def checks(self): + pass + + def reporting(self): + pass + + def report(self, time_start=None, trained=None, show_image=False): + # report_args = [ + # "model_name", + # "model_path", + # "ref_str", + # "ref_aug", + # "Notebook_version", + # "initial_learning_rate", + # "number_of_steps", + # "percentage_validation", + # "image_patches", + # "loss_function", + # "batch_size", + # "patch_size", + # "Training_source", + # "number_of_epochs", + # # "time_start", + # "Use_Default_Advanced_Parameters", + # # "trained", + # "augmentation", + # "Use_pretrained_model", + # ] + # extra_args = { + # "time_start": time_start, + # "example_image": self.data.example_image, + # "trained": trained, + # } + + # report_config = {key: self[key] for key in report_args} + # report_config.update(extra_args) + + # # return reporting.pdf_export(**report_config) + self.data.trained = trained + self.data.time_start = time_start + + return reporting.pdf_export( + self.model_name, + self.model_path, + self.ref_str, + self.ref_aug, + self.Notebook_version, + self.initial_learning_rate, + self.number_of_steps, + self.percentage_validation, + self.image_patches, + self.loss_function, + self.batch_size, + self.patch_size, + self.Training_source, + self.number_of_epochs, + self.Use_Default_Advanced_Parameters, + self.data.time_start, + self.data.example_image, + self.data.trained, + self.augmentation, + self.Use_pretrained_model, + ) + + def pre_report( + self, + X_train=None, + X_test=None, + time_start=None, + trained=False, + show_image=False, + ): + if show_image: + prepare.setup_complete(X_train=X_train, X_test=X_test) + # return self.report(time_start=time_start, trained=None, show_image=False) + return self.report(time_start=time_start, trained=trained, show_image=False) + + def post_report( + self, X_train=None, X_test=None, time_start=None, trained=True, show_image=False + ): + return self.report( + time_start=time_start, trained=trained, show_image=show_image + ) + + # def quality_stock(self): + # # Path(self.QC_model_path).mkdir(parents=True, exist_ok=True) + + # return quality.quality_sequence( + # model_path, + # model_name, + # QC_model_name, + # QC_model_path, + # ref_str, + # network, + # Use_the_current_trained_model, + # Source_QC_folder, + # Target_QC_folder, + # ) + def quality_extra(self, **kwargs): + pass + + def quality(self, history=None, show_images=False): + + # model_path = self.model_path + # model_name = self.model_name + 
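+        # NOTE: when Use_the_current_trained_model is set, the fallback from
+        # the QC_model_* settings to the freshly trained model is handled
+        # inside quality.qc_model_checks().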
+
+        if history is not None:
+            self.quality_extra(history=history)
+
+        return quality.full(
+            self.model_path,
+            self.model_name,
+            self.QC_model_name,
+            self.QC_model_path,
+            self.ref_str,
+            self.network,
+            self.Use_the_current_trained_model,
+            self.Source_QC_folder,
+            self.Target_QC_folder,
+            show_images=show_images,
+        )
+
+    def predict(self):
+        Prediction_model_path = self.folders.Prediction_model_path
+        Prediction_model_name = self.Prediction_model_name
+
+        return predict.full(Prediction_model_path, Prediction_model_name)
+
+    def assess(self):
+        Prediction_model_path = self.folders.Prediction_model_path
+        Prediction_model_name = self.Prediction_model_name
+        Data_type = self.Data_type
+
+        return assess.full(Prediction_model_path, Prediction_model_name, Data_type)
+
+    def save_model(self):
+        pass
+
+    def get_model(self, **kwargs):
+        pass
+
+    def run(self, config):
+        pass
+
+    def pre_training(self, X):
+        self.data_checks()
+        self.data_augmentation()
+        self.gleen_data(X)
+        self.split_data(X)
+        self.check_model_params()
+        pdf = self.pre_report(
+            X_train=self.X_train,
+            X_test=self.X_test,
+            show_image=False,
+        )
+        self.pre_training_specific()
+        self.check_model_params()
+        return pdf
+
+    def pre_training_specific(self):
+        pass
+
+    def post_training(self, history=None, show_image=False):
+        self.post_training_specific()
+        self.quality(history)
+        pdf = self.post_report(show_image=show_image)
+        self.predict()
+        self.assess()
+        return pdf
+
+    def post_training_specific(self):
+        pass
+
+    def split_data(self, Xdata):
+        # First `threshold` patches go to validation, the rest to training.
+        threshold = self.threshold
+        X = Xdata[threshold:]
+        X_val = Xdata[:threshold]
+        self.X_train = X
+        self.X_test = X_val
+        return X, X_val
+
+    # Default augmentation tiers (kept for reference, not yet wired in):
+    # def default_augment(self):
+    #     if self.Use_Default_Augmentation_Parameters:
+    #         rotate_90_degrees = 0.5
+    #         rotate_270_degrees = 0.5
+    #         flip_left_right = 0.5
+    #         flip_top_bottom = 0.5
+    #         if not Multiply_dataset_by > 5:
+    #             random_zoom = 0
+    #             random_zoom_magnification = 0.9
+    #             random_distortion = 0
+    #             image_shear = 0
+    #             max_image_shear = 10
+    #             skew_image = 0
+    #             skew_image_magnitude = 0
+    #         if Multiply_dataset_by > 5:
+    #             random_zoom = 0.1
+    #             random_zoom_magnification = 0.9
+    #             random_distortion = 0.5
+    #             image_shear = 0.2
+    #             max_image_shear = 5
+    #             skew_image = 0.2
+    #             skew_image_magnitude = 0.4
+    #         if Multiply_dataset_by > 25:
+    #             random_zoom = 0.5
+    #             random_zoom_magnification = 0.8
+    #             random_distortion = 0.5
+    #             image_shear = 0.5
+    #             max_image_shear = 20
+    #             skew_image = 0.5
+    #             skew_image_magnitude = 0.6
+
+    # def quality_tf(self, model, model_path, model_name, QC_model_name, QC_model_path):
+    #     df = self.get_history_df_from_model_tf(model)
+    #     quality.df_to_csv(df, model_path, model_name)
+    #     quality.display_training_errors(model, QC_model_name, QC_model_path)
+    #     return df
+
+
+class DL4MicModelTF(DL4MicModel):
+    def save_model(self, model, X_val):
+        patch_size = self.patch_size
+        model.export_TF(
+            name=self.model_name,
+            description=self.description,
+            authors=self.authors,
+            test_img=X_val[0, ..., 0],
+            axes="YX",
+            patch_shape=(patch_size, patch_size),
+        )
+        print(
+            "Your model has been successfully exported and can now also be used in the CSBdeep Fiji plugin"
+        )
+
+    @staticmethod
+    def history_to_df(history):
+        return pd.DataFrame(history.history)
+
+    def quality_checks(self, history):
+        pass
+
+
+"""
+TODO
+Fix loading of modules; unsure whether they load when the
+class is loaded or if the init needs to happen first?
+""" + +from .N2V import N2V +from .CARE import CARE + + +# class N2V(): +# # import N2V +# # config=None +# # self.dl4mic_model_config={} +# def __init__(self): +# self.dl4mic_model_config = { +# # "model":"N2V", +# "model_name": None, +# "model_path": None, +# # "ref_str"=, +# "Notebook_version": 1.12, +# "initial_learning_rate": 0.0004, +# "number_of_steps": 100, +# "percentage_validation": 10, +# # "image_patches"=, +# # "loss_function"=, +# "batch_size": 128, +# "patch_size": 64, +# "Training_source": None, +# "number_of_epochs": 100, +# "Use_Default_Advanced_Parameters": False, +# "trained": False, +# "augmentation": False, +# "pretrained_model": False, +# "Pretrained_model_choice": params.Pretrained_model_choice.Model_name, +# "Weights_choice": params.Pretrained_model_choice.best, +# } +# self.model_specifics() + +# def set_model_config(self): +# self.model_config = ["train_steps_per_epoch","train_epochs","train_batch_size +# def set_model_params(self): +# self.model_params = ["model_name","model_path + +# # def __init__(): +# # datagen = N2V_DataGenerator() +# # return +# def get_ref(self): +# return self.ref_str + +# def __getitem__(self, arg): +# return self.dl4mic_model_config[arg] + +# def append_config(self, config_dict): +# self.dl4mic_model_config = self.dl4mic_model_config.update(config_dict) +# return self.dl4mic_model_config + +# def get_config(self): +# # dl4mic_model_config = { +# # "image_patches" = None} +# # dl4mic_model_config = {"image_patches"=1} +# # Xdata.shape[0], +# # "loss_function" = config.train_loss +# return self.dl4mic_model_config + +# def data_checks(self): +# self.patch_size = checks.check_image_dims( +# self.patch_size, self.Training_source +# ) + +# def get_h5_path(self): +# self.h5_file_path = os.path.join( +# self.pretrained_model_path, +# "weights_" + self.Weights_choice + ".h5", +# ) + +# def use_pretrained_model(self): +# pass + +# def interface(self): +# self.full_config = self.dl4mic_model_config +# interface_dict = { +# "name":self.model_name, +# "basedir": self.model_path, +# "train_steps_per_epoch":self.number_of_steps, +# "train_epochs":self.number_of_epochs, +# "train_batch_size":self.batch_size, +# } +# self.N2V_config.update(interface_dict) + +# def get_model_params(self): +# return self.full_config[self.model_params] + +# def get_config_params(self): +# return self.full_config[self.model_config] +# # self.N2V_config["name = self.model_name +# # self.N2V_config["basedir = self.model_path +# # self.N2V_config["basedir = self.model_path +# def model_export_tf(self,model,X_val): +# patch_size = self.batch_size +# model.export_TF( +# name=self.model_name, +# description=self.model_description, +# authors=self.authors, +# test_img=X_val[0,...,0], axes='YX', +# patch_shape=(self.patch_size, +# self.patch_size)) +# def model_specifics(self): +# self.model_name = "N2V" +# self.description = "Noise2Void 2D trained using ZeroCostDL4Mic.'" +# self.authors = ["You diff --git a/dl4mic/predict.py b/dl4mic/predict.py new file mode 100644 index 00000000..e15732e6 --- /dev/null +++ b/dl4mic/predict.py @@ -0,0 +1,31 @@ +from . 
+from . import bcolors
+import os
+
+
+def full(Prediction_model_path, Prediction_model_name):
+    # Only attempt the folder check when both a path and a name were given.
+    if Prediction_model_path is not None and Prediction_model_name is not None:
+        check_folder(Prediction_model_path, Prediction_model_name)
+
+
+def check_folder(Prediction_model_path, Prediction_model_name):
+    try:
+        full_Prediction_model_path = os.path.join(
+            Prediction_model_path, Prediction_model_name
+        )
+    except TypeError:
+        print("Bad or empty model path or name")
+        return
+    if os.path.exists(full_Prediction_model_path):
+        print("The " + Prediction_model_name + " network will be used.")
+        return
+    else:
+        print(bcolors.WARNING + "!! WARNING: The chosen model does not exist !!")
+        print(
+            "Please make sure you provide a valid model path and model name before proceeding further."
+        )
+        return
diff --git a/dl4mic/prepare.py b/dl4mic/prepare.py
new file mode 100644
index 00000000..63bac43d
--- /dev/null
+++ b/dl4mic/prepare.py
@@ -0,0 +1,27 @@
+# --------------------- Here we delete the model folder if it already exists ------------------------
+from . import bcolors
+import shutil
+import os
+import matplotlib.pyplot as plt
+from . import reporting
+
+
+def setup_complete(X_train, X_test):
+
+    X = X_train
+    validation = X_test
+
+    print("Setup done.")
+    # Creates a plot and shows one training patch and one validation patch.
+    plt.figure(figsize=(16, 87))
+    plt.subplot(1, 2, 1)
+    plt.imshow(X[0, ..., 0], cmap="magma")
+    plt.axis("off")
+    plt.title("Training Patch")
+    plt.subplot(1, 2, 2)
+    plt.imshow(validation[0, ..., 0], cmap="magma")
+    plt.axis("off")
+    plt.title("Validation Patch")
+
+    # reporting.pdf_export(pretrained_model = Use_pretrained_model)
diff --git a/dl4mic/quality.py b/dl4mic/quality.py
new file mode 100644
index 00000000..ceac9c7c
--- /dev/null
+++ b/dl4mic/quality.py
@@ -0,0 +1,750 @@
+from . import reporting
+from glob import glob
+# import io
+from matplotlib import pyplot as plt
+import numpy as np
+import pandas as pd
+from tifffile.tifffile import imread, imsave
+from . 
import bcolors +import shutil +import os +from pathlib import Path +import csv +from skimage.metrics import structural_similarity +import numexpr +from skimage.metrics import peak_signal_noise_ratio as psnr +from skimage import io + + +# qc_folder = "Quality Control" + + +def quality_folder_reset(QC_model_path, QC_model_name): + folder = os.path.join(QC_model_path, QC_model_name) + if os.path.exists(folder): + shutil.rmtree(folder) + + Path(folder).mkdir(parents=True, exist_ok=True) + return folder + + +def df_to_csv(df, QC_model_path, QC_model_name): + # lossDataCSVpath = os.path.join(model_path+'/'+model_name+'/Quality Control/','training_evaluation.csv') + try: + lossDataCSVpath = os.path.join( + QC_model_path, QC_model_name, "training_evaluation.csv" + ) + df.to_csv(lossDataCSVpath) + return lossDataCSVpath + except FileNotFoundError: + print("Couldn't find training_evaluation") + return None + + + # with open(lossDataCSVpath, 'w') as f: + # writer = csv.writer(f) + # writer.writerow(['loss','val_loss', 'learning rate']) + # for i in range(len(history.history['loss'])): + # writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]]) + + +def qc_model_checks( + QC_model_name, QC_model_path, model_name, model_path, Use_the_current_trained_model +): + # Here we define the loaded model name and path + # QC_model_name = os.path.basename(QC_model_folder) + # QC_model_path = os.path.dirname(QC_model_folder) + + if Use_the_current_trained_model: + QC_model_name = model_name + QC_model_path = model_path + + # full_QC_model_path = QC_model_path+'/'+QC_model_name+'/' + full_QC_model_path = os.path.join(QC_model_path, QC_model_name) + + if os.path.exists(full_QC_model_path): + print("The " + QC_model_name + " network will be evaluated") + else: + print(bcolors.WARNING + "!! WARNING: The chosen model does not exist !!") + print( + "Please make sure you provide a valid model path and model name before proceeding further." + ) + return full_QC_model_path + + +def inspect_loss(QC_model_name, QC_model_path, show_images=False): + return display_training_errors(QC_model_name, QC_model_path,show_images=show_images) + + +# def make_dir_at_file(file): + +# plot of training errors vs. epoch number +def display_training_errors(QC_model_name, QC_model_path,show_images=False): + # Pandas surely? + lossDataFromCSV = [] + vallossDataFromCSV = [] + + qd_training_eval_csv = os.path.join( + QC_model_path, QC_model_name, "training_evaluation.csv" + ) + + Path(qd_training_eval_csv).parent.mkdir(parents=True, exist_ok=True) + print(Path(qd_training_eval_csv).parent) + try: + with open(qd_training_eval_csv, "r") as csvfile: + csvRead = csv.reader(csvfile, delimiter=",") + next(csvRead) + for row in csvRead: + lossDataFromCSV.append(float(row[0])) + vallossDataFromCSV.append(float(row[1])) + + epochNumber = range(len(lossDataFromCSV)) + plt.figure(figsize=(15, 10)) + + plt.subplot(2, 1, 1) + plt.plot(epochNumber, lossDataFromCSV, label="Training loss") + plt.plot(epochNumber, vallossDataFromCSV, label="Validation loss") + plt.title("Training loss and validation loss vs. epoch number (linear scale)") + plt.ylabel("Loss") + plt.xlabel("Epoch number") + plt.legend() + + plt.subplot(2, 1, 2) + plt.semilogy(epochNumber, lossDataFromCSV, label="Training loss") + plt.semilogy(epochNumber, vallossDataFromCSV, label="Validation loss") + plt.title("Training loss and validation loss vs. 
epoch number (log scale)")
+        plt.ylabel("Loss")
+        plt.xlabel("Epoch number")
+        plt.legend()
+        loss_curve_path = os.path.join(
+            QC_model_path, QC_model_name, "lossCurvePlots.png"
+        )
+        plt.savefig(loss_curve_path)
+        if show_images:
+            plt.show()
+        else:
+            plt.close()
+    except FileNotFoundError:
+        print("CSV not found")
+
+
+def tf_model_predictions_save(
+    model_training, Source_QC_folder, QC_model_path, QC_model_name
+):
+    # model_training = N2V(config=None, name=QC_model_name, basedir=QC_model_path)
+
+    qc_image_path = os.path.join(QC_model_path, QC_model_name, "Prediction")
+    # Make sure the prediction folder exists before saving into it.
+    Path(qc_image_path).mkdir(parents=True, exist_ok=True)
+
+    # List Tif images in Source_QC_folder
+    Source_QC_folder_tif = Source_QC_folder + "/*.tif"
+    Z = sorted(glob(Source_QC_folder_tif))
+    Z = list(map(imread, Z))
+
+    print("Number of test dataset found in the folder: " + str(len(Z)))
+
+    # Perform prediction on all datasets in the Source_QC folder
+    for filename in os.listdir(Source_QC_folder):
+        img = imread(os.path.join(Source_QC_folder, filename))
+        predicted = model_training.predict(img, axes="YX", n_tiles=(2, 1))
+        # Save next to the QC model rather than into the current working directory.
+        imsave(os.path.join(qc_image_path, filename), predicted)
+
+
+def ssim(img1, img2):
+    return structural_similarity(
+        img1,
+        img2,
+        data_range=1.0,
+        full=True,
+        gaussian_weights=True,
+        use_sample_covariance=False,
+        sigma=1.5,
+    )
+
+
+def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
+    """Percentile-based image normalization (adapted from Martin Weigert)."""
+    mi = np.percentile(x, pmin, axis=axis, keepdims=True)
+    ma = np.percentile(x, pmax, axis=axis, keepdims=True)
+    return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
+
+
+def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):
+    """This function is adapted from Martin Weigert"""
+    if dtype is not None:
+        x = x.astype(dtype, copy=False)
+        mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype, copy=False)
+        ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype, copy=False)
+        eps = dtype(eps)
+
+    try:
+        x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
+    except ImportError:
+        x = (x - mi) / (ma - mi + eps)
+
+    if clip:
+        x = np.clip(x, 0, 1)
+
+    return x
+
+
+def norm_minmse(gt, x, normalize_gt=True):
+    """
+    Normalizes and affinely scales an image pair such that the MSE is
+    minimized (adapted from Martin Weigert).
+
+    Parameters
+    ----------
+    gt: ndarray
+        the ground truth image
+    x: ndarray
+        the image that will be affinely scaled
+    normalize_gt: bool
+        set to True if the gt image should be normalized (default)
+
+    Returns
+    -------
+    gt_scaled, x_scaled
+    """
+    if normalize_gt:
+        gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy=False)
+    x = x.astype(np.float32, copy=False) - np.mean(x)
+    gt = gt.astype(np.float32, copy=False) - np.mean(gt)
+    scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
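+    # `scale` is the ordinary least-squares slope cov(x, gt) / var(x): for
+    # zero-mean images it is the affine factor s minimising ||gt - s*x||^2.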
+    return gt, scale * x
+
+
+def create_qc_csv(QC_model_path, QC_model_name, Source_QC_folder, Target_QC_folder):
+
+    # Open and create the csv file that will contain all the QC metrics
+    qc_csv_path = os.path.join(
+        QC_model_path,
+        QC_model_name,
+        "QC_metrics_" + QC_model_name + ".csv",
+    )
+    with open(qc_csv_path, "w", newline="") as file:
+        writer = csv.writer(file)
+
+        # Write the header in the csv file
+        writer.writerow(
+            [
+                "image #",
+                "Prediction v. GT mSSIM",
+                "Input v. GT mSSIM",
+                "Prediction v. GT NRMSE",
+                "Input v. GT NRMSE",
+                "Prediction v. GT PSNR",
+                "Input v. GT PSNR",
+            ]
+        )
+
+        # Let's loop through the provided dataset in the QC folders
+        try:
+            for i in os.listdir(Source_QC_folder):
+                if not os.path.isdir(os.path.join(Source_QC_folder, i)):
+                    print("Running QC on: " + i)
+                    # Target test data (ground truth)
+                    test_GT = io.imread(os.path.join(Target_QC_folder, i))
+
+                    # Source test data
+                    test_source = io.imread(os.path.join(Source_QC_folder, i))
+
+                    # Normalize the images wrt each other by minimizing the MSE between GT and Source image
+                    test_GT_norm, test_source_norm = norm_minmse(
+                        test_GT, test_source, normalize_gt=True
+                    )
+
+                    # Prediction
+                    test_prediction = io.imread(
+                        os.path.join(QC_model_path, QC_model_name, "Prediction", i)
+                    )
+
+                    # Normalize the images wrt each other by minimizing the MSE between GT and prediction
+                    test_GT_norm, test_prediction_norm = norm_minmse(
+                        test_GT, test_prediction, normalize_gt=True
+                    )
+
+                    # Calculate the SSIM maps
+                    index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(
+                        test_GT_norm, test_prediction_norm
+                    )
+                    index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(
+                        test_GT_norm, test_source_norm
+                    )
+
+                    # Save the SSIM maps; the map name is prefixed onto the
+                    # image name (a single file), not used as a sub-folder.
+                    img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
+                    io.imsave(
+                        os.path.join(
+                            QC_model_path,
+                            QC_model_name,
+                            "SSIM_GTvsPrediction_" + i,
+                        ),
+                        img_SSIM_GTvsPrediction_32bit,
+                    )
+
+                    img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)
+                    io.imsave(
+                        os.path.join(
+                            QC_model_path,
+                            QC_model_name,
+                            "SSIM_GTvsSource_" + i,
+                        ),
+                        img_SSIM_GTvsSource_32bit,
+                    )
+
+                    # Calculate the Root Squared Error (RSE) maps
+                    img_RSE_GTvsPrediction = np.sqrt(
+                        np.square(test_GT_norm - test_prediction_norm)
+                    )
+                    img_RSE_GTvsSource = np.sqrt(
+                        np.square(test_GT_norm - test_source_norm)
+                    )
+
+                    # Save the RSE maps
+                    img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
+                    img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)
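+                    # 32-bit float TIFFs open correctly in Fiji/ImageJ.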
+                    io.imsave(
+                        os.path.join(
+                            QC_model_path,
+                            QC_model_name,
+                            "RSE_GTvsPrediction_" + i,
+                        ),
+                        img_RSE_GTvsPrediction_32bit,
+                    )
+                    io.imsave(
+                        os.path.join(
+                            QC_model_path,
+                            QC_model_name,
+                            "RSE_GTvsSource_" + i,
+                        ),
+                        img_RSE_GTvsSource_32bit,
+                    )
+
+                    # Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
+                    NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
+                    NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))
+
+                    # We can also measure the peak signal to noise ratio between the images
+                    PSNR_GTvsPrediction = psnr(
+                        test_GT_norm, test_prediction_norm, data_range=1.0
+                    )
+                    PSNR_GTvsSource = psnr(test_GT_norm, test_source_norm, data_range=1.0)
+
+                    writer.writerow(
+                        [
+                            i,
+                            str(index_SSIM_GTvsPrediction),
+                            str(index_SSIM_GTvsSource),
+                            str(NRMSE_GTvsPrediction),
+                            str(NRMSE_GTvsSource),
+                            str(PSNR_GTvsPrediction),
+                            str(PSNR_GTvsSource),
+                        ]
+                    )
+
+            full_QC_model_path = os.path.join(QC_model_path, QC_model_name)
+            # All data is now processed and saved
+            Test_FileList = os.listdir(
+                Source_QC_folder
+            )  # this assumes, as it should, that both source and target are named the same
+            if len(Test_FileList) == 0:
+                print("No files in QC_folder")
+            else:
+                plt.figure(figsize=(15, 15))
+                # Currently only displays the last computed set, from memory
+                # Target (Ground-truth)
+                plt.subplot(3, 3, 1)
+                plt.axis("off")
+                img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))
+                plt.imshow(img_GT)
+                plt.title("Target", fontsize=15)
+
+                # Source
+                plt.subplot(3, 3, 2)
+                plt.axis("off")
+                img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))
+                plt.imshow(img_Source)
+                plt.title("Source", fontsize=15)
+
+                # Prediction
+                plt.subplot(3, 3, 3)
+                plt.axis("off")
+                img_Prediction_path = os.path.join(
+                    QC_model_path, QC_model_name, "Prediction", Test_FileList[-1]
+                )
+                img_Prediction = io.imread(img_Prediction_path)
+                plt.imshow(img_Prediction)
+                plt.title("Prediction", fontsize=15)
+
+                # Setting up colours
+                cmap = plt.cm.CMRmap
+
+                # SSIM between GT and Source
+                plt.subplot(3, 3, 5)
+                plt.tick_params(
+                    axis="both",  # changes apply to the x-axis and y-axis
+                    which="both",  # both major and minor ticks are affected
+                    bottom=False,  # ticks along the bottom edge are off
+                    top=False,  # ticks along the top edge are off
+                    left=False,  # ticks along the left edge are off
+                    right=False,  # ticks along the right edge are off
+                    labelbottom=False,
+                    labelleft=False,
+                )
+                imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap=cmap, vmin=0, vmax=1)
+                plt.colorbar(imSSIM_GTvsSource, fraction=0.046, pad=0.04)
+                plt.title("Target vs. 
Source", fontsize=15) + plt.xlabel("mSSIM: " + str(round(index_SSIM_GTvsSource, 3)), fontsize=14) + plt.ylabel("SSIM maps", fontsize=20, rotation=0, labelpad=75) + + # SSIM between GT and Prediction + plt.subplot(3, 3, 6) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imSSIM_GTvsPrediction = plt.imshow( + img_SSIM_GTvsPrediction, cmap=cmap, vmin=0, vmax=1 + ) + plt.colorbar(imSSIM_GTvsPrediction, fraction=0.046, pad=0.04) + plt.title("Target vs. Prediction", fontsize=15) + plt.xlabel("mSSIM: " + str(round(index_SSIM_GTvsPrediction, 3)), fontsize=14) + + # Root Squared Error between GT and Source + plt.subplot(3, 3, 8) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap=cmap, vmin=0, vmax=1) + plt.colorbar(imRSE_GTvsSource, fraction=0.046, pad=0.04) + plt.title("Target vs. Source", fontsize=15) + plt.xlabel( + "NRMSE: " + + str(round(NRMSE_GTvsSource, 3)) + + ", PSNR: " + + str(round(PSNR_GTvsSource, 3)), + fontsize=14, + ) + # plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3))) + plt.ylabel("RSE maps", fontsize=20, rotation=0, labelpad=75) + + # Root Squared Error between GT and Prediction + plt.subplot(3, 3, 9) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap=cmap, vmin=0, vmax=1) + plt.colorbar(imRSE_GTvsPrediction, fraction=0.046, pad=0.04) + plt.title("Target vs. 
Prediction", fontsize=15) + plt.xlabel( + "NRMSE: " + + str(round(NRMSE_GTvsPrediction, 3)) + + ", PSNR: " + + str(round(PSNR_GTvsPrediction, 3)), + fontsize=14, + ) + QC_example_data_path = os.path.join( + QC_model_path, QC_model_name, "QC_example_data.png" + ) + plt.savefig(QC_example_data_path, bbox_inches="tight", pad_inches=0) + except FileNotFoundError: + print("No prediction example") + + +def error_mapping_report( + Target_QC_folder, + Source_QC_folder, + QC_model_path, + QC_model_name, + img_SSIM_GTvsPrediction, + index_SSIM_GTvsSource, + img_SSIM_GTvsSource, + index_SSIM_GTvsPrediction, + NRMSE_GTvsSource, + PSNR_GTvsSource, + img_RSE_GTvsSource, + NRMSE_GTvsPrediction, + PSNR_GTvsPrediction, + img_RSE_GTvsPrediction, +): + full_QC_model_path = os.path.join(QC_model_path, QC_model_name) + # All data is now processed saved + Test_FileList = os.listdir( + Source_QC_folder + ) # this assumes, as it should, that both source and target are named the same + + plt.figure(figsize=(15, 15)) + # Currently only displays the last computed set, from memory + # Target (Ground-truth) + plt.subplot(3, 3, 1) + plt.axis("off") + img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1])) + plt.imshow(img_GT) + plt.title("Target", fontsize=15) + + # Source + plt.subplot(3, 3, 2) + plt.axis("off") + img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1])) + plt.imshow(img_Source) + plt.title("Source", fontsize=15) + + # Prediction + plt.subplot(3, 3, 3) + plt.axis("off") + img_Prediction_path = os.path.join( + QC_model_path, QC_model_name, "Prediction", Test_FileList[-1] + ) + img_Prediction = io.imread( + img_Prediction_path, + ) + plt.imshow(img_Prediction) + plt.title("Prediction", fontsize=15) + + # Setting up colours + cmap = plt.cm.CMRmap + + # SSIM between GT and Source + plt.subplot(3, 3, 5) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap=cmap, vmin=0, vmax=1) + plt.colorbar(imSSIM_GTvsSource, fraction=0.046, pad=0.04) + plt.title("Target vs. Source", fontsize=15) + plt.xlabel("mSSIM: " + str(round(index_SSIM_GTvsSource, 3)), fontsize=14) + plt.ylabel("SSIM maps", fontsize=20, rotation=0, labelpad=75) + + # SSIM between GT and Prediction + plt.subplot(3, 3, 6) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imSSIM_GTvsPrediction = plt.imshow( + img_SSIM_GTvsPrediction, cmap=cmap, vmin=0, vmax=1 + ) + plt.colorbar(imSSIM_GTvsPrediction, fraction=0.046, pad=0.04) + plt.title("Target vs. 
Prediction", fontsize=15) + plt.xlabel("mSSIM: " + str(round(index_SSIM_GTvsPrediction, 3)), fontsize=14) + + # Root Squared Error between GT and Source + plt.subplot(3, 3, 8) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap=cmap, vmin=0, vmax=1) + plt.colorbar(imRSE_GTvsSource, fraction=0.046, pad=0.04) + plt.title("Target vs. Source", fontsize=15) + plt.xlabel( + "NRMSE: " + + str(round(NRMSE_GTvsSource, 3)) + + ", PSNR: " + + str(round(PSNR_GTvsSource, 3)), + fontsize=14, + ) + # plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3))) + plt.ylabel("RSE maps", fontsize=20, rotation=0, labelpad=75) + + # Root Squared Error between GT and Prediction + plt.subplot(3, 3, 9) + # plt.axis('off') + plt.tick_params( + axis="both", # changes apply to the x-axis and y-axis + which="both", # both major and minor ticks are affected + bottom=False, # ticks along the bottom edge are off + top=False, # ticks along the top edge are off + left=False, # ticks along the left edge are off + right=False, # ticks along the right edge are off + labelbottom=False, + labelleft=False, + ) + imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap=cmap, vmin=0, vmax=1) + plt.colorbar(imRSE_GTvsPrediction, fraction=0.046, pad=0.04) + plt.title("Target vs. Prediction", fontsize=15) + plt.xlabel( + "NRMSE: " + + str(round(NRMSE_GTvsPrediction, 3)) + + ", PSNR: " + + str(round(PSNR_GTvsPrediction, 3)), + fontsize=14, + ) + QC_example_data_path = os.path.join( + QC_model_path, QC_model_name, "QC_example_data.png" + ) + plt.savefig(QC_example_data_path, bbox_inches="tight", pad_inches=0) + + +def quality_tf(history, model_path, model_name, QC_model_name, QC_model_path): + df = get_history_df_from_model_tf(history) + df_to_csv(df, model_path, model_name) + try: + display_training_errors(model_name, model_path) + except FileNotFoundError: + print("Couldn't find loss csv") + + return df + + +def get_history_df_from_model_tf(history): + return pd.DataFrame(history) + + +def full( + model_path, + model_name, + QC_model_name, + QC_model_path, + ref_str, + network, + Use_the_current_trained_model=True, + Source_QC_folder=None, + Target_QC_folder=None, + show_images=False +): + full_QC_model_path = os.path.join(QC_model_path, QC_model_name) + # quality_tf(self, model, model_path, model_name) + quality_folder_reset(QC_model_path, QC_model_name) + qc_model_checks( + QC_model_name, + QC_model_path, + model_name, + model_path, + Use_the_current_trained_model, + ) + inspect_loss(QC_model_name, QC_model_path, show_images=show_images) + if Source_QC_folder is not None: + create_qc_csv(QC_model_path, QC_model_name, Source_QC_folder, Target_QC_folder) + reporting.qc_pdf_export(QC_model_name, QC_model_path, ref_str, network) diff --git a/dl4mic/reporting.py b/dl4mic/reporting.py new file mode 100644 index 00000000..ca997d8e --- /dev/null +++ b/dl4mic/reporting.py @@ -0,0 +1,485 @@ +import numpy as np +from matplotlib import pyplot as plt +import urllib +import os, random +import shutil +import zipfile +from tifffile import imread, imsave +import time +import sys +import wget +from pathlib import Path +import pandas as 
pd
+import csv
+from glob import glob
+from scipy import signal
+from scipy import ndimage
+from skimage import io
+from sklearn.linear_model import LinearRegression
+from skimage.util import img_as_uint
+import matplotlib as mpl
+from skimage.metrics import structural_similarity
+from skimage.metrics import peak_signal_noise_ratio as psnr
+from astropy.visualization import simple_norm
+from skimage import img_as_float32
+from fpdf import FPDF, HTMLMixin
+from datetime import datetime
+from pip._internal.operations.freeze import freeze
+import subprocess
+
+from . import utils
+
+
+def pdf_export(
+    model_name,
+    model_path,
+    ref_str,
+    ref_aug,
+    Notebook_version,
+    initial_learning_rate,
+    number_of_steps,
+    percentage_validation,
+    image_patches,
+    loss_function,
+    batch_size,
+    patch_size,
+    Training_source,
+    number_of_epochs,
+    Use_Default_Advanced_Parameters,
+    time_start=None,
+    example_image=None,
+    trained=False,
+    augmentation=False,
+    Use_pretrained_model=False,
+):
+    class MyFPDF(FPDF, HTMLMixin):
+        pass
+
+    if time_start is not None:
+        hour, mins, sec = utils.time_elapsed(time_start)
+    else:
+        hour, mins, sec = [0] * 3
+
+    pdf = MyFPDF()
+    pdf.add_page()
+    pdf.set_right_margin(-1)
+    pdf.set_font("Arial", size=11, style="B")
+
+    # NOTE: the network name is currently hardcoded for the N2V 2D notebook.
+    Network = "Noise2Void 2D"
+    day = datetime.now()
+    datetime_str = str(day)[0:10]
+
+    Header = (
+        "Training report for "
+        + Network
+        + " model ("
+        + model_name
+        + ")\nDate: "
+        + datetime_str
+    )
+    pdf.multi_cell(180, 5, txt=Header, align="L")
+
+    # Add another cell with the training time, if the model was trained.
+    if trained:
+        training_time = (
+            "Training time: "
+            + str(hour)
+            + "hour(s) "
+            + str(mins)
+            + "min(s) "
+            + str(round(sec))
+            + "sec(s)"
+        )
+        pdf.cell(190, 5, txt=training_time, ln=1, align="L")
+    pdf.ln(1)
+
+    Header_2 = "Information for your materials and method:"
+    pdf.cell(190, 5, txt=Header_2, ln=1, align="L")
+
+    all_packages = ""
+    for requirement in freeze(local_only=True):
+        all_packages = all_packages + requirement + ", "
+
+    # Main packages and their version numbers
+    main_packages = ""
+    version_numbers = []
+    for name in ["tensorflow", "numpy", "Keras", "csbdeep"]:
+        find_name = all_packages.find(name)
+        main_packages = (
+            main_packages
+            + all_packages[find_name : all_packages.find(",", find_name)]
+            + ", "
+        )
+        # Version numbers only here:
+        version_numbers.append(
+            all_packages[find_name + len(name) + 2 : all_packages.find(",", find_name)]
+        )
+
+    cuda_version = subprocess.run("nvcc --version", stdout=subprocess.PIPE, shell=True)
+    cuda_version = cuda_version.stdout.decode("utf-8")
+    cuda_version = cuda_version[cuda_version.find(", V") + 3 : -1]
+    gpu_name = subprocess.run("nvidia-smi", stdout=subprocess.PIPE, shell=True)
+    gpu_name = gpu_name.stdout.decode("utf-8")
+    if "Tesla" in gpu_name:
+        gpu_name = gpu_name[gpu_name.find("Tesla") : gpu_name.find("Tesla") + 10]
+    else:
+        # Fall back to CPU when nvidia-smi does not report a (Tesla) GPU.
+        gpu_name = "CPU"
+
+    shape = io.imread(
+        os.path.join(Training_source, os.listdir(Training_source)[0])
+    ).shape
+    dataset_size = len(os.listdir(Training_source))
notebook (v " + + str(Notebook_version) + + ") (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v " + + str(version_numbers[0]) + + "), Keras (v " + + str(version_numbers[2]) + + "), csbdeep (v " + + str(version_numbers[3]) + + "), numpy (v " + + str(version_numbers[1]) + + "), cuda (v " + + str(cuda_version) + + "). The training was accelerated using a " + + str(gpu_name) + + "GPU." + ) + + if Use_pretrained_model: + text = ( + "The " + + Network + + " model was trained for " + + str(number_of_epochs) + + " epochs on " + + str(image_patches) + + " paired image patches (image dimensions: " + + str(shape) + + ", patch size: (" + + str(patch_size) + + "," + + str(patch_size) + + ")) with a batch size of " + + str(batch_size) + + " and a " + + str(loss_function) + + " loss function, using the " + + str(Network) + + " ZeroCostDL4Mic notebook (v " + + str(Notebook_version) + + ") (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v " + + str(version_numbers[0]) + + "), Keras (v " + + str(version_numbers[2]) + + "), csbdeep (v " + + str(version_numbers[3]) + + "), numpy (v " + + str(version_numbers[1]) + + "), cuda (v " + + str(cuda_version) + + "). The training was accelerated using a " + + str(gpu_name) + + "GPU." + ) + + pdf.set_font("") + pdf.set_font_size(10.0) + pdf.multi_cell(190, 5, txt=text, align="L") + pdf.set_font("") + pdf.set_font("Arial", size=10, style="B") + pdf.ln(1) + pdf.cell(26, 5, txt="Augmentation: ", ln=0) + pdf.set_font("") + if augmentation: + aug_text = "The dataset was augmented by default." + else: + aug_text = "No augmentation was used for training." + pdf.multi_cell(190, 5, txt=aug_text, align="L") + pdf.set_font("Arial", size=11, style="B") + pdf.ln(1) + pdf.cell(180, 5, txt="Parameters", align="L", ln=1) + pdf.set_font("") + pdf.set_font_size(10.0) + if Use_Default_Advanced_Parameters: + pdf.cell(200, 5, txt="Default Advanced Parameters were enabled") + pdf.cell(200, 5, txt="The following parameters were used for training:") + pdf.ln(1) + html = """ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+    <table width=40% style="margin-left:0px;">
+        <tr>
+            <th width = 50% align="left">Parameter</th>
+            <th width = 50% align="left">Value</th>
+        </tr>
+        <tr>
+            <td width = 50%>number_of_epochs</td>
+            <td width = 50%>{0}</td>
+        </tr>
+        <tr>
+            <td width = 50%>patch_size</td>
+            <td width = 50%>{1}</td>
+        </tr>
+        <tr>
+            <td width = 50%>batch_size</td>
+            <td width = 50%>{2}</td>
+        </tr>
+        <tr>
+            <td width = 50%>number_of_steps</td>
+            <td width = 50%>{3}</td>
+        </tr>
+        <tr>
+            <td width = 50%>percentage_validation</td>
+            <td width = 50%>{4}</td>
+        </tr>
+        <tr>
+            <td width = 50%>initial_learning_rate</td>
+            <td width = 50%>{5}</td>
+        </tr>
+    </table>
+ """.format( + number_of_epochs, + str(patch_size) + "x" + str(patch_size), + batch_size, + number_of_steps, + percentage_validation, + initial_learning_rate, + ) + pdf.write_html(html) + + # pdf.multi_cell(190, 5, txt = text_2, align='L') + pdf.set_font("Arial", size=11, style="B") + pdf.ln(1) + pdf.cell(190, 5, txt="Training Dataset", align="L", ln=1) + pdf.set_font("") + pdf.set_font("Arial", size=10, style="B") + pdf.cell(28, 5, txt="Training_source:", align="L", ln=0) + pdf.set_font("") + pdf.multi_cell(170, 5, txt=str(Training_source), align="L") + # pdf.set_font('') + # pdf.set_font('Arial', size = 10, style = 'B') + # pdf.cell(28, 5, txt= 'Training_target:', align = 'L', ln=0) + # pdf.set_font('') + # pdf.multi_cell(170, 5, txt = Training_target, align = 'L') + # pdf.cell(190, 5, txt=aug_text, align='L', ln=1) + pdf.ln(1) + pdf.set_font("") + pdf.set_font("Arial", size=10, style="B") + pdf.cell(21, 5, txt="Model Path:", align="L", ln=0) + pdf.set_font("") + pdf.multi_cell(170, 5, txt=str(model_path) + "/" + str(model_name), align="L") + pdf.ln(1) + pdf.cell(60, 5, txt="Example Training Image", ln=1) + pdf.ln(1) + if example_image != None: + exp_size = example_image.shape + pdf.image( + example_image, + x=11, + y=None, + w=round(exp_size[1] / 8), + h=round(exp_size[0] / 8), + ) + pdf.ln(1) + ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).' + pdf.multi_cell(190, 5, txt=ref_1, align="L") + ref_2 = ref_str + pdf.multi_cell(190, 5, txt=ref_str, align="L") + if augmentation: + pdf.multi_cell(190, 5, txt=ref_aug, align="L") + pdf.ln(3) + reminder = "Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo" + pdf.set_font("Arial", size=11, style="B") + pdf.multi_cell(190, 5, txt=reminder, align="C") + + pdf.output(os.path.join(model_path, model_name) + "_training_report.pdf") + return pdf + + +def qc_pdf_export(QC_model_name, QC_model_path, ref_str, Network): + class MyFPDF(FPDF, HTMLMixin): + pass + + pdf = MyFPDF() + pdf.add_page() + pdf.set_right_margin(-1) + pdf.set_font("Arial", size=11, style="B") + + # Network = "Noise2Void 2D" + + day = datetime.now() + datetime_str = str(day)[0:10] + + Header = ( + "Quality Control report for " + + Network + + " model (" + + QC_model_name + + ")\nDate: " + + datetime_str + ) + pdf.multi_cell(180, 5, txt=Header, align="L") + + all_packages = "" + for requirement in freeze(local_only=True): + all_packages = all_packages + requirement + ", " + + pdf.set_font("") + pdf.set_font("Arial", size=11, style="B") + pdf.ln(2) + pdf.cell(190, 5, txt="Development of Training Losses", ln=1, align="L") + pdf.ln(1) + if os.path.exists(os.path.join(QC_model_path, "lossCurvePlots.png")): + exp_size = io.imread( + os.path.join(QC_model_path, "lossCurvePlots.png") + ).shape + pdf.image( + os.path.join(QC_model_path, "lossCurvePlots.png"), + x=11, + y=None, + w=round(exp_size[1] / 8), + h=round(exp_size[0] / 8), + ) + else: + pdf.set_font("") + pdf.set_font("Arial", size=10) + pdf.cell( + 190, + 5, + txt="If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.", + ) + pdf.ln(2) + pdf.set_font("") + pdf.set_font("Arial", size=10, style="B") + pdf.ln(3) + pdf.cell(80, 5, txt="Example Quality Control Visualisation", ln=1) + pdf.ln(1) + try: + exp_size = 
io.imread( + os.path.join(QC_model_path, "QC_example_data.png") + ).shape + pdf.image( + os.path.join(QC_model_path, "QC_example_data.png"), + x=16, + y=None, + w=round(exp_size[1] / 10), + h=round(exp_size[0] / 10), + ) + except FileNotFoundError: + print("Not QC example image found") + + pdf.ln(1) + pdf.set_font("") + pdf.set_font("Arial", size=11, style="B") + pdf.ln(1) + pdf.cell(180, 5, txt="Quality Control Metrics", align="L", ln=1) + pdf.set_font("") + pdf.set_font_size(10.0) + + pdf.ln(1) + html = """ + + + """ + try: + with open( + os.path.join(QC_model_path, "QC_metrics_" + QC_model_name + ".csv"), + "r", + ) as csvfile: + metrics = csv.reader(csvfile) + header = next(metrics) + image = header[0] + mSSIM_PvsGT = header[1] + mSSIM_SvsGT = header[2] + NRMSE_PvsGT = header[3] + NRMSE_SvsGT = header[4] + PSNR_PvsGT = header[5] + PSNR_SvsGT = header[6] + header = """ + + + + + + + + + """.format( + image, + mSSIM_PvsGT, + mSSIM_SvsGT, + NRMSE_PvsGT, + NRMSE_SvsGT, + PSNR_PvsGT, + PSNR_SvsGT, + ) + html = html + header + for row in metrics: + image = row[0] + mSSIM_PvsGT = row[1] + mSSIM_SvsGT = row[2] + NRMSE_PvsGT = row[3] + NRMSE_SvsGT = row[4] + PSNR_PvsGT = row[5] + PSNR_SvsGT = row[6] + cells = """ + + + + + + + + + """.format( + image, + str(round(float(mSSIM_PvsGT), 3)), + str(round(float(mSSIM_SvsGT), 3)), + str(round(float(NRMSE_PvsGT), 3)), + str(round(float(NRMSE_SvsGT), 3)), + str(round(float(PSNR_PvsGT), 3)), + str(round(float(PSNR_SvsGT), 3)), + ) + html = html + cells + html = html + """
{0}{1}{2}{3}{4}{5}{6}
{0}{1}{2}{3}{4}{5}{6}
""" + except FileNotFoundError: + print("No qc csv found") + pdf.write_html(html) + + pdf.ln(1) + pdf.set_font("") + pdf.set_font_size(10.0) + ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).' + pdf.multi_cell(190, 5, txt=ref_1, align="L") + # ref_2 = '- Noise2Void: Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. "Noise2void-learning denoising from single noisy images." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.' + pdf.multi_cell(190, 5, txt=ref_str, align="L") + pdf.ln(3) + reminder = "To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name." + + pdf.set_font("Arial", size=11, style="B") + pdf.multi_cell(190, 5, txt=reminder, align="C") + + pdf.output(os.path.join(QC_model_path, QC_model_name + "_QC_report.pdf")) diff --git a/dl4mic/results.py b/dl4mic/results.py new file mode 100644 index 00000000..952befa0 --- /dev/null +++ b/dl4mic/results.py @@ -0,0 +1,47 @@ +import pandas as pd +import os +import shutil +import time +from . import pdf_export +import csv + +def tf_history_convert(history): + lossData = pd.DataFrame(history.history) +def torch_history_convert(history): + pass + +def df_history_to_report(lossData,model_path,model_name,history,start,model): + if os.path.exists(model_path+"/"+model_name+"/Quality Control"): + shutil.rmtree(model_path+"/"+model_name+"/Quality Control") + + os.makedirs(model_path+"/"+model_name+"/Quality Control") + + # The training evaluation.csv is saved (overwrites the Files if needed). + lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv' + with open(lossDataCSVpath, 'w') as f: + writer = csv.writer(f) + writer.writerow(['loss','val_loss', 'learning rate']) + for i in range(len(history.history['loss'])): + writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]]) + + + # Displaying the time elapsed for training + dt = time.time() - start + mins, sec = divmod(dt, 60) + hour, mins = divmod(mins, 60) + print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)") + +def tf_model_export(model,model_name,model_description,patch_size,X_val,Use_pretrained_model,authors=["You"]): + model.export_TF(name=model_name, + description=model_description, + authors=authors, + test_img=X_val[0,...,0], axes='YX', + patch_shape=(patch_size, patch_size)) + + print("Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin") + + pdf_export(trained = True, pretrained_model = Use_pretrained_model) + +def torch_model_export(): + pass + diff --git a/dl4mic/train.py b/dl4mic/train.py new file mode 100644 index 00000000..9224cb8c --- /dev/null +++ b/dl4mic/train.py @@ -0,0 +1,13 @@ +# --------------------- Here we delete the model folder if it already exist ------------------------ +from . import bcolors +import shutil +import os +import matplotlib.pyplot as plt +from . import pdf_export +from . import bcolors + +def delete_model_if_folder(model_path,model_name): + if os.path.exists(model_path+'/'+model_name): + print(bcolors.WARNING +"!! 
diff --git a/dl4mic/utils.py b/dl4mic/utils.py new file mode 100644 index 00000000..91bd9cdd --- /dev/null +++ b/dl4mic/utils.py @@ -0,0 +1,205 @@ +from . import bcolors +import time +import pandas as pd +import os +import wget +import shutil + +import inspect +import functools +from pathlib import Path + +# def test_tf_gpu(): +# if tf.test.gpu_device_name() == "": +# print("You do not have GPU access.") +# print("Did you change your runtime ?") +# print( +# "If the runtime setting is correct then Google did not allocate a GPU for your session" +# ) +# print("Expect slow performance. To access GPU try reconnecting later") +# else: +# print("You have GPU access") +# # !nvidia-smi + + + +def info_about_model(Use_pretrained_model, h5_file_path): + # Display info about the pretrained model to be loaded (or not) + if Use_pretrained_model: + print("Weights found in:") + print(h5_file_path) + print("will be loaded prior to training.") + else: + print(bcolors.WARNING + "No pretrained network will be used.") + + +def time_elapsed(time_start): + dt = time.time() - time_start + mins, sec = divmod(dt, 60) + hour, mins = divmod(mins, 60) + print("Time elapsed:", hour, "hour(s)", mins, "min(s)", round(sec), "sec(s)") + return hour, mins, sec + + +def read_latest_notebook_version(Notebook_version, csv_url): + Latest_notebook_version = pd.read_csv(csv_url) + # "https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv" + print("Notebook version: " + Notebook_version[0]) + strlist = Notebook_version[0].split(".") + Notebook_version_main = strlist[0] + "." + strlist[1] + if Notebook_version_main == Latest_notebook_version.columns[0]: + print("This notebook is up-to-date.") + else: + print( + bcolors.WARNING + + "A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki" + ) + return Latest_notebook_version + +
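read_latest_notebook_version compares the local major.minor version string against the single column header of the release CSV, which holds the latest released version. An illustrative call (not part of the patch; the one-element version list mirrors how the notebooks store Notebook_version):

from dl4mic.utils import read_latest_notebook_version

latest = read_latest_notebook_version(
    ["1.12"],
    "https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv",
)
# prints "This notebook is up-to-date." when "1.12" is the CSV's column header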
+def get_h5_path(pretrained_model_path, Weights_choice): + h5_file_path = os.path.join( + pretrained_model_path, + "weights_" + Weights_choice + ".h5", + ) + return h5_file_path + + +def download_model( + pretrained_model_path, + pretrained_model_choice, + pretrained_model_name, + Weights_choice, + output_folder, +): + # params.Pretrained_model_choice.Model_from_file + + if pretrained_model_choice == "Model_from_file": + h5_file_path = os.path.join( + pretrained_model_path, "weights_" + str(Weights_choice) + ".h5" + ) + if pretrained_model_choice == "Model_name": + # pretrained_model_name = "Model_name" + pretrained_model_path = os.path.join(output_folder, pretrained_model_name) + print("Downloading the model") + if os.path.exists(pretrained_model_path): + shutil.rmtree(pretrained_model_path) + os.makedirs(pretrained_model_path) + wget.download("", pretrained_model_path) + wget.download("", pretrained_model_path) + wget.download("", pretrained_model_path) + wget.download("", pretrained_model_path) + h5_file_path = os.path.join( + pretrained_model_path, "weights_" + Weights_choice + ".h5" + ) + return h5_file_path + + +def load_model( + h5_file_path, pretrained_model_path, Weights_choice, initial_learning_rate +): + # If the model path contains a pretrained model, we load its learning rate. + if os.path.exists(h5_file_path): + # Here we check if the learning rate can be loaded from the quality control folder + if os.path.exists( + os.path.join( + pretrained_model_path, "Quality Control", "training_evaluation.csv" + ) + ): + + with open( + os.path.join( + pretrained_model_path, "Quality Control", "training_evaluation.csv" + ), + "r", + ) as csvfile: + csvRead = pd.read_csv(csvfile, sep=",") + # print(csvRead) + + if ( + "learning rate" in csvRead.columns + ): # Here we check that the learning rate column exists (compatibility with models trained in ZeroCostDL4Mic below 1.4) + print("pretrained network learning rate found") + # find the last learning rate + lastLearningRate = csvRead["learning rate"].iloc[-1] + # Find the learning rate corresponding to the lowest validation loss + min_val_loss = csvRead[ + csvRead["val_loss"] == min(csvRead["val_loss"]) + ] + # print(min_val_loss) + bestLearningRate = min_val_loss["learning rate"].iloc[-1] + + if Weights_choice == "last": + print("Last learning rate: " + str(lastLearningRate)) + + if Weights_choice == "best": + print( + "Learning rate of best validation loss: " + + str(bestLearningRate) + ) + + if ( + "learning rate" not in csvRead.columns + ): # if the column does not exist, then the initial learning rate is used instead + bestLearningRate = initial_learning_rate + lastLearningRate = initial_learning_rate + print( + bcolors.WARNING + + "WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of " + + str(bestLearningRate) + + " will be used instead" + ) + + # Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used + if not os.path.exists( + os.path.join( + pretrained_model_path, "Quality Control", "training_evaluation.csv" + ) + ): + print( + bcolors.WARNING + + "WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of " + + str(initial_learning_rate) + + " will be used instead" + ) + bestLearningRate = initial_learning_rate + lastLearningRate = initial_learning_rate + return {"bestLearningRate": bestLearningRate, "lastLearningRate": lastLearningRate} + return {"bestLearningRate": initial_learning_rate, "lastLearningRate": initial_learning_rate} + +
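Note that load_model above only recovers learning rates from the pretrained model's Quality Control CSV; the weights themselves are loaded later through the model class. An illustrative call (not part of the patch; the paths and the 0.0004 fallback rate are made up):

from dl4mic.utils import get_h5_path, load_model

h5_file_path = get_h5_path("models/my_n2v", "best")
rates = load_model(h5_file_path, "models/my_n2v", "best", 0.0004)
learning_rate = rates["bestLearningRate"]  # falls back to 0.0004 if no QC CSV is found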
Default learning rate of " + + str(initial_learning_rate) + + " will be used instead" + ) + bestLearningRate = initial_learning_rate + lastLearningRate = initial_learning_rate + return {"bestLearningRate": bestLearningRate, "lastLearningRate": lastLearningRate} + return {"bestLearningRate": initial_learning_rate, "lastLearningRate": initial_learning_rate} + + +def dl4mic(f): + """Make function ignore unmatched kwargs. + + If the function already has the catch all **kwargs, do nothing. + """ + if any( + param.kind == inspect.Parameter.VAR_KEYWORD + for param in inspect.signature(f).parameters.values() + ): + return f + # + @functools.wraps(f) + def inner(*args, **kwargs): + # For each keyword arguments recognised by f, + # take their binding from **kwargs received + filtered_kwargs = { + name: kwargs[name] + for name, param in inspect.signature(f).parameters.items() + if ( + param.kind is inspect.Parameter.KEYWORD_ONLY + or param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD + ) + and name in kwargs + } + return f(*args, **filtered_kwargs) + + return inner + + +def make_folders(folders_dict): + for key in folders_dict: + if folders_dict[key] is not None: + folders_dict[key] = Path(folders_dict[key]) + folders_dict[key].parent.mkdir(parents=True, exist_ok=True) + return folders_dict \ No newline at end of file diff --git a/environment.yml b/environment.yml new file mode 100644 index 00000000..eb517752 --- /dev/null +++ b/environment.yml @@ -0,0 +1,15 @@ +name: torch +channels: + - pytorch + # - conda-forge + - defaults + - hcc + - anaconda +dependencies: + - python + - ipython + - pytorch + - cudatoolkit=10.2 + - cuda-driver + - torchvision + - numpy diff --git a/notebooks/Noise2Void.ipynb b/notebooks/Noise2Void.ipynb new file mode 100644 index 00000000..2a19ab53 --- /dev/null +++ b/notebooks/Noise2Void.ipynb @@ -0,0 +1,176 @@ +{ + "metadata": { + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10" + }, + "orig_nbformat": 2, + "kernelspec": { + "name": "python3710jvsc74a57bd048517f11722045a744e97573c00295bf2a89786ab8c6ccbc15dbecb8c86d6621", + "display_name": "Python 3.7.10 64-bit ('py37': conda)" + } + }, + "nbformat": 4, + "nbformat_minor": 2, + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "# Notebook_version = '1.12'" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "import dl4mic\n", + "import dl4mic.models as models" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "model_config = {\n", + " \"model\":None,\n", + " \"X_train\": None,\n", + " \"X_test\": None,\n", + " \"model_name\":None,\n", + " \"model_path\":None,\n", + " # \"ref_str\"=,\n", + " \"Notebook_version\":1.12,\n", + " \"initial_learning_rate\":0.0004,\n", + " \"number_of_steps\":100,\n", + " \"percentage_validation\":10,\n", + " # \"image_patches\"=,\n", + " # \"loss_function\"=,\n", + " \"batch_size\":128,\n", + " \"patch_size\":64,\n", + " \"Training_source\":None,\n", + " \"number_of_epochs\":100,\n", + " \"Use_Default_Advanced_Parameters\":False,\n", + " \"Use_Data_augmentation\":True,\n", + " \"trained\":False,\n", + " \"augmentation\":False,\n", + " \"pretrained_model\":False,\n", + " 
\"pretrained_model_choice\":\"Model_from_file\"}\n", + "\n", + "dl4mic_model = models.N2V(model_config)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%load_ext memory_profiler" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!pip install n2v\n", + "# ------- Variable specific to N2V -------\n", + "from n2v.models import N2VConfig, N2V\n", + "from csbdeep.utils import plot_history\n", + "from n2v.utils.n2v_utils import manipulate_val_data\n", + "from n2v.internals.N2V_DataGenerator import N2V_DataGenerator\n", + "from csbdeep.io import save_tiff_imagej_compatible\n", + "\n", + "datagen = N2V_DataGenerator()\n", + "training_images = Training_source \n", + "imgs = datagen.load_imgs_from_directory(directory = Training_source)\n", + "\n", + "dl4mic_model.data_checks()\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# model_config = model.append_config(\n", + "# {}\n", + "# )" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Training\n", + "dl4mic_model.get_config[\"Use_Data_augmentation\"]\n", + "Xdata = datagen.generate_patches_from_list(imgs, shape=(patch_size,patch_size), augment=model.get_config[\"Use_Data_augmentation\"])\n", + "dl4mic_model.append_config(\n", + " {\"image_patches\"=Xdata.shape[0]\n", + " \"loss_function\"=config.train_loss}\n", + " )\n", + "\n", + "\n", + "threshold = int(shape_of_Xdata[0]*(percentage_validation/100))\n", + "# split the patches into training patches and validation patches\n", + "X = Xdata[threshold:]\n", + "X_val = Xdata[:threshold]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "config = N2VConfig(X,\n", + " unet_kern_size=3,\n", + " train_steps_per_epoch=dl4mic_model[\"number_of_steps\"],\n", + " train_epochs=dl4mic_model[\"number_of_epochs\"]\n", + " train_loss='mse',\n", + " batch_norm=True,\n", + " train_batch_size=dl4mic_model[\"batch_size\"],n2v_perc_pix=0.198,\n", + " n2v_manipulator='uniform_withCP',\n", + " n2v_neighborhood_radius=5, \n", + " train_learning_rate=initial_learning_rate)\n", + "model = N2V(config=config, name=dl4mic_model[\"model_name\"], basedir=dl4mic_model[\"model_path\"])\n", + "\n", + "if dl4mic_model[\"Use_pretrained_model\"]:\n", + " model.load_weights(dl4mic_model[\"h5_file_path\"])\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Begin training\n", + "start = time.time()\n", + "\n", + "history = model.train(X, X_val)\n", + "\n", + "print(\"Training done.\")\n" + ] + } + ] +} \ No newline at end of file diff --git a/poetry.lock b/poetry.lock new file mode 100644 index 00000000..04b0eb9f --- /dev/null +++ b/poetry.lock @@ -0,0 +1,1255 @@ +[[package]] +name = "astropy" +version = "4.2.1" +description = "Astronomy and astrophysics core library" +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +numpy = ">=1.17" +pyerfa = "*" + +[package.extras] +all = ["scipy (>=1.1)", "dask", "h5py", "beautifulsoup4", "html5lib", "bleach", "PyYAML (>=3.13)", "pandas", "sortedcontainers", "pytz", "jplephem", "matplotlib (>=3.0)", "mpmath", "asdf (>=2.6)", "bottleneck", "ipython", "pytest"] +docs = ["sphinx", "sphinx-astropy (>=1.3)", "pytest", "PyYAML (>=3.13)", "scipy (>=1.1)", "matplotlib 
(>=3.1)"] +test = ["pytest-astropy (>=0.8)", "pytest-xdist", "objgraph", "ipython", "coverage", "skyfield (>=1.20)", "sgp4 (>=2.3)"] + +[[package]] +name = "atomicwrites" +version = "1.4.0" +description = "Atomic file writes." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "attrs" +version = "21.2.0" +description = "Classes Without Boilerplate" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[package.extras] +dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit"] +docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"] +tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface"] +tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins"] + +[[package]] +name = "cached-property" +version = "1.5.2" +description = "A decorator for caching properties in classes." +category = "main" +optional = false +python-versions = "*" + +[[package]] +name = "colorama" +version = "0.4.4" +description = "Cross-platform colored terminal text." +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" + +[[package]] +name = "csbdeep" +version = "0.5.2" +description = "CSBDeep - a toolbox for Content-aware Image Restoration (CARE)" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +h5py = "*" +keras = ">=2.1.2,<2.4" +matplotlib = "*" +numpy = "*" +scipy = "*" +six = "*" +tifffile = "*" +tqdm = "*" + +[[package]] +name = "cycler" +version = "0.10.0" +description = "Composable style cycles" +category = "main" +optional = false +python-versions = "*" + +[package.dependencies] +six = "*" + +[[package]] +name = "decorator" +version = "4.4.2" +description = "Decorators for Humans" +category = "main" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*" + +[[package]] +name = "fpdf" +version = "1.7.2" +description = "Simple PDF generation for Python" +category = "main" +optional = false +python-versions = "*" + +[[package]] +name = "h5py" +version = "3.2.1" +description = "Read and write HDF5 files from Python" +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +cached-property = {version = "*", markers = "python_version < \"3.8\""} +numpy = [ + {version = ">=1.14.5", markers = "python_version == \"3.7\""}, + {version = ">=1.17.5", markers = "python_version == \"3.8\""}, + {version = ">=1.19.3", markers = "python_version >= \"3.9\""}, +] + +[[package]] +name = "imagecodecs" +version = "2021.4.28" +description = "Image transformation, compression, and decompression codecs" +category = "dev" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +numpy = ">=1.15.1" + +[package.extras] +all = ["matplotlib (>=3.2)", "tifffile (>=2021.1.11)", "numcodecs"] + +[[package]] +name = "imageio" +version = "2.9.0" +description = "Library for reading and writing a wide range of image, video, scientific, and volumetric data formats." 
+category = "main" +optional = false +python-versions = ">=3.5" + +[package.dependencies] +numpy = "*" +pillow = "*" + +[package.extras] +ffmpeg = ["imageio-ffmpeg"] +fits = ["astropy"] +full = ["astropy", "gdal", "imageio-ffmpeg", "itk"] +gdal = ["gdal"] +itk = ["itk"] + +[[package]] +name = "importlib-metadata" +version = "4.0.1" +description = "Read metadata from Python packages" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""} +zipp = ">=0.5" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"] + +[[package]] +name = "iniconfig" +version = "1.1.1" +description = "iniconfig: brain-dead simple config-ini parsing" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "joblib" +version = "1.0.1" +description = "Lightweight pipelining with Python functions" +category = "main" +optional = false +python-versions = ">=3.6" + +[[package]] +name = "keras" +version = "2.2.5" +description = "Deep Learning for humans" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +h5py = "*" +keras-applications = ">=1.0.8" +keras-preprocessing = ">=1.1.0" +numpy = ">=1.9.1" +pyyaml = "*" +scipy = ">=0.14" +six = ">=1.9.0" + +[package.extras] +tests = ["pytest", "pytest-pep8", "pytest-xdist", "flaky", "pytest-cov", "pandas", "requests", "markdown"] +visualize = ["pydot (>=1.2.4)"] + +[[package]] +name = "keras-applications" +version = "1.0.8" +description = "Reference implementations of popular deep learning models" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +h5py = "*" +numpy = ">=1.9.1" + +[package.extras] +tests = ["pytest", "pytest-pep8", "pytest-xdist", "pytest-cov"] + +[[package]] +name = "keras-preprocessing" +version = "1.1.2" +description = "Easy data preprocessing and data augmentation for deep learning models" +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +numpy = ">=1.9.1" +six = ">=1.9.0" + +[package.extras] +image = ["scipy (>=0.14)", "Pillow (>=5.2.0)"] +pep8 = ["flake8"] +tests = ["pandas", "pillow", "tensorflow", "keras", "pytest", "pytest-xdist", "pytest-cov"] + +[[package]] +name = "kiwisolver" +version = "1.3.1" +description = "A fast implementation of the Cassowary constraint solver" +category = "main" +optional = false +python-versions = ">=3.6" + +[[package]] +name = "mashumaro" +version = "2.5" +description = "Fast serialization framework on top of dataclasses" +category = "main" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +msgpack = ">=0.5.6" +pyyaml = ">=3.13" +typing_extensions = "*" + +[[package]] +name = "matplotlib" +version = "3.4.2" +description = "Python plotting package" +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +cycler = ">=0.10" +kiwisolver = ">=1.0.1" +numpy = ">=1.16" +pillow = ">=6.2.0" +pyparsing = ">=2.2.1" +python-dateutil = ">=2.7" + +[[package]] +name = "msgpack" +version = "1.0.2" +description = "MessagePack (de)serializer." 
+category = "main" +optional = false +python-versions = "*" + +[[package]] +name = "n2v" +version = "0.2.1" +description = "Noise2Void allows the training of a denoising CNN from individual noisy images. This implementationextends CSBDeep." +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +csbdeep = ">=0.4.0,<0.6.0" +imagecodecs = ">=2020.2.18" +keras = ">=2.2.4,<2.3.0" +matplotlib = "*" +numpy = "*" +Pillow = "*" +"ruamel.yaml" = ">=0.16.10" +scipy = "*" +six = "*" +tifffile = ">=2020.5.11" +tqdm = "*" + +[[package]] +name = "networkx" +version = "2.5.1" +description = "Python package for creating and manipulating graphs and networks" +category = "main" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +decorator = ">=4.3,<5" + +[package.extras] +all = ["numpy", "scipy", "pandas", "matplotlib", "pygraphviz", "pydot", "pyyaml", "lxml", "pytest"] +gdal = ["gdal"] +lxml = ["lxml"] +matplotlib = ["matplotlib"] +numpy = ["numpy"] +pandas = ["pandas"] +pydot = ["pydot"] +pygraphviz = ["pygraphviz"] +pytest = ["pytest"] +pyyaml = ["pyyaml"] +scipy = ["scipy"] + +[[package]] +name = "numexpr" +version = "2.7.3" +description = "Fast numerical expression evaluator for NumPy" +category = "main" +optional = false +python-versions = "*" + +[package.dependencies] +numpy = ">=1.7" + +[[package]] +name = "numpy" +version = "1.20.3" +description = "NumPy is the fundamental package for array computing with Python." +category = "main" +optional = false +python-versions = ">=3.7" + +[[package]] +name = "packaging" +version = "20.9" +description = "Core utilities for Python packages" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[package.dependencies] +pyparsing = ">=2.0.2" + +[[package]] +name = "pandas" +version = "1.2.4" +description = "Powerful data structures for data analysis, time series, and statistics" +category = "main" +optional = false +python-versions = ">=3.7.1" + +[package.dependencies] +numpy = ">=1.16.5" +python-dateutil = ">=2.7.3" +pytz = ">=2017.3" + +[package.extras] +test = ["pytest (>=5.0.1)", "pytest-xdist", "hypothesis (>=3.58)"] + +[[package]] +name = "pillow" +version = "8.2.0" +description = "Python Imaging Library (Fork)" +category = "main" +optional = false +python-versions = ">=3.6" + +[[package]] +name = "pluggy" +version = "0.13.1" +description = "plugin and hook calling mechanisms for python" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[package.dependencies] +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} + +[package.extras] +dev = ["pre-commit", "tox"] + +[[package]] +name = "py" +version = "1.10.0" +description = "library with cross-python path, ini-parsing, io, code, log facilities" +category = "dev" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" + +[[package]] +name = "pyerfa" +version = "1.7.3" +description = "Python bindings for ERFA" +category = "main" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +numpy = ">=1.16" + +[package.extras] +docs = ["sphinx-astropy (>=1.3)"] +test = ["pytest", "pytest-doctestplus (>=0.7)"] + +[[package]] +name = "pyparsing" +version = "2.4.7" +description = "Python parsing module" +category = "main" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "pytest" +version = "6.2.4" +description = "pytest: simple powerful testing with Python" 
+category = "dev" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""} +attrs = ">=19.2.0" +colorama = {version = "*", markers = "sys_platform == \"win32\""} +importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=0.12,<1.0.0a1" +py = ">=1.8.2" +toml = "*" + +[package.extras] +testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"] + +[[package]] +name = "python-dateutil" +version = "2.8.1" +description = "Extensions to the standard Python datetime module" +category = "main" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" + +[package.dependencies] +six = ">=1.5" + +[[package]] +name = "pytz" +version = "2021.1" +description = "World timezone definitions, modern and historical" +category = "main" +optional = false +python-versions = "*" + +[[package]] +name = "pywavelets" +version = "1.1.1" +description = "PyWavelets, wavelet transform module" +category = "main" +optional = false +python-versions = ">=3.5" + +[package.dependencies] +numpy = ">=1.13.3" + +[[package]] +name = "pyyaml" +version = "5.4.1" +description = "YAML parser and emitter for Python" +category = "main" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" + +[[package]] +name = "ruamel.yaml" +version = "0.17.4" +description = "ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order" +category = "dev" +optional = false +python-versions = ">=3" + +[package.dependencies] +"ruamel.yaml.clib" = {version = ">=0.1.2", markers = "platform_python_implementation == \"CPython\" and python_version < \"3.10\""} + +[package.extras] +docs = ["ryd"] +jinja2 = ["ruamel.yaml.jinja2 (>=0.2)"] + +[[package]] +name = "ruamel.yaml.clib" +version = "0.2.2" +description = "C version of reader, parser and emitter for ruamel.yaml derived from libyaml" +category = "dev" +optional = false +python-versions = "*" + +[[package]] +name = "scikit-image" +version = "0.18.1" +description = "Image processing in Python" +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +imageio = ">=2.3.0" +matplotlib = ">=2.0.0,<3.0.0 || >3.0.0" +networkx = ">=2.0" +numpy = ">=1.16.5" +pillow = ">=4.3.0,<7.1.0 || >7.1.0,<7.1.1 || >7.1.1" +PyWavelets = ">=1.1.1" +scipy = ">=1.0.1" +tifffile = ">=2019.7.26" + +[package.extras] +data = ["pooch (>=1.3.0)"] +docs = ["sphinx (>=1.8,<=2.4.4)", "sphinx-gallery (>=0.7.0,!=0.8.0)", "numpydoc (>=1.0)", "sphinx-copybutton", "pytest-runner", "scikit-learn", "matplotlib (>=3.0.1)", "dask[array] (>=0.15.0,!=2.17.0)", "cloudpickle (>=0.2.1)", "pandas (>=0.23.0)", "seaborn (>=0.7.1)", "pooch (>=1.3.0)", "tifffile (>=2020.5.30)", "myst-parser", "ipywidgets", "plotly (>=4.10.0)"] +optional = ["simpleitk", "astropy (>=3.1.2)", "qtpy", "pyamg", "dask[array] (>=1.0.0,!=2.17.0)", "cloudpickle (>=0.2.1)", "pooch (>=1.3.0)"] +test = ["pytest (>=5.2.0)", "pytest-cov (>=2.7.0)", "pytest-localserver", "pytest-faulthandler", "flake8", "codecov", "pooch (>=1.3.0)"] + +[[package]] +name = "scikit-learn" +version = "0.24.2" +description = "A set of python modules for machine learning and data mining" +category = "main" +optional = false +python-versions = ">=3.6" + +[package.dependencies] +joblib = ">=0.11" +numpy = ">=1.13.3" +scipy = ">=0.19.1" +threadpoolctl = ">=2.0.0" + 
+[package.extras] +benchmark = ["matplotlib (>=2.1.1)", "pandas (>=0.25.0)", "memory-profiler (>=0.57.0)"] +docs = ["matplotlib (>=2.1.1)", "scikit-image (>=0.13)", "pandas (>=0.25.0)", "seaborn (>=0.9.0)", "memory-profiler (>=0.57.0)", "sphinx (>=3.2.0)", "sphinx-gallery (>=0.7.0)", "numpydoc (>=1.0.0)", "Pillow (>=7.1.2)", "sphinx-prompt (>=1.3.0)"] +examples = ["matplotlib (>=2.1.1)", "scikit-image (>=0.13)", "pandas (>=0.25.0)", "seaborn (>=0.9.0)"] +tests = ["matplotlib (>=2.1.1)", "scikit-image (>=0.13)", "pandas (>=0.25.0)", "pytest (>=5.0.1)", "pytest-cov (>=2.9.0)", "flake8 (>=3.8.2)", "mypy (>=0.770)", "pyamg (>=4.0.0)"] + +[[package]] +name = "scipy" +version = "1.6.3" +description = "SciPy: Scientific Library for Python" +category = "main" +optional = false +python-versions = ">=3.7,<3.10" + +[package.dependencies] +numpy = ">=1.16.5,<1.23.0" + +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +category = "main" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "theano" +version = "1.0.5" +description = "Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs." +category = "dev" +optional = false +python-versions = "*" + +[package.dependencies] +numpy = ">=1.9.1" +scipy = ">=0.14" +six = ">=1.9.0" + +[package.extras] +doc = ["Sphinx (>=0.5.1)", "pygments"] +test = ["nose (>=1.3.0)", "parameterized", "flake8"] + +[[package]] +name = "threadpoolctl" +version = "2.1.0" +description = "threadpoolctl" +category = "main" +optional = false +python-versions = ">=3.5" + +[[package]] +name = "tifffile" +version = "2021.4.8" +description = "Read and write TIFF files" +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +numpy = ">=1.15.1" + +[package.extras] +all = ["imagecodecs (>=2021.3.31)", "matplotlib (>=3.2)", "lxml"] + +[[package]] +name = "toml" +version = "0.10.2" +description = "Python Library for Tom's Obvious, Minimal Language" +category = "dev" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" + +[[package]] +name = "tqdm" +version = "4.60.0" +description = "Fast, Extensible Progress Meter" +category = "dev" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" + +[package.extras] +dev = ["py-make (>=0.1.0)", "twine", "wheel"] +notebook = ["ipywidgets (>=6)"] +telegram = ["requests"] + +[[package]] +name = "typing-extensions" +version = "3.10.0.0" +description = "Backported and Experimental Type Hints for Python 3.5+" +category = "main" +optional = false +python-versions = "*" + +[[package]] +name = "wget" +version = "3.2" +description = "pure python download utility" +category = "main" +optional = false +python-versions = "*" + +[[package]] +name = "zipp" +version = "3.4.1" +description = "Backport of pathlib-compatible object wrapper for zip files" +category = "dev" +optional = false +python-versions = ">=3.6" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"] +testing = ["pytest (>=4.6)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pytest-cov", "pytest-enabler", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"] + +[metadata] +lock-version = "1.1" +python-versions = ">=3.7.1,<3.10" +content-hash = "baf20e85f14681f3a3debd7719a2b7b496429db5c24a3c6cb28d71b0c5a7bef1" + +[metadata.files] +astropy = [ + {file = "astropy-4.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = 
"sha256:db694c10eb3cc10068859ba1eab30b38b7e821dbfff142960c5a99c4af059747"}, + {file = "astropy-4.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:2ff194e15b03afd575f278e2187b71d7ee9d85f302356050b2257b6c4788f1cc"}, + {file = "astropy-4.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:12c76c119f7a0a8fcf0e72269be9faa88319f12e4ba346180d910e58fda36bf2"}, + {file = "astropy-4.2.1-cp37-cp37m-win32.whl", hash = "sha256:6d8c8bc1eef048ad873395d2a620b9b5f308bef9a508f542e6dc3b33fbfbe66d"}, + {file = "astropy-4.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:3d5516ba20e6cbc208250dd8f414243839cc40e957616e3f336a517967ee34d0"}, + {file = "astropy-4.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c327cfdede8d5fce1224153b8d3a060226161ddc2e1b2170f076aaddb4953965"}, + {file = "astropy-4.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:03428ca1baa4fba99e37d3767c12c038c456a27176bcb8f407f9b2b0743ef8ee"}, + {file = "astropy-4.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:a6164013de3732a67a5a1a2743565f5aaef0d895ce33d5aef482d88b05318893"}, + {file = "astropy-4.2.1-cp38-cp38-win32.whl", hash = "sha256:a1f707283822c2f7df97d9de151c29d49ed9cc0bf3ae952f91012d7a4c5872a7"}, + {file = "astropy-4.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:2035ca521d86c88ea6d8da07f977a9727f0d7d8f85b5c287558c1891f885e548"}, + {file = "astropy-4.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5be2f01d1b35202c0989f4502d25fe850ae5e891acbd3be107eaf6eeab81826d"}, + {file = "astropy-4.2.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:009a26f795adad1f0b26ba3a434e5be9cfa82cb629ba87c0547b567bad6e1695"}, + {file = "astropy-4.2.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:2d4d328892c7b09a23361f44182cf89be3dadaec60a270bd4fe754f3829052a4"}, + {file = "astropy-4.2.1-cp39-cp39-win32.whl", hash = "sha256:09965d5e8ffd7e96e7fcc596b631f366cf729df75efb792918cb6637acf1ad4e"}, + {file = "astropy-4.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:223610cc612aaddac890fefd9e20dc6a39c92ef01692354e2bcb273c79fb8842"}, + {file = "astropy-4.2.1.tar.gz", hash = "sha256:ed483e472241153daec45f4b0c318c2c63d9f47305b78e6e63d32fc388c18427"}, +] +atomicwrites = [ + {file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"}, + {file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"}, +] +attrs = [ + {file = "attrs-21.2.0-py2.py3-none-any.whl", hash = "sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1"}, + {file = "attrs-21.2.0.tar.gz", hash = "sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"}, +] +cached-property = [ + {file = "cached-property-1.5.2.tar.gz", hash = "sha256:9fa5755838eecbb2d234c3aa390bd80fbd3ac6b6869109bfc1b499f7bd89a130"}, + {file = "cached_property-1.5.2-py2.py3-none-any.whl", hash = "sha256:df4f613cf7ad9a588cc381aaf4a512d26265ecebd5eb9e1ba12f1319eb85a6a0"}, +] +colorama = [ + {file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"}, + {file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"}, +] +csbdeep = [ + {file = "csbdeep-0.5.2-py2.py3-none-any.whl", hash = "sha256:d2f0d64cd31ff20fbd798e49fc8786e6052150dbaf333b34b79347244e5416aa"}, + {file = "csbdeep-0.5.2.tar.gz", hash = "sha256:17a2951c380dd756fcb5cfcd5420d9121c152846a9d393712e86c6263726443a"}, +] +cycler = [ + {file = "cycler-0.10.0-py2.py3-none-any.whl", hash = 
"sha256:1d8a5ae1ff6c5cf9b93e8811e581232ad8920aeec647c37316ceac982b08cb2d"}, + {file = "cycler-0.10.0.tar.gz", hash = "sha256:cd7b2d1018258d7247a71425e9f26463dfb444d411c39569972f4ce586b0c9d8"}, +] +decorator = [ + {file = "decorator-4.4.2-py2.py3-none-any.whl", hash = "sha256:41fa54c2a0cc4ba648be4fd43cff00aedf5b9465c9bf18d64325bc225f08f760"}, + {file = "decorator-4.4.2.tar.gz", hash = "sha256:e3a62f0520172440ca0dcc823749319382e377f37f140a0b99ef45fecb84bfe7"}, +] +fpdf = [ + {file = "fpdf-1.7.2.tar.gz", hash = "sha256:125840783289e7d12552b1e86ab692c37322e7a65b96a99e0ea86cca041b6779"}, + {file = "fpdf-1.7.2.win-amd64.exe", hash = "sha256:9542f6ad0791d673955da954a0cf3554a0affac79deab87bee06b9b4d4e60990"}, + {file = "fpdf-1.7.2.win32.exe", hash = "sha256:0a94eb783ee933e32a44ad949a1aa6c3ca6fb35b608db53f8b216aec52fc1fb5"}, +] +h5py = [ + {file = "h5py-3.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6766104ed13ff40b3b7bfd49f13fced5274103ee9af53667e7a97c5236b14741"}, + {file = "h5py-3.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:4160cb0d35a83c6fb9f1cad65e826dfaeb044e001549ea78003573fb6bee4042"}, + {file = "h5py-3.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:fdabe99139a9c5e1a416b7ed38c89505f8501b376d54496e1bb737cb33df61cf"}, + {file = "h5py-3.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d8467fa56356ad2efad2b5986326e71d4d74505de6f6c7bb46dbba09b37459ac"}, + {file = "h5py-3.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:a6632ac11167bbad1a8fc5c82508b97ab8c12bdfe4b659254b6f7f63d3c76744"}, + {file = "h5py-3.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:90ee8a00aca5c4e0bbd821c1f6118cb9a814c15dcfdb03572c615a4431166480"}, + {file = "h5py-3.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:25294f2690c4813475f566663a21ef1c1b11ef892b26d46454bf0a59e507d5aa"}, + {file = "h5py-3.2.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:d791b710d3e54c4d2c32cb881b183db5674ceb03bf6a0c1f3fb3cf50d8997e0a"}, + {file = "h5py-3.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:7c5b5f18c96fb63399280a724734fd91e1781c6b60e385e439ad8e654a294ba4"}, + {file = "h5py-3.2.1.tar.gz", hash = "sha256:89474be911bfcdb34cbf0d98b8ec48b578c27a89fdb1ae4ee7513f1ef8d9249e"}, +] +imagecodecs = [ + {file = "imagecodecs-2021.4.28-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e053aebb2f614b8cf0b0e8fe0cade13e212a418391cf284d55bdb79812b61426"}, + {file = "imagecodecs-2021.4.28-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:4ea36727d8c3b226bdc5b0835d51f7fb4a610ee1cde0c238c26598b28b2546a3"}, + {file = "imagecodecs-2021.4.28-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:38dd687f53c83c1781df585da9b859175b834a937f6554a3d91eea9c93134ed9"}, + {file = "imagecodecs-2021.4.28-cp37-cp37m-win32.whl", hash = "sha256:933fbbe106a7df743ed44f778822750ac693844687fb4f565d1edcba88ea56d6"}, + {file = "imagecodecs-2021.4.28-cp37-cp37m-win_amd64.whl", hash = "sha256:206c57954149f3580bceb5ac10320085224dbdb6ef68330cc18ec19e9eebdacf"}, + {file = "imagecodecs-2021.4.28-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24036f8da3262dd32b43f7cb2126ce0ab13402c43a0b2972b2e74724a082dace"}, + {file = "imagecodecs-2021.4.28-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:13cca54d4ef66f4ca0ac5dadb7ce7817cca5650023133fa031dc25a5bf4499fb"}, + {file = "imagecodecs-2021.4.28-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:b075e3e2d22adcee600242ed1ecf7f130618d5e5467fbd4812a43156e2e7df43"}, + {file = "imagecodecs-2021.4.28-cp38-cp38-win32.whl", hash = "sha256:161bfa6ce7b32891cb6d8539e8b403ede89dfa968d90b672d30a797649dbb308"}, + {file = 
"imagecodecs-2021.4.28-cp38-cp38-win_amd64.whl", hash = "sha256:f94e2f19d723295ffd00134239d4fad2a8bec3019e6423e27fa2233bc9af1ba5"}, + {file = "imagecodecs-2021.4.28-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a027e639e2fb088465819d2818103d1099261b655ac87ee69bbabf1205247ecf"}, + {file = "imagecodecs-2021.4.28-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:2899bdc17c1c2ba859f3c3dc706f40337cd9813e253e6c52ae090ae04d62fffa"}, + {file = "imagecodecs-2021.4.28-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:b259849e9ef074b69783f246e6b08793d7282ec2350b093ce20021ab8f17aa46"}, + {file = "imagecodecs-2021.4.28-cp39-cp39-win32.whl", hash = "sha256:f53b905e61df50a06983fd22f001e6585ae65aa1d2c8aa124e99d8eb0f7e9ab2"}, + {file = "imagecodecs-2021.4.28-cp39-cp39-win_amd64.whl", hash = "sha256:b269511e7de20bd9eb24625f9e32a1cc3510f9592253fe3713ad77f793df3e74"}, + {file = "imagecodecs-2021.4.28-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c9b03d8560d365d3f16f356d8ee912df7f14fe89cf5c203734096a23c79a8138"}, + {file = "imagecodecs-2021.4.28.tar.gz", hash = "sha256:26ffa26b884fb03b93aba9e2222ede6a388bd862683d6c19bb53fc8094ff4a3d"}, +] +imageio = [ + {file = "imageio-2.9.0-py3-none-any.whl", hash = "sha256:3604d751f03002e8e0e7650aa71d8d9148144a87daf17cb1f3228e80747f2e6b"}, + {file = "imageio-2.9.0.tar.gz", hash = "sha256:52ddbaeca2dccf53ba2d6dec5676ca7bc3b2403ef8b37f7da78b7654bb3e10f0"}, +] +importlib-metadata = [ + {file = "importlib_metadata-4.0.1-py3-none-any.whl", hash = "sha256:d7eb1dea6d6a6086f8be21784cc9e3bcfa55872b52309bc5fad53a8ea444465d"}, + {file = "importlib_metadata-4.0.1.tar.gz", hash = "sha256:8c501196e49fb9df5df43833bdb1e4328f64847763ec8a50703148b73784d581"}, +] +iniconfig = [ + {file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"}, + {file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"}, +] +joblib = [ + {file = "joblib-1.0.1-py3-none-any.whl", hash = "sha256:feeb1ec69c4d45129954f1b7034954241eedfd6ba39b5e9e4b6883be3332d5e5"}, + {file = "joblib-1.0.1.tar.gz", hash = "sha256:9c17567692206d2f3fb9ecf5e991084254fe631665c450b443761c4186a613f7"}, +] +keras = [ + {file = "Keras-2.2.5-py2.py3-none-any.whl", hash = "sha256:5a75cfdf69c6cb9de81a82aa19542ac69a5c2e78a48a58c1649fc5cdb55c917c"}, + {file = "Keras-2.2.5.tar.gz", hash = "sha256:0fb448b95643a708d25d2394183a2f3a84eefb55fb64917152a46826990113ea"}, +] +keras-applications = [ + {file = "Keras_Applications-1.0.8-py3-none-any.whl", hash = "sha256:df4323692b8c1174af821bf906f1e442e63fa7589bf0f1230a0b6bdc5a810c95"}, + {file = "Keras_Applications-1.0.8.tar.gz", hash = "sha256:5579f9a12bcde9748f4a12233925a59b93b73ae6947409ff34aa2ba258189fe5"}, +] +keras-preprocessing = [ + {file = "Keras_Preprocessing-1.1.2-py2.py3-none-any.whl", hash = "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b"}, + {file = "Keras_Preprocessing-1.1.2.tar.gz", hash = "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3"}, +] +kiwisolver = [ + {file = "kiwisolver-1.3.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:fd34fbbfbc40628200730bc1febe30631347103fc8d3d4fa012c21ab9c11eca9"}, + {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:d3155d828dec1d43283bd24d3d3e0d9c7c350cdfcc0bd06c0ad1209c1bbc36d0"}, + {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:5a7a7dbff17e66fac9142ae2ecafb719393aaee6a3768c9de2fd425c63b53e21"}, + {file = 
"kiwisolver-1.3.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:f8d6f8db88049a699817fd9178782867bf22283e3813064302ac59f61d95be05"}, + {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux2014_ppc64le.whl", hash = "sha256:5f6ccd3dd0b9739edcf407514016108e2280769c73a85b9e59aa390046dbf08b"}, + {file = "kiwisolver-1.3.1-cp36-cp36m-win32.whl", hash = "sha256:225e2e18f271e0ed8157d7f4518ffbf99b9450fca398d561eb5c4a87d0986dd9"}, + {file = "kiwisolver-1.3.1-cp36-cp36m-win_amd64.whl", hash = "sha256:cf8b574c7b9aa060c62116d4181f3a1a4e821b2ec5cbfe3775809474113748d4"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:232c9e11fd7ac3a470d65cd67e4359eee155ec57e822e5220322d7b2ac84fbf0"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:b38694dcdac990a743aa654037ff1188c7a9801ac3ccc548d3341014bc5ca278"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ca3820eb7f7faf7f0aa88de0e54681bddcb46e485beb844fcecbcd1c8bd01689"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:c8fd0f1ae9d92b42854b2979024d7597685ce4ada367172ed7c09edf2cef9cb8"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux2014_ppc64le.whl", hash = "sha256:1e1bc12fb773a7b2ffdeb8380609f4f8064777877b2225dec3da711b421fda31"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-win32.whl", hash = "sha256:72c99e39d005b793fb7d3d4e660aed6b6281b502e8c1eaf8ee8346023c8e03bc"}, + {file = "kiwisolver-1.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:8be8d84b7d4f2ba4ffff3665bcd0211318aa632395a1a41553250484a871d454"}, + {file = "kiwisolver-1.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:31dfd2ac56edc0ff9ac295193eeaea1c0c923c0355bf948fbd99ed6018010b72"}, + {file = "kiwisolver-1.3.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:563c649cfdef27d081c84e72a03b48ea9408c16657500c312575ae9d9f7bc1c3"}, + {file = "kiwisolver-1.3.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:78751b33595f7f9511952e7e60ce858c6d64db2e062afb325985ddbd34b5c131"}, + {file = "kiwisolver-1.3.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:a357fd4f15ee49b4a98b44ec23a34a95f1e00292a139d6015c11f55774ef10de"}, + {file = "kiwisolver-1.3.1-cp38-cp38-manylinux2014_ppc64le.whl", hash = "sha256:5989db3b3b34b76c09253deeaf7fbc2707616f130e166996606c284395da3f18"}, + {file = "kiwisolver-1.3.1-cp38-cp38-win32.whl", hash = "sha256:c08e95114951dc2090c4a630c2385bef681cacf12636fb0241accdc6b303fd81"}, + {file = "kiwisolver-1.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:44a62e24d9b01ba94ae7a4a6c3fb215dc4af1dde817e7498d901e229aaf50e4e"}, + {file = "kiwisolver-1.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:50af681a36b2a1dee1d3c169ade9fdc59207d3c31e522519181e12f1b3ba7000"}, + {file = "kiwisolver-1.3.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:a53d27d0c2a0ebd07e395e56a1fbdf75ffedc4a05943daf472af163413ce9598"}, + {file = "kiwisolver-1.3.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:834ee27348c4aefc20b479335fd422a2c69db55f7d9ab61721ac8cd83eb78882"}, + {file = "kiwisolver-1.3.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:5c3e6455341008a054cccee8c5d24481bcfe1acdbc9add30aa95798e95c65621"}, + {file = "kiwisolver-1.3.1-cp39-cp39-manylinux2014_ppc64le.whl", hash = "sha256:acef3d59d47dd85ecf909c359d0fd2c81ed33bdff70216d3956b463e12c38a54"}, + {file = "kiwisolver-1.3.1-cp39-cp39-win32.whl", hash = "sha256:c5518d51a0735b1e6cee1fdce66359f8d2b59c3ca85dc2b0813a8aa86818a030"}, + {file = "kiwisolver-1.3.1-cp39-cp39-win_amd64.whl", hash = 
"sha256:b9edd0110a77fc321ab090aaa1cfcaba1d8499850a12848b81be2222eab648f6"}, + {file = "kiwisolver-1.3.1-pp36-pypy36_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0cd53f403202159b44528498de18f9285b04482bab2a6fc3f5dd8dbb9352e30d"}, + {file = "kiwisolver-1.3.1-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:33449715e0101e4d34f64990352bce4095c8bf13bed1b390773fc0a7295967b3"}, + {file = "kiwisolver-1.3.1-pp36-pypy36_pp73-win32.whl", hash = "sha256:401a2e9afa8588589775fe34fc22d918ae839aaaf0c0e96441c0fdbce6d8ebe6"}, + {file = "kiwisolver-1.3.1.tar.gz", hash = "sha256:950a199911a8d94683a6b10321f9345d5a3a8433ec58b217ace979e18f16e248"}, +] +mashumaro = [ + {file = "mashumaro-2.5.tar.gz", hash = "sha256:ec402ecbbcc6b5d9b12a1ebfa90af4954fcd7583b745bcf22da156f2a55d1355"}, +] +matplotlib = [ + {file = "matplotlib-3.4.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c541ee5a3287efe066bbe358320853cf4916bc14c00c38f8f3d8d75275a405a9"}, + {file = "matplotlib-3.4.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:3a5c18dbd2c7c366da26a4ad1462fe3e03a577b39e3b503bbcf482b9cdac093c"}, + {file = "matplotlib-3.4.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:a9d8cb5329df13e0cdaa14b3b43f47b5e593ec637f13f14db75bb16e46178b05"}, + {file = "matplotlib-3.4.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:7ad19f3fb6145b9eb41c08e7cbb9f8e10b91291396bee21e9ce761bb78df63ec"}, + {file = "matplotlib-3.4.2-cp37-cp37m-win32.whl", hash = "sha256:7a58f3d8fe8fac3be522c79d921c9b86e090a59637cb88e3bc51298d7a2c862a"}, + {file = "matplotlib-3.4.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6382bc6e2d7e481bcd977eb131c31dee96e0fb4f9177d15ec6fb976d3b9ace1a"}, + {file = "matplotlib-3.4.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6a6a44f27aabe720ec4fd485061e8a35784c2b9ffa6363ad546316dfc9cea04e"}, + {file = "matplotlib-3.4.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1c1779f7ab7d8bdb7d4c605e6ffaa0614b3e80f1e3c8ccf7b9269a22dbc5986b"}, + {file = "matplotlib-3.4.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:5826f56055b9b1c80fef82e326097e34dc4af8c7249226b7dd63095a686177d1"}, + {file = "matplotlib-3.4.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:0bea5ec5c28d49020e5d7923c2725b837e60bc8be99d3164af410eb4b4c827da"}, + {file = "matplotlib-3.4.2-cp38-cp38-win32.whl", hash = "sha256:6475d0209024a77f869163ec3657c47fed35d9b6ed8bccba8aa0f0099fbbdaa8"}, + {file = "matplotlib-3.4.2-cp38-cp38-win_amd64.whl", hash = "sha256:21b31057bbc5e75b08e70a43cefc4c0b2c2f1b1a850f4a0f7af044eb4163086c"}, + {file = "matplotlib-3.4.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b26535b9de85326e6958cdef720ecd10bcf74a3f4371bf9a7e5b2e659c17e153"}, + {file = "matplotlib-3.4.2-cp39-cp39-manylinux1_i686.whl", hash = "sha256:32fa638cc10886885d1ca3d409d4473d6a22f7ceecd11322150961a70fab66dd"}, + {file = "matplotlib-3.4.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:956c8849b134b4a343598305a3ca1bdd3094f01f5efc8afccdebeffe6b315247"}, + {file = "matplotlib-3.4.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:85f191bb03cb1a7b04b5c2cca4792bef94df06ef473bc49e2818105671766fee"}, + {file = "matplotlib-3.4.2-cp39-cp39-win32.whl", hash = "sha256:b1d5a2cedf5de05567c441b3a8c2651fbde56df08b82640e7f06c8cd91e201f6"}, + {file = "matplotlib-3.4.2-cp39-cp39-win_amd64.whl", hash = "sha256:df815378a754a7edd4559f8c51fc7064f779a74013644a7f5ac7a0c31f875866"}, + {file = "matplotlib-3.4.2.tar.gz", hash = "sha256:d8d994cefdff9aaba45166eb3de4f5211adb4accac85cbf97137e98f26ea0219"}, +] +msgpack = [ + {file = 
"msgpack-1.0.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:b6d9e2dae081aa35c44af9c4298de4ee72991305503442a5c74656d82b581fe9"}, + {file = "msgpack-1.0.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:a99b144475230982aee16b3d249170f1cccebf27fb0a08e9f603b69637a62192"}, + {file = "msgpack-1.0.2-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:1026dcc10537d27dd2d26c327e552f05ce148977e9d7b9f1718748281b38c841"}, + {file = "msgpack-1.0.2-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:fe07bc6735d08e492a327f496b7850e98cb4d112c56df69b0c844dbebcbb47f6"}, + {file = "msgpack-1.0.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:9ea52fff0473f9f3000987f313310208c879493491ef3ccf66268eff8d5a0326"}, + {file = "msgpack-1.0.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:26a1759f1a88df5f1d0b393eb582ec022326994e311ba9c5818adc5374736439"}, + {file = "msgpack-1.0.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:497d2c12426adcd27ab83144057a705efb6acc7e85957a51d43cdcf7f258900f"}, + {file = "msgpack-1.0.2-cp36-cp36m-win32.whl", hash = "sha256:e89ec55871ed5473a041c0495b7b4e6099f6263438e0bd04ccd8418f92d5d7f2"}, + {file = "msgpack-1.0.2-cp36-cp36m-win_amd64.whl", hash = "sha256:a4355d2193106c7aa77c98fc955252a737d8550320ecdb2e9ac701e15e2943bc"}, + {file = "msgpack-1.0.2-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:d6c64601af8f3893d17ec233237030e3110f11b8a962cb66720bf70c0141aa54"}, + {file = "msgpack-1.0.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f484cd2dca68502de3704f056fa9b318c94b1539ed17a4c784266df5d6978c87"}, + {file = "msgpack-1.0.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:f3e6aaf217ac1c7ce1563cf52a2f4f5d5b1f64e8729d794165db71da57257f0c"}, + {file = "msgpack-1.0.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:8521e5be9e3b93d4d5e07cb80b7e32353264d143c1f072309e1863174c6aadb1"}, + {file = "msgpack-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:31c17bbf2ae5e29e48d794c693b7ca7a0c73bd4280976d408c53df421e838d2a"}, + {file = "msgpack-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:8ffb24a3b7518e843cd83538cf859e026d24ec41ac5721c18ed0c55101f9775b"}, + {file = "msgpack-1.0.2-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b28c0876cce1466d7c2195d7658cf50e4730667196e2f1355c4209444717ee06"}, + {file = "msgpack-1.0.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:87869ba567fe371c4555d2e11e4948778ab6b59d6cc9d8460d543e4cfbbddd1c"}, + {file = "msgpack-1.0.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:b55f7db883530b74c857e50e149126b91bb75d35c08b28db12dcb0346f15e46e"}, + {file = "msgpack-1.0.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:ac25f3e0513f6673e8b405c3a80500eb7be1cf8f57584be524c4fa78fe8e0c83"}, + {file = "msgpack-1.0.2-cp38-cp38-win32.whl", hash = "sha256:0cb94ee48675a45d3b86e61d13c1e6f1696f0183f0715544976356ff86f741d9"}, + {file = "msgpack-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:e36a812ef4705a291cdb4a2fd352f013134f26c6ff63477f20235138d1d21009"}, + {file = "msgpack-1.0.2-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:2a5866bdc88d77f6e1370f82f2371c9bc6fc92fe898fa2dec0c5d4f5435a2694"}, + {file = "msgpack-1.0.2-cp39-cp39-manylinux1_i686.whl", hash = "sha256:92be4b12de4806d3c36810b0fe2aeedd8d493db39e2eb90742b9c09299eb5759"}, + {file = "msgpack-1.0.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:de6bd7990a2c2dabe926b7e62a92886ccbf809425c347ae7de277067f97c2887"}, + {file = "msgpack-1.0.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:5a9ee2540c78659a1dd0b110f73773533ee3108d4e1219b5a15a8d635b7aca0e"}, + {file = 
"msgpack-1.0.2-cp39-cp39-win32.whl", hash = "sha256:c747c0cc08bd6d72a586310bda6ea72eeb28e7505990f342552315b229a19b33"}, + {file = "msgpack-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:d8167b84af26654c1124857d71650404336f4eb5cc06900667a493fc619ddd9f"}, + {file = "msgpack-1.0.2.tar.gz", hash = "sha256:fae04496f5bc150eefad4e9571d1a76c55d021325dcd484ce45065ebbdd00984"}, +] +n2v = [ + {file = "n2v-0.2.1-py2.py3-none-any.whl", hash = "sha256:8122623ba69ce01946ddd8132f91d607a582b54237d208160308cf80b1b831fc"}, +] +networkx = [ + {file = "networkx-2.5.1-py3-none-any.whl", hash = "sha256:0635858ed7e989f4c574c2328380b452df892ae85084144c73d8cd819f0c4e06"}, + {file = "networkx-2.5.1.tar.gz", hash = "sha256:109cd585cac41297f71103c3c42ac6ef7379f29788eb54cb751be5a663bb235a"}, +] +numexpr = [ + {file = "numexpr-2.7.3-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:74df157ab4577bfc83c14f4e39d14781b06ade5406d3efef049f90c88d8c28ea"}, + {file = "numexpr-2.7.3-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:99472731bc1111f5d73285dd2a4c228b5bfb176f785a567872e0fbfec6584f2b"}, + {file = "numexpr-2.7.3-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:24cdb8c0e93f31387a4c2ddd09a687874c006e6139fd68bcf77b96e51d17cb01"}, + {file = "numexpr-2.7.3-cp27-cp27m-win32.whl", hash = "sha256:c9218aeb76717768f617362b72a87e9219da95ba7cdec0732ccecc4a4719124c"}, + {file = "numexpr-2.7.3-cp27-cp27m-win_amd64.whl", hash = "sha256:97753d17d1ea39e082b1907b99b6cb63cac7d1dfa512d2ff5079eb7bfab1ea88"}, + {file = "numexpr-2.7.3-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:0732c9989bff8568ee78fa461f3698166d4ac79363860be22ff49eae1dcd15e7"}, + {file = "numexpr-2.7.3-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:c978c49bd9dded6a4ba6b3501e3a34e3aba9312cbb7d800bed7ac6fcd2d5949d"}, + {file = "numexpr-2.7.3-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:602df9b5c500d0a887dc96b4cfd16fb60ae7ef39ccd6f013f4df2ee11ae70553"}, + {file = "numexpr-2.7.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f9df0a74d39616fd011071c5850418f244bac414f24ed55c00dcf3c5385e8374"}, + {file = "numexpr-2.7.3-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:eeeb6325df6cf3f3ab7d9dbabf3bc03ac88b7e2f2aed21419c31e23c3048dce1"}, + {file = "numexpr-2.7.3-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:5223a519f48754dd350723d9fbcadbcd0476881bc954a281a09a6538ecabfc27"}, + {file = "numexpr-2.7.3-cp35-cp35m-win32.whl", hash = "sha256:785065819ce98e3d3dd853794244e0de190d7ba36ab42c8fd79e0e9cd40de7af"}, + {file = "numexpr-2.7.3-cp35-cp35m-win_amd64.whl", hash = "sha256:23718ac5f2ebae995f5899509624781b375da568f2b645b5d1fd6dbb17f41a56"}, + {file = "numexpr-2.7.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3daa55515ee3cb40bf5ab8263c0c13fff8d484d64d107a9c414e8ca151dc08a6"}, + {file = "numexpr-2.7.3-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:a3f1cec8657bd3920869a2ea27f98d68ac3000334f366d844a9670ae671fe4bd"}, + {file = "numexpr-2.7.3-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:d423441593a952ac56d1f774068b81fb22f514fb68873c066578345a6af74c0d"}, + {file = "numexpr-2.7.3-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:90ea6d5813e1906bb203ef220a600b30d83e75aea2607a7e7037cceae9e93346"}, + {file = "numexpr-2.7.3-cp36-cp36m-win32.whl", hash = "sha256:8b76bcca930cbf0db0fe98b6a51d6286dff77d525dad670cb7750e29a138d434"}, + {file = "numexpr-2.7.3-cp36-cp36m-win_amd64.whl", hash = "sha256:833a363c86266424349467b53f4060f77aaa7ec03c1e6f38c54e69c65ceebf30"}, + {file = "numexpr-2.7.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = 
"sha256:618259287b8b81a352a7d088ad03fe3b393a842ccb45f0b3cfc6a712d41b7595"}, + {file = "numexpr-2.7.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:51277a530a353e0f94665b44615249d7e7075f0c73f78d4743da632fc44bc648"}, + {file = "numexpr-2.7.3-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:5f4122bd58aa4e4891814c2f72bd47b1cdb202c9d863ea96c5394dffb72a16e2"}, + {file = "numexpr-2.7.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b0a9124a66a61b05ea84b832358d6aa5561c30e69b4dcaea819b296f4f025f89"}, + {file = "numexpr-2.7.3-cp37-cp37m-win32.whl", hash = "sha256:e985026e64350dd59fd91a09bc364edf706d58b12e01362ddfa63829878bd434"}, + {file = "numexpr-2.7.3-cp37-cp37m-win_amd64.whl", hash = "sha256:e000570a6a704c594832ff4fc45f18864b721b7b444a185b365dbb03d3fe3abb"}, + {file = "numexpr-2.7.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4527a0a7b04f858a73c348c9c4ce8441b7a54965db74a32ba808c51d9d53b7cd"}, + {file = "numexpr-2.7.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:dc707486b1f3dda18a39bc4d06a0a09d3c0ea47bd6b99fdb98adb26d1277253f"}, + {file = "numexpr-2.7.3-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:5d6dbf050a9b8ebff0b7706ebeaf1cd57d64ef4dfe61aef3790851b481daf6b5"}, + {file = "numexpr-2.7.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:aae4ce158da53ebc47df053de90fed9d0d51fa0df8cc481abc8a901ea4f0cec7"}, + {file = "numexpr-2.7.3-cp38-cp38-win32.whl", hash = "sha256:dfdca3d1f4c83fa8fd3ee7573110efd13e838543896641b89367622ec6a67eb4"}, + {file = "numexpr-2.7.3-cp38-cp38-win_amd64.whl", hash = "sha256:d14ae09318ad86579e35aacf1596c83d5db1139cd68615967ee23605e11f5d82"}, + {file = "numexpr-2.7.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a8e0e48d72391543b68d0471fac2e31c614efdce4036e2a0a8a182fde1edb0e0"}, + {file = "numexpr-2.7.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:05b97b19e864a5d1a0b106933b1637233a2444fd375685bead264a818f847ef2"}, + {file = "numexpr-2.7.3-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:7ab40e2b438f4ea2ea8234c63639cdf5072cdb29d0ac521307854efe0281a567"}, + {file = "numexpr-2.7.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:8fc23a49f4266c24a23310c0cb92ff54c4b4f535635f90372b3a2d5cb1f83329"}, + {file = "numexpr-2.7.3-cp39-cp39-win32.whl", hash = "sha256:2e14b44a79030fbe25f16393162a4d21ced14056fac49ff73856f661a78db731"}, + {file = "numexpr-2.7.3-cp39-cp39-win_amd64.whl", hash = "sha256:c2605e5665b0d7362e0d2b92683387c12e15c7440daf702a7637f7502a967810"}, + {file = "numexpr-2.7.3.tar.gz", hash = "sha256:43616529f9b7d1afc83386f943dc66c4da5e052f00217ba7e3ad8dd1b5f3a825"}, +] +numpy = [ + {file = "numpy-1.20.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:70eb5808127284c4e5c9e836208e09d685a7978b6a216db85960b1a112eeace8"}, + {file = "numpy-1.20.3-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6ca2b85a5997dabc38301a22ee43c82adcb53ff660b89ee88dded6b33687e1d8"}, + {file = "numpy-1.20.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c5bf0e132acf7557fc9bb8ded8b53bbbbea8892f3c9a1738205878ca9434206a"}, + {file = "numpy-1.20.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db250fd3e90117e0312b611574cd1b3f78bec046783195075cbd7ba9c3d73f16"}, + {file = "numpy-1.20.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:637d827248f447e63585ca3f4a7d2dfaa882e094df6cfa177cc9cf9cd6cdf6d2"}, + {file = "numpy-1.20.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8b7bb4b9280da3b2856cb1fc425932f46fba609819ee1c62256f61799e6a51d2"}, + 
{file = "numpy-1.20.3-cp37-cp37m-win32.whl", hash = "sha256:67d44acb72c31a97a3d5d33d103ab06d8ac20770e1c5ad81bdb3f0c086a56cf6"}, + {file = "numpy-1.20.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43909c8bb289c382170e0282158a38cf306a8ad2ff6dfadc447e90f9961bef43"}, + {file = "numpy-1.20.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f1452578d0516283c87608a5a5548b0cdde15b99650efdfd85182102ef7a7c17"}, + {file = "numpy-1.20.3-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6e51534e78d14b4a009a062641f465cfaba4fdcb046c3ac0b1f61dd97c861b1b"}, + {file = "numpy-1.20.3-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:e515c9a93aebe27166ec9593411c58494fa98e5fcc219e47260d9ab8a1cc7f9f"}, + {file = "numpy-1.20.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c1c09247ccea742525bdb5f4b5ceeacb34f95731647fe55774aa36557dbb5fa4"}, + {file = "numpy-1.20.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:66fbc6fed94a13b9801fb70b96ff30605ab0a123e775a5e7a26938b717c5d71a"}, + {file = "numpy-1.20.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea9cff01e75a956dbee133fa8e5b68f2f92175233de2f88de3a682dd94deda65"}, + {file = "numpy-1.20.3-cp38-cp38-win32.whl", hash = "sha256:f39a995e47cb8649673cfa0579fbdd1cdd33ea497d1728a6cb194d6252268e48"}, + {file = "numpy-1.20.3-cp38-cp38-win_amd64.whl", hash = "sha256:1676b0a292dd3c99e49305a16d7a9f42a4ab60ec522eac0d3dd20cdf362ac010"}, + {file = "numpy-1.20.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:830b044f4e64a76ba71448fce6e604c0fc47a0e54d8f6467be23749ac2cbd2fb"}, + {file = "numpy-1.20.3-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:55b745fca0a5ab738647d0e4db099bd0a23279c32b31a783ad2ccea729e632df"}, + {file = "numpy-1.20.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5d050e1e4bc9ddb8656d7b4f414557720ddcca23a5b88dd7cff65e847864c400"}, + {file = "numpy-1.20.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9c65473ebc342715cb2d7926ff1e202c26376c0dcaaee85a1fd4b8d8c1d3b2f"}, + {file = "numpy-1.20.3-cp39-cp39-win32.whl", hash = "sha256:16f221035e8bd19b9dc9a57159e38d2dd060b48e93e1d843c49cb370b0f415fd"}, + {file = "numpy-1.20.3-cp39-cp39-win_amd64.whl", hash = "sha256:6690080810f77485667bfbff4f69d717c3be25e5b11bb2073e76bb3f578d99b4"}, + {file = "numpy-1.20.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4e465afc3b96dbc80cf4a5273e5e2b1e3451286361b4af70ce1adb2984d392f9"}, + {file = "numpy-1.20.3.zip", hash = "sha256:e55185e51b18d788e49fe8305fd73ef4470596b33fc2c1ceb304566b99c71a69"}, +] +packaging = [ + {file = "packaging-20.9-py2.py3-none-any.whl", hash = "sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"}, + {file = "packaging-20.9.tar.gz", hash = "sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5"}, +] +pandas = [ + {file = "pandas-1.2.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c601c6fdebc729df4438ec1f62275d6136a0dd14d332fc0e8ce3f7d2aadb4dd6"}, + {file = "pandas-1.2.4-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:8d4c74177c26aadcfb4fd1de6c1c43c2bf822b3e0fc7a9b409eeaf84b3e92aaa"}, + {file = "pandas-1.2.4-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:b730add5267f873b3383c18cac4df2527ac4f0f0eed1c6cf37fcb437e25cf558"}, + {file = "pandas-1.2.4-cp37-cp37m-win32.whl", hash = "sha256:2cb7e8f4f152f27dc93f30b5c7a98f6c748601ea65da359af734dd0cf3fa733f"}, + {file = 
"pandas-1.2.4-cp37-cp37m-win_amd64.whl", hash = "sha256:2111c25e69fa9365ba80bbf4f959400054b2771ac5d041ed19415a8b488dc70a"}, + {file = "pandas-1.2.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:167693a80abc8eb28051fbd184c1b7afd13ce2c727a5af47b048f1ea3afefff4"}, + {file = "pandas-1.2.4-cp38-cp38-manylinux1_i686.whl", hash = "sha256:612add929bf3ba9d27b436cc8853f5acc337242d6b584203f207e364bb46cb12"}, + {file = "pandas-1.2.4-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:971e2a414fce20cc5331fe791153513d076814d30a60cd7348466943e6e909e4"}, + {file = "pandas-1.2.4-cp38-cp38-win32.whl", hash = "sha256:68d7baa80c74aaacbed597265ca2308f017859123231542ff8a5266d489e1858"}, + {file = "pandas-1.2.4-cp38-cp38-win_amd64.whl", hash = "sha256:bd659c11a4578af740782288cac141a322057a2e36920016e0fc7b25c5a4b686"}, + {file = "pandas-1.2.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9db70ffa8b280bb4de83f9739d514cd0735825e79eef3a61d312420b9f16b758"}, + {file = "pandas-1.2.4-cp39-cp39-manylinux1_i686.whl", hash = "sha256:298f0553fd3ba8e002c4070a723a59cdb28eda579f3e243bc2ee397773f5398b"}, + {file = "pandas-1.2.4-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:52d2472acbb8a56819a87aafdb8b5b6d2b3386e15c95bde56b281882529a7ded"}, + {file = "pandas-1.2.4-cp39-cp39-win32.whl", hash = "sha256:d0877407359811f7b853b548a614aacd7dea83b0c0c84620a9a643f180060950"}, + {file = "pandas-1.2.4-cp39-cp39-win_amd64.whl", hash = "sha256:2b063d41803b6a19703b845609c0b700913593de067b552a8b24dd8eeb8c9895"}, + {file = "pandas-1.2.4.tar.gz", hash = "sha256:649ecab692fade3cbfcf967ff936496b0cfba0af00a55dfaacd82bdda5cb2279"}, +] +pillow = [ + {file = "Pillow-8.2.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:dc38f57d8f20f06dd7c3161c59ca2c86893632623f33a42d592f097b00f720a9"}, + {file = "Pillow-8.2.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:a013cbe25d20c2e0c4e85a9daf438f85121a4d0344ddc76e33fd7e3965d9af4b"}, + {file = "Pillow-8.2.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8bb1e155a74e1bfbacd84555ea62fa21c58e0b4e7e6b20e4447b8d07990ac78b"}, + {file = "Pillow-8.2.0-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c5236606e8570542ed424849f7852a0ff0bce2c4c8d0ba05cc202a5a9c97dee9"}, + {file = "Pillow-8.2.0-cp36-cp36m-win32.whl", hash = "sha256:12e5e7471f9b637762453da74e390e56cc43e486a88289995c1f4c1dc0bfe727"}, + {file = "Pillow-8.2.0-cp36-cp36m-win_amd64.whl", hash = "sha256:5afe6b237a0b81bd54b53f835a153770802f164c5570bab5e005aad693dab87f"}, + {file = "Pillow-8.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:cb7a09e173903541fa888ba010c345893cd9fc1b5891aaf060f6ca77b6a3722d"}, + {file = "Pillow-8.2.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:0d19d70ee7c2ba97631bae1e7d4725cdb2ecf238178096e8c82ee481e189168a"}, + {file = "Pillow-8.2.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:083781abd261bdabf090ad07bb69f8f5599943ddb539d64497ed021b2a67e5a9"}, + {file = "Pillow-8.2.0-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:c6b39294464b03457f9064e98c124e09008b35a62e3189d3513e5148611c9388"}, + {file = "Pillow-8.2.0-cp37-cp37m-win32.whl", hash = "sha256:01425106e4e8cee195a411f729cff2a7d61813b0b11737c12bd5991f5f14bcd5"}, + {file = "Pillow-8.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b570f84a6161cf8865c4e08adf629441f56e32f180f7aa4ccbd2e0a5a02cba2"}, + {file = "Pillow-8.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:031a6c88c77d08aab84fecc05c3cde8414cd6f8406f4d2b16fed1e97634cc8a4"}, + {file = "Pillow-8.2.0-cp38-cp38-manylinux1_i686.whl", hash = 
"sha256:66cc56579fd91f517290ab02c51e3a80f581aba45fd924fcdee01fa06e635812"}, + {file = "Pillow-8.2.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:6c32cc3145928c4305d142ebec682419a6c0a8ce9e33db900027ddca1ec39178"}, + {file = "Pillow-8.2.0-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:624b977355cde8b065f6d51b98497d6cd5fbdd4f36405f7a8790e3376125e2bb"}, + {file = "Pillow-8.2.0-cp38-cp38-win32.whl", hash = "sha256:5cbf3e3b1014dddc45496e8cf38b9f099c95a326275885199f427825c6522232"}, + {file = "Pillow-8.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:463822e2f0d81459e113372a168f2ff59723e78528f91f0bd25680ac185cf797"}, + {file = "Pillow-8.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:95d5ef984eff897850f3a83883363da64aae1000e79cb3c321915468e8c6add5"}, + {file = "Pillow-8.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b91c36492a4bbb1ee855b7d16fe51379e5f96b85692dc8210831fbb24c43e484"}, + {file = "Pillow-8.2.0-cp39-cp39-manylinux1_i686.whl", hash = "sha256:d68cb92c408261f806b15923834203f024110a2e2872ecb0bd2a110f89d3c602"}, + {file = "Pillow-8.2.0-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f217c3954ce5fd88303fc0c317af55d5e0204106d86dea17eb8205700d47dec2"}, + {file = "Pillow-8.2.0-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:5b70110acb39f3aff6b74cf09bb4169b167e2660dabc304c1e25b6555fa781ef"}, + {file = "Pillow-8.2.0-cp39-cp39-win32.whl", hash = "sha256:a7d5e9fad90eff8f6f6106d3b98b553a88b6f976e51fce287192a5d2d5363713"}, + {file = "Pillow-8.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:238c197fc275b475e87c1453b05b467d2d02c2915fdfdd4af126145ff2e4610c"}, + {file = "Pillow-8.2.0-pp36-pypy36_pp73-macosx_10_10_x86_64.whl", hash = "sha256:0e04d61f0064b545b989126197930807c86bcbd4534d39168f4aa5fda39bb8f9"}, + {file = "Pillow-8.2.0-pp36-pypy36_pp73-manylinux2010_i686.whl", hash = "sha256:63728564c1410d99e6d1ae8e3b810fe012bc440952168af0a2877e8ff5ab96b9"}, + {file = "Pillow-8.2.0-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:c03c07ed32c5324939b19e36ae5f75c660c81461e312a41aea30acdd46f93a7c"}, + {file = "Pillow-8.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:4d98abdd6b1e3bf1a1cbb14c3895226816e666749ac040c4e2554231068c639b"}, + {file = "Pillow-8.2.0-pp37-pypy37_pp73-manylinux2010_i686.whl", hash = "sha256:aac00e4bc94d1b7813fe882c28990c1bc2f9d0e1aa765a5f2b516e8a6a16a9e4"}, + {file = "Pillow-8.2.0-pp37-pypy37_pp73-manylinux2010_x86_64.whl", hash = "sha256:22fd0f42ad15dfdde6c581347eaa4adb9a6fc4b865f90b23378aa7914895e120"}, + {file = "Pillow-8.2.0-pp37-pypy37_pp73-win32.whl", hash = "sha256:e98eca29a05913e82177b3ba3d198b1728e164869c613d76d0de4bde6768a50e"}, + {file = "Pillow-8.2.0.tar.gz", hash = "sha256:a787ab10d7bb5494e5f76536ac460741788f1fbce851068d73a87ca7c35fc3e1"}, +] +pluggy = [ + {file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"}, + {file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"}, +] +py = [ + {file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"}, + {file = "py-1.10.0.tar.gz", hash = "sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"}, +] +pyerfa = [ + {file = "pyerfa-1.7.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:34083499af3cf1dab5673bf287b0025cd66fc54838f46b0bcfa539c08907cef3"}, + {file = "pyerfa-1.7.3-cp36-cp36m-manylinux1_i686.whl", hash = 
"sha256:8baf8a4904d415127bed0137207f1bbf5764e9c96b35a98147e3b9159e952c18"}, + {file = "pyerfa-1.7.3-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:c8001f9a0c713d69d4dc837bb8d4dadc76385a51d639756150b8cd8c21f2f6db"}, + {file = "pyerfa-1.7.3-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:8ff4866f71aed68d4ddc54bab25d87b9ae5872eae95e9f3f26f164e872744084"}, + {file = "pyerfa-1.7.3-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:952bed5eeab094ccae87ffa4da3687244a11762561e686e5765d8cf49706fa73"}, + {file = "pyerfa-1.7.3-cp36-cp36m-win32.whl", hash = "sha256:424e1b1d1c153014d8c945b6541a14df91d826f26f709b47ece0aabaff77f00c"}, + {file = "pyerfa-1.7.3-cp36-cp36m-win_amd64.whl", hash = "sha256:3e31241d2e7ee28f74e4c27e31b69ca4f770c37787f33016599aadf865ab8b0e"}, + {file = "pyerfa-1.7.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3fa11c322984a5c5adc35ca860a7321a7f6919c6ec4fdb37202c22dc3ea62f"}, + {file = "pyerfa-1.7.3-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:b5244f2440294f4299f97e4029717f3de3234dbd16c63f4e1f9f68e3c1efb47d"}, + {file = "pyerfa-1.7.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:d6ae53d73863d3c65b949bcfdbdf4f62cca342346528e28a2b820c5a4615a10c"}, + {file = "pyerfa-1.7.3-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:496197facb43c6e2074bba7fd91fe1c93c2e6519ee23f8b5673894587ae9401d"}, + {file = "pyerfa-1.7.3-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:3d973f78244aaf9c4b3c8e12eed79d8471a67399811ce1e5e214bc741a97d52e"}, + {file = "pyerfa-1.7.3-cp37-cp37m-win32.whl", hash = "sha256:cd29258530e8005ebee8fd77c69bf97af231f8b0ce8960f3617446b0394b7690"}, + {file = "pyerfa-1.7.3-cp37-cp37m-win_amd64.whl", hash = "sha256:7a8eac2b30ca56d099a270a89e486d117efede516130daef4be1421d42213376"}, + {file = "pyerfa-1.7.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24932f31405cb3c29e3f4954af36bd2251ab968f382f3736cd885b046fa0c68d"}, + {file = "pyerfa-1.7.3-cp38-cp38-manylinux1_i686.whl", hash = "sha256:20ec4b99a7ec45071c21ff938da4929090d3c3c9ba98faace62296c87a6fe804"}, + {file = "pyerfa-1.7.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:3af5921651660c136fb1bbcec34d25a2621b0c8c985f1154c0941eb239f04dad"}, + {file = "pyerfa-1.7.3-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:b0bc078f6dacf3e169fc66b2a38de2f820388551317f4a86af54009c29eb5401"}, + {file = "pyerfa-1.7.3-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:83767b545c5f1bebf5fc7cbcf49da3bf106849f8daab3a92970e901fd4ba3771"}, + {file = "pyerfa-1.7.3-cp38-cp38-win32.whl", hash = "sha256:7397ca80a1a72c5277d6e803a9bd8c535c7199afdf287fe48515026a429e5821"}, + {file = "pyerfa-1.7.3-cp38-cp38-win_amd64.whl", hash = "sha256:b6f8d66439bc3c6f2366cafbbcb4d39467e95ecd3ebe901e8204c7886ed0dcbd"}, + {file = "pyerfa-1.7.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c65b889a41866d842877fb897b379c7a995a8f75e6a94a5ceed17ab24c7eb944"}, + {file = "pyerfa-1.7.3-cp39-cp39-manylinux1_i686.whl", hash = "sha256:3dcdcf45e0a083df71ee0d6488effeac86f2fea1589f82dc7b850aed7fc8f74d"}, + {file = "pyerfa-1.7.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:e64d5f08b30b62c80530620384894b3c3e0945ad178a47e3e2aec7001cf738c3"}, + {file = "pyerfa-1.7.3-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:d0b7d4f76d73ddb0aeeb06d51818335412e7e98da5469b1719d7eb91adeee70a"}, + {file = "pyerfa-1.7.3-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:d8c3e10f85d44f81432c72a6c38ca7f4880e708f741a46cdf24611462e8de9fa"}, + {file = "pyerfa-1.7.3-cp39-cp39-win32.whl", hash = "sha256:280656b5c425ff911a8c33dd870606cca7541efdd4677783fc8e53374f3165e6"}, 
+ {file = "pyerfa-1.7.3-cp39-cp39-win_amd64.whl", hash = "sha256:0046fa759ce8e166d9f1bb9ee1dab521f691ce4f1c7ccc27347246c07b41d278"}, + {file = "pyerfa-1.7.3.tar.gz", hash = "sha256:6cf3a645d63e0c575a357797903eac5d2c6591d7cdb89217c8c4d39777cf18cb"}, +] +pyparsing = [ + {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"}, + {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"}, +] +pytest = [ + {file = "pytest-6.2.4-py3-none-any.whl", hash = "sha256:91ef2131a9bd6be8f76f1f08eac5c5317221d6ad1e143ae03894b862e8976890"}, + {file = "pytest-6.2.4.tar.gz", hash = "sha256:50bcad0a0b9c5a72c8e4e7c9855a3ad496ca6a881a3641b4260605450772c54b"}, +] +python-dateutil = [ + {file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"}, + {file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"}, +] +pytz = [ + {file = "pytz-2021.1-py2.py3-none-any.whl", hash = "sha256:eb10ce3e7736052ed3623d49975ce333bcd712c7bb19a58b9e2089d4057d0798"}, + {file = "pytz-2021.1.tar.gz", hash = "sha256:83a4a90894bf38e243cf052c8b58f381bfe9a7a483f6a9cab140bc7f702ac4da"}, +] +pywavelets = [ + {file = "PyWavelets-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:35959c041ec014648575085a97b498eafbbaa824f86f6e4a59bfdef8a3fe6308"}, + {file = "PyWavelets-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:55e39ec848ceec13c9fa1598253ae9dd5c31d09dfd48059462860d2b908fb224"}, + {file = "PyWavelets-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c06d2e340c7bf8b9ec71da2284beab8519a3908eab031f4ea126e8ccfc3fd567"}, + {file = "PyWavelets-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:be105382961745f88d8196bba5a69ee2c4455d87ad2a2e5d1eed6bd7fda4d3fd"}, + {file = "PyWavelets-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:076ca8907001fdfe4205484f719d12b4a0262dfe6652fa1cfc3c5c362d14dc84"}, + {file = "PyWavelets-1.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:7947e51ca05489b85928af52a34fe67022ab5b81d4ae32a4109a99e883a0635e"}, + {file = "PyWavelets-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:9e2528823ccf5a0a1d23262dfefe5034dce89cd84e4e124dc553dfcdf63ebb92"}, + {file = "PyWavelets-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:80b924edbc012ded8aa8b91cb2fd6207fb1a9a3a377beb4049b8a07445cec6f0"}, + {file = "PyWavelets-1.1.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c2a799e79cee81a862216c47e5623c97b95f1abee8dd1f9eed736df23fb653fb"}, + {file = "PyWavelets-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:d510aef84d9852653d079c84f2f81a82d5d09815e625f35c95714e7364570ad4"}, + {file = "PyWavelets-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:889d4c5c5205a9c90118c1980df526857929841df33e4cd1ff1eff77c6817a65"}, + {file = "PyWavelets-1.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:68b5c33741d26c827074b3d8f0251de1c3019bb9567b8d303eb093c822ce28f1"}, + {file = "PyWavelets-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:18a51b3f9416a2ae6e9a35c4af32cf520dd7895f2b69714f4aa2f4342fca47f9"}, + {file = "PyWavelets-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:cfe79844526dd92e3ecc9490b5031fca5f8ab607e1e858feba232b1b788ff0ea"}, + {file = "PyWavelets-1.1.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:2f7429eeb5bf9c7068002d0d7f094ed654c77a70ce5e6198737fd68ab85f8311"}, + {file = "PyWavelets-1.1.1-cp37-cp37m-win32.whl", hash = 
"sha256:720dbcdd3d91c6dfead79c80bf8b00a1d8aa4e5d551dc528c6d5151e4efc3403"}, + {file = "PyWavelets-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:bc5e87b72371da87c9bebc68e54882aada9c3114e640de180f62d5da95749cd3"}, + {file = "PyWavelets-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:98b2669c5af842a70cfab33a7043fcb5e7535a690a00cd251b44c9be0be418e5"}, + {file = "PyWavelets-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e02a0558e0c2ac8b8bbe6a6ac18c136767ec56b96a321e0dfde2173adfa5a504"}, + {file = "PyWavelets-1.1.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:6162dc0ae04669ea04b4b51420777b9ea2d30b0a9d02901b2a3b4d61d159c2e9"}, + {file = "PyWavelets-1.1.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:39c74740718e420d38c78ca4498568fa57976d78d5096277358e0fa9629a7aea"}, + {file = "PyWavelets-1.1.1-cp38-cp38-win32.whl", hash = "sha256:79f5b54f9dc353e5ee47f0c3f02bebd2c899d49780633aa771fed43fa20b3149"}, + {file = "PyWavelets-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:935ff247b8b78bdf77647fee962b1cc208c51a7b229db30b9ba5f6da3e675178"}, + {file = "PyWavelets-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6ebfefebb5c6494a3af41ad8c60248a95da267a24b79ed143723d4502b1fe4d7"}, + {file = "PyWavelets-1.1.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:6bc78fb9c42a716309b4ace56f51965d8b5662c3ba19d4591749f31773db1125"}, + {file = "PyWavelets-1.1.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:411e17ca6ed8cf5e18a7ca5ee06a91c25800cc6c58c77986202abf98d749273a"}, + {file = "PyWavelets-1.1.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:83c5e3eb78ce111c2f0b45f46106cc697c3cb6c4e5f51308e1f81b512c70c8fb"}, + {file = "PyWavelets-1.1.1-cp39-cp39-win32.whl", hash = "sha256:2b634a54241c190ee989a4af87669d377b37c91bcc9cf0efe33c10ff847f7841"}, + {file = "PyWavelets-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:732bab78435c48be5d6bc75486ef629d7c8f112e07b313bf1f1a2220ab437277"}, + {file = "PyWavelets-1.1.1.tar.gz", hash = "sha256:1a64b40f6acb4ffbaccce0545d7fc641744f95351f62e4c6aaa40549326008c9"}, +] +pyyaml = [ + {file = "PyYAML-5.4.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:3b2b1824fe7112845700f815ff6a489360226a5609b96ec2190a45e62a9fc922"}, + {file = "PyYAML-5.4.1-cp27-cp27m-win32.whl", hash = "sha256:129def1b7c1bf22faffd67b8f3724645203b79d8f4cc81f674654d9902cb4393"}, + {file = "PyYAML-5.4.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4465124ef1b18d9ace298060f4eccc64b0850899ac4ac53294547536533800c8"}, + {file = "PyYAML-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:bb4191dfc9306777bc594117aee052446b3fa88737cd13b7188d0e7aa8162185"}, + {file = "PyYAML-5.4.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:6c78645d400265a062508ae399b60b8c167bf003db364ecb26dcab2bda048253"}, + {file = "PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:4e0583d24c881e14342eaf4ec5fbc97f934b999a6828693a99157fde912540cc"}, + {file = "PyYAML-5.4.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:72a01f726a9c7851ca9bfad6fd09ca4e090a023c00945ea05ba1638c09dc3347"}, + {file = "PyYAML-5.4.1-cp36-cp36m-manylinux2014_s390x.whl", hash = "sha256:895f61ef02e8fed38159bb70f7e100e00f471eae2bc838cd0f4ebb21e28f8541"}, + {file = "PyYAML-5.4.1-cp36-cp36m-win32.whl", hash = "sha256:3bd0e463264cf257d1ffd2e40223b197271046d09dadf73a0fe82b9c1fc385a5"}, + {file = "PyYAML-5.4.1-cp36-cp36m-win_amd64.whl", hash = "sha256:e4fac90784481d221a8e4b1162afa7c47ed953be40d31ab4629ae917510051df"}, + {file = "PyYAML-5.4.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = 
"sha256:5accb17103e43963b80e6f837831f38d314a0495500067cb25afab2e8d7a4018"}, + {file = "PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:e1d4970ea66be07ae37a3c2e48b5ec63f7ba6804bdddfdbd3cfd954d25a82e63"}, + {file = "PyYAML-5.4.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:cb333c16912324fd5f769fff6bc5de372e9e7a202247b48870bc251ed40239aa"}, + {file = "PyYAML-5.4.1-cp37-cp37m-manylinux2014_s390x.whl", hash = "sha256:fe69978f3f768926cfa37b867e3843918e012cf83f680806599ddce33c2c68b0"}, + {file = "PyYAML-5.4.1-cp37-cp37m-win32.whl", hash = "sha256:dd5de0646207f053eb0d6c74ae45ba98c3395a571a2891858e87df7c9b9bd51b"}, + {file = "PyYAML-5.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:08682f6b72c722394747bddaf0aa62277e02557c0fd1c42cb853016a38f8dedf"}, + {file = "PyYAML-5.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d2d9808ea7b4af864f35ea216be506ecec180628aced0704e34aca0b040ffe46"}, + {file = "PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:8c1be557ee92a20f184922c7b6424e8ab6691788e6d86137c5d93c1a6ec1b8fb"}, + {file = "PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:fd7f6999a8070df521b6384004ef42833b9bd62cfee11a09bda1079b4b704247"}, + {file = "PyYAML-5.4.1-cp38-cp38-manylinux2014_s390x.whl", hash = "sha256:bfb51918d4ff3d77c1c856a9699f8492c612cde32fd3bcd344af9be34999bfdc"}, + {file = "PyYAML-5.4.1-cp38-cp38-win32.whl", hash = "sha256:fa5ae20527d8e831e8230cbffd9f8fe952815b2b7dae6ffec25318803a7528fc"}, + {file = "PyYAML-5.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:0f5f5786c0e09baddcd8b4b45f20a7b5d61a7e7e99846e3c799b05c7c53fa696"}, + {file = "PyYAML-5.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:294db365efa064d00b8d1ef65d8ea2c3426ac366c0c4368d930bf1c5fb497f77"}, + {file = "PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:74c1485f7707cf707a7aef42ef6322b8f97921bd89be2ab6317fd782c2d53183"}, + {file = "PyYAML-5.4.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:d483ad4e639292c90170eb6f7783ad19490e7a8defb3e46f97dfe4bacae89122"}, + {file = "PyYAML-5.4.1-cp39-cp39-manylinux2014_s390x.whl", hash = "sha256:fdc842473cd33f45ff6bce46aea678a54e3d21f1b61a7750ce3c498eedfe25d6"}, + {file = "PyYAML-5.4.1-cp39-cp39-win32.whl", hash = "sha256:49d4cdd9065b9b6e206d0595fee27a96b5dd22618e7520c33204a4a3239d5b10"}, + {file = "PyYAML-5.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:c20cfa2d49991c8b4147af39859b167664f2ad4561704ee74c1de03318e898db"}, + {file = "PyYAML-5.4.1.tar.gz", hash = "sha256:607774cbba28732bfa802b54baa7484215f530991055bb562efbed5b2f20a45e"}, +] +"ruamel.yaml" = [ + {file = "ruamel.yaml-0.17.4-py3-none-any.whl", hash = "sha256:ac79fb25f5476e8e9ed1c53b8a2286d2c3f5dde49eb37dbcee5c7eb6a8415a22"}, + {file = "ruamel.yaml-0.17.4.tar.gz", hash = "sha256:44bc6b54fddd45e4bc0619059196679f9e8b79c027f4131bb072e6a22f4d5e28"}, +] +"ruamel.yaml.clib" = [ + {file = "ruamel.yaml.clib-0.2.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:28116f204103cb3a108dfd37668f20abe6e3cafd0d3fd40dba126c732457b3cc"}, + {file = "ruamel.yaml.clib-0.2.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:daf21aa33ee9b351f66deed30a3d450ab55c14242cfdfcd377798e2c0d25c9f1"}, + {file = "ruamel.yaml.clib-0.2.2-cp27-cp27m-win32.whl", hash = "sha256:30dca9bbcbb1cc858717438218d11eafb78666759e5094dd767468c0d577a7e7"}, + {file = "ruamel.yaml.clib-0.2.2-cp27-cp27m-win_amd64.whl", hash = "sha256:f6061a31880c1ed6b6ce341215336e2f3d0c1deccd84957b6fa8ca474b41e89f"}, + {file = "ruamel.yaml.clib-0.2.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = 
"sha256:73b3d43e04cc4b228fa6fa5d796409ece6fcb53a6c270eb2048109cbcbc3b9c2"}, + {file = "ruamel.yaml.clib-0.2.2-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:53b9dd1abd70e257a6e32f934ebc482dac5edb8c93e23deb663eac724c30b026"}, + {file = "ruamel.yaml.clib-0.2.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:839dd72545ef7ba78fd2aa1a5dd07b33696adf3e68fae7f31327161c1093001b"}, + {file = "ruamel.yaml.clib-0.2.2-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:1236df55e0f73cd138c0eca074ee086136c3f16a97c2ac719032c050f7e0622f"}, + {file = "ruamel.yaml.clib-0.2.2-cp35-cp35m-win32.whl", hash = "sha256:b1e981fe1aff1fd11627f531524826a4dcc1f26c726235a52fcb62ded27d150f"}, + {file = "ruamel.yaml.clib-0.2.2-cp35-cp35m-win_amd64.whl", hash = "sha256:4e52c96ca66de04be42ea2278012a2342d89f5e82b4512fb6fb7134e377e2e62"}, + {file = "ruamel.yaml.clib-0.2.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:a873e4d4954f865dcb60bdc4914af7eaae48fb56b60ed6daa1d6251c72f5337c"}, + {file = "ruamel.yaml.clib-0.2.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:ab845f1f51f7eb750a78937be9f79baea4a42c7960f5a94dde34e69f3cce1988"}, + {file = "ruamel.yaml.clib-0.2.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:2fd336a5c6415c82e2deb40d08c222087febe0aebe520f4d21910629018ab0f3"}, + {file = "ruamel.yaml.clib-0.2.2-cp36-cp36m-win32.whl", hash = "sha256:e9f7d1d8c26a6a12c23421061f9022bb62704e38211fe375c645485f38df34a2"}, + {file = "ruamel.yaml.clib-0.2.2-cp36-cp36m-win_amd64.whl", hash = "sha256:2602e91bd5c1b874d6f93d3086f9830f3e907c543c7672cf293a97c3fabdcd91"}, + {file = "ruamel.yaml.clib-0.2.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:44c7b0498c39f27795224438f1a6be6c5352f82cb887bc33d962c3a3acc00df6"}, + {file = "ruamel.yaml.clib-0.2.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:8e8fd0a22c9d92af3a34f91e8a2594eeb35cba90ab643c5e0e643567dc8be43e"}, + {file = "ruamel.yaml.clib-0.2.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:75f0ee6839532e52a3a53f80ce64925ed4aed697dd3fa890c4c918f3304bd4f4"}, + {file = "ruamel.yaml.clib-0.2.2-cp37-cp37m-win32.whl", hash = "sha256:464e66a04e740d754170be5e740657a3b3b6d2bcc567f0c3437879a6e6087ff6"}, + {file = "ruamel.yaml.clib-0.2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:52ae5739e4b5d6317b52f5b040b1b6639e8af68a5b8fd606a8b08658fbd0cab5"}, + {file = "ruamel.yaml.clib-0.2.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4df5019e7783d14b79217ad9c56edf1ba7485d614ad5a385d1b3c768635c81c0"}, + {file = "ruamel.yaml.clib-0.2.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:5254af7d8bdf4d5484c089f929cb7f5bafa59b4f01d4f48adda4be41e6d29f99"}, + {file = "ruamel.yaml.clib-0.2.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:8be05be57dc5c7b4a0b24edcaa2f7275866d9c907725226cdde46da09367d923"}, + {file = "ruamel.yaml.clib-0.2.2-cp38-cp38-win32.whl", hash = "sha256:74161d827407f4db9072011adcfb825b5258a5ccb3d2cd518dd6c9edea9e30f1"}, + {file = "ruamel.yaml.clib-0.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:058a1cc3df2a8aecc12f983a48bda99315cebf55a3b3a5463e37bb599b05727b"}, + {file = "ruamel.yaml.clib-0.2.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c6ac7e45367b1317e56f1461719c853fd6825226f45b835df7436bb04031fd8a"}, + {file = "ruamel.yaml.clib-0.2.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:b4b0d31f2052b3f9f9b5327024dc629a253a83d8649d4734ca7f35b60ec3e9e5"}, + {file = "ruamel.yaml.clib-0.2.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:1f8c0a4577c0e6c99d208de5c4d3fd8aceed9574bb154d7a2b21c16bb924154c"}, + {file = 
"ruamel.yaml.clib-0.2.2-cp39-cp39-win32.whl", hash = "sha256:46d6d20815064e8bb023ea8628cfb7402c0f0e83de2c2227a88097e239a7dffd"}, + {file = "ruamel.yaml.clib-0.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:6c0a5dc52fc74eb87c67374a4e554d4761fd42a4d01390b7e868b30d21f4b8bb"}, + {file = "ruamel.yaml.clib-0.2.2.tar.gz", hash = "sha256:2d24bd98af676f4990c4d715bcdc2a60b19c56a3fb3a763164d2d8ca0e806ba7"}, +] +scikit-image = [ + {file = "scikit-image-0.18.1.tar.gz", hash = "sha256:fbb618ca911867bce45574c1639618cdfb5d94e207432b19bc19563d80d2f171"}, + {file = "scikit_image-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:1cd05c882ffb2a271a1f20b4afe937d63d55b8753c3d652f11495883a7800ebe"}, + {file = "scikit_image-0.18.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:e972c628ad9ba52c298b032368e29af9bd5eeb81ce33bc2d9b039a81661c99c5"}, + {file = "scikit_image-0.18.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:1256017c513e8e1b8b9da73e5fd1e605d0077bbbc8e5c8d6c2cab36400131c6c"}, + {file = "scikit_image-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:ec25e4110951d3a280421bb10dd510a082ba83d86e20d706294faf7899cdb3d5"}, + {file = "scikit_image-0.18.1-cp37-cp37m-win_amd64.whl", hash = "sha256:2eea42706a25ae6e0cebaf1914e2ab1c04061b1f3c9966d76025d58a2e9188fc"}, + {file = "scikit_image-0.18.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:76446e2402e64d7dba78eeae8aa86e92a0cafe5b1c9e6235bd8d067471ed2788"}, + {file = "scikit_image-0.18.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:d5ad4a9b4c9797d4c4c48f45fa224c5ebff22b9b0af636c3ecb8addbb66c21e6"}, + {file = "scikit_image-0.18.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:23f9178b21c752bfb4e4ea3a3fa0ff79bc5a401bc75ddb4661f2cebd1c2b0e24"}, + {file = "scikit_image-0.18.1-cp38-cp38-win32.whl", hash = "sha256:d746540cafe7776c6d05a0b40ec744bb8d33d1ddc51faba601d26c02593d8bcc"}, + {file = "scikit_image-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:30447af3f5b7c9491f2d3db5bc275493d1b91bf1dd16b67e2fd79a6bb95d8ee9"}, + {file = "scikit_image-0.18.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ae6659b3a8bd4bba7e9dcbfd0064e443b32c7054bf09174749db896730fcf42e"}, + {file = "scikit_image-0.18.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:2c058770c6ad6e0fe6c30f59970c9c65fa740ff014d121d8c341664cd792cf49"}, + {file = "scikit_image-0.18.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:c700336a7f96109c74154090c5e693693a8e3fa09ed6156a5996cdc9a3bb1534"}, + {file = "scikit_image-0.18.1-cp39-cp39-win32.whl", hash = "sha256:3515b890e771f99bbe1051a0dcfe0fc477da961da933c34f89808a0f1eeb7dc2"}, + {file = "scikit_image-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:5f602779258807d03e72c0a439cfb221f647e628be166fb3594397435f13c76b"}, +] +scikit-learn = [ + {file = "scikit-learn-0.24.2.tar.gz", hash = "sha256:d14701a12417930392cd3898e9646cf5670c190b933625ebe7511b1f7d7b8736"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-macosx_10_13_x86_64.whl", hash = "sha256:d5bf9c863ba4717b3917b5227463ee06860fc43931dc9026747de416c0a10fee"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:5beaeb091071625e83f5905192d8aecde65ba2f26f8b6719845bbf586f7a04a1"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:06ffdcaaf81e2a3b1b50c3ac6842cfb13df2d8b737d61f64643ed61da7389cde"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:fec42690a2eb646b384eafb021c425fab48991587edb412d4db77acc358b27ce"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-manylinux2010_x86_64.whl", hash = 
"sha256:5ff3e4e4cf7592d36541edec434e09fb8ab9ba6b47608c4ffe30c9038d301897"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:3cbd734e1aefc7c5080e6b6973fe062f97c26a1cdf1a991037ca196ce1c8f427"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-win32.whl", hash = "sha256:f74429a07fedb36a03c159332b914e6de757176064f9fed94b5f79ebac07d913"}, + {file = "scikit_learn-0.24.2-cp36-cp36m-win_amd64.whl", hash = "sha256:dd968a174aa82f3341a615a033fa6a8169e9320cbb46130686562db132d7f1f0"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:49ec0b1361da328da9bb7f1a162836028e72556356adeb53342f8fae6b450d47"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f18c3ed484eeeaa43a0d45dc2efb4d00fc6542ccdcfa2c45d7b635096a2ae534"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:cdf24c1b9bbeb4936456b42ac5bd32c60bb194a344951acb6bfb0cddee5439a4"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:d177fe1ff47cc235942d628d41ee5b1c6930d8f009f1a451c39b5411e8d0d4cf"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:f3ec00f023d84526381ad0c0f2cff982852d035c921bbf8ceb994f4886c00c64"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:ae19ac105cf7ce8c205a46166992fdec88081d6e783ab6e38ecfbe45729f3c39"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-win32.whl", hash = "sha256:f0ed4483c258fb23150e31b91ea7d25ff8495dba108aea0b0d4206a777705350"}, + {file = "scikit_learn-0.24.2-cp37-cp37m-win_amd64.whl", hash = "sha256:39b7e3b71bcb1fe46397185d6c1a5db1c441e71c23c91a31e7ad8cc3f7305f9a"}, + {file = "scikit_learn-0.24.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:90a297330f608adeb4d2e9786c6fda395d3150739deb3d42a86d9a4c2d15bc1d"}, + {file = "scikit_learn-0.24.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:f1d2108e770907540b5248977e4cff9ffaf0f73d0d13445ee938df06ca7579c6"}, + {file = "scikit_learn-0.24.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:1eec963fe9ffc827442c2e9333227c4d49749a44e592f305398c1db5c1563393"}, + {file = "scikit_learn-0.24.2-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:2db429090b98045d71218a9ba913cc9b3fe78e0ba0b6b647d8748bc6d5a44080"}, + {file = "scikit_learn-0.24.2-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:62214d2954377fcf3f31ec867dd4e436df80121e7a32947a0b3244f58f45e455"}, + {file = "scikit_learn-0.24.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:8fac72b9688176922f9f54fda1ba5f7ffd28cbeb9aad282760186e8ceba9139a"}, + {file = "scikit_learn-0.24.2-cp38-cp38-win32.whl", hash = "sha256:ae426e3a52842c6b6d77d00f906b6031c8c2cfdfabd6af7511bb4bc9a68d720e"}, + {file = "scikit_learn-0.24.2-cp38-cp38-win_amd64.whl", hash = "sha256:038f4e9d6ef10e1f3fe82addc3a14735c299866eb10f2c77c090410904828312"}, + {file = "scikit_learn-0.24.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:48f273836e19901ba2beecd919f7b352f09310ce67c762f6e53bc6b81cacf1f0"}, + {file = "scikit_learn-0.24.2-cp39-cp39-manylinux1_i686.whl", hash = "sha256:a2a47449093dcf70babc930beba2ca0423cb7df2fa5fd76be5260703d67fa574"}, + {file = "scikit_learn-0.24.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:0e71ce9c7cbc20f6f8b860107ce15114da26e8675238b4b82b7e7cd37ca0c087"}, + {file = "scikit_learn-0.24.2-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2754c85b2287333f9719db7f23fb7e357f436deed512db3417a02bf6f2830aa5"}, + {file = "scikit_learn-0.24.2-cp39-cp39-manylinux2010_x86_64.whl", hash = 
"sha256:7be1b88c23cfac46e06404582215a917017cd2edaa2e4d40abe6aaff5458f24b"}, + {file = "scikit_learn-0.24.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:4e6198675a6f9d333774671bd536668680eea78e2e81c0b19e57224f58d17f37"}, + {file = "scikit_learn-0.24.2-cp39-cp39-win32.whl", hash = "sha256:cbdb0b3db99dd1d5f69d31b4234367d55475add31df4d84a3bd690ef017b55e2"}, + {file = "scikit_learn-0.24.2-cp39-cp39-win_amd64.whl", hash = "sha256:40556bea1ef26ef54bc678d00cf138a63069144a0b5f3a436eecd8f3468b903e"}, +] +scipy = [ + {file = "scipy-1.6.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2a799714bf1f791fb2650d73222b248d18d53fd40d6af2df2c898db048189606"}, + {file = "scipy-1.6.3-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:9e3302149a369697c6aaea18b430b216e3c88f9a61b62869f6104881e5f9ef85"}, + {file = "scipy-1.6.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:b79104878003487e2b4639a20b9092b02e1bad07fc4cf924b495cf413748a777"}, + {file = "scipy-1.6.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:44d452850f77e65e25b1eb1ac01e25770323a782bfe3a1a3e43847ad4266d93d"}, + {file = "scipy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:b30280fbc1fd8082ac822994a98632111810311a9ece71a0e48f739df3c555a2"}, + {file = "scipy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:10dbcc7de03b8d635a1031cb18fd3eaa997969b64fdf78f99f19ac163a825445"}, + {file = "scipy-1.6.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1b21c6e0dc97b1762590b70dee0daddb291271be0580384d39f02c480b78290a"}, + {file = "scipy-1.6.3-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1caade0ede6967cc675e235c41451f9fb89ae34319ddf4740194094ab736b88d"}, + {file = "scipy-1.6.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:19aeac1ad3e57338723f4657ac8520f41714804568f2e30bd547d684d72c392e"}, + {file = "scipy-1.6.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:ad7269254de06743fb4768f658753de47d8b54e4672c5ebe8612a007a088bd48"}, + {file = "scipy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:d647757373985207af3343301d89fe738d5a294435a4f2aafb04c13b4388c896"}, + {file = "scipy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:33d1677d46111cfa1c84b87472a0274dde9ef4a7ef2e1f155f012f5f1e995d8f"}, + {file = "scipy-1.6.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d449d40e830366b4c612692ad19fbebb722b6b847f78a7b701b1e0d6cda3cc13"}, + {file = "scipy-1.6.3-cp39-cp39-manylinux1_i686.whl", hash = "sha256:23995dfcf269ec3735e5a8c80cfceaf384369a47699df111a6246b83a55da582"}, + {file = "scipy-1.6.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:fdf606341cd798530b05705c87779606fcdfaf768a8129c348ea94441da15b04"}, + {file = "scipy-1.6.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:f68eb46b86b2c246af99fcaa6f6e37c7a7a413e1084a794990b877f2ff71f7b6"}, + {file = "scipy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:01b38dec7e9f897d4db04f8de4e20f0f5be3feac98468188a0f47a991b796055"}, + {file = "scipy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:3274ce145b5dc416c49c0cf8b6119f787f0965cd35e22058fe1932c09fe15d77"}, + {file = "scipy-1.6.3.tar.gz", hash = "sha256:a75b014d3294fce26852a9d04ea27b5671d86736beb34acdfc05859246260707"}, +] +six = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] +theano = [ + {file = "Theano-1.0.5.tar.gz", hash = "sha256:6e9439dd53ba995fcae27bf20626074bfc2fff446899dc5c53cb28c1f9202e89"}, +] +threadpoolctl = [ + {file = 
"threadpoolctl-2.1.0-py3-none-any.whl", hash = "sha256:38b74ca20ff3bb42caca8b00055111d74159ee95c4370882bbff2b93d24da725"}, + {file = "threadpoolctl-2.1.0.tar.gz", hash = "sha256:ddc57c96a38beb63db45d6c159b5ab07b6bced12c45a1f07b2b92f272aebfa6b"}, +] +tifffile = [ + {file = "tifffile-2021.4.8-py3-none-any.whl", hash = "sha256:1cfc55f5b728e200142580a7bf108b72775c4097d007b4111876559fa1fb7432"}, + {file = "tifffile-2021.4.8.tar.gz", hash = "sha256:55aa8baad38e1567c9fe450fff52160e4a21294a612f241c5e414da80f87209b"}, +] +toml = [ + {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, + {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, +] +tqdm = [ + {file = "tqdm-4.60.0-py2.py3-none-any.whl", hash = "sha256:daec693491c52e9498632dfbe9ccfc4882a557f5fa08982db1b4d3adbe0887c3"}, + {file = "tqdm-4.60.0.tar.gz", hash = "sha256:ebdebdb95e3477ceea267decfc0784859aa3df3e27e22d23b83e9b272bf157ae"}, +] +typing-extensions = [ + {file = "typing_extensions-3.10.0.0-py2-none-any.whl", hash = "sha256:0ac0f89795dd19de6b97debb0c6af1c70987fd80a2d62d1958f7e56fcc31b497"}, + {file = "typing_extensions-3.10.0.0-py3-none-any.whl", hash = "sha256:779383f6086d90c99ae41cf0ff39aac8a7937a9283ce0a414e5dd782f4c94a84"}, + {file = "typing_extensions-3.10.0.0.tar.gz", hash = "sha256:50b6f157849174217d0656f99dc82fe932884fb250826c18350e159ec6cdf342"}, +] +wget = [ + {file = "wget-3.2.zip", hash = "sha256:35e630eca2aa50ce998b9b1a127bb26b30dfee573702782aa982f875e3f16061"}, +] +zipp = [ + {file = "zipp-3.4.1-py3-none-any.whl", hash = "sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"}, + {file = "zipp-3.4.1.tar.gz", hash = "sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76"}, +] diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 00000000..6d34619d --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,31 @@ +[tool.poetry] +name = "dl4mic" +version = "0.1.0" +description = "" +authors = ["Craig "] + +[tool.poetry.dependencies] +python = ">=3.7.1,<3.10" +numpy = "^1.20.3" +matplotlib = "^3.4.2" +tifffile = "^2021.4.8" +pandas = "^1.2.4" +scipy = "^1.6.3" +scikit-learn = "^0.24.2" +scikit-image = "^0.18.1" +astropy = "^4.2.1" +fpdf = "^1.7.2" +wget = "^3.2" +mashumaro = "^2.5" +numexpr = "^2.7.3" +h5py = "^3.2.1" + +[tool.poetry.dev-dependencies] +pytest = "^6.2.4" +n2v = "^0.2.1" +keras = ">=2.2.4,<2.3.0" +Theano = "^1.0.5" + +[build-system] +requires = ["poetry-core>=1.0.0"] +build-backend = "poetry.core.masonry.api" diff --git a/pytest.ini b/pytest.ini new file mode 100644 index 00000000..c1fa8785 --- /dev/null +++ b/pytest.ini @@ -0,0 +1,2 @@ +[pytest] +addopts = -p no:warnings \ No newline at end of file diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 00000000..def36aa6 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,15 @@ +keras +# tensorflow==1.15.2 +n2v +wget +fpdf +memory_profiler +numpy +matplotlib +tifffile +pandas +scipy +scikit-image +sklearn +astropy +h5py \ No newline at end of file diff --git a/tests/test_main.py b/tests/test_main.py new file mode 100644 index 00000000..b0ce2e99 --- /dev/null +++ b/tests/test_main.py @@ -0,0 +1,340 @@ +# %% + +# import pytest +import dl4mic.models as models +import dl4mic.utils as utils +import time +import pandas as pd +import pytest + +model_configs = [ + { + "model": None, + "X_train": None, + "X_test": None, + # "model_name": None, + # "model_path": None, + # 
"ref_str"=, + "Notebook_version": 1.12, + "initial_learning_rate": 0.0004, + "number_of_steps": 1, + "percentage_validation": 10, + # "image_patches"=, + "loss_function": "mse", + "batch_size": 128, + "patch_size": 64, + "Training_source": "tests/n2v/Training", + "pretrained_model_path": "tests/n2v/weights_last.h5", + "pretrained_model_name": "Model_name", + "number_of_epochs": 1, + "Use_Default_Advanced_Parameters": False, + "Use_Data_augmentation": False, + # "trained": False, + # "augmentation": False, + "pretrained_model": False, + "pretrained_model_choice": "Model_from_file", + "percentage_validation": 10, + "Use_pretrained_model": True, + "Use_the_current_trained_model": True, + "Source_QC_folder": None, + "Target_QC_folder": None, + "Prediction_model_folder": None, + "QC_model_name": "n2v", + "Data_folder": None, + "Data_type": models.params.Data_type.SINGLE_IMAGES, + "Prediction_model_name": None, + "Prediction_model_path": None, + } + , + { + "model": None, + "X_train": None, + "X_test": None, + # "model_name": None, + # "model_path": None, + # "ref_str"=, + "Notebook_version": 1.12, + "initial_learning_rate": 0.0004, + "number_of_steps": 1, + "percentage_validation": 10, + # "image_patches"=, + "loss_function": "mse", + "batch_size": 128, + "patch_size": 64, + "Training_source": "tests/n2v/Training", + "pretrained_model_path": "tests/n2v/weights_last.h5", + "pretrained_model_name": "Model_name", + "number_of_epochs": 1, + "Use_Default_Advanced_Parameters": True, + "number_of_steps": 100, + "Use_Data_augmentation": False, + # "trained": False, + # "augmentation": False, + "pretrained_model": False, + "pretrained_model_choice": "Model_from_file", + "percentage_validation": 10, + "Use_pretrained_model": True, + "Use_the_current_trained_model": True, + "Source_QC_folder": None, + "Target_QC_folder": None, + "Prediction_model_folder": None, + "QC_model_name": "n2v", + "Data_folder": None, + "Data_type": models.params.Data_type.SINGLE_IMAGES, + "Prediction_model_name": None, + "Prediction_model_path": None, + } +] + +# Use_Default_Advanced_Parameters = [True,False] +# %% +def test_dl4mic_model(): + dl4mic_model = models.DL4MicModel() + + +def test_N2V(): + import os + + os.environ["KERAS_BACKEND"] = "tensorflow" + + from n2v.models import N2VConfig, N2V + from csbdeep.utils import plot_history + from n2v.utils.n2v_utils import manipulate_val_data + from n2v.internals.N2V_DataGenerator import N2V_DataGenerator + from csbdeep.io import save_tiff_imagej_compatible + + model_config = model_configs[0] + dl4mic_model = models.N2V(model_config) + # dl4mic_model.append_config({"Training_source": "Training"}) + # print(dl4mic_model["Training_source"]) + + # Training_source = dl4mic_model["Training_source"] + # print(Training_source) + datagen = N2V_DataGenerator() + # training_images = Training_source + imgs = datagen.load_imgs_from_directory(directory=dl4mic_model["Training_source"]) + + example_image = dl4mic_model.data_checks() + dl4mic_model.data_augmentation() + h5_file_path = dl4mic_model.load_pretrained_model() + + Xdata = datagen.generate_patches_from_list( + imgs, + shape=(dl4mic_model["patch_size"], dl4mic_model["patch_size"]), + augment=dl4mic_model["Use_Data_augmentation"], + ) + + dl4mic_model.gleen_data(Xdata) + + shape_of_Xdata = Xdata.shape + + threshold = dl4mic_model["threshold"] + image_patches = dl4mic_model["image_patches"] + + X = Xdata[threshold:] + X_val = Xdata[:threshold] + + print(shape_of_Xdata[0], "patches created.") + print( + dl4mic_model["threshold"], + "patch 
images for validation (", + dl4mic_model["percentage_validation"], + "%).", + ) + print(image_patches - threshold, "patch images for training.") + + config = N2VConfig( + X, + unet_kern_size=3, + train_steps_per_epoch=dl4mic_model["number_of_steps"], + train_epochs=dl4mic_model["number_of_epochs"], + train_loss=dl4mic_model["loss_function"], + batch_norm=True, + train_batch_size=dl4mic_model["batch_size"], + n2v_perc_pix=0.198, + n2v_manipulator="uniform_withCP", + n2v_neighborhood_radius=5, + train_learning_rate=dl4mic_model["initial_learning_rate"], + ) + + model = N2V( + config=config, + name=dl4mic_model["model_name"], + basedir="tests", + ) + if dl4mic_model["Use_pretrained_model"]: + model.load_weights("weights_last.h5") + + print("Setup done.") + print(config) + dl4mic_model.check_model_params() + pdf = dl4mic_model.pre_report(X_train=X, X_test=X_val, show_image=False) + + # def test_check_quality(): + + # start = time.time() + + dl4mic_model["start"] = time.time() + + # TF1 Hack + import tensorflow.compat.v1 as tf + + tf.disable_v2_behavior() + tf.__version__ = 1.14 + + # model.load_weights("n2v/weights_last.h5") + # history = model.train(X, X_val) + history = [0, 1, 2] + print("Training done.") + # lossData_df = pd.DataFrame(history.history) + # dl4mic_model.save_model(model) + dl4mic_model.quality(history) + # dl4mic_model.quality() + pdf = dl4mic_model.post_report(show_image=False) + dl4mic_model.predict() + dl4mic_model.assess() + + +@pytest.mark.parametrize("model_config", model_configs) +def test_N2V_short(model_config): + import os + + os.environ["KERAS_BACKEND"] = "tensorflow" + + from n2v.internals.N2V_DataGenerator import N2V_DataGenerator + + dl4mic_model = models.N2V(model_config) + datagen = N2V_DataGenerator() + imgs = datagen.load_imgs_from_directory(directory=dl4mic_model["Training_source"]) + + Xdata = datagen.generate_patches_from_list( + imgs, + shape=(dl4mic_model["patch_size"], dl4mic_model["patch_size"]), + augment=dl4mic_model["Use_Data_augmentation"], + ) + + dl4mic_model.pre_training(Xdata) + + dl4mic_model["start"] = time.time() + + # TF1 Hack + import tensorflow.compat.v1 as tf + + tf.disable_v2_behavior() + tf.__version__ = 1.14 + + model = dl4mic_model.get_model() + threshold = dl4mic_model["threshold"] + + X = Xdata[threshold:] + X_val = Xdata[:threshold] + + history = model.train(X, X_val) + print("Training done.") + + pdf_post = dl4mic_model.post_report(history) + + +# def test_N2V_short(model_config): +@pytest.mark.parametrize("model_config", model_configs) +def test_N2V_very_short(model_config): + models.N2V(model_config).run() + +model_config_care = { + "model": None, + "X_train": None, + "X_test": None, + # "model_name": None, + # "model_path": None, + # "ref_str"=, + "Notebook_version": 1.12, + "initial_learning_rate": 0.0004, + "number_of_steps": 1, + "percentage_validation": 10, + # "image_patches"=, + "loss_function": "mse", + "batch_size": 128, + "patch_size": 64, + "Training_source": "tests/n2v/Training", + "pretrained_model_path": "tests/n2v/weights_last.h5", + "pretrained_model_name": "Model_name", + "number_of_epochs": 1, + "Use_Default_Advanced_Parameters": False, + "Use_Data_augmentation": False, + # "trained": False, + # "augmentation": False, + "pretrained_model": False, + "pretrained_model_choice": models.params.Pretrained_model_choice.MODEL_FROM_FILE, + "percentage_validation": 10, + "Use_pretrained_model": True, + "Use_the_current_trained_model": True, + "Source_QC_folder": None, + "Target_QC_folder": None, + 
"Prediction_model_folder": None, + "QC_model_name": "n2v", + "Data_folder": None, + "Data_type": models.params.Data_type.SINGLE_IMAGES, + "Prediction_model_name": None, + "Prediction_model_path": None, + } +@pytest.mark.parametrize("model_config_care", [model_config_care]) +def test_CARE_very_short(model_config_care): + models.CARE(model_config_care).run() + + + +# def n2v_get_model(dl4mic_model, Xdata): + +# ################ N2V ###################### + +# from n2v.models import N2VConfig, N2V +# from csbdeep.utils import plot_history +# from n2v.utils.n2v_utils import manipulate_val_data +# from n2v.internals.N2V_DataGenerator import N2V_DataGenerator +# from csbdeep.io import save_tiff_imagej_compatible + +# threshold = dl4mic_model["threshold"] +# image_patches = dl4mic_model["image_patches"] +# shape_of_Xdata = dl4mic_model["shape_of_Xdata"] + +# print(shape_of_Xdata[0], "patches created.") +# print( +# dl4mic_model["threshold"], +# "patch images for validation (", +# dl4mic_model["percentage_validation"], +# "%).", +# ) +# print(image_patches - threshold, "patch images for training.") + +# config = N2VConfig( +# dl4mic_model["X_train"], +# unet_kern_size=3, +# train_steps_per_epoch=dl4mic_model["number_of_steps"], +# train_epochs=dl4mic_model["number_of_epochs"], +# train_loss=dl4mic_model["loss_function"], +# batch_norm=True, +# train_batch_size=dl4mic_model["batch_size"], +# n2v_perc_pix=0.198, +# n2v_manipulator="uniform_withCP", +# n2v_neighborhood_radius=5, +# train_learning_rate=dl4mic_model["initial_learning_rate"], +# ) + +# model = N2V( +# config=config, +# name=dl4mic_model["model_name"], +# basedir="tests", +# ) + +# print("Setup done.") +# print(config) +# return model + +# # if dl4mic_model["Use_pretrained_model"]: +# # model.load_weights("weights_last.h5") + +# ############################################### + + +# # test_N2V() +# # %%