Releases: EthanTreg/PyTorch-Network-Loader

v3.9.4

04 Nov 14:58

Additions

  • Added base layers and netloader.layers.utils to netloader.layers.
  • Added save_freq to BaseNetwork (see the sketch after this list).
  • Added keyword argument check to layers.AdaptivePool.
  • Added get_hyperparams to BaseNetwork.
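
A minimal sketch of the idea behind save_freq, periodic checkpointing, in plain PyTorch; the loop and file names below are illustrative only, not netloader's implementation, and how BaseNetwork exposes the option may differ.

```python
import torch
from torch import nn, optim

# Generic periodic-checkpointing pattern; save_freq in BaseNetwork presumably
# controls something similar, but this loop is an illustration, not netloader code.
model = nn.Linear(4, 2)
optimiser = optim.Adam(model.parameters())
save_freq = 10  # save every 10 epochs

for epoch in range(50):
    loss = model(torch.randn(8, 4)).square().mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

    if (epoch + 1) % save_freq == 0:
        torch.save(model.state_dict(), f'checkpoint_{epoch + 1}.pt')
```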

Changes

  • Changed Network to not convert to CPU when saving.
  • Changed some type hints.

Deprecations

  • Deprecated ArrayTC in favour of ArrayCT.

Fixes

  • Fixed support for some old netloader versions.
  • Fixed save corruption of BaseNetwork during keyboard interrupt (see the note after this list).
  • Fixed layers.Index slice JSON support.
  • Fixed keyword argument name clash in layers.Index.
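
The keyboard-interrupt fix addresses a common failure mode: an interrupt arriving mid-save leaves a truncated file. A standard guard, shown here as a general pattern rather than netloader's actual fix, is to write to a temporary file and atomically replace the target.

```python
import os

import torch
from torch import nn


def safe_save(state: dict, path: str) -> None:
    """Write to a temporary file first so an interrupt cannot corrupt `path`."""
    tmp_path = f'{path}.tmp'
    torch.save(state, tmp_path)
    os.replace(tmp_path, path)  # atomic rename, the old file survives an interrupt


safe_save(nn.Linear(4, 2).state_dict(), 'network.pt')
```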

v3.9.1

06 Oct 10:39

Choose a tag to compare

Additions

  • Added BaseSingleLayer.
  • Added Shapes class.
  • Added Data class for data & uncertainties.
  • Added DataList class for storing multiple arrays or tensors.
  • Added BaseNetwork loss dictionary support.
  • Added support to BaseNetwork for list of transforms.
  • Added DataList support to BaseNetwork.
  • Added BaseNetwork._loss_tensor.
  • Added Data support to BaseTransform.
  • Added ConvNeXt._head.
  • Added return_idxs to loader_init.
  • Added data_collation to collate lists of arrays, tensors, Data, and DataLists.
  • Added loss_func argument to Encoder and NormFlowEncoder.
  • Added GaussianNLLLoss (see the sketch after this list).
  • Added Pack layer.
  • Added layer and checkpoint option to Unpack.
  • Added support for DataList to several layers.
  • Added CompatibleNetwork.
  • Added Network.version.
  • Added len, getitem, and iter magic methods to Network.
  • Added label keyword argument to ascii_plot.
  • Added several types in types.py.
  • Added BatchNorm layer.
  • Added support to set total loss in BaseNetwork if loss is a dictionary.
  • Added support for DataList in Index layer.
  • Added option to only pack output from previous layer in Pack layer.
  • Added root keyword argument to Network for loading networks and composite layers.
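
Several of these additions (Data with uncertainties, the loss_func arguments, GaussianNLLLoss) revolve around targets with per-element uncertainty. netloader's GaussianNLLLoss presumably mirrors the PyTorch loss of the same name; the sketch below uses torch.nn.GaussianNLLLoss directly to show the prediction/target/variance signature such a loss expects.

```python
import torch
from torch import nn

# PyTorch's Gaussian negative log-likelihood loss takes (prediction, target, variance)
loss_func = nn.GaussianNLLLoss()

preds = torch.randn(16, 3, requires_grad=True)  # network predictions
targets = torch.randn(16, 3)                    # ground truth
variance = torch.rand(16, 3) + 1e-3             # predicted variance, must be positive

loss = loss_func(preds, targets, variance)
loss.backward()
```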

Changes

  • Changed layers to use Shapes class for layer shapes.
  • Changed most methods and functions to no longer accept keyword arguments positionally.
  • Changed BaseNetwork.predict to not return None values.
  • Changed load_net to accept keyword arguments for torch.load (see the sketch after this list).
  • Moved ConvNeXtBlock to netloader.layers.
  • Changed type hints to be more consistent.
  • Changed package versions.
  • Changed Index layer to use indexing or slicing.
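
Forwarding keyword arguments to torch.load exposes options such as map_location and weights_only when loading a saved network. The torch.load keywords below are standard PyTorch; the commented load_net call is only a guess at how the forwarding looks, since the exact signature is not shown in these notes.

```python
import torch

# Standard torch.load keywords that a forwarding load_net can now pass through
state = torch.load('network.pt', map_location='cpu', weights_only=True)

# Hypothetical netloader equivalent (import path and signature assumed):
# from netloader.utils import load_net
# net = load_net(..., map_location='cpu', weights_only=True)
```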

Removals

  • Removed UNSET.
  • Removed support for BaseLayer.layers of type list.

Deprecations

  • Deprecated BaseLayer.layers.
  • Deprecated BaseLayer.forward.
  • Deprecated BaseNetwork._loss_func.
  • Deprecated number and greater arguments in Index layer.

Fixes

  • Fixed ConvNeXt weight initialisation for biases that are None.
  • Fixed Network ignoring Network.layer_num.
  • Fixed Log uncertainty in-place operations.
  • Fixed type hint errors.
  • Fixed backwards compatibility with Python-3.11.
  • Fixed BaseDataset error message for incorrect attribute length.
  • Fixed Concatenate layer when dim=-1.
  • Fixed Index transform not detecting old state format.
  • Fixed Normalise transform error for offset or scale ndarray keyword arguments.

v3.8.0

20 Aug 13:10

Additions

  • Added ConvNeXt implemented using classes.
  • Added group attribute to BaseLayer.
  • Added GitHub reference to timm for DropPath.
  • Added extra_repr to DropPath.
  • Added netloader version to BaseNetwork save state.
  • Added BaseNetwork.losses to save multiple loss values.
  • Added print functions to BaseNetwork for custom print logic during training and predicting.
  • Added BaseNetwork._loss_func, which returns the loss Tensor before BaseNetwork._loss returns the float.
  • Added transforms.MultiTransform.extend to add multiple transforms.
  • Added ASCII plot for loss curve during training (see the sketch below).
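
The ASCII loss plot is useful for terminal-only training runs. The snippet below is a self-contained illustration of the idea (bucketing loss values into character rows), not netloader's ascii_plot implementation.

```python
def ascii_loss_curve(losses: list[float], height: int = 8) -> str:
    """Render loss values as a crude ASCII curve (illustration only)."""
    low, high = min(losses), max(losses)
    span = (high - low) or 1.0
    rows = [[' '] * len(losses) for _ in range(height)]

    for col, loss in enumerate(losses):
        row = int((high - loss) / span * (height - 1))
        rows[row][col] = '*'

    return '\n'.join(''.join(row) for row in rows)


print(ascii_loss_curve([1.0, 0.7, 0.5, 0.42, 0.36, 0.33, 0.31, 0.3]))
```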

Changes

  • Changed loader_init ratios argument to split the dataset after idxs.
  • Changed Network forward pass to use BaseLayer rather than config for information.
  • Changed Network to catch all exceptions during the forward pass.

Fixes

  • Fixed get_extra return type to Any.
  • Fixed type hints in loader_init.
  • Fixed DropPath not working if prob is 0.
  • Fixed NormFlowEncoder optimiser initialisation.
  • Fixed optimiser and scheduler kwargs not being saved.
  • Fixed transforms.Index weights_only safe saving.
  • Fixed transforms.Normalise uncertainty in-place operation.
  • Fixed non-netloader classes being added to PyTorch safe globals.
  • Fixed BaseDataset type hints.

v3.6.1

12 Jun 15:22

Additions

  • Added BaseDataset for creating datasets (see the sketch after this list).
  • Added get_device to BaseNetwork.
  • Added overwrite keyword argument to BaseNetwork to prevent unwanted file deletion.
  • Added get_epochs to BaseNetwork.
  • Added Jupyter Notebook package use example.
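
BaseDataset presumably builds on the standard torch.utils.data.Dataset contract; the sketch below shows that underlying pattern in plain PyTorch for orientation, and is not the BaseDataset API itself.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class RandomDataset(Dataset):
    """Plain PyTorch dataset; the contract BaseDataset presumably wraps."""

    def __init__(self, length: int = 128) -> None:
        self.inputs = torch.randn(length, 4)
        self.targets = torch.randn(length, 2)

    def __len__(self) -> int:
        return len(self.inputs)

    def __getitem__(self, idx: int) -> tuple[torch.Tensor, torch.Tensor]:
        return self.inputs[idx], self.targets[idx]


loader = DataLoader(RandomDataset(), batch_size=16, shuffle=True)
```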

Changes

  • Changed input_ keyword argument to inputs in BaseNetwork.predict.
  • Changed Network in_shape and out_shape arguments to accept tuples.
  • Changed package requirements.
  • Changed Index transform to allow negative in_shape dimension sizes for undefined lengths.

Fixes

  • Fixed docstrings and type hints.

v3.5.6

03 Apr 12:25

Additions

  • Added Index and Reshape transforms (see the sketch below).
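
The Index and Reshape transforms correspond to plain tensor indexing and reshaping; the operations below show the underlying behaviour in standard PyTorch, while how the transforms are configured in netloader is not specified in these notes.

```python
import torch

x = torch.randn(8, 3, 32)

# The tensor operations the two transforms correspond to
indexed = x[..., :16]             # keep the first 16 elements of the last dimension
reshaped = x.reshape(8, 3, 4, 8)  # reshape the last dimension from 32 to 4 x 8
```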

Fixes

  • Fixed backwards compatibility with old in_transform attribute.
  • Fixed Normalise from transforms.py type hint.

v3.5.3

24 Feb 09:37

Additions

  • Added inputs transform to BaseNetwork transforms attribute.
  • Added optional saving of inputs in BaseNetwork.predict().
  • Added additional representation information to BaseNetwork.
  • Added representation information to BaseLoss.
  • Added automatic file extension to saving of BaseNetwork.predict().
  • Added set_optimiser method to NormFlowEncoder.
  • Added optional kwargs for set_optimiser and set_scheduler in BaseNetwork (see the sketch below).
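
Accepting kwargs in set_optimiser and set_scheduler lets the default optimiser and scheduler be tuned without subclassing. The plain PyTorch objects such kwargs would ultimately configure are shown below; the commented netloader calls are assumptions about the interface, not documented signatures.

```python
from torch import nn, optim

model = nn.Linear(4, 2)

# The kind of objects the kwargs ultimately configure in plain PyTorch
optimiser = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimiser, factor=0.5, patience=5)

# Hypothetical netloader equivalents (signatures assumed):
# net.set_optimiser(lr=1e-3, weight_decay=1e-2)
# net.set_scheduler(factor=0.5, patience=5)
```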

Changes

  • Changed BaseNetwork header attribute to transforms.
  • Changed PyTorch & NumPy requirements to the latest versions.
  • Changed load_net num argument type signature to int | str.
  • Changed BaseTransform extra_repr to public.

Removals

  • Removed BaseNetwork in_transform attribute.
  • Removed optional learning rate, optimiser, & scheduler.

Fixes

  • Fixed saving of BaseNetwork idxs attribute.
  • Fixed Encoder classes attribute not being converted to device.
  • Fixed device problems with netloader_test.py.
  • Fixed BaseNetwork predict and uncertainties.
  • Fixed input data being saved as Tensor in BaseNetwork.predict.
  • Fixed some inconsistent method signatures.

v3.4.8

28 Jan 17:56

Additions

  • Added all network architectures, loss functions, transforms, and Network to PyTorch safe globals (see the sketch after this list).
  • Added set_optimiser and set_scheduler to BaseNetwork to set the default scheduler and optimiser.
  • Added weights_only optional argument to load_net.
  • Added weights-only-compatible loss functions.
  • Added flatten_target optional argument to Linear.
  • Added test decoder and encoder architectures.
  • Added optional offset and scale arguments to Normalise transform if data is not provided.
  • Added repr to BaseNetwork.
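
Registering classes as PyTorch safe globals is what makes weights_only loading work for checkpoints containing custom objects. The torch calls below are the real PyTorch mechanism (torch.serialization.add_safe_globals requires PyTorch 2.4 or newer); the class being registered is a placeholder, not one of netloader's.

```python
import torch


class MyLoss:
    """Placeholder for a custom class stored inside a checkpoint."""


# Allow-list the class so torch.load(..., weights_only=True) can unpickle it
torch.serialization.add_safe_globals([MyLoss])

torch.save({'loss': MyLoss()}, 'checkpoint.pt')
state = torch.load('checkpoint.pt', weights_only=True)
```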

Changes

  • Changed package versions to latest versions.
  • Changed network loading to be compatible with PyTorch weights only loading.
  • Changed load_net to use weights only by default.
  • Changed optimiser and scheduler saving/loading to save/load state dictionaries (see the sketch below).
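
Saving optimiser and scheduler state dictionaries, rather than pickling the objects themselves, is the pattern PyTorch recommends; a minimal netloader-independent version is shown below.

```python
import torch
from torch import nn, optim

model = nn.Linear(4, 2)
optimiser = optim.Adam(model.parameters(), lr=1e-3)
scheduler = optim.lr_scheduler.StepLR(optimiser, step_size=10)

# Save state dictionaries rather than the objects themselves
torch.save(
    {'optimiser': optimiser.state_dict(), 'scheduler': scheduler.state_dict()},
    'training_state.pt',
)

# Recreate the objects as usual, then load their states back in
state = torch.load('training_state.pt', weights_only=True)
optimiser.load_state_dict(state['optimiser'])
scheduler.load_state_dict(state['scheduler'])
```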

Fixes

  • Fixed optimiser loading not linking with loaded network.
  • Fixed list indexing and boolean operation problems with Shortcut layer.
  • Fixed incorrect loading of Network state dictionary.
  • Fixed improper file path creation.

v3.3.3

16 Jan 13:14

Additions

  • Added state saving to Network and BaseNetwork for reduced saving time.
  • Added loss and time information to batch training progress bar.
  • Added option to pass a dictionary instead of a config directory path to Network, Composite, and _create_network.
  • Added print kwargs to progress_bar.
  • Added support for Apple silicon MPS (see the sketch after this list).
  • Added minimum learning rate to default scheduler for BaseNetwork.
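
MPS support mostly comes down to device selection; the standard PyTorch check is shown below as a generic pattern, not the body of netloader's get_device.

```python
import torch

# Generic device selection covering CUDA, Apple silicon (MPS), and CPU
if torch.cuda.is_available():
    device = torch.device('cuda')
elif torch.backends.mps.is_available():
    device = torch.device('mps')
else:
    device = torch.device('cpu')
```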

Changes

  • Moved netloader.utils.transforms.py to netloader.transforms.py.
  • Changed BaseNetwork default device to CPU.
  • Changed layers attribute to config.

Fixes

  • Fixed kwargs not returning anything if device is cuda.

Deprecations (v3.5.0)

  • Deprecated netloader.utils.transforms.py.
  • Deprecated BaseLayer and BaseMultiLayer from netloader.layers.utils.py.

v3.2.5

17 Oct 15:42

Additions

  • Added representation to transforms.
  • Added verbose check for BaseNetwork.predict prediction time.

Changes

  • Changed MultiTransform to accept transforms as arguments rather than as a list.

Fixes

  • Fixed Encoder not moving classes attribute to device.
  • Fixed OrderedBottleneck not working if bottleneck dimension is smaller than min_size.

v3.2.4

30 Aug 12:25

Additions

  • Added ConvDepth for depthwise convolution (see the sketch after this list).
  • Added Activation for activation functions.
  • Added DropPath to drop samples.
  • Added LayerNorm for layer normalisation.
  • Added Scale for learnable scaling.
  • Added SplineFlow for neural spline flow.
  • Added optimiser and scheduler to BaseNetwork.
  • Added module logger.
  • Added ConvNeXt.
  • Added kl_loss to Autoencoder.
  • Added groups parameter to Conv layers.
  • Added JSON net parameters check.
  • Added names to sub-layers.
  • Added invertible data transforms.
  • Added support for epoch and loss schedulers.
  • Added latent saving to Autoencoder predict.
  • Added uncertainty propagation to transforms.
  • Added in_transform attribute to BaseNetwork.
  • Added progress to predict if verbose is full.
  • Added input saving to Autoencoder predict.
  • Added loss function attributes to Autoencoder and Decoder.
  • Added samples transformation to NormFlow.
  • Added transformation to max and meds in NormFlowEncoder.
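
ConvDepth and the new groups parameter both point at grouped and depthwise convolution. In plain PyTorch, a depthwise convolution is a Conv layer whose groups equal its input channels, as sketched below; this shows the underlying operation, not ConvDepth's interface.

```python
import torch
from torch import nn

channels = 16

# Depthwise convolution: one filter per input channel (groups == in_channels)
depthwise = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)

# Commonly paired with a 1x1 pointwise convolution to mix channels
pointwise = nn.Conv2d(channels, channels * 2, kernel_size=1)

x = torch.randn(4, channels, 32, 32)
out = pointwise(depthwise(x))  # shape: (4, 32, 32, 32)
```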

Changes

  • Changed BaseNetwork to accept any PyTorch Module.
  • Changed NormFlowEncoder to require networks with SplineFlow as last layer rather than two networks.
  • Changed Conv layers batch_norm to norm to support LayerNorm.
  • Changed default dropout to 0.
  • Changed Conv layers default padding to 0.
  • Changed inceptionv4.json to use checkpoints.
  • Changed setup.py to use the version number from __init__.py.
  • Changed BaseNetwork data transformation to accept a transformation per output.
  • Changed BaseNetwork header attribute to public.
  • Changed get_device arguments to use 4 workers and persistent workers.

Removals

  • Removed optimiser and scheduler from Network.
  • Removed second optimiser and scheduler from NormFlowEncoder.
  • Removed kl_loss_weight from Network.
  • Removed kernel size check.
  • Removed Pandas requirement.

Fixes

  • Fixed Encoder batch_predict squeezing dimension.
  • Fixed optimiser not changing device.
  • Fixed Composite defaults overwriting network defaults rather than merging.