@vinhngx vinhngx commented Aug 16, 2019

Automatic Mixed Precision training on GPU for TensorFlow has been recently introduced:

https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540

Automatic mixed precision training uses both FP32 and FP16 precision, applying each where appropriate. FP16 operations can leverage the Tensor Cores on NVIDIA GPUs (Volta, Turing, or newer architectures) for higher throughput, and because FP16 tensors take half the memory, mixed precision training also often allows larger batch sizes.
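As a concrete illustration of why the loss-scaling step in automatic mixed precision matters (a NumPy sketch for exposition, not code from this PR): gradient values that underflow to zero in FP16 survive when the loss is scaled up before the FP16 cast and the gradients are divided back down in FP32.

```python
import numpy as np

# A gradient below FP16's smallest subnormal (~6e-8) underflows to zero.
grad = 1e-8
print(np.float16(grad))  # -> 0.0

# Loss scaling multiplies the loss (and hence every gradient) by a constant
# before the FP16 cast, keeping small gradients representable.
loss_scale = 1024.0
scaled = np.float16(grad * loss_scale)   # ~1.02e-5, representable in FP16

# The scale is divided back out in FP32 before the weight update.
recovered = np.float32(scaled) / loss_scale
print(recovered > 0.0)  # -> True
```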

This PR adds GPU automatic mixed precision training to tensorflow-wavenet, enabled by passing the flag --auto_mixed_precision=True:

python train.py --data_dir=/path/to/data/ --auto_mixed_precision=True
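For context, here is a minimal sketch of how such a flag could be wired into a TF 1.x training script. The flag name matches the PR, but the trimmed-down parser and helper below are illustrative assumptions, not the PR's actual implementation. NVIDIA GPU builds of TensorFlow 1.14+ honor the TF_ENABLE_AUTO_MIXED_PRECISION environment variable shown here.

```python
import argparse
import os

def str2bool(value):
    # Accept --auto_mixed_precision=True/False style values.
    return str(value).lower() in ("true", "1", "yes")

def get_arguments(argv=None):
    # Hypothetical, trimmed-down parser; the real train.py defines many more flags.
    parser = argparse.ArgumentParser(description="WaveNet training")
    parser.add_argument("--data_dir", type=str, default=None)
    parser.add_argument("--auto_mixed_precision", type=str2bool, default=False)
    return parser.parse_args(argv)

def configure_amp(enabled):
    # NVIDIA GPU builds of TF 1.14+ honor this environment variable and
    # rewrite the graph to run Tensor-Core-eligible ops in FP16 with
    # automatic loss scaling. An equivalent in-code API is
    # tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer).
    if enabled:
        os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"

args = get_arguments(["--auto_mixed_precision=True"])
configure_amp(args.auto_mixed_precision)
print(os.environ.get("TF_ENABLE_AUTO_MIXED_PRECISION"))  # -> 1
```

Either mechanism (the environment variable or the graph-rewrite API) leaves the model code unchanged; the rewrite happens at graph-optimization time.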

To learn more about mixed precision and how it works:

- Overview of Automatic Mixed Precision for Deep Learning
- NVIDIA Mixed Precision Training Documentation
- NVIDIA Deep Learning Performance Guide

@vinhngx vinhngx requested a review from ibab February 7, 2020 03:12
