This repository was archived by the owner on Jul 1, 2024. It is now read-only.

Commit 865d005

roywei and sandeepkrishnamurthy-dev authored and committed
Rebase to latest Keras April 20 2018 (#71)
- Improve tests by designating dtype of sample data (keras-team#9834)
- Document that `"same"` is inconsistent across backends with `strides` != 1 (keras-team#9629)
- Add kwarg and documentation for `dilation_rate` to SeparableConvs (keras-team#9642, keras-team#9844); fix a pep8 complaint about a missing whitespace after a comma
- fit/evaluate_generator supporting native tensors (keras-team#9816). Framework-native data tensors are already supported by `_fit_loop` and `_test_loop`; without this fix, `fit/evaluate_generator` do not support them. Signed-off-by: CUI Wei <ghostplant@qq.com>
- Add h5py to dependencies
- Fixed typo (keras-team#9866)
- Fix image_ocr.py example ValueError (keras-team#9869)
- Fixed the NASNet issue (keras-team#9865): NASNet doesn't require flatten; updated documentation accordingly
- Removed generate dropout ones from recurrent (keras-team#9892); fixed index issue
- Fix `in_test_phase` of CNTK and add its tests (keras-team#9902)
- Fix dtype designation for `variable` of CNTK and add its tests (keras-team#9903)
- Import `pydot`, improve error messages about `pydot` and GraphViz, bump to `pydot >= 1.2.4` (keras-team#9904)
  - REL: bump to `pydot >= 1.2.4` in `extras_require`
  - MAI: import pydot (as required in `extras_require`)
  - MAI: refine error messages for `pydot` and GraphViz, distinguishing between absence of `pydot` and failure to find the GraphViz executables in the $PATH
  - DEV: ignore `.pytest_cache`
- Fix documentation of flow_from_directory() (keras-team#9910). The way the documentation is parsed for the Keras website made some lines beginning with "Default:" look odd; also clarified that the return value is always a batch of images
- ModelCheckpoint: print previous best (keras-team#9911)
- multi_gpu_model supporting legacy/fullCPU/fullGPU (keras-team#9638). Signed-off-by: CUI Wei <ghostplant@qq.com>
- Fix `batch_dot` of Theano when `axes=0` (keras-team#9920)
- Fix `batch_dot` of CNTK when `axes=None` (keras-team#9921)
- Fix `batch_dot` of TensorFlow when `axes=None` (keras-team#9922)
- Fix stateful metrics when passing dict to compile (keras-team#9894)
- Added notes to manually install h5py where needed (keras-team#9830): added an FAQ entry on h5py, deleted a redundant remark, updated the FAQ to reflect the dependency change, fixed links in model.py, and linked to the FAQ
- Add support for `constants` in Bidirectional wrapper (keras-team#9260)
  - Add more tests for Bidirectional wrapper
  - Fix `compute_mask` to properly support `return_state` introduced in Bidirectional with keras-team#8977
  - Add test for Bidirectional with unknown timestamps; skip it for CNTK
  - Avoid overriding the input constant when the sequential axis must be broadcast on the RNN's constants
  - Move `_standardize_args` to recurrent, remove duplication
  - Fix for Bidirectional when multiple masks are passed
- Updated for TF 1.7 (keras-team#9937)
- Fix TimeSeriesGenerator glitch (keras-team#9899)
- Added an error message for undefined shape on NASNet (keras-team#9891); the message is shown only when loading imagenet weights
- Fix PEP8
- Allow shift_range to be 1-D array-like or int (keras-team#8869); add docstrings; clean up merge-conflict leftovers
- Exclude multi-gpu utils when reporting coverages (keras-team#9942)
- Make conv_invalid_use and pooling_invalid_use efficient (keras-team#9944)
- Chenta/cntk bn (keras-team#9952): fix CNTK static learning phase issue, add a test, add boolean support, fix code style
- Immigrate reference operations to a separate module (keras-team#9948)
- Add MXNet Backend (#59)
  - Adding MXNet backend template with all basic Variable and Tensor operations (#1): add activation functions, cross entropy; fix name scoping introduced in 2.0
  - Add dropout, l2_normalization, random_normal/uniform/binomial (#2): remove the logic for hacking RNN; add pooling with utils; lint and name scope fixes; fix access to a protected var; removed `__eq__` in KerasSymbol; fix eval function; unit tests for placeholder and variable
  - mxnet_backend graph fix, layer support (#3): skip test for the known issue of Keras function not working; fix random_uniform/constant; fix legacy randomize methods
  - Fix MXNet backend operator bugs, enable Keras backend tests, add bias
  - Add Amazon copyrights to License (#6)
  - Fix backend for MLP; fix context management; add optimizers; fix eval; fix AlphaDropout (unfinished); add mx model instantiation; modify training model construction logic; fix reshape layer; fix bias_add and Dense; remove pytest.skip in conv3d (though it failed with the Theano backend in my workspace)
  - Add conv2d and in_topk operator for mxnet backend (#11)
  - Skip BatchDot tests for Theano backend (#12)
  - BatchDot, basic Batchnorm, fix BiasAdd, fix Conv2D, code cleanup (#14): fix Conv2D shape issues and enable Conv2D UTs; remove redundant mxnet-only unit tests; add batch_dot; remove deconv and the buggy conv1d implementation; fix CR comments and lint issues
  - Move mxnet-specific code from the Keras engine to mxnet_backend (#15)
  - Move MXNet optimizers from keras optimizers to mxnet backend (#16)
  - Fix bug in reshape; minor rename to avoid local conflicts
  - Bug fixes and enable/skip all Keras tests for mxnet backend (#21): test results - 374 passed, 235 skipped in 114.44 seconds; fix/skip tests in tests/integration_tests, tests/keras/applications, tests/keras/engine/test_topology, tests/keras/engine/test_training, tests/keras/legacy/, tests/keras/preprocessing, tests/keras/utils/; fix issues in zero_padding and enable tests/layers/convolutional_test; add momentum to batchnorm; enable/skip tests in layers/core, local, merge, noise, normalization; skip RNN tests in recurrent_test and wrappers_test; fix bug in spatial padding; enable/skip tests in loss, optimizers, callback, loss_weighting, model_saving
  - Fix mxnet backend multi-gpu training (#31): fix bug for mxnet backend to use multiple GPUs
  - Fix performance issue in Batchnormalization and the Conv operator (#35): fix default axis for batchnorm layer for channels_first data_format; avoid kernel transpose in conv operation for channels_first format
  - Fix model load and save - architecture, weights, and both (#36)
  - Prepare initial version of mxnet-related documentation in Keras (#38): skip failing unit tests for unsupported functionality; use pytest module skip and revert kernel_shape logic; remove data_format param from the bias_add API; allow predict() without compile for mxnet backend and enable tests (contributor: roywei@); fix bug - mxnet backend should not override keras config data_format to channels_first, only warn of low performance
  - Conv3d() operator implementation for Keras 2.0 using MXNet backend (#40); keep the -n option in pytest.ini
  - Add Conv1D support for MXNet backend (#44)
  - Conv2d transpose (#47): add conv2d_transpose for both channel formats; enable test case in topology; add detailed comments and examples; fix style issues
  - Enable performance optimization for conv operators with MXNet backend; make MXNet the default backend on this branch (#48)
  - Fix conv kernel shape bug for TF backend (#50)
  - Add support for keras multi_gpu_model() API with MXNet backend (#49); auto-set GPU0 context on GPU machines
  - Add SAME padding mode support for pooling operator (#51)
  - Add rnn() operator for MXNet backend with unrolling and masking (#46): support unroll=True and masking, enable relevant test cases; enable categorical crossentropy test cases; add a detailed description of handling variable-length input in RNN; skip conv2d_transpose and conv3d_transpose test cases for MXNet backend
  - Adamax and Nadam optimizers for MXNet backend (#54): fix lr and adamax params
  - Add Conv3d transpose (#52): enable test case; update kernel shape; replace conv2d_transpose/conv3d_transpose with convnd_transpose; update ValueErrors with MXNet backend info; add check that conv3d transpose only supports GPU with cuDNN; disable conv3d transpose test
- Rebase to latest Keras - April 3, 2018
- Add build badges
- Fix multi_gpu API bug for CPU; fix PEP (#64)
- Fix embedding layer bug (#61): addressed comments, enabled more test cases, added a keras test, fixed style
- Benchmark (#55): add benchmark scripts for ResNet and ImageNet data; combine scripts; add benchmark scripts for synthetic data; add CIFAR-10 dataset and enable various ResNet layers; fix compile for mxnet multiple GPU; update callbacks; add MXNet training result table; update README and scripts (multi_gpu_model only supports TF)
- Benchmark scripts style fix (#66): remove unused import, fix overlong lines, addressed PR comments
- Added Keras util API for conversion of data tensors from channels_last to channels_first using the MXNet backend (#65): made the API more generic across backends; removed shape check; added edge cases; moved the helper method to be nested
- Added RNN benchmark scripts (#69): removed backend-specific code; automated the WikiText-2 download script; added dataset_util functionality for more flexible code
- Fixed the multi-gpu context (#68)
- Update benchmark result (#70): simplify folder structure, add image result and notes
- Rebase to latest Keras - April 20, 2018; fix bug and unit tests
- Added detailed RNN results (#73): modified table content and added CUDA version
- Fix keras examples (#72): fix autoencoder examples, update other examples, fix style, add CTC NotImplementedError
- Added detailed RNN results (#77): modified the RNN benchmark document, fixed a broken image link
- Added API to extract metrics from a test and added an epoch parameter (#78)
- Add mxnet backend tutorial documents (#76): add performance tips document; add docs from wiki; add initial multi-GPU doc and multi_gpu_model tutorial; simplify installation doc; fix benchmark doc typo; update install steps and warnings
- Support exporting a model as an MXNet model (sym, params) (#80): return data_names and data_shapes; add unit tests for the mxnet model save API, including one with an LSTM layer; add support for functional Model graphs in the save_mxnet_model API
- Add multi-GPU model example (#85)
- Add additional logging for CNN benchmarks (#89), including CNN synthetic benchmarks; fix log and file names
- Log RNN benchmark results (#90): make benchmark result logging available in RNN scripts; make log file names consistent across CNN and RNN benchmarks
- Fix pytest errors (#93)
- Cherry-pick 3 commits missing from keras-team/keras 2.1.6 into awslabs/keras-apache-mxnet (#96)
- Update multi_gpu API in benchmark scripts (#95): update logging, fix speed format, remove learning rate log
- Revamp keras-mxnet docs (#82): update main README and move mxnet_backend_docs under docs; revisit installation, multi_gpu_training, performance_guide, and RNN docs; add save_mxnet_model tutorial; fix broken links and update tutorial links in the mxnet_backend code; revamp the benchmark results README, adding library versions and summarizing detailed results; remove experimental RNN benchmarks from the README; addressed review comments
- Set latest stable dependency of h5py to avoid warnings
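One of the upstream fixes above documents that `"same"` padding is inconsistent across backends when `strides != 1` (keras-team#9629). As a sketch of the convention TensorFlow uses (an illustration, not code from this commit): `"same"` output length is `ceil(input_len / stride)` regardless of kernel size, while `"valid"` only counts windows that fit entirely inside the input.

```python
import math

def same_output_length(input_len, stride):
    # TensorFlow-style "same" padding: enough zeros are added so the output
    # has ceil(input_len / stride) positions, independent of kernel size.
    return math.ceil(input_len / stride)

def valid_output_length(input_len, kernel, stride):
    # "valid" padding: only windows that fit entirely inside the input.
    return (input_len - kernel) // stride + 1

print(same_output_length(7, 2))      # 4
print(valid_output_length(7, 3, 2))  # 3
```

Other backends may pad asymmetrically for even overhangs, which is the source of the documented inconsistency.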
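The channels_last to channels_first conversion util added in #65 amounts to transposing the data tensor's axes. A minimal NumPy sketch of the idea (not the keras-mxnet API itself; `to_channels_first` is an illustrative name):

```python
import numpy as np

def to_channels_first(batch):
    # NHWC -> NCHW: move the channel axis from last position to second.
    return np.transpose(batch, (0, 3, 1, 2))

x = np.zeros((32, 28, 28, 3))      # 32 images, 28x28, 3 channels (channels_last)
print(to_channels_first(x).shape)  # (32, 3, 28, 28)
```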
1 parent 77e2ab8 commit 865d005

File tree

78 files changed: +3,273 additions, −716 deletions

.coveragerc

Lines changed: 1 addition & 0 deletions
````diff
@@ -17,3 +17,4 @@ omit =
 keras/datasets/*
 keras/layers/cudnn_recurrent.py
 keras/legacy/*
+keras/utils/multi_gpu_utils.py
````

.gitignore

Lines changed: 1 addition & 0 deletions
````diff
@@ -16,6 +16,7 @@ examples/img/*
 # test-related
 .coverage
 .cache
+.pytest_cache

 # developer environments
 .idea
````

README.md

Lines changed: 20 additions & 12 deletions
````diff
@@ -8,9 +8,14 @@

 [![license](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/keras-team/keras/blob/master/LICENSE)

-## You have just found Keras.
+## You have just found Keras-MXNet
+
+Keras is a high-level neural networks API, written in Python and capable of running on top of [Apache MXNet (incubating)](https://github.com/apache/incubator-mxnet/), [TensorFlow](https://github.com/tensorflow/tensorflow), [CNTK](https://github.com/Microsoft/cntk), or [Theano](https://github.com/Theano/Theano). It was developed with a focus on enabling fast experimentation. *Being able to go from idea to result with the least possible delay is key to doing good research.*
+
+Keras-MXNet is the fork of [Keras project](https://github.com/keras-team/keras) and adds support for the high-performance, scalable deep learning library MXNet as a backend.
+
+Detailed documentation for the MXNet backend are provided in the [docs/mxnet_backend folder](docs/mxnet_backend/README.md).

-Keras is a high-level neural networks API, written in Python and capable of running on top of [TensorFlow](https://github.com/tensorflow/tensorflow), [CNTK](https://github.com/Microsoft/cntk), [Apache MXNet](https://github.com/apache/incubator-mxnet/), or [Theano](https://github.com/Theano/Theano). It was developed with a focus on enabling fast experimentation. *Being able to go from idea to result with the least possible delay is key to doing good research.*

 Use Keras if you need a deep learning library that:

@@ -107,20 +112,21 @@ For a more in-depth tutorial about Keras, you can check out:
 - [Getting started with the Sequential model](https://keras.io/getting-started/sequential-model-guide)
 - [Getting started with the functional API](https://keras.io/getting-started/functional-api-guide)

-In the [examples folder](https://github.com/keras-team/keras/tree/master/examples) of the repository, you will find more advanced models: question-answering with memory networks, text generation with stacked LSTMs, etc.
+In the [examples folder](https://github.com/awslabs/keras-apache-mxnet/tree/master/examples) of the repository, you will find more advanced models: question-answering with memory networks, text generation with stacked LSTMs, etc.


 ------------------


 ## Installation

-Before installing Keras, please install one of its backend engines: TensorFlow, Theano, or CNTK. We recommend the TensorFlow backend.
+Before installing Keras, please install one of its backend engines: MXNet, TensorFlow, Theano, or CNTK. We recommend
+the MXNet backend.

+- [MXNet installation instructions](http://mxnet.incubator.apache.org/install/index.html).
 - [TensorFlow installation instructions](https://www.tensorflow.org/install/).
 - [Theano installation instructions](http://deeplearning.net/software/theano/install.html#install).
 - [CNTK installation instructions](https://docs.microsoft.com/en-us/cognitive-toolkit/setup-cntk-on-your-machine).
-- [MXNet installation instructions](http://mxnet.incubator.apache.org/install/index.html).

 You may also consider installing the following **optional dependencies**:

@@ -133,24 +139,24 @@ Then, you can install Keras itself. There are two ways to install Keras:
 - **Install Keras from PyPI (recommended):**

 ```sh
-sudo pip install keras
+sudo pip install keras-mxnet
 ```

 If you are using a virtualenv, you may want to avoid using sudo:

 ```sh
-pip install keras
+pip install keras-mxnet
 ```

 - **Alternatively: install Keras from the GitHub source:**

 First, clone Keras using `git`:

 ```sh
-git clone https://github.com/keras-team/keras.git
+git clone https://github.com/awslabs/keras-apache-mxnet.git
 ```

-Then, `cd` to the Keras folder and run the install command:
+Then, `cd` to the keras-apache-mxnet folder and run the install command:
 ```sh
 cd keras
 sudo python setup.py install
@@ -159,16 +165,18 @@ sudo python setup.py install
 ------------------


-## Switching from TensorFlow to CNTK, MXNet or Theano
+## Switching from MXNet to TensorFlow, CNTK or Theano

-By default, Keras will use TensorFlow as its tensor manipulation library. [Follow these instructions](https://keras.io/backend/) to configure the Keras backend.
+By default, Keras-MXNet will use MXNet as its tensor manipulation library. [Follow these instructions](https://keras.io/backend/) to configure the Keras backend.

 ------------------


 ## Support

-You can ask questions and join the development discussion:
+You can ask Keras-MXNet specific questions or post **bug reports and feature requests** in [GitHub issues](https://github.com/awslabs/keras-apache-mxnet/issues).
+
+You can ask Keras questions and join the development discussion:

 - On the [Keras Google group](https://groups.google.com/forum/#!forum/keras-users).
 - On the [Keras Slack channel](https://kerasteam.slack.com). Use [this link](https://keras-slack-autojoin.herokuapp.com/) to request an invitation to the channel.
````
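The "Switching" section of the README above points at the Keras backend configuration. For reference, the backend is selected in `~/.keras/keras.json`; a minimal fragment might look like the following (standard Keras config keys; `channels_first` is only the data format the commit notes say MXNet performs best with, not a requirement):

```json
{
    "backend": "mxnet",
    "image_data_format": "channels_first",
    "floatx": "float32",
    "epsilon": 1e-07
}
```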

0 commit comments
