Please also note that this is not the final release note; the final version will additionally cover bug fixes and other details.
Feel free to ask any questions or provide any feedback through the list!
Planned Features and Improvements for the 0.7.0-beta Release
MindSpore
Major Features and Improvements
Ascend 910 Training and Inference Framework
- New models
- Mask-RCNN: a simple and flexible deep neural network extended from Faster-RCNN for object instance segmentation on COCO 2017 dataset.
- TinyBert: a smaller and faster version of BERT using transformer distillation for natural language understanding on GLUE benchmark.
- SENet: a convolutional neural network using Squeeze-and-Excitation blocks to improve channel interdependencies, for image classification on ImageNet 2017 dataset.
- Inception V3: the third version of Inception convolutional architectures for image classification on ImageNet 2012 dataset.
- Hub: supports hosting, downloading, and using third-party models; defines interface specifications; adds a benchmarking process and device-side tests for preset models.
- Frontend and user interface
- High-level wrapping of the embedding operator to support field-segmented embeddings for Wide&Deep.
- Load multi-node checkpoints into a single process to support host-device hybrid inference.
- Support Concat/Tile/StridedSlice distributed operators.
- Support gradient accumulation and batch training split.
- Support variable parameter input for Cell objects.
- Optimized parameter mixed calculation in PyNative mode.
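To illustrate the gradient accumulation feature above, here is a minimal, framework-agnostic sketch (plain Python, not the MindSpore API): gradients from several micro-batches are averaged before a single parameter update, so a large batch can be trained in smaller splits.

```python
# Gradient accumulation sketch: sum gradients over micro-batches,
# then apply one optimizer step, emulating a larger batch size.

def grad(w, batch):
    """Gradient of mean squared error for the model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train_step(w, micro_batches, lr=0.1):
    """Accumulate gradients over micro-batches, then update once."""
    acc = 0.0
    for batch in micro_batches:
        acc += grad(w, batch)
    acc /= len(micro_batches)   # average over the split
    return w - lr * acc         # single optimizer step

# Splitting a batch into micro-batches yields the same update as
# processing it whole (for a mean-based loss with equal splits).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w_split = train_step(1.0, [data[:2], data[2:]])
w_whole = train_step(1.0, [data])
```

The equivalence only holds exactly when the micro-batches are the same size; frameworks typically accumulate raw gradient sums and normalize once.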
- Deep Probabilistic Programming
- Support statistical distribution classes used to generate stochastic tensors.
- Support probabilistic inference algorithms.
- Support BNN layers used to construct Bayesian neural networks (BNNs).
- Support interfaces for the transformation between BNN and DNN.
- Support uncertainty estimation to estimate epistemic uncertainty and aleatoric uncertainty.
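The core idea behind the BNN layers and uncertainty estimation listed above can be sketched in plain Python (this is illustrative only, not MindSpore's probability API): a Bayesian weight is a distribution rather than a point value, and repeated sampled forward passes give both a prediction and an epistemic-uncertainty estimate.

```python
import math
import random
import statistics

def bayesian_forward(x, mu, rho, rng):
    """One forward pass with a reparameterized Gaussian weight sample."""
    sigma = math.log1p(math.exp(rho))   # softplus keeps sigma positive
    eps = rng.gauss(0.0, 1.0)
    w = mu + sigma * eps                # w = mu + sigma * eps
    return w * x

def predict_with_uncertainty(x, mu, rho, n_samples=1000, seed=0):
    """Mean prediction and epistemic uncertainty from repeated sampling."""
    rng = random.Random(seed)
    outs = [bayesian_forward(x, mu, rho, rng) for _ in range(n_samples)]
    return statistics.mean(outs), statistics.stdev(outs)

mean, std = predict_with_uncertainty(2.0, mu=1.5, rho=-2.0)
```

Aleatoric uncertainty (noise inherent in the data) would additionally require modeling an output variance, which frameworks estimate with a dedicated head rather than weight sampling.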
- Executor and performance optimization
- MindSpore graph compilation process performance improved by 20%.
- Decoupled C++ and Python modules to achieve separate compilation of core modules.
- Serving module supports RESTful interfaces.
- Data processing, augmentation, and save format
- Support automatic data augmentation.
- Support GNN distributed cache on a single node.
- Batch operator optimization.
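As a rough sketch of how automatic data augmentation works (the policy and operations here are illustrative, not MindSpore's dataset API): a policy is a list of sub-policies, each a sequence of (operation, probability) pairs; per sample, one sub-policy is chosen and its operations are applied stochastically.

```python
import random

# Toy "augmentation" ops standing in for image transforms.
def flip(seq):
    return list(reversed(seq))

def shift(seq):
    return seq[1:] + seq[:1]

# A policy: each sub-policy is a list of (op, probability) pairs.
POLICY = [
    [(flip, 0.8), (shift, 0.5)],   # sub-policy 1
    [(shift, 1.0)],                # sub-policy 2
]

def auto_augment(sample, policy, rng):
    """Pick one sub-policy and apply its ops stochastically."""
    sub_policy = rng.choice(policy)
    for op, prob in sub_policy:
        if rng.random() < prob:
            sample = op(sample)
    return sample

rng = random.Random(42)
out = auto_augment([1, 2, 3, 4], POLICY, rng)
```

Real automatic augmentation (AutoAugment-style) searches for the policy itself; the runtime side is just this stochastic application per sample.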
Other Hardware Support
- GPU platform
- New models supported: VGG16, ResNet101, InceptionV3, NASNet, Transformer, TinyBert, YoloV3-DarkNet, MNASNet, EfficientNet-B0, ShuffleNet.
- Support some distributed operators in ResNet50 and Wide&Deep.
- Support automatic parallel for Wide&Deep.
- Support functional features (such as switch-case).
- Support distributed training with parameter server.
- Performance optimization of the distributed training with allreduce.
- Performance optimization of the mixed precision training.
- Performance optimization of the pynative mode.
- Performance optimization of the convolution and batch normalization operators.
- Serving module supports GPU backend.
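The mixed precision optimization above typically relies on loss scaling so small fp16 gradients do not flush to zero. The following is an illustrative sketch of the common dynamic loss-scaling logic (not MindSpore's implementation): on overflow the step is skipped and the scale is reduced; after a window of good steps the scale is raised again.

```python
class DynamicLossScaler:
    """Dynamic loss scale for mixed precision training (sketch)."""

    def __init__(self, scale=2.0**16, factor=2.0, window=2000):
        self.scale = scale      # loss is multiplied by this before backprop
        self.factor = factor
        self.window = window
        self.good_steps = 0

    def update(self, overflow):
        """Return True if the optimizer step should be applied."""
        if overflow:
            self.scale /= self.factor   # back off and skip this step
            self.good_steps = 0
            return False
        self.good_steps += 1
        if self.good_steps == self.window:
            self.scale *= self.factor   # grow cautiously after a clean run
            self.good_steps = 0
        return True

scaler = DynamicLossScaler(scale=8.0, window=3)
applied = [scaler.update(ovf) for ovf in (False, True, False, False, False)]
```

Gradients are divided by `scale` before the update, so the scaling is mathematically transparent when no overflow occurs.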
- CPU platform
- Support MobileNetV2 re-training: re-train the network with a different number of classes.
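The re-training workflow above amounts to reusing the pretrained backbone and re-initializing only the classifier head for the new class count. A minimal sketch (the parameter names and checkpoint layout here are hypothetical, not MobileNetV2's real ones):

```python
import random

def rebuild_head(checkpoint, feature_dim, num_classes, seed=0):
    """Keep backbone weights, re-initialize the classifier head."""
    rng = random.Random(seed)
    # Reuse every parameter except the old classification head.
    params = {k: v for k, v in checkpoint.items()
              if not k.startswith("head.")}
    # Fresh head sized for the new number of classes.
    params["head.weight"] = [
        [rng.gauss(0.0, 0.01) for _ in range(feature_dim)]
        for _ in range(num_classes)
    ]
    params["head.bias"] = [0.0] * num_classes
    return params

pretrained = {
    "backbone.conv1": [0.1, 0.2],
    "head.weight": [[1.0] * 4] * 1000,   # e.g. 1000 ImageNet classes
    "head.bias": [0.0] * 1000,
}
params = rebuild_head(pretrained, feature_dim=4, num_classes=10)
```

In practice the backbone is often frozen (or trained at a lower learning rate) while the new head is trained from scratch.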
MindSpore Lite
Major Features and Improvements
- Converter
- Support third-party models, including TFLite/Caffe/ONNX.
- Add 96 TFLite ops.
- Add 18 Caffe ops.
- Add support for Windows.
- Add 11 optimization passes, including fusion and constant folding.
- Support quantization-aware training and post-training quantization.
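To illustrate the post-training quantization listed above (a sketch of the standard affine scheme, not the converter's internals): float tensors are mapped to int8 using a scale and zero-point derived from the observed min/max range.

```python
def quantize(values, qmin=-128, qmax=127):
    """Affine-quantize floats to int8 using observed min/max."""
    lo = min(min(values), 0.0)   # range must include 0 so that
    hi = max(max(values), 0.0)   # zero is exactly representable
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point))
         for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to floats: (q - zp) * scale."""
    return [(qi - zero_point) * scale for qi in q]

q, scale, zp = quantize([-1.0, 0.0, 0.5, 2.0])
restored = dequantize(q, scale, zp)
```

Post-training quantization computes these ranges from calibration data, whereas quantization-aware training simulates the rounding during training so the network adapts to it.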
- API
- Add Java APIs.
- Add data pre- and post-processing APIs.