Please find below the release plan for our upcoming 0.6.0-beta release, targeting the end
of July. We've also documented the release-management information at
Please also note that this is not the final release note; the final version will contain
additional information on bug fixes and so forth.
Feel free to ask any questions or provide any feedback through the list!
Planned Features and Improvements for the 0.6.0-beta Release
Ascend 910 Training and Inference Framework
MaskRCNN: a simple and flexible deep neural network for object instance segmentation on
the COCO 2014 dataset.
TinyBERT: a smaller version of the base BERT model for natural language understanding,
using transformer distillation and a two-stage learning framework.
Frontend and user interface
Support user-side rendering of operator compilation and graph execution errors.
Unify the definition of dynamic learning rate behavior in optimizers.
Support mixing parameter server (ps) and allreduce strategies in optimizers.
Support IndexSlice in sparse expressions.
Support mixed precision in PyNative mode.
Support dynamically constructing the backward graph during forward execution.
Support calling the parent class's construct method within construct.
Support saving checkpoint files asynchronously.
Support implicit type conversion in PyNative mode.
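To illustrate the unified dynamic learning rate behavior, here is a minimal, framework-agnostic sketch: a schedule is precomputed as one learning-rate value per training step, which an optimizer can then index. The helper name `exponential_decay_lr` is illustrative, not the actual API:

```python
def exponential_decay_lr(base_lr, decay_rate, total_steps, decay_steps):
    """Precompute a per-step learning-rate list, decaying every `decay_steps` steps."""
    return [base_lr * decay_rate ** (step // decay_steps) for step in range(total_steps)]

lrs = exponential_decay_lr(base_lr=0.1, decay_rate=0.5, total_steps=6, decay_steps=2)
print(lrs)  # [0.1, 0.1, 0.05, 0.05, 0.025, 0.025]
```

Precomputing the whole schedule up front is what lets every optimizer treat "dynamic learning rate" uniformly: the optimizer only ever sees the value for the current step.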
Executor and performance optimization
Decouple C++ and Python to make the architecture more extensible.
Parameter Server for distributed deep learning is supported, and is verified on Wide&Deep.
Quantization training of YoloV3 on Ascend-910 is supported, and quantized inference on
Ascend-310 is also supported.
Serving: a flexible service deployment framework for deep learning models.
Memory reuse is enhanced, increasing the batch size of the BERT-Large model from 96 to
160 on a single server.
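The quantization training mentioned above typically relies on "fake quantization": simulating low-precision arithmetic in the forward pass so the model learns to tolerate it. A minimal sketch of symmetric 8-bit fake quantization (function and names are illustrative, not the actual API):

```python
def fake_quantize(x, num_bits=8):
    """Symmetric fake quantization: round onto an integer grid, then dequantize.

    Assumes `x` contains at least one nonzero value (scale would be zero otherwise).
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max(abs(v) for v in x) / qmax   # map the largest magnitude to qmax
    q = [round(v / scale) for v in x]       # quantize to integers
    return [qi * scale for qi in q]         # dequantize back to float

approx = fake_quantize([0.0, 0.5, -1.27])
```

The round-trip keeps values close to the originals while restricting them to 255 representable levels, which is the behavior a quantized inference backend reproduces exactly.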
Data processing, augmentation, and save format
Support a single cache after data processing.
Support the MindRecord save operator after data processing.
Support automatic operator fusion, such as decode/resize/crop.
Support CSV dataset loading.
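As a rough idea of what CSV dataset loading does under the hood, the sketch below parses rows into typed records with the standard library; it is a generic illustration, not the dataset API itself, and the column names are made up:

```python
import csv
import io

# A tiny in-memory CSV standing in for a file on disk.
raw = "label,feature\n0,1.5\n1,2.5\n"

def load_csv(fp):
    """Yield one dict per CSV row, with numeric fields parsed."""
    for row in csv.DictReader(fp):
        yield {"label": int(row["label"]), "feature": float(row["feature"])}

rows = list(load_csv(io.StringIO(raw)))
print(rows)  # [{'label': 0, 'feature': 1.5}, {'label': 1, 'feature': 2.5}]
```

A real dataset loader adds batching, shuffling, and parallel reads on top of exactly this kind of row iterator.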
Other Hardware Support
New models supported: ShuffleNet, NASNet, RetinaFace + MobileNetV2.
Support hyperparameter search and data-augmentation AutoML on GPU.
Support ResNet50 automatic parallelism on the GPU backend.
GE supports function control operators such as If/Case/While/For.
In single-operator call scenarios, GE supports recording the correspondence between
operators and tasks for performance debugging.
GE supports a new solution for locating operator overflow.
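Conceptually, locating operator overflow means checking each operator's outputs for inf/NaN and reporting the first offender. A framework-neutral sketch (all names hypothetical):

```python
import math

def find_overflow(op_outputs):
    """Return the name of the first operator whose output contains inf or NaN."""
    for name, values in op_outputs:
        if any(math.isinf(v) or math.isnan(v) for v in values):
            return name
    return None

trace = [("conv1", [0.1, 2.0]), ("exp1", [float("inf")]), ("add1", [1.0])]
print(find_overflow(trace))  # exp1
```

Pinpointing the first overflowing operator rather than the final loss is what makes the diagnosis actionable.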
Differential privacy model training
Optimizers with differential privacy
Differential privacy model training now supports some new policies.
Adaptive Norm policy is supported.
Adaptive Noise policy with exponential decay is supported.
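The differentially private optimizer policies above combine gradient clipping with Gaussian noise whose multiplier decays over training. A minimal sketch under those assumptions (function names and the decay schedule are illustrative, not the actual API):

```python
import math
import random

def clip_by_norm(grad, bound):
    """Scale the gradient down so its L2 norm is at most `bound`."""
    norm = math.sqrt(sum(g * g for g in grad))
    factor = min(1.0, bound / norm) if norm > 0 else 1.0
    return [g * factor for g in grad]

def dp_noisy_grad(grad, bound, noise_mult, step, decay=0.99, rng=random):
    """Clip the gradient, then add Gaussian noise whose scale decays per step."""
    sigma = noise_mult * decay ** step * bound
    return [g + rng.gauss(0.0, sigma) for g in clip_by_norm(grad, bound)]
```

Clipping bounds each sample's influence (the sensitivity), and the exponentially decaying noise multiplier is one way to spend less privacy budget late in training, when gradients are small.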
Differential Privacy Training Monitor
A new monitor that uses zCDP as its asymptotic privacy budget estimator is supported.
Provide monitoring capabilities for the Ascend AI processor and other hardware
resources, including CPU, memory, and disk.
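zCDP is attractive for monitoring because it composes additively: each Gaussian-mechanism step with noise multiplier sigma and unit L2 sensitivity costs rho = 1/(2*sigma^2), and the costs simply add up. A sketch of such a monitor (illustrative; a full estimator would also convert the accumulated rho to an (epsilon, delta) guarantee):

```python
class ZCDPMonitor:
    """Track the cumulative zCDP cost of repeated Gaussian mechanisms."""

    def __init__(self):
        self.rho = 0.0

    def step(self, sigma):
        # One Gaussian-mechanism release with unit L2 sensitivity costs
        # rho = 1 / (2 * sigma^2); zCDP costs compose by addition.
        self.rho += 1.0 / (2.0 * sigma ** 2)
        return self.rho

monitor = ZCDPMonitor()
for _ in range(100):
    monitor.step(sigma=10.0)
print(monitor.rho)  # 100 steps at sigma=10 -> rho ≈ 0.5
```

Because the running total is a single scalar, the monitor can cheaply flag when training is about to exceed a preset privacy budget.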
Visualization of weights, gradients, and other tensor data in model training
Provide a tabular presentation of tensor data.
Provide histograms showing the distribution of tensor data and its change over time.
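The histogram view reduces each recorded tensor to per-bin counts before display; the underlying computation is straightforward (a generic sketch, not the visualization tool's code):

```python
def histogram(values, num_bins, lo, hi):
    """Count values into `num_bins` equal-width bins over [lo, hi)."""
    counts = [0] * num_bins
    width = (hi - lo) / num_bins
    for v in values:
        if lo <= v < hi:
            # Clamp to the last bin to guard against float rounding at the edge.
            counts[min(int((v - lo) / width), num_bins - 1)] += 1
    return counts

print(histogram([0.1, 0.2, 0.7, 0.9], num_bins=2, lo=0.0, hi=1.0))  # [2, 2]
```

Recording only bin counts per step keeps the log small while still showing how the distribution of weights or gradients drifts over training.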