Dear TSC members,

 

Please find below the release plan for our upcoming release 0.5.0-beta, targeting the end of June. We've also documented the release management information at https://gitee.com/mindspore/mindspore/wikis/June%202020?sort_id=2339889 

Please also note that this is not the final release note, which will contain additional information on bug fixes and so forth.

 

Feel free to ask any questions or provide any feedback through the list!

 

Planned Features and Improvements for the 0.5.0-beta Release

MindSpore

Ascend 910 Training and Inference Framework

  • New models
    • MASS: a pre-training method for sequence-to-sequence language generation tasks, applied to text summarization and conversational response generation using the News Crawls 2007-2017 dataset, the Gigaword corpus, and the Cornell Movie Dialog corpus.
    • Transformer: a neural network architecture for language understanding on WMT 2014 English-German dataset.
    • DeepFM: a factorization-machine based neural network for CTR prediction on Criteo dataset.
    • Wide&Deep: jointly trained wide linear models and deep neural networks for recommender systems on Criteo dataset.
    • DeepLabV3: significantly improves over previous DeepLab versions without DenseCRF post-processing and attains performance comparable with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
    • InceptionV3: the third edition of Google's Inception Convolutional Neural Network on ImageNet 2012 dataset.
    • Faster-RCNN: towards real-time object detection with region proposal networks on COCO 2017 dataset.
    • SSD: a single shot multibox detector for detecting objects in images using a single deep neural network on COCO 2017 dataset.
    • GoogLeNet: a deep convolutional neural network architecture codenamed Inception V1 for classification and detection on CIFAR-10 dataset.
  • Frontend and User Interface
    • Complete NumPy advanced indexing: values can be read and assigned through tensor indices.
    • Some optimizers support separate parameter groups, and each group can set its own learning_rate and weight_decay (a short sketch follows this feature list).
    • Support setting each submodule's logging level independently, e.g. you can set the logging level of module A to warning and of module B to info.
    • Support compiling weights according to shape, to reduce the large memory overhead.
    • Add more operator implementations and syntax support in PyNative mode, to keep it consistent with graph mode.
  • Executor and Performance Optimization
    • Support evaluation during the training process, so that training accuracy can be obtained easily (a callback sketch follows this feature list).
    • Enable second-order optimization for ResNet-50, which can achieve 75.9% accuracy in 45 epochs (ResNet-50 on ImageNet).
    • Optimize the PyNative implementation and improve its execution performance.
    • Optimize summary record implementation and improve its performance.
  • Data processing, augmentation, and save format
    • Support simple text processing, such as tokenizer/buildvocab/lookup.
    • Support simple GNN dataset processing.
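
To illustrate the parameter-group feature mentioned above, here is a minimal sketch assuming the nn.Momentum group-parameter interface; the toy network and the hyperparameter values are illustrative only:

    import mindspore.nn as nn

    # A toy two-layer network (illustrative only).
    net = nn.SequentialCell([nn.Dense(20, 20), nn.ReLU(), nn.Dense(20, 10)])

    # Apply weight decay to the weights only; give the biases their own learning rate.
    weights = [p for p in net.trainable_params() if 'weight' in p.name]
    biases = [p for p in net.trainable_params() if 'bias' in p.name]
    group_params = [
        {'params': weights, 'lr': 0.01, 'weight_decay': 0.0001},
        {'params': biases, 'lr': 0.02},  # no weight decay for the biases
    ]
    opt = nn.Momentum(group_params, learning_rate=0.01, momentum=0.9)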
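
And a sketch of evaluation during training, done here with a custom callback; the Callback/RunContext interfaces are standard, while model and ds_eval are assumed to be an already compiled Model and an evaluation dataset:

    from mindspore.train.callback import Callback

    class EvalWhileTrain(Callback):
        """Hypothetical helper: run evaluation at the end of every epoch."""
        def __init__(self, model, ds_eval):
            super(EvalWhileTrain, self).__init__()
            self.model = model
            self.ds_eval = ds_eval

        def epoch_end(self, run_context):
            cb_params = run_context.original_args()
            metrics = self.model.eval(self.ds_eval, dataset_sink_mode=False)
            print("epoch %d: %s" % (cb_params.cur_epoch_num, metrics))

Passed via model.train(..., callbacks=[EvalWhileTrain(model, ds_eval)]), it prints the metric dictionary after each epoch.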

Other Hardware Support

  • GPU platform
    • New models supported: MobileNetV2, MobileNetV3.
    • Support quantization aware training.
    • Support mixed precision training (a sketch follows this list).
    • Support device memory swap in/out during training.
  • CPU platform
    • New model supported: LSTM.
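
A minimal mixed precision sketch for the GPU item above, assuming the Model(..., amp_level=...) interface; the network, loss function, and optimizer are placeholders:

    import mindspore.nn as nn
    from mindspore import Model

    # Placeholder network, loss, and optimizer (illustrative only).
    net = nn.SequentialCell([nn.Dense(20, 20), nn.ReLU(), nn.Dense(20, 10)])
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
    opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

    # amp_level="O2" casts the network to float16 while keeping batch norm in float32.
    model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'}, amp_level="O2")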

GraphEngine

  • Optimize AllReduce trailing parallelism: rebuild the calculation graph dependencies and adjust the calculation order to maximize the overlap of calculation and gradient aggregation communication; the gain is especially large for large gradient volumes and low-bandwidth/large-cluster scenarios.
  • Advance the constant folding, variable fusion, and conversion-operator optimization passes to the end of the graph preparation stage.
  • Modify the memory allocation algorithm to optimize GE memory allocation and reduce memory usage in multi-PCS training scenarios.
  • Support IR composition, model compilation, and inference execution in the same process.


MindArmour

  • Optimizers with differential privacy

    • Differential privacy model training now supports both PyNative mode and graph mode.

    • Graph mode is recommended for its performance.


MindInsight

  • MindSpore Profiler
    • Provide a performance analysis tool for the input data pipeline.
    • Provide a timeline analysis tool, which can show the details of streams/tasks.
    • Provide a tool to visualize the step trace information, which can be used to analyse the general performance of the neural network in each phase.
    • Provide profiling guides to help users find performance bottlenecks quickly.
  • Support summary operations for CPU summary data.
  • Support warnings when scalar values exceed a threshold in the scalar training dashboard.
  • Provide more user-friendly callback function for visualization
    • Provide a unified callback, SummaryCollector, to log the most commonly used visualization events (a usage sketch follows this list).
    • Discard the original visualization callbacks SummaryStepTrainLineage and EvalLineage.
    • SummaryRecord provides a new API, add_value, to collect data into a cache for summary persistence.
    • SummaryRecord provides a new API, set_mode, to distinguish the summary persistence mode at different stages.
  • MindConverter supports conversion of more operators and networks, and improves its ease of use.
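
A usage sketch for SummaryCollector, assuming an already compiled Model model and a training dataset ds_train; the directory and collection frequency are illustrative values:

    from mindspore.train.callback import SummaryCollector

    # Collect the common visualization events into ./summary_dir every 10 steps;
    # MindInsight can then be pointed at this directory.
    summary_collector = SummaryCollector(summary_dir='./summary_dir', collect_freq=10)
    model.train(10, ds_train, callbacks=[summary_collector])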

 

Thanks

Liucunwei

 
