Dear TSC members,
Please find below the release plan for our upcoming release 0.3.0-alpha, targeting the end of May. We've also documented the release-management information at https://gitee.com/mindspore/mindspore/wikis/May%202020?sort_id=2196349
Please also note that this is not the final release note; the final version will contain additional information on bug fixes and so forth.
Feel free to ask any questions or provide any feedback through the list!
Planned Features and Improvements for the 0.3.0-alpha Release

MindSpore

Ascend 910 Training and Inference Framework
* New models
  * MASS: a pre-training method for sequence-to-sequence language generation tasks, applied to Text Summarization and Conversational Response Generation using the News Crawl 2007-2017 datasets, the Gigaword corpus and the Cornell Movie-Dialogs corpus.
  * Transformer: a neural network architecture for language understanding on the WMT 2014 English-German dataset.
  * DeepFM: a factorization-machine based neural network for CTR prediction on the Criteo dataset.
  * Wide&Deep: jointly trained wide linear models and deep neural networks for recommender systems on the Criteo dataset.
  * DeepLabV3: significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2007 semantic image segmentation benchmark.
  * InceptionV3: the third edition of Google's Inception convolutional neural network, on the ImageNet 2012 dataset.
  * Faster-RCNN: towards real-time object detection with region proposal networks, on the COCO 2017 dataset.
  * SSD: a single shot multibox detector for detecting objects in images using a single deep neural network, on the COCO 2017 dataset.
  * GoogLeNet: a deep convolutional neural network architecture codenamed Inception V1, for classification and detection on the CIFAR-10 dataset.
* Frontend and User Interface
  * Complete NumPy advanced indexing. Supports value retrieval and assignment through tensor indices (see the first sketch after this list).
  * Some optimizers support separate parameter groups; different groups can set different learning_rate and weight_decay values (see the second sketch after this list).
  * Support setting a submodule's logging level independently, e.g. set the logging level of module A to warning while module B stays at info.
  * Support compiling weights according to shape, to solve the problem of large memory overhead.
  * Add more operator implementations and grammar support in PyNative mode, for consistency with graph mode.
* Executor and Performance Optimization
  * Support evaluation during the training process, so that training accuracy can be obtained easily (see the callback sketch after this list).
  * Enable second-order optimization for ResNet-50, achieving 75.9% accuracy in 45 epochs (ResNet-50 @ ImageNet).
  * Optimize the PyNative implementation and improve its execution performance.
  * Optimize the summary record implementation and improve its performance.
* Data processing, augmentation, and save format
  * Support simple text processing, such as tokenizer/buildvocab/lookup.
  * Support simple GNN dataset processing.
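A minimal sketch of the tensor-index feature mentioned above, assuming PyNative mode; the shapes and index values are illustrative only:

    import numpy as np
    from mindspore import Tensor

    x = Tensor(np.arange(12).reshape(3, 4).astype(np.float32))
    idx = Tensor(np.array([0, 2], dtype=np.int32))

    y = x[idx]      # value retrieval through a tensor index (rows 0 and 2)
    x[idx] = 0.0    # assignment through a tensor index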
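A minimal sketch of separate parameter groups, assuming the grouped-parameter form of the optimizer constructor; the weight/bias split below is illustrative:

    import mindspore.nn as nn

    net = nn.Dense(16, 4)

    # Illustrative split: apply weight decay to weights but not to biases.
    weights = [p for p in net.trainable_params() if 'bias' not in p.name]
    biases = [p for p in net.trainable_params() if 'bias' in p.name]

    group_params = [
        {'params': weights, 'weight_decay': 0.01},  # this group uses weight decay
        {'params': biases, 'lr': 0.02},             # this group overrides the learning rate
    ]

    # Per-group settings override the defaults passed to the optimizer.
    opt = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9)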
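And a sketch of evaluation during training via a custom callback; EvalCallback is a hypothetical helper written for illustration, not a built-in:

    from mindspore.train.callback import Callback

    class EvalCallback(Callback):
        """Hypothetical callback that evaluates the model after every epoch."""

        def __init__(self, model, eval_dataset):
            super(EvalCallback, self).__init__()
            self.model = model
            self.eval_dataset = eval_dataset

        def epoch_end(self, run_context):
            cb_params = run_context.original_args()
            # model.eval returns the metrics dict configured on the Model.
            metrics = self.model.eval(self.eval_dataset, dataset_sink_mode=False)
            print("epoch %d: %s" % (cb_params.cur_epoch_num, metrics))

    # Usage: model.train(epochs, train_ds, callbacks=[EvalCallback(model, eval_ds)])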
Other Hardware Support

* GPU platform
  * New models supported: MobileNetV2, MobileNetV3.
  * Support quantization aware training.
  * Support mixed precision training (see the sketch after this list).
  * Support device memory swap in/out during training.
* CPU platform
  * New model supported: LSTM.
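A minimal sketch of enabling mixed precision through the high-level Model wrapper, assuming the amp_level argument; the network, loss, and optimizer below are placeholders:

    import mindspore.nn as nn
    from mindspore import Model

    net = nn.Dense(16, 4)                      # placeholder network
    loss = nn.SoftmaxCrossEntropyWithLogits()  # placeholder loss
    opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

    # amp_level="O2" casts most of the network to float16 while keeping
    # numerically sensitive pieces (e.g. batch norm) in float32.
    model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'}, amp_level="O2")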
GraphEngine

* Support dynamic batch sizes and shapes within a set of predefined levels.
* Open scope fusion interfaces, allowing user-defined scope fusion rules.
* Enhance maintainability and measurement capabilities.
* Generate a package of compiled libraries after compilation, to facilitate code deployment. (#I1D1QF: https://gitee.com/mindspore/dashboard/issues?id=I1D1QF)
MindArmour

Differential Privacy is coming! Using the Differential Privacy optimizers, one can train a model as usual, while the trained model preserves the privacy of the training dataset, satisfying the definition of differential privacy under a proper budget.
* Optimizers with Differential Privacy
  * Some common optimizers now have a differential privacy version (SGD/Adam-SGD); more are being added.
  * Automatically and adaptively add Gaussian noise during training to achieve differential privacy (a conceptual sketch follows this list).
  * Automatically stop training when the differential privacy budget is exceeded.
* Differential Privacy Accountant
  * Calculate the overall budget consumed during training, indicating the ultimate protection effect.
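To illustrate the mechanism (not MindArmour's actual API), here is a conceptual NumPy sketch of one DP-SGD step: clip each per-example gradient, sum, then add Gaussian noise calibrated to the clipping norm:

    import numpy as np

    def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0,
                    noise_multiplier=1.1):
        """One conceptual DP-SGD update (illustrative, not MindArmour code)."""
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        grad_sum = np.sum(clipped, axis=0)
        # Noise scale follows the clipping norm; more noise -> less budget spent.
        noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                                 size=grad_sum.shape)
        grad = (grad_sum + noise) / len(per_example_grads)
        return weights - lr * grad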
MindInsight
* Profiling
  * Provide easy-to-use APIs for profiling start/stop and profiling data analysis (on Ascend only; a usage sketch follows this list).
  * Provide AICore and AICPU operator performance display and analysis on the MindInsight UI.
* Large-scale network computation graph visualization.
* Optimize the summary record implementation and improve its performance.
* Improve lineage usability
  * Optimize lineage display and enrich table operations.
  * Decouple the lineage callback from SummaryRecord.
* Support scalar comparison across multiple runs.
* Script conversion from other frameworks: support automatically converting PyTorch scripts within TorchVision to MindSpore scripts.
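A minimal usage sketch of the profiling start/stop APIs, assuming a Profiler class with an analyse() method; the import path and output_path are assumptions for illustration:

    from mindspore.profiler import Profiler

    profiler = Profiler(output_path='./profiler_data')  # start collecting data

    # ... build the network and run model.train(...) here ...

    profiler.analyse()  # stop profiling and analyse the data for MindInsight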
黄之鹏 Zhipeng (Howard) Huang
Principal Engineer - Intelligent Computing & IT Open Source Ecosystem Department
Huawei Technologies Co., Ltd.
Tel: +86 755 28780808 / 18576658966
Email: huangzhipeng@huawei.com