Hi all,
Please see the report from SIG MindSpore Compiler - Auto Parallel
# MindSpore Compiler Special Interest Group (SIG)
This is the working repo for the Compiler special interest group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding ANF IR, auto differentiation, auto parallel, graph optimizer, VM and any other programs for high-level graph compilation in MindSpore. Feedback and contributions are welcome.
Auto Parallel: Auto Parallel is a novel approach to achieving automatic parallel training and inference, covering data parallel, model parallel and hybrid parallel. Auto Parallel is part of the MindSpore graph compiler and aims to improve the usability and performance of model training and inference.
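As a rough illustration of these modes, below is a minimal sketch using the mindspore.context API; the string mode names and the `device_num` argument reflect current MindSpore releases and should be read as assumptions that may differ across versions.

```python
# A minimal sketch, assuming the mindspore.context API; exact option
# names may differ between MindSpore versions.
from mindspore import context

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# Pure data parallel: every device holds a full model replica.
context.set_auto_parallel_context(parallel_mode="data_parallel")

# Auto parallel: the graph compiler searches per-operator sharding
# strategies (covering model parallel and hybrid parallel) with a cost model.
context.set_auto_parallel_context(parallel_mode="auto_parallel", device_num=8)
```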
# Current Progress
* Support auto parallel training for ResNet-50.
# Development Plan
We plan to support ReID, recommendation, and NLP models, such as ResNet, Wide&Deep, and BERT, in the next few months. At present, we support the ResNet used in ReID. Key features planned:
1. Support GPU/CPU backends. At present, only Ascend is supported.
2. Support auto parallel inference, using multiple devices to speed up inference and to run bigger models.
3. Improve the cost model to search for parallel strategies with better performance.
4. Develop more parallel operators.
# SIG Leads
* Ding Jian (Huawei)
# Logistics
* SIG leads will drive the meeting.
* Meeting announcements will be posted on our gitee channel: https://gitee.com/mindspore/community/tree/master/sigs/compiler
* Feedback and topic requests are welcome from all.
# Discussion
* Slack channel https://app.slack.com/client/TUKCY4QDR/C011RSWRN3S?cdn_fallback=2
* Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/compiler
# Meeting notes
Dear TSC members,
Please see the report from SIG MindSpore Data.
# MindSpore Data Special Interest Group (SIG)
This is the working repo for the Data special interest group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding dataset (data processing) and mindrecord (data format) in MindSpore. Feedback and contributions are welcome.
Data Processing: This can be understood as the Dataset pipeline, which is mainly responsible for reading the user's data into a Dataset, performing related data augmentation operations on it (such as resize, one-hot, rotate, shuffle, batch, ...), and finally providing the Dataset to the training process.
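To make the pipeline concrete, here is a minimal sketch assuming the mindspore.dataset GeneratorDataset API; the source function and shapes are illustrative only.

```python
# A minimal sketch of the Dataset pipeline described above; shapes and the
# data source are illustrative, and transform module paths vary by version.
import numpy as np
import mindspore.dataset as ds

def image_source():
    """Stand-in for the user's data: yields (image, label) pairs."""
    for i in range(20):
        yield (np.random.rand(32, 32, 3).astype(np.float32),
               np.array(i % 10, dtype=np.int32))

# Read the user's data into a Dataset ...
dataset = ds.GeneratorDataset(image_source, column_names=["image", "label"])
# ... apply pipeline operations such as shuffle and batch ...
dataset = dataset.shuffle(buffer_size=20)
dataset = dataset.batch(batch_size=4)
# ... and finally hand the Dataset to the training process.
for row in dataset.create_dict_iterator():
    pass  # feed row["image"], row["label"] to the model
```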
Data Format: It conveniently normalizes the user's training data into a unified format (MindRecord). The steps are as follows: the user defines the training data schema and calls the Python API to convert the training data into MindRecord data; the data is then read into a Dataset through MindDataset and provided to the training process.
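The conversion flow can be sketched as follows, assuming the mindspore.mindrecord FileWriter API; the schema fields and file name are illustrative.

```python
# A minimal sketch of the MindRecord conversion steps described above.
import numpy as np
import mindspore.dataset as ds
from mindspore.mindrecord import FileWriter

# 1. Define the training data schema.
schema = {"label": {"type": "int32"},
          "data": {"type": "bytes"}}
writer = FileWriter(file_name="train.mindrecord", shard_num=1)
writer.add_schema(schema, "example_schema")

# 2. Convert the training data to MindRecord through the Python API.
samples = [{"label": i,
            "data": np.random.rand(8).astype(np.float32).tobytes()}
           for i in range(10)]
writer.write_raw_data(samples)
writer.commit()

# 3. Read the MindRecord file back into a Dataset through MindDataset.
dataset = ds.MindDataset(dataset_file="train.mindrecord")
```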
# SIG Leads
Liu Cunwei (Huawei)
# Logistics
SIG leads will drive the meeting.
Meeting announcements will be posted on our gitee channel: https://gitee.com/mindspore/community/tree/master/sigs/data
Feedback and topic requests are welcome from all.
# Discussion
Slack channel https://app.slack.com/client/TUKCY4QDR/C010RPN6QNP?cdn_fallback=2
Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/data
# Meeting notes
Thursday, April 2, 2020
# Current Progress
* Support multi-process GeneratorDataset/PyFunc for high performance
* Support variable batch size
* Support new Dataset operators, such as filter, skip, take, and TextLineDataset (see the sketch below)
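Below is a minimal sketch exercising these features; num_parallel_workers, the callable batch_size, and the filter predicate follow the mindspore.dataset API but should be read as assumptions that may differ across versions.

```python
# A minimal sketch of parallel Python sources, variable batch size, and the
# new filter/skip/take operators; parameter names may vary across versions.
import numpy as np
import mindspore.dataset as ds

def source():
    for i in range(100):
        yield (np.array(i, dtype=np.int32),)

# Parallel workers speed up Python-side (GeneratorDataset/PyFunc) processing.
dataset = ds.GeneratorDataset(source, column_names=["col"],
                              num_parallel_workers=4)

# filter: keep only the even values.
dataset = dataset.filter(predicate=lambda col: col % 2 == 0)

# Variable batch size: a callable receives batch info and returns the size,
# so batch 0 holds 1 row, batch 1 holds 2 rows, and so on.
dataset = dataset.batch(batch_size=lambda info: info.get_batch_num() + 1)

# skip the first batch, then take the next three.
dataset = dataset.skip(1).take(3)
```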
Dear TSC members,
Please see the report from SIG Frontend.
# MindSpore FrontEnd Special Interest Group (SIG)
This is the working repo for the FrontEnd Special Interest Group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **basic elements**, **operators and layers**, **training interfaces**, **distributed training**, and any other frontend programs in MindSpore. Feedback and contributions are welcome.
1. **Basic Elements**: Basic data structure definitions, including Parameter, Tensor, Cell and so on.
2. **Operators and Layers**: Provide operators and functions, neural network layers, loss functions and optimizers.
3. **Training Interfaces**: Interfaces for model training, evaluating and predicting, including high-level wrapped APIs, checkpoint related APIs, callbacks and so on.
4. **Distributed Training**: Interfaces for data parallel, model parallel or auto parallel. Common communication operators are also included. (A minimal sketch tying these areas together follows this list.)
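The sketch below shows how the pieces fit together: a Cell as the basic element, a layer/loss/optimizer stack, and the high-level Model training interface with a checkpoint callback. Import paths follow the 2020-era releases and the shapes and hyperparameters are illustrative assumptions.

```python
# A minimal sketch tying the frontend pieces together; import paths and
# hyperparameters are illustrative and may differ across versions.
import mindspore.nn as nn
from mindspore.train.model import Model
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig

class Net(nn.Cell):
    """Basic element: a Cell wrapping a single dense layer."""
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Dense(32, 10)

    def construct(self, x):
        return self.fc(x)

net = Net()
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)   # loss function
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
model = Model(net, loss_fn=loss, optimizer=opt)         # training interface

# Checkpoint-related APIs: save parameters every 100 steps.
ckpt_cb = ModelCheckpoint(prefix="net",
                          config=CheckpointConfig(save_checkpoint_steps=100))
# model.train(epoch=1, train_dataset=dataset, callbacks=[ckpt_cb])
```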
# SIG Leads
* Deng Yiping (Huawei)
# Logistics
* SIG leads will drive the meeting.
* Meeting announcements will be posted on our gitee channel: https://gitee.com/mindspore/community/tree/master/sigs/frontend
* Feedback and topic requests are welcome from all.
# Discussion
* Slack channel: https://app.slack.com/client/TUKCY4QDR/C011B2DSC6B?cdn_fallback=2
* Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/frontend
# Meeting notes
# Current Progress
* Support for all Python comparison operators.
* Support for the math operators `**`, `//`, and `%`. Support for other Python operators such as `and`, `or`, `not`, `is`, `is not`, `in`, and `not in`.
* Support for gradients of functions with variable arguments.
* Support for tensor indexing assignment for certain indexing types.
* Support for dynamic learning rates (see the sketch below).
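A brief, hedged sketch of the operator support and the dynamic learning rate follows; exact graph-mode coverage of these operators is version-dependent.

```python
# A minimal sketch of the newly supported Python operators inside a Cell,
# plus a per-step (dynamic) learning rate; coverage varies by version.
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

class OpsNet(nn.Cell):
    def construct(self, x, y):
        p = x ** 2    # power
        q = x // y    # floor division
        r = x % y     # modulo
        c = x > y     # comparison (and/or/not, is, in are also supported)
        return p + q + r, c

net = OpsNet()
out, cmp_result = net(Tensor(np.array([4.0], np.float32)),
                      Tensor(np.array([3.0], np.float32)))

# Dynamic learning rate: a 1-D Tensor supplies one value per training step.
dynamic_lr = Tensor(np.linspace(0.1, 0.01, 100).astype(np.float32))
# e.g. nn.Momentum(net.trainable_params(), learning_rate=dynamic_lr,
#                  momentum=0.9)
```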
Best regards,
Zhenglei Fang
2020-04-28
Dear TSC members,
Please find below the release plan for our upcoming 0.2.0-alpha release. Starting with the next release, we will present the release plan to the TSC at the beginning of the month for extensive review.
Please also note that this is not the final release notes, which will contain additional information on bug fixes and so forth.
Feel free to ask any questions or provide any feedback through the list!
Planned Features and Improvements For 0.2.0-alpha release
Training and Inference Framework
・ New models
* SSD: Single Shot MultiBox Detector.
* MobileNetV2: Inverted Residuals and Linear Bottlenecks.
* ResNet101: Deep Residual Learning for Image Recognition.
・ Frontend and User Interface
* Support for all Python comparison operators.
* Support for the math operators `**`, `//`, and `%`. Support for other Python operators such as `and`, `or`, `not`, `is`, `is not`, `in`, and `not in`.
* Support for gradients of functions with variable arguments.
* Support for tensor indexing assignment for certain indexing types.
* Support for dynamic learning rates.
* User interface change log:
* DepthwiseConv2dNative, DepthwiseConv2dNativeBackpropFilter, DepthwiseConv2dNativeBackpropInput(!424<https://gitee.com/mindspore/mindspore/pulls/424>)
* ReLU6, ReLU6Grad(!224<https://gitee.com/mindspore/mindspore/pulls/224>)
* GeneratorDataset(!183<https://gitee.com/mindspore/mindspore/pulls/183>)
* VOCDataset(!477<https://gitee.com/mindspore/mindspore/pulls/477>)
* MindDataset, PKSampler(!514<https://gitee.com/mindspore/mindspore/pulls/514>)
* map(!506<https://gitee.com/mindspore/mindspore/pulls/506>)
* Conv(!226<https://gitee.com/mindspore/mindspore/pulls/226>)
* Adam(!253<https://gitee.com/mindspore/mindspore/pulls/253>)
* _set_fusion_strategy_by_idx, _set_fusion_strategy_by_size(!189<https://gitee.com/mindspore/mindspore/pulls/189>)
* CheckpointConfig(!122<https://gitee.com/mindspore/mindspore/pulls/122>)
* Constant(!54<https://gitee.com/mindspore/mindspore/pulls/54>)
・ Executor and Performance Optimization
* Support parallel execution of data prefetching and forward/backward computing (see the sketch after this list).
* Support parallel execution of gradient aggregation and forward/backward computing in distributed training scenarios.
* Support operator fusion optimization.
* Optimize the compilation process and improve performance.
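One user-visible switch related to the prefetch/compute overlap is the dataset sink mode on Model.train; the sketch below is an assumption about how the feature surfaces to users, not a description of the executor internals.

```python
# A minimal sketch: with dataset_sink_mode=True, data prefetching runs in
# parallel with forward/backward computation on device (backend-dependent).
import numpy as np
import mindspore.nn as nn
import mindspore.dataset as ds
from mindspore.train.model import Model

def source():
    for _ in range(32):
        yield (np.random.rand(16).astype(np.float32),
               np.array(0, dtype=np.int32))

dataset = ds.GeneratorDataset(source, column_names=["data", "label"]).batch(8)
net = nn.Dense(16, 2)
model = Model(net,
              nn.SoftmaxCrossEntropyWithLogits(sparse=True),
              nn.Momentum(net.trainable_params(), 0.01, 0.9))
model.train(epoch=1, train_dataset=dataset, dataset_sink_mode=True)
```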
・ Data processing, augmentation, and save format
* Support multi-process GeneratorDataset/PyFunc for high performance
* Support variable batch size
* Support new Dataset operators, such as filter, skip, take, and TextLineDataset
Other Hardware Support
* GPU platform
* Support device memory swap in/out during training process.
* Quantization aware training (including training and inference).
* Add GPU kernels for BERT.
* CPU platform
* Support for the Windows 10 OS.
GraphEngine
* Provide a common graph-level option mechanism that multiple requirements can share in the future.
* Improve graph compilation performance.
* Optimize memory allocation.
* Optimize several operators, e.g., Slice, StridedSlice, and ScatterMax.
MindArmour
* Add a white-box attack method: M-DI2-FGSM (PR14<https://gitee.com/mindspore/mindarmour/pulls/14>).
* Add three neuron coverage metrics: KMNCov, NBCov, SNACov (PR12<https://gitee.com/mindspore/mindarmour/pulls/12>).
* Add a coverage-guided fuzzing test framework for deep neural networks (PR13<https://gitee.com/mindspore/mindarmour/pulls/13>).
* Update the MNIST Lenet5 example.
* Remove some duplicate code.
MindInsight
* Parameter distribution graph (histogram). Now you can use HistogramSummary<https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.op…> and MindInsight to record and visualize the distribution of tensors (see the sketch below). See our tutorial<https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/visualization_t…> for details.
* Lineage supports custom information
* GPU support
* Support for linking model and dataset tracking
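As a rough illustration of the histogram feature, here is a minimal sketch using the HistogramSummary operator inside a network; note that actually writing summary files additionally requires a summary recorder (e.g. SummaryRecord) during training, and module paths vary across versions.

```python
# A minimal sketch of recording a weight histogram for MindInsight; a summary
# recorder must also be active during training for data to be written.
import mindspore.nn as nn
from mindspore.ops import operations as P

class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Dense(16, 2)
        self.histogram = P.HistogramSummary()

    def construct(self, x):
        # Record the weight distribution each step; MindInsight renders the
        # collected values as a histogram over time.
        self.histogram("fc.weight", self.fc.weight)
        return self.fc(x)
```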
Zhipeng (Howard) Huang
Principal Engineer - Intelligent Computing & IT Open Source Ecosystem Department
Huawei Technologies Co., Ltd.
Tel : +86 755 28780808 / 18576658966
Email : huangzhipeng@huawei.com