Dear TSC Members,
This is a kind reminder for our monthly meeting tomorrow. Besides the usual
items on community progress, we will introduce 2 new SIGs and 2 new WGs :)
Respectively:
*SIG-MSLITE* for mobile and IoT support
*SIG-DPP* for deep probabilistic programming support
*WG-MM* for community collaboration on deep learning support in Molecular
Modeling
*WG-Research* for community collaboration on several research topics
For the US-Asia timezone-friendly meeting, please join
https://zoom.us/j/98865564258
and
for the EU-Asia timezone-friendly meeting, please join
https://zoom.us/j/99378539769.
The calendar invitation has been updated as well.
--
Zhipeng (Howard) Huang
Principal Engineer
OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service
Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C
Please find below the release plan for our upcoming 0.7.0-beta release, targeting the end of August. We've also documented the release-management information at https://gitee.com/mindspore/mindspore/wikis/August%202020?sort_id=2684223
Please also note that this is not the final release note, which will contain additional information on bug fixes and so forth.
Feel free to ask any questions or provide any feedback through the list!
Planned Features and Improvements for the 0.7.0-beta Release
MindSpore
Major Features and Improvements
Ascend 910 Training and Inference Framework
New models
Mask R-CNN: a simple and flexible deep neural network extended from Faster R-CNN for object instance segmentation on the COCO 2017 dataset.
TinyBert: a smaller and faster version of BERT using transformer distillation for natural language understanding on the GLUE benchmark.
SENet: a convolutional neural network using Squeeze-and-Excitation blocks to improve channel interdependencies for image classification on the ImageNet 2017 dataset.
Inception V3: the third version of the Inception convolutional architecture for image classification on the ImageNet 2012 dataset.
Hub: third-party model hosting, downloading and usage; interface definition specifications; benchmarking process and test of preset model on the device side.
Frontend and user interface
High-level wrapping of the embedding operator to support field-segmented embeddings for Wide&Deep.
Load multi-node checkpoints into a single process to support host-device hybrid inference.
Support Concat/Tile/StridedSlice distributed operators.
Support gradient accumulation and batch-split training.
Support variable parameter input for Cell objects.
Optimize mixed-type parameter calculation in PyNative mode.
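The gradient accumulation listed above can be sketched without any framework. This is a minimal plain-Python illustration (the linear model, loss, and names such as `accumulate_and_step` are assumptions for this sketch, not MindSpore APIs): summing gradients over equal-sized micro-batches and applying one averaged update reproduces the update of a single larger batch.

```python
# Sketch of gradient accumulation for a 1-parameter model y = w * x
# with mean squared error loss (illustrative only, not MindSpore code).

def grad(w, batch):
    # Gradient of mean squared error over one (micro-)batch of (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def accumulate_and_step(w, micro_batches, lr=0.1):
    acc = 0.0
    for batch in micro_batches:  # accumulate gradients, no update yet
        acc += grad(w, batch)
    # One optimizer step using the averaged accumulated gradient.
    return w - lr * acc / len(micro_batches)

big = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
micro = [big[:2], big[2:]]  # the same data split into two micro-batches
```

With equal-sized micro-batches, `accumulate_and_step(w, micro)` matches the update computed directly on `big`, which is why accumulation can emulate a larger effective batch on limited memory.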
Deep Probabilistic Programming
Support statistical distribution classes used to generate stochastic tensors.
Support probabilistic inference algorithms.
Support BNN layers used to construct Bayesian neural networks (BNNs).
Support interfaces for transformation between BNNs and DNNs.
Support uncertainty estimation for both epistemic and aleatoric uncertainty.
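As a conceptual sketch of the uncertainty estimation above (plain Python, not the MindSpore interface; the ensemble-based decomposition shown here is one common approach, and all names are illustrative): for an ensemble where each member predicts a mean and a noise variance, the spread of the means estimates epistemic (model) uncertainty, while the average predicted variance estimates aleatoric (data) uncertainty.

```python
# Illustrative decomposition of predictive uncertainty for an ensemble.
# Each ensemble member predicts (mean, variance) for the same input.

def decompose_uncertainty(predictions):
    """predictions: list of (mean, variance) pairs, one per ensemble member."""
    n = len(predictions)
    means = [m for m, _ in predictions]
    avg_mean = sum(means) / n
    # Variance of the means across members: disagreement between models.
    epistemic = sum((m - avg_mean) ** 2 for m in means) / n
    # Average predicted noise variance: irreducible data noise.
    aleatoric = sum(v for _, v in predictions) / n
    return epistemic, aleatoric

preds = [(1.0, 0.2), (1.2, 0.3), (0.8, 0.1)]
ep, al = decompose_uncertainty(preds)
```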
Executor and performance optimization
MindSpore graph compilation performance improved by 20%.
Decoupling C++ and Python modules to achieve separate compilation of core modules.
Serving module supports a RESTful interface.
Data processing, augmentation, and save format
Support automatic data augmentation.
Support GNN distributed cache on a single node.
Batch operator optimization.
Other Hardware Support
GPU platform
New models supported: VGG16, ResNet101, InceptionV3, NASNet, Transformer, TinyBert, YoloV3-DarkNet, MNASNet, EfficientNet-B0, ShuffleNet.
Support some distributed operators in ResNet50 and Wide&Deep.
Support automatic parallel for Wide&Deep.
Support functional constructs (such as switch-case).
Support distributed training with parameter server.
Performance optimization of the distributed training with allreduce.
Performance optimization of the mixed precision training.
Performance optimization of PyNative mode.
Performance optimization of the convolution and batch normalization operators.
Serving module supports GPU backend.
CPU platform
Support MobileNetV2 re-training: re-train the network with a different number of classes.
MindSpore Lite
Major Features and Improvements
Converter
Support third-party models, including TFLite/Caffe/ONNX.
Add 96 TFLite ops.
Add 18 Caffe ops.
Add support for Windows.
Add 11 optimization passes, including fusion and constant folding.
Support quantization-aware training and post-training quantization.
API
Add Java APIs.
Add pre/post data processing APIs.
CPU
Add 100+ ops supporting FP32, INT8/UINT8, and FP16.
Support fast convolution algorithms: sliding window, im2col + GEMM, Strassen, and Winograd.
Support assembly/NEON instructions.
Support CPU FP16 and SDOT on ARMv8.2+.
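Of the fast convolution algorithms listed above, im2col + GEMM is the easiest to sketch. Here is a minimal plain-Python illustration (single channel, stride 1, no padding; the function names are made up for this sketch): each k x k input patch is flattened into a row, which turns the convolution into a single matrix product.

```python
# im2col + GEMM convolution sketch: lower input patches into a matrix,
# then convolve via one matrix-vector product (single channel, stride 1).

def im2col(x, k):
    h, w = len(x), len(x[0])
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            # Flatten the k x k patch at (i, j) into one row.
            rows.append([x[i + di][j + dj] for di in range(k) for dj in range(k)])
    return rows

def conv_im2col(x, kernel):
    k = len(kernel)
    flat = [kernel[di][dj] for di in range(k) for dj in range(k)]
    # GEMM step: patch matrix times the flattened kernel.
    return [sum(a * b for a, b in zip(row, flat)) for row in im2col(x, k)]

x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]  # each output is x[i][j] + x[i+1][j+1]
```

The lowering trades extra memory for the ability to reuse a highly tuned GEMM kernel, which is why it is a standard CPU convolution strategy.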
GPU
Add 50+ ops for OpenCL.
Support Image2D/Buffer memory formats.
Optimize online initialization time.
Add optimized 1x1/3x3/depthwise/transposed convolution kernels for OpenCL.
NPU
Support Kirin NPU.
Tool & example
Add benchmark and TimeProfile tools.
Add image classification and object detection Android Demo.
GraphEngine
Major Features and Improvements
Conditional operators support separate allocation of 4 GB memory space;
In the zero-copy scenario, atomic_clean supports cleaning the memory of each output when the network has multiple outputs;
Support profiling of multiple levels of data in inference scenarios;
In online compilation scenarios, GE compilation time is optimized.
MindArmour
Major Features and Improvements
Privacy leakage evaluation.
Use membership inference to evaluate the privacy-preserving capability of AI models.
Fuzzing-based adversarial robustness testing.
Coverage-guided test set generation.
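The coverage-guided generation above can be illustrated with a toy sketch (plain Python, not the MindArmour API; the bucketed "activation" and all names here are stand-ins for a real model's neuron coverage): mutate seed inputs and keep only the mutants that reach an activation bucket not yet covered, so the retained test set steadily exercises more of the model.

```python
import random

# Toy coverage-guided test generation: a mutant is kept only if it
# triggers a coverage signal (here, a coarse activation bucket) that
# no previous test has triggered.

def toy_activation(x):
    # Stand-in for a model layer: map the input to one of 8 coarse bins.
    return int(x * 4) % 8

def coverage_guided(seeds, rounds=200, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    covered = {toy_activation(s) for s in seeds}
    tests = list(seeds)
    for _ in range(rounds):
        mutant = rng.choice(tests) + rng.uniform(-1.0, 1.0)
        bucket = toy_activation(mutant)
        if bucket not in covered:  # new coverage -> keep the mutant
            covered.add(bucket)
            tests.append(mutant)
    return tests, covered

tests, covered = coverage_guided([0.1])
```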
MindInsight
Major Features and Improvements
Support GPU operator profiling.
Web UI supports English.
The computation graph shows full_name in anf_ir.proto as the node name.
Provide a code wizard tool to quickly and easily generate classic network scripts.
Thanks
Lujiale