Dear TSC members,
Please find below the release plan for our upcoming 0.2.0-alpha release. Starting from the next release, we will present the release plan to the TSC at the beginning of the month for extensive review.
Please also note that this is not the final release note, which will contain additional information on bug fixes and so forth.
Feel free to ask any questions or provide feedback through the list!
Planned Features and Improvements for the 0.2.0-alpha Release
Training and Inference Framework
· New models
  - SSD: Single Shot MultiBox Detector.
  - MobileNetV2: Inverted Residuals and Linear Bottlenecks.
  - ResNet101: Deep Residual Learning for Image Recognition.
· Frontend and User Interface
  - Support for all Python comparison operators.
  - Support for the math operators **, //, and %, and for other Python operators such as and, or, not, is, is not, in, and not in (a short sketch follows this section).
  - Support for the gradients of functions with variable arguments.
  - Support for tensor indexing assignment for certain indexing types.
  - Support for dynamic learning rate.
  - User interface change log:
    - DepthwiseConv2dNative, DepthwiseConv2dNativeBackpropFilter, DepthwiseConv2dNativeBackpropInput (!424)
    - ReLU6, ReLU6Grad (!224)
    - GeneratorDataset (!183)
    - VOCDataset (!477)
    - MindDataset, PKSampler (!514)
    - map (!506)
    - Conv (!226)
    - Adam (!253)
    - _set_fusion_strategy_by_idx, _set_fusion_strategy_by_size (!189)
    - CheckpointConfig (!122)
    - Constant (!54)
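For illustration only, here is a minimal sketch of how the newly supported Python operators might be used inside a cell's construct method; the class and values below are hypothetical, and the exact behaviour in 0.2.0-alpha may differ:

    import numpy as np
    import mindspore.nn as nn
    from mindspore import Tensor

    class OperatorDemo(nn.Cell):
        """Toy cell exercising the newly supported Python operators on tensors."""
        def construct(self, x, y):
            power = x ** 2         # ** power operator
            quotient = x // y      # // floor division
            remainder = x % y      # % modulo
            greater = x > y        # comparison operators yield boolean tensors
            return power, quotient, remainder, greater

    net = OperatorDemo()
    x = Tensor(np.array([5.0, 8.0], np.float32))
    y = Tensor(np.array([2.0, 3.0], np.float32))
    print(net(x, y))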
· Executor and Performance Optimization
  - Support parallel execution of data prefetching and forward/backward computing.
  - Support parallel execution of gradient aggregation and forward/backward computing in distributed training scenarios.
  - Support operator fusion optimization.
  - Optimize the compilation process and improve performance.
· Data Processing, Augmentation, and Save Format
  - Support multi-process execution of GeneratorDataset/PyFunc for higher performance.
  - Support variable batch size.
  - Support new dataset operators such as filter, skip, take, and TextLineDataset (a short sketch follows this section).
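As an illustration only, a minimal sketch of how the new dataset operators might be chained; the generator and column name are hypothetical, and the exact 0.2.0-alpha signatures may differ:

    import numpy as np
    import mindspore.dataset as ds

    def number_generator():
        # Hypothetical Python generator feeding GeneratorDataset.
        for i in range(10):
            yield (np.array([i], dtype=np.int32),)

    data = ds.GeneratorDataset(number_generator, column_names=["number"])
    data = data.skip(2)     # drop the first two rows
    data = data.take(6)     # keep only the next six rows
    data = data.batch(3)    # group the remaining rows into batches of three
    for row in data.create_dict_iterator():
        print(row["number"])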
Other Hardware Support
- Support device memory swap in/out during the training process.
- Quantization-aware training (including training and inference).
- Add GPU kernels for BERT.
- Support for Windows 10.
GraphEngine
- Provide a common graph-level option mechanism that multiple requirements can also share in the future.
- Improve graph compilation performance.
- Optimize memory allocation.
- Optimize several operators, e.g., Slice, StridedSlice, and ScatterMax.
MindArmour
- Add a white-box attack method: M-DI2-FGSM (PR14).
- Add three neuron coverage metrics: KMNCov, NBCov, SNACov (PR12).
- Add a coverage-guided fuzzing test framework for deep neural networks (PR13).
- Update the MNIST LeNet5 example.
- Remove some duplicate code.
MindInsight
- Parameter distribution graph (histogram): you can now use HistogramSummary together with MindInsight to record and visualize the distribution of tensors; see our tutorial for details (a short sketch follows below).
- Lineage supports custom information.
- GPU support.
- Model and dataset tracking linkage support.
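For illustration, a minimal sketch of tagging a tensor for histogram visualization; the network below is hypothetical, and a summary record/callback (not shown) is still needed during training to write the data that MindInsight displays:

    import mindspore.nn as nn
    from mindspore.ops import operations as P

    class NetWithHistogram(nn.Cell):
        # Toy network that tags an activation for distribution visualization.
        def __init__(self):
            super(NetWithHistogram, self).__init__()
            self.dense = nn.Dense(16, 4)
            self.histogram = P.HistogramSummary()

        def construct(self, x):
            out = self.dense(x)
            # The tag/value pair is collected while a summary record is active
            # and appears as a distribution (histogram) page in MindInsight.
            self.histogram("dense_output", out)
            return out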
Zhipeng (Howard) Huang
Principal Engineer - Intelligent Computing & IT Open Source Ecosystem Department
Huawei Technologies Co., Ltd.
Tel : +86 755 28780808 / 18576658966
Email : huangzhipeng@huawei.com
This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity
whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive
this e-mail in error, please notify the sender by phone or email immediately and delete it!