Details
- Type: Epic
- Status: In Progress
- Priority: Major
- Resolution: Unresolved
- Epic Name: Deep Learning DML Library
Description
This issue tracks the creation of a layers-based deep learning library in pure DML.
The library provides layers with simple forward (function evaluation) and backward (gradient computation) functions for affine, convolution (starting with 2D), max-pooling, nonlinearity (ReLU, sigmoid, softmax, etc.), dropout, loss, and other layers, along with optimizers and gradient checks.
Examples: see the example scripts and notebooks in the examples folder: https://github.com/apache/systemml/tree/master/scripts/nn/examples.
SystemML-NN: https://github.com/apache/systemml/tree/master/scripts/nn
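Every layer in the library follows the same contract: a `forward` function that evaluates the layer, and a `backward` function that, given the upstream gradient, returns gradients with respect to each input. The sketch below illustrates that contract for an affine layer in plain Python; it is an analogy only, and all names are illustrative rather than the library's actual DML API under `scripts/nn`.

```python
# Illustrative sketch of the forward/backward layer contract (the real
# library implements this convention in pure DML, not Python).

def matmul(A, B):
    """Naive matrix multiply over nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def affine_forward(X, W, b):
    # out = X W + b, with the bias row broadcast across examples.
    XW = matmul(X, W)
    return [[v + bj for v, bj in zip(row, b)] for row in XW]

def affine_backward(dout, X, W, b):
    # dX = dout W^T, dW = X^T dout, db = column sums of dout.
    Wt = [list(col) for col in zip(*W)]
    Xt = [list(col) for col in zip(*X)]
    dX = matmul(dout, Wt)
    dW = matmul(Xt, dout)
    db = [sum(col) for col in zip(*dout)]
    return dX, dW, db

X = [[1.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 1.0]
out = affine_forward(X, W, b)                      # [[2.0, 3.0]]
dX, dW, db = affine_backward([[1.0, 1.0]], X, W, b)
```

Because each layer exposes only these two functions, layers compose freely: a network's forward pass chains `forward` calls, and its backward pass chains `backward` calls in reverse.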
- Layers:
- Core:
- Affine
- Batch Normalization 1D
- Batch Normalization 2D ("Spatial Batch Normalization")
- Convolution 2D ("Spatial Convolution")
- LSTM
- Max Pooling 2D ("Spatial Max Pooling")
- RNN
- Nonlinearities:
- ReLU
- Sigmoid
- Softmax
- Softmax 2D
- Tanh
- Loss:
- Cross-entropy loss
- Cross-entropy loss 2D
- L1 loss
- L2 loss
- Log ("Logistic") loss
- Regularization:
- Dropout
- L1 reg
- L2 reg
- Optimizers:
- Adagrad
- Adam
- RMSprop
- SGD
- SGD w/ Momentum
- SGD w/ Nesterov Momentum
- Tests:
- Gradient Checks
- Unit Tests
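The gradient checks listed above validate each layer's `backward` function by comparing its analytic gradient against a numerical central-difference estimate of the `forward` function. A minimal Python sketch of the idea, using a scalar sigmoid for clarity (the library's actual checks operate on DML matrices in `nn/test`):

```python
# Illustrative gradient check: the analytic gradient from backward() should
# match (f(x + h) - f(x - h)) / (2h) up to a small relative error.
import math

def sigmoid_forward(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_backward(dout, x):
    s = sigmoid_forward(x)
    return dout * s * (1.0 - s)

def grad_check(x, h=1e-5, tol=1e-7):
    analytic = sigmoid_backward(1.0, x)
    numeric = (sigmoid_forward(x + h) - sigmoid_forward(x - h)) / (2.0 * h)
    rel_err = abs(analytic - numeric) / max(abs(analytic), abs(numeric))
    return rel_err < tol

ok = all(grad_check(x) for x in (-2.0, -0.5, 1.0, 3.0))
```

Central differences are preferred over one-sided differences because their truncation error shrinks as O(h^2) rather than O(h), so a much tighter tolerance can be enforced.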
Attachments
Issue Links
- is depended upon by
  - SYSTEMDS-1185 SystemML Breast Cancer Project (Resolved)
- is part of
  - SYSTEMDS-540 Deep Learning (In Progress)
- is related to
  - SYSTEMDS-409 Extended update in-place support (Open)
  - SYSTEMDS-669 Improve PyDML Language (Closed)
  - SYSTEMDS-1566 Possible regression from 0.13 -> 0.14 for MNIST LeNet script (Closed)
  - SYSTEMDS-1621 `max(0, X)` fails with type mismatch (Closed)
  - SYSTEMDS-1686 Transpose Conv2d has incorrect filter shape and incorrect input size argument (Closed)
  - SYSTEMDS-633 Improve Left-Indexing Performance with (Nested) Parfor Loops in UDFs (Closed)
  - SYSTEMDS-845 Compare Performance of LeNet Scripts With & Without Using SystemML-NN (Closed)
  - SYSTEMDS-1561 Improve constant folding during compilation (Closed)
  - SYSTEMDS-1000 Allow users to pass non-1 bias filler in conv_builtin.dml (Closed)
  - SYSTEMDS-1479 Make Caffe2DML feature-complete (Open)
- relates to
  - SYSTEMDS-716 Consumability of SystemML for Deep Learning (Open)
Issues in epic
| Key | Summary | Status | Assignee |
| SYSTEMDS-779 | Add Affine layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-780 | Add Convolution layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-781 | Add Max Pooling layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-782 | Add ReLU nonlinearity layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-783 | Add Sigmoid nonlinearity layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-784 | Add Softmax nonlinearity layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-785 | Add Tanh nonlinearity layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-786 | Add Cross-Entropy Loss layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-787 | Add L1 Loss layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-788 | Add L2 Loss layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-789 | Add Log Loss layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-790 | Add Dropout Regularization layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-791 | Add L1 Regularization layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-792 | Add L2 Regularization layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-793 | Add Adagrad Optimizer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-794 | Add RMSprop Optimizer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-795 | Add Adam Optimizer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-796 | Add SGD Optimizer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-797 | Add SGD w/ Momentum Optimizer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-798 | Add SGD w/ Nesterov Momentum Optimizer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-799 | Add Gradient Check Testing to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-800 | Add Builtin Convolution layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-801 | Add Builtin Max Pooling layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-802 | Add MNIST Softmax Classifier Example to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-803 | Add MNIST LeNet Example to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-807 | Add RNN layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-808 | Add LSTM layer to DML Deep Learning Library | Closed | Mike Dusenberry |
| SYSTEMDS-867 | Update the deep learning notebook examples to use the new Python MLContext API | Closed | Mike Dusenberry |
| SYSTEMDS-904 | Update the API calls in the deep learning notebook examples | Closed | Mike Dusenberry |
| SYSTEMDS-908 | Improve Test Suite | Closed | Mike Dusenberry |
| SYSTEMDS-1113 | Vectorize im2col | Open | Mike Dusenberry |
| SYSTEMDS-1114 | Vectorize pad_image | Open | Mike Dusenberry |
| SYSTEMDS-1115 | Vectorize Convolution | Open | Mike Dusenberry |
| SYSTEMDS-1383 | Performance testing of individual layer for common data shapes | Open | Unassigned |
| SYSTEMDS-1384 | Revisit the weight and bias of fully connected layer | Open | Unassigned |
| SYSTEMDS-1389 | Update API: Pass in all outputs from `forward` to `backward` for performance | Open | Mike Dusenberry |
| SYSTEMDS-1408 | Add padding parameters to max-pooling layers | Closed | Mike Dusenberry |
| SYSTEMDS-1409 | Add batch normalization layer | Closed | Mike Dusenberry |
| SYSTEMDS-1410 | Add spatial batch normalization layer | Closed | Mike Dusenberry |
| SYSTEMDS-1412 | Rename `nn/test/tests.dml` to `nn/test/run_tests.dml` | Closed | Mike Dusenberry |
| SYSTEMDS-1413 | Extract test-only utilities from `nn/util.dml` to new `nn/test/util.dml` | Resolved | Mike Dusenberry |
| SYSTEMDS-1414 | Rename `nn/layers/conv.dml` to `nn/layers/conv2d.dml` | Closed | Mike Dusenberry |
| SYSTEMDS-1415 | Rename `nn/layers/max_pool.dml` to `nn/layers/max_pool2d.dml` | Closed | Mike Dusenberry |
| SYSTEMDS-1416 | Rename `nn/layers/conv_builtin.dml` to `nn/layers/conv2d_builtin.dml` | Closed | Mike Dusenberry |
| SYSTEMDS-1417 | Rename `nn/layers/max_pool_builtin.dml` to `nn/layers/max_pool2d_builtin.dml` | Closed | Mike Dusenberry |
| SYSTEMDS-1432 | Extend `util::pad_image` with a `pad_value` parameter | Closed | Mike Dusenberry |
| SYSTEMDS-1450 | Update LSTM & RNN layers with `Tout` parameter | Closed | Mike Dusenberry |
| SYSTEMDS-1452 | General code cleanup | Closed | Mike Dusenberry |
| SYSTEMDS-1453 | Update Conv & Max Pooling layer names to include "2D" | Closed | Mike Dusenberry |
| SYSTEMDS-1460 | Add `epochs` parameter to `mnist_lenet::train(...)` function | Closed | Mike Dusenberry |
| SYSTEMDS-1463 | Rename `batch_norm.dml` and `spatial_batch_norm.dml` to `batch_norm1d.dml` and `batch_norm2d.dml` | Closed | Mike Dusenberry |
| SYSTEMDS-1468 | Add new 1D/2D "Scale & Shift" layers | Closed | Mike Dusenberry |
| SYSTEMDS-1469 | Add a new `conv2d_transpose` layer | Closed | Prithviraj Sen |
| SYSTEMDS-1516 | Improve output size calculation in conv2d & max_pool2d | Closed | Mike Dusenberry |
| SYSTEMDS-1524 | Graduate `nn` library from `scripts/staging/SystemML-NN/nn` to `scripts/nn` | Closed | Mike Dusenberry |
| SYSTEMDS-1563 | Add a distributed synchronous SGD MNIST LeNet example | Closed | Mike Dusenberry |
| SYSTEMDS-1564 | Add a Java test suite wrapper around `nn` DML test suite | Closed | Mike Dusenberry |
| SYSTEMDS-1674 | Add a new 2D depthwise convolution layer | Closed | Mike Dusenberry |
| SYSTEMDS-1675 | Add a new 2D depthwise transpose convolution layer | Closed | Mike Dusenberry |
| SYSTEMDS-1678 | Add new 1D top_k utility function | Closed | Fei Hu |
| SYSTEMDS-1679 | Add a new threshold utility function | Closed | Fei Hu |
| SYSTEMDS-1680 | Add a new max margin loss | Open | Mike Dusenberry |
| SYSTEMDS-1681 | Add a unit test for 2D convolution comparing against another system | In Progress | Mike Dusenberry |
| SYSTEMDS-1676 | Add a new 2D softmax layer | Closed | Fei Hu |
| SYSTEMDS-1677 | Add a new 2D cross-entropy layer | Closed | Fei Hu |
| SYSTEMDS-1736 | Add new 2D top_k utility function | Closed | Fei Hu |
| SYSTEMDS-1760 | Improve engine robustness of distributed SGD training | In Progress | Fei Hu |
| SYSTEMDS-1762 | Improve the matrix reshape function for the Spark mode | Closed | Matthias Boehm |
| SYSTEMDS-1774 | Improve Parfor parallelism for deep learning | Closed | Fei Hu |
| SYSTEMDS-1872 | Add an average pooling layer | Open | Unassigned |
| SYSTEMDS-1921 | Bug in recurrent layers when only returning final output | Closed | Mike Dusenberry |
| SYSTEMDS-1965 | Refactor nn layers to move the computation in forward/backward function known at compile time to init function | Closed | Matthias Boehm |