Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 0.13.2
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
The following strategy is proposed. It should:
1. Implement in-core MLPs which can be 'plugged together' for backpropagation (this makes for easy extension into more complex networks).
2. Implement a common distributed MLP which maps out in-core MLPs and then averages their parameters.
3. Add regression and classifier wrappers around the base MLP to reduce code duplication.
4. Ideally, make the distributed and in-core neural networks a 'trait' for a consistent API across all future neural networks.
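A minimal sketch of steps 1 and 2, assuming nothing about the eventual Mahout API: small in-core MLP layers that chain forward and backward passes ('plugged together'), and a parameter-averaging function standing in for the distributed step. All names here (`Layer`, `InCoreMLP`, `average_params`) are illustrative, not real Mahout identifiers.

```python
import numpy as np

class Layer:
    """One fully connected layer with a tanh activation."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                            # cache input for backprop
        self.z = np.tanh(x @ self.W + self.b)
        return self.z

    def backward(self, grad, lr):
        grad = grad * (1.0 - self.z ** 2)     # tanh derivative
        grad_input = grad @ self.W.T          # gradient w.r.t. layer input (pre-update W)
        self.W -= lr * self.x.T @ grad
        self.b -= lr * grad.sum(axis=0)
        return grad_input

class InCoreMLP:
    """Layers 'plugged together': forward chains left-to-right, backward right-to-left."""
    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [Layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def train_step(self, x, y, lr=0.1):
        grad = self.forward(x) - y            # squared-error gradient
        for layer in reversed(self.layers):
            grad = layer.backward(grad, lr)

def average_params(mlps):
    """Distributed step (sketch): average the parameters of several in-core MLPs."""
    result = mlps[0]
    for i, _ in enumerate(result.layers):
        result.layers[i].W = np.mean([m.layers[i].W for m in mlps], axis=0)
        result.layers[i].b = np.mean([m.layers[i].b for m in mlps], axis=0)
    return result
```

In a real distributed setting each `InCoreMLP` would train on its own data partition before averaging; here the averaging is done in-process purely to show the shape of the design.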