<!-- (c) https://github.com/MontiCore/monticore -->
![pipeline](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/CNNArch2MXNet/badges/master/build.svg)
![coverage](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/CNNArch2MXNet/badges/master/coverage.svg)

## How to export inner network layers, e.g. an attention matrix
In order to visualize attention from an attention network, the data of the attention layer has to be returned by the network. Two steps are necessary for that.
1.  The layer that should be exported has to be defined as a VariableSymbol. To do this, the keyword `layer` can be placed in front of the respective layer. The layer has to be defined before it is actually used in the network.
    For example, one could define an attention layer as `layer FullyConnected(units = 1, flatten=false) attention;` at the beginning of a network. By routing data through this layer, e.g. `input -> ... -> attention -> ...`, the data in this layer will be kept until the end of the network iteration.
2.  In order to make the network return the saved data, the network's name must be added to the `AllAttentionModels` class in this project. Furthermore, the layer must either be named `attention`, or the CNNNet.ftl template has to be adjusted to return differently named layers. A sketch of how the returned attention data can then be visualized is shown below.
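
Once both steps are in place, the exported attention data can be inspected on the Python side. The following is a minimal sketch, assuming the generated Gluon network returns the attention output as an additional MXNet NDArray next to the regular prediction; the variable names, return structure, and file name are illustrative assumptions, not part of the generated API.

```python
# Minimal sketch: visualize an exported attention matrix.
# Assumption: the generated network returns the attention data as an
# additional MXNet NDArray; names and return structure are illustrative.
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(attention_nd, out_file="attention.png"):
    """Render one attention matrix as a heat map and save it to disk."""
    matrix = np.squeeze(attention_nd.asnumpy())  # drop singleton batch/channel dims
    plt.imshow(matrix, cmap="viridis")
    plt.colorbar()
    plt.title("Attention weights")
    plt.savefig(out_file)
    plt.close()

# Hypothetical usage after a forward pass of the generated network:
# prediction, attention = net(input_batch)
# plot_attention(attention)
```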

## Using Custom Layers

* Create a directory with the following structure somewhere on your system:

  ```
  +-- custom_files
       +-- python
             +-- gluon
                  +-- custom_layers
  ```
* Add an `__init__.py` file to the `custom_layers` folder and include a method in it that enables the use of the `from custom_layers import *` statement (see the sketch after this list)
* Place your custom layer Python file inside the `custom_layers` folder (for the appropriate backend) and include the three required functions in its class as shown in the example (a sketch follows after this list)
* Use the custom layer inside the model under the same name as the file and the class it contains
* When you use the script or directly use the EMADL2CPP generator to generate code and train your model, add the `-cfp` command line argument followed by the path to the `custom_files` folder
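
For the `__init__.py` mentioned above, one common idiom is to list all layer modules in the package so that `from custom_layers import *` picks them up. This is only a sketch; the project's own example may use a different helper method.

```python
# custom_files/python/gluon/custom_layers/__init__.py
# Sketch: expose every layer module in this folder to
# `from custom_layers import *`. One common idiom; the project's
# example may use a different helper method.
import glob
import os

_modules = glob.glob(os.path.join(os.path.dirname(__file__), "*.py"))
__all__ = [
    os.path.basename(path)[:-3]
    for path in _modules
    if os.path.isfile(path) and not path.endswith("__init__.py")
]
```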
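The custom layer file itself is typically a standard Gluon block whose class name matches the file name. The sketch below only illustrates that shape with a placeholder layer; the exact three functions the generator expects should be copied from the example shipped with the project, as their names are not reproduced here.

```python
# custom_files/python/gluon/custom_layers/custom_softmax.py
# Sketch of a custom Gluon layer; file name, class name and behaviour are
# placeholders. The three functions required by the generator should be
# added exactly as shown in the project's example layer.
from mxnet.gluon import HybridBlock

class custom_softmax(HybridBlock):
    """Example layer: a plain softmax over the last axis."""

    def __init__(self, **kwargs):
        super(custom_softmax, self).__init__(**kwargs)

    def hybrid_forward(self, F, x):
        # F is mx.nd or mx.sym, depending on whether the block is hybridized.
        return F.softmax(x, axis=-1)
```

In the model, the layer would then be referenced under the same name (`custom_softmax`), and code generation and training are started with the `-cfp` argument pointing at the `custom_files` directory, as described above.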