## Development and deployment of a CNN component using EMADL2CPP

## Prerequisites
* Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
* Deep learning backend:
    * MXNet
        * training - the generated code is Python. Python 2.7 or higher is required, along with the Python packages `h5py` and `mxnet` (for training on CPU) or a CUDA-specific build, e.g. `mxnet-cu75` for CUDA 7.5 (for training on GPU; choose the package matching your CUDA version). Follow the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=Python&processor=CPU)
        * prediction - the generated code is C++. Install the MXNet C++ package following the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/)
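
For the CPU training setup, the package installation might look as follows (a sketch assuming `pip` is available; the package names are those listed above):

```shell
# Assumed setup for CPU training with the MXNet backend.
# For GPU training, replace `mxnet` with the build matching your
# installed CUDA version, e.g. `mxnet-cu75` for CUDA 7.5.
pip install h5py mxnet
```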

### HowTo
1. Define an EMADL component containing the architecture of a neural network and save it in a `.emadl` file. For more information on the architecture language, please refer to the [CNNArchLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNArchLang). An example of a NN architecture:
```
component VGG16{
    ports in Z(0:255)^{3, 224, 224} image,
         out Q(0:1)^{1000} predictions;

    implementation CNN {

        def conv(filter, channels){
            Convolution(kernel=(filter,filter), channels=channels) ->
            Relu()
        }
        def fc(){
            FullyConnected(units=4096) ->
            Relu() ->
            Dropout(p=0.5)
        }

        image ->
        conv(filter=3, channels=64, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=128, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=256, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        fc() ->
        fc() ->
        FullyConnected(units=1000) ->
        Softmax() ->
        predictions
    }
}
```
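
As a plain-Python sanity check of the tensor shapes this architecture implies (an illustration only, not generator output; it assumes the 3x3 convolutions preserve spatial dimensions, i.e. "same" padding):

```python
# Trace the VGG16 feature-map shapes from the EMADL model above.
# The helper names mirror the conv()/fc() blocks in the model; the
# (channels, repeats) list follows the conv(..., ->=N) repetitions.

def conv(shape, channels):
    # 3x3 convolution, assumed "same" padding: H and W are unchanged.
    _, h, w = shape
    return (channels, h, w)

def pool(shape):
    # 2x2 max pooling with stride 2 halves H and W.
    c, h, w = shape
    return (c, h // 2, w // 2)

shape = (3, 224, 224)  # the `image` input port
for channels, repeats in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
    for _ in range(repeats):
        shape = conv(shape, channels)
    shape = pool(shape)

print(shape)  # (512, 7, 7) after the fifth pooling layer
flat = shape[0] * shape[1] * shape[2]
print(flat)   # 25088 values feeding the first fc() block
```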
2. Define a training configuration for this network and store it in a `.cnnt` file; the name of the file should match that of the corresponding architecture (e.g. `VGG16.emadl` and `VGG16.cnnt`). For more information on the training language, please refer to the [CNNTrainLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNTrainLang). An example of a training configuration:
```
configuration VGG16{
    num_epoch:10
    batch_size:64
    normalize:true
    load_checkpoint:false
    optimizer:adam{
        learning_rate:0.01
        learning_rate_decay:0.8
        step_size:1000
    }
}
```
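
The `step_size` and `learning_rate_decay` parameters describe a step-wise schedule: every `step_size` iterations the learning rate is multiplied by `learning_rate_decay`. A small sketch of this schedule (the exact way the backend applies it is an assumption; the values are those from the configuration above):

```python
# Step-decay learning rate schedule from the VGG16.cnnt example:
# learning_rate 0.01, learning_rate_decay 0.8, step_size 1000.

def learning_rate_at(step, base_lr=0.01, decay=0.8, step_size=1000):
    # After each full step_size interval, the rate is scaled by decay.
    return base_lr * decay ** (step // step_size)

print(round(learning_rate_at(0), 6))     # 0.01
print(round(learning_rate_at(1000), 6))  # 0.008
print(round(learning_rate_at(2500), 6))  # 0.0064
```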
3. Generate code in a general-purpose language for the specified deep learning backend using the jar package of the EMADL2CPP generator. The generator accepts the following command line parameters:
    * `-m`    path to the directory containing the EMADL models
    * `-r`    name of the root model
    * `-o`    output path
    * `-b`    backend to use

    Assume both the architecture definition `VGG16.emadl` and the corresponding training configuration `VGG16.cnnt` are located in a `models` folder, and that the target code should be generated into a `target` folder using the MXNet backend. An example command is then:  
    ```java -jar embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar -m models -r VGG16 -o target -b=MXNET```

4. Once the target code has been generated, the corresponding trainer file (e.g. `CNNTrainer_<root_model_name>.py` in the case of MXNet) can be executed.
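
    Continuing the VGG16 example, running the generated MXNet trainer might look like this (a sketch; the folder and file names follow the example above):

    ```shell
    # Run the trainer generated into the `target` folder for root model VGG16.
    cd target
    python CNNTrainer_VGG16.py
    ```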

## Development and deployment of an application for TORCS