## Development and deployment of a CNN component using EMADL2CPP

## Prerequisites
* Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
* Deep learning backend:
    * MXNet
        * training - the generated code is Python. Python 2.7 or higher is required, as well as the Python packages `h5py` and `mxnet` (for training on CPU) or e.g. `mxnet-cu75` for CUDA 7.5 (for training on GPU; select the concrete package according to your CUDA version). Follow the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=Python&processor=CPU); a minimal install sketch follows this list.
        * prediction - the generated code is C++. Install the MXNet C++ package following the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/).
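
A minimal install sketch for the training prerequisites, assuming a CPU-only setup (swap `mxnet` for the CUDA variant matching your installed CUDA version, e.g. `mxnet-cu75`):

```
# Python prerequisites for training with the MXNet backend (CPU variant)
pip install h5py mxnet
```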

### HowTo
1. Define an EMADL component containing the architecture of a neural network and save it in a `.emadl` file. For more information on the architecture language please refer to the [CNNArchLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNArchLang). An example NN architecture:
```
component VGG16{
    ports in Z(0:255)^{3, 224, 224} image,
         out Q(0:1)^{1000} predictions;

    implementation CNN {

        def conv(filter, channels){
            Convolution(kernel=(filter,filter), channels=channels) ->
            Relu()
        }
        def fc(){
            FullyConnected(units=4096) ->
            Relu() ->
            Dropout(p=0.5)
        }

        image ->
        conv(filter=3, channels=64, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=128, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=256, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        fc() ->
        fc() ->
        FullyConnected(units=1000) ->
        Softmax() ->
        predictions
    }
}
```
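
Note the special argument `->=2` in the layer calls: in CNNArchLang it repeats the called layer the given number of times, so `conv(filter=3, channels=64, ->=2)` expands to two consecutive `conv` blocks.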
2. Define a training configuration for this network and store it in a `.cnnt` file; the file name should match that of the corresponding architecture (e.g. `VGG16.emadl` and `VGG16.cnnt`). For more information on the training language please refer to the [CNNTrainLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNTrainLang). An example of a training configuration:
```
configuration VGG16{
    num_epoch:10
    batch_size:64
    normalize:true
    load_checkpoint:false
    optimizer:adam{
        learning_rate:0.01
        learning_rate_decay:0.8
        step_size:1000
    }
}
```
3. Generate GPL (general-purpose language) code for the specified deep learning backend using the jar package of the EMADL2CPP generator. The generator accepts the following command-line parameters:
    * `-m`    path to the directory with EMADL models
    * `-r`    name of the root model
    * `-o`    output path
    * `-b`    backend

    Assume both the architecture definition `VGG16.emadl` and the corresponding training configuration `VGG16.cnnt` are located in the folder `models`, and the target code should be generated into the `target` folder using the MXNet backend. An example command is then:  
    ```java -jar embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar -m models -r VGG16 -o target -b=MXNET```

4. Once the target code is generated, the corresponding trainer file (e.g. `CNNTrainer_<root_model_name>.py` in the case of MXNet) can be executed.
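
    For example, assuming the generator was invoked as above (root model `VGG16`, output folder `target`), training could be started like this (a sketch; the exact location of the trainer script may depend on the generator version):

    ```
    # Run the generated MXNet trainer for the VGG16 root model
    cd target
    python CNNTrainer_VGG16.py
    ```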

## Development and deployment of an application for TORCS

### Required software
1. Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
2. ROS, a Java runtime environment, GCC/Clang and Armadillo - install them using your Linux distribution's tools, e.g. apt on Ubuntu:

    ```apt-get install ros-base-dev clang openjdk-8-jre libarmadillo-dev```
3. TORCS (see the installation instructions below)
4. MXNet - install the C++ package using the official instructions on the [MXNet website](https://mxnet.incubator.apache.org/)

### TORCS Installation
1. Download the customized TORCS distribution from the [DeepDriving page](http://deepdriving.cs.princeton.edu/)
2. Unpack the downloaded archive and navigate to the `DeepDriving/torcs-1.3.6` directory
3. Compile and install by running `./configure --prefix=/opt/torcs && make -j && make install && make datainstall`
4. Remove the original TORCS tracks and copy the customized tracks:

    ```rm -rf /opt/torcs/share/games/torcs/tracks/* && cp -rf ../modified_tracks/* /opt/torcs/share/games/torcs/tracks/```
5. Start TORCS by running `/opt/torcs/bin/torcs`

Installation help can be found in the README file provided with the DeepDriving distribution.

### TORCS Setup
1. Run TORCS
2. Configure the race:
    * Select Race -> Quick Race -> Configure Race
    * Select one of the maps with the `chenyi-` prefix and click Accept
    * Remove all drivers from the Selected section on the left by selecting each driver and clicking (De)Select
    * Select the driver chenyi on the right side and add it by clicking (De)Select
    * Add other drivers with the `chenyi-` prefix if needed
    * Click Accept -> Accept -> New Race
3. Use the keys 1-9 and M to hide all widgets from the screen
4. Use the F2 key to switch between camera modes and select a mode in which the car or its parts are not visible
5. Use the PgUp/PgDown keys to switch between cars and select chenyi - the car that does not drive on its own