1. [ How to develop and train a CNN component using EMADL2CPP](#nn)
2. [ How to build and run the app ](#app)

<a name="nn"></a>
# Development and training of a CNN component using EMADL2CPP

## Prerequisites
* Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
* Deep learning backend:
    * MXNet
        * training - the generated code is Python, so Python 2.7 or higher is required, together with the Python packages `h5py` and `mxnet` (for training on CPU) or e.g. `mxnet-cu75` for CUDA 7.5 (for training on GPU with CUDA; select the concrete package according to your CUDA version). Follow the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=Python&processor=CPU); a minimal installation sketch is shown below this list.
        * prediction - the generated code is C++. Install MXNet for C++ using the [official instructions on the MXNet site](https://mxnet.incubator.apache.org).
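
A minimal sketch of the Python-side setup for the MXNet backend (`mxnet-cu75` matches CUDA 7.5; adjust it to your CUDA version, or use plain `mxnet` for CPU-only training):

```
# CPU-only training
pip install h5py mxnet

# or, for GPU training with CUDA 7.5
pip install h5py mxnet-cu75
```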

### HowTo
1. Define an EMADL component containing the architecture of a neural network and save it in a `.emadl` file. For more information on the architecture language, please refer to the [CNNArchLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNArchLang). An example of an NN architecture:
```
component VGG16{
    ports in Z(0:255)^{3, 224, 224} image,
         out Q(0:1)^{1000} predictions;

    implementation CNN {

        def conv(filter, channels){
            Convolution(kernel=(filter,filter), channels=channels) ->
            Relu()
        }
        def fc(){
            FullyConnected(units=4096) ->
            Relu() ->
            Dropout(p=0.5)
        }

        image ->
        conv(filter=3, channels=64, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=128, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=256, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        fc() ->
        fc() ->
        FullyConnected(units=1000) ->
        Softmax() ->
        predictions
    }
}
```
2. Define a training configuration for this network and store it in a `.cnnt` file; the name of the file must match that of the corresponding architecture (e.g. `VGG16.emadl` and `VGG16.cnnt`). For more information on the training language, please refer to the [CNNTrainLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNTrainLang). An example of a training configuration:
```
configuration VGG16{
    num_epoch:10
    batch_size:64
    normalize:true
    load_checkpoint:false
    optimizer:adam{
        learning_rate:0.01
        learning_rate_decay:0.8
        step_size:1000
    }
}
```
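
Both files must sit side by side in the model directory so that the generator can pair them by name, e.g.:

```
models/
├── VGG16.emadl
└── VGG16.cnnt
```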
3. Generate GPL code for the specified deep learning backend using the jar package of the EMADL2CPP generator. The generator accepts the following command line parameters:
    * `-m`    path to directory with EMADL models
    * `-r`    name of the root model
    * `-o`    output path
    * `-b`    backend

    Assume both the architecture definition `VGG16.emadl` and the corresponding training configuration `VGG16.cnnt` are located in the folder `models`, and the target code should be generated into the `target` folder using the MXNet backend. An example command is then:  
    ```java -jar embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar -m models -r VGG16 -o target -b MXNET```
    
    You can find the EMADL2CPP jar [here](doc/embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar)

4. When the target code is generated, the corresponding trainer file (e.g. `CNNTrainer_<root_model_name>.py` in case of MXNet) can be executed.
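
    For the `VGG16` example above with the MXNet backend, this amounts to the following (a sketch, assuming the trainer was generated into the `target` folder as in step 3):

    ```
    cd target
    python CNNTrainer_VGG16.py
    ```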

<a name="app"></a>
# Building and running an application for TORCS

## Prerequisites
1. Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
2. ROS, a Java runtime environment, GCC/Clang and Armadillo - install them using your Linux distribution's package manager, e.g. apt on Ubuntu:

    ```apt-get install ros-base-dev clang openjdk-8-jre libarmadillo-dev```
3. MXNet - install using the [official instructions on the MXNet website](https://mxnet.incubator.apache.org/) for C++
4. TORCS (see below)

### TORCS Installation
1. Download the customized TORCS distribution from the [DeepDriving site](http://deepdriving.cs.princeton.edu/)
2. Unpack the downloaded archive and navigate to the `DeepDriving/torcs-1.3.6` directory
3. Compile and install by running `./configure --prefix=/opt/torcs && make -j && make install && make datainstall`
4. Remove the original TORCS tracks and copy the customized tracks:

    ```rm -rf /opt/torcs/share/games/torcs/tracks/* && cp -rf ../modified_tracks/* /opt/torcs/share/games/torcs/tracks/```
5. Start TORCS by running `/opt/torcs/bin/torcs`

Further installation help can be found in the README file provided with the DeepDriving distribution.
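
Putting the steps together, a possible installation session looks as follows (the archive name `DeepDriving.zip` is hypothetical; use the name of the file you actually downloaded):

```
# unpack the customized TORCS distribution and enter the TORCS sources
unzip DeepDriving.zip
cd DeepDriving/torcs-1.3.6

# compile and install to /opt/torcs
./configure --prefix=/opt/torcs && make -j && make install && make datainstall

# replace the original tracks with the customized ones
rm -rf /opt/torcs/share/games/torcs/tracks/*
cp -rf ../modified_tracks/* /opt/torcs/share/games/torcs/tracks/

# start TORCS
/opt/torcs/bin/torcs
```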

### TORCS Setup
1. Run TORCS
2. Configure race
    1. Select Race -> Quick Race -> Configure Race
    2. Select one of the maps with the `chenyi-` prefix and click Accept
    3. Remove all drivers from the Selected section on the left by selecting every driver and clicking (De)Select
    4. Select the driver `chenyi` on the right side and add it by clicking (De)Select
    5. Add other drivers with the `chenyi-` prefix if needed
    6. Click Accept -> Accept -> New Race
    
    Example of a drivers configuration screen:

    ![Drivers](doc/torcs_Drivers.png)
3. Use the keys `1-9` and `M` to hide all widgets from the screen
4. Use the `F2` key to switch between camera modes and select a mode in which the car or its parts are not visible
5. Use the `PgUp`/`PgDown` keys to switch between cars and select `chenyi` - the car that does not drive on its own

## Code generation and running the project

1. Download and unpack the [archive](doc/deep_driving_project.zip) that contains all EMA and EMADL components for the application
2. Run the `generate.sh` script. It will generate the code into the `target` folder, copy the handwritten part of the project (communication with TORCS via shared memory) as well as the weights of the trained CNN, and finally build the project
3. Start TORCS and configure the race as described above. Select a camera mode in which the host car is not visible
4. Go to the `target` folder and start the `run.sh` script. It will open three terminals: one for the ROS core, one for the TORCSComponent (the application part responsible for communication with TORCS) and one for the Mastercomponent (the application part generated from the models in step 2, which is responsible for the application logic)
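
A possible end-to-end session, assuming the archive was unpacked into a folder named `deep_driving_project` (the actual folder name may differ):

```
# generate code into target/, copy handwritten parts and CNN weights, build
cd deep_driving_project
./generate.sh

# start the ROS core, the TORCSComponent and the Mastercomponent
cd target
./run.sh
```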