1. [How to develop and train a CNN component using EMADL2CPP](#nn)
2. [How to build and run the app](#app)

<a name="nn"></a>
# Development and training of a CNN component using EMADL2CPP

## Prerequisites
* Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
* Deep learning backend:
    * MXNet
        * training - the generated code is Python. Python 2.7 or higher is required, together with the Python packages `h5py` and `mxnet` (for training on CPU) or a CUDA-specific build such as `mxnet-cu75` for CUDA 7.5 (for training on GPU; select the package matching your CUDA version). Follow the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=Python&processor=CPU). A quick way to verify this setup is sketched after this list.
        * prediction - the generated code is C++. Install MXNet for C++ following the [official instructions on the MXNet site](https://mxnet.incubator.apache.org).
    * Caffe2
        * Install Caffe2 using the [instructions provided here](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/CNNArch2Caffe2#ubuntu).
        * training - the generated code is Python. Python 2.7 is required.
        * prediction - the generated code is C++.
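
A quick way to check that the Python training prerequisites for the MXNet backend are in place is the small script below. This is a minimal sketch; the script name is made up, and the GPU variant depends on the CUDA package you chose above:

```
# check_mxnet_prereqs.py - verify the Python packages needed for training
import h5py
import mxnet as mx

print("h5py:", h5py.__version__)
print("mxnet:", mx.__version__)

# A tiny computation confirms that the backend actually works on the CPU;
# for a GPU build (e.g. mxnet-cu75), replace mx.cpu() with mx.gpu(0).
x = mx.nd.ones((2, 2), ctx=mx.cpu())
print((x + x).asnumpy())
```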

### HowTo
1. Define an EMADL component containing the architecture of a neural network and save it in a `.emadl` file. For more information on the architecture language, please refer to the [CNNArchLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNArchLang). An example of an NN architecture:
```
component VGG16{
    ports in Z(0:255)^{3, 224, 224} image,
         out Q(0:1)^{1000} predictions;

    implementation CNN {

        def conv(filter, channels){
            Convolution(kernel=(filter,filter), channels=channels) ->
            Relu()
        }
        def fc(){
            FullyConnected(units=4096) ->
            Relu() ->
            Dropout(p=0.5)
        }

        image ->
        conv(filter=3, channels=64, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=128, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=256, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        fc() ->
        fc() ->
        FullyConnected(units=1000) ->
        Softmax() ->
        predictions
    }
}
```
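
For orientation, the Python sketch below shows roughly the same layer structure in plain MXNet/Gluon. This is not the code the generator emits; defaults such as the padding are assumptions made for illustration:

```
# Illustrative MXNet/Gluon counterpart of the VGG16 architecture above.
# NOT generator output; layer defaults (e.g. padding=1) are assumptions.
from mxnet.gluon import nn

def conv_block(net, channels, repeats):
    # mirrors conv(filter=3, channels=..., ->=repeats) followed by Pooling
    for _ in range(repeats):
        net.add(nn.Conv2D(channels, kernel_size=3, padding=1, activation='relu'))
    net.add(nn.MaxPool2D(pool_size=2, strides=2))

net = nn.HybridSequential()
conv_block(net, 64, 2)
conv_block(net, 128, 2)
conv_block(net, 256, 3)
conv_block(net, 512, 3)
conv_block(net, 512, 3)
for _ in range(2):                       # the two fc() blocks
    net.add(nn.Dense(4096, activation='relu'), nn.Dropout(0.5))
net.add(nn.Dense(1000))                  # Softmax is applied at prediction time
```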
2. Define a training configuration for this network and store it in a `.cnnt` file whose name matches that of the corresponding architecture (e.g. `VGG16.emadl` and `VGG16.cnnt`). For more information on the training language, please refer to the [CNNTrainLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNTrainLang). An example of a training configuration:
```
configuration VGG16{
    num_epoch:10
    batch_size:64
    normalize:true
    load_checkpoint:false
    optimizer:adam{
        learning_rate:0.01
        learning_rate_decay:0.8
        step_size:1000
    }
}
```
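
For the MXNet backend, these settings correspond roughly to the optimizer setup below (a sketch for orientation, not the generated training code):

```
# Rough MXNet counterpart of the training configuration above (illustrative only).
import mxnet as mx

# learning_rate_decay and step_size map to a factor schedule:
# the learning rate is multiplied by 0.8 every 1000 updates.
schedule = mx.lr_scheduler.FactorScheduler(step=1000, factor=0.8)
optimizer = mx.optimizer.Adam(learning_rate=0.01, lr_scheduler=schedule)

# num_epoch and batch_size are then used by the training loop, e.g.:
#   module.fit(train_iter, num_epoch=10, optimizer=optimizer)
```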
3. Generate code in a general-purpose language for the specified deep learning backend using the jar package of the EMADL2CPP generator. The generator accepts the following command-line parameters:
    * `-m`    path to directory with EMADL models
    * `-r`    name of the root model
    * `-o`    output path
    * `-b`    backend

    Assume both the architecture definition `VGG16.emadl` and the corresponding training configuration `VGG16.cnnt` are located in a folder `models`, and the target code should be generated into the `target` folder using the `MXNet` backend. An example command is then:
    ```java -jar embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar -m models -r VGG16 -o target -b MXNET```

    You can find the EMADL2CPP jar [here](doc/embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar).

4. Once the target code is generated, the corresponding trainer file (e.g. `CNNTrainer_<root_model_name>.py` in the case of MXNet) can be executed.
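
Conceptually, such a trainer loads the training data and fits the network with the hyperparameters from the `.cnnt` configuration. The sketch below illustrates this idea only; the file paths and HDF5 dataset keys are assumptions, as the generated trainer derives them from the models:

```
# Minimal sketch of what a generated MXNet trainer conceptually does.
# Paths and dataset keys below are assumptions for illustration.
import h5py
import mxnet as mx

with h5py.File('data/train.h5', 'r') as f:
    data = f['data'][:]              # e.g. images of shape (N, 3, 224, 224)
    label = f['softmax_label'][:]    # e.g. class indices of shape (N,)

train_iter = mx.io.NDArrayIter(data, label, batch_size=64, shuffle=True)

# 'VGG16-symbol.json' stands in for the network built from the EMADL model.
net = mx.sym.load('model/VGG16-symbol.json')
module = mx.mod.Module(symbol=net)
module.fit(train_iter, num_epoch=10, optimizer='adam',
           optimizer_params={'learning_rate': 0.01})
```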

<a name="app"></a>
# Building and running an application for TORCS

## Prerequisites
1. Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
2. Armadillo (version 6.600 or later is required); see the [official instructions at the Armadillo website](http://arma.sourceforge.net/download.html).
3. ROS, a Java runtime environment, and GCC/Clang - install using your Linux distribution's tools, e.g. apt on Ubuntu:

    ```apt-get install ros-base-dev clang openjdk-8-jre```
4. MXNet - install for C++ using the [official instructions at the MXNet website](https://mxnet.incubator.apache.org/)
5. TORCS (see below)

### TORCS Installation
1. Download the customized TORCS distribution from the [DeepDriving site](http://deepdriving.cs.princeton.edu/)
2. Unpack the downloaded archive and navigate to the `DeepDriving/torcs-1.3.6` directory
3. Compile and install by running `./configure --prefix=/opt/torcs && make -j && make install && make datainstall`
4. Remove the original TORCS tracks and copy the customized tracks:

    ```rm -rf /opt/torcs/share/games/torcs/tracks/* && cp -rf ../modified_tracks/* /opt/torcs/share/games/torcs/tracks/```
5. Start TORCS by running `/opt/torcs/bin/torcs`

Further installation help can be found in the Readme file provided with the DeepDriving distribution.

### TORCS Setup
1. Run TORCS
2. Configure race
    1. Select Race -> Quick Race -> Configure Race
    2. Select one of the maps with the `chenyi-` prefix and click Accept
    3. Remove all drivers from the Selected section on the left by selecting every driver and clicking (De)Select
    4. Select the driver `chenyi` on the right side and add it by clicking (De)Select
    5. Add other drivers with the `chenyi-` prefix if needed
    6. Click Accept -> Accept -> New Race

    Example of a drivers configuration screen:

    ![Drivers](doc/torcs_Drivers.png)
3. Use keys `1-9` and `M` to hide all the widgets such as the speedometer, map, etc. from the TORCS screen
4. Use the `F2` key to switch between camera modes and select the mode in which the car and its parts are not visible
5. Use the `PgUp`/`PgDown` keys to switch between cars and select `chenyi` - the car that does not drive on its own

## Code generation and running the project

1. Download and unpack the [archive](doc/deep_driving_project.zip) that contains all EMA and EMADL components of the application
2. Run the `generate.sh` script. It generates the code into the `target` folder, copies the handwritten part of the project (communication with TORCS via shared memory) as well as the weights of the trained CNN, and finally builds the project
3. Start TORCS and configure the race as described above. Select the camera mode in which the host car is not visible
4. Go to the `target` folder and start the `run.sh` script. It opens three terminals: one for the ROS core, one for the TORCSComponent (the application part responsible for communication with TORCS) and one for the Mastercomponent (the application part generated from the models in step 2, which is responsible for the application logic)

# Troubleshooting Help

ERROR: CNNPredictor_dp_mastercomponent_dpnet.h:4:33: fatal error: mxnet/c_predict_api.h: No such file or directory.

FIX:
Copy the compiled MXNet lib and include files to /usr/lib and /usr/include respectively. Replace YOUR_MXNET_REPOSITORY with your corresponding information:
```
cd YOUR_MXNET_REPOSITORY/incubator-mxnet/lib
sudo cp * /usr/lib
cd YOUR_MXNET_REPOSITORY/incubator-mxnet/include
sudo cp -r * /usr/include
```


ERROR: HelperA.h:79:28: error: ‘sqrtmat’ was not declared in this scope.

FIX:
Copy the compiled Armadillo lib and include files to /usr/lib and /usr/include respectively. Replace YOUR_ARMADILLO_REPOSITORY and VERSION (e.g. 8.500.1) with your corresponding information:
```
cd YOUR_ARMADILLO_REPOSITORY/armadillo-VERSION
sudo cp libarmadillo* /usr/lib
cd YOUR_ARMADILLO_REPOSITORY/armadillo-VERSION/include
sudo cp -r * /usr/include
```

ERROR: Coordinator_dp_mastercomponent.cpp.o: undefined reference to symbol 'dsyrk_'; /usr/lib/libopenblas.so.0: error adding symbols: DSO missing from command line (after executing generate2ros.sh).

FIX:
Once generate2ros.sh has been executed, modify the file YOUR_TORCSDL_REPOSITORY/torcs_dl/doc/deep_driving_project/target/Mastercomponent/dp_mastercomponent/coordinator/CMakeLists.txt to link the blas and openblas libraries, i.e.:
```
target_link_libraries(Coordinator_dp_mastercomponent RosAdapter_dp_mastercomponent dp_mastercomponent Threads::Threads -lblas -lopenblas)
```

Then navigate to YOUR_TORCSDL_REPOSITORY/torcs_dl/doc/deep_driving_project/target and execute build_all.sh. Make sure you delete the build folders first to remove the existing compilation configurations for both components:
```
cd YOUR_TORCSDL_REPOSITORY/torcs_dl/doc/deep_driving_project/target
# delete the build folders of both components first, then rebuild
bash build_all.sh
```


Finally, the deep driving project will compile successfully.