### Our ROS Packages
*TorcsClient* is a C++ application that connects to Torcs and receives images via shared memory (the FreeImage library is used to read and store the images).
[EMAM2RosCpp](https://github.com/EmbeddedMontiArc/EMAM2RosCpp) and [EMAM2RosMsg](https://github.com/EmbeddedMontiArc/EMAM2RosMsg) are used to generate ROS code from EmbeddedMontiArc.

In the RosWorkspace/src/torcs2/CMakeLists.txt file, uncomment and change to:

`add_executable(${PROJECT_NAME} src/torcs2.cpp)`

Find and uncomment in the same file:

`target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES})`

* Run `catkin_make` and `catkin_make install` as described in the tutorial.
  Check that the package is now available: `rospack list | grep torcs`
* Run a node with the package: `rosrun torcs torcs`

## Development and deployment of a CNN component using EMADL2CPP

### Prerequisites
* Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
* Deep learning backend:
    * MXNet
        * training - the generated code is Python. Required are Python 2.7 or higher and the Python packages `h5py` and `mxnet` (for training on the CPU) or e.g. `mxnet-cu75` for CUDA 7.5 (for training on the GPU with CUDA; the concrete package should be selected according to the installed CUDA version). Follow the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=Python&processor=CPU); a minimal pip example is shown after this list.
        * prediction - the generated code is C++. Install the MXNet C++ package following the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/).
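For illustration, a minimal sketch of installing the training dependencies with pip, using the package names listed above (for GPU training, pick the `mxnet-cu*` variant that matches your CUDA version instead):
```
# CPU-only training
pip install h5py mxnet
# GPU training, e.g. with CUDA 7.5
pip install h5py mxnet-cu75
```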
### HowTo
1. Define an EMADL component containing the architecture of a neural network and save it in a `.emadl` file. For more information on the architecture language please refer to the [CNNArchLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNArchLang). An example of an NN architecture is sketched below:
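The following is only an illustrative sketch of such an architecture file, assuming CNNArchLang layer names such as `Convolution`, `Relu`, `Pooling`, `FullyConnected` and `Softmax`; it shows a shortened VGG16-style network rather than the full 16-layer definition (see the CNNArchLang project for the exact syntax):
```
component VGG16{
    ports in Z(0:255)^{3, 224, 224} image,
          out Q(0:1)^{1000} predictions;

    implementation CNN {
        // shortened VGG16-style chain; the real VGG16 repeats the convolution blocks
        image ->
        Convolution(kernel=(3,3), channels=64) ->
        Relu() ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        Convolution(kernel=(3,3), channels=128) ->
        Relu() ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        FullyConnected(units=4096) ->
        Relu() ->
        FullyConnected(units=1000) ->
        Softmax() ->
        predictions;
    }
}
```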
2. Define a training configuration for this network and store it in a `.cnnt` file; the name of the file should be the same as that of the corresponding architecture (e.g. `VGG16.emadl` and `VGG16.cnnt`). For more information on the training configuration language please refer to the [CNNTrainLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNTrainLang). An example of a training configuration:
```
configuration VGG16{
    num_epoch:10
    batch_size:64
    normalize:true
    load_checkpoint:false
    optimizer:adam{
        learning_rate:0.01
        learning_rate_decay:0.8
        step_size:1000
    }
}
```
3. Generate GPL code for the specified deep learning backend using the jar package of the EMADL2CPP generator. The generator accepts the following command line parameters:
* `-m` path to directory with EMADL models
* `-r` name of the root model
* `-o` output path
* `-b` backend
Assume both the architecture definition `VGG16.emadl` and the corresponding training configuration `VGG16.cnnt` are located in a folder `models` and the target code should be generated into the `target` folder using the `MXNet` backend. An example of a command is then:
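A sketch of such a call, using the parameters listed above; the jar file name is only a placeholder for the actual EMADL2CPP generator jar of your build, and the backend identifier is assumed to be `MXNET`:
```
java -jar embedded-montiarc-emadl-generator-jar-with-dependencies.jar -m models -r VGG16 -o target -b MXNET
```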