Commit e01a7b7c authored by Svetlana Pavlitskaya: Update README.md
*TorcsClient* is a C++ application that connects to TORCS and receives images via shared memory (the FreeImage library is used to read and store the images).
[*EMAM2RosCpp*](https://github.com/EmbeddedMontiArc/EMAM2RosCpp) and [*EMAM2RosMsg*](https://github.com/EmbeddedMontiArc/EMAM2RosMsg) are used to generate ROS code from EmbeddedMontiArc models.
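The actual TorcsClient is C++ and uses FreeImage, but the shared-memory exchange it relies on can be sketched with Python's standard `mmap` module. Everything here (the backing file path, the 64x64 RGB layout) is an illustrative assumption, not the real TORCS segment layout:

```python
import mmap
import os
import tempfile

# Illustrative image geometry -- the real TORCS segment layout differs.
WIDTH, HEIGHT, CHANNELS = 64, 64, 3
SIZE = WIDTH * HEIGHT * CHANNELS

# A temp file stands in for the shared-memory segment the C++ client maps.
path = os.path.join(tempfile.mkdtemp(), "torcs_image.shm")
with open(path, "wb") as f:
    f.write(b"\x00" * SIZE)

# Writer side: the simulator would store the current camera frame here.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as shm:
        shm[0:3] = bytes([255, 128, 0])  # first pixel (R, G, B)

# Reader side: the client maps the same segment and copies the frame out.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as shm:
        frame = bytes(shm[:SIZE])

print(frame[0:3])  # first pixel as written by the "simulator"
```

The C++ client does the analogous `mmap`/copy and then hands the pixel buffer to FreeImage for decoding and storage.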
## Development and deployment of a CNN component using EMADL2CPP
## Prerequisites
* Ubuntu 16.04
* ROS Lunar
## TorcsClient
* Compile TORCS:
    * Install dependencies: `libopenal-dev libalut-dev libplib-dev libvorbis-dev`
    * cd to the TORCS directory and run: `./configure --prefix=/opt/torcs && make -j && sudo make install && sudo make datainstall`
    * Test the build by starting `/opt/torcs/bin/torcs`
* Install `libfreeimage-dev`
## ROS
### Installation
* Install the `libarmadillo-dev` package and create a header alias:
`sudo ln -s /usr/include/armadillo /usr/include/armadillo.h`
* Follow ROS installation instructions on <http://wiki.ros.org/lunar/Installation/Ubuntu>
* Follow instructions to create a package on <http://wiki.ros.org/ROS/Tutorials/CreatingPackage>
We use `RosWorkspace/` as the ROS workspace directory and `RosWorkspace/src/torcs2` as the example package directory.
* Generate sources via EMAM2RosCpp
* Copy the generated sources into the package's src directory (`RosWorkspace/src/torcs2/src`)
Find the line
`# add_executable(${PROJECT_NAME}_node src/torcs2_node.cpp)`
in `RosWorkspace/src/torcs2/CMakeLists.txt`, uncomment it and change it to:
`add_executable(${PROJECT_NAME} src/torcs2.cpp)`
In the same file, also find and uncomment the `target_link_libraries(${PROJECT_NAME}_node ${catkin_LIBRARIES})` block and remove the `_node` suffix, so that it reads `target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES})`.
* Linux. Ubuntu Linux 16.04 and 18.04 were used during testing.
* Deep learning backend:
* MXNet
* training - the generated code is Python. Required are Python 2.7 or higher and the Python packages `h5py` and `mxnet` (for training on CPU) or e.g. `mxnet-cu75` for CUDA 7.5 (for training on GPU with CUDA; the concrete package should be selected according to the CUDA version). Follow the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/install/index.html?platform=Linux&language=Python&processor=CPU)
* prediction - the generated code is C++. Install the MXNet C++ package following the [official instructions on the MXNet site](https://mxnet.incubator.apache.org/)
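The CPU/GPU package choice above follows a simple naming rule: the GPU builds append the CUDA version without its dot to `mxnet-cu` (e.g. `mxnet-cu75` for CUDA 7.5). A tiny helper can make that rule explicit; the function itself is only an illustrative sketch, not part of the toolchain:

```python
def mxnet_package(cuda_version=None):
    """Return the pip package name for MXNet given a CUDA version string.

    cuda_version=None selects the CPU-only build; "7.5" -> "mxnet-cu75".
    """
    if cuda_version is None:
        return "mxnet"  # CPU-only training
    # GPU builds encode the CUDA version without the dot, e.g. cu75, cu80.
    return "mxnet-cu" + cuda_version.replace(".", "")

print(mxnet_package())       # mxnet
print(mxnet_package("7.5"))  # mxnet-cu75
```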
### HowTo
1. Define an EMADL component containing the architecture of a neural network and save it in a `.emadl` file. For more information on the architecture language please refer to the [CNNArchLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNArchLang). An example of an NN architecture:
```
component VGG16{
    ports in Z(0:255)^{3, 224, 224} image,
          out Q(0:1)^{1000} predictions;

    implementation CNN {
        def conv(filter, channels){
            Convolution(kernel=(filter,filter), channels=channels) ->
            Relu()
        }
        def fc(){
            FullyConnected(units=4096) ->
            Relu() ->
            Dropout(p=0.5)
        }
        image ->
        conv(filter=3, channels=64, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=128, ->=2) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=256, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        conv(filter=3, channels=512, ->=3) ->
        Pooling(pool_type="max", kernel=(2,2), stride=(2,2)) ->
        fc() ->
        fc() ->
        FullyConnected(units=1000) ->
        Softmax() ->
        predictions
    }
}
```
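To see why this is the classic VGG16 layout, the spatial dimensions can be traced with plain arithmetic: each `conv` block keeps the resolution (assuming "same" padding for `Convolution`, which is the usual CNNArchLang default), while each of the five max-poolings halves it. A backend-independent sketch:

```python
# Trace the feature-map shape through the VGG16 component above.
# Assumes "same" padding for Convolution, so only the five
# 2x2/stride-2 poolings change the spatial size.
def vgg16_shapes(size=224):
    shapes = [("input", 3, size)]
    channels_per_stage = [64, 128, 256, 512, 512]
    for channels in channels_per_stage:
        # The conv(...) repetitions keep the size; Pooling halves it.
        size //= 2
        shapes.append(("after pool", channels, size))
    return shapes

for name, channels, size in vgg16_shapes():
    print(f"{name}: {channels} x {size} x {size}")
# The final 512 x 7 x 7 volume is what the first fc() layer flattens.
```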
2. Define a training configuration for this network and store it in a `.cnnt` file; the name of the file should be the same as that of the corresponding architecture (e.g. `VGG16.emadl` and `VGG16.cnnt`). For more information on the training language please refer to the [CNNTrainLang project](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/languages/CNNTrainLang). An example of a training configuration:
```
configuration VGG16{
    num_epoch:10
    batch_size:64
    normalize:true
    load_checkpoint:false
    optimizer:adam{
        learning_rate:0.01
        learning_rate_decay:0.8
        step_size:1000
    }
}
```
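The optimizer block above describes a step-wise learning-rate decay: the rate is multiplied by `learning_rate_decay` every `step_size` updates (with the MXNet backend this typically maps to a `FactorScheduler`; that mapping is stated here as an assumption). In pure Python:

```python
def learning_rate(step, base_lr=0.01, decay=0.8, step_size=1000):
    """Learning rate after `step` updates under step-wise factor decay,
    matching learning_rate:0.01, learning_rate_decay:0.8, step_size:1000."""
    return base_lr * decay ** (step // step_size)

# The rate stays constant within each 1000-step window, then drops by 20%.
for step in (0, 1000, 2000, 5000):
    print(step, learning_rate(step))
```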
3. Generate GPL (general-purpose language) code for the specified deep learning backend using the jar package of the EMADL2CPP generator. The generator accepts the following command line parameters:
* -m path to directory with EMADL models
* -r name of the root model
* -o output path
* -b backend
* Run `catkin_make` and `catkin_make install` as described in the tutorial.
Check that the package is now available: `rospack list | grep torcs`
* Run the node from the package: `rosrun torcs torcs`
### Our ROS Packages
Assume both the architecture definition `VGG16.emadl` and the corresponding training configuration `VGG16.cnnt` are located in a folder `models`, and the target code should be generated into the `target` folder using the MXNet backend. An example command is then:
```
java -jar embedded-montiarc-emadl-generator-0.2.4-SNAPSHOT-jar-with-dependencies.jar -m models -r VGG16 -o target -b=MXNET
```
* Go to the `RosWorkspace` directory
* Run `catkin_make` to compile our packages
* Run `source devel/setup.bash` to set up the environment
* Run `rospack list` and check that it shows our packages among others
4. When the target code has been generated, the corresponding trainer file (e.g. `CNNTrainer_<root_model_name>.py` in the case of MXNet) can be executed.
## Development and deployment of an application for TORCS