Commit 25557da3 authored by Sascha Dewes

fixed some formatting issues in the readme

parent d9a60520
Generation, training, and execution were tested on Ubuntu 16.04 LTS. The following dependencies are required:
- Armadillo >= 9.400.3 (Follow official [installation guide](http://arma.sourceforge.net/download.html))
- ROS Kinetic (Follow official [installation guide](http://wiki.ros.org/kinetic/Installation/Ubuntu))
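The linked guides above are authoritative; as a rough, non-authoritative sketch, installing both dependencies on Ubuntu 16.04 can look like this (the Armadillo archive name and the ROS repository key are assumptions and may have changed):
```bash
# Armadillo: build and install from the source archive downloaded via the link above.
tar xf armadillo-9.400.3.tar.xz      # archive name depends on the downloaded version
cd armadillo-9.400.3
cmake . && make && sudo make install
cd ..

# ROS Kinetic via the official apt repository (key and mirror as of the Kinetic-era guide).
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
source /opt/ros/kinetic/setup.bash
```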
## Installing the agent
To install the agent, run
```bash
mvn clean install -s settings.xml
```
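The `-s settings.xml` flag makes Maven use the settings file shipped in the repository instead of the default `~/.m2/settings.xml`. If the build fails immediately, it can help to check the local toolchain first (a quick sanity check, not a project requirement list):
```bash
# Verify that Java and Maven are available on the PATH before building.
java -version
mvn -version
```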
## Training the agent
To train the agent, run
```bash
./train_agent.sh
```
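Training can take a while; as a non-authoritative convenience, it can be run detached with its output captured in a log file (the file name is just an example):
```bash
# Optional: run training in the background and follow its output.
nohup ./train_agent.sh > training.log 2>&1 &
tail -f training.log
```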
The trained model can be found in `target/agent/forestrl_singlestep_agent_master/src/forestrl_singlestep_agent_master/emadlcpp/model`. In the `ForestAgent` subfolder you will find a folder containing all snapshot models of the latest training run, as well as a `stats.pdf` with graphs that show the training progress. The best-performing model is saved in `target/agent/forestrl_singlestep_agent_master/src/forestrl_singlestep_agent_master/emadlcpp/model/forestrl.singlestep.agent.networks.ForestActor` in the two files `model_0_newest-0000.params` and `model_0_newest-symbol.json`.
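For orientation, the training outputs described above can be inspected like this (paths taken from this section; the exact snapshot folder name depends on the run):
```bash
# Shorthand for the documented model directory.
MODEL_DIR=target/agent/forestrl_singlestep_agent_master/src/forestrl_singlestep_agent_master/emadlcpp/model
ls "$MODEL_DIR/ForestAgent"                                      # snapshot folder(s) of the latest training
find "$MODEL_DIR/ForestAgent" -name stats.pdf                    # training-progress graphs
ls "$MODEL_DIR/forestrl.singlestep.agent.networks.ForestActor"   # best-performing model files
```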
## Evaluating the agent
To run/evaluate the trained agent, you first need to copy the two files `model_0_newest-0000.params` and `model_0_newest-symbol.json` from the location mentioned above into the `model/forestrl.singlestep.agent.networks.ForestActor` folder in the root directory. A pretrained model is already included in this directory, which you can use without having to train the agent first; if you want to keep this model, back it up before replacing the files. If you want to use a model from the snapshot directory instead, rename its files to the two names mentioned above so that the agent can find them. Once the model is in place, the agent can be evaluated by running
```bash
./evaluate_agent.sh
```
This starts the simulator and the agent and lets the agent complete 100 episodes in the simulator. At the end of the evaluation, the average rewards are printed to the terminal.
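The model-swapping steps described above can also be scripted; a minimal sketch, run from the repository root (the backup directory name is only an example):
```bash
# Copy the best-performing model into place for evaluation, keeping the pretrained model.
SRC=target/agent/forestrl_singlestep_agent_master/src/forestrl_singlestep_agent_master/emadlcpp/model/forestrl.singlestep.agent.networks.ForestActor
DST=model/forestrl.singlestep.agent.networks.ForestActor
mkdir -p model_backup && cp "$DST"/model_0_newest-* model_backup/    # optional backup of the pretrained model
cp "$SRC/model_0_newest-0000.params" "$SRC/model_0_newest-symbol.json" "$DST"/
./evaluate_agent.sh
```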
## The benchmark agents
Two benchmark agents are included in the project, against which the performance of the reinforcement learning agent can be compared: a random agent and a rule-based agent. Both use the postprocessor of the reinforcement learning agent, so the reinforcement learning agent has to be installed before the benchmark agents can be used. To start/evaluate the random agent, run
```bash
./evaluate_agent.sh -b random
```