
ForestRL

A reinforcement learning agent for automated decision making in a forestry simulator. The agent comes in two versions, called singlestep and multistep. The singlestep agent completes deliveries in four individual steps, while the multistep agent performs all four steps at once. The multistep agent is based on the singlestep agent and reuses most of its components; the multistep directory therefore contains only the files that were changed.

The singlestep version is the original agent described in the thesis "Autonomous Decision Making in Forestry 4.0 using Deep Reinforcement Learning". The multistep version was developed to address the problem of unused network outputs described in that thesis. The multistep version is recommended: it improves the training results and is also faster than the singlestep version. All scripts use the multistep agent by default.

Prerequisites

To run this application you need the following tools.

General

Generation, training, and execution of the model were tested on Ubuntu 16.04 LTS and require the following tools and packages:

  • Java 8, build tools (make, cmake, gcc), Git, Python 2.7 with development headers, numpy, SWIG, Boost, curl, and Maven:

    sudo apt install openjdk-8-jre gcc make cmake git python2.7 python-dev python-numpy swig libboost-all-dev curl maven
  • Python pip:

    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    python get-pip.py --user
  • Python packages h5py, numpy, pyprind, and matplotlib:

    pip install --user h5py numpy pyprind matplotlib
  • MXNet C++ Language Bindings (Follow official installation guide)

  • MXNet for Python (Follow official installation guide)

  • Armadillo >= 9.400.3 (Follow official installation guide)

  • ROS Kinetic (Follow official installation guide)
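Before installing, it can be worth verifying that the command-line tools from the list above are actually on the PATH. The following check is a small convenience sketch, not part of the project's scripts; the binary names (`java`, `mvn`, etc.) are the usual defaults and may differ on your distribution:

```shell
# Check that the core command-line prerequisites are on the PATH.
missing=0
for tool in java gcc make cmake git swig mvn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
echo "$missing tool(s) missing"
```

Libraries such as MXNet, Armadillo, and ROS are best verified via their own installation guides, since they are not single binaries.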

Installing the agent

To install the agent run

./install.sh

Training the agent

To train the agent run

./train_agent.sh

The trained model is stored in target/agent/forestrl_singlestep_agent_master/src/forestrl_singlestep_agent_master/emadlcpp/model. The ForestAgent subfolder contains a folder with all snapshot models from the latest training run, as well as a stats.pdf whose graphs show the training progress. The best-performing model is saved in target/agent/forestrl_singlestep_agent_master/src/forestrl_singlestep_agent_master/emadlcpp/model/forestrl.singlestep.agent.networks.ForestActor as the two files model_0_newest-0000.params and model_0_newest-symbol.json.

Evaluating the agent

To run/evaluate the trained agent, first copy the two files model_0_newest-0000.params and model_0_newest-symbol.json from the location mentioned above into the model/forestrl.singlestep.agent.networks.ForestActor folder in the root directory. This folder already contains a pretrained model, which you can use without training the agent first; if you want to keep that model, back it up before replacing the files. If you instead want to use a model from the snapshot directory, rename the files to the two names mentioned above so that the agent can find them. Once the model is in place, the agent can be evaluated by running

./evaluate_agent.sh

This starts the simulator and the agent and lets the agent complete 100 episodes in the simulator. At the end of the evaluation, the average rewards are printed to the terminal.
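The copy-and-backup step described above can be sketched as follows. So that the snippet is runnable anywhere, it demonstrates the step in a throwaway temporary directory: SRC stands in for the long target/... ForestActor path quoted above, and DST for model/forestrl.singlestep.agent.networks.ForestActor in the root directory.

```shell
# Sketch of the copy-and-backup step, demonstrated in a temporary directory.
# In the real repository, SRC is the long target/... ForestActor path and
# DST is model/forestrl.singlestep.agent.networks.ForestActor.
ROOT=$(mktemp -d)
SRC="$ROOT/snapshot"
DST="$ROOT/model/forestrl.singlestep.agent.networks.ForestActor"
mkdir -p "$SRC" "$DST"
touch "$SRC/model_0_newest-0000.params" "$SRC/model_0_newest-symbol.json"   # newly trained model
touch "$DST/model_0_newest-0000.params" "$DST/model_0_newest-symbol.json"   # pretrained model

# Back up the pretrained model before replacing it.
mkdir -p "$DST/backup"
cp "$DST"/model_0_newest-0000.params "$DST"/model_0_newest-symbol.json "$DST/backup/"

# Install the newly trained model under the exact names the agent expects.
cp "$SRC"/model_0_newest-0000.params "$SRC"/model_0_newest-symbol.json "$DST/"
```

Remember that a model taken from the snapshot directory must be renamed to model_0_newest-0000.params and model_0_newest-symbol.json before this step.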

The benchmark agents

Two benchmark agents are included in the project, against which the performance of the reinforcement learning agent can be compared: a random agent and a rule-based agent. Both use the postprocessor of the reinforcement learning agent, so the reinforcement learning agent must be installed before the benchmark agents can be used. To start/evaluate the random agent, run

./evaluate_agent.sh -b random

To start/evaluate the rule-based agent, run

./evaluate_agent.sh -b rulebased
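To compare both benchmarks in one go, the two invocations above can be wrapped in a simple loop. This is a convenience sketch, not a project script; it assumes it is run from the repository root where evaluate_agent.sh lives:

```shell
# Evaluate both benchmark agents in sequence (run from the repository root).
for agent in random rulebased; do
  echo "=== evaluating benchmark agent: $agent ==="
  if [ -x ./evaluate_agent.sh ]; then
    ./evaluate_agent.sh -b "$agent"
  else
    echo "evaluate_agent.sh not found; run this from the repository root"
  fi
done
```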