# CAPICE compute backend
[![CI](https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend/badges/main/pipeline.svg)](https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend/-/pipelines?page=1&scope=all&ref=main)
[![Code coverage](https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend/badges/main/coverage.svg)](https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend/-/graphs/main/charts)
<!-- [![REUSE status](https://api.reuse.software/badge/git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend)](https://api.reuse.software/info/git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend) -->
DASF backend module to run the ["Ice Sheet System Model (ISSM)"](https://issm.jpl.nasa.gov/) on a remote machine.
## Installation
Clone the [source code][source code] from GitLab and install it into a dedicated Python environment using pip
```bash
git clone https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend
python -m venv venv
source venv/bin/activate
pip install ./capice-compute-backend/
```
To use this in a development setup, clone the [source code][source code] from GitLab, start the development server and make your changes:
```bash
git clone https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend
cd capice-compute-backend
python -m venv venv
source venv/bin/activate
```
More detailed installation instructions may be found in the [docs][docs].
[source code]: https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend
[docs]: https://nfdi4earth.pages.rwth-aachen.de/pilotsincubatorlab/pilots/capice/capice-compute-backend/installation.html
## Set up and run the backend
To set up the communication with the [DASF](https://helmholtz.software/software/dasf-messaging-python) message broker, it is recommended to generate a .env file. This .env file has to contain information in the following form
```bash
export DE_BACKEND_TOPIC="<message-broker-topic>"
export DE_BACKEND_WEBSOCKET_URL="<websocket-url>"
export DE_BACKEND_HEADER='{ "authorization": "Token <consumer-secret-token>" }'
```
Where
- `<message-broker-topic>` is the topic at the message broker that the backend subscribes to,
- `<websocket-url>` is the URL of the websocket the backend connects to,
- `<consumer-secret-token>` is the secret token used to authenticate with the message broker.
Then export the environment variable `DE_MESSAGING_ENV_FILE` to point to your .env file, for example located in your home directory
```bash
export DE_MESSAGING_ENV_FILE="${HOME}/.capice_compute_env"
```
With this setup you should be able to run the backend via
```bash
./capice-compute-backend/app.sh
```
The output should look something like this
```bash
2024-12-16 17:42:34,810 - INFO - [demessaging.messaging.consumer]: waiting for incoming request
2024-12-16 17:42:34,882 - INFO - [websocket]: Websocket connected
2024-12-16 17:42:35,004 - INFO - [websocket]: Websocket connected
```
## Send a request to the backend
To send a request to the backend, for example from your local machine, install this package there in the same way as on the remote machine.
```bash
git clone https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend
python -m venv venv
source venv/bin/activate
pip install ./capice-compute-backend/
```
Then again create a .env file with the following contents
```bash
export DE_BACKEND_TOPIC="<message-broker-topic>"
export DE_BACKEND_WEBSOCKET_URL="<websocket-url>"
export DE_BACKEND_HEADER='{ "authorization": "Token <producer-secret-token>" }'
```
`DE_BACKEND_TOPIC` and `DE_BACKEND_WEBSOCKET_URL` are the same as before for the backend. However, `DE_BACKEND_HEADER` now contains the `<producer-secret-token>`. The secret tokens for authorization differ depending on whether the backend (consumer) or the API side (producer) authenticates to the message broker.
Then source the .env file
```bash
source .env
```
A minimal example could look like this
```python
from capice_compute.api import version_info
print(version_info())
```
If everything is set up correctly, the script on your local machine should request the software's version from the backend. The output should look something like this
```bash
2024-12-16 17:48:02,061 - INFO - [websocket]: Websocket connected
2024-12-16 17:48:02,062 - INFO - [websocket]: Websocket connected
2024-12-16 17:48:02,135 - INFO - [demessaging.messaging.producer]: start waiting for message
{'capice-compute-backend': '0+untagged.80.ge8d0a5f'}
```
In this case the version of `capice-compute-backend` is `0+untagged.80.ge8d0a5f`.
## Running ISSM
To run the "Ice Sheet System Model ISSM" ([Larour et al., 2012](https://doi.org/10.1029/2011JF002140)), ISSM has to be installed on the system where the backend is running. The model can be clone from [https://github.com/ISSMteam/ISSM](https://github.com/ISSMteam/ISSM). Information on how to configure and install the model can be found on the [project's website](https://issm.jpl.nasa.gov/).
Another requirement to run ISSM is that the environment variable `$ISSM_DIR` is defined
```bash
export ISSM_DIR="<path-to-ISSM-installation>"
```
`$ISSM_DIR` points to the root folder of the ISSM installation. This variable usually has to be set during the installation of ISSM and should therefore already be available. [issm.py](https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-compute-backend/-/blob/main/capice_compute/issm.py) is a wrapper that makes ISSM functions available to the backend. It imports the ISSM Python interface based on `$ISSM_DIR`.
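A minimal sketch of this import pattern could look like the following. The paths `$ISSM_DIR/bin` and `$ISSM_DIR/lib` are assumptions for illustration and are not taken from `issm.py`; refer to the wrapper itself for the actual logic.
```python
import os
import sys

# Locate the ISSM installation; fails early if ISSM_DIR is not set.
ISSM_DIR = os.environ["ISSM_DIR"]

# Make the ISSM Python modules importable (assumed locations, illustration only).
for subdir in ("bin", "lib"):
    path = os.path.join(ISSM_DIR, subdir)
    if os.path.isdir(path) and path not in sys.path:
        sys.path.append(path)

# From here on, ISSM's Python interface can be imported like any other module.
```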
A minimal example of how to run ISSM on the backend system, without accessing it directly but only by calling the backend's API, could look like this
```python
import os
from capice_compute.api import SwiftSettings, SlurmSettings, ISSMJob
slurm_settings = SlurmSettings(
    output="slurm.out",
    error="slurm.err",
    cluster="cluster_on_the_HPC_system",
    partition="partition_on_the_HPC_system",
    nodes=1,
    ntasks=1,
    ntasks_per_node=1,
    cpus_per_task=4,
    time="00:10:00",
    mail_type="end",
    mail_user="user@mail.com",
)
swift_settings = SwiftSettings(
    os_auth_token=os.getenv("OS_AUTH_TOKEN"),
    os_storage_url=os.getenv("OS_STORAGE_URL"),
)
issm_job = ISSMJob(
    "Greenland",
    file_transfer_method="swift",
    slurm_settings=slurm_settings,
    swift_settings=swift_settings,
)
issm_job.create_empty_model()
numberofelement, numberofvertices = issm_job.triangle(
    "DomainOutline.exp",
    20000.0,
)
```
A more complete example can be found at [https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-greenland-example](https://git.rwth-aachen.de/nfdi4earth/pilotsincubatorlab/pilots/capice/capice-greenland-example).
**Larour, E., Seroussi, H., Morlighem, M., and Rignot, E. (2012).** “Continental scale, high order, high spatial resolution, ice sheet modeling using the Ice Sheet System Model (ISSM)”. In: *Journal of Geophysical Research: Earth Surface* 117.F1. DOI: [https://doi.org/10.1029/2011JF002140](https://doi.org/10.1029/2011JF002140)
## File transfer
The implemented software allows for data transfer via the [swift object storage](https://docs.openstack.org/swift/latest/) or [NextCloud](https://nextcloud.com/). For the presented case it is assumed that the [swift service hosted by DKRZ](https://docs.dkrz.de/doc/datastorage/swift/index.html) or [AWI's NextCloud](https://nextcloud.awi.de) is used. In principle, other instances of these solutions can be used; however, this requires some adjustment.
### swift
To use swift for file transfer between the different systems, the class `SwiftSettings` has to be passed to the ISSM job. This class holds information for authentication and where to store files.
```python
from capice_compute.api import SwiftSettings
swift_settings = SwiftSettings(
    os_auth_token="<your-secret-swift-token>",
    os_storage_url="<your-storage-url>",
    swift_url="<url-to-swift-service>",
    swift_container="<your-swift-container>",
)
```
While `os_auth_token` and `os_storage_url` are required arguments, `swift_url` and `swift_container` are optional. The default values are `swift_url="https://swift.dkrz.de"` and `swift_container="issm"`. In case the default values are used, a container with the name "issm" has to be created within the user's swift storage.
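Since `swift_url` and `swift_container` have defaults, a minimal configuration only needs the two required arguments; the alternative swift instance and container name in the second variant below are hypothetical placeholders.
```python
from capice_compute.api import SwiftSettings

# Minimal form: falls back to swift_url="https://swift.dkrz.de" and
# swift_container="issm" (the "issm" container must already exist).
swift_settings = SwiftSettings(
    os_auth_token="<your-secret-swift-token>",
    os_storage_url="<your-storage-url>",
)

# Explicit form: target a different swift instance and container
# (both values below are hypothetical placeholders).
swift_settings_custom = SwiftSettings(
    os_auth_token="<your-secret-swift-token>",
    os_storage_url="<your-storage-url>",
    swift_url="https://swift.example.org",
    swift_container="my-issm-container",
)
```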
Instructions on how to generate an authentication token for the DKRZ swift storage service can be found in [the respective documentation](https://docs.dkrz.de/doc/datastorage/swift/python-swiftclient.html).
### NextCloud
To use NextCloud for file transfer between the different systems, the class `NextcloudSettings` has to be passed to the ISSM job.
```python
from capice_compute.api import NextcloudSettings
nextcloud_settings = NextcloudSettings(
    nextcloud_user="<your-user-name>",
    nextcloud_auth_token="<your-secret-nextcloud-token>",
    nextcloud_url="<url-to-nextcloud-service>",
    nextcloud_container="<your-nextcloud-folder>",
)
```
While `nextcloud_user` and `nextcloud_auth_token` are required arguments, `nextcloud_url` and `nextcloud_container` are optional. The default values point to the NextCloud hosted by the AWI: `nextcloud_url="https://nextcloud.awi.de"` and `nextcloud_container="issm"`. In case the default values are used, a folder with the name "issm" has to be created within the user's NextCloud storage.
Tokens for authentication with NextCloud can be generated using NextCloud's web interface; the function can be found within the security settings.
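How the NextCloud settings plug into a job is not shown above. By analogy with the swift example, it would presumably look like the following sketch; the `file_transfer_method="nextcloud"` value and the `nextcloud_settings` keyword are assumptions for illustration and should be checked against the API.
```python
from capice_compute.api import ISSMJob, NextcloudSettings, SlurmSettings

nextcloud_settings = NextcloudSettings(
    nextcloud_user="<your-user-name>",
    nextcloud_auth_token="<your-secret-nextcloud-token>",
)

slurm_settings = SlurmSettings(
    output="slurm.out",
    error="slurm.err",
    cluster="cluster_on_the_HPC_system",
    partition="partition_on_the_HPC_system",
    nodes=1,
    ntasks=1,
    ntasks_per_node=1,
    cpus_per_task=4,
    time="00:10:00",
    mail_type="end",
    mail_user="user@mail.com",
)

# "nextcloud" as transfer method and the nextcloud_settings keyword are assumed
# by analogy with the swift case; they are not confirmed by this README.
issm_job = ISSMJob(
    "Greenland",
    file_transfer_method="nextcloud",
    slurm_settings=slurm_settings,
    nextcloud_settings=nextcloud_settings,
)
```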
## Slurm
The class `SlurmSettings` collects the Slurm options that are used to submit the ISSM job on the HPC system, for example
```python
from capice_compute.api import SlurmSettings
slurm_settings = SlurmSettings(
    output="slurm.out",
    error="slurm.err",
    cluster="cluster_on_the_HPC_system",
    partition="partition_on_the_HPC_system",
    nodes=1,
    ntasks=1,
    ntasks_per_node=1,
    cpus_per_task=4,
    time="00:10:00",
    mail_type="end",
    mail_user="user@mail.com",
)
```
These settings correspond to a Slurm batch script header of the following form
```bash
#!/bin/bash
#SBATCH --output="slurm.out"
#SBATCH --error="slurm.err"
#SBATCH --clusters="cluster_on_the_HPC_system"
#SBATCH --partition="partition_on_the_HPC_system"
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time="00:10:00"
#SBATCH --mail-type="end"
#SBATCH --mail-user="user@mail.com"
pipenv run python -m capice.run_issm
```
## Technical note
This package has been generated from the template