Welcome to d2r-api’s documentation!

A data-processing backend and API for the [Data2Resilience](https://data2resilience.de/) project, implemented by the Bochum Urban Climate Lab (BUCL).

d2r-api

installation

The project is packaged and can be installed via pip:

via HTTPS

pip install git+https://github.com/RUBclim/d2r-api

via SSH

pip install git+ssh://git@github.com/RUBclim/d2r-api
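
You can also pin the install to a specific branch, tag, or commit via a git ref; the ref below is only an example:

pip install git+https://github.com/RUBclim/d2r-api@main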

development

set up the development environment

  1. create a virtual environment using tox (tox needs to be available globally)

    tox --devenv venv -e py313
    
  2. alternatively, create the virtual environment manually

    virtualenv venv -ppy313
    

    or

    python3.13 -m venv venv
    

    or

    uv venv venv -p python3.13
    
  3. and install the requirements

    pip install -r requirements.txt -r requirements-dev.txt
    
  4. activate the virtual environment

    . venv/bin/activate
    
  5. install and set up pre-commit. If not already installed globally, run

    pip install pre-commit
    

    set up the git hook

    pre-commit install
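
    optionally, run all hooks once against the entire code base to verify the setup

    pre-commit run --all-files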
    

run only the web app

You can run just the web app, without the queue and the worker process:

  1. export the environment variables

    export $(cat .env.dev | grep -Ev "^#" | xargs -L 1)
    
  2. start the database container

    docker compose -f docker-compose.yml -f docker-compose.dev.yml --env-file .env.dev up -d db
    
  3. run the web app

    DB_HOST=localhost uvicorn app.main:app --reload
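
    once it is running, you can sanity-check the app from a second shell. This assumes uvicorn's default port 8000 and FastAPI's built-in interactive docs route:

    curl http://localhost:8000/docs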
    

run the entire system in development mode

You need to have Docker and Docker Compose installed on your system.

docker compose -f docker-compose.yml -f docker-compose.dev.yml --env-file .env.dev up -d
  • the setup is configured so that the FastAPI web app restarts whenever any of the Python code changes. The Celery worker, however, needs to be restarted manually, e.g. as sketched below.
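
A minimal sketch of a manual worker restart, assuming the Celery worker's compose service is named worker (check docker-compose.yml for the actual service name):

docker compose -f docker-compose.yml -f docker-compose.dev.yml --env-file .env.dev restart worker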

run the tests

You can run the tests, including coverage, using tox:

tox -e py

You can run the tests using pytest alone (without coverage):

pytest tests/
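
To run only part of the suite, you can pass a path or a keyword filter; the file name and keyword below are purely hypothetical examples:

pytest tests/test_api.py -k sensors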

run celery locally

Celery does not support auto-reloading. You can work around that using the watchdog package and start celery like this:

PGPORT=5433 TC_DATABASE_HOST=localhost CELERY_BROKER_URL=redis://localhost:6379/0 \
watchmedo auto-restart --directory=app --pattern=*.py --recursive -- \
celery -A app.tasks worker --concurrency=1
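
The watchmedo command ships with the watchdog package; if it is not installed in your environment yet:

pip install 'watchdog[watchmedo]'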

You may have to override additional environment variables, depending on what part you are working on.

upgrade requirements

We use uv pip compile to manage our requirements.
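
A minimal sketch of regenerating the pinned files, assuming the inputs are pyproject.toml and requirements-dev.in (adjust to the input files actually used in this repo):

uv pip compile --upgrade pyproject.toml -o requirements.txt
uv pip compile --upgrade requirements-dev.in -o requirements-dev.txt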

backups

for the network data

A database backup/dump in production can be done by running this command from the host:

docker exec db pg_dump -Fc d2r_db -U dbuser > d2r_db.dump

This will generate some hypertable-related warnings, but they can be ignored.
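
Before relying on the dump, you can verify that it is readable by listing its contents:

pg_restore -l d2r_db.dump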

The backup can be restored like this:

  1. bring up a temporary db container and mount the backup you want to restore as a volume

    docker compose \
       -f docker-compose.yml \
       -f docker-compose.prod.yml \
       --env-file .env.prod \
       run \
       --rm \
       -v "$(pwd)/d2r_db.dump:/backups/d2r_db.dump:ro" \
       --name db \
       -d \
       db
    
  2. prepare the database for restore

    docker exec -it db psql -U dbuser -d d2r_db -c "SELECT timescaledb_pre_restore();"
    
  3. perform the restore - this will take some time!

    docker exec -it db pg_restore -Fc -d d2r_db -U dbuser /backups/d2r_db.dump
    
  4. finish the restore

    docker exec -it db psql -U dbuser -d d2r_db -c "SELECT timescaledb_post_restore();"
    
  5. stop the temporary container

    docker stop db
    
  6. start all services as usual

    docker compose \
       -f docker-compose.yml \
       -f docker-compose.prod.yml \
       --env-file .env.prod \
       up -d
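
    afterwards, you can do a quick sanity check of the restore, e.g. by listing the tables via psql's \dt meta-command:

    docker exec -it db psql -U dbuser -d d2r_db -c "\dt"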
    

for the raster data

A database backup/dump in production can be done by running this command from the host:

docker exec terracotta-db pg_dump -Fc terracotta -U dbuser > d2r_tc_db.dump

The backup can be restored like this:

  1. restore the raster files by extracting them from restic (if they are stored there). The restore target may already be the final destination (e.g. a directory mounted via sshfs); otherwise, restore to an intermediate directory first and copy the files to their final destination afterwards.

    restic -r d2r restore <ID of the backup> --target /tmp/
    
  2. bring up a temporary terracotta-db container and mount the backup you want to restore as a volume

    docker compose \
       -f docker-compose.yml \
       -f docker-compose.prod.yml \
       --env-file .env.prod run \
       --rm \
       -v "$(pwd)/d2r_tc_db.dump:/backups/d2r_tc_db.dump:ro" \
       --name terracotta-db \
       terracotta-db
    
  3. create a database to restore into

    docker exec -it terracotta-db psql -U dbuser -d postgres -c "CREATE DATABASE terracotta;"
    
  4. perform the restore

    docker exec -it terracotta-db pg_restore -Fc -d terracotta -U dbuser /backups/d2r_tc_db.dump
    
  5. stop the temporary container

    docker stop terracotta-db
    
  6. start all services as usual

    docker compose \
       -f docker-compose.yml \
       -f docker-compose.prod.yml \
       --env-file .env.prod \
       up -d
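
    afterwards, you can do a quick sanity check by listing the tables of the restored database (assuming the container keeps the name terracotta-db):

    docker exec -it terracotta-db psql -U dbuser -d terracotta -c "\dt"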
    

Project Funding

This project was funded by ICLEI Europe through the ICLEI Action Fund 2.0, a granting scheme supported by Google.org, under the project “Data2Resilience: Data-driven Urban Climate Adaptation”.
