
Developing the Service


Development

If you're interested in developing for the package, there are a few things you need to know.

Working on a Service

CI Checks Locally

All the existing services follow the same workflow. They all rely on tox for local CI checks and poetry for package management. tox requires black, isort, flake8, autoflake and mypy to be available; I recommend installing them system-wide via pipx. Once they're installed, just run tox from the root of the sub-project and the continuous integration checks will run exactly as they are configured on GitHub.

For formatting: black .; for sorting imports: isort .; for PEP compliance: flake8 . and autoflake .; and for type checking: mypy .
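
As a rough sketch, assuming pipx is already installed on your system, setting up the tools and running the checks from a sub-project looks like this (path/to/sub-project is a placeholder):

# Install tox and the linting/formatting tools once, system-wide
pipx install tox
pipx install black
pipx install isort
pipx install flake8
pipx install autoflake
pipx install mypy

# Run the full CI suite from the root of a sub-project
cd path/to/sub-project
tox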

Poetry

poetry add [package name] to add a package. poetry remove [package name] to remove it.
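
For instance (requests here is just an arbitrary example package, not a project dependency):

# Add a dependency to the sub-project's pyproject.toml and lock file
poetry add requests

# Remove it again
poetry remove requests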

Working on the Core Package

The core package can be installed from the analysis-service-core repository and is used by all the sub-repos. You might want to fix a bug or add a feature to this core package, but to truly test it against its consumers (that is, the packages in this repo), you need to follow a few steps.

Typically, you would clone the repository and then activate the environment to start working:

git clone git@github.com:LAAC-LSCP/analysis-service-core.git
cd ./analysis-service-core
eval $(poetry env activate)

You can then work on another package that depends on it by installing the core package in editable mode:

poetry add -e [local path to core package]

Remember to point your IDE project's interpreter to the interpreter associated with this environment (hint: run which python inside the activated environment).
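
If you prefer not to activate the environment first, Poetry can also print the environment's location directly; either path can be pasted into your IDE's interpreter settings:

# Path of the Python interpreter inside the activated environment
which python

# Or ask Poetry for the virtualenv path directly
poetry env info --path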

As you make changes to the core package, they are immediately reflected in the dependent repository, because the dependency now resolves to your local source tree. Your IDE will also pick up on this, so IntelliSense and code navigation work across the two repos.

This is all great if you want to go fast, but once you build the container image, for instance for E2E exploration, there is a problem: the dependency is now referenced by a path on the local filesystem, and that path does not exist inside the container's filesystem. It is neither easy nor advisable to add the repo into the container, but you would still like to see your changes.

In this case, the fastest workflow is to make a feature branch in the core package and push it to the GitHub remote. You can then reference the core package by a specific branch, tag, or commit instead. This also works inside the container.

# Specific tag
poetry add git+https://github.com/LAAC-LSCP/analysis-service-core.git@v1.0.0

# Specific commit
poetry add git+https://github.com/LAAC-LSCP/analysis-service-core.git@commit-hash
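
Since the workflow above pushes a feature branch, you can also point Poetry at that branch directly (the branch name below is a placeholder):

# Specific branch
poetry add git+https://github.com/LAAC-LSCP/analysis-service-core.git@my-feature-branch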

Docker, Docker Desktop and Docker Compose

There's much to be said about these technologies but I will outline only the most important bits for our work.

Docker

Docker is containerization software, and well worth learning in its own right. Briefly, containers act like lightweight virtual machines, although the analogy is not exact. Containers are built from container images, which are defined by Dockerfiles (a file literally named "Dockerfile"). The recipe is typically a pre-existing base image followed by a series of RUN and COPY instructions that bring the image into the desired state.
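
As a minimal illustration (the image tag below is made up, not one of the project's images), building an image from a Dockerfile in the current directory looks like this:

# Build an image from the Dockerfile in the current directory and give it a tag
docker build -t analysis-service-example .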

Docker volumes are an important concept for us. Container filesystems are isolated from the host system, but complete isolation would make many containers useless. Docker volumes allow us to map certain folders on the host to folders in the container.
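
For example, this sketch mounts the current host directory into a throwaway container and lists it from the inside (alpine is just a small public image used for illustration):

# Mount the current directory at /workspace inside the container and list it
docker run --rm -v "$(pwd):/workspace" alpine ls /workspace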

Docker Compose

Docker Compose wraps Docker and lets us skip the messy details of going from recipe to image to container, or of writing long CLI commands to configure one. This abstraction is far more useful for our purposes, because it also lets us organise a group of cooperating containers and operate on that group as a whole, as is the case for the analysis-service. The docker-compose.yml and docker-compose.test.yml files are read by docker compose to build the images, spin up the containers, and connect them accordingly. Here we define, for each container, its ports, its endpoints on a software-defined network shared by the containers, its environment variables, and its volume mappings.
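
If you want to see exactly what Compose has resolved (ports, networks, environment variables, volumes) without starting anything, you can render the configuration:

# Validate the compose file and print the fully resolved configuration
docker compose -f docker-compose.yml config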

To build all the container images (from the analysis-service root folder):

docker compose -f docker-compose.yml build

Note that this may fail if you haven't yet created the corresponding external network with docker network create echoservice-external.
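
If you hit that error, creating the network once is enough; you can verify it exists afterwards:

# Create the external network the compose file expects, then list networks to confirm
docker network create echoservice-external
docker network ls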

To run the services:

docker compose -f docker-compose.yml up
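
If you'd rather keep your terminal free, you can also start the services in the background and follow the logs separately; this is standard Compose usage rather than anything specific to this project:

# Start in detached mode
docker compose -f docker-compose.yml up -d

# Follow the combined logs of all services
docker compose -f docker-compose.yml logs -f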

Finally, to stop the services:

docker compose -f docker-compose.yml down
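
To check what is currently running, or to also remove named volumes when tearing things down (be careful, the second command deletes volume data), the standard Compose commands are:

# List the services and their status
docker compose -f docker-compose.yml ps

# Stop everything and also remove the volumes declared in the compose file
docker compose -f docker-compose.yml down -v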

Docker Desktop

Docker Desktop (developed by Docker, Inc.) is a GUI that makes working with containers much, much easier. I recommend using it to easily inspect container metadata, filesystems, logs, and so on.

TODO: Kubernetes
