Status of Cluster creation stays in Deploying state #884
Comments
Please attach an example of the API service log.
I believe there are no
This is after a second attempt, and when I try a third time I get a 3rd error for a 3rd container. Is it trying to track the automation container status?
@ngurban Could you take a look at this?
@sbusso Could you attach the full log? This would help us with the diagnostics.
Here are the logs from another try, with the vars from the frontend redacted:
When the deployment starts, the API records the operation, e.g.:

{"log":"{\"level\":\"info\",\"app\":\"pg_console\",\"version\":\"2.1.0\",\"cid\":\"9a6bc197-91c6-4ae7-ae3d-e60ddf9d83b4\",\"operation\":{\"ID\":5,\"ProjectID\":34,\"ClusterID\":7,\"DockerCode\":\"a2f997d0b153d821776f5edb28f2d4d0a7d475ae8cb813e56b0d9489bf400b27\",\"Cid\":\"9a6bc197-91c6-4ae7-ae3d-e60ddf9d83b4\",\"Type\":\"deploy\",\"Status\":\"in_progress\",\"Log\":null,\"CreatedAt\":\"2025-02-06T12:20:48.764705Z\",\"UpdatedAt\":null},\"time\":\"2025-02-06T12:20:48Z\",\"message\":\"operation was created\"}\n","stream":"stdout","time":"2025-02-06T12:20:48.766871737Z"}

But when the container is destroyed, the API loses track and throws an error, regardless of whether the operation was successful or failed.
The only reference I found is
where it looks like the
Thanks, we'll take a look at it. It seems that this problem is not always reproducible. @sbusso, could you share your instructions on how you start the Autobase console? Have you mounted a directory with the ansible json log?
Example:
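For context, a minimal sketch of such a mount, assuming the ansible json log is written under /tmp/ansible (the same host path that appears in the compose file below); treat the path as an assumption, not a documented default:

services:
  pg-console-api:
    image: autobase/console_api:latest
    volumes:
      # host directory holding the automation (ansible) json log,
      # mounted so the API container can read deployment status from it
      - /tmp/ansible:/tmp/ansible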
Ok. Regarding the path: could it use a named volume shared by the containers instead of a hardcoded host path? Here is the config I use; I can submit it with a caddy config in a PR over the weekend.

services:
  pg-console-api:
    image: autobase/console_api:latest
    container_name: pg-console-api
    restart: unless-stopped
    volumes:
      # Docker socket so the API can launch the automation container
      - /var/run/docker.sock:/var/run/docker.sock
      # ansible json log directory shared with the automation container
      - /tmp/ansible:/tmp/ansible
    depends_on:
      - pg-console-db
    environment:
      - PG_CONSOLE_API_URL=${PG_CONSOLE_API_URL}
      - PG_CONSOLE_AUTHORIZATION_TOKEN=${PG_CONSOLE_AUTH_TOKEN}
      - PG_CONSOLE_DB_HOST=pg-console-db
      - PG_CONSOLE_LOGGER_LEVEL=${PG_CONSOLE_LOGGER_LEVEL:-INFO}
    networks:
      - pg-console
      - caddy

  pg-console-ui:
    image: autobase/console_ui:latest
    container_name: pg-console-ui
    restart: unless-stopped
    labels:
      # caddy-docker-proxy labels: route /api/v1/* to the API, everything else to the UI
      caddy: ${PG_CONSOLE_DOMAIN}
      caddy.@api.path: /api/v1/*
      caddy.0_reverse_proxy: "@api pg-console-api:8080"
      caddy.1_reverse_proxy: "{{upstreams 80}}"
    environment:
      - PG_CONSOLE_API_URL=${PG_CONSOLE_API_URL}
      - PG_CONSOLE_AUTHORIZATION_TOKEN=${PG_CONSOLE_AUTH_TOKEN}
    networks:
      - pg-console
      - caddy

  pg-console-db:
    image: autobase/console_db:latest
    container_name: pg-console-db
    restart: unless-stopped
    volumes:
      - console_postgres:/var/lib/postgresql
    networks:
      - pg-console

volumes:
  console_postgres:

networks:
  pg-console:
  caddy:
    name: caddy
    external: true
I think so. The automation container’s log needs to be accessible to the API container.
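If so, a named volume could in principle replace the host path; a minimal sketch, where the volume name ansible_logs is hypothetical and whichever automation container the API starts over docker.sock would also need the same volume mounted at the same path:

services:
  pg-console-api:
    image: autobase/console_api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # hypothetical named volume in place of the /tmp/ansible host path;
      # the container that writes the ansible json log must mount it too
      - ansible_logs:/tmp/ansible

volumes:
  ansible_logs: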
Bug description
I deployed a 1-instance cluster to Hetzner and the operation stayed in in_progress despite the deployment being finished.

Expected behavior
Status to change to deployed
Steps to reproduce
Installation method
Console (UI)
System info
Autobase console deployed to a Docker Swarm cluster using the individual images for ui/api/db
Additional info
No response