Report
I found this community message with the same issue: https://forums.percona.com/t/manually-restoring-multiple-times-is-not-working/27301. However, the link to the Jira ticket in that thread is invalid.
I ran into the same error on the Percona Operator 2.4.1 when I had to restore a database after a cluster failure. The first restore failed because I selected an invalid restore time, so it could not find a valid restore point. Fixing the restore time was not possible: running kubectl delete on the restore YAML showed the restore as deleted, but the operator did not seem to know it. All further restore attempts under different names also failed, as pointed out in the community forum.
I could not find any way to list which restores the operator thought were still running.
As a further test I deleted the cluster and re-created it with the same name. The Percona operator saw the new cluster and tried to restart the failed restore again and again, finally giving up after five more attempts. Even deleting the cluster does not signal the operator to remove failed or in-progress restores.
There needs to be a way to list the restores and delete them completely so that a new one can be started.
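For reference, this is roughly what I tried. The namespace, cluster, and restore names below are placeholders, and I am assuming the v2 short resource names (pg-restore for PerconaPGRestore, pg for PerconaPGCluster):

```sh
# List the restore objects the operator knows about
# ("pg-restore" is assumed to be the PerconaPGRestore short name):
kubectl get pg-restore -n pgo

# Delete the failed restore; kubectl reports it as deleted...
kubectl delete pg-restore restore-bad-time -n pgo

# ...but the operator keeps retrying. Inspecting the cluster ("pg")
# object still shows a leftover restore reference:
kubectl get pg my-cluster -n pgo -o yaml | grep -i -A 5 restore
```

The delete itself succeeds, so from kubectl's point of view the restore is gone; the retries appear to be driven by whatever restore request is still recorded on the cluster object.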
More about the problem
I would have expected that running kubectl delete on the restore YAML would have cancelled any further restore attempts.
Steps to reproduce
See the Community message posted above.
Versions
Kubernetes: 1.27.11
Operator: 2.4.1
Database: PostgreSQL 15
Anything else?
This is a very serious problem: you cannot fix a failed restore, and the database stays down because of the repeated restore attempts.
@Lobo75 I have reproduced it and we will fix it in the next PG release, 2.6.0.
STR:
Run a restore with some wrong option, e.g. an invalid restore time (a minimal example manifest is sketched below).
Do not wait until the restore fails completely (just the first try) and remove the restore object manually.
As a result, the restore section is not removed from the pg object.
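To make the STR concrete, a minimal restore along these lines reproduces it. The namespace, cluster name, repo name, and the deliberately invalid target time are placeholders, and the field layout is a best-effort recollection of the PerconaPGRestore spec, so treat it as a sketch rather than a verified manifest:

```sh
# Create a restore with a point-in-time target that has no valid
# restore point (placeholder names and timestamp):
cat <<'EOF' | kubectl -n pgo apply -f -
apiVersion: pgv2.percona.com/v2
kind: PerconaPGRestore
metadata:
  name: restore-bad-time
spec:
  pgCluster: my-cluster
  repoName: repo1
  options:
  - --type=time
  - --target="2020-01-01 00:00:00+00"
EOF

# Remove the restore object after the first failed attempt,
# before the operator has given up on it:
kubectl -n pgo delete pg-restore restore-bad-time
```

After this, the restore section is still present on the pg object, so the operator keeps retrying the restore even though the PerconaPGRestore object is gone.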