chore: Refactoring/domain handler updates #7403
Conversation
```go
// as a historical backwards compatibility measure.
// Going forward, any history use-cases should rely on the FailoverHistory endpoint which
// supports all failover types and more than a handful of entries.
if !wasActiveActive && !isActiveActive {
```
Behaviour change: not going to put active/active updates in the domain data, since they would likely get too hard to read and too large.
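For context, here is a minimal sketch of how such a guard might look. The names (`FailoverEvent`, `recordFailoverEvent`, the `failoverHistory` domain-data key, and the entry cap) are assumptions for illustration, not the exact identifiers in this PR:

```go
package domain

import (
	"encoding/json"
	"time"
)

// FailoverEvent is a hypothetical, trimmed-down history entry; the real type
// and field names live in the domain handler package.
type FailoverEvent struct {
	EventTime   time.Time `json:"eventTime"`
	FromCluster string    `json:"fromCluster"`
	ToCluster   string    `json:"toCluster"`
}

// assumed bound on the legacy domain-data history
const maxFailoverHistoryEntries = 5

// recordFailoverEvent keeps the legacy behaviour only for active-passive
// domains, mirroring the `!wasActiveActive && !isActiveActive` guard above.
// Active-active failovers are served by the FailoverHistory endpoint instead
// of being appended to the domain-data blob.
func recordFailoverEvent(domainData map[string]string, wasActiveActive, isActiveActive bool, event FailoverEvent) error {
	if wasActiveActive || isActiveActive {
		return nil
	}
	var history []FailoverEvent
	if raw := domainData["failoverHistory"]; raw != "" {
		if err := json.Unmarshal([]byte(raw), &history); err != nil {
			return err
		}
	}
	// newest first, capped to a handful of entries
	history = append([]FailoverEvent{event}, history...)
	if len(history) > maxFailoverHistoryEntries {
		history = history[:maxFailoverHistoryEntries]
	}
	blob, err := json.Marshal(history)
	if err != nil {
		return err
	}
	domainData["failoverHistory"] = string(blob)
	return nil
}
```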
In the long term, are we removing failover history from the data blob for active-passive domains?
I don't really have an opinion. Probably yes, but some parts of the system have taken a dependency on it, so I'm not racing to do so.
```go
lastUpdatedTime,
response, err := d.handleFailoverRequest(
	ctx,
	failoverRequest.ToUpdateDomainRequest(),
```
I chose to map to the UpdateDomain request struct here rather than the reverse, because the FailoverDomain API is actually a subset of the UpdateDomain failover capabilities: we didn't include any graceful-failover fields for it, so the mapping has to go in this direction.
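A sketch of the mapping direction being described, with assumed (not actual) field sets for both request structs:

```go
package types

// FailoverDomainRequest exposes only a force-failover target (assumed shape),
// while UpdateDomainRequest also carries the graceful-failover knobs.
type FailoverDomainRequest struct {
	Name              string
	ActiveClusterName string
}

type UpdateDomainRequest struct {
	Name                     string
	ActiveClusterName        *string
	FailoverTimeoutInSeconds *int32 // graceful failover; not exposed by FailoverDomain
}

// ToUpdateDomainRequest widens the failover request into an update request.
// The reverse direction would be lossy, because UpdateDomain supports
// graceful-failover fields that FailoverDomain deliberately omits.
func (r *FailoverDomainRequest) ToUpdateDomainRequest() *UpdateDomainRequest {
	activeCluster := r.ActiveClusterName
	return &UpdateDomainRequest{
		Name:              r.Name,
		ActiveClusterName: &activeCluster,
		// FailoverTimeoutInSeconds stays nil: FailoverDomain only does
		// force (non-graceful) failovers.
	}
}
```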
```go
FailoverType string `json:"failoverType,omitempty"`

// active-active domain failover
FromActiveClusters types.ActiveClusters `json:"fromActiveClusters,omitempty"`
```
Intentional removal; these are to be replaced by the FailoverHistory endpoint.
```diff
 }{
 	{
-		name: "Success case - global domain force failover via replication config",
+		name: "Success case - active/passive domain - global domain force failover - failing over from cluster A to cluster B",
```
Rewrote the test with different data to make it less confusing.
```diff
 DomainName: t.DomainName,
-DomainActiveClusterName: *t.DomainActiveClusterName,
+DomainActiveClusterName: t.GetDomainActiveClusterName(),
+ActiveClusters: FromActiveClusters(t.ActiveClusters),
```
Bug fix: these mappers were missing.
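To show the shape of the fix, a self-contained sketch with stand-in types; `IDLEvent`, `InternalEvent`, and the helper names are illustrative, not the PR's exact identifiers:

```go
package mapper

// Stand-in types so the sketch compiles; the real ones come from the
// generated IDL package and the internal types package.
type ActiveClusters struct {
	ActiveClustersByRegion map[string]string
}

type IDLEvent struct {
	DomainName              string
	DomainActiveClusterName *string
	ActiveClusters          *ActiveClusters
}

// GetDomainActiveClusterName is the nil-safe accessor used instead of
// dereferencing the optional field directly.
func (t *IDLEvent) GetDomainActiveClusterName() string {
	if t != nil && t.DomainActiveClusterName != nil {
		return *t.DomainActiveClusterName
	}
	return ""
}

type InternalEvent struct {
	DomainName              string
	DomainActiveClusterName string
	ActiveClusters          *ActiveClusters
}

// FromEvent shows the shape of the fix: the ActiveClusters field is carried
// across the mapping boundary instead of being silently dropped, and the
// optional active-cluster name is read via the nil-safe getter.
func FromEvent(t *IDLEvent) *InternalEvent {
	if t == nil {
		return nil
	}
	return &InternalEvent{
		DomainName:              t.DomainName,
		DomainActiveClusterName: t.GetDomainActiveClusterName(),
		ActiveClusters:          t.ActiveClusters,
	}
}
```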
```go
// Any parts of the system (domain-callbacks, domain cache etc) that watch for
// failover events and may trigger some action or cache invalidation will be watching
// the domain-level failver counter for changes. Therefore, by bumping it even for
// cases where it isn't changing, we ensure all these other subprocesses will
```
I still think we should find the subprocesses that depend on this and extend them to support cluster-attribute-level failover, instead of increasing the domain-level failover version for all failovers.
But I'm OK with the current state.
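A small sketch (with assumed names) of why the unconditional bump matters to those subprocesses:

```go
package domain

// Many subprocesses detect failovers by comparing the domain-level failover
// version of a cached entry against the freshly read one. Because
// cluster-attribute-level changes are invisible to that comparison, the
// handler bumps the domain-level version even when the domain's active
// cluster itself does not change.
type cachedDomainEntry struct {
	failoverVersion int64
}

func failoverObserved(cached cachedDomainEntry, latestFailoverVersion int64) bool {
	// Watchers keyed on this counter (domain cache invalidation, failover
	// callbacks, ...) only fire when it moves.
	return latestFailoverVersion > cached.failoverVersion
}
```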
What changed?
This is not a perfect refactor; I changed a few behaviours while I was here since I considered them low risk:
Why?
How did you test it?
Potential risks
Release notes
Documentation Changes