Question > What are the guarantees in terms of consistency between the PostgreSQL event store and the Kafka topic when using the Axon Kafka producer? #91
Comments
Any news on this topic? I am quite interested.
@nklmish, might you be able to shed some light on this situation?
Have a look here: while sending an event to Kafka, the current unit of work is taken into consideration, provided we have a transactional Kafka producer.
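(Editor's note: to make the mechanism being discussed concrete, here is a minimal sketch of what tying a transactional Kafka producer to the unit of work's phases can look like. This is an illustration, not the extension's actual code; the class name, topic, and `byte[]` payloads are assumptions.)

```java
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.axonframework.eventhandling.EventMessage;
import org.axonframework.messaging.unitofwork.CurrentUnitOfWork;

public class TransactionalKafkaSender {

    private final Producer<String, byte[]> producer; // must be configured with a transactional.id
    private final String topic;

    public TransactionalKafkaSender(Producer<String, byte[]> producer, String topic) {
        this.producer = producer;
        this.topic = topic;
        producer.initTransactions(); // required once, before the first beginTransaction()
    }

    // Assumes an active unit of work; CurrentUnitOfWork.get() throws otherwise.
    public void send(EventMessage<byte[]> event) {
        producer.beginTransaction();
        producer.send(new ProducerRecord<>(topic, event.getIdentifier(), event.getPayload()));
        // Tie the Kafka transaction to the unit of work's phases: commit the
        // Kafka transaction only once the unit of work has committed...
        CurrentUnitOfWork.get().afterCommit(uow -> producer.commitTransaction());
        // ...and abort it if the unit of work rolls back.
        CurrentUnitOfWork.get().onRollback(uow -> producer.abortTransaction());
    }
}
```

Note that even here, if `commitTransaction()` fails inside `afterCommit`, the event store commit has already happened; this is exactly the corner case discussed below.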
For me it doesn't guarantee anything. Reading the code just shows that the send is indeed part of the unit of work, but there are corner cases where you will be able to commit the Kafka transaction and not the Axon event store transaction. This will lead to messages in the Kafka topic which don't exist in the Axon event store.
I am not sure what sort of guarantees we are referring to here. Please note that the Kafka and PostgreSQL transactions are two distinct transactions, not a single (XA) transaction; Kafka does not support XA transactions. What we are doing is first executing the current unit of work and, after it has completed, publishing the event to Kafka. In case event publication fails we signal it by throwing an exception. @abuijze for visibility.
I am totally fine with what you said. I am referring to the consistency guarantees between the two stores (Kafka and PostgreSQL). Now consider the case where you commit to Kafka and then cannot commit to PostgreSQL: there is an inconsistency between the two stores. It is clear that you cannot have immediate consistency, but I would like to have eventual consistency when using your Kafka producer.
That's why we are executing the Kafka work only after the current unit of work is committed, i.e. if you are referring to two different apps (let's say A and B) where
We will work on improving the documentation and adding relevant examples. Thank you for creating the issue!
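(Editor's note: a schematic sketch of the ordering described above, and of the failure window that remains because the two commits are separate. `EventStoreTransaction`, the topic name, and the serialization are hypothetical stand-ins, not the extension's API.)

```java
import java.util.List;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PublishAfterCommit {

    // Hypothetical stand-in for the store-side transaction managed by the unit of work.
    interface EventStoreTransaction {
        void begin();
        void append(List<Object> events); // PostgreSQL insert
        void commit();
    }

    private final EventStoreTransaction storeTx;
    private final Producer<String, byte[]> producer;

    PublishAfterCommit(EventStoreTransaction storeTx, Producer<String, byte[]> producer) {
        this.storeTx = storeTx;
        this.producer = producer;
    }

    void handle(List<Object> events) {
        // Transaction 1: the unit of work commits the events to PostgreSQL.
        storeTx.begin();
        storeTx.append(events);
        storeTx.commit(); // events are now durable in the store

        // Transaction 2: an entirely separate Kafka transaction (no XA).
        try {
            producer.beginTransaction();
            for (Object event : events) {
                producer.send(new ProducerRecord<>("events", serialize(event)));
            }
            producer.commitTransaction();
        } catch (RuntimeException e) {
            producer.abortTransaction();
            // The events are in PostgreSQL but not in Kafka: the inconsistency
            // window discussed in this thread. Signalling it by throwing is all
            // that is left to do at this point.
            throw new IllegalStateException("Kafka publication failed after store commit", e);
        }
    }

    private byte[] serialize(Object event) {
        return event.toString().getBytes(); // placeholder serialization
    }
}
```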
@nklmish But then you may end up with events not committed to Kafka at all, losing events.
I am creating a read model from my aggregate and I use Kafka to forward events to my read model. I therefore need eventual consistency, or at least an at-least-once delivery guarantee, which does not seem to be the case with your current design.
@nklmish I was wondering whether implementing the KafkaPublisher as a TrackingEventProcessor would not be a better solution. It would give an at-least-once delivery guarantee, provided the Kafka transaction is committed before the one managing the TrackingToken. What do you think?
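(Editor's note: a rough sketch of that alternative, assuming Axon 4's annotation-based configuration. The processing-group name, topic, and serialization are made up for illustration, and a transactional producer would additionally need its commit inside the handler.)

```java
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventhandling.EventMessage;

@ProcessingGroup("kafkaForwarder")
public class KafkaForwardingHandler {

    private final Producer<String, byte[]> producer;
    private final String topic;

    public KafkaForwardingHandler(Producer<String, byte[]> producer, String topic) {
        this.producer = producer;
        this.topic = topic;
    }

    @EventHandler
    public void on(Object payload, EventMessage<?> message) {
        // Publish before returning: the tracking token only advances after this
        // handler completes, so a crash in between means the event is replayed
        // rather than lost (at-least-once delivery to Kafka).
        producer.send(new ProducerRecord<>(topic, message.getIdentifier(),
                payload.toString().getBytes())); // placeholder serialization
        producer.flush();
    }
}
```

The handler would then be assigned to a tracking processor, e.g. with something like `configurer.eventProcessing(ep -> ep.registerTrackingEventProcessor("kafkaForwarder"))` in Axon 4's configuration API. Duplicates on replay are the price of at-least-once, so the Kafka consumers would need to be idempotent.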
Hi @ghilainm,
@abuijze No problem. But I think that in this case the documentation must be improved to explain this 'limitation' in the guarantees offered by the Kafka publisher you provide. It should perhaps also explain the recommended approach depending on the guarantees needed.
I have a question for which I couldn't find a clear answer in the documentation:
What are the guarantees in terms of consistency between the PostgreSQL event store and the Kafka topic when using the Axon Kafka producer?
I would say that, as they occur in the same Unit of Work, it should be atomic. However, I am pretty sure it isn't, and when some race condition occurs an event could be published in PostgreSQL and not in Kafka, or the other way around. If this is true, what is the recommended approach to synchronise them? I don't see how you can rely on something which is not (eventually) consistent.
Could you please clarify the topic in the documentation?