Replies: 1 comment · 8 replies
-
Well, running multiple Kafka clusters in the same namespace is always a pain 🤷. I guess you can try to migrate the clusters one by one.
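For illustration, a minimal sketch of what the one-by-one approach might look like, driving the migration through the `strimzi.io/kraft` annotation (the cluster names follow the ones mentioned later in this thread; the spec is elided):

```yaml
# Sketch only: migrate one cluster at a time by annotating its Kafka resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka                       # first cluster; repeat for "kafka-suffix" afterwards
  annotations:
    strimzi.io/node-pools: enabled  # node pools must already be in use
    strimzi.io/kraft: migration     # switch to "enabled" once the migration has finished
spec: {}                            # existing cluster spec stays as-is (elided here)
```

The second cluster would only be annotated once the first one reports its migration as complete.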
-
It depends on what the error actually is. From your description, it is not really clear. You would need to share the configs, the full logs, an exact description of what you did, etc.
-
Sure! In my case I have 2 Kafka clusters, one called "kafka" and the second one called "kafka-suffix". I need to migrate them one by one because of this warning. So I started to migrate "kafka-suffix" from ZooKeeper to KRaft following this documentation, and these are the KafkaNodePool and Kafka resources I have after the migration:
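For readers following along, resources of roughly this shape would match the description (the pool and cluster names come from this thread; replica counts, storage, and version are assumptions, not the original manifests):

```yaml
# Hypothetical reconstruction, not the original manifests.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafka                        # existing pool
  labels:
    strimzi.io/cluster: kafka-suffix # the migrated cluster
spec:
  replicas: 3
  roles:
    - controller                     # KRaft: combined controller/broker nodes
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-suffix
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled        # cluster is KRaft-based after the migration
spec:
  kafka:
    version: 3.7.0                   # assumed version
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
  cruiseControl: {}                  # needed later for the KafkaRebalance
```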
This Kafka node pool will have the pod names kafka-0, kafka-1, kafka-2. After the migration, I wanted to reassign all the data by creating a new KafkaNodePool called "kafka-suffix":
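Again only as a sketch: the `strimzi.io/next-node-ids` annotation is how Strimzi can be asked to start the new pool at node ID 3 (everything else below is assumed):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafka-suffix
  labels:
    strimzi.io/cluster: kafka-suffix
  annotations:
    strimzi.io/next-node-ids: "[3-5]"  # request node IDs 3, 4 and 5
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```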
This Kafka node pool will have the pod names kafka-suffix-3, kafka-suffix-4, kafka-suffix-5. After every pod is green, I create and apply a KafkaRebalance resource:
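A KafkaRebalance that drains the old brokers before their removal might look roughly like this (the resource name is made up; the mode and broker IDs follow from the pod names above):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: remove-old-brokers   # hypothetical name
  labels:
    strimzi.io/cluster: kafka-suffix
spec:
  mode: remove-brokers       # move all partition replicas off the listed brokers
  brokers: [0, 1, 2]         # the nodes of the old "kafka" pool
```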
After I applied this KafkaRebalance, I waited until the status became like this:
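For a finished rebalance, the relevant part of the status looks something like this:

```yaml
status:
  conditions:
    - lastTransitionTime: "..."   # timestamp elided
      status: "True"
      type: Ready                 # the optimization proposal was executed successfully
```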
Now, after I receive the status above, I delete the "kafka" KafkaNodePool (including its PVCs), and I am left with only the "kafka-suffix" node pool, with kafka-suffix-3, kafka-suffix-4, kafka-suffix-5 as the pod names. The logs of the Kafka broker look something like this:
And these are the logs from the operator:
One more thing to mention: I tried this rebalance several times. The first time, I encountered this behavior and these errors. I tried again, and the second and third times it worked without any errors. The fourth time, it happened again. Please let me know if you need any other information.
-
I think you are mixing different things together? Migration to NodePools happens with ZooKeeper-based clusters. But the controller role suggests you are using KRaft. That makes things way more complicated, because Kafka does not support moving the controller role. So it is not clear how you got into this state. The original plan discussed here was about ZooKeeper-based clusters.
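The distinction lives in the `roles` field of the KafkaNodePool: a ZooKeeper-based cluster only has broker nodes, while KRaft adds the controller role, which Kafka cannot move between existing nodes (a sketch, not the poster's actual manifest):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a                    # hypothetical name
  labels:
    strimzi.io/cluster: kafka-suffix
spec:
  replicas: 3
  roles:
    - broker                      # the only role in a ZooKeeper-based cluster
    # - controller                # KRaft-only; cannot simply be moved to other nodes later
  storage:
    type: ephemeral
```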
-
I see now, yes, I was mixing things up. I was trying to use KafkaRebalance after I migrated to KRaft. Thank you very much for your help!
-
So, you can use KafkaRebalance in KRaft, but it would not help with controller scaling or changing the controller role. There might be ways to work around the Kafka limitations, but you would need to explain the exact situation and the steps you have taken. Possibly in a separate discussion, as it is unrelated to this one.
-
Hello!
I'm attempting to migrate an existing Kafka cluster to use a Kafka node pool (as a first step to enable the migration from ZooKeeper to KRaft). The Strimzi documentation on how to do this contains a warning about the naming of the node pool.
In my case, I have multiple Kafka clusters in the same k8s namespace, so I need each Kafka node pool to have a different metadata/name (currently, each of my Kafka clusters has a name following the pattern `<prefix>-kafka`). Now, the warning seems to imply that if the metadata/name of the Kafka node pool is not set to "kafka", I may lose data (it is not clear to me whether this is actually a "may" or a "will" lose data). Does this mean that there is no way to perform this migration in my case without losing the existing data?
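To make the constraint concrete: Kubernetes requires resource names to be unique per kind within a namespace, so two clusters cannot each own a KafkaNodePool literally named `kafka` in the same namespace (the manifest below is hypothetical):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafka                        # the name the migration warning asks for
  namespace: shared-namespace        # hypothetical namespace
  labels:
    strimzi.io/cluster: first-kafka  # hypothetical cluster following <prefix>-kafka
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
# A second KafkaNodePool also named "kafka" for the other cluster could not be
# created alongside this one: names must be unique per kind and namespace.
```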