Replaced bitnami kafka image with Apache Kafka #7762
Conversation
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
# Cluster ID for KRaft
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qg
what is this?
does it have to be a weird string, or can we make it readable like test_kafka_cluster_id?
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main    #7762      +/-   ##
==========================================
+ Coverage   95.49%   95.51%   +0.01%
==========================================
  Files         307      307
  Lines       15911    15911
==========================================
+ Hits        15195    15197       +2
+ Misses        562      561       -1
+ Partials      154      153       -1
Hi @yurishkuro, I just went through your comments. Are there any more changes you think need to be made?
Hey @yurishkuro, as mentioned earlier, CLUSTER_ID has to be a base64-encoded UUID of 22 characters; correct me if I'm wrong. Are any other changes needed, and should I proceed with the discussed changes?
what is the proof that it has to be base64 encoded and not just a plain string?
The main issue is that the CI workflow for Kafka doesn't pass.
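For context, a hedged sketch of how a KRaft cluster ID is typically pinned in a compose file; the format note in the comments is an assumption based on the output of Kafka's `kafka-storage.sh random-uuid` tool, not something confirmed in this thread:

```yaml
environment:
  # KRaft brokers need a stable cluster ID across restarts. The conventional
  # value is the URL-safe base64 encoding of 16 bytes (a UUID), which is why
  # it is always 22 characters with the trailing padding stripped; it can be
  # generated once (e.g. with `kafka-storage.sh random-uuid`) and committed.
  CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qg
```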
Signed-off-by: Somil Jain <[email protected]>
bf888b2 to c04ad3f (force-pushed)
Metrics Comparison Summary
Total changes across all snapshots: 7
Detailed changes per snapshot: summary_metrics_snapshot_kafka
📊 Metrics Diff Summary (Total Changes: 7)
🔄 Modified Metrics
| - "9094:9094" | ||
| - "9095:9095" | ||
| volumes: | ||
| - ../../../internal/config/tlscfg/testdata:/bitnami/kafka/config/certs |
so if we are not mapping certs, do we still need all the functionality in generate_jks_files() in kafka.sh?
@Parship12 what was the intent of introducing those in #7395? It seems our Kafka exporter config is pretty vanilla, without any authentication:
kafka:
  brokers:
    - ${env:KAFKA_BROKER:-localhost:9092}
  traces:
    topic: ${env:KAFKA_TOPIC:-jaeger-spans}
    encoding: ${env:KAFKA_ENCODING:-otlp_proto}
Yes, that's correct: those ports were added in #7395 to support testing different authentication methods. However, after looking at the actual tests, I noticed the following:
The infrastructure exists (ports 9092, 9093, and 9095 with PLAINTEXT, SASL_SSL, and SASL_PLAINTEXT, plus generate_jks_files()), but the actual integration tests in kafka_test.go only connect to localhost:9092 (PLAINTEXT); no tests actually use SASL_SSL (9093) or SASL_PLAINTEXT (9095), ignoring the usage in this PR (#7762).
The code fix from #7395 (supporting TLS with different auth methods) works correctly, but the test infrastructure to verify it was never actually used.
Should I raise a PR to simplify it, i.e. remove the unused ports 9093/9095, generate_jks_files(), the JKS/SSL/SASL configuration, and the certificate volume mounting, so it is reduced to PLAINTEXT Kafka on port 9092 only?
The ports are already being removed / simplified in this PR. I think it's ok for us to not test auth permutations, since we're just using the upstream exporter/receiver.
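For illustration only, a minimal sketch of what the simplified PLAINTEXT-only service could look like with the apache/kafka image in KRaft mode; the image tag, controller port, and values are assumptions, not the exact service definition merged in this PR:

```yaml
kafka:
  image: apache/kafka:latest          # tag is illustrative
  ports:
    - "9092:9092"                     # single PLAINTEXT listener used by the tests
  environment:
    KAFKA_NODE_ID: 1
    KAFKA_PROCESS_ROLES: broker,controller
    KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:29093
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
    KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
    KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:29093
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qg
```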
Signed-off-by: Somil Jain <[email protected]>
yurishkuro left a comment
Thanks!
Which problem is this PR solving?
#7731
Description of the changes
Below is a description of the changes:
a. Use apache/kafka and remove the Bitnami image.
b. Import the Kafka setup from v3/docker-compose.yml as suggested.
c. Remove ZooKeeper and use KRaft mode instead (see the sketch below).
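As a rough before/after sketch of point (c), not the literal diff in this PR: the Bitnami setup needed a separate ZooKeeper service, while KRaft mode lets a single apache/kafka container act as both broker and controller.

```yaml
# Before (Bitnami, illustrative): a second service just for coordination
# zookeeper:
#   image: bitnami/zookeeper
# kafka:
#   image: bitnami/kafka
#   depends_on: [zookeeper]
#   environment:
#     KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181

# After (KRaft, illustrative): one container, broker and controller in-process
kafka:
  image: apache/kafka
  environment:
    KAFKA_PROCESS_ROLES: broker,controller
```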
How was this change tested?
Through local testing
Checklist