Reorder remaining examples based on importance (#10588)
Signed-off-by: Jakub Scholz <[email protected]>
scholzj authored Sep 16, 2024
1 parent 14d2cff commit ff63bf5
Showing 4 changed files with 197 additions and 196 deletions.
52 changes: 26 additions & 26 deletions documentation/modules/configuring/con-config-kafka-bridge.adoc
@@ -27,43 +27,43 @@ spec:
   replicas: 3 # <1>
   # Kafka bootstrap servers (required)
   bootstrapServers: _<cluster_name>_-cluster-kafka-bootstrap:9092 # <2>
+  # HTTP configuration (required)
+  http: # <3>
+    port: 8080
+    # CORS configuration (optional)
+    cors: # <4>
+      allowedOrigins: "https://strimzi.io"
+      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
+  # Resources requests and limits (recommended)
+  resources: # <5>
+    requests:
+      cpu: "1"
+      memory: 2Gi
+    limits:
+      cpu: "2"
+      memory: 2Gi
   # TLS configuration (optional)
-  tls: # <3>
+  tls: # <6>
     trustedCertificates:
       - secretName: my-cluster-cluster-cert
        pattern: "*.crt"
       - secretName: my-cluster-cluster-cert
        certificate: ca2.crt
   # Authentication (optional)
-  authentication: # <4>
+  authentication: # <7>
     type: tls
     certificateAndKey:
       secretName: my-secret
       certificate: public.crt
       key: private.key
-  # HTTP configuration (required)
-  http: # <5>
-    port: 8080
-    # CORS configuration (optional)
-    cors: # <6>
-      allowedOrigins: "https://strimzi.io"
-      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
   # Consumer configuration (optional)
-  consumer: # <7>
+  consumer: # <8>
     config:
       auto.offset.reset: earliest
   # Producer configuration (optional)
-  producer: # <8>
+  producer: # <9>
     config:
       delivery.timeout.ms: 300000
-  # Resources requests and limits (recommended)
-  resources: # <9>
-    requests:
-      cpu: "1"
-      memory: 2Gi
-    limits:
-      cpu: "2"
-      memory: 2Gi
   # Logging configuration (optional)
   logging: # <10>
     type: inline
@@ -112,14 +112,14 @@ spec:
 ----
 <1> The number of replica nodes.
 <2> Bootstrap address for connection to the target Kafka cluster. The address takes the format `<cluster_name>-kafka-bootstrap:<port_number>`. The Kafka cluster doesn't need to be managed by Strimzi or deployed to a Kubernetes cluster.
-<3> TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
-<4> Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
+<3> HTTP access to Kafka brokers.
+<4> CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
+<5> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.
+<6> TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
+<7> Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
 By default, the Kafka Bridge connects to Kafka brokers without authentication.
-<5> HTTP access to Kafka brokers.
-<6> CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
-<7> Consumer configuration options.
-<8> Producer configuration options.
-<9> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.
+<8> Consumer configuration options.
+<9> Producer configuration options.
 <10> Specified Kafka Bridge loggers and log levels added directly (`inline`) or indirectly (`external`) through a ConfigMap. A custom Log4j configuration must be placed under the `log4j.properties` or `log4j2.properties` key in the ConfigMap. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
 <11> JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge.
 <12> Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
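Read as a whole rather than as a diff, the reordered `KafkaBridge` example resolves to the following condensed sketch, assembled from the hunks above. The resource name `my-bridge` and the `apiVersion` are assumed from common Strimzi conventions, and the trailing JVM, healthcheck, and template sections are omitted:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge # assumed name, not part of the diff
spec:
  replicas: 3
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
    cors:
      allowedOrigins: "https://strimzi.io"
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        pattern: "*.crt"
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  consumer:
    config:
      auto.offset.reset: earliest
  producer:
    config:
      delivery.timeout.ms: 300000
  logging:
    type: inline
```

This makes the intent of the commit visible: required and recommended sections (`replicas`, `bootstrapServers`, `http`, `resources`) now precede the optional security and client tuning sections.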
80 changes: 40 additions & 40 deletions documentation/modules/configuring/con-config-kafka-connect.adoc
@@ -39,24 +39,10 @@ metadata:
 spec:
   # Replicas (required)
   replicas: 3 # <3>
-  # Authentication (optional)
-  authentication: # <4>
-    type: tls
-    certificateAndKey:
-      certificate: source.crt
-      key: source.key
-      secretName: my-user-source
   # Bootstrap servers (required)
-  bootstrapServers: my-cluster-kafka-bootstrap:9092 # <5>
-  # TLS configuration (optional)
-  tls: # <6>
-    trustedCertificates:
-      - secretName: my-cluster-cluster-cert
-        pattern: "*.crt"
-      - secretName: my-cluster-cluster-cert
-        pattern: "*.crt"
+  bootstrapServers: my-cluster-kafka-bootstrap:9092 # <4>
   # Kafka Connect configuration (recommended)
-  config: # <7>
+  config: # <5>
     group.id: my-connect-cluster
     offset.storage.topic: my-connect-cluster-offsets
     config.storage.topic: my-connect-cluster-configs
@@ -68,13 +54,35 @@ spec:
     config.storage.replication.factor: 3
     offset.storage.replication.factor: 3
     status.storage.replication.factor: 3
+  # Resources requests and limits (recommended)
+  resources: # <6>
+    requests:
+      cpu: "1"
+      memory: 2Gi
+    limits:
+      cpu: "2"
+      memory: 2Gi
+  # Authentication (optional)
+  authentication: # <7>
+    type: tls
+    certificateAndKey:
+      certificate: source.crt
+      key: source.key
+      secretName: my-user-source
+  # TLS configuration (optional)
+  tls: # <8>
+    trustedCertificates:
+      - secretName: my-cluster-cluster-cert
+        pattern: "*.crt"
+      - secretName: my-cluster-cluster-cert
+        pattern: "*.crt"
   # Build configuration (optional)
-  build: # <8>
-    output: # <9>
+  build: # <9>
+    output: # <10>
       type: docker
       image: my-registry.io/my-org/my-connect-cluster:latest
       pushSecret: my-registry-credentials
-    plugins: # <10>
+    plugins: # <11>
       - name: connector-1
         artifacts:
           - type: tgz
@@ -86,7 +94,7 @@ spec:
             url: <url_to_download_connector_2_artifact>
             sha512sum: <SHA-512_checksum_of_connector_2_artifact>
   # External configuration (optional)
-  externalConfiguration: # <11>
+  externalConfiguration: # <12>
     env:
       - name: AWS_ACCESS_KEY_ID
         valueFrom:
@@ -98,14 +106,6 @@ spec:
           secretKeyRef:
             name: aws-creds
             key: awsSecretAccessKey
-  # Resources requests and limits (recommended)
-  resources: # <12>
-    requests:
-      cpu: "1"
-      memory: 2Gi
-    limits:
-      cpu: "2"
-      memory: 2Gi
   # Logging configuration (optional)
   logging: # <13>
     type: inline
@@ -162,21 +162,21 @@ spec:
 <1> Use `KafkaConnect`.
 <2> Enables the use of `KafkaConnector` resources to start, stop, and manage connector instances.
 <3> The number of replica nodes for the workers that run tasks.
-<4> Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
-By default, Kafka Connect connects to Kafka brokers using a plain text connection.
-<5> Bootstrap address for connection to the Kafka cluster. The address takes the format `<cluster_name>-kafka-bootstrap:<port_number>`. The Kafka cluster doesn't need to be managed by Strimzi or deployed to a Kubernetes cluster.
-<6> TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
-<7> Kafka Connect configuration of workers (not connectors) that run connectors and their tasks.
+<4> Bootstrap address for connection to the Kafka cluster. The address takes the format `<cluster_name>-kafka-bootstrap:<port_number>`. The Kafka cluster doesn't need to be managed by Strimzi or deployed to a Kubernetes cluster.
+<5> Kafka Connect configuration of workers (not connectors) that run connectors and their tasks.
 Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Strimzi.
-In this example, JSON convertors are specified.
-A replication factor of 3 is set for the internal topics used by Kafka Connect (minimum requirement for production environment).
+In this example, JSON convertors are specified.
+A replication factor of 3 is set for the internal topics used by Kafka Connect (minimum requirement for production environment).
 Changing the replication factor after the topics have been created has no effect.
-<8> Build configuration properties for building a container image with connector plugins automatically.
-<9> (Required) Configuration of the container registry where new images are pushed.
-<10> (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one `artifact`.
-<11> External configuration for connectors using environment variables, as shown here, or volumes.
+<6> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.
+<7> Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
+By default, Kafka Connect connects to Kafka brokers using a plain text connection.
+<8> TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
+<9> Build configuration properties for building a container image with connector plugins automatically.
+<10> (Required) Configuration of the container registry where new images are pushed.
+<11> (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one `artifact`.
+<12> External configuration for connectors using environment variables, as shown here, or volumes.
 You can also use configuration provider plugins to load configuration values from external sources.
-<12> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.
 <13> Specified Kafka Connect loggers and log levels added directly (`inline`) or indirectly (`external`) through a ConfigMap. A custom Log4j configuration must be placed under the `log4j.properties` or `log4j2.properties` key in the ConfigMap. For the Kafka Connect `log4j.rootLogger` logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
 <14> Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
 <15> Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under `metricsConfig.valueFrom.configMapKeyRef.key`.
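Likewise, the reordered `KafkaConnect` example resolves to the sketch below. The `metadata` block (the name and the `strimzi.io/use-connector-resources` annotation implied by callout <2>) is assumed from Strimzi conventions rather than shown in the diff, and the second connector plugin plus the logging, healthcheck, and metrics tail are omitted:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster # assumed name, not part of the diff
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 3
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  authentication:
    type: tls
    certificateAndKey:
      certificate: source.crt
      key: source.key
      secretName: my-user-source
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        pattern: "*.crt"
  build:
    output:
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: connector-1
        artifacts:
          - type: tgz
            url: <url_to_download_connector_1_artifact>
            sha512sum: <SHA-512_checksum_of_connector_1_artifact>
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
```

As with the bridge example, the commit moves the sections a first-time reader needs (`bootstrapServers`, `config`, `resources`) ahead of optional security, build, and external-configuration blocks.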