[Merged by Bors] - Docs new landing page #573
Commits (11)
- 816ebe9 refactored usage guide
- e9f4b2b Added diagram
- 03a85fb some update
- 1e4440d some text changes
- 822bb39 updated changelog
- 6f91760 Update docs/modules/kafka/pages/index.adoc (fhennig)
- 92309fb Update docs/modules/kafka/pages/index.adoc (fhennig)
- 8437547 Update docs/modules/kafka/pages/index.adoc (fhennig)
- 2278ffe Update docs/modules/kafka/pages/index.adoc (fhennig)
- 2dc8d8a Update docs/modules/kafka/pages/index.adoc (fhennig)
- 403dcc0 Update docs/modules/kafka/pages/usage-guide/storage-resources.adoc (fhennig)

File: docs/modules/kafka/pages/index.adoc

= Stackable Operator for Apache Kafka
:description: The Stackable Operator for Apache Kafka is a Kubernetes operator that can manage Apache Kafka clusters. Learn about its features, resources, dependencies and demos, and see the list of supported Kafka versions.
:keywords: Stackable Operator, Apache Kafka, Kubernetes, operator, SQL, engineer, broker, big data, CRD, StatefulSet, ConfigMap, Service, Druid, ZooKeeper, NiFi, S3, demo, version

The Stackable Operator for Apache Kafka is an operator that can deploy and manage https://kafka.apache.org/[Apache Kafka] clusters on Kubernetes.
// what is Kafka?
Apache Kafka is a distributed streaming platform designed to handle large volumes of data in real time. It is commonly used for real-time data processing, data ingestion, event streaming, and messaging between applications.

== Getting started

Follow the xref:kafka:getting_started/index.adoc[] guide, which walks you through installing the Stackable Kafka and ZooKeeper operators, setting up ZooKeeper and Kafka, and testing your Kafka cluster using kcat.

== Resources

The _KafkaCluster_ custom resource defines all of your Kafka cluster configuration. It defines a single `broker` xref:concepts:roles-and-role-groups.adoc[role].

image::kafka_overview.drawio.svg[A diagram depicting the Kubernetes resources created by the operator.]

For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in the `broker` role, the Operator creates a StatefulSet. Multiple Services are created as well: one at role level, one per role group, and one for every individual Pod, allowing access to the whole Kafka cluster, parts of it, or even individual brokers.
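
To make this concrete, here is a minimal sketch of a KafkaCluster definition with a single `default` role group; the cluster name, version numbers and the znode ConfigMap name are placeholders borrowed from the security examples further down, not required values:

[source,yaml]
----
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka # placeholder cluster name
spec:
  image:
    productVersion: 3.3.1
    stackableVersion: "23.4.0-rc2"
  clusterConfig:
    zookeeperConfigMapName: simple-kafka-znode # discovery ConfigMap of a ZooKeeper znode
  brokers:
    roleGroups:
      default: # this role group yields one StatefulSet with three Pods
        replicas: 3
----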

For every StatefulSet (role group), a ConfigMap is deployed containing a `log4j.properties` file for xref:usage-guide/logging.adoc[logging] configuration and a `server.properties` file containing the whole Kafka configuration, which is derived from the KafkaCluster resource.

The Operator creates a xref:concepts:service_discovery.adoc[] for the whole KafkaCluster which references the Service for the whole cluster. Other operators use this ConfigMap to connect to a Kafka cluster simply by name, and it can also be used by custom third-party applications to find the endpoint to connect to Kafka.
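
For illustration only, such a discovery ConfigMap might look roughly like the sketch below; the `KAFKA` key and the bootstrap address format are assumptions here, not something this page guarantees:

[source,yaml]
----
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-kafka # same name as the KafkaCluster it describes
data:
  # assumed key and address format; see the service discovery page for the authoritative layout
  KAFKA: simple-kafka-broker-default-0.simple-kafka-broker-default.default.svc.cluster.local:9092
----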

== Dependencies

Kafka requires xref:zookeeper:index.adoc[Apache ZooKeeper] for coordination purposes (although this will not be needed in the future, as ZooKeeper is set to be replaced with a https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum[built-in solution]).

== Connections to other products

Since Kafka often takes on a bridging role, many other products connect to it. In the <<demos, demos>> below you will find example data pipelines that use xref:nifi:index.adoc[Apache NiFi with the Stackable Operator] to write to Kafka and xref:druid:index.adoc[Apache Druid with the Stackable Operator] to read from Kafka. But you can also connect xref:spark-k8s:index.adoc[Apache Spark] or custom Jobs written in various languages to it.

== [[demos]]Demos

xref:stackablectl::index.adoc[] supports installing xref:stackablectl::demos/index.adoc[] with a single command. The demos are complete data pipelines which showcase multiple components of the Stackable platform working together and which you can try out interactively. Both demos below inject data into Kafka using NiFi and read from the Kafka topics using Druid.
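
For example, installing one of the demos should be a one-liner along these lines (the exact subcommand syntax may vary between stackablectl releases):

[source,bash]
----
stackablectl demo install nifi-kafka-druid-earthquake-data
----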

=== Waterlevel Demo

The xref:stackablectl::demos/nifi-kafka-druid-water-level-data.adoc[] demo uses data from https://www.pegelonline.wsv.de/webservice/ueberblick[PEGELONLINE] to visualize water levels in rivers and coastal regions of Germany from historic and real-time data.

=== Earthquake Demo

The xref:stackablectl::demos/nifi-kafka-druid-earthquake-data.adoc[] demo ingests https://earthquake.usgs.gov/[earthquake data] into a pipeline similar to the one used in the Waterlevel demo.

WARNING: This operator only works with images from the https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fkafka[Stackable] repository.

== Supported Versions

The Stackable Operator for Apache Kafka currently supports the following versions of Kafka:

include::partial$supported-versions.adoc[]

== Getting the Docker image

[source]
----
docker pull docker.stackable.tech/stackable/kafka:<version>
----

File: docs/modules/kafka/pages/usage-guide/configuration-environment-overrides.adoc (new file, 63 lines)

= Configuration & Environment Overrides

The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) takes precedence over the less specific one (role).

IMPORTANT: Overriding certain properties which are set by the operator (such as the ports) can interfere with the operator and lead to problems.

== Configuration Properties

For a role or role group, at the same level as `config`, you can specify `configOverrides` for the `server.properties` file. For example, if you want to set `auto.create.topics.enable` to disable automatic topic creation, it can be configured in the `KafkaCluster` resource like so:

[source,yaml]
----
brokers:
  roleGroups:
    default:
      configOverrides:
        server.properties:
          auto.create.topics.enable: "false"
      replicas: 1
----

Just as for the `config`, it is possible to specify this at role level as well:

[source,yaml]
----
brokers:
  configOverrides:
    server.properties:
      auto.create.topics.enable: "false"
  roleGroups:
    default:
      replicas: 1
----

All override property values must be strings.

For a full list of configuration options, refer to the Apache Kafka https://kafka.apache.org/documentation/#configuration[Configuration Reference].
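
To verify that an override made it into the rendered configuration, you can inspect the generated ConfigMap. The `<cluster-name>-broker-<role-group>` name used below is an assumption based on the resource layout described on the landing page, not a documented contract:

[source,bash]
----
# assumed ConfigMap name format: <cluster-name>-broker-<role-group>
kubectl get configmap simple-kafka-broker-default \
  -o jsonpath='{.data.server\.properties}' | grep auto.create.topics.enable
----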

== Environment Variables

In a similar fashion, environment variables can be (over)written. For example, per role group:

[source,yaml]
----
brokers:
  roleGroups:
    default:
      envOverrides:
        MY_ENV_VAR: "MY_VALUE"
      replicas: 1
----

or per role:

[source,yaml]
----
brokers:
  envOverrides:
    MY_ENV_VAR: "MY_VALUE"
  roleGroups:
    default:
      replicas: 1
----

File: docs/modules/kafka/pages/usage-guide/index.adoc (new file, 2 lines)

= Usage guide
:page-aliases: usage.adoc

File: docs/modules/kafka/pages/usage-guide/logging.adoc (new file, 18 lines)

= Log aggregation

The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:

[source,yaml]
----
spec:
  clusterConfig:
    vectorAggregatorConfigMapName: vector-aggregator-discovery
  brokers:
    config:
      logging:
        enableVectorAgent: true
----

Further information on how to configure logging can be found in xref:home:concepts:logging.adoc[].

File: docs/modules/kafka/pages/usage-guide/monitoring.adoc (new file, 4 lines)

= Monitoring

The managed Kafka instances are automatically configured to export Prometheus metrics. See xref:home:operators:monitoring.adoc[] for more details.
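
As a sketch of how these metrics could be collected with the Prometheus Operator, assuming the Stackable-managed Services carry a `prometheus.io/scrape: "true"` label and a named `metrics` port (both are assumptions, not statements from this page):

[source,yaml]
----
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: scrape-stackable # hypothetical name
spec:
  endpoints:
    - port: metrics # assumed name of the metrics port on the Services
  selector:
    matchLabels:
      prometheus.io/scrape: "true" # assumed label on scrapeable Services
  namespaceSelector:
    any: true
----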

File: docs/modules/kafka/pages/usage-guide/pod-placement.adoc (new file, 22 lines)

= Pod Placement

You can configure Pod placement for Kafka brokers as described in xref:concepts:pod_placement.adoc[].

By default, the operator configures the following Pod placement constraints:

[source,yaml]
----
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: broker
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: kafka
          topologyKey: kubernetes.io/hostname
        weight: 70
----

In the example above, `cluster-name` is the name of the Kafka custom resource that owns this Pod.
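
Overriding the default constraint might look like the sketch below, which turns the soft anti-affinity into a hard requirement; the exact placement of `affinity` under the role `config` is an assumption based on the linked concepts page, not something stated here:

[source,yaml]
----
brokers:
  config:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution: # hard rule instead of the default soft rule
          - labelSelector:
              matchLabels:
                app.kubernetes.io/component: broker
                app.kubernetes.io/instance: simple-kafka # placeholder cluster name
                app.kubernetes.io/name: kafka
            topologyKey: kubernetes.io/hostname
  roleGroups:
    default:
      replicas: 3
----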

File: docs/modules/kafka/pages/usage-guide/security.adoc (new file, 161 lines)

= Security

== Encryption

The internal and client communication can be encrypted using TLS. This requires the xref:secret-operator::index.adoc[Secret Operator] to be present in order to provide certificates. The certificates used can be changed in a top-level configuration.

[source,yaml]
----
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: 3.3.1
    stackableVersion: "23.4.0-rc2"
  clusterConfig:
    zookeeperConfigMapName: simple-kafka-znode
    tls:
      serverSecretClass: tls # <1>
      internalSecretClass: kafka-internal-tls # <2>
  brokers:
    roleGroups:
      default:
        replicas: 3
----
<1> The `spec.clusterConfig.tls.serverSecretClass` refers to the client-to-server encryption. Defaults to the `tls` SecretClass. Can be deactivated by setting `serverSecretClass` to `null`.
<2> The `spec.clusterConfig.tls.internalSecretClass` refers to the broker-to-broker internal encryption. This must be explicitly set or defaults to `tls`. May be disabled by setting `internalSecretClass` to `null`.
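
For instance, per callout <1>, client-to-server encryption could be switched off with a fragment like this sketch:

[source,yaml]
----
spec:
  clusterConfig:
    tls:
      serverSecretClass: null # deactivates client-to-server TLS, as described in callout <1>
----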

The `tls` SecretClass is deployed by the xref:secret-operator::index.adoc[Secret Operator] and looks like this:

[source,yaml]
----
---
apiVersion: secrets.stackable.tech/v1alpha1
kind: SecretClass
metadata:
  name: tls
spec:
  backend:
    autoTls:
      ca:
        secret:
          name: secret-provisioner-tls-ca
          namespace: default
        autoGenerate: true
----

You can create your own SecretClasses and reference them, e.g. in `spec.clusterConfig.tls.serverSecretClass` or `spec.clusterConfig.tls.internalSecretClass`, to use different certificates.

== Authentication

The internal or broker-to-broker communication is authenticated via TLS. In order to enforce TLS authentication for client-to-server communication, you can set a reference to an `AuthenticationClass` (a resource provided by the xref:commons-operator::index.adoc[Commons Operator]) in the custom resource.

[source,yaml]
----
---
apiVersion: authentication.stackable.tech/v1alpha1
kind: AuthenticationClass
metadata:
  name: kafka-client-tls # <2>
spec:
  provider:
    tls:
      clientCertSecretClass: kafka-client-auth-secret # <3>
---
apiVersion: secrets.stackable.tech/v1alpha1
kind: SecretClass
metadata:
  name: kafka-client-auth-secret # <4>
spec:
  backend:
    autoTls:
      ca:
        secret:
          name: secret-provisioner-tls-kafka-client-ca
          namespace: default
        autoGenerate: true
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: 3.3.1
    stackableVersion: "23.4.0-rc2"
  clusterConfig:
    authentication:
      - authenticationClass: kafka-client-tls # <1>
    zookeeperConfigMapName: simple-kafka-znode
  brokers:
    roleGroups:
      default:
        replicas: 3
----
<1> The `clusterConfig.authentication.authenticationClass` can be set to use TLS for authentication. This is optional.
<2> The referenced `AuthenticationClass` that in turn references a `SecretClass` to provide certificates.
<3> The reference to a `SecretClass`.
<4> The `SecretClass` that is referenced by the `AuthenticationClass` in order to provide certificates.

== [[authorization]]Authorization

If you wish to include integration with xref:opa::index.adoc[Open Policy Agent] and already have an OPA cluster, you can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package. The package is optional and defaults to the `metadata.name` field:

[source,yaml]
----
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: 3.3.1
    stackableVersion: "23.4.0-rc2"
  clusterConfig:
    authorization:
      opa:
        configMapName: simple-opa
        package: kafka
    zookeeperConfigMapName: simple-kafka-znode
  brokers:
    roleGroups:
      default:
        replicas: 1
----

You can change some OPA cache properties by overriding them:

[source,yaml]
----
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: 3.3.1
    stackableVersion: "23.4.0-rc2"
  clusterConfig:
    authorization:
      opa:
        configMapName: simple-opa
        package: kafka
    zookeeperConfigMapName: simple-kafka-znode
  brokers:
    configOverrides:
      server.properties:
        opa.authorizer.cache.initial.capacity: "100"
        opa.authorizer.cache.maximum.size: "100"
        opa.authorizer.cache.expire.after.seconds: "10"
    roleGroups:
      default:
        replicas: 1
----

A full list of settings and their respective defaults can be found in the https://github.com/anderseknert/opa-kafka-plugin[opa-kafka-plugin documentation].