
Commit 8533c43

KAFKA-18876 4.0 documentation improvement (apache#19065)
1. Add the "config/" prefix to the properties files used in the commands.
2. Add the missing sections (6.11 and 6.12).
3. Fix some incorrect commands.

Reviewers: David Jacot <[email protected]>, Ken Huang <[email protected]>, TengYao Chi <[email protected]>, Jun Rao <[email protected]>, Chia-Ping Tsai <[email protected]>
1 parent: e4ece37

2 files changed: +17, -15 lines

docs/ops.html

Lines changed: 14 additions & 14 deletions
@@ -1343,13 +1343,13 @@ <h4 class="anchor-heading"><a id="replace_disk" class="anchor-link"></a><a href=

<p>Check and wait until the <code>Lag</code> is small for a majority of the controllers. If the leader's end offset is not increasing, you can wait until the lag is 0 for a majority; otherwise, you can pick the latest leader end offset and wait until all replicas have reached it. Check and wait until the <code>LastFetchTimestamp</code> and <code>LastCaughtUpTimestamp</code> are close to each other for the majority of the controllers. At this point it is safer to format the controller's metadata log directory. This can be done by running the kafka-storage.sh command.</p>
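For context, the Lag, LastFetchTimestamp and LastCaughtUpTimestamp values referred to above come from the quorum replication status. A minimal check could look like the following sketch; the broker host and port are assumptions, not part of this diff:

  # Illustrative sketch: prints per-replica lag and fetch/caught-up timestamps
  $ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication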

-<pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id uuid --config server_properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id uuid --config config/server.properties</code></pre>

<p>It is possible for the <code>bin/kafka-storage.sh format</code> command above to fail with a message like <code>Log directory ... is already formatted</code>. This can happen when combined mode is used and only the metadata log directory was lost but not the others. In that case, and only in that case, you can run the <code>bin/kafka-storage.sh format</code> command with the <code>--ignore-formatted</code> option.</p>
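In that recovery scenario, the same format invocation simply gains the extra flag; for example:

  $ bin/kafka-storage.sh format --cluster-id uuid --config config/server.properties --ignore-formatted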

<p>Start the KRaft controller after formatting the log directories.</p>

-<pre><code class="language-bash">$ bin/kafka-server-start.sh server_properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-server-start.sh config/server.properties</code></pre>

<h3 class="anchor-heading"><a id="monitoring" class="anchor-link"></a><a href="#monitoring">6.7 Monitoring</a></h3>

@@ -3827,22 +3827,22 @@ <h4 class="anchor-heading"><a id="kraft_storage" class="anchor-link"></a><a href
<h5 class="anchor-heading"><a id="kraft_storage_standalone" class="anchor-link"></a><a href="#kraft_storage_standalone">Bootstrap a Standalone Controller</a></h5>
The recommended method for creating a new KRaft controller cluster is to bootstrap it with one voter and dynamically <a href="#kraft_reconfig_add">add the rest of the controllers</a>. Bootstrapping the first controller can be done with the following CLI command:

-<pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id &lt;cluster-id&gt; --standalone --config controller.properties</code></pre>
+<pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id &lt;CLUSTER_ID&gt; --standalone --config config/controller.properties</code></pre>

This command will 1) create a meta.properties file in metadata.log.dir with a randomly generated directory.id, 2) create a snapshot at 00000000000000000000-0000000000.checkpoint with the necessary control records (KRaftVersionRecord and VotersRecord) to make this Kafka node the only voter for the quorum.
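As an illustration (not taken from the diff), the generated meta.properties can be inspected after formatting; the path below is a placeholder for the configured metadata.log.dir and the values are examples only:

  $ cat /path/to/metadata.log.dir/meta.properties
  # Example fields only; directory.id is the randomly generated value mentioned above.
  version=1
  cluster.id=<CLUSTER_ID>
  node.id=1
  directory.id=<RANDOM_UUID>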

<h5 class="anchor-heading"><a id="kraft_storage_voters" class="anchor-link"></a><a href="#kraft_storage_voters">Bootstrap with Multiple Controllers</a></h5>
The KRaft cluster metadata partition can also be bootstrapped with more than one voter. This can be done by using the --initial-controllers flag:

-<pre><code class="language-bash">cluster-id=$(bin/kafka-storage.sh random-uuid)
-controller-0-uuid=$(bin/kafka-storage.sh random-uuid)
-controller-1-uuid=$(bin/kafka-storage.sh random-uuid)
-controller-2-uuid=$(bin/kafka-storage.sh random-uuid)
+<pre><code class="language-bash">CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
+CONTROLLER_0_UUID="$(bin/kafka-storage.sh random-uuid)"
+CONTROLLER_1_UUID="$(bin/kafka-storage.sh random-uuid)"
+CONTROLLER_2_UUID="$(bin/kafka-storage.sh random-uuid)"

# In each controller execute
-bin/kafka-storage.sh format --cluster-id ${cluster-id} \
-  --initial-controllers "0@controller-0:1234:${controller-0-uuid},1@controller-1:1234:${controller-1-uuid},2@controller-2:1234:${controller-2-uuid}" \
-  --config controller.properties</code></pre>
+bin/kafka-storage.sh format --cluster-id ${CLUSTER_ID} \
+  --initial-controllers "0@controller-0:1234:${CONTROLLER_0_UUID},1@controller-1:1234:${CONTROLLER_1_UUID},2@controller-2:1234:${CONTROLLER_2_UUID}" \
+  --config config/controller.properties</code></pre>

This command is similar to the standalone version but the snapshot at 00000000000000000000-0000000000.checkpoint will instead contain a VotersRecord that includes information for all of the controllers specified in --initial-controllers. It is important that the value of this flag is the same in all of the controllers with the same cluster id.
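Once the controllers are started, one way to sanity-check that all three voters ended up in the bootstrap snapshot is to query the quorum status. A sketch, reusing the controller-0:1234 endpoint from the example above:

  # Illustrative sketch; the output includes the current voter set among other quorum details.
  $ bin/kafka-metadata-quorum.sh --bootstrap-controller controller-0:1234 describe --status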

@@ -3851,7 +3851,7 @@ <h5 class="anchor-heading"><a id="kraft_storage_voters" class="anchor-link"></a>
<h5 class="anchor-heading"><a id="kraft_storage_observers" class="anchor-link"></a><a href="#kraft_storage_observers">Formatting Brokers and New Controllers</a></h5>
When provisioning new broker and controller nodes that we want to add to an existing Kafka cluster, use the <code>kafka-storage.sh format</code> command with the --no-initial-controllers flag.

-<pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id &lt;cluster-id&gt; --config server.properties --no-initial-controllers</code></pre>
+<pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id &lt;CLUSTER_ID&gt; --config config/server.properties --no-initial-controllers</code></pre>

<h4 class="anchor-heading"><a id="kraft_reconfig" class="anchor-link"></a><a href="#kraft_reconfig">Controller membership changes</a></h4>

@@ -3917,10 +3917,10 @@ <h5 class="anchor-heading"><a id="kraft_reconfig_add" class="anchor-link"></a><a
After starting the controller, the replication to the new controller can be monitored using the <code>bin/kafka-metadata-quorum.sh describe --replication</code> command. Once the new controller has caught up to the active controller, it can be added to the cluster using the <code>bin/kafka-metadata-quorum.sh add-controller</code> command.

When using broker endpoints use the --bootstrap-server flag:
-<pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh --command-config controller.properties --bootstrap-server localhost:9092 add-controller</code></pre>
+<pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh --command-config config/controller.properties --bootstrap-server localhost:9092 add-controller</code></pre>

When using controller endpoints use the --bootstrap-controller flag:
-<pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh --command-config controller.properties --bootstrap-controller localhost:9092 add-controller</code></pre>
+<pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh --command-config config/controller.properties --bootstrap-controller localhost:9093 add-controller</code></pre>

<h5 class="anchor-heading"><a id="kraft_reconfig_remove" class="anchor-link"></a><a href="#kraft_reconfig_remove">Remove Controller</a></h5>
If the dynamic controller cluster already exists, it can be shrunk using the <code>bin/kafka-metadata-quorum.sh remove-controller</code> command. Until KIP-996: Pre-vote has been implemented and released, it is recommended to shut down the controller that will be removed before running the remove-controller command.
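For reference, a removal could look like the following sketch; the controller id and directory id below are placeholders, with the directory id available from the controller's meta.properties or from the describe --replication output:

  # Illustrative sketch, not part of this diff
  $ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 remove-controller \
      --controller-id 3 \
      --controller-directory-id <DIRECTORY_ID>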
@@ -4187,7 +4187,7 @@ <h4 class="anchor-heading"><a id="consumer_rebalance_protocol_server" class="anc
<p>The assignment strategy is also controlled by the server. The <code>group.consumer.assignors</code> configuration can be used to specify the list of available
assignors for <code>Consumer</code> groups. By default, the <code>uniform</code> assignor and the <code>range</code> assignor are configured. The first assignor
in the list is used by default unless the Consumer selects a different one. It is also possible to implement custom assignment strategies on the server side
-by implementing the <code>org.apache.kafka.coordinator.group.api.assignor.ConsumerGroupPartitionAssignor</code> interface and specifying the full class name in the configuration.</p>
+by implementing the <code>ConsumerGroupPartitionAssignor</code> interface and specifying the full class name in the configuration.</p>
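For illustration only, a custom assignor could be registered through this configuration; com.example.MyRackAwareAssignor below is a hypothetical class implementing the interface mentioned above, and the first assignor in the list becomes the default for Consumer groups:

  # config/server.properties -- illustrative sketch, not part of this commit's changes
  group.consumer.assignors=com.example.MyRackAwareAssignor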

<h4 class="anchor-heading"><a id="consumer_rebalance_protocol_consumer" class="anchor-link"></a><a href="#consumer_rebalance_protocol_consumer">Consumer</a></h4>

docs/toc.html

Lines changed: 3 additions & 1 deletion
@@ -174,7 +174,9 @@
<li><a href="#tiered_storage_config_ex">Quick Start Example</a>
<li><a href="#tiered_storage_limitation">Limitations</a>
</ul>
-
+<li><a href="#consumer_rebalance_protocol">6.10 Consumer Rebalance Protocol</a>
+<li><a href="#transaction_protocol">6.11 Transaction Protocol</a>
+<li><a href="#eligible_leader_replicas">6.12 Eligible Leader Replicas</a>
</ul>

<li><a href="#security">7. Security</a>
