Commit 408c4dc

add
1 parent c51f714 commit 408c4dc

4 files changed: +10 -8 lines changed

tencentcloud/services/tke/resource_tc_kubernetes_node_pool.go

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

tencentcloud/services/tke/resource_tc_kubernetes_node_pool.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ Provide a resource to create an auto scaling group for kubernetes cluster.
 
 ~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance was destroyed due to shrinking, it will keep the cbs associate with cvm by default. If you want to destroy together, please set `delete_with_instance` to `true`.
 
-~> **NOTE:** There are two parameters `wait_node_ready` and `scale_tolerance` to ensure better management of node pool scaling operations. If this parameter is set, when creating resources, if the set criteria are not met, the resources will be marked as `tainted`.
+~> **NOTE:** There are two parameters `wait_node_ready` and `scale_tolerance` to ensure better management of node pool scaling operations. If this parameter is set when creating a resource, the resource will be marked as `tainted` if the set conditions are not met.
 
 Example Usage
 
@@ -145,7 +145,7 @@ resource "tencentcloud_kubernetes_node_pool" "example" {
 }
 ```
 
-Set `wait_node_ready` and `scale_tolerance`
+Wait for all scaling nodes to be ready with wait_node_ready and scale_tolerance parameters.
 
 ```hcl
 resource "tencentcloud_kubernetes_node_pool" "example" {

tencentcloud/services/tke/resource_tc_kubernetes_node_pool_extension.go

Lines changed: 4 additions & 2 deletions
@@ -1438,7 +1438,7 @@ func waitNodePoolInitializing(ctx context.Context, clusterId, nodePoolId string)
 	nodePoolDetailrequest := tke.NewDescribeClusterNodePoolDetailRequest()
 	nodePoolDetailrequest.ClusterId = common.StringPtr(clusterId)
 	nodePoolDetailrequest.NodePoolId = common.StringPtr(nodePoolId)
-	err = resource.Retry(1*tccommon.ReadRetryTimeout, func() *resource.RetryError {
+	err = resource.Retry(10*tccommon.ReadRetryTimeout, func() *resource.RetryError {
 		result, e := meta.(tccommon.ProviderMeta).GetAPIV3Conn().UseTkeV20180525Client().DescribeClusterNodePoolDetailWithContext(ctx, nodePoolDetailrequest)
 		if e != nil {
 			return tccommon.RetryError(e)
@@ -1501,8 +1501,10 @@ func waitNodePoolInitializing(ctx context.Context, clusterId, nodePoolId string)
 			})
 
 			if err != nil {
-				return fmt.Errorf("Node pool scaling failed, Reason: %s\nPlease check your resource inventory, Or adjust `desired_capacity`, `scale_tolerance` and `instance_type`, Then try again.", errFmt)
+				return fmt.Errorf("Describe auto scaling activities failed: %s", err)
 			}
+
+			return fmt.Errorf("Node pool scaling failed, Reason: %s\nPlease check your resource inventory, Or adjust `desired_capacity`, `scale_tolerance` and `instance_type`, Then try again.", errFmt)
 		} else {
 			return fmt.Errorf("Node pool scaling failed, Desired value: %d, Actual value: %d, Scale tolerance: %d%%\nPlease check your resource inventory, Or adjust `desired_capacity`, `scale_tolerance` and `instance_type`, Then try again.", desiredCapacity, currentNormal, scaleTolerance)
 		}
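Two behaviors change in this hunk: the node-pool detail poll now retries for ten read-timeout windows instead of one, and a failure while describing the scaling activities is surfaced as its own error instead of overwriting the scaling-failure summary, which is now returned only after the lookup succeeds. When that summary does fire, it names three arguments to adjust. A hedged sketch of where those knobs sit in the resource — the `auto_scaling_config` placement of `instance_type` follows the provider's published schema, and the values are illustrative:

```hcl
resource "tencentcloud_kubernetes_node_pool" "example" {
  # ... other required arguments elided ...

  desired_capacity = 2  # lower the target if inventory is short
  scale_tolerance  = 50 # or relax the tolerance so a partial scale-out still succeeds
  auto_scaling_config {
    instance_type = "S5.MEDIUM2" # hypothetical type; switch to one with available stock
    # ... other launch-configuration arguments elided ...
  }
}
```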

website/docs/r/kubernetes_node_pool.html.markdown

Lines changed: 3 additions & 3 deletions
@@ -17,7 +17,7 @@ Provide a resource to create an auto scaling group for kubernetes cluster.
 
 ~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance was destroyed due to shrinking, it will keep the cbs associate with cvm by default. If you want to destroy together, please set `delete_with_instance` to `true`.
 
-~> **NOTE:** There are two parameters `wait_node_ready` and `scale_tolerance` to ensure better management of node pool scaling operations. If this parameter is set, when creating resources, if the set criteria are not met, the resources will be marked as `tainted`.
+~> **NOTE:** There are two parameters `wait_node_ready` and `scale_tolerance` to ensure better management of node pool scaling operations. If this parameter is set when creating a resource, the resource will be marked as `tainted` if the set conditions are not met.
 
 ## Example Usage
 
@@ -156,7 +156,7 @@ resource "tencentcloud_kubernetes_node_pool" "example" {
 }
 ```
 
-
+### Wait for all scaling nodes to be ready with wait_node_ready and scale_tolerance parameters.
 
 ```hcl
 resource "tencentcloud_kubernetes_node_pool" "example" {
@@ -254,7 +254,7 @@ The following arguments are supported:
 * `taints` - (Optional, List) Taints of kubernetes node pool created nodes.
 * `termination_policies` - (Optional, List: [`String`]) Policy of scaling group termination. Available values: `["OLDEST_INSTANCE"]`, `["NEWEST_INSTANCE"]`.
 * `unschedulable` - (Optional, Int, ForceNew) Sets whether the joining node participates in the schedule. Default is '0'. Participate in scheduling.
-* `wait_node_ready` - (Optional, Bool) Whether to wait for all expansion resources to be ready. Default is false. Only can be set if `enable_auto_scale` is `false`.
+* `wait_node_ready` - (Optional, Bool) Whether to wait for all desired nodes to be ready. Default is false. Only can be set if `enable_auto_scale` is `false`.
 * `zones` - (Optional, List: [`String`]) List of auto scaling group available zones, for Basic network it is required.
 
 The `annotations` object supports the following:
