feat(eks-v2-alpha): eks auto mode support (#33373)
### Issue # (if applicable)
Address #32364 in aws-eks-v2-alpha.
For EKS Auto Mode, all required configurations, including `computeConfig`, `kubernetesNetworkConfig`, and `blockStorage`, are managed through the `defaultCapacityType` enum. When set to `DefaultCapacityType.AUTOMODE` (the default), these configurations are enabled automatically. The `Cluster` construct in aws-eks-v2-alpha enables EKS Auto Mode by default, managing compute resources through node pools instead of creating default capacity or nodegroups. Users can still opt in to traditional nodegroup management by setting `defaultCapacityType` to `NODEGROUP` or `EC2`.
User Experience:
```ts
// Default usage - Auto Mode enabled by default
new eks.Cluster(this, 'hello-eks', {
  vpc,
  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  },
  // Auto Mode is enabled by default, no need to specify anything
});

// Explicit Auto Mode configuration
new eks.Cluster(this, 'hello-eks', {
  vpc,
  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  },
  defaultCapacityType: eks.DefaultCapacityType.AUTOMODE, // Optional, this is default
  compute: {
    nodePools: ['system', 'general-purpose'], // Optional, these are default values
    nodeRole: customRole, // Optional, custom IAM role for nodes
  },
});
```
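For the opt-out path described above, a minimal sketch (same illustrative `vpc`/`kubectlLayer` props as the examples above; `DefaultCapacityType.NODEGROUP` as named in this PR's summary):

```ts
// Opt out of Auto Mode and fall back to a traditional managed node group
new eks.Cluster(this, 'hello-eks-nodegroup', {
  vpc,
  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  },
  defaultCapacityType: eks.DefaultCapacityType.NODEGROUP,
});
```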
### Update Summary
- [x] EKS Auto Mode is the default mode for `Cluster` construct in V2. When enabled:
  - Automatically manages compute resources through node pools
  - Enables elastic load balancing in Kubernetes networking
  - Enables block storage configuration
  - Will not create `defaultCapacity` as a `NODEGROUP` (a major difference from the aws-eks module)
- [x] Node pools are case-sensitive and must be "system" and/or "general-purpose"
- [x] Auto Mode can coexist with manually added node groups for hybrid deployments (see the sketch after this list)
- [x] Required IAM policies are automatically attached
- [x] Restore the `outputConfigCommand` support previously in `aws-eks` module
- [x] Integration test
- [x] Unit tests
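For the hybrid case referenced above, a minimal sketch, assuming `addNodegroupCapacity` keeps the same signature it has in `aws-eks` (the node group id and sizes are illustrative):

```ts
// Auto Mode manages the default capacity through node pools;
// a managed node group is added alongside it for a hybrid deployment.
const cluster = new eks.Cluster(this, 'hybrid-eks', {
  vpc,
  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
  },
});

cluster.addNodegroupCapacity('custom-ng', {
  minSize: 1,
  maxSize: 3,
});
```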
### Description of how you validated changes
Deployed an Auto Mode enabled cluster using the code above, then:
```sh
% kubectl create deployment nginx --image=nginx
% kubectl get events --sort-by='.lastTimestamp'
```
```
20m Normal Nominated pod/nginx-5869d7778c-52pzg Pod should schedule on: nodeclaim/general-purpose-87brc
20m Normal Launched nodeclaim/general-purpose-87brc Status condition transitioned, Type: Launched, Status: Unknown -> True, Reason: Launched
20m Normal DisruptionBlocked nodeclaim/general-purpose-87brc Nodeclaim does not have an associated node
19m Normal NodeHasSufficientPID node/i-0322e9d8dd1b95a51 Node i-0322e9d8dd1b95a51 status is now: NodeHasSufficientPID
19m Normal NodeAllocatableEnforced node/i-0322e9d8dd1b95a51 Updated Node Allocatable limit across pods
19m Normal NodeReady node/i-0322e9d8dd1b95a51 Node i-0322e9d8dd1b95a51 status is now: NodeReady
19m Normal Ready node/i-0322e9d8dd1b95a51 Status condition transitioned, Type: Ready, Status: False -> True, Reason: KubeletReady, Message: kubelet is posting ready status
19m Normal Synced node/i-0322e9d8dd1b95a51 Node synced successfully
19m Normal NodeHasNoDiskPressure node/i-0322e9d8dd1b95a51 Node i-0322e9d8dd1b95a51 status is now: NodeHasNoDiskPressure
19m Normal NodeHasSufficientMemory node/i-0322e9d8dd1b95a51 Node i-0322e9d8dd1b95a51 status is now: NodeHasSufficientMemory
19m Warning InvalidDiskCapacity node/i-0322e9d8dd1b95a51 invalid capacity 0 on image filesystem
19m Normal Starting node/i-0322e9d8dd1b95a51 Starting kubelet.
19m Normal Registered nodeclaim/general-purpose-87brc Status condition transitioned, Type: Registered, Status: Unknown -> True, Reason: Registered
19m Normal Ready nodeclaim/general-purpose-87brc Status condition transitioned, Type: Ready, Status: Unknown -> True, Reason: Ready
19m Normal Initialized nodeclaim/general-purpose-87brc Status condition transitioned, Type: Initialized, Status: Unknown -> True, Reason: Initialized
19m Normal RegisteredNode node/i-0322e9d8dd1b95a51 Node i-0322e9d8dd1b95a51 event: Registered Node i-0322e9d8dd1b95a51 in Controller
19m Normal DisruptionBlocked node/i-0322e9d8dd1b95a51 Node is nominated for a pending pod
19m Normal Scheduled pod/nginx-5869d7778c-52pzg Successfully assigned default/nginx-5869d7778c-52pzg to i-0322e9d8dd1b95a51
19m Warning FailedCreatePodSandBox pod/nginx-5869d7778c-52pzg Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9bd199c61bd9e93437b10a85af3ddc6965888e01bda96706e153b9e9852f67af": plugin type="aws-cni" name="aws-cni" failed (add): add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:50051: connect: connection refused"
19m Normal Pulling pod/nginx-5869d7778c-52pzg Pulling image "nginx"
19m Normal Pulled pod/nginx-5869d7778c-52pzg Successfully pulled image "nginx" in 2.307s (2.307s including waiting). Image size: 72188133 bytes.
19m Normal Created pod/nginx-5869d7778c-52pzg Created container: nginx
19m Normal Started pod/nginx-5869d7778c-52pzg Started container nginx
```
Verify the nodes and pods:
```sh
% kubectl get no
NAME STATUS ROLES AGE VERSION
i-0322e9d8dd1b95a51 Ready <none> 21m v1.32.0-eks-2e66e76
% kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5869d7778c-52pzg 1/1 Running 0 90m
```
### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)
### References
eksctl YAML experience
```yaml
# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-auto-cluster
  region: us-west-2

autoModeConfig:
  # defaults to false
  enabled: true
  # optional, defaults to [general-purpose, system]
  # suggested to leave unspecified
  nodePools: []string
  # optional, eksctl creates a new role if this is not supplied
  # and nodePools are present
  nodeRoleARN: string
```
Terraform experience:
```hcl
provider "aws" {
region = "us-east-1"
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "eks-auto-mode-cluster"
cluster_version = "1.27"
vpc_id = "<your-vpc-id>"
subnet_ids = ["<subnet-id-1>", "<subnet-id-2>"]
cluster_compute_config = {
enabled = true
node_pools = ["general-purpose"] # Default pool for Auto Mode
}
bootstrap_self_managed_addons = true
}
```
Pulumi experience
```ts
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Create EKS cluster with Auto Mode enabled
const cluster = new aws.eks.Cluster("example", {
  name: "example",
  version: "1.31",
  bootstrapSelfManagedAddons: false, // Required: Must be false for Auto Mode
  computeConfig: {
    enabled: true, // Enable Auto Mode compute
    nodePools: ["general-purpose"],
  },
  kubernetesNetworkConfig: {
    elasticLoadBalancing: {
      enabled: true, // Required for Auto Mode
    },
  },
  storageConfig: {
    blockStorage: {
      enabled: true, // Required for Auto Mode
    },
  },
});
```
### Links
- https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-eks-auto-mode/
- https://aws.amazon.com/eks/auto-mode/
- https://aws.amazon.com/blogs/aws/streamline-kubernetes-cluster-management-with-new-amazon-eks-auto-mode/
----
*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### README changes

`packages/@aws-cdk/aws-eks-v2-alpha/README.md` (+125 −2), excerpted:

## EKS Auto Mode

[Amazon EKS Auto Mode](https://aws.amazon.com/eks/auto-mode/) extends AWS management of Kubernetes clusters beyond the cluster itself, allowing AWS to set up and manage the infrastructure that enables the smooth operation of your workloads.

### Using Auto Mode

While `aws-eks` uses `DefaultCapacityType.NODEGROUP` by default, `aws-eks-v2` uses `DefaultCapacityType.AUTOMODE` as the default capacity type.

Auto Mode is enabled by default when creating a new cluster without specifying any capacity-related properties:

```ts
// Create EKS cluster with Auto Mode implicitly enabled
// ...
```

For more information, see [Create a Node Pool for EKS Auto Mode](https://docs.aws.amazon.com/eks/latest/userguide/create-node-pool.html).

### Node Groups as the default capacity type

If you prefer to manage your own node groups instead of using Auto Mode, you can use the traditional node group approach by specifying `defaultCapacityType` as `NODEGROUP`:

```ts
// Create EKS cluster with traditional managed node group
// ...
```

1. Auto Mode and traditional capacity management are mutually exclusive at the default capacity level. You cannot opt in to Auto Mode and specify `defaultCapacity` or `defaultCapacityInstance`.
2. When Auto Mode is enabled:
   - The cluster will automatically manage compute resources
   - Node pools cannot be modified, only enabled or disabled
   - EKS will handle scaling and management of the node pools
3. Auto Mode requires specific IAM permissions. The construct will automatically attach the required managed policies.

### Managed node groups

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.

With Amazon EKS managed node groups, you don't need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.

> For more details visit [Amazon EKS Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).

By default, when using `DefaultCapacityType.NODEGROUP`, this library will allocate a managed node group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).