
Commit 7175a04

feat(eks-v2-alpha): support eks with k8s 1.32 (#33344)
### Issue # (if applicable)

Closes #<issue number here>.

### Reason for this change

### Description of changes

### Describe any new or updated permissions being added

### Description of how you validated changes

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
import * as eks from '../lib';
import { Construct } from 'constructs';

export class EksClusterLatestVersion extends Stack {
  constructor(scope: Construct, id: string, props: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { natGateways: 1 });

    const mastersRole = new iam.Role(this, 'Role', {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    new eks.Cluster(this, 'hello-eks', {
      vpc,
      mastersRole,
      version: eks.KubernetesVersion.V1_32,
      kubectlProviderOptions: {
        kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
      },
    });
  }
}

const app = new App();

new EksClusterLatestVersion(app, 'v32-stack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});

app.synth();
```

verify:

```
% kubectl get no
NAME                          STATUS   ROLES    AGE   VERSION
ip-172-31-1-32.ec2.internal   Ready    <none>   10m   v1.32.0-eks-aeac579
ip-172-31-2-70.ec2.internal   Ready    <none>   10m   v1.32.0-eks-aeac579
pahud@MBP @aws-cdk % kubectl get po -n kube-system
NAME                       READY   STATUS    RESTARTS        AGE
aws-node-6lp5b             2/2     Running   2 (8m33s ago)   11m
aws-node-tckj8             2/2     Running   2 (8m47s ago)   11m
coredns-6b9575c64c-pntcb   1/1     Running   1 (8m33s ago)   15m
coredns-6b9575c64c-zsqw8   1/1     Running   1 (8m33s ago)   15m
kube-proxy-q7744           1/1     Running   1 (8m32s ago)   11m
kube-proxy-tfrmc           1/1     Running   1 (8m47s ago)   11m
```

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
1 parent 4e71675 commit 7175a04

121 files changed: +6946 −389 lines changed

Some content is hidden: large commits have some content hidden by default, so only a subset of the 121 changed files is shown below.

packages/@aws-cdk/aws-eks-v2-alpha/README.md

Lines changed: 34 additions & 34 deletions
````diff
@@ -33,7 +33,7 @@ Here is the minimal example of defining an AWS EKS cluster
 
 ```ts
 const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
@@ -73,15 +73,15 @@ Creating a new cluster is done using the `Cluster` constructs. The only required
 
 ```ts
 new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
 You can also use `FargateCluster` to provision a cluster that uses only fargate workers.
 
 ```ts
 new eks.FargateCluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
@@ -90,20 +90,20 @@ be created by default. It will only be deployed when `kubectlProviderOptions`
 property is used.**
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 
 new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
   }
 });
 ```
 
 ### Managed node groups
 
 Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
-With Amazon EKS managed node groups, you dont need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
+With Amazon EKS managed node groups, you don't need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
 
 > For more details visit [Amazon EKS Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).
 
@@ -115,7 +115,7 @@ At cluster instantiation time, you can customize the number of instances and the
 
 ```ts
 new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   defaultCapacity: 5,
   defaultCapacityInstance: ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL),
 });
@@ -127,7 +127,7 @@ Additional customizations are available post instantiation. To apply them, set t
 
 ```ts
 const cluster = new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   defaultCapacity: 0,
 });
 
@@ -177,7 +177,7 @@ The following code defines an Amazon EKS cluster with a default Fargate Profile
 
 ```ts
 const cluster = new eks.FargateCluster(this, 'MyCluster', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
@@ -196,7 +196,7 @@ You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/
 
 ```ts
 const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   endpointAccess: eks.EndpointAccess.PRIVATE, // No access outside of your VPC.
 });
 ```
@@ -218,7 +218,7 @@ To deploy the controller on your EKS cluster, configure the `albController` prop
 
 ```ts
 new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   albController: {
     version: eks.AlbControllerVersion.V2_8_2,
   },
@@ -259,7 +259,7 @@ You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properti
 declare const vpc: ec2.Vpc;
 
 new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   vpc,
   vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }],
 });
@@ -302,12 +302,12 @@ To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cl
 `kubectlLayer` is the only required property in `kubectlProviderOptions`.
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 
 new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
   }
 });
 ```
@@ -317,7 +317,7 @@ new eks.Cluster(this, 'hello-eks', {
 If you want to use an existing kubectl provider function, for example with tight trusted entities on your IAM Roles - you can import the existing provider and then use the imported provider when importing the cluster:
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 
 const handlerRole = iam.Role.fromRoleArn(this, 'HandlerRole', 'arn:aws:iam::123456789012:role/lambda-role');
 // get the serivceToken from the custom resource provider
@@ -338,12 +338,12 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
 You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 
 const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
     environment: {
       'http_proxy': 'http://proxy.myproxy.com',
     },
@@ -364,12 +364,12 @@ Depending on which version of kubernetes you're targeting, you will need to use
 the `@aws-cdk/lambda-layer-kubectl-vXY` packages.
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 
 const cluster = new eks.Cluster(this, 'hello-eks', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
   },
 });
 ```
@@ -379,14 +379,14 @@ const cluster = new eks.Cluster(this, 'hello-eks', {
 By default, the kubectl provider is configured with 1024MiB of memory. You can use the `memory` option to specify the memory size for the AWS Lambda function:
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 
 new eks.Cluster(this, 'MyCluster', {
   kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
     memory: Size.gibibytes(4),
   },
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
@@ -417,7 +417,7 @@ When you create a cluster, you can specify a `mastersRole`. The `Cluster` constr
 ```ts
 declare const role: iam.Role;
 new eks.Cluster(this, 'HelloEKS', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   mastersRole: role,
 });
 ```
@@ -438,7 +438,7 @@ You can use the `secretsEncryptionKey` to configure which key the cluster will u
 const secretsKey = new kms.Key(this, 'SecretsKey');
 const cluster = new eks.Cluster(this, 'MyCluster', {
   secretsEncryptionKey: secretsKey,
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
@@ -448,7 +448,7 @@ You can also use a similar configuration for running a cluster built using the F
 const secretsKey = new kms.Key(this, 'SecretsKey');
 const cluster = new eks.FargateCluster(this, 'MyFargateCluster', {
   secretsEncryptionKey: secretsKey,
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
 });
 ```
 
@@ -489,7 +489,7 @@ eks.AccessPolicy.fromAccessPolicyName('AmazonEKSAdminPolicy', {
 Use `grantAccess()` to grant the AccessPolicy to an IAM principal:
 
 ```ts
-import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31';
+import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';
 declare const vpc: ec2.Vpc;
 
 const clusterAdminRole = new iam.Role(this, 'ClusterAdminRole', {
@@ -503,9 +503,9 @@ const eksAdminRole = new iam.Role(this, 'EKSAdminRole', {
 const cluster = new eks.Cluster(this, 'Cluster', {
   vpc,
   mastersRole: clusterAdminRole,
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   kubectlProviderOptions: {
-    kubectlLayer: new KubectlV31Layer(this, 'kubectl'),
+    kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
     memory: Size.gibibytes(4),
   },
 });
@@ -690,7 +690,7 @@ when a cluster is defined:
 
 ```ts
 new eks.Cluster(this, 'MyCluster', {
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   prune: false,
 });
 ```
@@ -1003,7 +1003,7 @@ property. For example:
 ```ts
 const cluster = new eks.Cluster(this, 'Cluster', {
   // ...
-  version: eks.KubernetesVersion.V1_31,
+  version: eks.KubernetesVersion.V1_32,
   clusterLogging: [
     eks.ClusterLoggingTypes.API,
     eks.ClusterLoggingTypes.AUTHENTICATOR,
````

packages/@aws-cdk/aws-eks-v2-alpha/lib/cluster.ts

Lines changed: 9 additions & 0 deletions
```diff
@@ -630,6 +630,15 @@ export class KubernetesVersion {
    */
   public static readonly V1_31 = KubernetesVersion.of('1.31');
 
+  /**
+   * Kubernetes version 1.32
+   *
+   * When creating a `Cluster` with this version, you need to also specify the
+   * `kubectlLayer` property with a `KubectlV32Layer` from
+   * `@aws-cdk/lambda-layer-kubectl-v32`.
+   */
+  public static readonly V1_32 = KubernetesVersion.of('1.32');
+
   /**
    * Custom cluster version
    * @param version custom version number
```
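For context, here is a minimal consumer-side sketch of the new constant, mirroring the README examples in this commit. It is not code from the diff: the `@aws-cdk/aws-eks-v2-alpha` import path, the stack name, and the `App`/`Stack` scaffolding are assumptions for an app that consumes the published alpha module.

```ts
// Minimal sketch (assumptions noted above): pair the new
// KubernetesVersion.V1_32 constant with the matching kubectl layer.
// The kubectl provider is only deployed when kubectlProviderOptions is set.
import { App, Stack } from 'aws-cdk-lib';
import * as eks from '@aws-cdk/aws-eks-v2-alpha';
import { KubectlV32Layer } from '@aws-cdk/lambda-layer-kubectl-v32';

const app = new App();
const stack = new Stack(app, 'EksV32Stack');

new eks.Cluster(stack, 'hello-eks', {
  version: eks.KubernetesVersion.V1_32,
  kubectlProviderOptions: {
    // The layer version should match the cluster's Kubernetes version,
    // as the new V1_32 doc comment advises.
    kubectlLayer: new KubectlV32Layer(stack, 'kubectl'),
  },
});

app.synth();
```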

packages/@aws-cdk/aws-eks-v2-alpha/package.json

Lines changed: 2 additions & 0 deletions
```diff
@@ -91,6 +91,7 @@
     "@aws-cdk/lambda-layer-kubectl-v29": "^2.1.0",
     "@aws-cdk/lambda-layer-kubectl-v30": "^2.0.1",
     "@aws-cdk/lambda-layer-kubectl-v31": "^2.0.0",
+    "@aws-cdk/lambda-layer-kubectl-v32": "^2.0.0",
     "@types/jest": "^29.5.1",
     "aws-sdk": "^2.1379.0",
     "aws-cdk-lib": "0.0.0",
@@ -134,6 +135,7 @@
   "jsiiRosetta": {
     "exampleDependencies": {
       "@aws-cdk/lambda-layer-kubectl-v31": "^2.0.0",
+      "@aws-cdk/lambda-layer-kubectl-v32": "^2.0.0",
       "cdk8s-plus-25": "^2.7.0"
     }
   }
```

0 commit comments
