
Commit 2da569f

Merge branch 'main' into merge-back/2.184.0
2 parents cd09577 + 4128ff4 commit 2da569f

File tree: 40 files changed, +154 -106 lines changed

.github/workflows/issue-label-assign.yml (+1 -1)

@@ -58,7 +58,7 @@ jobs:
     env:
       OSDS_DEVS: >
         {
-          "assignees":["ashishdhingra","khushail","hunhsieh"]
+          "assignees":["ashishdhingra","hunhsieh"]
         }

       AREA_AFFIXES: >

packages/@aws-cdk/aws-iot-actions-alpha/README.md (+3 -3)

@@ -28,7 +28,7 @@ Currently supported are:
 - Capture CloudWatch metrics
 - Change state for a CloudWatch alarm
 - Put records to Kinesis Data stream
-- Put records to Kinesis Data Firehose stream
+- Put records to Amazon Data Firehose stream
 - Send messages to SQS queues
 - Publish messages on SNS topics
 - Write messages into columns of DynamoDB

@@ -232,10 +232,10 @@ const topicRule = new iot.TopicRule(this, 'TopicRule', {
 });
 ```

-## Put records to Kinesis Data Firehose stream
+## Put records to Amazon Data Firehose stream

 The code snippet below creates an AWS IoT Rule that puts records to Put records
-to Kinesis Data Firehose stream when it is triggered.
+to Amazon Data Firehose stream when it is triggered.

 ```ts
 import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

packages/@aws-cdk/aws-iot-actions-alpha/lib/firehose-put-record-action.ts (+5 -5)

@@ -30,11 +30,11 @@ export enum FirehoseRecordSeparator {
 }

 /**
- * Configuration properties of an action for the Kinesis Data Firehose stream.
+ * Configuration properties of an action for the Amazon Data Firehose stream.
  */
 export interface FirehosePutRecordActionProps extends CommonActionProps {
   /**
-   * Whether to deliver the Kinesis Data Firehose stream as a batch by using `PutRecordBatch`.
+   * Whether to deliver the Amazon Data Firehose stream as a batch by using `PutRecordBatch`.
    * When batchMode is true and the rule's SQL statement evaluates to an Array, each Array
    * element forms one record in the PutRecordBatch request. The resulting array can't have
    * more than 500 records.

@@ -44,23 +44,23 @@ export interface FirehosePutRecordActionProps extends CommonActionProps {
   readonly batchMode?: boolean;

   /**
-   * A character separator that will be used to separate records written to the Kinesis Data Firehose stream.
+   * A character separator that will be used to separate records written to the Amazon Data Firehose stream.
    *
    * @default - none -- the stream does not use a separator
    */
   readonly recordSeparator?: FirehoseRecordSeparator;
 }

 /**
- * The action to put the record from an MQTT message to the Kinesis Data Firehose stream.
+ * The action to put the record from an MQTT message to the Amazon Data Firehose stream.
  */
 export class FirehosePutRecordAction implements iot.IAction {
   private readonly batchMode?: boolean;
   private readonly recordSeparator?: string;
   private readonly role?: iam.IRole;

   /**
-   * @param stream The Kinesis Data Firehose stream to which to put records.
+   * @param stream The Amazon Data Firehose stream to which to put records.
    * @param props Optional properties to not use default
    */
   constructor(private readonly stream: firehose.IDeliveryStream, props: FirehosePutRecordActionProps = {}) {
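For context, a minimal usage sketch of the renamed action in the style of the module README above. It assumes an existing delivery stream; the topic filter, SQL, and separator choice are illustrative.

```ts
import * as actions from '@aws-cdk/aws-iot-actions-alpha';
import * as iot from '@aws-cdk/aws-iot-alpha';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const stream: firehose.IDeliveryStream; // an existing Amazon Data Firehose delivery stream

new iot.TopicRule(this, 'TopicRule', {
  sql: iot.IotSql.fromStringAsVer20160323("SELECT * FROM 'device/+/data'"),
  actions: [
    new actions.FirehosePutRecordAction(stream, {
      batchMode: true, // each array element of the SQL result becomes one record in PutRecordBatch
      recordSeparator: actions.FirehoseRecordSeparator.NEWLINE,
    }),
  ],
});
```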

packages/@aws-cdk/aws-msk-alpha/README.md (+1 -1)

@@ -177,7 +177,7 @@ const cluster = new msk.Cluster(this, 'Cluster', {
 ## Logging

 You can deliver Apache Kafka broker logs to one or more of the following destination types:
-Amazon CloudWatch Logs, Amazon S3, Amazon Kinesis Data Firehose.
+Amazon CloudWatch Logs, Amazon S3, Amazon Data Firehose.

 To configure logs to be sent to an S3 bucket, provide a bucket in the `logging` config.

packages/@aws-cdk/aws-msk-alpha/lib/cluster.ts (+1 -1)

@@ -275,7 +275,7 @@ export interface MonitoringConfiguration {
  */
 export interface BrokerLogging {
   /**
-   * The Kinesis Data Firehose delivery stream that is the destination for broker logs.
+   * The Amazon Data Firehose delivery stream that is the destination for broker logs.
    *
    * @default - disabled
    */
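A hedged sketch of wiring broker logs to a delivery stream through the `logging` config mentioned in the README above. The `firehoseDeliveryStreamName` property name and the `KafkaVersion.V3_5_1` constant are assumptions about the alpha module's API, and the VPC is presumed to exist.

```ts
import * as msk from '@aws-cdk/aws-msk-alpha';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const vpc: ec2.Vpc;

new msk.Cluster(this, 'Cluster', {
  clusterName: 'broker-logs-to-firehose',
  kafkaVersion: msk.KafkaVersion.V3_5_1, // assumed available version constant
  vpc,
  logging: {
    // Assumed property name: the delivery stream is referenced by name here
    firehoseDeliveryStreamName: 'my-delivery-stream',
  },
});
```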

packages/@aws-cdk/aws-pipes-alpha/lib/logs.ts (+2 -2)

@@ -1,5 +1,5 @@
-import { IDeliveryStream } from '@aws-cdk/aws-kinesisfirehose-alpha';
 import { IRole } from 'aws-cdk-lib/aws-iam';
+import { IDeliveryStream } from 'aws-cdk-lib/aws-kinesisfirehose';
 import { ILogGroup } from 'aws-cdk-lib/aws-logs';
 import { CfnPipe } from 'aws-cdk-lib/aws-pipes';
 import { IBucket } from 'aws-cdk-lib/aws-s3';

@@ -111,7 +111,7 @@ export interface LogDestinationParameters {
   readonly cloudwatchLogsLogDestination?: CfnPipe.CloudwatchLogsLogDestinationProperty;

   /**
-   * The Amazon Kinesis Data Firehose logging configuration settings for the pipe.
+   * The Amazon Data Firehose logging configuration settings for the pipe.
    *
    * @see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-pipes-pipe-pipelogconfiguration.html#cfn-pipes-pipe-pipelogconfiguration-firehoselogdestination
    *

packages/@aws-cdk/aws-pipes-alpha/package.json (+2 -4)

@@ -88,14 +88,12 @@
     "jest": "^29",
     "aws-cdk-lib": "0.0.0",
     "constructs": "^10.0.0",
-    "@aws-cdk/integ-tests-alpha": "0.0.0",
-    "@aws-cdk/aws-kinesisfirehose-alpha": "0.0.0"
+    "@aws-cdk/integ-tests-alpha": "0.0.0"
   },
   "dependencies": {},
   "peerDependencies": {
     "aws-cdk-lib": "^0.0.0",
-    "constructs": "^10.0.0",
-    "@aws-cdk/aws-kinesisfirehose-alpha": "0.0.0"
+    "constructs": "^10.0.0"
   },
   "engines": {
     "node": ">= 14.15.0"

packages/@aws-cdk/aws-pipes-alpha/test/integ.logs.ts (+1 -1)

@@ -1,7 +1,7 @@
 import { randomUUID } from 'crypto';
-import { DeliveryStream, DestinationBindOptions, DestinationConfig, IDestination } from '@aws-cdk/aws-kinesisfirehose-alpha';
 import { ExpectedResult, IntegTest } from '@aws-cdk/integ-tests-alpha';
 import * as cdk from 'aws-cdk-lib';
+import { DeliveryStream, DestinationBindOptions, DestinationConfig, IDestination } from 'aws-cdk-lib/aws-kinesisfirehose';
 import { Construct } from 'constructs';
 import { CloudwatchLogsLogDestination, FirehoseLogDestination, IPipe, ISource, ITarget, IncludeExecutionData, InputTransformation, LogLevel, Pipe, S3LogDestination, SourceConfig, TargetConfig } from '../lib';
 import { name } from '../package.json';

packages/@aws-cdk/aws-pipes-alpha/test/logs.test.ts (+1 -1)

@@ -1,7 +1,7 @@
-import { DeliveryStream, DestinationBindOptions, DestinationConfig, IDestination } from '@aws-cdk/aws-kinesisfirehose-alpha';
 import { App, Stack } from 'aws-cdk-lib';
 import { Template } from 'aws-cdk-lib/assertions';
 import { Role, ServicePrincipal } from 'aws-cdk-lib/aws-iam';
+import { DeliveryStream, DestinationBindOptions, DestinationConfig, IDestination } from 'aws-cdk-lib/aws-kinesisfirehose';
 import { LogGroup } from 'aws-cdk-lib/aws-logs';
 import { Bucket } from 'aws-cdk-lib/aws-s3';
 import { Construct } from 'constructs';
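A rough sketch of the import-path change in use: a pipe logging to a delivery stream taken from `aws-cdk-lib/aws-kinesisfirehose` rather than the alpha package. The source and target are assumed to exist, and prop names follow the pipes-alpha API as recalled, so treat them as assumptions.

```ts
import { FirehoseLogDestination, ISource, ITarget, LogLevel, Pipe } from '@aws-cdk/aws-pipes-alpha';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const source: ISource;
declare const target: ITarget;
declare const deliveryStream: firehose.IDeliveryStream; // now imported from aws-cdk-lib

new Pipe(this, 'Pipe', {
  source,
  target,
  logLevel: LogLevel.TRACE,
  // Send pipe execution logs to the delivery stream
  logDestinations: [new FirehoseLogDestination(deliveryStream)],
});
```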

packages/@aws-cdk/aws-scheduler-targets-alpha/README.md (+3 -3)

@@ -31,7 +31,7 @@ The following targets are supported:
 6. `targets.EventBridgePutEvents`: [Put Events on EventBridge](#send-events-to-an-eventbridge-event-bus)
 7. `targets.InspectorStartAssessmentRun`: [Start an Amazon Inspector assessment run](#start-an-amazon-inspector-assessment-run)
 8. `targets.KinesisStreamPutRecord`: [Put a record to an Amazon Kinesis Data Stream](#put-a-record-to-an-amazon-kinesis-data-stream)
-9. `targets.KinesisDataFirehosePutRecord`: [Put a record to a Kinesis Data Firehose](#put-a-record-to-a-kinesis-data-firehose)
+9. `targets.KinesisDataFirehosePutRecord`: [Put a record to an Amazon Data Firehose](#put-a-record-to-an-amazon-data-firehose)
 10. `targets.CodePipelineStartPipelineExecution`: [Start a CodePipeline execution](#start-a-codepipeline-execution)
 11. `targets.SageMakerStartPipelineExecution`: [Start a SageMaker pipeline execution](#start-a-sagemaker-pipeline-execution)
 12. `targets.Universal`: [Invoke a wider set of AWS API](#invoke-a-wider-set-of-aws-api)

@@ -252,9 +252,9 @@ new Schedule(this, 'Schedule', {
 });
 ```

-## Put a record to a Kinesis Data Firehose
+## Put a record to an Amazon Data Firehose

-Use the `KinesisDataFirehosePutRecord` target to put a record to a Kinesis Data Firehose delivery stream.
+Use the `KinesisDataFirehosePutRecord` target to put a record to an Amazon Data Firehose delivery stream.

 The code snippet below creates an event rule with a delivery stream as a target
 called every hour by EventBridge Scheduler with a custom payload.

packages/@aws-cdk/aws-scheduler-targets-alpha/lib/kinesis-data-firehose-put-record.ts (+1 -1)

@@ -4,7 +4,7 @@ import { IDeliveryStream } from 'aws-cdk-lib/aws-kinesisfirehose';
 import { ScheduleTargetBase, ScheduleTargetBaseProps } from './target';

 /**
- * Use an Amazon Kinesis Data Firehose as a target for AWS EventBridge Scheduler.
+ * Use an Amazon Data Firehose as a target for AWS EventBridge Scheduler.
  */
 export class KinesisDataFirehosePutRecord extends ScheduleTargetBase implements IScheduleTarget {
   constructor(
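A minimal sketch of the target described in the README above, assuming an existing delivery stream; the rate and payload are illustrative.

```ts
import * as scheduler from '@aws-cdk/aws-scheduler-alpha';
import * as targets from '@aws-cdk/aws-scheduler-targets-alpha';
import * as cdk from 'aws-cdk-lib';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const deliveryStream: firehose.IDeliveryStream;

new scheduler.Schedule(this, 'Schedule', {
  schedule: scheduler.ScheduleExpression.rate(cdk.Duration.hours(1)),
  target: new targets.KinesisDataFirehosePutRecord(deliveryStream, {
    input: scheduler.ScheduleTargetInput.fromObject({ source: 'scheduler' }), // custom payload
  }),
});
```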

packages/aws-cdk-lib/aws-apigateway/README.md (+1 -1)

@@ -1400,7 +1400,7 @@ const api = new apigateway.RestApi(this, 'books', {

 **Note:** The delivery stream name must start with `amazon-apigateway-`.

-> Visit [Logging API calls to Kinesis Data Firehose](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-logging-to-kinesis.html) for more details.
+> Visit [Logging API calls to Amazon Data Firehose](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-logging-to-kinesis.html) for more details.

 ## Cross Origin Resource Sharing (CORS)
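A hedged sketch of access logging to a delivery stream, per the linked guide above. Depending on the library version, `FirehoseLogDestination` accepts either the L1 `CfnDeliveryStream` (shown here) or the graduated L2 delivery stream, so treat the parameter type as an assumption.

```ts
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

// The delivery stream name must start with 'amazon-apigateway-' (see the note above).
declare const stream: firehose.CfnDeliveryStream;

new apigateway.RestApi(this, 'books', {
  deployOptions: {
    accessLogDestination: new apigateway.FirehoseLogDestination(stream),
    accessLogFormat: apigateway.AccessLogFormat.clf(),
  },
});
```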

packages/aws-cdk-lib/aws-config/lib/rule.ts (+1 -1)

@@ -2754,7 +2754,7 @@ export class ResourceType {
   public static readonly IAM_SAML_PROVIDER = new ResourceType('AWS::IAM::SAMLProvider');
   /** AWS IAM ServerCertificate */
   public static readonly IAM_SERVER_CERTIFICATE = new ResourceType('AWS::IAM::ServerCertificate');
-  /** Amazon Kinesis Firehose DeliveryStream */
+  /** Amazon Data Firehose DeliveryStream */
   public static readonly KINESIS_FIREHOSE_DELIVERY_STREAM = new ResourceType('AWS::KinesisFirehose::DeliveryStream');
   /** Amazon Pinpoint Campaign */
   public static readonly PINPOINT_CAMPAIGN = new ResourceType('AWS::Pinpoint::Campaign');

packages/aws-cdk-lib/aws-ec2/README.md (+2 -2)

@@ -2202,7 +2202,7 @@ new ec2.FlowLog(this, 'FlowLogWithKeyPrefix', {
 });
 ```

-*Kinesis Data Firehose*
+*Amazon Data Firehose*

 ```ts
 import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

@@ -2524,4 +2524,4 @@ new ec2.Instance(this, 'Instance', {
   machineImage: ec2.MachineImage.latestAmazonLinux2023(),
   instanceProfile,
 });
-```
+```

packages/aws-cdk-lib/aws-ec2/lib/vpc-flow-logs.ts (+5 -5)

@@ -60,7 +60,7 @@ export enum FlowLogDestinationType {
   S3 = 's3',

   /**
-   * Send flow logs to Kinesis Data Firehose
+   * Send flow logs to Amazon Data Firehose
    */
   KINESIS_DATA_FIREHOSE = 'kinesis-data-firehose',
 }

@@ -215,9 +215,9 @@ export abstract class FlowLogDestination {
   }

   /**
-   * Use Kinesis Data Firehose as the destination
+   * Use Amazon Data Firehose as the destination
    *
-   * @param deliveryStreamArn the ARN of Kinesis Data Firehose delivery stream to publish logs to
+   * @param deliveryStreamArn the ARN of Amazon Data Firehose delivery stream to publish logs to
    */
   public static toKinesisDataFirehoseDestination(deliveryStreamArn: string): FlowLogDestination {
     return new KinesisDataFirehoseDestination({

@@ -272,7 +272,7 @@ export interface FlowLogDestinationConfig {
   readonly keyPrefix?: string;

   /**
-   * The ARN of Kinesis Data Firehose delivery stream to publish the flow logs to
+   * The ARN of Amazon Data Firehose delivery stream to publish the flow logs to
    *
    * @default - undefined
    */

@@ -849,7 +849,7 @@ export class FlowLog extends FlowLogBase {
   public readonly logGroup?: logs.ILogGroup;

   /**
-   * The ARN of the Kinesis Data Firehose delivery stream to publish flow logs to
+   * The ARN of the Amazon Data Firehose delivery stream to publish flow logs to
    */
   public readonly deliveryStreamArn?: string;
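A minimal sketch of the destination helper documented above, assuming the VPC and delivery stream already exist.

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const vpc: ec2.Vpc;
declare const deliveryStream: firehose.IDeliveryStream;

new ec2.FlowLog(this, 'FlowLogToFirehose', {
  resourceType: ec2.FlowLogResourceType.fromVpc(vpc),
  // The helper takes the delivery stream ARN, per the doc comment above.
  destination: ec2.FlowLogDestination.toKinesisDataFirehoseDestination(deliveryStream.deliveryStreamArn),
});
```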

packages/aws-cdk-lib/aws-ecs/lib/base/service-managed-volume.ts (+4)

@@ -165,6 +165,10 @@ export enum FileSystemType {
    * xfs type
    */
   XFS = 'xfs',
+  /**
+   * ntfs type
+   */
+  NTFS = 'ntfs',
 }

 /**
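A rough sketch of where the new enum value would be used. The `ServiceManagedVolume` prop names (`managedEBSVolume`, `size`, `fileSystemType`) are recalled from the ECS EBS-volume API and should be treated as assumptions.

```ts
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// A service-managed EBS volume formatted with the newly added NTFS file system type.
new ecs.ServiceManagedVolume(this, 'EBSVolume', {
  name: 'ebs1',
  managedEBSVolume: {
    size: cdk.Size.gibibytes(30),
    fileSystemType: ecs.FileSystemType.NTFS,
  },
});
```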

packages/aws-cdk-lib/aws-eks/lib/managed-nodegroup.ts (+4)

@@ -103,6 +103,10 @@ export enum CapacityType {
    * on-demand instances
    */
   ON_DEMAND = 'ON_DEMAND',
+  /**
+   * capacity block instances
+   */
+  CAPACITY_BLOCK = 'CAPACITY_BLOCK',
 }

 /**
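A sketch of requesting the new capacity type on a managed node group. The instance type is illustrative, and consuming an EC2 Capacity Block typically also requires a launch template that references the capacity reservation (not shown here).

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';

declare const cluster: eks.Cluster;

cluster.addNodegroupCapacity('CapacityBlockNodes', {
  capacityType: eks.CapacityType.CAPACITY_BLOCK, // newly added enum value
  instanceTypes: [new ec2.InstanceType('p5.48xlarge')],
});
```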

packages/aws-cdk-lib/aws-elasticsearch/README.md (+1 -1)

@@ -234,7 +234,7 @@ const domain = new es.Domain(this, 'Domain', {
 ```

 For more complex use-cases, for example, to set the domain up to receive data from a
-[cross-account Kinesis Firehose](https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/) the `addAccessPolicies` helper method
+[cross-account Amazon Data Firehose](https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/) the `addAccessPolicies` helper method
 allows for policies that include the explicit domain ARN.

 ```ts
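A minimal sketch of the `addAccessPolicies` helper mentioned above, granting a hypothetical cross-account principal write access to the domain; the account ID and actions are illustrative.

```ts
import * as es from 'aws-cdk-lib/aws-elasticsearch';
import * as iam from 'aws-cdk-lib/aws-iam';

declare const domain: es.Domain;

domain.addAccessPolicies(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  principals: [new iam.AccountPrincipal('123456789012')], // the account that owns the Firehose stream
  actions: ['es:ESHttpPost', 'es:ESHttpPut'],
  resources: [domain.domainArn, `${domain.domainArn}/*`],
}));
```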

packages/aws-cdk-lib/aws-events-targets/lib/kinesis-firehose-stream.ts (+5 -5)

@@ -5,7 +5,7 @@ import * as firehose from '../../aws-kinesisfirehose';
 import { IResource } from '../../core';

 /**
- * Customize the Firehose Stream Event Target
+ * Customize the Amazon Data Firehose Stream Event Target
  */
 export interface KinesisFirehoseStreamProps {
   /**

@@ -19,7 +19,7 @@ export interface KinesisFirehoseStreamProps {
 }

 /**
- * Customize the Firehose Stream Event Target
+ * Customize the Amazon Data Firehose Stream Event Target
  *
  * @deprecated Use KinesisFirehoseStreamV2
  */

@@ -48,7 +48,7 @@ export class KinesisFirehoseStream implements events.IRuleTarget {
 }

 /**
- * Represents a Kinesis Data Firehose delivery stream.
+ * Represents an Amazon Data Firehose delivery stream.
  */
 export interface IDeliveryStream extends IResource {
   /**

@@ -67,8 +67,8 @@ export interface IDeliveryStream extends IResource {
 }

 /**
- * Customize the Firehose Stream Event Target V2 to support L2 Kinesis Delivery Stream
- * instead of L1 Cfn Kinesis Delivery Stream.
+ * Customize the Amazon Data Firehose Stream Event Target V2 to support L2 Amazon Data Firehose Delivery Stream
+ * instead of L1 Cfn Firehose Delivery Stream.
  */
 export class KinesisFirehoseStreamV2 implements events.IRuleTarget {
   constructor(private readonly stream: IDeliveryStream, private readonly props: KinesisFirehoseStreamProps = {}) {
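A minimal sketch of the V2 target, assuming an existing L2 delivery stream; the schedule is illustrative.

```ts
import * as cdk from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const deliveryStream: firehose.IDeliveryStream;

const rule = new events.Rule(this, 'EveryFiveMinutes', {
  schedule: events.Schedule.rate(cdk.Duration.minutes(5)),
});

// The V2 target accepts the L2 delivery stream interface instead of the L1 CfnDeliveryStream.
rule.addTarget(new targets.KinesisFirehoseStreamV2(deliveryStream));
```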

packages/aws-cdk-lib/aws-kinesisfirehose/README.md (+4 -5)

@@ -61,7 +61,7 @@ new firehose.DeliveryStream(this, 'Delivery Stream', {

 ### Direct Put

-Data must be provided via "direct put", ie., by using a `PutRecord` or
+Data must be provided via "direct put", ie., by using a `PutRecord` or
 `PutRecordBatch` API call. There are a number of ways of doing so, such as:

 - Kinesis Agent: a standalone Java application that monitors and delivers files while

@@ -80,10 +80,9 @@ Data must be provided via "direct put", ie., by using a `PutRecord` or

 ## Destinations

-Amazon Data Firehose supports multiple AWS and third-party services as destinations, including Amazon S3, Amazon Redshift, and more. You can find the full list of supported destination [here](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html).
+Amazon Data Firehose supports multiple AWS and third-party services as destinations, including Amazon S3, Amazon Redshift, and more. You can find the full list of supported destination [here](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html).

-Currently in the AWS CDK, only S3 is implemented as an L2 construct destination. Other destinations can still be configured using L1 constructs. See [kinesisfirehose-destinations](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-kinesisfirehose-destinations-readme.html)
-for the implementations of these destinations.
+Currently in the AWS CDK, only S3 is implemented as an L2 construct destination. Other destinations can still be configured using L1 constructs.

 ### S3

@@ -214,7 +213,7 @@ limit of records per second (indicating data is flowing into your delivery stream faster
 than it is configured to process).

 CDK provides methods for accessing delivery stream metrics with default configuration,
-such as `metricIncomingBytes`, and `metricIncomingRecords` (see [`IDeliveryStream`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-kinesisfirehose.IDeliveryStream.html)
+such as `metricIncomingBytes`, and `metricIncomingRecords` (see [`IDeliveryStream`](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cdk-lib.aws_kinesisfirehose.IDeliveryStream.html)
 for a full list). CDK also provides a generic `metric` method that can be used to produce
 metric configurations for any metric provided by Amazon Data Firehose; the configurations
 are pre-populated with the correct dimensions for the delivery stream.
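A minimal sketch of the metric methods mentioned above, wiring one of them into a CloudWatch alarm; the threshold is illustrative.

```ts
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const deliveryStream: firehose.IDeliveryStream;

// Alarm when incoming records approach the stream's throughput quota.
new cloudwatch.Alarm(this, 'IncomingRecordsAlarm', {
  metric: deliveryStream.metricIncomingRecords(),
  threshold: 1000, // illustrative; set relative to your quota
  evaluationPeriods: 3,
});
```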
