Commit a7633a9

docs: canonicalize service name of Amazon Data Firehose (#33757)
### Reason for this change

Docs in the CDK sometimes refer to Amazon Data Firehose by its former names "(Amazon) Kinesis Data Firehose" or "(Amazon) Kinesis Firehose". This PR updates them to the canonical "Amazon Data Firehose".

### Description of changes

- Fixed dangling links in `packages/aws-cdk-lib/aws-kinesisfirehose/README.md`
- Updated READMEs and docstrings to refer to Amazon Data Firehose
- Updated examples that refer to `@aws-cdk/aws-kinesisfirehose-alpha`
- The following files are not changed:
  - Changelog entries
  - Test files
  - aws-kinesisfirehose alpha packages
  - `.github/ISSUE_TEMPLATE`
  - `packages/@aws-cdk/pkglint/lib/aws-service-official-names.json`

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
1 parent f67a88b commit a7633a9

31 files changed: +81 -95 lines changed


packages/@aws-cdk/aws-iot-actions-alpha/README.md

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ Currently supported are:
 - Capture CloudWatch metrics
 - Change state for a CloudWatch alarm
 - Put records to Kinesis Data stream
-- Put records to Kinesis Data Firehose stream
+- Put records to Amazon Data Firehose stream
 - Send messages to SQS queues
 - Publish messages on SNS topics
 - Write messages into columns of DynamoDB
@@ -232,10 +232,10 @@ const topicRule = new iot.TopicRule(this, 'TopicRule', {
 });
 ```

-## Put records to Kinesis Data Firehose stream
+## Put records to Amazon Data Firehose stream

 The code snippet below creates an AWS IoT Rule that puts records to Put records
-to Kinesis Data Firehose stream when it is triggered.
+to Amazon Data Firehose stream when it is triggered.

 ```ts
 import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';
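
The README snippet referenced in that hunk is truncated above. A minimal sketch of the rule it describes, in the CDK README snippet style (inside a Stack), assuming the `@aws-cdk/aws-iot-alpha` and `@aws-cdk/aws-iot-actions-alpha` APIs and an existing delivery stream imported by ARN (the ARN is illustrative):

```ts
import * as iot from '@aws-cdk/aws-iot-alpha';
import * as actions from '@aws-cdk/aws-iot-actions-alpha';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

// Import an existing Amazon Data Firehose delivery stream (illustrative ARN).
const stream = firehose.DeliveryStream.fromDeliveryStreamArn(
  this, 'ImportedStream',
  'arn:aws:firehose:us-east-1:111111111111:deliverystream/my-stream',
);

// IoT rule that forwards matching MQTT messages to the delivery stream,
// batching records and separating them with newlines.
new iot.TopicRule(this, 'TopicRule', {
  sql: iot.IotSql.fromStringAsVer20160323("SELECT * FROM 'device/+/data'"),
  actions: [
    new actions.FirehosePutRecordAction(stream, {
      batchMode: true,
      recordSeparator: actions.FirehoseRecordSeparator.NEWLINE,
    }),
  ],
});
```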

packages/@aws-cdk/aws-iot-actions-alpha/lib/firehose-put-record-action.ts

Lines changed: 5 additions & 5 deletions
@@ -30,11 +30,11 @@ export enum FirehoseRecordSeparator {
 }

 /**
- * Configuration properties of an action for the Kinesis Data Firehose stream.
+ * Configuration properties of an action for the Amazon Data Firehose stream.
  */
 export interface FirehosePutRecordActionProps extends CommonActionProps {
   /**
-   * Whether to deliver the Kinesis Data Firehose stream as a batch by using `PutRecordBatch`.
+   * Whether to deliver the Amazon Data Firehose stream as a batch by using `PutRecordBatch`.
    * When batchMode is true and the rule's SQL statement evaluates to an Array, each Array
    * element forms one record in the PutRecordBatch request. The resulting array can't have
    * more than 500 records.
@@ -44,23 +44,23 @@ export interface FirehosePutRecordActionProps extends CommonActionProps {
   readonly batchMode?: boolean;

   /**
-   * A character separator that will be used to separate records written to the Kinesis Data Firehose stream.
+   * A character separator that will be used to separate records written to the Amazon Data Firehose stream.
    *
    * @default - none -- the stream does not use a separator
    */
   readonly recordSeparator?: FirehoseRecordSeparator;
 }

 /**
- * The action to put the record from an MQTT message to the Kinesis Data Firehose stream.
+ * The action to put the record from an MQTT message to the Amazon Data Firehose stream.
  */
 export class FirehosePutRecordAction implements iot.IAction {
   private readonly batchMode?: boolean;
   private readonly recordSeparator?: string;
   private readonly role?: iam.IRole;

   /**
-   * @param stream The Kinesis Data Firehose stream to which to put records.
+   * @param stream The Amazon Data Firehose stream to which to put records.
    * @param props Optional properties to not use default
    */
   constructor(private readonly stream: firehose.IDeliveryStream, props: FirehosePutRecordActionProps = {}) {

packages/@aws-cdk/aws-msk-alpha/README.md

Lines changed: 1 addition & 1 deletion
@@ -177,7 +177,7 @@ const cluster = new msk.Cluster(this, 'Cluster', {
 ## Logging

 You can deliver Apache Kafka broker logs to one or more of the following destination types:
-Amazon CloudWatch Logs, Amazon S3, Amazon Kinesis Data Firehose.
+Amazon CloudWatch Logs, Amazon S3, Amazon Data Firehose.

 To configure logs to be sent to an S3 bucket, provide a bucket in the `logging` config.

packages/@aws-cdk/aws-msk-alpha/lib/cluster.ts

Lines changed: 1 addition & 1 deletion
@@ -275,7 +275,7 @@ export interface MonitoringConfiguration {
  */
 export interface BrokerLogging {
   /**
-   * The Kinesis Data Firehose delivery stream that is the destination for broker logs.
+   * The Amazon Data Firehose delivery stream that is the destination for broker logs.
    *
    * @default - disabled
    */
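
For context, a minimal sketch of wiring broker logs to a delivery stream with the `@aws-cdk/aws-msk-alpha` `Cluster` construct. The `firehoseDeliveryStreamName` property name, Kafka version, and stream name are assumptions for illustration:

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as msk from '@aws-cdk/aws-msk-alpha';

declare const vpc: ec2.Vpc;

// MSK cluster that ships broker logs to an existing Amazon Data Firehose
// delivery stream, referenced by name (assumed property name and values).
new msk.Cluster(this, 'Cluster', {
  clusterName: 'my-cluster',
  kafkaVersion: msk.KafkaVersion.V2_8_1,
  vpc,
  logging: {
    firehoseDeliveryStreamName: 'my-broker-logs-stream',
  },
});
```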

packages/@aws-cdk/aws-pipes-alpha/lib/logs.ts

Lines changed: 1 addition & 1 deletion
@@ -111,7 +111,7 @@ export interface LogDestinationParameters {
   readonly cloudwatchLogsLogDestination?: CfnPipe.CloudwatchLogsLogDestinationProperty;

   /**
-   * The Amazon Kinesis Data Firehose logging configuration settings for the pipe.
+   * The Amazon Data Firehose logging configuration settings for the pipe.
    *
    * @see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-pipes-pipe-pipelogconfiguration.html#cfn-pipes-pipe-pipelogconfiguration-firehoselogdestination
    *

packages/@aws-cdk/aws-scheduler-targets-alpha/README.md

Lines changed: 3 additions & 3 deletions
@@ -31,7 +31,7 @@ The following targets are supported:
 6. `targets.EventBridgePutEvents`: [Put Events on EventBridge](#send-events-to-an-eventbridge-event-bus)
 7. `targets.InspectorStartAssessmentRun`: [Start an Amazon Inspector assessment run](#start-an-amazon-inspector-assessment-run)
 8. `targets.KinesisStreamPutRecord`: [Put a record to an Amazon Kinesis Data Stream](#put-a-record-to-an-amazon-kinesis-data-stream)
-9. `targets.KinesisDataFirehosePutRecord`: [Put a record to a Kinesis Data Firehose](#put-a-record-to-a-kinesis-data-firehose)
+9. `targets.KinesisDataFirehosePutRecord`: [Put a record to an Amazon Data Firehose](#put-a-record-to-an-amazon-data-firehose)
 10. `targets.CodePipelineStartPipelineExecution`: [Start a CodePipeline execution](#start-a-codepipeline-execution)
 11. `targets.SageMakerStartPipelineExecution`: [Start a SageMaker pipeline execution](#start-a-sagemaker-pipeline-execution)
 12. `targets.Universal`: [Invoke a wider set of AWS API](#invoke-a-wider-set-of-aws-api)
@@ -252,9 +252,9 @@ new Schedule(this, 'Schedule', {
 });
 ```

-## Put a record to a Kinesis Data Firehose
+## Put a record to an Amazon Data Firehose

-Use the `KinesisDataFirehosePutRecord` target to put a record to a Kinesis Data Firehose delivery stream.
+Use the `KinesisDataFirehosePutRecord` target to put a record to an Amazon Data Firehose delivery stream.

 The code snippet below creates an event rule with a delivery stream as a target
 called every hour by EventBridge Scheduler with a custom payload.
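
That README snippet is not included in the hunk; a minimal sketch of what it describes, assuming the `@aws-cdk/aws-scheduler-alpha` and `@aws-cdk/aws-scheduler-targets-alpha` APIs (the payload is illustrative):

```ts
import { Duration } from 'aws-cdk-lib';
import * as scheduler from '@aws-cdk/aws-scheduler-alpha';
import * as targets from '@aws-cdk/aws-scheduler-targets-alpha';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const deliveryStream: firehose.IDeliveryStream;

// Every hour, EventBridge Scheduler puts a custom payload into the delivery stream.
new scheduler.Schedule(this, 'Schedule', {
  schedule: scheduler.ScheduleExpression.rate(Duration.hours(1)),
  target: new targets.KinesisDataFirehosePutRecord(deliveryStream, {
    input: scheduler.ScheduleTargetInput.fromObject({ Data: 'record' }),
  }),
});
```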

packages/@aws-cdk/aws-scheduler-targets-alpha/lib/kinesis-data-firehose-put-record.ts

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ import { IDeliveryStream } from 'aws-cdk-lib/aws-kinesisfirehose';
 import { ScheduleTargetBase, ScheduleTargetBaseProps } from './target';

 /**
- * Use an Amazon Kinesis Data Firehose as a target for AWS EventBridge Scheduler.
+ * Use an Amazon Data Firehose as a target for AWS EventBridge Scheduler.
  */
 export class KinesisDataFirehosePutRecord extends ScheduleTargetBase implements IScheduleTarget {
   constructor(

packages/aws-cdk-lib/aws-apigateway/README.md

Lines changed: 1 addition & 1 deletion
@@ -1400,7 +1400,7 @@ const api = new apigateway.RestApi(this, 'books', {

 **Note:** The delivery stream name must start with `amazon-apigateway-`.

-> Visit [Logging API calls to Kinesis Data Firehose](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-logging-to-kinesis.html) for more details.
+> Visit [Logging API calls to Amazon Data Firehose](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-logging-to-kinesis.html) for more details.

 ## Cross Origin Resource Sharing (CORS)
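
A minimal sketch of access logging to a delivery stream whose name carries the required `amazon-apigateway-` prefix, assuming the `FirehoseLogDestination` access-log destination in `aws-cdk-lib/aws-apigateway` accepts the L1 `CfnDeliveryStream` (bucket, role, and names are illustrative):

```ts
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';

const bucket = new s3.Bucket(this, 'AccessLogBucket');
const role = new iam.Role(this, 'FirehoseRole', {
  assumedBy: new iam.ServicePrincipal('firehose.amazonaws.com'),
});
bucket.grantReadWrite(role);

// The delivery stream name must start with `amazon-apigateway-`.
const stream = new firehose.CfnDeliveryStream(this, 'AccessLogStream', {
  deliveryStreamName: 'amazon-apigateway-access-logs',
  s3DestinationConfiguration: {
    bucketArn: bucket.bucketArn,
    roleArn: role.roleArn,
  },
});

new apigateway.RestApi(this, 'books', {
  deployOptions: {
    accessLogDestination: new apigateway.FirehoseLogDestination(stream),
    accessLogFormat: apigateway.AccessLogFormat.clf(),
  },
});
```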

packages/aws-cdk-lib/aws-config/lib/rule.ts

Lines changed: 1 addition & 1 deletion
@@ -2754,7 +2754,7 @@ export class ResourceType {
   public static readonly IAM_SAML_PROVIDER = new ResourceType('AWS::IAM::SAMLProvider');
   /** AWS IAM ServerCertificate */
   public static readonly IAM_SERVER_CERTIFICATE = new ResourceType('AWS::IAM::ServerCertificate');
-  /** Amazon Kinesis Firehose DeliveryStream */
+  /** Amazon Data Firehose DeliveryStream */
   public static readonly KINESIS_FIREHOSE_DELIVERY_STREAM = new ResourceType('AWS::KinesisFirehose::DeliveryStream');
   /** Amazon Pinpoint Campaign */
   public static readonly PINPOINT_CAMPAIGN = new ResourceType('AWS::Pinpoint::Campaign');

packages/aws-cdk-lib/aws-ec2/README.md

Lines changed: 2 additions & 2 deletions
@@ -2202,7 +2202,7 @@ new ec2.FlowLog(this, 'FlowLogWithKeyPrefix', {
 });
 ```

-*Kinesis Data Firehose*
+*Amazon Data Firehose*

 ```ts
 import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';
@@ -2524,4 +2524,4 @@ new ec2.Instance(this, 'Instance', {
   machineImage: ec2.MachineImage.latestAmazonLinux2023(),
   instanceProfile,
 });
-```
+```

packages/aws-cdk-lib/aws-ec2/lib/vpc-flow-logs.ts

Lines changed: 5 additions & 5 deletions
@@ -60,7 +60,7 @@ export enum FlowLogDestinationType {
   S3 = 's3',

   /**
-   * Send flow logs to Kinesis Data Firehose
+   * Send flow logs to Amazon Data Firehose
    */
   KINESIS_DATA_FIREHOSE = 'kinesis-data-firehose',
 }
@@ -215,9 +215,9 @@ export abstract class FlowLogDestination {
   }

   /**
-   * Use Kinesis Data Firehose as the destination
+   * Use Amazon Data Firehose as the destination
    *
-   * @param deliveryStreamArn the ARN of Kinesis Data Firehose delivery stream to publish logs to
+   * @param deliveryStreamArn the ARN of Amazon Data Firehose delivery stream to publish logs to
    */
   public static toKinesisDataFirehoseDestination(deliveryStreamArn: string): FlowLogDestination {
     return new KinesisDataFirehoseDestination({
@@ -272,7 +272,7 @@ export interface FlowLogDestinationConfig {
   readonly keyPrefix?: string;

   /**
-   * The ARN of Kinesis Data Firehose delivery stream to publish the flow logs to
+   * The ARN of Amazon Data Firehose delivery stream to publish the flow logs to
    *
    * @default - undefined
    */
@@ -849,7 +849,7 @@ export class FlowLog extends FlowLogBase {
   public readonly logGroup?: logs.ILogGroup;

   /**
-   * The ARN of the Kinesis Data Firehose delivery stream to publish flow logs to
+   * The ARN of the Amazon Data Firehose delivery stream to publish flow logs to
    */
   public readonly deliveryStreamArn?: string;
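
A minimal sketch of the flow-log wiring these members describe, using `FlowLogDestination.toKinesisDataFirehoseDestination` with an existing delivery stream (the VPC and stream fixtures are assumed):

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const vpc: ec2.Vpc;
declare const deliveryStream: firehose.IDeliveryStream;

// Publish VPC flow logs to the Amazon Data Firehose delivery stream by ARN.
new ec2.FlowLog(this, 'FlowLogToFirehose', {
  resourceType: ec2.FlowLogResourceType.fromVpc(vpc),
  destination: ec2.FlowLogDestination.toKinesisDataFirehoseDestination(
    deliveryStream.deliveryStreamArn,
  ),
});
```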

packages/aws-cdk-lib/aws-elasticsearch/README.md

Lines changed: 1 addition & 1 deletion
@@ -234,7 +234,7 @@ const domain = new es.Domain(this, 'Domain', {
 ```

 For more complex use-cases, for example, to set the domain up to receive data from a
-[cross-account Kinesis Firehose](https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/) the `addAccessPolicies` helper method
+[cross-account Amazon Data Firehose](https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/) the `addAccessPolicies` helper method
 allows for policies that include the explicit domain ARN.

 ```ts

packages/aws-cdk-lib/aws-events-targets/lib/kinesis-firehose-stream.ts

Lines changed: 5 additions & 5 deletions
@@ -5,7 +5,7 @@ import * as firehose from '../../aws-kinesisfirehose';
 import { IResource } from '../../core';

 /**
- * Customize the Firehose Stream Event Target
+ * Customize the Amazon Data Firehose Stream Event Target
  */
 export interface KinesisFirehoseStreamProps {
   /**
@@ -19,7 +19,7 @@ export interface KinesisFirehoseStreamProps {
 }

 /**
- * Customize the Firehose Stream Event Target
+ * Customize the Amazon Data Firehose Stream Event Target
  *
  * @deprecated Use KinesisFirehoseStreamV2
  */
@@ -48,7 +48,7 @@ export class KinesisFirehoseStream implements events.IRuleTarget {
 }

 /**
- * Represents a Kinesis Data Firehose delivery stream.
+ * Represents an Amazon Data Firehose delivery stream.
  */
 export interface IDeliveryStream extends IResource {
   /**
@@ -67,8 +67,8 @@ export interface IDeliveryStream extends IResource {
 }

 /**
- * Customize the Firehose Stream Event Target V2 to support L2 Kinesis Delivery Stream
- * instead of L1 Cfn Kinesis Delivery Stream.
+ * Customize the Amazon Data Firehose Stream Event Target V2 to support L2 Amazon Data Firehose Delivery Stream
+ * instead of L1 Cfn Firehose Delivery Stream.
  */
 export class KinesisFirehoseStreamV2 implements events.IRuleTarget {
   constructor(private readonly stream: IDeliveryStream, private readonly props: KinesisFirehoseStreamProps = {}) {
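
A minimal sketch of using `KinesisFirehoseStreamV2` as an EventBridge rule target with the L2 delivery stream; the schedule, message shape, and `message` prop usage are illustrative assumptions:

```ts
import { Duration } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

declare const deliveryStream: firehose.IDeliveryStream;

const rule = new events.Rule(this, 'EveryMinute', {
  schedule: events.Schedule.rate(Duration.minutes(1)),
});

// Put matched events into the Amazon Data Firehose delivery stream.
rule.addTarget(new targets.KinesisFirehoseStreamV2(deliveryStream, {
  message: events.RuleTargetInput.fromObject({ source: 'scheduled-event' }),
}));
```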

packages/aws-cdk-lib/aws-kinesisfirehose/README.md

Lines changed: 4 additions & 5 deletions
@@ -61,7 +61,7 @@ new firehose.DeliveryStream(this, 'Delivery Stream', {

 ### Direct Put

-Data must be provided via "direct put", ie., by using a `PutRecord` or
+Data must be provided via "direct put", ie., by using a `PutRecord` or
 `PutRecordBatch` API call. There are a number of ways of doing so, such as:

 - Kinesis Agent: a standalone Java application that monitors and delivers files while
@@ -80,10 +80,9 @@ Data must be provided via "direct put", ie., by using a `PutRecord` or

 ## Destinations

-Amazon Data Firehose supports multiple AWS and third-party services as destinations, including Amazon S3, Amazon Redshift, and more. You can find the full list of supported destination [here](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html).
+Amazon Data Firehose supports multiple AWS and third-party services as destinations, including Amazon S3, Amazon Redshift, and more. You can find the full list of supported destination [here](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html).

-Currently in the AWS CDK, only S3 is implemented as an L2 construct destination. Other destinations can still be configured using L1 constructs. See [kinesisfirehose-destinations](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-kinesisfirehose-destinations-readme.html)
-for the implementations of these destinations.
+Currently in the AWS CDK, only S3 is implemented as an L2 construct destination. Other destinations can still be configured using L1 constructs.

 ### S3

@@ -214,7 +213,7 @@ limit of records per second (indicating data is flowing into your delivery strea
 than it is configured to process).

 CDK provides methods for accessing delivery stream metrics with default configuration,
-such as `metricIncomingBytes`, and `metricIncomingRecords` (see [`IDeliveryStream`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-kinesisfirehose.IDeliveryStream.html)
+such as `metricIncomingBytes`, and `metricIncomingRecords` (see [`IDeliveryStream`](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cdk-lib.aws_kinesisfirehose.IDeliveryStream.html)
 for a full list). CDK also provides a generic `metric` method that can be used to produce
 metric configurations for any metric provided by Amazon Data Firehose; the configurations
 are pre-populated with the correct dimensions for the delivery stream.
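
A minimal sketch tying the README's destination and metrics prose together, assuming the graduated `aws-cdk-lib/aws-kinesisfirehose` API with a single `destination` prop and an `S3Bucket` destination class (the alarm threshold is illustrative):

```ts
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';
import * as s3 from 'aws-cdk-lib/aws-s3';

const bucket = new s3.Bucket(this, 'DestinationBucket');

// Direct-put delivery stream writing to the S3 L2 destination.
const deliveryStream = new firehose.DeliveryStream(this, 'DeliveryStream', {
  destination: new firehose.S3Bucket(bucket),
});

// Alarm on the pre-configured incoming-bytes metric for the stream.
new cloudwatch.Alarm(this, 'IncomingBytesAlarm', {
  metric: deliveryStream.metricIncomingBytes(),
  threshold: 1024 * 1024 * 1024,
  evaluationPeriods: 1,
});
```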

packages/aws-cdk-lib/aws-kinesisfirehose/lib/common.ts

Lines changed: 7 additions & 7 deletions
@@ -6,7 +6,7 @@ import * as s3 from '../../aws-s3';
 import * as cdk from '../../core';

 /**
- * Possible compression options Kinesis Data Firehose can use to compress data on delivery.
+ * Possible compression options Amazon Data Firehose can use to compress data on delivery.
  */
 export class Compression {
   /**
@@ -75,7 +75,7 @@ interface DestinationLoggingProps {
 }

 /**
- * Common properties for defining a backup, intermediary, or final S3 destination for a Kinesis Data Firehose delivery stream.
+ * Common properties for defining a backup, intermediary, or final S3 destination for a Amazon Data Firehose delivery stream.
  */
 export interface CommonDestinationS3Props {
   /**
@@ -90,7 +90,7 @@ export interface CommonDestinationS3Props {
   readonly bufferingInterval?: cdk.Duration;

   /**
-   * The size of the buffer that Kinesis Data Firehose uses for incoming data before
+   * The size of the buffer that Amazon Data Firehose uses for incoming data before
    * delivering it to the S3 bucket.
    *
    * Minimum: Size.mebibytes(1)
@@ -101,7 +101,7 @@ export interface CommonDestinationS3Props {
   readonly bufferingSize?: cdk.Size;

   /**
-   * The type of compression that Kinesis Data Firehose uses to compress the data
+   * The type of compression that Amazon Data Firehose uses to compress the data
    * that it delivers to the Amazon S3 bucket.
    *
    * The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift
@@ -120,7 +120,7 @@ export interface CommonDestinationS3Props {
   readonly encryptionKey?: kms.IKey;

   /**
-   * A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3.
+   * A prefix that Amazon Data Firehose evaluates and adds to failed records before writing them to S3.
    *
    * This prefix appears immediately following the bucket name.
    * @see https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
@@ -130,7 +130,7 @@ export interface CommonDestinationS3Props {
   readonly errorOutputPrefix?: string;

   /**
-   * A prefix that Kinesis Data Firehose evaluates and adds to records before writing them to S3.
+   * A prefix that Amazon Data Firehose evaluates and adds to records before writing them to S3.
    *
    * This prefix appears immediately following the bucket name.
    * @see https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
@@ -171,7 +171,7 @@ export interface CommonDestinationProps extends DestinationLoggingProps {
   /**
    * The IAM role associated with this destination.
    *
-   * Assumed by Kinesis Data Firehose to invoke processors and write to destinations
+   * Assumed by Amazon Data Firehose to invoke processors and write to destinations
    *
    * @default - a role will be created with default permissions.
    */
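
These properties surface on the S3 destination; a minimal sketch of tuning buffering, compression, and prefixes, assuming the `S3Bucket` destination class and `dataOutputPrefix`/`errorOutputPrefix` prop names (values are illustrative):

```ts
import { Duration, Size } from 'aws-cdk-lib';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';
import * as s3 from 'aws-cdk-lib/aws-s3';

const bucket = new s3.Bucket(this, 'DestinationBucket');

// S3 destination that buffers for up to one minute or 5 MiB, compresses with
// GZIP, and writes data and failed records under separate prefixes.
const destination = new firehose.S3Bucket(bucket, {
  bufferingInterval: Duration.minutes(1),
  bufferingSize: Size.mebibytes(5),
  compression: firehose.Compression.GZIP,
  dataOutputPrefix: 'data/',
  errorOutputPrefix: 'errors/',
});

new firehose.DeliveryStream(this, 'DeliveryStream', { destination });
```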

packages/aws-cdk-lib/aws-kinesisfirehose/lib/delivery-stream.ts

Lines changed: 7 additions & 7 deletions
@@ -18,7 +18,7 @@ const PUT_RECORD_ACTIONS = [
 ];

 /**
- * Represents a Kinesis Data Firehose delivery stream.
+ * Represents an Amazon Data Firehose delivery stream.
  */
 export interface IDeliveryStream extends cdk.IResource, iam.IGrantable, ec2.IConnectable {
   /**
@@ -72,7 +72,7 @@ export interface IDeliveryStream extends cdk.IResource, iam.IGrantable, ec2.ICon
   metricBackupToS3Bytes(props?: cloudwatch.MetricOptions): cloudwatch.Metric;

   /**
-   * Metric for the age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose.
+   * Metric for the age (from getting into Amazon Data Firehose to now) of the oldest record in Amazon Data Firehose.
    *
    * Any record older than this age has been delivered to the Amazon S3 bucket for backup.
    *
@@ -89,7 +89,7 @@ export interface IDeliveryStream extends cdk.IResource, iam.IGrantable, ec2.ICon
 }

 /**
- * Base class for new and imported Kinesis Data Firehose delivery streams.
+ * Base class for new and imported Amazon Data Firehose delivery streams.
  */
 abstract class DeliveryStreamBase extends cdk.Resource implements IDeliveryStream {
   public abstract readonly deliveryStreamName: string;
@@ -99,7 +99,7 @@ abstract class DeliveryStreamBase extends cdk.Resource implements IDeliveryStrea
   public abstract readonly grantPrincipal: iam.IPrincipal;

   /**
-   * Network connections between Kinesis Data Firehose and other resources, i.e. Redshift cluster.
+   * Network connections between Amazon Data Firehose and other resources, i.e. Redshift cluster.
    */
   public readonly connections: ec2.Connections;

@@ -206,7 +206,7 @@ export interface DeliveryStreamProps {
   /**
    * The IAM role associated with this delivery stream.
    *
-   * Assumed by Kinesis Data Firehose to read from sources and encrypt data server-side.
+   * Assumed by Amazon Data Firehose to read from sources and encrypt data server-side.
    *
    * @default - a role will be created with default permissions.
    */
@@ -245,15 +245,15 @@ export interface DeliveryStreamAttributes {
   /**
    * The IAM role associated with this delivery stream.
    *
-   * Assumed by Kinesis Data Firehose to read from sources and encrypt data server-side.
+   * Assumed by Amazon Data Firehose to read from sources and encrypt data server-side.
    *
    * @default - the imported stream cannot be granted access to other resources as an `iam.IGrantable`.
    */
   readonly role?: iam.IRole;
 }

 /**
- * Create a Kinesis Data Firehose delivery stream
+ * Create a Amazon Data Firehose delivery stream
  *
  * @resource AWS::KinesisFirehose::DeliveryStream
  */
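
A minimal sketch of the import path these attributes describe, using `fromDeliveryStreamAttributes` so the imported stream can be granted to other principals (the ARN and producer role are illustrative):

```ts
import * as iam from 'aws-cdk-lib/aws-iam';
import * as firehose from 'aws-cdk-lib/aws-kinesisfirehose';

// Import an existing Amazon Data Firehose delivery stream by its attributes.
const imported = firehose.DeliveryStream.fromDeliveryStreamAttributes(this, 'ImportedStream', {
  deliveryStreamArn: 'arn:aws:firehose:us-east-1:111111111111:deliverystream/existing-stream',
});

// Allow a producer role to put records into the imported stream.
const producerRole = new iam.Role(this, 'ProducerRole', {
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});
imported.grantPutRecords(producerRole);
```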

0 commit comments
