
Commit fcb26f8

Author: awstools (committed)
Updates SDK to v2.1598.0

1 parent 7172625 commit fcb26f8

35 files changed, +19157 -11580 lines changed

Diff for: .changes/2.1598.0.json

+42
@@ -0,0 +1,42 @@
+[
+    {
+        "type": "feature",
+        "category": "Batch",
+        "description": "This release adds the task properties field to attempt details and the name field on EKS container detail."
+    },
+    {
+        "type": "feature",
+        "category": "CloudFront",
+        "description": "CloudFront origin access control extends support to AWS Lambda function URLs and AWS Elemental MediaPackage v2 origins."
+    },
+    {
+        "type": "feature",
+        "category": "CloudWatch",
+        "description": "This release adds support for Metric Characteristics for CloudWatch Anomaly Detection. Anomaly Detector now takes Metric Characteristics object with Periodic Spikes boolean field that tells Anomaly Detection that spikes that repeat at the same time every week are part of the expected pattern."
+    },
+    {
+        "type": "feature",
+        "category": "IAM",
+        "description": "For CreateOpenIDConnectProvider API, the ThumbprintList parameter is no longer required."
+    },
+    {
+        "type": "feature",
+        "category": "MediaLive",
+        "description": "AWS Elemental MediaLive introduces workflow monitor, a new feature that enables the visualization and monitoring of your media workflows. Create signal maps of your existing workflows and monitor them by creating notification and monitoring template groups."
+    },
+    {
+        "type": "feature",
+        "category": "Omics",
+        "description": "This release adds support for retrieval of S3 direct access metadata on sequence stores and read sets, and adds support for SHA256up and SHA512up HealthOmics ETags."
+    },
+    {
+        "type": "feature",
+        "category": "Pipes",
+        "description": "LogConfiguration ARN validation fixes"
+    },
+    {
+        "type": "feature",
+        "category": "WAFV2",
+        "description": "Adds an updated version of smoke tests, including smithy trait, for SDK testing."
+    }
+]
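The CloudWatch entry above introduces a MetricCharacteristics object with a PeriodicSpikes flag for anomaly detectors. The sketch below is illustration only, not part of this commit: it assumes the new field is accepted at the top level of putAnomalyDetector, as the release note describes, and the namespace, metric name, and region are placeholder values.

```js
// Sketch: register an anomaly detector that treats weekly repeating spikes
// as expected. Namespace, metric name, and region are placeholders.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

cloudwatch.putAnomalyDetector({
  SingleMetricAnomalyDetector: {
    Namespace: 'MyApp',            // hypothetical namespace
    MetricName: 'OrdersPerMinute', // hypothetical metric
    Stat: 'Sum'
  },
  // New in 2.1598.0 per the change note above (assumed request shape):
  MetricCharacteristics: {
    PeriodicSpikes: true
  }
}).promise()
  .then(() => console.log('Anomaly detector configured'))
  .catch(err => console.error('putAnomalyDetector failed:', err));
```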

Diff for: CHANGELOG.md

+11 -1

@@ -1,7 +1,17 @@
 # Changelog for AWS SDK for JavaScript
-<!--LATEST=2.1597.0-->
+<!--LATEST=2.1598.0-->
 <!--ENTRYINSERT-->
 
+## 2.1598.0
+* feature: Batch: This release adds the task properties field to attempt details and the name field on EKS container detail.
+* feature: CloudFront: CloudFront origin access control extends support to AWS Lambda function URLs and AWS Elemental MediaPackage v2 origins.
+* feature: CloudWatch: This release adds support for Metric Characteristics for CloudWatch Anomaly Detection. Anomaly Detector now takes Metric Characteristics object with Periodic Spikes boolean field that tells Anomaly Detection that spikes that repeat at the same time every week are part of the expected pattern.
+* feature: IAM: For CreateOpenIDConnectProvider API, the ThumbprintList parameter is no longer required.
+* feature: MediaLive: AWS Elemental MediaLive introduces workflow monitor, a new feature that enables the visualization and monitoring of your media workflows. Create signal maps of your existing workflows and monitor them by creating notification and monitoring template groups.
+* feature: Omics: This release adds support for retrieval of S3 direct access metadata on sequence stores and read sets, and adds support for SHA256up and SHA512up HealthOmics ETags.
+* feature: Pipes: LogConfiguration ARN validation fixes
+* feature: WAFV2: Adds an updated version of smoke tests, including smithy trait, for SDK testing.
+
 ## 2.1597.0
 * feature: CleanRooms: AWS Clean Rooms Differential Privacy is now fully available. Differential privacy protects against user-identification attempts.
 * feature: Connect: This release adds new Submit Auto Evaluation Action for Amazon Connect Rules.
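One of the smaller items above is easy to show in code: with this release, IAM's CreateOpenIDConnectProvider no longer requires ThumbprintList. A minimal sketch, with a placeholder issuer URL and client ID:

```js
// Sketch: create an OIDC provider without ThumbprintList, which this SDK
// version no longer requires. The URL and client ID are placeholders.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

iam.createOpenIDConnectProvider({
  Url: 'https://oidc.example.com', // hypothetical issuer
  ClientIDList: ['my-client-id']   // hypothetical audience
}).promise()
  .then(res => console.log('Provider ARN:', res.OpenIDConnectProviderArn))
  .catch(err => console.error('createOpenIDConnectProvider failed:', err));
```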

Diff for: README.md

+1 -1

@@ -64,7 +64,7 @@ require('aws-sdk/lib/maintenance_mode_message').suppress = true;
 To use the SDK in the browser, simply add the following script tag to your
 HTML pages:
 
-<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1597.0.min.js"></script>
+<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1598.0.min.js"></script>
 
 You can also build a custom browser SDK with your specified set of AWS services.
 This can allow you to reduce the SDK's size, specify different API versions of
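For context on the README line being bumped: once that script tag loads, the SDK is available as the global AWS object in the browser. A minimal sketch of what a page might do next; the region, identity pool ID, and bucket name are placeholders, and Cognito is only one of several ways to supply credentials.

```js
// Sketch: runs in the browser after the aws-sdk-2.1598.0.min.js script tag
// above has loaded. All identifiers below are placeholders.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000'
});

var s3 = new AWS.S3();
s3.listObjectsV2({ Bucket: 'my-example-bucket' }, function (err, data) {
  if (err) console.error(err);
  else console.log(data.Contents);
});
```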

Diff for: apis/batch-2016-08-10.min.json

+47 -19

@@ -475,7 +475,34 @@
           "stoppedAt": {
             "type": "long"
           },
-          "statusReason": {}
+          "statusReason": {},
+          "taskProperties": {
+            "type": "list",
+            "member": {
+              "type": "structure",
+              "members": {
+                "containerInstanceArn": {},
+                "taskArn": {},
+                "containers": {
+                  "type": "list",
+                  "member": {
+                    "type": "structure",
+                    "members": {
+                      "exitCode": {
+                        "type": "integer"
+                      },
+                      "name": {},
+                      "reason": {},
+                      "logStreamName": {},
+                      "networkInterfaces": {
+                        "shape": "S45"
+                      }
+                    }
+                  }
+                }
+              }
+            }
+          }
         }
       }
     },
@@ -493,7 +520,7 @@
           "type": "long"
         },
         "dependsOn": {
-          "shape": "S47"
+          "shape": "S4b"
         },
         "jobDefinition": {},
         "parameters": {
@@ -632,10 +659,10 @@
           "shape": "S37"
         },
         "containers": {
-          "shape": "S4g"
+          "shape": "S4k"
         },
         "initContainers": {
-          "shape": "S4g"
+          "shape": "S4k"
         },
         "volumes": {
           "shape": "S3l"
@@ -658,10 +685,10 @@
         "type": "structure",
         "members": {
           "containers": {
-            "shape": "S4k"
+            "shape": "S4o"
           },
           "initContainers": {
-            "shape": "S4k"
+            "shape": "S4o"
           },
           "podName": {},
           "nodeName": {},
@@ -1067,14 +1094,14 @@
         }
       },
       "dependsOn": {
-        "shape": "S47"
+        "shape": "S4b"
       },
       "jobDefinition": {},
       "parameters": {
         "shape": "S1o"
       },
       "containerOverrides": {
-        "shape": "S5f"
+        "shape": "S5j"
       },
       "nodeOverrides": {
         "type": "structure",
@@ -1092,10 +1119,10 @@
           "members": {
             "targetNodes": {},
             "containerOverrides": {
-              "shape": "S5f"
+              "shape": "S5j"
             },
             "ecsPropertiesOverride": {
-              "shape": "S5j"
+              "shape": "S5n"
             },
             "instanceTypes": {
               "shape": "Sb"
@@ -1124,10 +1151,10 @@
           "type": "structure",
           "members": {
             "containers": {
-              "shape": "S5q"
+              "shape": "S5u"
             },
             "initContainers": {
-              "shape": "S5q"
+              "shape": "S5u"
             },
             "metadata": {
               "shape": "S3q"
@@ -1137,7 +1164,7 @@
           }
         },
         "ecsPropertiesOverride": {
-          "shape": "S5j"
+          "shape": "S5n"
         }
       }
     },
@@ -2185,7 +2212,7 @@
         }
       }
     },
-    "S47": {
+    "S4b": {
       "type": "list",
       "member": {
         "type": "structure",
@@ -2195,7 +2222,7 @@
         }
       }
     },
-    "S4g": {
+    "S4k": {
      "type": "list",
      "member": {
        "type": "structure",
@@ -2228,19 +2255,20 @@
        }
      }
    },
-    "S4k": {
+    "S4o": {
      "type": "list",
      "member": {
        "type": "structure",
        "members": {
+          "name": {},
          "exitCode": {
            "type": "integer"
          },
          "reason": {}
        }
      }
    },
-    "S5f": {
+    "S5j": {
      "type": "structure",
      "members": {
        "vcpus": {
@@ -2265,7 +2293,7 @@
        }
      }
    },
-    "S5j": {
+    "S5n": {
      "type": "structure",
      "members": {
        "taskProperties": {
@@ -2296,7 +2324,7 @@
        }
      }
    },
-    "S5q": {
+    "S5u": {
      "type": "list",
      "member": {
        "type": "structure",

Diff for: apis/batch-2016-08-10.normal.json

+70 -6

@@ -655,6 +655,10 @@
         "statusReason": {
           "shape": "String",
           "documentation": "<p>A short, human-readable string to provide additional details for the current status of the job attempt.</p>"
+        },
+        "taskProperties": {
+          "shape": "ListAttemptEcsTaskDetails",
+          "documentation": "<p>The properties for a task definition that describes the container and volume definitions of an Amazon ECS task.</p>"
         }
       },
       "documentation": "<p>An object that represents a job attempt.</p>"
@@ -665,6 +669,50 @@
         "shape": "AttemptDetail"
       }
     },
+    "AttemptEcsTaskDetails": {
+      "type": "structure",
+      "members": {
+        "containerInstanceArn": {
+          "shape": "String",
+          "documentation": "<p>The Amazon Resource Name (ARN) of the container instance that hosts the task.</p>"
+        },
+        "taskArn": {
+          "shape": "String",
+          "documentation": "<p>The ARN of the Amazon ECS task.</p>"
+        },
+        "containers": {
+          "shape": "ListAttemptTaskContainerDetails",
+          "documentation": "<p>A list of containers that are included in the <code>taskProperties</code> list.</p>"
+        }
+      },
+      "documentation": "<p>An object that represents the details of a task.</p>"
+    },
+    "AttemptTaskContainerDetails": {
+      "type": "structure",
+      "members": {
+        "exitCode": {
+          "shape": "Integer",
+          "documentation": "<p>The exit code for the container’s attempt. A non-zero exit code is considered failed.</p>"
+        },
+        "name": {
+          "shape": "String",
+          "documentation": "<p>The name of a container.</p>"
+        },
+        "reason": {
+          "shape": "String",
+          "documentation": "<p>A short (255 max characters) string that's easy to understand and provides additional details for a running or stopped container.</p>"
+        },
+        "logStreamName": {
+          "shape": "String",
+          "documentation": "<p>The name of the Amazon CloudWatch Logs log stream that's associated with the container. The log group for Batch jobs is <code>/aws/batch/job</code>. Each container attempt receives a log stream name when they reach the <code>RUNNING</code> status.</p>"
+        },
+        "networkInterfaces": {
+          "shape": "NetworkInterfaceList",
+          "documentation": "<p>The network interfaces that are associated with the job attempt.</p>"
+        }
+      },
+      "documentation": "<p>An object that represents the details of a container that's part of a job attempt.</p>"
+    },
     "Boolean": {
       "type": "boolean"
     },
@@ -1893,6 +1941,10 @@
     "EksAttemptContainerDetail": {
       "type": "structure",
       "members": {
+        "name": {
+          "shape": "String",
+          "documentation": "<p>The name of a container.</p>"
+        },
         "exitCode": {
           "shape": "Integer",
           "documentation": "<p>The exit code returned for the job attempt. A non-zero exit code is considered failed.</p>"
@@ -2271,7 +2323,7 @@
        },
        "imagePullSecrets": {
          "shape": "ImagePullSecrets",
-          "documentation": "<p>References a Kubernetes secret resource. This object must start and end with an alphanumeric character, is required to be lowercase, can include periods (.) and hyphens (-), and can't contain more than 253 characters.</p> <p> <code>ImagePullSecret$name</code> is required when this object is used.</p>"
+          "documentation": "<p>References a Kubernetes secret resource. It holds a list of secrets. These secrets help to gain access to pull an images from a private registry.</p> <p> <code>ImagePullSecret$name</code> is required when this object is used.</p>"
        },
        "containers": {
          "shape": "EksContainers",
@@ -2313,7 +2365,7 @@
        },
        "imagePullSecrets": {
          "shape": "ImagePullSecrets",
-          "documentation": "<p>Displays the reference pointer to the Kubernetes secret resource.</p>"
+          "documentation": "<p>Displays the reference pointer to the Kubernetes secret resource. These secrets help to gain access to pull an images from a private registry.</p>"
        },
        "containers": {
          "shape": "EksContainerDetails",
@@ -2558,7 +2610,7 @@
          "documentation": "<p>Provides a unique identifier for the <code>ImagePullSecret</code>. This object is required when <code>EksPodProperties$imagePullSecrets</code> is used.</p>"
        }
      },
-      "documentation": "<p>References a Kubernetes configuration resource that holds a list of secrets. These secrets help to gain access to pull an image from a private registry.</p>"
+      "documentation": "<p>References a Kubernetes secret resource. This name of the secret must start and end with an alphanumeric character, is required to be lowercase, can include periods (.) and hyphens (-), and can't contain more than 253 characters.</p>"
    },
    "ImagePullSecrets": {
      "type": "list",
@@ -2920,15 +2972,15 @@
        },
        "state": {
          "shape": "JobStateTimeLimitActionsState",
-          "documentation": "<p>The state of the job needed to trigger the action. The only supported value is \"<code>RUNNABLE</code>\".</p>"
+          "documentation": "<p>The state of the job needed to trigger the action. The only supported value is <code>RUNNABLE</code>.</p>"
        },
        "maxTimeSeconds": {
          "shape": "Integer",
          "documentation": "<p>The approximate amount of time, in seconds, that must pass with the job in the specified state before the action is taken. The minimum value is 600 (10 minutes) and the maximum value is 86,400 (24 hours).</p>"
        },
        "action": {
          "shape": "JobStateTimeLimitActionsAction",
-          "documentation": "<p>The action to take when a job is at the head of the job queue in the specified state for the specified period of time. The only supported value is \"<code>CANCEL</code>\", which will cancel the job.</p>"
+          "documentation": "<p>The action to take when a job is at the head of the job queue in the specified state for the specified period of time. The only supported value is <code>CANCEL</code>, which will cancel the job.</p>"
        }
      },
      "documentation": "<p>Specifies an action that Batch will take after the job has remained at the head of the queue in the specified state for longer than the specified time.</p>"
@@ -3118,6 +3170,18 @@
      },
      "documentation": "<p>Linux-specific modifications that are applied to the container, such as details for device mappings.</p>"
    },
+    "ListAttemptEcsTaskDetails": {
+      "type": "list",
+      "member": {
+        "shape": "AttemptEcsTaskDetails"
+      }
+    },
+    "ListAttemptTaskContainerDetails": {
+      "type": "list",
+      "member": {
+        "shape": "AttemptTaskContainerDetails"
+      }
+    },
    "ListEcsTaskDetails": {
      "type": "list",
      "member": {
@@ -4425,5 +4489,5 @@
      }
    }
  },
-  "documentation": "<fullname>Batch</fullname> <p>Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of the batch computing to remove the undifferentiated heavy lifting of configuring and managing required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources d, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly.</p> <p>As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.</p>"
+  "documentation": "<fullname>Batch</fullname> <p>Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of the batch computing to remove the undifferentiated heavy lifting of configuring and managing required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly.</p> <p>As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.</p>"
 }
