Commit 1ec3fb8

Author: awstools
1 parent 9695325

docs(client-ecs): This is a documentation-only release that updates the documentation to let customers know that Amazon Elastic Inference is no longer available.

File tree

5 files changed (+218 −14 lines changed)

clients/client-ecs/src/commands/CreateServiceCommand.ts  (+3 −1)

@@ -35,6 +35,9 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met
  * <note>
  * <p>On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.</p>
  * </note>
+ * <note>
+ * <p>Amazon Elastic Inference (EI) is no longer available to customers.</p>
+ * </note>
  * <p>In addition to maintaining the desired count of tasks in your service, you can
  * optionally run your service behind one or more load balancers. The load balancers
  * distribute traffic across the tasks that are associated with the service. For more
@@ -112,7 +115,6 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met
  * information about task placement and task placement strategies, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement.html">Amazon ECS
  * task placement</a> in the <i>Amazon Elastic Container Service Developer Guide</i>
  * </p>
- * <p>Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. </p>
  * @example
  * Use a bare-bones client and the command you need to make an API call.
  * ```javascript
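The CreateServiceCommand docs above break off at the start of a usage example. As a rough, hedged sketch of the input shape such a call takes (the field names follow the ECS CreateService API; the cluster, service, and task definition names here are hypothetical placeholders):

```typescript
// Hedged sketch: shape of a CreateServiceCommand input as described above.
// ECSClient/CreateServiceCommand live in @aws-sdk/client-ecs; a local
// interface stands in here so the sketch is self-contained.
interface CreateServiceInputSketch {
  cluster: string;
  serviceName: string;
  taskDefinition: string; // "family:revision", or family alone for the latest revision
  desiredCount: number;
}

const input: CreateServiceInputSketch = {
  cluster: "example-cluster",
  serviceName: "example-service",
  // Since March 21, 2024, an unpinned family resolves to the latest
  // revision before authorization, so a revision is pinned explicitly here.
  taskDefinition: "example-task:3",
  desiredCount: 2,
};

console.log(input.taskDefinition.includes(":") ? "pinned" : "unpinned");
```

With the real client this input would be sent via `new ECSClient({}).send(new CreateServiceCommand(input))`; pinning `family:revision` avoids relying on the latest-revision resolution behavior noted in the diff above.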

clients/client-ecs/src/commands/RunTaskCommand.ts  (+3 −1)

@@ -32,12 +32,14 @@ export interface RunTaskCommandOutput extends RunTaskResponse, __MetadataBearer
  * <note>
  * <p>On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.</p>
  * </note>
+ * <note>
+ * <p>Amazon Elastic Inference (EI) is no longer available to customers.</p>
+ * </note>
  * <p>You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places
  * tasks using placement constraints and placement strategies. For more information, see
  * <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html">Scheduling Tasks</a> in the <i>Amazon Elastic Container Service Developer Guide</i>.</p>
  * <p>Alternatively, you can use <code>StartTask</code> to use your own scheduler or
  * place tasks manually on specific container instances.</p>
- * <p>Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. </p>
  * <p>You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or
  * updating a service. For more information, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html#ebs-volume-types">Amazon EBS volumes</a> in the <i>Amazon Elastic Container Service Developer Guide</i>.</p>
  * <p>The Amazon ECS API follows an eventual consistency model. This is because of the
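The RunTask docs above point to placement constraints and placement strategies. A small hedged sketch of what a placement strategy list can look like (the strategy types `spread`, `binpack`, and `random` are from the ECS task placement docs; the specific field values are illustrative):

```typescript
// Hedged sketch of an ECS placementStrategy list as described in the
// Scheduling Tasks docs referenced above. A local type stands in for the
// SDK's PlacementStrategy so the sketch is self-contained.
type PlacementStrategy = {
  type: "spread" | "binpack" | "random";
  field?: string;
};

const placementStrategy: PlacementStrategy[] = [
  // Spread tasks evenly across Availability Zones first...
  { type: "spread", field: "attribute:ecs.availability-zone" },
  // ...then pack tasks onto instances with the least available memory.
  { type: "binpack", field: "memory" },
];

console.log(placementStrategy.map((s) => s.type).join(","));
```

In a real call this array would go in the `placementStrategy` field of the RunTask input; ECS evaluates the strategies in order.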

clients/client-ecs/src/commands/StartTaskCommand.ts  (+3 −1)

@@ -33,7 +33,9 @@ export interface StartTaskCommandOutput extends StartTaskResponse, __MetadataBea
  * <note>
  * <p>On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.</p>
  * </note>
- * <p>Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service. </p>
+ * <note>
+ * <p>Amazon Elastic Inference (EI) is no longer available to customers.</p>
+ * </note>
  * <p>Alternatively, you can use <code>RunTask</code> to place tasks for you. For more
  * information, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html">Scheduling Tasks</a> in the <i>Amazon Elastic Container Service Developer Guide</i>.</p>
  * <p>You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or

clients/client-ecs/src/models/models_0.ts  (+204 −6)

@@ -716,11 +716,13 @@ export interface ClusterConfiguration {
  * <code>FARGATE_SPOT</code> capacity providers. The Fargate capacity providers are
  * available to all accounts and only need to be associated with a cluster to be used in a
  * capacity provider strategy.</p>
- * <p>With <code>FARGATE_SPOT</code>, you can run interruption tolerant tasks at a rate
- * that's discounted compared to the <code>FARGATE</code> price. <code>FARGATE_SPOT</code>
- * runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are
- * interrupted with a two-minute warning. <code>FARGATE_SPOT</code> only supports Linux
- * tasks with the X86_64 architecture on platform version 1.3.0 or later.</p>
+ * <p>With <code>FARGATE_SPOT</code>, you can run interruption tolerant tasks at a rate that's
+ * discounted compared to the <code>FARGATE</code> price. <code>FARGATE_SPOT</code> runs
+ * tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are
+ * interrupted with a two-minute warning. <code>FARGATE_SPOT</code> supports Linux tasks
+ * with the X86_64 architecture on platform version 1.3.0 or later.
+ * <code>FARGATE_SPOT</code> supports Linux tasks with the ARM64 architecture on
+ * platform version 1.4.0 or later.</p>
  * <p>A capacity provider strategy may contain a maximum of 6 capacity providers.</p>
  * @public
  */
@@ -1964,7 +1966,203 @@ export interface LogConfiguration {
   logDriver: LogDriver | undefined;

   /**
-   * <p>The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: <code>sudo docker version --format '\{\{.Server.APIVersion\}\}'</code>
+   * <p>The configuration options to send to the log driver.</p>
+   * <p>The options you can specify depend on the log driver. Some of the options you can specify when you use the <code>awslogs</code> log driver to route logs to Amazon CloudWatch include the following:</p>
+   * <dl>
+   * <dt>awslogs-create-group</dt>
+   * <dd>
+   * <p>Required: No</p>
+   * <p>Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to <code>false</code>.</p>
+   * <note>
+   * <p>Your IAM policy must include the <code>logs:CreateLogGroup</code> permission before you attempt to use <code>awslogs-create-group</code>.</p>
+   * </note>
+   * </dd>
+   * <dt>awslogs-region</dt>
+   * <dd>
+   * <p>Required: Yes</p>
+   * <p>Specify the Amazon Web Services Region that the <code>awslogs</code> log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.</p>
+   * </dd>
+   * <dt>awslogs-group</dt>
+   * <dd>
+   * <p>Required: Yes</p>
+   * <p>Make sure to specify a log group that the <code>awslogs</code> log driver sends its log streams to.</p>
+   * </dd>
+   * <dt>awslogs-stream-prefix</dt>
+   * <dd>
+   * <p>Required: Yes, when using the Fargate launch type. Optional for the EC2 launch type.</p>
+   * <p>Use the <code>awslogs-stream-prefix</code> option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format <code>prefix-name/container-name/ecs-task-id</code>.</p>
+   * <p>If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.</p>
+   * <p>For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.</p>
+   * <p>You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.</p>
+   * </dd>
+   * <dt>awslogs-datetime-format</dt>
+   * <dd>
+   * <p>Required: No</p>
+   * <p>This option defines a multiline start pattern in Python <code>strftime</code> format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages.</p>
+   * <p>One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.</p>
+   * <p>For more information, see <a href="https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format">awslogs-datetime-format</a>.</p>
+   * <p>You cannot configure both the <code>awslogs-datetime-format</code> and <code>awslogs-multiline-pattern</code> options.</p>
+   * <note>
+   * <p>Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.</p>
+   * </note>
+   * </dd>
+   * <dt>awslogs-multiline-pattern</dt>
+   * <dd>
+   * <p>Required: No</p>
+   * <p>This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages.</p>
+   * <p>For more information, see <a href="https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern">awslogs-multiline-pattern</a>.</p>
+   * <p>This option is ignored if <code>awslogs-datetime-format</code> is also configured.</p>
+   * <p>You cannot configure both the <code>awslogs-datetime-format</code> and <code>awslogs-multiline-pattern</code> options.</p>
+   * <note>
+   * <p>Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.</p>
+   * </note>
+   * </dd>
+   * <dt>mode</dt>
+   * <dd>
+   * <p>Required: No</p>
+   * <p>Valid values: <code>non-blocking</code> | <code>blocking</code></p>
+   * <p>This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.</p>
+   * <p>If you use the <code>blocking</code> mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the <code>stdout</code> and <code>stderr</code> streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.</p>
+   * <p>If you use the <code>non-blocking</code> mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the <code>max-buffer-size</code> option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see <a href="http://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/">Preventing log loss with non-blocking mode in the <code>awslogs</code> container log driver</a>.</p>
+   * </dd>
+   * <dt>max-buffer-size</dt>
+   * <dd>
+   * <p>Required: No</p>
+   * <p>Default value: <code>1m</code></p>
+   * <p>When <code>non-blocking</code> mode is used, the <code>max-buffer-size</code> log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.</p>
+   * </dd>
+   * </dl>
+   * <p>To route logs using the <code>splunk</code> log router, you need to specify a <code>splunk-token</code> and a <code>splunk-url</code>.</p>
+   * <p>When you use the <code>awsfirelens</code> log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the <code>log-driver-buffer-limit</code> option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.</p>
+   * <p>Other options you can specify when using <code>awsfirelens</code> to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with <code>region</code> and a name for the log stream with <code>delivery_stream</code>.</p>
+   * <p>When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with <code>region</code> and a data stream name with <code>stream</code>.</p>
+   * <p>When you export logs to Amazon OpenSearch Service, you can specify options like <code>Name</code>, <code>Host</code> (OpenSearch Service endpoint without protocol), <code>Port</code>, <code>Index</code>, <code>Type</code>, <code>Aws_auth</code>, <code>Aws_region</code>, <code>Suppress_Type_Name</code>, and <code>tls</code>.</p>
+   * <p>When you export logs to Amazon S3, you can specify the bucket using the <code>bucket</code> option. You can also specify <code>region</code>, <code>total_file_size</code>, <code>upload_timeout</code>, and <code>use_put_object</code> as options.</p>
+   * <p>This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: <code>sudo docker version --format '\{\{.Server.APIVersion\}\}'</code>
   * </p>
   * @public
   */
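The `awslogs` option reference above encodes several constraints: `awslogs-region` and `awslogs-group` are required, the two multiline options are mutually exclusive, and `max-buffer-size` only applies in `non-blocking` mode. A hedged sketch that checks those constraints on a plain options map follows; `validateAwslogsOptions` is a hypothetical helper for illustration, not part of the SDK:

```typescript
// Hedged sketch: sanity-check an awslogs options map against the
// constraints documented above. Option names follow the ECS docs;
// this validator itself is hypothetical.
type AwslogsOptions = Record<string, string>;

function validateAwslogsOptions(opts: AwslogsOptions): string[] {
  const problems: string[] = [];
  if (!opts["awslogs-region"]) problems.push("awslogs-region is required");
  if (!opts["awslogs-group"]) problems.push("awslogs-group is required");
  // awslogs-datetime-format and awslogs-multiline-pattern cannot both be set.
  if (opts["awslogs-datetime-format"] && opts["awslogs-multiline-pattern"]) {
    problems.push("awslogs-datetime-format and awslogs-multiline-pattern are mutually exclusive");
  }
  // max-buffer-size only controls the buffer used by non-blocking mode.
  if (opts["max-buffer-size"] && opts["mode"] !== "non-blocking") {
    problems.push("max-buffer-size only applies when mode is non-blocking");
  }
  return problems;
}

const ok = validateAwslogsOptions({
  "awslogs-region": "us-east-1",
  "awslogs-group": "/ecs/example-app",
  "awslogs-stream-prefix": "example",
  "mode": "non-blocking",
  "max-buffer-size": "4m",
});
console.log(ok.length); // 0
```

These checks mirror the documented rules only; the service performs its own validation, and the full option set depends on the log driver in use.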
