246 | 246 | {"shape":"ConnectorServerException"},
247 | 247 | {"shape":"ConnectorAuthenticationException"}
248 | 248 | ],
249 | | - "documentation":"<p>Registers a new connector with your Amazon Web Services account. Before you can register the connector, you must deploy lambda in your account.</p>" |
| 249 | + "documentation":"<p>Registers a new custom connector with your Amazon Web Services account. Before you can register the connector, you must deploy the associated AWS Lambda function in your account.</p>" |
250 | 250 | },
251 | 251 | "StartFlow":{
252 | 252 | "name":"StartFlow",
308 | 308 | {"shape":"ConflictException"},
309 | 309 | {"shape":"InternalServerException"}
310 | 310 | ],
311 | | - "documentation":"<p>Unregisters the custom connector registered in your account that matches the connectorLabel provided in the request.</p>" |
| 311 | + "documentation":"<p>Unregisters the custom connector registered in your account that matches the connector label provided in the request.</p>" |
312 | 312 | },
313 | 313 | "UntagResource":{
314 | 314 | "name":"UntagResource",
342 | 342 | ],
343 | 343 | "documentation":"<p> Updates a given connector profile associated with your account. </p>"
344 | 344 | },
| 345 | + "UpdateConnectorRegistration":{ |
| 346 | + "name":"UpdateConnectorRegistration", |
| 347 | + "http":{ |
| 348 | + "method":"POST", |
| 349 | + "requestUri":"/update-connector-registration" |
| 350 | + }, |
| 351 | + "input":{"shape":"UpdateConnectorRegistrationRequest"}, |
| 352 | + "output":{"shape":"UpdateConnectorRegistrationResponse"}, |
| 353 | + "errors":[ |
| 354 | + {"shape":"ValidationException"}, |
| 355 | + {"shape":"ConflictException"}, |
| 356 | + {"shape":"AccessDeniedException"}, |
| 357 | + {"shape":"ResourceNotFoundException"}, |
| 358 | + {"shape":"ServiceQuotaExceededException"}, |
| 359 | + {"shape":"ThrottlingException"}, |
| 360 | + {"shape":"InternalServerException"}, |
| 361 | + {"shape":"ConnectorServerException"}, |
| 362 | + {"shape":"ConnectorAuthenticationException"} |
| 363 | + ], |
| 364 | + "documentation":"<p>Updates a custom connector that you've previously registered. This operation updates the connector with one of the following:</p> <ul> <li> <p>The latest version of the AWS Lambda function that's assigned to the connector</p> </li> <li> <p>A new AWS Lambda function that you specify</p> </li> </ul>" |
| 365 | + }, |
345 | 366 | "UpdateFlow":{
346 | 367 | "name":"UpdateFlow",
347 | 368 | "http":{
4149 | 4170 | },
4150 | 4171 | "dataTransferApi":{
4151 | 4172 | "shape":"SalesforceDataTransferApi",
4152 | | - "documentation":"<p>Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce.</p> <dl> <dt>AUTOMATIC</dt> <dd> <p>The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.</p> <p>Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn't transfer Salesforce compound fields.</p> <p>By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.</p> </dd> <dt>BULKV2</dt> <dd> <p>Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it's optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.</p> <p>Note that Bulk API 2.0 does not transfer Salesforce compound fields.</p> </dd> <dt>REST_SYNC</dt> <dd> <p>Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a vary large set of data, it might fail with a timed out error.</p> </dd> </dl>" |
| 4173 | + "documentation":"<p>Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce.</p> <dl> <dt>AUTOMATIC</dt> <dd> <p>The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.</p> <p>Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn't transfer Salesforce compound fields.</p> <p>By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.</p> </dd> <dt>BULKV2</dt> <dd> <p>Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it's optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.</p> <p>Note that Bulk API 2.0 does not transfer Salesforce compound fields.</p> </dd> <dt>REST_SYNC</dt> <dd> <p>Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.</p> </dd> </dl>" |
4153 | 4174 | }
4154 | 4175 | },
4155 | 4176 | "documentation":"<p> The properties that are applied when Salesforce is being used as a source. </p>"
5064 | 5085 | }
5065 | 5086 | }
5066 | 5087 | },
| 5088 | + "UpdateConnectorRegistrationRequest":{ |
| 5089 | + "type":"structure", |
| 5090 | + "required":["connectorLabel"], |
| 5091 | + "members":{ |
| 5092 | + "connectorLabel":{ |
| 5093 | + "shape":"ConnectorLabel", |
| 5094 | + "documentation":"<p>The name of the connector. The name is unique for each connector registration in your AWS account.</p>" |
| 5095 | + }, |
| 5096 | + "description":{ |
| 5097 | + "shape":"Description", |
| 5098 | + "documentation":"<p>A description of the update that you're applying to the connector.</p>" |
| 5099 | + }, |
| 5100 | + "connectorProvisioningConfig":{"shape":"ConnectorProvisioningConfig"} |
| 5101 | + } |
| 5102 | + }, |
| 5103 | + "UpdateConnectorRegistrationResponse":{ |
| 5104 | + "type":"structure", |
| 5105 | + "members":{ |
| 5106 | + "connectorArn":{ |
| 5107 | + "shape":"ARN", |
| 5108 | + "documentation":"<p>The ARN of the connector being updated.</p>" |
| 5109 | + } |
| 5110 | + } |
| 5111 | + }, |
5067 | 5112 | "UpdateFlowRequest":{
5068 | 5113 | "type":"structure",
5069 | 5114 | "required":[
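For reference, a minimal sketch of invoking the new UpdateConnectorRegistration operation through the generated boto3 "appflow" client, using only the request members shown above (connectorLabel, description, connectorProvisioningConfig) and the connectorArn field of the response. The connector label and Lambda ARN are placeholders, and the lambda/lambdaArn layout inside connectorProvisioningConfig is an assumption carried over from the custom-connector provisioning config rather than something spelled out in this diff.

import boto3

appflow = boto3.client("appflow")

# Update a previously registered custom connector so that it points at a new
# or newly published version of the backing Lambda function.
response = appflow.update_connector_registration(
    connectorLabel="my-custom-connector",  # placeholder label
    description="Repoint the connector at an updated Lambda function",
    connectorProvisioningConfig={
        "lambda": {
            # Placeholder ARN; the key names inside this structure are assumed,
            # since ConnectorProvisioningConfig members are not shown in this hunk.
            "lambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:my-connector"
        }
    },
)

# UpdateConnectorRegistrationResponse carries the ARN of the connector being updated.
print(response.get("connectorArn"))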