This runs a transform job against all the files under ``s3://mybucket/path/to/my/csv/data``, transforming the input
data in order with each model container in the pipeline. For each input file that was successfully transformed, one output file in ``s3://my-output-bucket/path/to/my/output/data/``
will be created with the same name, appended with '.out'.

This transform job splits CSV files on newline separators, which is especially useful if the input files are large.
The transform job assembles the outputs with line separators when writing each input file's corresponding output file.

Each payload sent to the first model container will be up to six megabytes, and up to eight inference requests are sent to the
first model container at the same time. Because each payload consists of a mini-batch of multiple CSV records, the model
containers transform each mini-batch of records.
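The split-and-assemble behavior described above can be sketched in plain Python. This is an illustration of the semantics, not the SageMaker implementation: the function names and the packing logic are hypothetical, and only the newline splitting, the payload size cap, and the line-separated reassembly mirror the text.

.. code:: python

    # Illustrative sketch: pack newline-split CSV records into mini-batch
    # payloads under a size cap, then reassemble outputs with line separators.
    MAX_PAYLOAD_BYTES = 6 * 1024 * 1024  # up to six megabytes per payload

    def split_into_payloads(csv_text, max_bytes=MAX_PAYLOAD_BYTES):
        """Split input on newlines and pack records into mini-batch payloads."""
        payloads, current, current_size = [], [], 0
        for record in csv_text.splitlines():
            record_size = len(record.encode("utf-8")) + 1  # +1 for the newline
            if current and current_size + record_size > max_bytes:
                payloads.append("\n".join(current))
                current, current_size = [], 0
            current.append(record)
            current_size += record_size
        if current:
            payloads.append("\n".join(current))
        return payloads

    def assemble_outputs(transformed_payloads):
        """Join per-payload outputs with line separators into one '.out' body."""
        return "\n".join(transformed_payloads)

    # A tiny cap forces multiple mini-batches; reassembly restores the input.
    records = "a,1\nb,2\nc,3"
    batches = split_into_payloads(records, max_bytes=8)
    print(batches)                                # ['a,1\nb,2', 'c,3']
    print(assemble_outputs(batches) == records)   # True

Each element of ``batches`` stands in for one payload sent to the first model container; in the real service the containers transform each mini-batch before the outputs are assembled.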
For comprehensive examples of how to use Inference Pipelines, please refer to the following notebooks: