Custom s3 path for uploading spark configuration files #3200
Comments
I also ran into this when using the Spark processors for a normal job where certain prefixes in a bucket are locked down. There's no way to write the files to any prefix other than the root of the bucket. I ended up patching around it with the hack below. Ideally, this could either be a direct input to the Spark processors or specified in the conf like @nebur395 was thinking; it seems like specifying the URL in the config might fail, though.

import json
from io import BytesIO

from sagemaker.processing import ProcessingInput
from sagemaker.s3 import S3Uploader
from sagemaker.spark.processing import _SparkProcessorBase, PySparkProcessor


def _stage_configuration(self, configuration):
    """Serializes and uploads the user-provided EMR application configuration to S3.

    This method prepares an input channel.

    Args:
        configuration (Dict): the configuration dict for the EMR application configuration.
    """
    serialized_configuration = BytesIO(json.dumps(configuration).encode("utf-8"))
    s3_uri = (
        # Patch: use a custom prefix instead of the default bucket/job-name prefix.
        # my_custom_prefix is defined elsewhere, e.g. "s3://my-bucket/allowed/prefix/"
        f"{my_custom_prefix}"
        f"input/{self._conf_container_input_name}/{self._conf_file_name}"
    )
    S3Uploader.upload_string_as_file_body(
        body=serialized_configuration,
        desired_s3_uri=s3_uri,
        sagemaker_session=self.sagemaker_session,
    )
    conf_input = ProcessingInput(
        source=s3_uri,
        destination=f"{self._conf_container_base_path}{self._conf_container_input_name}",
        input_name=_SparkProcessorBase._conf_container_input_name,
    )
    return conf_input


# Monkey-patch the base class so every Spark processor picks up the custom prefix.
_SparkProcessorBase._stage_configuration = _stage_configuration

spark_processor = PySparkProcessor(
    base_job_name="my-job",
    role=role,
    instance_count=1,
    instance_type=instance_type,
    image_uri=image,
    sagemaker_session=sagemaker_session,
)
Yes. One more reason to have this feature is that with every pipeline deployment operation the configuration file is uploaded to a new directory, which breaks the step caching functionality. So I would use this feature in the opposite way: I would set a static path for the configuration file (or one based on its hash, whatever). This would allow using step caching.
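For illustration, a minimal sketch of that hash-based idea in Python; the bucket name and the spark-conf prefix layout are assumptions, not anything the SDK provides:

# Sketch only: derive the configuration upload prefix from a hash of the
# serialized configuration, so identical configurations always land at the
# same S3 key and step caching is not invalidated by a new random path.
import hashlib
import json


def config_s3_prefix(bucket, configuration):
    """Return a deterministic S3 prefix keyed by the configuration contents."""
    digest = hashlib.sha256(
        json.dumps(configuration, sort_keys=True).encode("utf-8")
    ).hexdigest()[:16]
    # "spark-conf" is an assumed prefix; adjust to your own bucket layout.
    return f"s3://{bucket}/spark-conf/{digest}/"


# For example, this could feed the my_custom_prefix used in the patch above:
# my_custom_prefix = config_s3_prefix("XYZ", configuration)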
I believe this is fixed?
Closing as fixed by #3486
Describe the feature you'd like
In order to be able to organise the S3 bucket into which the inputs and outputs of our SageMaker pipelines are uploaded, we want to be able to specify a custom Spark configuration path. To improve our data lineage we are trying to store these files by pipeline name and execution IDs. Currently there is no way to do so, and those Spark config files are uploaded automatically following this convention:
https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/spark/processing.py#L391-L394.
So ideally we would like to be able to specify a custom S3 path, or at least a prefix, for the uploaded files in S3. It's important to be able to use ExecutionVariables in that S3 path. Something like this:
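For illustration only, a hypothetical call shape for the requested feature, built with the ExecutionVariables and Join helpers the SDK already exposes for pipelines; the configuration_location parameter name is an assumption, not an existing argument:

# Hypothetical sketch of the requested API; `configuration_location` is assumed.
from sagemaker.workflow.execution_variables import ExecutionVariables
from sagemaker.workflow.functions import Join

configuration_location = Join(
    on="/",
    values=[
        "s3://XYZ",
        "demo-pipeline",
        ExecutionVariables.PIPELINE_EXECUTION_ID,
        "config",
    ],
)

# e.g. spark_processor.run(..., configuration=configuration,
#                          configuration_location=configuration_location)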
How would this feature be used? Please describe.
In the end the s3 bucket should look like something similar to this:
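(Illustrative layout; the exact key names below are assumptions based on the description that follows.)

s3://XYZ/demo-pipeline/execution-1/demo-processing/conf/configuration.json
s3://XYZ/demo-pipeline/execution-1/demo-processing/spark-event-logs/
s3://XYZ/demo-pipeline/execution-1/demo-processing/output/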
This way, everything related to that demo-processing step (Spark app logs, Spark app config files, Spark outputs, ...) lives under s3://XYZ/demo-pipeline/execution-1/, improving data lineage and reproducibility.