
fix: Normalizing job_name in the ProcessingStep. #2786


Closed
wants to merge 1 commit

Conversation


@purak24 purak24 commented Dec 6, 2021

Issue #, if available: #2736

Description of changes: Pass the job_name from the processor when normalizing the args in the ProcessingStep, so that a fresh job name is not generated every time the arguments are built.
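
A hedged sketch of the intent (the _normalize_args helper and its signature are assumptions inferred from the diff below, not necessarily the exact SDK code): the step forwards its stored job_name into the processor's argument normalization instead of letting the processor generate a fresh one.

    # Sketch only: ProcessingStep.arguments normalizes inputs/outputs through
    # the processor; passing a stable job_name keeps the derived S3 paths
    # deterministic across builds.
    normalized_inputs, normalized_outputs = self.processor._normalize_args(
        job_name=self.job_name,  # stable name instead of a generated one
        arguments=self.job_arguments,
        inputs=self.inputs,
        outputs=self.outputs,
        code=self.code,
    )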

Testing done: Ran test suite

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

  • I have read the CONTRIBUTING doc
  • I certify that the changes I am introducing will be backward compatible, and I have discussed concerns about this, if any, with the Python SDK team
  • I used the commit message format described in CONTRIBUTING
  • I have passed the region in to all S3 and STS clients that I've initialized as part of this change.
  • I have updated any necessary documentation, including READMEs and API docs (if appropriate)

Tests

  • I have added tests that prove my fix is effective or that my feature works (if appropriate)
  • I have added unit and/or integration tests as appropriate to ensure backward compatibility of the changes
  • I have checked that my tests are not configured for a specific region or account (if appropriate)
  • I have used unique_name_from_base to create resource names in integ tests (if appropriate)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@sagemaker-bot (Collaborator)

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-python-sdk-unit-tests
  • Commit ID: 4e37cf1
  • Result: FAILED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

@@ -496,6 +496,7 @@ def __init__(
         self.job_arguments = job_arguments
         self.code = code
         self.property_files = property_files
+        self.job_name = name
Contributor

If we use the step name as the job name, then all input and output locations will be overwritten when the args are normalized. In practical terms, that means any pipeline with a step named MyProcessingStep will start using the code for this processor.

Things get a little better if we include the pipeline_name in the job_name, but the code would still be overwritten, which prevents reproducibility (i.e., looking at a previous execution and going to S3 to see the script that was run).

Let me give this some thought. We need some signal of intentionality from the user here to maintain backwards compatibility.
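
To make the collision concrete (the bucket name and path layout below are assumptions for illustration, not the SDK's exact conventions): default code and output locations are derived from the job name, so every pipeline that reuses the step name resolves to the same S3 prefix.

    # Illustration only; the layout is an assumption.
    bucket = "my-default-bucket"
    job_name = "MyProcessingStep"  # step name reused as the job name
    code_location = "s3://{}/{}/input/code/script.py".format(bucket, job_name)
    output_location = "s3://{}/{}/output/output-1".format(bucket, job_name)
    # Two unrelated pipelines with a step named "MyProcessingStep" now upload
    # code to, and write outputs under, the same prefix, overwriting each other.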

Contributor

I think the best approach here is to hash the caller's script contents and include that in the job_name. This ensures there will be cache hits/misses as appropriate while not requiring any input from the user. I can work on this change, since it's a little heavier-weight than initially expected.
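
A minimal sketch of that idea (the helper name and the name format here are assumptions, not the eventual implementation in #2790):

    import hashlib

    def job_name_with_code_hash(base_name, script_path):
        # A short content hash makes the job name change exactly when the
        # script changes, so caching hits and misses as appropriate.
        with open(script_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()[:8]
        return "{}-{}".format(base_name, digest)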

Author

Good callout! How do you feel about using the enable_caching boolean as the signal of intentionality from the user? Something like:

name = pipeline_name + job_name
# cache_config may be None, so guard before reading enable_caching
self.job_name = name if cache_config and cache_config.enable_caching else None

Author

Your comment loaded later for me; I like your idea as well. This would go against the documentation here:

Pipelines doesn't check whether the data or code that the arguments point to has changed

As a user, though, I do believe it would be nice to evolve the pipeline step caching functionality to detect data/code changes!

Contributor

I'm going to close this PR in favor of #2790.

@staubhp staubhp closed this Dec 8, 2021