update warning message #1587

Merged · 6 commits · Jun 17, 2020
Changes from 1 commit
10 changes: 5 additions & 5 deletions src/sagemaker/fw_utils.py
@@ -49,8 +49,8 @@
"Please set the argument \"py_version='py3'\" to use the Python 3 {framework} image."
)
PARAMETER_SERVER_MULTI_GPU_WARNING = (
"You have selected a multi-GPU training instance type. "
"You have also enabled parameter server for distributed training. "
"If you have selected a multi-GPU training instance type, "
"and have also enabled parameter server for distributed training. "
"Distributed training with the default parameter server configuration will not "
"fully leverage all GPU cores; the parameter server will be configured to run "
"only one worker per host regardless of the number of GPUs."
@@ -617,9 +617,9 @@ def warn_if_parameter_server_with_multi_gpu(training_instance_type, distributions):
         return
 
     is_multi_gpu_instance = (
-        training_instance_type.split(".")[1].startswith("p")
-        and training_instance_type not in SINGLE_GPU_INSTANCE_TYPES
-    )
+        training_instance_type == "local_gpu"

Contributor:
How are we checking that this is a multi-GPU instance just by checking for "local_gpu"? Or is it assumed that whoever is testing locally has verified that their local instance supports multi-GPU?

Contributor (Author):
We cannot check that, so I updated the warning message to use "if".

+        or training_instance_type.split(".")[1].startswith("p")
+    ) and training_instance_type not in SINGLE_GPU_INSTANCE_TYPES
 
     ps_enabled = "parameter_server" in distributions and distributions["parameter_server"].get(
         "enabled", False
10 changes: 10 additions & 0 deletions tests/unit/test_fw_utils.py
@@ -1272,3 +1272,13 @@ def test_warn_if_parameter_server_with_multi_gpu(caplog):
         training_instance_type=train_instance_type, distributions=distributions
     )
     assert fw_utils.PARAMETER_SERVER_MULTI_GPU_WARNING in caplog.text
+
+
+def test_warn_if_parameter_server_with_multi_gpu_local_gpu(caplog):
+    train_instance_type = "local_gpu"
+    distributions = {"parameter_server": {"enabled": True}}
+
+    fw_utils.warn_if_parameter_server_with_multi_gpu(
+        training_instance_type=train_instance_type, distributions=distributions
+    )
+    assert fw_utils.PARAMETER_SERVER_MULTI_GPU_WARNING in caplog.text
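
For contrast, a hypothetical negative test (not part of this PR): a single-GPU instance type should not trigger the warning, assuming "ml.p3.2xlarge" is listed in SINGLE_GPU_INSTANCE_TYPES.

def test_no_warn_if_parameter_server_with_single_gpu(caplog):  # hypothetical, not in this PR
    train_instance_type = "ml.p3.2xlarge"  # assumed to be in SINGLE_GPU_INSTANCE_TYPES
    distributions = {"parameter_server": {"enabled": True}}

    fw_utils.warn_if_parameter_server_with_multi_gpu(
        training_instance_type=train_instance_type, distributions=distributions
    )
    assert fw_utils.PARAMETER_SERVER_MULTI_GPU_WARNING not in caplog.text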