Commit ec81634

Merge pull request aws#308 from awslabs/remove-p3
Remove p3 usage from the notebooks.
2 parents: 05bc564 + 0a0cb6e

File tree

2 files changed (+5, -5 lines)
sagemaker-python-sdk/pytorch_lstm_word_language_model/pytorch_rnn.ipynb

Lines changed: 3 additions & 3 deletions
@@ -31,7 +31,7 @@
 "\n",
 "## Setup\n",
 "\n",
-"_This notebook was created and tested on an ml.p3.2xlarge notebook instance._\n",
+"_This notebook was created and tested on an ml.p2.xlarge notebook instance._\n",
 "\n",
 "Let's start by creating a SageMaker session and specifying:\n",
 "\n",
@@ -171,7 +171,7 @@
 "metadata": {},
 "source": [
 "### Run training in SageMaker\n",
-"The PyTorch class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script and source directory, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on ml.p3.2xlarge instance. As you can see in this example you can also specify hyperparameters. "
+"The PyTorch class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script and source directory, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on ```ml.p2.xlarge``` instance. As you can see in this example you can also specify hyperparameters. "
 ]
 },
 {
@@ -186,7 +186,7 @@
 " role=role,\n",
 " framework_version='0.4.0',\n",
 " train_instance_count=1,\n",
-" train_instance_type='ml.p3.2xlarge',\n",
+" train_instance_type='ml.p2.xlarge',\n",
 " source_dir='source',\n",
 " # available hyperparameters: emsize, nhid, nlayers, lr, clip, epochs, batch_size,\n",
 " # bptt, dropout, tied, seed, log_interval\n",
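For context, the changed estimator configuration can be sketched as plain Python, without the SageMaker SDK, so the effect of the commit is visible in isolation. This is a minimal sketch, not the notebook's actual cell: the `entry_point` name and the role ARN are placeholders, and in the notebook these arguments are passed to the `PyTorch` estimator class.

```python
# Sketch of the word-language-model estimator arguments after this commit.
# entry_point and role are hypothetical placeholders; the other values
# mirror the diff above.
estimator_kwargs = {
    "entry_point": "train.py",                          # hypothetical script name
    "role": "arn:aws:iam::123456789012:role/example",   # placeholder IAM role
    "framework_version": "0.4.0",
    "train_instance_count": 1,
    "train_instance_type": "ml.p2.xlarge",  # was ml.p3.2xlarge before this commit
    "source_dir": "source",
}

# The point of the commit: no p3 instance types remain in the configuration.
assert "p3" not in estimator_kwargs["train_instance_type"]
```

In the notebook itself these keyword arguments go to `sagemaker.pytorch.PyTorch(**estimator_kwargs)`, followed by a call to `fit()` on the resulting estimator.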

sagemaker-python-sdk/pytorch_mnist/pytorch_mnist.ipynb

Lines changed: 2 additions & 2 deletions
@@ -143,7 +143,7 @@
 "source": [
 "### Run training in SageMaker\n",
 "\n",
-"The `PyTorch` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, the training instance type, and hyperparameters. In this case we are going to run our training job on 2 ```ml.p3.2xlarge``` instances. But this example can be ran on one or multiple, cpu or gpu instances ([full list of available instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)). The hyperparameters parameter is a dict of values that will be passed to your training script -- you can see how to access these values in the `mnist.py` script above."
+"The `PyTorch` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, the training instance type, and hyperparameters. In this case we are going to run our training job on 2 ```ml.c4.xlarge``` instances. But this example can be ran on one or multiple, cpu or gpu instances ([full list of available instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)). The hyperparameters parameter is a dict of values that will be passed to your training script -- you can see how to access these values in the `mnist.py` script above."
 ]
 },
 {
@@ -158,7 +158,7 @@
 " role=role,\n",
 " framework_version='0.4.0',\n",
 " train_instance_count=2,\n",
-" train_instance_type='ml.p3.2xlarge',\n",
+" train_instance_type='ml.c4.xlarge',\n",
 " hyperparameters={\n",
 " 'epochs': 6,\n",
 " 'backend': 'gloo'\n",
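Note that this second notebook swaps the GPU instance for a CPU one (`ml.c4.xlarge`), which works because the training job uses the gloo backend for distributed PyTorch; gloo supports CPU training, whereas nccl would require GPUs. A minimal sketch of the changed configuration, assuming the role ARN as a placeholder (the `mnist.py` entry point is named in the diff above):

```python
# Sketch of the MNIST estimator arguments after this commit:
# 2 CPU instances with the gloo distributed backend.
estimator_kwargs = {
    "entry_point": "mnist.py",
    "role": "arn:aws:iam::123456789012:role/example",   # placeholder IAM role
    "framework_version": "0.4.0",
    "train_instance_count": 2,
    "train_instance_type": "ml.c4.xlarge",  # was ml.p3.2xlarge before this commit
    "hyperparameters": {
        "epochs": 6,
        "backend": "gloo",  # gloo supports CPU training; nccl would need GPUs
    },
}

# Sanity checks: a CPU instance family paired with a CPU-capable backend.
assert estimator_kwargs["train_instance_type"].startswith("ml.c")
assert estimator_kwargs["hyperparameters"]["backend"] == "gloo"
```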
