|
6 | 6 | SageMaker Python SDK
|
7 | 7 | ====================
|
8 | 8 |
|
| 9 | +.. image:: https://travis-ci.org/aws/sagemaker-python-sdk.svg?branch=master |
| 10 | + :target: https://travis-ci.org/aws/sagemaker-python-sdk |
| 11 | + :alt: Build Status |
| 12 | + |
| 13 | +.. image:: https://codecov.io/gh/aws/sagemaker-python-sdk/branch/master/graph/badge.svg |
| 14 | + :target: https://codecov.io/gh/aws/sagemaker-python-sdk |
| 15 | + :alt: CodeCov |
| 16 | + |
9 | 17 | SageMaker Python SDK is an open source library for training and deploying machine learning models on Amazon SageMaker.
|
10 | 18 |
|
11 | 19 | With the SDK, you can train and deploy models using popular deep learning frameworks: **Apache MXNet** and **TensorFlow**. You can also train and deploy models with **Amazon algorithms**, which are scalable implementations of core machine learning algorithms that are optimized for SageMaker and GPU training. If you have **your own algorithms** built into SageMaker-compatible Docker containers, you can train and host models using these as well.
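For example, training and deploying a framework model follows an estimator pattern along these lines. This is a minimal sketch rather than an example taken from this README: the entry point script, IAM role, S3 data location, and instance types are placeholder values you would replace with your own.

.. code:: python

    from sagemaker.mxnet import MXNet

    # Placeholder script, role, and instance settings -- substitute your own.
    estimator = MXNet(entry_point='train.py',
                      role='SageMakerRole',
                      train_instance_count=1,
                      train_instance_type='ml.p2.xlarge')

    # Launch a SageMaker training job, reading training data from S3.
    estimator.fit('s3://my-bucket/my-training-data')

    # Deploy the trained model to a real-time hosted endpoint.
    predictor = estimator.deploy(initial_instance_count=1,
                                 instance_type='ml.m4.xlarge')

    # predictor.predict(...) can then be used to get inferences from the endpoint.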
|
@@ -39,7 +47,7 @@ You can install from source by cloning this repository and issuing a pip install
|
39 | 47 |
|
40 | 48 | git clone https://github.com/aws/sagemaker-python-sdk.git
|
41 | 49 | python setup.py sdist
|
42 |
| - pip install dist/sagemaker-1.1.0.tar.gz |
| 50 | + pip install dist/sagemaker-1.1.2.tar.gz |
43 | 51 |
|
44 | 52 | Supported Python versions
|
45 | 53 | ~~~~~~~~~~~~~~~~~~~~~~~~~
|
@@ -914,9 +922,10 @@ More details on how to create input functions can be found in `Building Input Fun
|
914 | 922 | Creating a ``serving_input_fn``
|
915 | 923 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
916 | 924 |
|
917 |
| -During training, ``train_input_fn`` ingests data and prepares it for use by the model. |
918 |
| -At the end of training, similarly, ``serving_input_fn`` is used to create the model that |
919 |
| -is exported for TensorFlow Serving. This function has the following purposes: |
| 925 | +``serving_input_fn`` is used to define the shapes and types of the inputs |
| 926 | +the model accepts when the model is exported for TensorFlow Serving. ``serving_input_fn`` is called |
| 927 | +at the end of model training and is not called during inference. (If you'd like to preprocess inference data, |
| 928 | +see ``input_fn``.) This function has the following purposes: |
920 | 929 |
|
921 | 930 | - To add placeholders to the graph that the serving system will feed with inference requests.
|
922 | 931 | - To add any additional ops needed to convert data from the input format into the feature Tensors
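For reference, a ``serving_input_fn`` that fulfills these purposes might look like the following. This is a minimal sketch for a hypothetical model that accepts a single 4-dimensional float feature named ``inputs``; the feature name, shape, and argument name are illustrative assumptions, not part of this README's examples.

.. code:: python

    import tensorflow as tf

    INPUT_TENSOR_NAME = 'inputs'  # assumed feature name

    def serving_input_fn(hyperparameters):
        # Declare the shape and dtype of the input the exported model will
        # accept from TensorFlow Serving; the argument is unused here.
        feature_spec = {INPUT_TENSOR_NAME: tf.FixedLenFeature(dtype=tf.float32, shape=[4])}
        return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()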
|
@@ -1139,7 +1148,7 @@ You need to add them inside the hyperparameters dictionary in the
|
1139 | 1148 | - ``eval_hooks (list)`` A list of ``SessionRunHook`` hooks to pass during evaluation.
|
1140 | 1149 | - ``eval_delay_secs (int)`` Start evaluating after waiting for this many seconds.
|
1141 | 1150 | - ``continuous_eval_throttle_secs (int)`` Do not re-evaluate unless the last evaluation was started at least this many seconds ago.
|
1142 |
| -- ``min_eval_frequency (int)`` The minimum number of steps between evaluations. Of course, evaluation does not occur if no new snapshot is available, hence, this is the minimum. If 0, the evaluation will only happen after training. If None, defaults to default is 1000. |
| 1151 | +- ``min_eval_frequency (int)`` The minimum number of steps between evaluations. Evaluation does not occur if no new snapshot is available, so this value is only a minimum. If 0, evaluation happens only after training. If None, defaults to 1000. |
1143 | 1152 | - ``delay_workers_by_global_step (bool)`` If ``True``, delays training workers based on global step instead of time.
|
1144 | 1153 | - ``train_steps_per_iteration (int)`` Perform this many train steps for each training-evaluation iteration. With a small value, the model is evaluated more frequently and more checkpoints are saved.
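As a sketch of how these optional values are passed, they go into the ``hyperparameters`` dictionary of the ``TensorFlow`` estimator; the entry point, role, instance settings, and chosen values below are placeholders.

.. code:: python

    from sagemaker.tensorflow import TensorFlow

    tf_estimator = TensorFlow(entry_point='train.py',        # placeholder training script
                              role='SageMakerRole',          # placeholder IAM role
                              training_steps=10000,
                              evaluation_steps=100,
                              train_instance_count=1,
                              train_instance_type='ml.c4.xlarge',
                              hyperparameters={'min_eval_frequency': 500,
                                               'eval_delay_secs': 60})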
|
1145 | 1154 |
|
|