@@ -330,11 +330,11 @@ def __init__(
     s3_analysis_config_output_path (str): S3 prefix to store the analysis config output.
         If this field is None, then the ``s3_output_path`` will be used
        to store the ``analysis_config`` output.
-    label (str): Target attribute of the model required by bias metrics.
-        Specified as column name or index for CSV dataset or as JSONPath for JSONLines.
+    label (str): Target attribute of the model required by bias metrics. Specified as
+        column name or index for CSV dataset or as JMESPath expression for JSONLines.
         *Required parameter* except for when the input dataset does not contain the label.
-    features (List[str]): JSONPath for locating the feature columns for bias metrics if the
-        dataset format is JSONLines.
+    features (List[str]): JMESPath expression to locate the feature columns for
+        bias metrics if the dataset format is JSONLines.
     dataset_type (str): Format of the dataset. Valid values are ``"text/csv"`` for CSV,
         ``"application/jsonlines"`` for JSONLines, and
         ``"application/x-parquet"`` for Parquet.
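For a JSONLines dataset, both ``label`` and ``features`` are now documented as JMESPath expressions. A minimal sketch of a ``DataConfig`` built that way; the bucket, prefix, and record layout below are made up for illustration::

    from sagemaker import clarify

    # Hypothetical JSONLines records of the form:
    #   {"features": [1.5, 2.0, 0.0], "label": 1}
    data_config = clarify.DataConfig(
        s3_data_input_path="s3://example-bucket/train.jsonl",   # placeholder input
        s3_output_path="s3://example-bucket/clarify-output",    # placeholder output prefix
        label="label",          # JMESPath expression locating the target attribute
        features="features",    # JMESPath expression locating the feature values
        dataset_type="application/jsonlines",
    )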
@@ -716,11 +716,11 @@ def __init__(
     ``label_headers=['cat','dog','fish']`` and infer the predicted label to be ``'fish'``.

     Args:
-        label (str or int): Index or JSONPath location in the model output for the prediction.
-            In case, this is a predicted label of the same type as the label in the dataset,
-            no further arguments need to be specified.
-        probability (str or int): Index or JSONPath location in the model output
-            for the predicted score(s) .
+        label (str or int): Index or JMESPath expression to locate the prediction
+            in the model output. If this is a predicted label of the same type
+            as the label in the dataset, no further arguments need to be specified.
+        probability (str or int): Index or JMESPath expression to locate the predicted score(s)
+            in the model output.
         probability_threshold (float): An optional value for binary prediction tasks in which
             the model returns a probability, to indicate the threshold to convert the
             prediction to a boolean value. Default is ``0.5``.
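Under this wording, ``label`` and ``probability`` in ``ModelPredictedLabelConfig`` are indexes or JMESPath expressions into the model output. A minimal sketch, assuming the endpoint returns JSONLines records such as ``{"predicted_label": 1, "score": 0.73}``::

    from sagemaker import clarify

    predictions_config = clarify.ModelPredictedLabelConfig(
        label="predicted_label",     # JMESPath expression for the prediction
        probability="score",         # JMESPath expression for the predicted score
        probability_threshold=0.5,   # scores above 0.5 count as the positive label
    )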
@@ -1645,9 +1645,9 @@ def run_explainability(
         You can request multiple methods at once by passing in a list of
         `~sagemaker.clarify.ExplainabilityConfig`.
     model_scores (int or str or :class:`~sagemaker.clarify.ModelPredictedLabelConfig`):
-        Index or JSONPath to locate the predicted scores in the model output. This is not
-        required if the model output is a single score. Alternatively, it can be an instance
-        of :class:`~sagemaker.clarify.SageMakerClarifyProcessor`
+        Index or JMESPath expression to locate the predicted scores in the model output.
+        This is not required if the model output is a single score. Alternatively,
+        it can be an instance of :class:`~sagemaker.clarify.ModelPredictedLabelConfig`
         to provide more parameters like ``label_headers``.
     wait (bool): Whether the call should wait until the job completes (default: True).
     logs (bool): Whether to show the logs produced by the job.
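For ``run_explainability``, ``model_scores`` can be a plain index or a JMESPath expression when the model output contains more than one field. A sketch reusing the ``data_config`` from the earlier snippet and the same assumed model output; the processor, model, and SHAP baseline are placeholders::

    processor = clarify.SageMakerClarifyProcessor(
        role=role,                          # assumed IAM role, defined elsewhere
        instance_count=1,
        instance_type="ml.m5.xlarge",
        sagemaker_session=session,          # assumed Session object
    )
    model_config = clarify.ModelConfig(
        model_name="my-model",              # placeholder SageMaker model name
        instance_count=1,
        instance_type="ml.m5.xlarge",
        accept_type="application/jsonlines",
    )
    shap_config = clarify.SHAPConfig(
        baseline=[[0.5, 0.5, 0.5]],         # made-up baseline record
        num_samples=100,
        agg_method="mean_abs",
    )
    processor.run_explainability(
        data_config=data_config,
        model_config=model_config,
        explainability_config=shap_config,
        model_scores="score",               # JMESPath expression; an int index also works
    )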
@@ -1774,9 +1774,9 @@ def run_bias_and_explainability(
         str or
         :class:`~sagemaker.clarify.ModelPredictedLabelConfig`
     ):
-        Index or JSONPath to locate the predicted scores in the model output. This is not
-        required if the model output is a single score. Alternatively, it can be an instance
-        of :class:`~sagemaker.clarify.SageMakerClarifyProcessor`
+        Index or JMESPath expression to locate the predicted scores in the model output.
+        This is not required if the model output is a single score. Alternatively,
+        it can be an instance of :class:`~sagemaker.clarify.ModelPredictedLabelConfig`
         to provide more parameters like ``label_headers``.
     wait (bool): Whether the call should wait until the job completes (default: True).
     logs (bool): Whether to show the logs produced by the job.
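The same value on ``run_bias_and_explainability`` can also be a full ``ModelPredictedLabelConfig`` when extras such as ``label_headers`` are needed. A sketch reusing the objects from the previous example; the keyword name ``model_predicted_label_config``, the facet column, and the class names are assumptions, not taken from this diff::

    bias_config = clarify.BiasConfig(
        label_values_or_threshold=[1],
        facet_name="age",                   # hypothetical facet column
    )
    processor.run_bias_and_explainability(
        data_config=data_config,
        model_config=model_config,
        explainability_config=shap_config,
        bias_config=bias_config,
        model_predicted_label_config=clarify.ModelPredictedLabelConfig(
            probability="score",                      # JMESPath expression for the class scores
            label_headers=["cat", "dog", "fish"],     # hypothetical class names
        ),
    )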