
Retry attempts that fail due to a connection timeout #24


Merged: 23 commits into main from retry-connection-attempt-failures on Aug 5, 2022

Conversation

@susodapop (Contributor) commented on Jul 28, 2022

Note for reviewers

On 4 August 2022 I reverted all changes in this PR so I could reimplement and apply all of your review feedback. This happened in fa1fd50. Every subsequent commit encapsulates one logical change to the code; working through them one at a time should be quite easy.

  1. Isolate delay bounding bf65b81
  2. Move error details scope up one level a0d340e (H/T @moderakh )
  3. Retry GetOperationStatus for OSError a55cf9d (thanks @sander-goos and @benfleis )
  4. Log when we do it 38411a8 (H/T @benfleis )

The final four commits are simple code cleanup plus a fix for one warning in the test suite that is unrelated to this change.

Since our e2e tests are not yet enabled via GitHub Actions, I ran them locally and they all passed.

Description

Currently we only retry attempts that returned a 429 or 503 status code and included a Retry-After header. This pull request also allows GetOperationStatus requests to be retried if they fail with an OSError exception. The configured retry policy is still honoured with regard to maximum attempts and maximum retry duration.
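
As a rough illustration of how those two bounds interact, here is a sketch only; `attempt_request`, `max_attempts`, and `max_duration_s` are hypothetical names, not the connector's actual API:

import time

def call_with_retries(attempt_request, max_attempts=5, max_duration_s=60.0, delay_s=1.0):
    """Retry a callable until it succeeds, the attempt budget is spent,
    or the total retry duration would be exceeded."""
    start = time.monotonic()
    for attempt in range(1, max_attempts + 1):
        try:
            return attempt_request()
        except OSError:
            out_of_time = (time.monotonic() - start) + delay_s > max_duration_s
            if attempt == max_attempts or out_of_time:
                raise  # retry policy exhausted; surface the original error
            time.sleep(delay_s)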

Background

The reason we only retry 429 and 503 responses today is that retries must be idempotent; otherwise the connector could cause unexpected or harmful consequences (data loss, excess resource utilisation, etc.).

We know that retrying after a 429/503 response is idempotent because the attempt was halted before the server could execute it, regardless of whether the attempted operation was itself idempotent.

We also know that GetOperationStatus requests are idempotent because they do not modify data on the server. Therefore we can add an extra case to our retry allow list (sketched just below). A request may be retried when both of the following hold:

  • The request is a GetOperationStatus command
  • It failed due to an OSError such as TimeoutError or ConnectionResetError
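
A minimal sketch of that allow-list check (the function and argument names are illustrative, not the connector's internals):

def is_safe_to_retry(method_name, error):
    # GetOperationStatus never modifies server state, so retrying it after a
    # transport-level failure is idempotent. TimeoutError and
    # ConnectionResetError are both subclasses of OSError.
    return method_name == "GetOperationStatus" and isinstance(error, OSError)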

Previously we attempted this same behaviour by retrying GetOperationStatus requests regardless of the nature of the exception. That change could not pass our e2e tests because there are valid cases where GetOperationStatus raises an exception from within our own library code: for example, if an operation is canceled in a separate thread, GetOperationStatus will raise a "DOES NOT EXIST" exception.

Logging Behaviour

The connector will log whenever it retries an attempt because of an OSError: at level INFO if the OSError is one we consider normal, and at level WARNING if the OSError seems unusual. The codes we consider normal are:

info_errs = [           # | Debian | Darwin |
    errno.ESHUTDOWN,    # |   32   |   32   |
    errno.EAFNOSUPPORT, # |   97   |   47   |
    errno.ECONNRESET,   # |  104   |   54   |
    errno.ETIMEDOUT,    # |  110   |   60   |
]

The full set of OSError codes is platform-specific. I wrote this patch to target a specific customer scenario in which GetOperationStatus requests needed to be retried after an operating system socket timeout exception. In that scenario the error code was Errno 110: Connection timed out. However, that code is specific to Linux; on a Darwin/macOS host the code would be 60, and on Windows it would be 10060.

Rather than catch these numeric codes explicitly, I use Python's errno built-in module to check for errno.ETIMEDOUT, which resolves to the platform-specific code at runtime. I tested this manually on Linux and macOS (but not on Windows). @benfleis helped me pick what we consider "normal".
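
For example, the same symbolic constant resolves to a different number depending on the platform; a quick check from a Python shell:

import errno
import os

# 110 on Linux (Debian), 60 on Darwin/macOS; the symbolic name lets us compare
# err.errno without hard-coding either value.
print(errno.ETIMEDOUT)
print(errno.errorcode[errno.ETIMEDOUT])  # 'ETIMEDOUT'
print(os.strerror(errno.ETIMEDOUT))      # e.g. 'Connection timed out'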

We log all other OSError codes at WARNING because it would be pretty unusual for a request to fail because of, say, a "FileNotFound" error or a system fault.
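
Roughly, the level selection works like the sketch below (the logger setup and helper name are illustrative, not the connector's actual code):

import errno
import logging

logger = logging.getLogger(__name__)

INFO_ERRS = [errno.ESHUTDOWN, errno.EAFNOSUPPORT, errno.ECONNRESET, errno.ETIMEDOUT]

def log_oserror_retry(err, attempt, max_attempts):
    # Ordinary network failures are logged at INFO; anything else (e.g. an errno
    # normally associated with file access) is surprising for a network call,
    # so it gets WARNING.
    level = logging.INFO if err.errno in INFO_ERRS else logging.WARNING
    logger.log(level, "Retrying GetOperationStatus after %s (attempt %s/%s)",
               err, attempt, max_attempts)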

References

I found this article extremely helpful while formulating this fix. The author is solving a very similar problem across platforms.

Commit messages pushed during review (excerpts):

  • It doesn't like retry policy bounds == None.
  • Other OSErrors like `EINTR` could indicate a call was interrupted after it was received by the server, which would potentially not be idempotent.
  • Only make it non-null for retryable requests.
  • …every time. Whereas the previous approach they passed in ten seconds.

@sander-goos (Contributor) left a comment

Left some comments. Could you please link in the description the resources that assure the errors we catch are safe to retry (and platform independent)?

Review comment on the diff line `with self.assertRaises(OperationalError) as cm:`

Contributor: Can be more specific to assert RequestError.

@susodapop (Contributor Author) replied:

I actually removed the e2e tests for this behaviour because they were no more useful than unit tests. I'll add an e2e test for this scenario after we merge support for http proxies. Then we can simulate real timeouts.

@susodapop (Contributor Author):

The unit tests assert on RequestError, btw.

More commit messages pushed during review (excerpts):

  • Add retry_delay_default to use in this case.
  • Emit warnings for unexpected OSError codes.
  • DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead.

@susodapop requested a review from sander-goos on August 4, 2022 at 15:53
@susodapop (Contributor Author) commented:

@benfleis @sander-goos I updated the PR description to reflect the present state after this week's reviews / updates.

@benfleis (Contributor) left a comment

looks good!

Which real connection tests did you perform to validate? Related, perhaps worth adding a note to PR (or near the retry code) explaining any manual E2E tests you used to validate behavior in "real conditions", so you (and others) won't have to think so hard about what's worth doing as a 1-off manual test.

@susodapop (Contributor Author) replied:

Which real connection tests did you perform to validate?

I used mitmweb (more here) to intercept traffic between the connector and a SQL Warehouse. mitmweb lets me force a timeout or abort a connection, both of which raise OSError in Python. The expected behaviour was met: GetOperationStatus requests were retried within the retry policy, but other requests such as ExecuteStatement, FetchResults, OpenSession, and CloseSession were not.

This process is very manual, so I'm not yet incorporating it into the tests/e2e/driver_tests.py file. I will follow up with an outline of how to automate this testing in the future.

How I ran the tests

The actual integration test I ran looked like this:

def test_make_request_will_retry_GetOperationStatus(self):
    
    import thrift, errno
    from databricks.sql.thrift_api.TCLIService.TCLIService import Client
    from databricks.sql.exc import RequestError
    from databricks.sql.utils import NoRetryReason
    from databricks.sql import client

    from databricks.sql.thrift_api.TCLIService import ttypes

    with self.cursor() as cursor:
        cursor.execute("SELECT 1")
        op_handle = cursor.active_op_handle
    
    req = ttypes.TGetOperationStatusReq(
        operationHandle=op_handle,
        getProgressUpdate=False,
    )

    EXPECTED_RETRIES = 2

    with self.cursor({"_socket_timeout": 10, "_retry_stop_after_attempts_count": 2}) as cursor:
        _client = cursor.connection.thrift_backend._client

        with self.assertRaises(RequestError) as cm:
            breakpoint() # At this point I instructed mitmweb to intercept and suspend all requests
            cursor.connection.thrift_backend.make_request(_client.GetOperationStatus, req)

        self.assertEqual(NoRetryReason.OUT_OF_ATTEMPTS.value, cm.exception.context["no-retry-reason"])
        self.assertEqual(f'{EXPECTED_RETRIES}/{EXPECTED_RETRIES}', cm.exception.context["attempt"])

I used this Dockerfile to make this work on my local machine:

FROM python:3.7-slim-buster

RUN useradd --create-home pysql


# Debian packages (base image is python:3.7-slim-buster)
RUN apt-get update && \
  apt-get install -y --no-install-recommends \
    curl \
    gnupg \
    build-essential \
    pwgen \
    libffi-dev \
    sudo \
    git-core \
    # Additional packages required for data sources:
    libssl-dev \
    libsasl2-modules-gssapi-mit && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN sudo mkdir /usr/local/share/ca-certificates/extra
RUN curl mitm.it/cert/pem > mitmcert.pem
RUN openssl x509 -in mitmcert.pem -inform PEM -out mitm.crt
RUN sudo cp mitm.crt /usr/local/share/ca-certificates/extra/mitm.crt
RUN sudo update-ca-certificates

RUN pip install poetry --user

COPY --chown=pysql . /pysql
RUN chown pysql /pysql

WORKDIR /pysql

RUN python -m poetry install

I then configured Docker, via its config.json file, to forward all traffic from my containers through the mitmweb proxy running on my local environment, like this:

    "proxies":
    {
      "default":
      {
        "httpProxy": "http://<my local network ip address>:8080",
        "httpsProxy": "http://<my local network ip address>:8080",
        "noProxy": "pypi.org"
      }
    }

The mitmweb proxy must be running when building the Docker container.

Related, perhaps worth adding a note to PR (or near the retry code) explaining any manual E2E tests you used to validate behavior in "real conditions", so you (and others) won't have to think so hard about what's worth doing as a 1-off manual test.

Done here 10016ea

Commit message: This reverts commit 4db4ad0.

@susodapop merged commit c59c393 into main on Aug 5, 2022
@susodapop deleted the retry-connection-attempt-failures branch on August 5, 2022 at 21:23
@sander-goos (Contributor) commented:

Thanks for the fix and the detailed explanation!

moderakh pushed a commit to moderakh/databricks-sql-python that referenced this pull request Aug 24, 2022
* Isolate delay bounding logic
* Move error details scope up one-level.
* Retry GetOperationStatus if an OSError was raised during execution. Add retry_delay_default to use in this case.
* Log when a request is retried due to an OSError. Emit warnings for unexpected OSError codes
* Update docstring for make_request
* Nit: unit tests show the .warn message is deprecated. DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
