Add context support #173


Merged: 2 commits merged into master on Jul 22, 2022

Conversation


@vr009 vr009 commented May 19, 2022

What has been done? Why? What problem is being solved?

This patch adds support for using a context in the API. The API is based
on request objects. Tests were added that cover almost all cases of
using a context in a query. Benchmark tests were added that are
equivalent to existing ones using the same query but without a context.

Closes #48

Related issues:
#48

Context support

Context support provides a mechanism for canceling requests sent to an instance.
Cancellation happens when the timeout/deadline expires or when the cancel function is called.
Contexts may also carry key-value pairs.

API

There are two common ways of representing a context in an API.

The standard net/http package carries it inside the Request object.

func NewRequestWithContext(ctx context.Context, method, url string, body io.Reader) (*Request, error)

func (r *Request) Context() context.Context

For outgoing client requests, the context controls cancellation.

For incoming server requests, the context is canceled when the client's connection closes,
the request is canceled (with HTTP/2), or when the ServeHTTP method returns.

The second approach.

The database/sql package passes the context as the first argument of each method.

func (db *DB) ExecContext(ctx context.Context, query string, args ...any) (Result, error)

func (db *DB) QueryContext(ctx context.Context, query string, args ...any) (*Rows, error)

The proposal for the API:

The decision is to use request objects due to their flexibility and convenience.
This also avoids duplicating every method of the common API with a variant whose
only difference is a context as the first argument.

Example:

    conn, err := Connect(server, opts)

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    req := NewSelectRequest(spaceNo).
        Index(indexNo).
        Limit(1).
        Iterator(IterEq).
        Key([]interface{}{uint(1010)}).
        WithContext(ctx)

    resp, err := conn.Do(req).Get()

Implementation

Let's consider the following points before discussing the implementation of context support.

  • A request that has already been sent to a Tarantool instance cannot be canceled.
  • When a context is passed to a synchronous call, the call must not return until the response
    arrives or the context is done.
  • One of the main purposes of using contexts is the ability to free internal resources.
  • There is no way to tell an instance that we no longer wait for the response to a canceled request.
  • There is already a mechanism for canceling requests via a common timeout.

The architecture of the internal queue

[diagram of the internal queue omitted]

Proposal

We keep the context inside the Future object, so it is tied to the request lifetime.

We add a goroutine that scans the queues of Future objects and removes each Future whose
context has expired.

We keep two queues: one for Futures with a non-nil context and one for the rest. Each goroutine
is responsible only for its own type of Futures. (Side effects are considered later.)

If a Future has already been removed due to its canceled context and a response for it then
arrives from the instance, we log a message that an unexpected response was received. We consider
this expected behavior.

We also check whether the context is canceled:

  • before putting the Future into the queue;
  • when we call .Get() (a synchronous call) on the Future;
  • when a response comes from the Tarantool instance and we look up the Future by its request id.

This approach gives us almost full control over the Future lifetime.

Problems

  1. How often the context goroutine should do its work.
  2. Whether we need to combine the behavior of timeouts and contexts.
  3. Mutexes.
  4. Do we need context support at all?

Premature optimization.

One of the reasons to keep the context inside the Future object was to avoid having
this goroutine check contexts every nanosecond (or some other interval).
I would like this goroutine to act like a garbage collector that cleans up the queue only when needed.
I also had an idea to keep an internal atomic counter of Futures and run this goroutine only when
the count becomes critical (this threshold could be configurable and depend on GOMAXPROCS by default).

Benchmarks

Environment:

goos: darwin
goarch: amd64
pkg: github.com/tarantool/go-tarantool
cpu: Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz
Tarantool 2.11.0
go version go1.16.12 darwin/amd64

Setup:

  • One Tarantool instance is up
  • N testing goroutines working in parallel (50, 100, 500 and 1000)
  • One pre-inserted tuple in test space
  • Operation Get (Select with limit=1)
  • Limit on used CPU: (1,2,4)

Comparing two queries:

req := NewSelectRequest(spaceNo).
    Index(indexNo).
    Limit(1).
    Iterator(IterEq).
    Key([]interface{}{uint(1010)}).
    WithContext(ctx)

and

req := NewSelectRequest(spaceNo).
    Index(indexNo).
    Limit(1).
    Iterator(IterEq).
    Key([]interface{}{uint(1010)})

50 testing goroutines

> $ benchstat b50g.txt a50g_alter.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      4.24µs ± 0%    4.31µs ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    2.30µs ± 1%    2.48µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.80µs ± 1%    2.74µs ±57%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-2      16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-4      16.0 ± 0%      16.0 ± 0%   ~     (all equal)

100 testing goroutines

> $ benchstat b100g.txt a100g_alter.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      3.56µs ± 0%    3.75µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    2.07µs ± 1%    2.18µs ± 2%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.71µs ± 2%    1.82µs ± 0%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-2      16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-4      16.0 ± 0%      16.0 ± 0%   ~     (all equal)

500 testing goroutines

> $ benchstat b500g.txt a500g_alter.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      3.20µs ± 2%    3.35µs ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    1.68µs ± 1%    1.97µs ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.38µs ± 1%    1.52µs ± 3%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      849B ± 0%      885B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      803B ± 0%      843B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-2      16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-4      16.0 ± 0%      16.0 ± 0%   ~     (all equal)

1000 testing goroutines

> $ benchstat b1000g.txt a1000g_alter.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      2.88µs ± 1%    3.34µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    1.49µs ± 1%    1.85µs ± 2%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.41µs ± 2%    1.49µs ± 1%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        794B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      853B ± 0%      882B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      809B ± 0%      858B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-2      16.0 ± 0%      16.0 ± 0%   ~     (all equal)
ClientParallelRequestObject-4      16.0 ± 0%      16.0 ± 0%   ~     (all equal)

Alternative way

Let us keep the context only inside the request object and check whether the context or the future
is "done" in a separate goroutine.

The API is the same:

    conn, err := Connect(server, opts)

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    req := NewSelectRequest(spaceNo).
        Index(indexNo).
        Limit(1).
        Iterator(IterEq).
        Key([]interface{}{uint(1010)}).
        WithContext(ctx)

    resp, err := conn.Do(req).Get()

The goroutine that watches the ready channels:

if req.Context() != nil {
    go func() {
        // Fast path: the response has already arrived.
        select {
        case <-fut.done:
            return
        default:
        }
        // Fast path: the context is already done.
        select {
        case <-req.Context().Done():
            return
        default:
        }
        // Slow path: block until the response arrives or the context is done.
        select {
        case <-fut.done:
        case <-req.Context().Done():
        }
    }()
}

Benchmarks

Environment:

goos: darwin
goarch: amd64
pkg: github.com/tarantool/go-tarantool
cpu: Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz
Tarantool 2.11.0
go version go1.16.12 darwin/amd64

Setup:

  • One Tarantool instance is up
  • N testing goroutines working in parallel (50, 100, 500 and 1000)
  • One pre-inserted tuple in test space
  • Operation Get (Select with limit=1)
  • Limit on used CPU: (1,2,4)

Comparing two queries:

req := NewSelectRequest(spaceNo).
    Index(indexNo).
    Limit(1).
    Iterator(IterEq).
    Key([]interface{}{uint(1010)}).
    WithContext(ctx)

and

req := NewSelectRequest(spaceNo).
    Index(indexNo).
    Limit(1).
    Iterator(IterEq).
    Key([]interface{}{uint(1010)})

50 testing goroutines:

> $ benchstat b50g.txt a50g.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      4.24µs ± 0%    4.93µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    2.30µs ± 1%    2.66µs ± 3%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.80µs ± 1%    2.19µs ± 4%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)

100 testing goroutines:

> $ benchstat b100g.txt a100g.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      3.56µs ± 0%    4.25µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    2.07µs ± 1%    2.66µs ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.71µs ± 2%    2.24µs ± 0%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)

1000 testing goroutines:

> $ benchstat b1000g.txt a1000g.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      2.88µs ± 1%    4.03µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    1.49µs ± 1%    2.12µs ± 1%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.41µs ± 2%    1.63µs ± 1%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        794B ± 0%      828B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      853B ± 0%      846B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      809B ± 0%      859B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)

500 testing goroutines:

> $ benchstat b500g.txt a500g.txt
name                           old time/op    new time/op    delta
ClientParallelRequestObject      3.20µs ± 2%    3.99µs ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2    1.68µs ± 1%    2.36µs ± 2%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4    1.38µs ± 1%    1.64µs ± 2%   ~     (p=0.100 n=3+3)

name                           old alloc/op   new alloc/op   delta
ClientParallelRequestObject        792B ± 0%      824B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      849B ± 0%      843B ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      803B ± 0%      844B ± 0%   ~     (p=0.100 n=3+3)

name                           old allocs/op  new allocs/op  delta
ClientParallelRequestObject        16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-2      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)
ClientParallelRequestObject-4      16.0 ± 0%      17.0 ± 0%   ~     (p=0.100 n=3+3)

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 4 times, most recently from ffa9009 to 6ced96d Compare May 23, 2022 08:13
@vr009
Author

vr009 commented May 23, 2022

I have considered two options for an API with context support:

First of all, I see this pattern everywhere in Go:

req := NewSelectRequest(spaceNo).
    Index(indexNo)
conn.Do(ctx, req)  // Do(ctx context.Context, req requestObj)

There is also an option to use the request object and pass the context with a method like:

req := NewSelectRequest(spaceNo).
    Index(indexNo).
    WithCtx(context.Background())
conn.Do(req)

I like the first one due to its native Go spirit. Also, I don't see a way to implement the second
option without repeating the same code for every type of request object.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 5 times, most recently from 4164a29 to e7de5a1 Compare May 24, 2022 13:45
@vr009
Author

vr009 commented May 24, 2022

Tried to speed up select based on this thread. For a simple benchmark, the difference between the typical construction (without default cases) and this one is obvious, but for integration tests with a Tarantool instance it is not so clear.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 2 times, most recently from a45284f to a5ae500 Compare May 24, 2022 15:09
@oleg-jukovec
Collaborator

> Tried to speed up select based on this thread. For a simple benchmark, the difference between the typical construction (without default cases) and this one is obvious, but for integration tests with a Tarantool instance it is not so clear.

The overhead of communicating with the Tarantool instance is much bigger than the effect of such optimizations. These are expected results.

@vr009
Author

vr009 commented May 25, 2022

> The overhead of communicating with the Tarantool instance is much bigger than the effect of such optimizations. These are expected results.

Yes, that seems to be true. Maybe we should consider creating a mock package for Tarantool to get cleaner results. @Totktonada, your thoughts? See, for example, the mock package for Redis: https://github.com/go-redis/redismock.

@oleg-jukovec
Collaborator

oleg-jukovec commented May 28, 2022

> I like the first one due to its native Go spirit. Also, I don't see a way to implement the second option without repeating the same code for every type of request object.

Go has examples of using a WithContext method. I'm not saying that we should use this approach, but the links can be useful for the discussion.

@vr009
Author

vr009 commented Jun 6, 2022

> Go has examples of using a WithContext method. I'm not saying that we should use this approach, but the links can be useful for the discussion.

Thank you for the links; I had missed that the http package uses a similar approach with request objects.
In any case, I still consider the proposed option acceptable.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 2 times, most recently from d81c1c9 to 0eb34cd Compare June 6, 2022 07:54
@vr009 vr009 marked this pull request as ready for review June 6, 2022 07:57
@vr009 vr009 requested a review from oleg-jukovec June 6, 2022 07:57
Collaborator

@oleg-jukovec oleg-jukovec left a comment


Thanks for this pull request! I think you need to move the Context handling from the Future to the Connection, but we can discuss it in the notes.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from 0eb34cd to 40302db Compare June 17, 2022 22:15
@vr009 vr009 requested a review from oleg-jukovec June 17, 2022 22:53
@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 3 times, most recently from d2e155b to c01e004 Compare June 19, 2022 16:22
@oleg-jukovec
Collaborator

oleg-jukovec commented Jun 20, 2022

  1. We create a context with a 1s timeout (or a deadline 1 second from now).
  2. We call fut := Connection.DoAsync(ctx, req).
  3. We do something for 10 seconds or minutes...
  4. We call fut.Get().

What should we get as a result if the Connection object receives the response in Connection.writer after 5 seconds (between steps 2 and 4)?

@vr009
Author

vr009 commented Jun 20, 2022

> What should we get as a result if the Connection object receives the response in Connection.writer after 5 seconds (between steps 2 and 4)?

When we call fut.Get(), we have both a closed fut.done channel (that happens when we receive a response from the instance and fill the Response field of the Future; see SetResponse()) and a done context.
We first check whether the response has arrived (whether fut.done is closed) and return from the wait() call.

@oleg-jukovec
Collaborator

> When we call fut.Get(), we have both a closed fut.done channel (that happens when we receive a response from the instance and fill the Response field of the Future; see SetResponse()) and a done context. We first check whether the response has arrived (whether fut.done is closed) and return from the wait() call.

Yes, in the current implementation. But shouldn't the request be canceled after 1s?

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from 3169baa to 9cd532f Compare July 19, 2022 08:57
@oleg-jukovec
Collaborator

oleg-jukovec commented Jul 19, 2022

> Why do you think the context goroutine doesn't manage to check that the context is done before decoding the response?

I've reviewed the Connection.reader() goroutine; I'm OK with the current implementation.

Collaborator

@oleg-jukovec oleg-jukovec left a comment


Nothing critical, just a few possible improvements.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from 9cd532f to 6e11534 Compare July 19, 2022 10:01
@vr009 vr009 requested a review from DifferentialOrange July 19, 2022 10:07
@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from 6e11534 to 1595398 Compare July 19, 2022 10:23
@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 2 times, most recently from 98bc358 to 6cc18d9 Compare July 19, 2022 10:53
Collaborator

@oleg-jukovec oleg-jukovec left a comment


Thank you for the great work! I'm still confused by the two counters for request ids, but it is not critical.

You could improve the commit message. "Proposal" looks strange here, and commit message body lines should be at most 72 characters. It may be something like this:

@@ -1,9 +1,8 @@
-api: proposal to add the context support
+api: context support
 
-This patch adds the support of using context in API.
-The proposed API is based on using request objects.
-Added tests that cover almost all cases of using the context
-in a query. Added benchamrk tests are equivalent to other,
-that use the same query but without any context.
+This patch adds the support of using context in API. The API is based
+on using request objects. Added tests that cover almost all cases of
+using the context in a query. Added benchmark tests are equivalent to
+other, that use the same query but without any context.
 
 Closes #48

In general, I am happy with the pull request, you can not expect new conversations from me.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from 6cc18d9 to cd1c932 Compare July 19, 2022 14:35
Member

@DifferentialOrange DifferentialOrange left a comment


AFAIU, two request lists are used to support adding or removing a context at runtime. Is that right?

@vr009
Author

vr009 commented Jul 20, 2022

> AFAIU, two request lists are used to support adding or removing a context at runtime. Is that right?

Two request lists are needed to avoid the influence of the timeouts goroutine on requests with a context.

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch 2 times, most recently from 0e0594d to b539f85 Compare July 20, 2022 09:57
@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from b539f85 to 107692d Compare July 20, 2022 13:54
@DifferentialOrange
Member

> AFAIU, two request lists are used to support adding or removing a context at runtime. Is that right?

> Two request lists are needed to avoid the influence of the timeouts goroutine on requests with a context.

So now we don't support any timeouts for context requests? Is it reflected anywhere (like in the doc)?

@vr009 vr009 force-pushed the vr009/gh-48-add-context-support branch from 107692d to 1ecc707 Compare July 21, 2022 04:57
@vr009
Author

vr009 commented Jul 21, 2022

> So now we don't support any timeouts for context requests? Is it reflected anywhere (like in the doc)?

Added now, thank you for noting this.

Member

@DifferentialOrange DifferentialOrange left a comment


Should be LGTM after resolving remaining comments.

This patch adds the support of using context in API. The API is based
on using request objects. Added tests that cover almost all cases of
using the context in a query. Added benchmark tests are equivalent to
other, that use the same query but without any context.

Closes #48
This patch removes the padding field in a connection shard structure.
The _pad field is unused in the code and there is no sense
to keep it anymore.

Closes #197
@oleg-jukovec oleg-jukovec merged commit 4b066ae into master Jul 22, 2022
@oleg-jukovec oleg-jukovec deleted the vr009/gh-48-add-context-support branch July 22, 2022 10:41

Successfully merging this pull request may close these issues.

Proposal to change driver interface to context
3 participants