0.6.0 API #167

Closed
dbanty opened this issue Aug 22, 2020 · 5 comments
Labels
🥚breaking This change breaks compatibility 🆘 help wanted Extra attention is needed
Milestone

Comments

@dbanty
Collaborator

dbanty commented Aug 22, 2020

There are a few goals for an upcoming 0.6.0 version that will involve modifying the endpoint API (i.e. everything in the api and async_api modules in the generated clients).

The primary issue I'm trying to solve is the ability to return additional data as needed from endpoint calls (e.g. status code, headers, raw data). This opens up a number of possibilities around more graceful degradation of generated clients (like in #141) but also provides an escape hatch when there are things that OpenAPI can't represent correctly (or this generator gets wrong) like in #115 .
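One way to picture that "additional data" return value is a small generic wrapper around the parsed body. This is only a sketch to illustrate the idea; the class and field names here are assumptions, not the generator's actual output:

```python
from dataclasses import dataclass
from typing import Dict, Generic, Optional, TypeVar

T = TypeVar("T")


@dataclass
class Response(Generic[T]):
    """Everything from the HTTP response, not just the parsed body."""

    status_code: int
    headers: Dict[str, str]
    content: bytes  # raw body, the escape hatch for anything OpenAPI can't model
    parsed: Optional[T]  # None when parsing failed or wasn't attempted
```

Callers that only want the body keep using `parsed`, while anyone who needs the status code, headers, or raw bytes has them on the same object.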

Currently I have narrowed down my ideas about how to do this to two options, though I'm open to other ideas if someone has a better one.

Option 1: One Module per Endpoint

The general idea is that if you previously had an endpoint function in my_client.api.default.my_endpoint it would become my_client.api.default.my_endpoint.blocking (or sync?). The async_api version would be something like my_client.api.default.my_endpoint.async_() (I hate the trailing underscore so I would love a different name if this is the solution). We then add in a wrapped and async_wrapped (or full or detailed) for the new heavier response.

This is my favorite option right now because the code (while not elegant) is straightforward. That means it should be the most accessible to people reading their generated clients (which they absolutely should) and modifying them. Each of those functions would share most of their code with the others, the only differences being which httpx function is called and what is returned.
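A rough sketch of what one generated endpoint module might look like under Option 1. The function names (`sync`, `detailed`) are placeholders since the real names (blocking? wrapped?) were still being debated, and the `Response` wrapper here is a made-up stand-in for whatever the generator actually emits:

```python
# Hypothetical sketch of a generated module such as
# my_client/api/default/my_endpoint.py under Option 1.
import json
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class Response:
    """The "heavier" return value: raw HTTP details plus the parsed body."""

    status_code: int
    headers: Dict[str, str]
    content: bytes
    parsed: Optional[Any]


def detailed(*, client) -> Response:
    """Variant that returns the full Response."""
    raw = client.get("/my-endpoint")  # the request logic shared by all variants
    parsed = json.loads(raw.content) if raw.content else None
    return Response(raw.status_code, dict(raw.headers), raw.content, parsed)


def sync(*, client) -> Optional[Any]:
    """Variant that returns only the parsed body, delegating to detailed()."""
    return detailed(client=client).parsed

# An async variant would look the same except for `async def` and awaiting
# the corresponding httpx call.
```

The point is that each variant is a plain, readable function; the duplication between them is the cost of avoiding any "magic".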

The biggest downside is that this 100% breaks compatibility for new clients. This isn't the end of the world (we're still 0.x for a reason), but is certain to be annoying.

Option 2: Endpoints as Classes

This code will be harder to read / maintain but can preserve backwards compatibility. So we keep the separate sync and async modules (though I'll be combining some of their code anyway), but instead of my_endpoint being a function, it's a class instance with a __call__ method which does what the function does today. Then we can add a .wrapped() method to the class. So the generated code ends up looking something like

class _MyEndpoint:
    def __call__(self, *, client: Client, ...) -> SomeType:
        return self.wrapped(client=client, ...).body

    def wrapped(self, *, client: Client, ...) -> Response[SomeType]:
        # Do normal stuff here

my_endpoint = _MyEndpoint()

There is also a sort of moderate option where we do Option 2 but mark __call__ as deprecated (emit a warning) and provide whatever method names we intend to have as functions in Option 1. Then go to Option 1 in 0.7.0 so people have a gradual migration path. I'm not sure if enough people have generated clients today to be worth that effort though 😅.
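The deprecation shim described above could look something like the following sketch (method names and the stubbed-out request logic are hypothetical):

```python
import warnings


class _MyEndpoint:
    """Migration shim: the old callable form still works but emits a warning."""

    def detailed(self, *, client):
        # Placeholder for the real request logic.
        return {"status_code": 200, "parsed": "ok"}

    def sync(self, *, client):
        return self.detailed(client=client)["parsed"]

    def __call__(self, *, client):
        warnings.warn(
            "Calling my_endpoint() directly is deprecated; use my_endpoint.sync()",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.sync(client=client)


my_endpoint = _MyEndpoint()
```

Existing callers keep working through `__call__` (with a warning), while new code uses the Option 1 names, which can then become plain module-level functions in 0.7.0.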

Mentioning @emannguitar, @pawamoy, and @bowenwr since I believe they have all discussed pieces of this in other issues. Of course this is open to discussion from anyone though.

@dbanty dbanty added 🆘 help wanted Extra attention is needed question 🥚breaking This change breaks compatibility labels Aug 22, 2020
@dbanty dbanty added this to the 0.6.0 milestone Aug 22, 2020
@dbanty dbanty pinned this issue Aug 22, 2020
@bowenwr
Contributor

bowenwr commented Aug 22, 2020

Thanks so much for thinking through this! It will help us tremendously to be able to leverage response headers, among other things. (cc @dtkav).

Selfishly, I can tell you for our use case compatibility is not an issue. We can pretty readily adapt to breaking changes here as our implementation is pretty nascent.

With that in mind, I like Option 1 for readability and maintainability. I like detailed for the "heavier" response. Prefer sync to blocking.

Really appreciate the forethought here.

@pawamoy

pawamoy commented Aug 23, 2020

Hello @dbanty! As @bowenwr said, thank you for taking the time and effort to think this through!

I'm perfectly OK with breaking changes, as you said we're still in 0.x 🙂

As for my preference between the two options you listed, I don't have one. I think option 1 is indeed more readable. In my case I'm wrapping things up at yet another level, so I don't really mind what the underlying code looks like.

@emann
Collaborator

emann commented Aug 24, 2020

I agree with @bowenwr and @pawamoy - Option 1 gets my vote for its readability/maintainability, plus the fact that there isn't any "magic" involved.

@dbanty
Collaborator Author

dbanty commented Aug 29, 2020

I've created a pre-release 0.6.0-alpha.1 with the majority of the API changes that will go into 0.6.0 so you all can try it out, give feedback, etc. before it's solidified.

The only other change I'm planning on making as of right now is switching all @dataclass to use the attrs package instead for more flexibility and better version support (e.g. Python 3.6).
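As a rough illustration of the @dataclass to attrs switch, here is what a generated model might look like using the classic `attr.s` API (which, unlike the stdlib `dataclasses` module, supports Python 3.6). The model and field names are made up:

```python
import attr


@attr.s(auto_attribs=True)
class MyModel:
    # attrs generates __init__, __repr__, and __eq__ just like @dataclass,
    # but works on Python 3.6 and offers extras such as validators and slots.
    name: str
    count: int = 0
```

Call sites are unaffected by the switch since construction and equality behave the same as with `@dataclass`.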

@dbanty
Collaborator Author

dbanty commented Sep 21, 2020

As far as I know everyone is fairly happy with the new layout of generated clients in the alphas, so I'm going to mark this as closed and head toward a 0.6.0 release.

@dbanty dbanty closed this as completed Sep 21, 2020
@dbanty dbanty unpinned this issue Sep 21, 2020

4 participants