`CONTRIBUTING.md` (+5 −4)

```diff
@@ -21,13 +21,14 @@
 
 2. When in a Poetry shell (`poetry shell`) run `task check` in order to run most of the same checks CI runs. This will auto-reformat the code, check type annotations, run unit tests, check code coverage, and lint the code.
 
-### Rework end to end tests
+### Rework end-to-end tests
 
-3. If you're writing a new feature, try to add it to the end to end test.
+3. If you're writing a new feature, try to add it to the end-to-end test.
    1. If adding support for a new OpenAPI feature, add it somewhere in `end_to_end_tests/openapi.json`
-   2. Regenerate the "golden records" with `task regen`. This client is generated from the OpenAPI document used for end to end testing.
+   2. Regenerate the "golden records" with `task regen`. This client is generated from the OpenAPI document used for end-to-end testing.
    3. Check the changes to `end_to_end_tests/golden-record` to confirm only what you intended to change did change and that the changes look correct.
-   4. Run the end to end tests with `task e2e`. This will generate clients against `end_to_end_tests/openapi.json` and compare them with the golden record. The tests will fail if **anything is different**. The end to end tests are not included in `task check` as they take longer to run and don't provide very useful feedback in the event of failure. If an e2e test does fail, the easiest way to check what's wrong is to run `task regen` and check the diffs. You can also use `task re` which will run `regen` and `e2e` in that order.
+   4. **If you added a test above OR modified the templates**: Run the end-to-end tests with `task e2e`. This will generate clients against `end_to_end_tests/openapi.json` and compare them with the golden record. The tests will fail if **anything is different**. The end-to-end tests are not included in `task check` as they take longer to run and don't provide very useful feedback in the event of failure. If an e2e test does fail, the easiest way to check what's wrong is to run `task regen` and check the diffs. You can also use `task re` which will run `regen` and `e2e` in that order.
```
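For contributors following the reworked steps above, the command sequence boils down to something like the sketch below. Only `poetry shell` and the `task` target names (`check`, `regen`, `e2e`, `re`) come from the CONTRIBUTING.md text; the ordering and comments are an illustrative reading of those instructions, not part of this diff.

```sh
# Sketch of the contributor workflow described above.
# Command names are from CONTRIBUTING.md; ordering/comments are illustrative.
poetry shell   # enter the project's Poetry environment
task check     # auto-reformat, type-check, run unit tests, check coverage, lint

# Only if you added an end-to-end case or modified the templates:
task regen     # regenerate the "golden records" from end_to_end_tests/openapi.json
task e2e       # generate clients and compare them against the golden record
task re        # or: run regen followed by e2e in one step
```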
`integration_tests/open-api-test-server-client/open_api_test_server_client/models/post_body_multipart_multipart_data.py` (+7 −1)

```diff
@@ -10,7 +10,13 @@
 
 @attr.s(auto_attribs=True)
 class PostBodyMultipartMultipartData:
-    """ """
+    """
+    Attributes:
+        a_string (str):
+        file (File): For the sake of this test, include a file name and content type. The payload should also be valid
```
`integration_tests/open-api-test-server-client/open_api_test_server_client/models/post_body_multipart_response_200.py` (+8 −1)

```diff
@@ -7,7 +7,14 @@
 
 @attr.s(auto_attribs=True)
 class PostBodyMultipartResponse200:
-    """ """
+    """
+    Attributes:
+        a_string (str): Echo of the 'a_string' input parameter from the form.
+        file_data (str): Echo of content of the 'file' input parameter from the form.
+        description (str): Echo of the 'description' input parameter from the form.
+        file_name (str): The name of the file uploaded.
+        file_content_type (str): The content type of the file uploaded.
+    """
```