add tests that verify behavior of generated code + generator errors/warnings #1156

Closed
14 changes: 14 additions & 0 deletions .changeset/live_tests.md
@@ -0,0 +1,14 @@
---
default: minor
---

# New categories of end-to-end tests

Automated tests have been extended to include two new types of tests:

1. Happy-path tests that run the generator from an inline API document and then actually import and execute the generated code. See [`end_to_end_tests/generated_code_live_tests`](./end_to_end_tests/generated_code_live_tests).
2. Warning/error condition tests that run the generator from an inline API document that contains something invalid, and make assertions about the generator's output.

These provide more efficient and granular test coverage than the "golden record"-based end-to-end tests, and also replace some tests that were previously being done against low-level implementation details in `tests/unit`.

This does not affect any runtime functionality of openapi-python-client.
2 changes: 1 addition & 1 deletion .github/workflows/checks.yml
@@ -11,7 +11,7 @@ jobs:
test:
strategy:
matrix:
python: [ "3.8", "3.9", "3.10", "3.11", "3.12", "3.13" ]
python: [ "3.9", "3.10", "3.11", "3.12", "3.13" ]
os: [ ubuntu-latest, macos-latest, windows-latest ]
runs-on: ${{ matrix.os }}
steps:
26 changes: 20 additions & 6 deletions CONTRIBUTING.md
@@ -54,22 +54,36 @@ If you think that some of the added code is not testable (or testing it would ad
2. If you're modifying the way an existing feature works, make sure an existing test generates the _old_ code in `end_to_end_tests/golden-record`. You'll use this to check for the new code once your changes are complete.
3. If you're improving an error or adding a new error, add a [unit test](#unit-tests)

#### End-to-end tests
#### End-to-end snapshot tests

This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end to end tests (snapshot tests). In order to check code changes against the previous set of snapshots (called a "golden record" here), you can run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.
This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests, and unit tests of generated code.

There are 4 types of snapshots generated right now, you may have to update only some or all of these depending on the changes you're making. Within the `end_to_end_tets` directory:
Snapshot tests verify that the generated code is identical to a previously-committed set of snapshots (called a "golden record" here). They are basically regression tests to catch any unintended changes in the generator output.

To check code changes against the golden record, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.

There are 4 types of snapshots generated right now; you may need to update some or all of them, depending on the changes you're making. Within the `end_to_end_tests` directory:

1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features
2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0)
3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates
4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent)

#### Unit tests
#### Unit tests of generated code

These verify the runtime behavior of the generated code, without making assertions about the exact implementation of the code. For instance, they can verify that JSON data is correctly decoded into model class attributes.

The tests run the generator against a small API spec (defined inline for each test class), and then import and execute the generated code. This can sometimes identify issues with validation logic, module imports, etc., that might be harder to diagnose via the snapshot tests, especially during development of a new feature.

See [`end_to_end_tests/generated_code_live_tests`](./end_to_end_tests/generated_code_live_tests).
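
For example, a minimal test of this kind might look roughly like the sketch below (the schema and assertions are illustrative only; the decorators come from `end_to_end_tests/end_to_end_test_helpers.py`):

```python
from end_to_end_tests.end_to_end_test_helpers import (
    with_generated_client_fixture,
    with_generated_code_import,
)


@with_generated_client_fixture(
"""
components:
  schemas:
    MyModel:
      type: object
      properties:
        stringProp: {"type": "string"}
""")
@with_generated_code_import(".models.MyModel")
class TestSimpleJsonObject:
    def test_encoding(self, MyModel):
        # MyModel is the model class imported from the generated client package
        instance = MyModel(string_prop="abc")
        assert instance.to_dict() == {"stringProp": "abc"}
```

The decorators take care of generating a client from the inline spec, adjusting `sys.path` so the generated modules can be imported, and cleaning up afterwards.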

#### Other unit tests

> **NOTE**: Several older-style unit tests using mocks exist in this project. These should be phased out rather than updated, as the tests are brittle and difficult to maintain. Only error cases should be tests with unit tests going forward.
These include:

In some cases, we need to test things which cannot be generated—like validating that errors are caught and handled correctly. These should be tested via unit tests in the `tests` directory, using the `pytest` framework.
* Regular unit tests of basic pieces of fairly self-contained low-level functionality, such as helper functions. These are implemented in the `tests/unit` directory, using the `pytest` framework.
* End-to-end tests of invalid spec conditions, where we run the generator against a small spec with some problem, and expect it to print warnings/errors rather than generating code. These are implemented in `end_to_end_tests/generator_errors_and_warnings`; see the sketch after this list.
* Older-style unit tests of low-level functions like `property_from_data` that have complex behavior. These are brittle and difficult to maintain, and should not be used going forward. Instead, use either unit tests of generated code (to test happy paths), or end-to-end tests of invalid spec conditions (to test for warnings/errors), as described above.
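
As an illustration, a test of an invalid spec condition might look roughly like the following sketch (the schema fragment and the asserted text are placeholders, not actual generator output; the helper comes from `end_to_end_tests/end_to_end_test_helpers.py`):

```python
from end_to_end_tests.end_to_end_test_helpers import inline_spec_should_cause_warnings


class TestBadSchemaWarning:
    def test_generator_prints_warning(self):
        # The helper asserts that generation succeeds (exit code 0) but prints warnings,
        # and returns the generator's full console output for further assertions.
        output = inline_spec_should_cause_warnings(
"""
components:
  schemas:
    BadModel:
      type: object
      properties:
        # placeholder: something the generator cannot process goes here
        badProp: {"type": "not-a-real-type"}
""")
        assert "BadModel" in output  # placeholder assertion about the warning text
```

For fatal errors, `inline_spec_should_fail` can be used instead; it asserts a non-zero exit code and returns the command result.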

### Creating a Pull Request

3 changes: 3 additions & 0 deletions end_to_end_tests/__init__.py
@@ -1 +1,4 @@
""" Generate a complete client and verify that it is correct """
import pytest

pytest.register_assert_rewrite("end_to_end_tests.end_to_end_test_helpers")
267 changes: 267 additions & 0 deletions end_to_end_tests/end_to_end_test_helpers.py
@@ -0,0 +1,267 @@
import importlib
import os
import re
import shutil
from filecmp import cmpfiles, dircmp
from pathlib import Path
import sys
import tempfile
from typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple

from attrs import define
import pytest
from click.testing import Result
from typer.testing import CliRunner

from openapi_python_client.cli import app
from openapi_python_client.utils import snake_case


@define
class GeneratedClientContext:
"""A context manager with helpers for tests that run against generated client code.

On entering this context, sys.path is changed to include the root directory of the
generated code, so its modules can be imported. On exit, the original sys.path is
restored, and any modules that were loaded within the context are removed.
"""

output_path: Path
generator_result: Result
base_module: str
monkeypatch: pytest.MonkeyPatch
old_modules: Optional[Set[str]] = None

def __enter__(self) -> "GeneratedClientContext":
self.monkeypatch.syspath_prepend(self.output_path)
self.old_modules = set(sys.modules.keys())
return self

def __exit__(self, exc_type, exc_value, traceback):
self.monkeypatch.undo()
for module_name in set(sys.modules.keys()) - self.old_modules:
del sys.modules[module_name]
shutil.rmtree(self.output_path, ignore_errors=True)

def import_module(self, module_path: str) -> Any:
"""Attempt to import a module from the generated code."""
return importlib.import_module(f"{self.base_module}{module_path}")


def _run_command(
command: str,
extra_args: Optional[List[str]] = None,
openapi_document: Optional[str] = None,
url: Optional[str] = None,
config_path: Optional[Path] = None,
raise_on_error: bool = True,
) -> Result:
"""Generate a client from an OpenAPI document and return the result of the command."""
runner = CliRunner()
if openapi_document is not None:
openapi_path = Path(__file__).parent / openapi_document
source_arg = f"--path={openapi_path}"
else:
source_arg = f"--url={url}"
config_path = config_path or (Path(__file__).parent / "config.yml")
args = [command, f"--config={config_path}", source_arg]
if extra_args:
args.extend(extra_args)
result = runner.invoke(app, args)
if result.exit_code != 0 and raise_on_error:
raise Exception(result.stdout)
return result


def generate_client(
openapi_document: str,
extra_args: List[str] = [],
output_path: str = "my-test-api-client",
base_module: str = "my_test_api_client",
specify_output_path_explicitly: bool = True,
overwrite: bool = True,
raise_on_error: bool = True,
) -> GeneratedClientContext:
"""Run the generator and return a GeneratedClientContext for accessing the generated code."""
full_output_path = Path.cwd() / output_path
if not overwrite:
shutil.rmtree(full_output_path, ignore_errors=True)
args = extra_args
if specify_output_path_explicitly:
args = [*args, "--output-path", str(full_output_path)]
if overwrite:
args = [*args, "--overwrite"]
generator_result = _run_command("generate", args, openapi_document, raise_on_error=raise_on_error)
return GeneratedClientContext(
full_output_path,
generator_result,
base_module,
pytest.MonkeyPatch(),
)


def generate_client_from_inline_spec(
openapi_spec: str,
extra_args: List[str] = [],
filename_suffix: Optional[str] = None,
config: str = "",
base_module: str = "testapi_client",
add_missing_sections = True,
raise_on_error: bool = True,
) -> GeneratedClientContext:
"""Run the generator on a temporary file created with the specified contents.

You can also optionally tell it to create a temporary config file.
"""
if add_missing_sections:
if not re.search("^openapi:", openapi_spec, re.MULTILINE):
openapi_spec += "\nopenapi: '3.1.0'\n"
if not re.search("^info:", openapi_spec, re.MULTILINE):
openapi_spec += "\ninfo: {'title': 'testapi', 'description': 'my test api', 'version': '0.0.1'}\n"
if not re.search("^paths:", openapi_spec, re.MULTILINE):
openapi_spec += "\npaths: {}\n"

output_path = tempfile.mkdtemp()
file = tempfile.NamedTemporaryFile(suffix=filename_suffix, delete=False)
file.write(openapi_spec.encode('utf-8'))
file.close()

if config:
config_file = tempfile.NamedTemporaryFile(delete=False)
config_file.write(config.encode('utf-8'))
config_file.close()
extra_args = [*extra_args, "--config", config_file.name]

generated_client = generate_client(
file.name,
extra_args,
output_path,
base_module,
raise_on_error=raise_on_error,
)
os.unlink(file.name)
if config:
os.unlink(config_file.name)

return generated_client


def inline_spec_should_fail(
openapi_spec: str,
extra_args: List[str] = [],
filename_suffix: Optional[str] = None,
config: str = "",
add_missing_sections = True,
) -> Result:
"""Asserts that the generator could not process the spec.

Returns the command result, which could include stdout data or an exception.
"""
with generate_client_from_inline_spec(
openapi_spec, extra_args, filename_suffix, config, add_missing_sections=add_missing_sections, raise_on_error=False
) as generated_client:
assert generated_client.generator_result.exit_code != 0
return generated_client.generator_result


def inline_spec_should_cause_warnings(
openapi_spec: str,
extra_args: List[str] = [],
filename_suffix: Optional[str] = None,
config: str = "",
add_missing_sections = True,
) -> str:
"""Asserts that the generator is able to process the spec, but printed warnings.

Returns the full output.
"""
with generate_client_from_inline_spec(
openapi_spec, extra_args, filename_suffix, config, add_missing_sections=add_missing_sections, raise_on_error=True
) as generated_client:
assert generated_client.generator_result.exit_code == 0
assert "Warning(s) encountered while generating" in generated_client.generator_result.stdout
return generated_client.generator_result.stdout


def with_generated_client_fixture(
openapi_spec: str,
name: str="generated_client",
config: str="",
extra_args: List[str] = [],
):
"""Decorator to apply to a test class to create a fixture inside it called 'generated_client'.

The fixture value will be a GeneratedClientContext created by calling
generate_client_from_inline_spec().
"""
def _decorator(cls):
def generated_client(self):
with generate_client_from_inline_spec(openapi_spec, extra_args=extra_args, config=config) as g:
print(g.generator_result.stdout) # so we'll see the output if a test failed
yield g

setattr(cls, name, pytest.fixture(scope="class")(generated_client))
return cls

return _decorator


def with_generated_code_import(import_path: str, alias: Optional[str] = None):
"""Decorator to apply to a test class to create a fixture from a generated code import.

The 'generated_client' fixture must also be present.

If import_path is "a.b.c", then the fixture's value is equal to "from a.b import c", and
its name is "c" unless you specify a different name with the alias parameter.
"""
parts = import_path.split(".")
module_name = ".".join(parts[0:-1])
import_name = parts[-1]

def _decorator(cls):
nonlocal alias

def _func(self, generated_client):
module = generated_client.import_module(module_name)
return getattr(module, import_name)

alias = alias or import_name
_func.__name__ = alias
setattr(cls, alias, pytest.fixture(scope="class")(_func))
return cls

return _decorator


def with_generated_code_imports(*import_paths: str):
def _decorator(cls):
decorated = cls
for import_path in import_paths:
decorated = with_generated_code_import(import_path)(decorated)
return decorated

return _decorator


def assert_model_decode_encode(model_class: Any, json_data: dict, expected_instance: Any) -> None:
    """Assert that the model class decodes the JSON data to the expected instance and re-encodes it to the same JSON."""
instance = model_class.from_dict(json_data)
assert instance == expected_instance
assert instance.to_dict() == json_data


def assert_model_property_type_hint(model_class: Any, name: str, expected_type_hint: Any) -> None:
    """Assert that the model class declares the expected type hint for the named property."""
assert model_class.__annotations__[name] == expected_type_hint


def assert_bad_schema_warning(output: str, schema_name: str, expected_message_str) -> None:
bad_schema_regex = "Unable to (parse|process) schema"
expected_start_regex = f"{bad_schema_regex} /components/schemas/{re.escape(schema_name)}:?\n"
if not (match := re.search(expected_start_regex, output)):
# this assert is to get better failure output
assert False, f"Did not find '{expected_start_regex}' in output: {output}"
output = output[match.end():]
# The amount of other information in between that message and the warning detail can vary
# depending on the error, so just make sure we're not picking up output from a different schema
if (next_match := re.search(bad_schema_regex, output)):
output = output[0:next_match.start()]
assert expected_message_str in output
35 changes: 35 additions & 0 deletions end_to_end_tests/generated_code_live_tests/README.md
@@ -0,0 +1,35 @@
## The `generated_code_live_tests` module

These are end-to-end tests which run the code generator command, but unlike the other tests in `end_to_end_tests`, they are also unit tests _of the behavior of the generated code_.

Each test class follows this pattern:

- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
  - The spec can omit the `openapi:`, `info:`, and `paths:` blocks, unless those are relevant to the test.
- The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
- It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
- This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
- `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_imports` or `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
  - `@with_generated_code_imports(".models.MyModel1", ".models.MyModel2")` would execute `from [package name].models import MyModel1, MyModel2` and inject the imported classes into the test class as fixtures called `MyModel1` and `MyModel2`.
- `@with_generated_code_import(".api.my_operation.sync", alias="endpoint_method")` would execute `from [package name].api.my_operation import sync`, but the fixture would be named `endpoint_method`.
- After the test class finishes, these imports are discarded.

Example:

```python
@with_generated_client_fixture(
"""
components:
schemas:
MyModel:
type: object
properties:
stringProp: {"type": "string"}
""")
@with_generated_code_import(".models.MyModel")
class TestSimpleJsonObject:
    def test_encoding(self, MyModel):
instance = MyModel(string_prop="abc")
assert instance.to_dict() == {"stringProp": "abc"}
```
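
The assertion helpers in `end_to_end_test_helpers.py` can also be used in these tests. For example, a round-trip check with `assert_model_decode_encode` might look like this (an illustrative sketch building on the example above):

```python
@with_generated_client_fixture(
"""
components:
  schemas:
    MyModel:
      type: object
      properties:
        stringProp: {"type": "string"}
""")
@with_generated_code_import(".models.MyModel")
class TestJsonRoundTrip:
    def test_decode_encode(self, MyModel):
        # Asserts that from_dict() produces the expected instance and to_dict() reproduces the JSON
        assert_model_decode_encode(MyModel, {"stringProp": "abc"}, MyModel(string_prop="abc"))
```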