Feature: anonymous fixtures #4694
The way I've seen this done is to have the fixture return a function, and then tests ask for new values by calling that function.
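A minimal sketch of that factory pattern (the `make_record` name and the dict payload are placeholders, not anything from this issue):

```python
import pytest


@pytest.fixture
def make_record():
    # Factory fixture: the test calls the returned function each time it
    # needs a fresh value, instead of requesting the fixture twice.
    counter = {'next_id': 1}

    def _make_record(name):
        record = {'id': counter['next_id'], 'name': name}
        counter['next_id'] += 1
        return record

    return _make_record


def test_records_get_distinct_ids(make_record):
    first = make_record('activity')
    second = make_record('status')
    assert first['id'] != second['id']
```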
After reading more about this situation, #2424 made it obvious that calling the same fixture more than once in the same test is not what fixtures are for. I've always understood fixtures to be more of a tool to help reduce boilerplate in my tests; however, I understand that the definition is useful for drawing the boundary of what it means to abuse the concept, and it simplifies things for the project maintainers when deciding what is and is not acceptable in terms of features. Here's what it turned into, and I'm quite happy with it. The trick was to expose a single fixture that could understand more use cases, removing the need to create dynamic fixtures for the different request JSON payloads to land in. They now roll up under a single json payload/request object container for closer analysis in the test.

```python
import pytest


@pytest.fixture
async def test_request(request, api_client):
    marker = request.node.get_closest_marker('test_request')
    request = marker.args[0]
    # A bare string is treated as a GET to that endpoint.
    if isinstance(request, str):
        response = await api_client.get(request)
        json = await response.json()
        return {'json': json, 'response': response}
    elif not isinstance(request, dict):
        raise ValueError('test_request fixture expects all arguments passed in as a dict')
    # Multiple marker arguments: run every request and collect the results.
    if len(marker.args) > 1:
        requests = marker.args
        return await _test_all_requests(requests, api_client)
    response, json = await _test_request(request, api_client)
    return {'json': json, 'response': response}

async def _test_all_requests(requests, api_client):
    all_responses = {'json': {}, 'response': {}}
    for index, request in enumerate(requests):
        # Results are keyed by the request's 'name', falling back to its index.
        name = request.pop('name', index)
        response, json = await _test_request(request, api_client)
        all_responses['response'][name] = response
        all_responses['json'][name] = json
    return all_responses

async def _test_request(request, api_client):
    method = request.pop('method', 'GET').lower()
    request_fn = getattr(api_client, method)
    response = await request_fn(request.pop('endpoint'), **request.pop('request_kwargs', {}))
    json = await response.json()
    return response, json

@pytest.fixture
def response(test_request):
    """
    Use after calling `test_request` or `test_requests`. Provides the resulting API request response object(s).
    """
    return test_request.get('response')

@pytest.fixture
def json(test_request):
    """
    Use after calling `test_request` or `test_requests`. Provides the resulting API request response payload(s).
    """
    return test_request.get('json')
```
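For a single GET, the string branch above lets a test pass just the endpoint. A minimal sketch of that path, assuming an aiohttp-style client whose response exposes `.status` (the endpoint and assertions here are hypothetical):

```python
@pytest.mark.test_request('/users/1')
async def test_get_single_user(test_request):
    # The fixture returns {'json': ..., 'response': ...} for the single case.
    assert test_request['response'].status == 200
    assert 'id' in test_request['json']
```

Stacking several named requests on the marker then looks like this: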
```python
# test.py
@pytest.mark.test_request({
    'endpoint': '/users/1/activity',
    'name': 'activity'
}, {
    'endpoint': '/users/1/status',
    'name': 'status'
})
async def test_user_status_matches_user_activity(json):
    assert json['activity']['name'] == json['status']['name']
```

I believe these types of feature requests should be captured under a different concept in the project than a "fixture"; this is not a valid use case for fixtures. I will close this for now, but I would ask @nicoddemus to consider keeping the comments open for further discussion about the possibility of providing a new abstraction that behaves similarly to fixtures but isn't inhibited by the terminology, allowing more dynamic, meta-programming features for users who'd like to see more of that in their tests.
While running a custom fixture that takes args or kwargs, it'd be nice to generate a one-time-use fixture that injects the result of the first fixture into a test.
Those last two fixtures work great as-is, but I'd like to support stacking multiple `make_get_request` decorators and issue new fixtures, for the duration of the current test, that let me inject multiple JSON payloads with a unique name for each. Is it possible to use something like `Metafunc` to create a fixture without going through pytest's `pytest_generate_tests` hook? I imagine it's too late to create a fixture that way, or maybe I don't need a fixture for something like this at all. Is there a reference somewhere in pytest that I can update from within a fixture to provide new values as injected arguments to the current test function?
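For context on the hook mentioned above, a minimal sketch of how `pytest_generate_tests` injects values at collection time (the `endpoint` fixture name and its values are hypothetical):

```python
# conftest.py
def pytest_generate_tests(metafunc):
    if 'endpoint' in metafunc.fixturenames:
        # This runs at collection time, before any fixture executes, which
        # is why a fixture itself is "too late" to add new test arguments.
        # Each value below becomes a separate test item.
        metafunc.parametrize('endpoint', ['/users/1/activity', '/users/1/status'])


# test file: `endpoint` arrives as an injected argument, one test per value.
def test_endpoint_is_namespaced(endpoint):
    assert endpoint.startswith('/users/')
```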