
Enable multiple test suites and test configurations in the Test UI #12075


Open · LightCC opened this issue May 30, 2020 · 15 comments

Labels: area-testing · feature-request (Request for new features or functionality) · needs proposal (Need to make some design decisions)

LightCC commented May 30, 2020

Problem Statements

  • P1. Currently VS Code for Python allows only a single test configuration per test framework. For example, with pytest I can set arguments that apply whenever pytest is run through the UI, but if I set up that configuration to exclude certain tests (such as those annotated "slow"), I have to manually edit the arguments in the settings to switch to another configuration. (A concrete example of this single-configuration setup follows this list.)

  • P2. There is also no way to set up separate groups of tests and show them as groups in the Test sidebar window. For example, rather than a module-based hierarchy, it would be great to group tests by arbitrary sets of modules/classes/methods and/or by annotations. Currently the tree cannot show different groups of tests, such as those annotated "slow" or "sanity".

  • P3. In addition, any test excluded by the current test configuration cannot be run even when it is directly selected as an individual test, because the current configuration causes the framework to exclude it: the test is detected, but then skipped. So if I normally exclude "slow" tests, I cannot even click directly on a slow test and run it from the UI.
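For illustration, a minimal sketch of the single-configuration setup described in P1, in .vscode/settings.json (the setting keys are real; the filter assumes tests marked with @pytest.mark.slow, and -m "not slow" is a standard pytest marker expression):

{
  "python.testing.pytestEnabled": true,
  // The one global argument list: running with a different set of
  // arguments (e.g. including the slow tests) means editing this by hand.
  "python.testing.pytestArgs": ["-m", "not slow"]
}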

Current Elements in the UI:

  • Existing layout in the "Test" sidebar window: [screenshots of the current module-based test tree]
  • Existing inline Test Run / Debug overlay: [screenshot of the inline "Run Test | Debug Test" code lens]

Use Cases

Use cases for allowing creation and easy selection of multiple test groupings and configurations:

  • UC1 - Running sometimes with coverage and sometimes without, since coverage takes longer. Alternatively, generating different coverage report types.
  • UC2 - Creating longer-running tests (e.g. "slow" or "detailed" tests) that are only run occasionally for final checks, because their length makes it impractical to run the standard test suite after every small change (once a run takes more than a few seconds, and especially past 30-60 seconds).
  • UC3 - Marking "sanity" tests, the key tests that should be run all the time (typically with the goal of the suite taking a few seconds or less). (A marker-based sketch of UC2/UC3 follows this list.)
  • UC4 - Identifying groups of modules that should be run together when working in a particular module. The test system often requires either distributing tests alongside each module or putting all of a package's tests into a single test folder hierarchy, which may not make sense in larger programs. For example, I may want to run detailed tests within the current module, but also run some high-level sanity checks any time I modify that module. The need is to allow regrouping of tests in the Test sidebar based on manually selected groups and/or broad rules such as the presence or absence of a given test annotation.
  • UC5 - Sometimes running tests just for immediate results, other times running an official test suite for final metrics for review/PR before merging. In this case the test configuration could just launch a command-line script; in fact, every configuration could potentially just run a specific Python script, perhaps with specific arguments, rather than invoking the test framework directly.
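For reference, UC2 and UC3 map onto pytest markers today; a minimal sketch (the "slow" and "sanity" marker names are this issue's examples, not pytest built-ins):

import time

import pytest


@pytest.mark.sanity
def test_core_invariant():
    # Fast "sanity" check intended to run constantly (UC3).
    assert 2 + 2 == 4


@pytest.mark.slow
def test_full_regression():
    # Long-running check run only occasionally for final verification (UC2).
    time.sleep(30)  # stand-in for genuinely expensive work
    assert True

From the command line this already works as pytest -m sanity or pytest -m "not slow" (pytest warns about unregistered marks unless they are declared; see the pyproject.toml sketch under Requirements). The request is to make the same selection a first-class choice in the Test UI.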

Requirements

AS A VS Code user writing larger Python programs and packages with tests

I NEED a way to create and run different test suites for a given program or package (suites not based solely on modules or test classes), and to run them under varying sets of arguments or test configurations

  • I need the ability to run specific test suites/groups that have been created from within the Test sidebar and/or inline test code overlay elements (i.e. potentially adding additional options to the "run test" and "debug test" commands that appear inline in the code)
  • I need a way to create different "test configurations" or sets of arguments for launching supported unit test frameworks, which can either be selected directly (say, through a dropdown in the Test sidebar) and/or applied separately to any given test suite.

SO THAT

  1. I can set up multiple test configurations that provide a means of selecting the test arguments and/or options to use when running tests through the Test UI.
  2. I can set up these test configurations to run particular test types (tests annotated as slow, sanity, all tests, etc.), e.g. in pytest generally through @pytest.mark annotations.
  3. I can select one of several test configurations that have been set up as the current configuration.
  4. When I run tests in the sidebar window (through the green triangle "play" icons), there are options to use either the default configuration (for all tests, or the one set up for this particular test suite) or, optionally, the currently selected test configuration.
  5. Preferably, though optionally, these configurations could also be run easily from the command line, e.g. if they were saved as separate config files or scripts that could be run directly or passed as arguments to the test framework. (A sketch of the command-line half follows this list.)
  6. I am able to run any single test, either in the Test sidebar window or through the inline code interface, regardless of whether it would run as part of all tests under the current configuration (e.g. a test marked "slow" while the current configuration excludes "slow" tests). There are various ways this could be implemented, such as:
    • providing a separate icon for running under a "default" or "single-test" configuration selection
    • separating filtering from other configuration arguments, i.e. a dedicated filtering config apart from all other test configs
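On item 5, pytest already supports the command-line half once the markers are registered; a minimal sketch of the pyproject.toml side (standard pytest configuration keys):

[tool.pytest.ini_options]
markers = [
    "slow: long-running tests excluded from the quick suite",
    "sanity: key tests that should run all the time",
]

Each "configuration" is then just an invocation such as pytest -m sanity or pytest -m "not slow" --cov=mypackage (the --cov flag assumes the pytest-cov plugin, and mypackage is a hypothetical package name); the missing piece is selecting between such invocations from the Test UI.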

Concepts

  1. Allow a file or a GUI (perhaps directly in the VS Code options) to define a test configuration.
    a. Optionally, separate test groupings (i.e. slow, sanity, etc.) from other test options.
    b. Options would include any argument that can be fed to the test framework, such as running coverage, the coverage report type, or other test framework settings.
    c. These settings would supersede the current "test arguments" option that VS Code provides for each individual test framework, which would be considered the "default" test configuration for all tests.

  2. Allow the user to select the current test configuration from a dropdown or similar UI element
    a. Likely in the sidebar window, perhaps on a separate line just under the header row that says "Python".
    b. Such a dropdown is envisioned to work like the Run/Debug dropdown in VS Code, where you can either select an existing test configuration or create a new one through an entry always present at the bottom of the list.
    c. Alternately, or in addition, there could be an inline annotation for switching configurations, or at least an inline link that opens wherever configurations are selected (in the sidebar or in the VS Code options).

  3. Reorganize the test hierarchy in the sidebar window based on test configuration options that could use module, test class, and/or annotations to specify how to build the test tree.

    • e.g. name a group "sanity" and put all tests with the "sanity" annotation in it; name a second group "slow" and put all tests with the "slow" annotation in it. Have a separate group for "normal", and finally "other" (any test not showing up elsewhere) and "all" (which includes all discovered tests).
    • This would provide a means of point-and-click to run a specific group of tests that have been defined by the user.
  4. Provide a means of applying different default test configurations to different sets of test groups. For example, under concept 3 the user is already defining different test suites or groups of tests; the user should also have some means of changing the default configuration for that group (or perhaps the only configuration for that group?).

    • Under this concept, the user could create a test group/suite for all "slow" tests that runs without coverage, but then create a second group with the same tests in it (all the "slow" tests) that runs with coverage.
    • Provide a means of running either the test configuration configured for a given test group, or falling back to the default configuration (i.e. add a second "run" icon or link that runs under the overall default test configuration). A hypothetical settings sketch of these concepts follows.
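Purely as a sketch of concepts 1-4 (none of these setting names exist today; the whole schema is hypothetical):

"python.testing.configurations": [
    // Concept 1: named argument sets, superseding the single "test arguments" option.
    { "name": "quick", "pytestArgs": ["-m", "not slow"] },
    { "name": "coverage", "pytestArgs": ["--cov=mypackage"] } // assumes pytest-cov
],
"python.testing.groups": [
    // Concepts 3 and 4: user-defined groups in the sidebar tree,
    // each with its own default configuration.
    { "name": "sanity", "match": "-m sanity", "defaultConfiguration": "quick" },
    { "name": "slow", "match": "-m slow", "defaultConfiguration": "coverage" }
]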

Acceptance Criteria

  1. The user is able to set up multiple test configurations, in terms of the test arguments that will be used when running any tests in the current test framework (unittest, pytest, nose, etc.).

  2. User-designated test groups
    a. The user is able to set up multiple test suites or test groups that show up in the Test sidebar window based on:

    1. Manually choosing specific test modules, test classes, and/or test functions/methods
    2. Annotations or some other broader criteria
    3. e.g. run all tests in two specific modules that have the "slow" annotation or no annotation

    b. If a test is included in multiple suites, it will show up in each of them
    c. A separate category shows all tests not included in any other custom test group, if any are specified (i.e. an "other" category)
    d. A separate category, "all", always shows all tests in the current module-based hierarchy format

  3. The user is able to run or debug any single test directly with either the default test configuration, or a test configuration specifically set up for running individual tests.

  4. A given test suite or configuration would allow for running an external script that invokes the test framework (with all appropriate arguments), rather than VS Code running the framework directly.

  5. The user is able to select a "current test configuration" that is used instead of the default test configuration, and that applies in any case where a group or test doesn't have a configuration specified.

    • Depending on the final design, the user has a means of setting the current test configuration to override all other configurations, or of choosing to run the current configuration instead of the one that would otherwise run (through an alternate "play" icon on all tests and test groups, for example).
LightCC added the triage-needed and feature-request labels May 30, 2020
ghost removed the triage-needed label Jun 1, 2020
brettcannon added the area-testing and triage-needed labels Jun 1, 2020
brettcannon (Member) commented

Thank you for the suggestion! We have marked this issue as "needs decision" to make sure we have a conversation about your idea. We plan to leave this feature request open for at least a month to see how many 👍 votes the opening comment gets to help us make our decision.

karthiknadig removed the triage-needed label Jun 1, 2020
luabud added the needs proposal label and removed the needs decision label Jul 8, 2020
luabud self-assigned this Aug 12, 2020
jarshwah commented

My use case is running different subsets of tests with different environment variables or command-line arguments (either would be fine; CLI arguments preferred).

I have tests structured in the following fashion:

src/tests/unit/...
src/tests/integration/...
src/tests/functional/...

Each requires different settings. Subpaths within each folder structure can also require specific settings.

It seems like the launch configuration concept used for debugging and tasks would serve for test configurations. I have no idea how that would extend to the Testing panel or the code lenses.
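For illustration only, a launch-configuration-style sketch; the "type": "pytest" entry and its attributes are hypothetical, since launch.json has no such test type today:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Unit tests",
            "type": "pytest", // hypothetical: no such launch type exists
            "args": ["src/tests/unit"],
            "env": { "APP_ENV": "unit" } // hypothetical variable name
        },
        {
            "name": "Integration tests",
            "type": "pytest", // hypothetical
            "args": ["src/tests/integration"],
            "env": { "APP_ENV": "integration" }
        }
    ]
}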

BrettMoan commented Nov 17, 2022

I would love this as well. Ideally there would be a way to define our testing and venv configurations by the file path of each file, so one doesn't have to toggle between venvs per file via the GUI each time they start VS Code.

One way I could envision this is something similar to the way tasks are configured.

By way of example, here is a current-day excerpt from my .vscode/settings.json:

{
  "python.testing.cwd": "${workspaceRoot}",
  "python.testing.unittestEnabled": false,
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": ["daemon"]
}

It would be amazing if even just these configurations could be a nested list of dicts, the way launch configurations are in .vscode/launch.json's "configurations" list.

The only difference is that I would propose allowing either a "global config" or an "order matters, fall through" configuration (since file paths could match multiple globs).

A rough draft; maybe something like:

"python.testing.configurations": [
  { 
    "name": "default tests",
    "files": "**/*", all files would match this by default, so when you are in this workspace, this is the first set of configs you get
    "cwd": "${workspaceRoot}",
    "unittestEnabled": false,
    "pytestEnabled": true,
    "pytestArgs": [],
  },
  { 
    "name": "specific tests for foo",
    "files": "${workspaceRoot}/foo/**/*",
    "cwd": "${workspaceRoot}/foo",
    "pytestArgs": ["-vv"],
    "venv": "${workspaceRoot}/.venv/foo", // use the `foo` venv for tests in foo
  },
  { 
    "name": "specific tests for bar",
    "files": "${workspaceRoot}/bar/**/*",
    "cwd": "${workspaceRoot}/bar",
    "pytestArgs": ["-rxXs"],
    "venv": "${workspaceRoot}/.venv/bar", // use the `bar` venv for tests in bar
  },
]

The main problem I see with this is that you would have to spawn a separate invocation of Python and pytest/unittest for each unique set of configurations.

but then again, that's exactly what we're asking for ;)

BrettMoan commented

Since this is actually something I would ultimately want not just for the python.testing.* settings but also for the python.linting.* arguments, I'm probably way off on the actual implementation of the namespace. What I'm trying to get at is that it would be significantly nicer to scope the Python settings to a group of files, at a level lower than the workspace. I think the simplest way of doing that would be to let users define those file paths ("files": "${workspaceRoot}/foo/**/*") and attach unique configurations to them.

diego351 commented Nov 22, 2022

+1
That feature would be awesome!

My use case is that VS Code's testing integration doesn't support the debugger when pytest runs tests in parallel with pytest-xdist. So the workaround would be having one configuration for debugging and one for parallel performance testing.
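Concretely, that workaround today means hand-editing the one argument list between two states (the -n flag is pytest-xdist's; -n 0 should disable distribution entirely):

// State A: fast parallel runs; the debugger cannot be used.
"python.testing.pytestArgs": ["-n", "auto"]

// State B: switched to by hand before each debug session.
"python.testing.pytestArgs": ["-n", "0"]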

luabud (Member) commented Nov 23, 2022

fyi @eleanorjboyd @karthiknadig

flea89 commented May 22, 2023

Are there any updates on this? Is it still something you are considering adding?

Thanks

eleanorjboyd (Member) commented

Hello! There are no updates on this feature request at this time. It has received the necessary number of votes to move into our backlog, and once we are working on it we will tag this issue in our iteration plan. If you are passionate about this feature request, we always welcome community contributions; we also prioritize feature requests with a high number of likes, so you can work to increase the upvotes. Thanks!

Penagwin commented Jul 27, 2023

Just to throw in my use case for this feature: monorepos, and anything resembling them, are very problematic.

We have a Django project that includes several different websites which largely share the same common core, but each has its own modifications and its own Django settings.

project1/
project2/
common/

To run our tests we specify which settings file we want to use:

project1.settings.testing
project2.settings.testing

In our case pytest gets upset because it finds tests in all of the projects, but only has the settings to run one of the projects correctly. My understanding is that I can't have multiple test configurations that specify the settings for each project?
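For reference, outside VS Code this is typically handled with one invocation per project; the --ds flag below is pytest-django's option for selecting the Django settings module (assuming that plugin is in use):

pytest project1 --ds=project1.settings.testing
pytest project2 --ds=project2.settings.testing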

IntelliJ handles this very well:

  • it treats run/debug/test configurations all the same way
  • you select a runner (nodejs/jest/maven/python/pytest/unittest/django tests/etc.)
  • each runner has its own configuration panel for ease of use, with enough options to let you configure it to launch in almost any way you need
  • you select which run configuration you want
  • they all have the same options: run/debug/coverage
  • IntelliJ then chooses how to display the panels/output based on the type of run configuration you chose

This allows you to do a bunch of things:

  • you can have as many run configurations of any type as you want
  • you can have as many test configurations of any type as you want
  • you can easily run/debug/coverage any run or test (because they're the same thing)

Currently VS Code's implementation appears to be a single global setting for each test type per workspace, so there's only a single configuration for pytest tests.

I'd love to see this added to VSCode.

eleanorjboyd self-assigned this Aug 21, 2023
dragondive commented

I would also love to have this feature in VS Code. I already "liked" the opening post; here is my use case. I have to run pytest with various options (ordinary unit tests, UI tests with Selenium for multiple browsers, with code coverage, etc.).

In the CI pipeline there are separate jobs for each of these, but in local development, copy-pasting or commenting/uncommenting the python.testing.pytestArgs is tedious and error-prone. I would much prefer multiple settings from which I could pick one. For example, something like this (sketched as a named mapping to keep the JSON well-formed; the config names are illustrative):

{
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": {
        // config 1: arguments for unit tests
        "unit": ["..."],
        // config 2: arguments for selenium tests with firefox
        "selenium-firefox": ["..."],
        // config 3: arguments for selenium tests with chrome
        "selenium-chrome": ["..."],
        // config 4: arguments for unit tests with code coverage
        "unit-coverage": ["..."]
    }
}

eleanorjboyd (Member) commented

this is related to this general discussion, linking so people can follow along and comment on that discussion: #21845

BrettMoan commented

> this is related to this general discussion, linking so people can follow along and comment on that discussion: #21845

Thank you. Added a comment over there.

I saw a different config for discovery vs. run mentioned over there, but one thing I didn't see captured was having multiple configs for different types of "run".

Over on #21845 I've added that multiple configs are important for things like monorepos, which need unique configs for different paths, but also for things like libraries that support multiple Python versions and want to test across all of them.

Having a single config as opposed to a list of configs, and having a single interpreter (which isn't even part of settings.json) instead of a list of interpreters, limits the usable scope to workspaces that need only a single interpreter.

eleanorjboyd (Member) commented

Great point, yes. Thank you for your in-depth comment on the other post; we will take this into consideration as we move to design what this might actually look like. Thanks!

ajoga commented Jan 18, 2024

Hello, I have a use case akin to the ones discussed here. I'm using pytest, and I run my tests in parallel with this snippet in pyproject.toml:

[tool.pytest.ini_options]
addopts = [
    "--numprocesses=auto"
]

When debugging a single test with the ad-hoc button shown below, pytest spawns as many workers as I have CPU threads, the debugger attaches to each of them, and that adds a few seconds of extra delay before the test runs.

[screenshot: the inline Run/Debug Test button next to a test definition]

My current workflow is to change "--numprocesses=auto" to "--numprocesses=1" in my pyproject.toml whenever I have to iterate on a given test, and I wish I could override that with an argument passed to pytest instead.
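A hedged partial workaround: pytest inserts addopts before the actual command-line arguments, and the last -n value wins, so passing -n 0 through the extension's argument setting should override the auto setting, at the cost of applying to every run, which is exactly why per-run configurations are being requested:

{
    // -n 0 should take precedence over the --numprocesses=auto in addopts.
    "python.testing.pytestArgs": ["-n", "0"]
}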

diego351 commented

I think this is related:
