api: add performance_test environment which loads more data #4738
Merged
Core Meeting Discussion
Done
carlobeltrame approved these changes on Mar 31, 2024
usu approved these changes on Apr 13, 2024
Add a threshold so that we don't accidentally add a slow query.
This lets us have one snapshot for the test environment and one for the performance_test environment.
Add a performance_test environment to test our application against more data. This uses the AliceBundle feature that fixtures can be loaded depending on the environment (https://github.com/theofidry/AliceBundle/blob/ab4d7ffd04eef576df4af88630ebb49bba397e40/doc/advanced-usage.md#environment-specific-fixtures). We need this because nelmio/alice takes a while to generate lots of data, so we don't want to do that for every test.

We need a database with more data so that we test our application against a postgres instance which actually has to use its indices. Otherwise it just does a full table scan every time, which is fine with 5 entries in a table, but if it ends up with an inefficient query plan as soon as there is a lot of data, we face issues in production. This setup lets us catch that before it happens in production.

The amount of additional data in the additional camps has to be balanced: right now it takes 50s on a fairly good machine to load all these fixtures. If we later also add more activities to camp1, we cannot load many more additional camps. With 1000 additional camps and 100 additional activities for camp1, it took 15 min to load the fixtures.
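As a rough sketch of what such an environment-specific fixture file could look like (the file path, entity class and property names here are illustrative assumptions, not taken from this PR), using Alice's range notation to generate many camps from a single definition:

```yaml
# Hypothetical fixture file, e.g. fixtures/performance_test/additionalCamps.yml.
# Assuming the directory-based convention from the linked AliceBundle doc,
# a file in an environment-named folder is only loaded for that environment.
App\Entity\Camp:
  additional_camp_{1..1000}:
    # Alice's range notation expands this single definition into 1000 camps;
    # <current()> resolves to the current index, faker providers fill the rest.
    name: 'perf camp <current()>'
    title: '<sentence(3)>'
    creator: '@user1'
```

Loading is then a matter of running the fixture load command with the matching environment, e.g. `php bin/console hautelook:fixtures:load --env=performance_test`.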
So that camp1 is more realistic:
- activity-progress-labels.yml: add 3 more
- activityResponsibles.yml: add 200 (to all activities)
- activityResponsibles.yml: add 10
- materialItems.yml:
  - add 1 to additional_materialList_camp1_1 for each materialNode
  - add 200 items to additional_materialList_camp1_1 for materialNode additional_materialNode_camp1_1 (sketched below)
- materialLists.yml: add 12
- activities.yml: add 200
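A similarly hedged sketch of how one of these bulk additions could be expressed (the entity class and property names are guesses for illustration; the referenced fixture names follow the list above):

```yaml
# Hypothetical excerpt from materialItems.yml: 200 items attached to the same
# material list and material node via Alice references.
App\Entity\MaterialItem:
  additional_materialItem_camp1_{1..200}:
    materialList: '@additional_materialList_camp1_1'
    materialNode: '@additional_materialNode_camp1_1'
    article: 'perf item <current()>'
    quantity: '<numberBetween(1, 20)>'
```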
For now, do not run it on every PR or push; run it overnight, and on a PR only when the label is set. We don't know yet whether the tests are flaky, and they need quite some resources.
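A minimal sketch of that trigger setup as a GitHub Actions workflow (the workflow file name, label name and steps are assumptions, not the actual workflow from this PR):

```yaml
# Hypothetical workflow, e.g. .github/workflows/api-performance-test.yml
name: api performance test

on:
  schedule:
    - cron: '0 2 * * *'   # nightly run
  pull_request:
    types: [opened, synchronize, labeled]

jobs:
  performance-test:
    # On PRs, only run when the assumed "performance test" label is set;
    # scheduled runs always execute.
    if: github.event_name == 'schedule' || contains(github.event.pull_request.labels.*.name, 'performance test')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...load the performance_test fixtures and run the performance test suite here
```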
…y() matcher. This way you see what was in the array.
Given our performance problems in prod (starting with the failure to print a camp with 170 activities), I tried to find a way to see in development, or even in tests, how the query planner plans the queries, so that we can optimize them before they cause trouble in production.
This is the most practical solution I found, but the query times are still too flaky to test exact values.
Now ready for review, with an upper limit for the query time.