Silence the log messages during a test run #1041


Closed

Conversation

@MVrachev (Collaborator) commented Jun 2, 2020

Fixes issue #1039

Description of the changes being introduced by the pull request:

  1. Filter the GET request messages from tests/slow_retrieval_server.py:
    (screenshot)

  2. Filter the "code 404, message File not found" error messages:
    (screenshot)

  3. Remove the "We do not have a console handler" warnings.
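As a sketch of the idea (hypothetical, not the exact diff in this PR): a test-only HTTP server can silence its per-request lines, such as "GET ..." and "code 404, message File not found", by overriding log_message on the request handler, since BaseHTTPRequestHandler routes all request logging through that method:

```python
# Hypothetical sketch: silence request logging from a test-only HTTP server.
import http.server

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, format, *args):
        # Swallow all request log lines; this server is used only by tests,
        # so the output carries no useful information.
        pass
```

The same handler class can then be passed to http.server.HTTPServer in place of the default one.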

Please verify and check that the pull request fulfills the following requirements:

  • The code follows the Code Style Guidelines
  • Tests have been added for the bug fix or new feature
  • Docs have been added for the bug fix or new feature

MVrachev added 3 commits June 2, 2020 18:50
We use slow_retrieval_server.py for our tests only.
That's why logging the requests made by the server and printing
them while running the tests doesn't give us any information.

Signed-off-by: Martin Vrachev <[email protected]>
Currently, when we run the unit tests we get a lot of
messages with the content "code 404, message File not found",
even on a successful run.

This happens because tuf will always try to download a newer version
of root.json. As expected, at some point it has downloaded
the freshest version of root.json, and the next attempt to download
a newer one fails with this error message.

Signed-off-by: Martin Vrachev <[email protected]>
This message doesn't give us useful information during
the test run, and it's repeated more than 10 times
when running the tests with Python 3.7.

Signed-off-by: Martin Vrachev <[email protected]>
@MVrachev MVrachev changed the title Silence log messages Silence the log messages during a test run Jun 2, 2020
@MVrachev (Collaborator, Author) commented Jun 2, 2020

I tested these changes to see whether other errors appear.

I checked out commit 3851f38, just before my fix "Fix [Errno 111] Connection refused",
then cherry-picked the commits introduced in this PR. When I ran the tests I could see all kinds of errors, including FileNotFoundError, which is good and what we expected.

@lukpueh (Member) commented Jun 3, 2020

Thanks for the patch, @MVrachev! Your solution works nicely but it feels a bit hackish to blacklist a certain undesired output in simple_server.py.

A way to fix this in a more scalable fashion would be to configure the tests so that console output is only displayed if tests fail. I remember doing this for in-toto in in-toto/in-toto#240 (it might also be worth taking a look at in-toto/in-toto#183). Let me know if this helps or if you have any questions.
And sorry for not having shared that earlier on #1039.

@MVrachev (Collaborator, Author) commented Jun 5, 2020

> A way to fix this in a more scalable fashion would be to configure the tests so that console output is only displayed if tests fail. I remember doing this for in-toto in in-toto/in-toto#240 (it might also be worth taking a look at in-toto/in-toto#183). Let me know if this helps or if you have any questions.
> And sorry for not having shared that earlier on #1039.

Yes, I could do that, but even if the logs are only shown when the tests fail, don't you think we should still filter the "code 404, message File not found" messages, together with the requests logged by the server started in slow_retrieval_server.py?
After all, those are expected messages. They are not errors.

@lukpueh (Member) commented Jun 5, 2020

I think you can, but I don't think it's necessary, assuming that the messages are only displayed on failure and we expect our tests not to fail. :)

@MVrachev (Collaborator, Author) commented Jun 17, 2020

I have tried a couple of things and here are my thoughts.

First, I saw that we are passing the buffer=True argument to TextTestRunner when we run the tests (https://github.com/theupdateframework/tuf/blob/8e6ca67f3131cf2639b997a3e0acf3b4e21c637c/tests/aggregate_tests.py#L95). The documentation says:

> sys.stdout and sys.stderr will be buffered in between startTest() and stopTest() being called. Collected output will only be echoed onto the real sys.stdout and sys.stderr if the test fails or errors. Any output is also attached to the failure / error message.

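To illustrate the buffering behavior described in the documentation, here is a minimal, self-contained sketch (the test class and its output are invented for illustration):

```python
import io
import unittest

class NoisyButPassing(unittest.TestCase):
    def test_prints_a_lot(self):
        # This output is captured by the runner and, because the test
        # passes, discarded instead of cluttering the test run.
        print("chatty log line that would otherwise clutter the test run")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(NoisyButPassing)
# buffer=True: captured output is echoed to the real stdout/stderr
# only if a test fails or errors.
result = unittest.TextTestRunner(buffer=True, stream=io.StringIO()).run(suite)
```

If the print were followed by a failing assertion, the captured line would be attached to the failure message instead.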
So if I redirect stderr and stdout in the subprocess.Popen calls to sys.stderr and sys.stdout respectively, everything should be okay.
I tried it, but it failed with the following error:
(screenshot)

After researching what other people have discovered about this problem, it seems like this is the right answer:
https://stackoverflow.com/a/31081957/7058471
because when I run the same command in an ipython session it works, but not while running the tests with tox...
(screenshot)
Maybe the problem is connected to the fact that I am not starting the subprocess with shell=True, but we know that starting it with a shell would be insecure.
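My understanding of the failure (a sketch, assuming unittest swaps sys.stdout for an in-memory buffer under buffer=True): such a buffer has no OS-level file descriptor, so subprocess.Popen cannot redirect a child process into it directly:

```python
import io

# Under buffer=True, unittest replaces sys.stdout/sys.stderr with in-memory
# buffers. An in-memory buffer has no OS file descriptor, so passing it as
# stdout=/stderr= to subprocess.Popen fails with an error like the one above.
fake_stdout = io.StringIO()
try:
    fake_stdout.fileno()
    has_fd = True
except io.UnsupportedOperation:
    has_fd = False
```

Popen needs a real file descriptor to hand to the child process, which is why the same redirection works in a plain ipython session but not under the buffering test runner.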

Second, I looked again at what you, @lukpueh, are suggesting.
Here, there are a few complications.
We start server processes and use them throughout the unit tests, so in order to collect logs from them (as you did in in-toto), we have to do it after they are killed, or in the tearDownClass functions.
The problem is that logs cannot be gathered from the tearDownClass or setUpClass functions.
I tried it a couple of times and ended up with a simple example:
(screenshot)

and the result was:

(screenshot)

Then I realized we would need to start every server process from the setUp functions (called before each unit test) instead of the setUpClass functions (called once before all tests in a class), and gather the logs in the tearDown functions (called after each unit test) instead of the tearDownClass functions (called once per class after all unit tests).
With that method, every unit test that needs a server starts its own subprocess.
I tried it: I passed the stdout=subprocess.PIPE and stderr=subprocess.PIPE arguments to subprocess.Popen and used server_process.communicate() afterwards to get the logs.
The results weren't good: sometimes the tests just took forever, and sometimes they finished after twice as long as usual...
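A minimal sketch of that per-test approach (the server command here is a hypothetical stand-in, not our actual simple_server.py invocation):

```python
import io
import subprocess
import sys
import unittest

class PerTestServer(unittest.TestCase):
    def setUp(self):
        # Hypothetical stand-in for a real test server process; the actual
        # tests would launch something like simple_server.py here.
        self.proc = subprocess.Popen(
            [sys.executable, "-c", "print('server: GET /root.json')"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

    def tearDown(self):
        # Collect the server's output after each test; under buffer=True
        # the runner shows this print only when the test fails.
        out, err = self.proc.communicate(timeout=10)
        print(out, err)

    def test_uses_server(self):
        self.assertTrue(self.proc.poll() is None or self.proc.poll() == 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PerTestServer)
result = unittest.TextTestRunner(buffer=True, stream=io.StringIO()).run(suite)
```

The per-test process start and the blocking communicate() in tearDown are the likely source of the slowdown described above.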

@jku (Member) commented Aug 6, 2020

> The problem is that logs cannot be gathered from the tearDownClass or setUpClass functions.

WRT our chat before: the logging does work, but if you use a test runner, it gets to decide what to output. In the example it decides to output only the failing test (and the class setup was obviously run much earlier). If you run the same test without a test runner (so no log capturing is initialized):

if __name__ == '__main__':
    unittest.main()

you will see the output

@MVrachev (Collaborator, Author) commented

I have another solution that I want to propose, @lukpueh.
Please have a look at PR #1104.

@joshuagl (Member) commented

@MVrachev as you have proposed an alternative solution, should we close this PR?

@MVrachev (Collaborator, Author) commented

Yes, I think we can close that one.

@joshuagl (Member) commented

Closing as superseded by #1104.

@joshuagl joshuagl closed this Aug 17, 2020
@MVrachev MVrachev deleted the silence-log-messages branch August 22, 2020 18:13