bpo-34279: Issue a warning if no tests have been executed #10150

Merged
merged 5 commits on Nov 29, 2018
Changes from all commits
13 changes: 12 additions & 1 deletion Lib/test/libregrtest/main.py
@@ -14,7 +14,7 @@
from test.libregrtest.runtest import (
findtests, runtest, get_abs_module,
STDTESTS, NOTTESTS, PASSED, FAILED, ENV_CHANGED, SKIPPED, RESOURCE_DENIED,
INTERRUPTED, CHILD_ERROR,
INTERRUPTED, CHILD_ERROR, TEST_DID_NOT_RUN,
PROGRESS_MIN_TIME, format_test_result)
from test.libregrtest.setup import setup_tests
from test.libregrtest.utils import removepy, count, format_duration, printlist
@@ -79,6 +79,7 @@ def __init__(self):
self.resource_denieds = []
self.environment_changed = []
self.rerun = []
self.run_no_tests = []
self.first_result = None
self.interrupted = False

@@ -118,6 +119,8 @@ def accumulate_result(self, test, result):
elif ok == RESOURCE_DENIED:
self.skipped.append(test)
self.resource_denieds.append(test)
elif ok == TEST_DID_NOT_RUN:
self.run_no_tests.append(test)
elif ok != INTERRUPTED:
raise ValueError("invalid test result: %r" % ok)

@@ -368,6 +371,11 @@ def display_result(self):
print("%s:" % count(len(self.rerun), "re-run test"))
printlist(self.rerun)

if self.run_no_tests:
print()
print(count(len(self.run_no_tests), "test"), "run no tests:")
printlist(self.run_no_tests)

def run_tests_sequential(self):
if self.ns.trace:
import trace
@@ -458,6 +466,9 @@ def get_tests_result(self):
result.append("FAILURE")
elif self.ns.fail_env_changed and self.environment_changed:
result.append("ENV CHANGED")
elif not any((self.good, self.bad, self.skipped, self.interrupted,
Member:

What do you think of always appending "NO TEST RUN" if run_no_tests is non-empty?

"./python -m test -j0 test_os test_sys -m test_access" would say "Tests result: SUCCESS, NO TEST RUN".

Does it sound weird to you?

My expectation is that "SUCCESS, NO TEST RUN" would be rare. You would only get it if you pass --match. But if you get it, it means that maybe you made a mistake. That's why the exit code is 0 (success) and not 1 (error). It's just a hint, not a hard error.

Member Author:

Hmmm... I think it's a good idea. What do you think about something along the lines of:

❯ ./python.exe -m test -j0 test_os test_sys -m test_access
Run tests in parallel using 10 child processes
0:00:00 load avg: 1.72 [1/2] test_sys run no tests
0:00:00 load avg: 1.72 [2/2] test_os passed

== Tests result: SUCCESS, (SOME TESTS DID NOT EXECUTE ANY SUBTESTS) ==

1 test OK.

1 test run no tests:
    test_sys

Total duration: 473 ms
Tests result: SUCCESS, (SOME TESTS DID NOT EXECUTE ANY SUBTESTS)

only if the output is SUCCESS? So basically:

if not result:
    result.append("SUCCESS")
    if self.run_no_tests:
        result.append("(SOME TESTS DID NOT EXECUTE ANY SUBTESTS)")

Member:

> only if the output is SUCCESS

I don't think that it's useful. And I prefer "NO TEST RUN" because it's shorter :-)

Member:

@pablogsal: what do you think? ping?

Member Author:

Sorry for the late response.

I agree, I think printing "NO TEST RUN" when there is at least one item in run_no_tests is a good idea :)

I implemented it in 8a64c0b

Member Author:

Wait, I am checking something that is not fully correct with that commit.

Member Author (@pablogsal, Nov 28, 2018):

As we are setting "SUCCESS" only when there is nothing else in result, this means that running ./python -m test -j0 test_os -m test_no_existing will also result in:

Tests result: SUCCESS, NO TEST RUN

so to achieve the desired result for "./python -m test -j0 test_os test_sys -m test_access" we would need to append after "SUCCESS" is added:

if not result:
    result.append("SUCCESS")
if self.run_no_tests:
    result.append("NO TEST RUN")

otherwise, you will never get "SUCCESS" when self.run_no_tests is populated.

I suppose that is not what we want. What do you think?

Member Author:

OK, never mind. I implemented it in 2272ba8 so that "Tests result: SUCCESS, NO TEST RUN" only happens when at least one test succeeds and some did not run, while plain "NO TEST RUN" appears when no tests ran at all. Some examples:

./python -m test -j0 test_os test_sys -m test_access
Tests result: SUCCESS, NO TEST RUN

./python -m test -j0 test_os test_sys -m test_blech
Tests result: NO TEST RUN

./python -m test -j0 test_os -m test_blech
Tests result: NO TEST RUN

./python -m test -j0 test_os -m test_access
Tests result: SUCCESS

self.environment_changed)):
result.append("NO TEST RUN")

if self.interrupted:
result.append("INTERRUPTED")
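
For reference, here is a minimal sketch of the merged get_tests_result() logic, pieced together from the hunks above; the lines the diff does not show are reconstructed and may differ from the actual file:

def get_tests_result(self):
    result = []
    if self.bad:
        result.append("FAILURE")
    elif self.ns.fail_env_changed and self.environment_changed:
        result.append("ENV CHANGED")
    elif not any((self.good, self.bad, self.skipped, self.interrupted,
                  self.environment_changed)):
        # New in this PR: no test file executed anything at all.
        result.append("NO TEST RUN")

    if self.interrupted:
        result.append("INTERRUPTED")

    if not result:
        result.append("SUCCESS")

    return ', '.join(result)
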
5 changes: 5 additions & 0 deletions Lib/test/libregrtest/runtest.py
@@ -19,6 +19,7 @@
RESOURCE_DENIED = -3
INTERRUPTED = -4
CHILD_ERROR = -5 # error in a child process
TEST_DID_NOT_RUN = -6 # the test ran no tests

_FORMAT_TEST_RESULT = {
PASSED: '%s passed',
@@ -28,6 +29,7 @@
RESOURCE_DENIED: '%s skipped (resource denied)',
INTERRUPTED: '%s interrupted',
CHILD_ERROR: '%s crashed',
TEST_DID_NOT_RUN: '%s run no tests',
}

# Minimum duration of a test to display its duration or to mention that
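
As an aside, format_test_result() presumably renders one of these templates with the test name. A minimal sketch of such a helper, assuming this dict-lookup design (the actual signature in runtest.py may differ):

def format_test_result(test_name, result):
    # Fall back to the bare test name for unknown result codes.
    fmt = _FORMAT_TEST_RESULT.get(result, "%s")
    return fmt % test_name

# format_test_result("test_sys", TEST_DID_NOT_RUN) -> "test_sys run no tests"
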
@@ -94,6 +96,7 @@ def runtest(ns, test):
ENV_CHANGED test failed because it changed the execution environment
FAILED test failed
PASSED test passed
TEST_DID_NOT_RUN test ran no tests

If ns.xmlpath is not None, xml_data is a list containing each
generated testsuite element.
@@ -197,6 +200,8 @@ def test_runner():
else:
print("test", test, "failed", file=sys.stderr, flush=True)
return FAILED, test_time
except support.TestDidNotRun:
return TEST_DID_NOT_RUN, test_time
except:
msg = traceback.format_exc()
if not ns.pgo:
7 changes: 6 additions & 1 deletion Lib/test/support/__init__.py
@@ -72,7 +72,7 @@
# globals
"PIPE_MAX_SIZE", "verbose", "max_memuse", "use_resources", "failfast",
# exceptions
"Error", "TestFailed", "ResourceDenied",
"Error", "TestFailed", "TestDidNotRun", "ResourceDenied",
# imports
"import_module", "import_fresh_module", "CleanImport",
# modules
@@ -120,6 +120,9 @@ class Error(Exception):
class TestFailed(Error):
"""Test failed."""

class TestDidNotRun(Error):
"""Test did not run any subtests."""

class ResourceDenied(unittest.SkipTest):
"""Test skipped because it requested a disallowed resource.

@@ -1930,6 +1933,8 @@ def _run_suite(suite):
if junit_xml_list is not None:
junit_xml_list.append(result.get_xml_element())

if not result.testsRun:
raise TestDidNotRun
if not result.wasSuccessful():
if len(result.errors) == 1 and not result.failures:
err = result.errors[0][1]
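
The condition _run_suite() now checks is easy to reproduce outside regrtest. A minimal sketch using plain unittest; the test class and match pattern are illustrative, not taken from the patch:

import unittest

class Tests(unittest.TestCase):
    def test_access(self):
        pass

# Select a pattern that matches no test, as "-m nosuchtest" would.
loader = unittest.TestLoader()
loader.testNamePatterns = ["*nosuchtest*"]
suite = loader.loadTestsFromTestCase(Tests)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun)  # 0 -> _run_suite() would raise TestDidNotRun
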
86 changes: 81 additions & 5 deletions Lib/test/test_regrtest.py
@@ -351,11 +351,20 @@ def setUp(self):
self.tmptestdir = tempfile.mkdtemp()
self.addCleanup(support.rmtree, self.tmptestdir)

def create_test(self, name=None, code=''):
def create_test(self, name=None, code=None):
if not name:
name = 'noop%s' % BaseTestCase.TEST_UNIQUE_ID
BaseTestCase.TEST_UNIQUE_ID += 1

if code is None:
code = textwrap.dedent("""
import unittest

class Tests(unittest.TestCase):
def test_empty_test(self):
pass
""")

# test_regrtest cannot be run twice in parallel because
# of setUp() and create_test()
name = self.TESTNAME_PREFIX + name
@@ -390,7 +399,7 @@ def parse_executed_tests(self, output):

def check_executed_tests(self, output, tests, skipped=(), failed=(),
env_changed=(), omitted=(),
rerun=(),
rerun=(), no_test_ran=(),
randomize=False, interrupted=False,
fail_env_changed=False):
if isinstance(tests, str):
@@ -405,6 +414,8 @@ def check_executed_tests(self, output, tests, skipped=(), failed=(),
omitted = [omitted]
if isinstance(rerun, str):
rerun = [rerun]
if isinstance(no_test_ran, str):
no_test_ran = [no_test_ran]

executed = self.parse_executed_tests(output)
if randomize:
@@ -447,8 +458,12 @@ def list_regex(line_format, tests):
regex = "Re-running test %r in verbose mode" % name
self.check_line(output, regex)

if no_test_ran:
regex = list_regex('%s test%s run no tests', no_test_ran)
self.check_line(output, regex)

good = (len(tests) - len(skipped) - len(failed)
- len(omitted) - len(env_changed))
- len(omitted) - len(env_changed) - len(no_test_ran))
if good:
regex = r'%s test%s OK\.$' % (good, plural(good))
if not skipped and not failed and good > 1:
@@ -465,12 +480,16 @@ def list_regex(line_format, tests):
result.append('ENV CHANGED')
if interrupted:
result.append('INTERRUPTED')
if not result:
if not any((good, result, failed, interrupted, skipped,
env_changed, fail_env_changed)):
result.append("NO TEST RUN")
elif not result:
result.append('SUCCESS')
result = ', '.join(result)
if rerun:
self.check_line(output, 'Tests result: %s' % result)
result = 'FAILURE then %s' % result

self.check_line(output, 'Tests result: %s' % result)

def parse_random_seed(self, output):
@@ -649,7 +668,14 @@ def test_resources(self):
# test -u command line option
tests = {}
for resource in ('audio', 'network'):
code = 'from test import support\nsupport.requires(%r)' % resource
code = textwrap.dedent("""
from test import support; support.requires(%r)
import unittest
class PassingTest(unittest.TestCase):
def test_pass(self):
pass
""" % resource)

tests[resource] = self.create_test(resource, code)
test_names = sorted(tests.values())

@@ -978,6 +1004,56 @@ def test_bug(self):
output = self.run_tests("-w", testname, exitcode=2)
self.check_executed_tests(output, [testname],
failed=testname, rerun=testname)
def test_no_tests_ran(self):
code = textwrap.dedent("""
import unittest

class Tests(unittest.TestCase):
def test_bug(self):
pass
""")
testname = self.create_test(code=code)

output = self.run_tests(testname, "-m", "nosuchtest", exitcode=0)
self.check_executed_tests(output, [testname], no_test_ran=testname)

def test_no_tests_ran_multiple_tests_nonexistent(self):
code = textwrap.dedent("""
import unittest

class Tests(unittest.TestCase):
def test_bug(self):
pass
""")
testname = self.create_test(code=code)
testname2 = self.create_test(code=code)

output = self.run_tests(testname, testname2, "-m", "nosuchtest", exitcode=0)
self.check_executed_tests(output, [testname, testname2],
no_test_ran=[testname, testname2])

def test_no_test_ran_some_test_exist_some_not(self):
code = textwrap.dedent("""
import unittest

class Tests(unittest.TestCase):
def test_bug(self):
pass
""")
testname = self.create_test(code=code)
other_code = textwrap.dedent("""
import unittest

class Tests(unittest.TestCase):
def test_other_bug(self):
pass
""")
testname2 = self.create_test(code=other_code)

output = self.run_tests(testname, testname2, "-m", "nosuchtest",
"-m", "test_other_bug", exitcode=0)
self.check_executed_tests(output, [testname, testname2],
no_test_ran=[testname])


class TestUtils(unittest.TestCase):
@@ -0,0 +1,3 @@
regrtest now issues a warning when no tests have been executed in a particular
test file. Also, a new final result state is issued if no tests have been
executed across all test files. Patch by Pablo Galindo.