Implement check-result: test check failure affects test result by default #3239

Open · wants to merge 6 commits into main
18 changes: 17 additions & 1 deletion docs/releases.rst
@@ -4,7 +4,6 @@
Releases
======================


tmt-1.38.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -25,6 +24,23 @@ tests from remote repositories where the user does not have write
access.


The :ref:`/spec/tests/check` specification now supports a new ``result``
key for individual checks. This attribute allows users to control how the
result of each check affects the overall test result. The following values
are supported:

respect (default)
The check result is respected and affects the overall test result.
xfail
The check result is expected to fail (pass becomes fail and vice-versa).
info
The check result is always treated as "INFO" and does not affect the
overall test result.

Please note that tests which previously passed despite failing checks
will now fail by default, unless ``xfail`` or ``info`` is set.
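For illustration, the mapping described above can be sketched in Python.
This is a simplified, hypothetical helper added for clarity, not the
actual tmt implementation:

.. code-block:: python

    # Illustrative sketch only -- the helper name and the plain string
    # outcomes are hypothetical; tmt uses its own result classes.
    def interpret_check_outcome(outcome: str, interpretation: str) -> str:
        """Map a raw check outcome according to the ``result`` key."""
        if interpretation == 'info':
            # The check never affects the overall test result.
            return 'info'
        if interpretation == 'xfail':
            # Expected failure: flip pass and fail, keep other outcomes.
            return {'pass': 'fail', 'fail': 'pass'}.get(outcome, outcome)
        # 'respect' (the default): take the check outcome at face value.
        return outcome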


tmt-1.37.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

10 changes: 9 additions & 1 deletion docs/scripts/generate-plugins.py
@@ -1,6 +1,7 @@
#!/usr/bin/env python3

import dataclasses
import enum
import sys
import textwrap
from typing import Any
@@ -56,7 +57,7 @@ def _is_inherited(

# TODO: for now, it's a list, but inspecting the actual tree of classes
# would be more generic. It's good enough for now.
return field.name in ('name', 'where', 'order', 'summary', 'enabled')
return field.name in ('name', 'where', 'order', 'summary', 'enabled', 'result')


def container_ignored_fields(container: ContainerClass) -> list[str]:
@@ -106,6 +107,12 @@ def container_intrinsic_fields(container: ContainerClass) -> list[str]:
return field_names


def is_enum(value: Any) -> bool:
""" Find out whether a given value is an enum member """

return isinstance(value, enum.Enum)


def _create_step_plugin_iterator(registry: tmt.plugins.PluginRegistry[tmt.steps.Method]):
""" Create iterator over plugins of a given registry """

@@ -184,6 +191,7 @@ def main() -> None:
STEP=step_name,
PLUGINS=plugin_generator,
REVIEWED_PLUGINS=REVIEWED_PLUGINS,
is_enum=is_enum,
container_fields=tmt.utils.container_fields,
container_field=tmt.utils.container_field,
container_ignored_fields=container_ignored_fields,
4 changes: 3 additions & 1 deletion docs/templates/plugins.rst.j2
@@ -30,8 +30,10 @@
{% else %}
Default: {% for default_item in actual_default %}``{{ default_item.pattern | default(default_item) }}``{% if not loop.last %}, {% endif %}{% endfor %}
{% endif %}
{% elif is_enum(actual_default) %}
Default: ``{{ actual_default.value }}``
{% else %}
{% set _ = LOGGER.warn("%s/%s.%s: could not render default value, '%s'" | format(STEP, plugin_id, field_name, actual_default)) %}
{% set _ = LOGGER.warn("%s/%s.%s: could not render default value, '%s'" | format(STEP, plugin_id, field_name, actual_default), shift=0) %}
Default: *could not render default value correctly*
{% endif %}
{% endif %}
23 changes: 23 additions & 0 deletions spec/tests/check.fmf
@@ -13,6 +13,13 @@ description: |
the user can run into. Other useful checks would be kernel
panic detection, core dump collection or collection of system
logs.

By default, the check results affect the overall test outcome.
To change this behaviour, use the ``result`` key. Apart from the
default ``respect``, it can be set to ``xfail``, which expects the
check to fail, or ``info``, which turns the check result into an
informational one.

.. versionchanged:: 1.38.0 ``result`` key added

See :ref:`/plugins/test-checks` for the list of available checks.

@@ -37,7 +44,23 @@ example:
- how: test-inspector
enable: false

- |
# Expect the AVC check to fail
check:
- how: avc
result: xfail
- how: dmesg
result: respect

- |
# Treat the dmesg check as informational only
check:
- how: dmesg
result: info

link:
- implemented-by: /tmt/checks
- implemented-by: /tmt/result.py
- verified-by: /tests/test/check/avc
- verified-by: /tests/test/check/dmesg
- verified-by: /tests/execute/result/check_results
4 changes: 2 additions & 2 deletions tests/execute/result/basic.sh
@@ -38,7 +38,7 @@ rlJournalStart
run "errr" "/test/error" "" 2
run "pass" "/test/xfail-fail" "fail" 0
run "fail" "/test/xfail-pass" "pass" 1
Review comment (Collaborator, Author):
Are we OK with this? I see no reason to add a note when the original and actual results are the same, no?

run "errr" "/test/xfail-error" "error" 2
run "errr" "/test/xfail-error" "" 2
run "pass" "/test/always-pass" "fail" 0
run "info" "/test/always-info" "pass" 0
run "warn" "/test/always-warn" "pass" 1
@@ -67,7 +67,7 @@ rlJournalStart
00:00:01 errr /test/error-timeout (on default-0) (timeout) [7/12]
00:00:00 fail /test/fail (on default-0) [8/12]
00:00:00 pass /test/pass (on default-0) [9/12]
00:00:00 errr /test/xfail-error (on default-0) (original result: error) [10/12]
00:00:00 errr /test/xfail-error (on default-0) [10/12]
00:00:00 pass /test/xfail-fail (on default-0) (original result: fail) [11/12]
00:00:00 fail /test/xfail-pass (on default-0) (original result: pass) [12/12]
EOF
53 changes: 53 additions & 0 deletions tests/execute/result/check_results.sh
@@ -0,0 +1,53 @@
#!/bin/bash
. /usr/share/beakerlib/beakerlib.sh || exit 1

run()
{
res=$1 # expected final result of test
tn=$2 # test name
ret=$3 # tmt return code

rlRun -s "tmt run -a --scratch --id \${run} test --name ${tn} provision --how local report -v 2>&1 >/dev/null | grep report -A2 | tail -n 1" \
${ret} "Result: ${res}, Test name: ${tn}, tmt return code: ${ret}"

rlAssertGrep "${res} ${tn}" $rlRun_LOG

echo
}

rlJournalStart
rlPhaseStartSetup
rlRun "run=\$(mktemp -d)" 0 "Create run directory"
rlRun "pushd check_results"
rlRun "set -o pipefail"
rlPhaseEnd

rlPhaseStartTest "Check Results Tests"
run "pass" "/test/check-pass" 0
run "fail" "/test/check-fail-respect" 1
run "pass" "/test/check-fail-info" 0
run "fail" "/test/check-xfail-pass" 1
run "pass" "/test/check-xfail-fail" 0
run "pass" "/test/check-override" 0
rlPhaseEnd

rlPhaseStartTest "Verbose execute prints result"
rlRun -s "tmt run --id \${run} --scratch --until execute tests --filter tag:-cherry_pick provision --how local execute -v 2>&1 >/dev/null" "1"

while read -r line; do
rlAssertGrep "$line" "$rlRun_LOG" -F
done <<-EOF
00:00:00 pass /test/check-fail-info (on default-0) [1/6]
00:00:00 fail /test/check-fail-respect (on default-0) (Check 'dmesg' failed, original result: pass) [2/6]
00:00:00 pass /test/check-override (on default-0) [3/6]
00:00:00 pass /test/check-pass (on default-0) [4/6]
00:00:00 pass /test/check-xfail-fail (on default-0) [5/6]
00:00:00 fail /test/check-xfail-pass (on default-0) (Check 'dmesg' failed, original result: pass) [6/6]
EOF
rlPhaseEnd

rlPhaseStartCleanup
rlRun "popd"
rlRun "rm -r ${run}" 0 "Remove run directory"
rlPhaseEnd
rlJournalEnd
1 change: 1 addition & 0 deletions tests/execute/result/check_results/.fmf/version
@@ -0,0 +1 @@
1
84 changes: 84 additions & 0 deletions tests/execute/result/check_results/test.fmf
@@ -0,0 +1,84 @@
summary: Tests for check results behaviour
description: Verify that check results are correctly interpreted and affect test results

/check-pass:
summary: Test with passing checks
test: echo "Test passed"
framework: shell
duration: 1m
check:
- how: dmesg
result: respect
# Expected outcome: PASS (test passes, check passes)

/check-fail-respect:
summary: Test with failing dmesg check (respect)
test: echo "Fail Test Check Pattern" | sudo tee /dev/kmsg
framework: shell
duration: 1m
check:
- how: dmesg
failure-pattern: Fail Test Check Pattern
result: respect
# Expected outcome: FAIL (test passes, but check fails and is respected)

/check-fail-info:
summary: Test with failing dmesg check (info)
test: echo "Fail Test Check Pattern" | sudo tee /dev/kmsg
framework: shell
duration: 1m
check:
- how: dmesg
failure-pattern: Fail Test Check Pattern
result: info
# Expected outcome: PASS (test passes, check fails, but should be just info)

/check-xfail-pass:
summary: Test with passing dmesg check (xfail)
test: echo "Test passed"
framework: shell
duration: 1m
check:
- how: dmesg
result: xfail
# Expected outcome: FAIL (test passes, check passes but xfail expects it to fail)

/check-xfail-fail:
summary: Test with failing dmesg check (xfail)
test: echo "Fail Test Check Pattern" | sudo tee /dev/kmsg
framework: shell
duration: 1m
check:
- how: dmesg
failure-pattern: Fail Test Check Pattern
result: xfail
# Expected outcome: PASS (test passes, check fails but xfail expects it to fail)

# TODO: handle multiple checks with same 'name'/'how'
#/check-multiple:
# summary: Test with multiple checks with different result interpretations
# test: echo "Fail Test Check Pattern" | sudo tee /dev/kmsg
# framework: shell
# duration: 1m
# check:
# - how: dmesg
# failure-pattern: Fail Test Check Pattern
# result: respect
# - how: dmesg
# result: xfail
# - how: dmesg
# failure-pattern: Fail Test Check Pattern
# result: info
# # Expected outcome: FAIL (first dmesg check fails and is respected, second dmesg check passes but xfail expects it to fail, third failing dmesg check is just info)

/check-override:
summary: Test with failing dmesg check but overridden by test result
test: echo "Fail Test Check Pattern" | sudo tee /dev/kmsg
framework: shell
duration: 1m
result: pass
check:
- how: dmesg
failure-pattern: Fail Test Check Pattern
result: respect
# Expected outcome: PASS (test passes, check fails but is overridden by 'result: pass')
3 changes: 3 additions & 0 deletions tests/execute/result/main.fmf
@@ -10,3 +10,6 @@
/special:
summary: Test special characters generated to tmt-report-results.yaml
test: ./special.sh
/check_results:
summary: Test check result interpretations
test: ./check_results.sh
22 changes: 21 additions & 1 deletion tmt/checks/__init__.py
@@ -67,6 +67,7 @@ def find_plugin(name: str) -> 'CheckPluginClass':
class _RawCheck(TypedDict):
how: str
enabled: bool
result: str


class CheckEvent(enum.Enum):
@@ -83,6 +84,19 @@ def from_spec(cls, spec: str) -> 'CheckEvent':
raise tmt.utils.SpecificationError(f"Invalid test check event '{spec}'.")


class CheckResultInterpret(enum.Enum):
INFO = 'info'
RESPECT = 'respect'
XFAIL = 'xfail'

@classmethod
def from_spec(cls, spec: str) -> 'CheckResultInterpret':
try:
return CheckResultInterpret(spec)
except ValueError:
raise ValueError(f"Invalid check result interpretation '{spec}'.")


@dataclasses.dataclass
class Check(
SpecBasedContainer[_RawCheck, _RawCheck],
@@ -100,6 +114,12 @@ class Check(
default=True,
is_flag=True,
help='Whether the check is enabled or not.')
result: CheckResultInterpret = field(
default=CheckResultInterpret.RESPECT,
help='How to interpret the check result.',
serialize=lambda result: result.value,
unserialize=CheckResultInterpret.from_spec
)
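# Note (illustrative, not part of this change): the serialize and
# unserialize callables above round-trip the new key through its plain
# string value, so a raw check spec such as
#
#     {'how': 'dmesg', 'enabled': True, 'result': 'xfail'}
#
# ends up with result=CheckResultInterpret.XFAIL on the Check instance
# and serializes back to the string 'xfail'.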

@functools.cached_property
def plugin(self) -> 'CheckPluginClass':
@@ -228,7 +248,7 @@ def normalize_test_check(
if isinstance(raw_test_check, str):
try:
return CheckPlugin.delegate(
raw_data={'how': raw_test_check, 'enabled': True},
raw_data={'how': raw_test_check, 'enabled': True, 'result': 'respect'},
logger=logger)

except Exception as exc: