[monitoring] fix: use retrying module in the fixture class #3285

Merged · 7 commits · Apr 7, 2020
2 changes: 2 additions & 0 deletions monitoring/api/v3/alerts-client/requirements-test.txt
@@ -1 +1,3 @@
pytest==5.3.2
+gcp-devrel-py-tools==0.0.15
+google-cloud-core

Contributor:
Any reason not to pin a version?

Contributor Author:

This is a temporary workaround; it'll go away.
(Actually, I'll use flaky, so those two lines will go away.)
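
For reference, pinning here would just mean giving google-cloud-core an exact version in requirements-test.txt, as the other entries already do. The version number below is hypothetical, not one taken from this PR:

pytest==5.3.2
gcp-devrel-py-tools==0.0.15
google-cloud-core==1.3.0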

6 changes: 6 additions & 0 deletions monitoring/api/v3/alerts-client/snippets_test.py
@@ -17,6 +17,7 @@
import random
import string

+from gcp_devrel.testing import eventually_consistent

Contributor:
In general I'd prefer we stop relying on this module altogether. It's not maintained, and a lot of the logic isn't really great for testing.

IIRC, eventually_consistent just re-attempts the function until it passes. This could make the issue of failures during concurrent tests worse. There are other libraries (like flaky) that do this better and don't add extra maintenance burden on us.
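
For context, flaky reruns a failing test a bounded number of times via a decorator. A minimal sketch of how it might look on one of the tests below; the max_runs/min_passes values are illustrative, not taken from this PR:

from flaky import flaky

# Sketch: rerun an eventually-consistent test up to 3 times; it passes
# if any single run passes. Assumes the snippets module and the pochan
# fixture already defined in snippets_test.py.
@flaky(max_runs=3, min_passes=1)
def test_list_alert_policies(capsys, pochan):
    snippets.list_alert_policies(pochan.project_name)
    out, _ = capsys.readouterr()
    assert pochan.alert_policy.display_name in out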

Contributor Author:

@kurtisvg
Flaky sounds like a plan; I'll use flaky for this test.

But I kinda disagree with your two points. On the first, "it's not maintained": we can maintain it, right? I'm happy to do that from now on. On the second, "a lot of the logic isn't really great for testing": can you elaborate? Maybe you could file a bug on the repo?

Contributor Author:

I still think eventually_consistent is good for eventually consistent tests. It is basically a wrapper around the retrying module with default exponential backoff parameters.

But I agree that flaky is more suitable for this test.
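
For illustration, a rough sketch of what an eventually_consistent-style mark amounts to when built on the retrying module. The backoff and attempt values are illustrative, not gcp-devrel-py-tools' actual defaults:

from retrying import retry

# Sketch: retry a test function with exponential backoff until its
# assertions stop failing, giving up after a fixed number of attempts.
def eventually_consistent_mark(test_func):
    return retry(
        wait_exponential_multiplier=1000,  # exponential backoff, starting around a second
        wait_exponential_max=30000,        # cap the wait between attempts at 30 seconds
        stop_max_attempt_number=5,         # fail for real after 5 attempts
        retry_on_exception=lambda e: isinstance(e, AssertionError),
    )(test_func)

Because it retries only on AssertionError, genuine errors in the code under test would still fail fast.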

Contributor:

RE: maintenance - it doesn't seem worth the effort for us to maintain and release a special module just for testing our samples. Are our libraries really so hard to test that we can't find an out-of-the-box solution that gives us reasonable tests? (I think the answer is no.)

RE: logic - IMO, flaky tests are bad tests. Running a test several times and hoping at least one run passes means that our users may also have to run the sample multiple times to be sure it works. Ideally, the snippet should work the first time, every time.

Contributor Author:

@kurtisvg Thanks for the discussion. Let's discuss further off PR :)

from google.cloud import monitoring_v3
import google.protobuf.json_format
import pytest
@@ -74,12 +75,14 @@ def pochan():
yield pochan


+@eventually_consistent.mark
def test_list_alert_policies(capsys, pochan):
snippets.list_alert_policies(pochan.project_name)
out, _ = capsys.readouterr()
assert pochan.alert_policy.display_name in out


+@eventually_consistent.mark
def test_enable_alert_policies(capsys, pochan):
snippets.enable_alert_policies(pochan.project_name, False)
out, _ = capsys.readouterr()
@@ -97,6 +100,7 @@ def test_enable_alert_policies(capsys, pochan):
assert "already enabled" in out


+@eventually_consistent.mark
def test_replace_channels(capsys, pochan):
alert_policy_id = pochan.alert_policy.name.split('/')[-1]
notification_channel_id = pochan.notification_channel.name.split('/')[-1]
@@ -106,6 +110,7 @@ def test_replace_channels(capsys, pochan):
assert "Updated {0}".format(pochan.alert_policy.name) in out


+@eventually_consistent.mark
def test_backup_and_restore(capsys, pochan):
snippets.backup(pochan.project_name, 'backup.json')
out, _ = capsys.readouterr()
@@ -117,6 +122,7 @@ def test_backup_and_restore(capsys, pochan):
pochan.notification_channel.display_name) in out


+@eventually_consistent.mark
def test_delete_channels(capsys, pochan):
notification_channel_id = pochan.notification_channel.name.split('/')[-1]
snippets.delete_notification_channels(