
Bump androidtv to 0.0.26 and update tests #26340

Merged 11 commits into home-assistant:dev on Sep 2, 2019

Conversation

@JeffLIrion (Contributor) commented Sep 1, 2019

Description:

Update the androidtv tests to use pytest instead of unittest.

Checklist:

  • The code change is tested and works locally.
  • Local tests pass with tox. Your PR cannot be merged unless tests pass.
  • There is no commented out code in this PR.
  • I have followed the development checklist

If the code communicates with devices, web services, or third-party tools:

  • The manifest file has all fields filled out correctly. Update and include derived files by running python3 -m script.hassfest.
  • New or updated dependencies have been added to requirements_all.txt by running python3 -m script.gen_requirements_all.

If the code does not interact with devices:

  • Tests have been added to verify that the new code works.

@probot-home-assistant bot added labels has-tests, small-pr (PRs with less than 30 lines) on Sep 1, 2019
@JeffLIrion (Contributor Author) commented Sep 1, 2019

@MartinHjelmare you said (source):

New tests must be standalone pytest test functions instead of unittest class method tests.

Is there an easy way to capture logging statements using pytest? If not, would I be able to use unittest.TestCase.assertLogs? Update: I got this working using caplog.

Also, to check whether a device is available, do I simply check that hass.states.get(entity_id).state != STATE_UNAVAILABLE?

@MartinHjelmare (Member)

Yes, but we need to also check that the state is not None.

@JeffLIrion (Contributor Author)

Yes, but we need to also check that the state is not None.

How do I check that? Won't it always be true that hass.states.get(entity_id).state is not None?

assert hass.states.get(entity_id).state == STATE_UNAVAILABLE  # this passes
assert hass.states.get(entity_id).state is None               # this fails

@MartinHjelmare (Member)

I was thinking something like this, without looking at further context:

state = hass.states.get(entity_id)
assert state is not None
assert state.state != STATE_UNAVAILABLE

@JeffLIrion (Contributor Author)

I was thinking something like this, without looking at further context:

state = hass.states.get(entity_id)
assert state is not None
assert state.state != STATE_UNAVAILABLE

I could add that if you'd like, but as it is the test would fail if state is None because trying to access state.state would cause an exception.

@MartinHjelmare (Member)

Tests shouldn't fail because we cause an exception in the test. They should fail because the code under test doesn't behave as asserted in the tests.

@JeffLIrion JeffLIrion changed the title [WIP, not ready for review] Update androidtv tests Bump androidtv to 0.0.26 and update tests Sep 2, 2019
@JeffLIrion (Contributor Author)

Tests shouldn't fail because we cause an exception in the test. They should fail because the code under test doesn't behave as asserted in the tests.

OK, I updated the tests so that they always make sure the state object is not None. This is ready for review!

class AdbCommandsFake:
    """A fake of the `adb.adb_commands.AdbCommands` class."""

    def ConnectDevice(self, *args, **kwargs):  # pylint: disable=invalid-name
@JeffLIrion (Contributor Author)

This name is necessary because it needs to match the name in the adb.adb_commands.AdbCommands class. Same goes for the other names below.

@MartinHjelmare (Member)

Another approach would be to spec / autospec a Mock with the correct library class and then set side effects as needed on the mock methods. The benefit with that is that it will raise or fail tests if the library changes and we don't update accordingly.
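The spec/autospec approach described above can be sketched roughly as follows. The AdbCommands class here is a local stand-in for the real adb.adb_commands.AdbCommands, since the point is only how create_autospec behaves:

```python
from unittest.mock import create_autospec


class AdbCommands:
    """Local stand-in for the real `adb.adb_commands.AdbCommands` class."""

    def ConnectDevice(self, *args, **kwargs):  # pylint: disable=invalid-name
        raise NotImplementedError


# An autospecced mock mirrors the spec's API: existing methods can be
# configured with return values or side effects, but accessing a method
# the class doesn't define raises AttributeError, so tests break loudly
# if the library's interface changes underneath us.
mock_adb = create_autospec(AdbCommands, instance=True)
mock_adb.ConnectDevice.return_value = "fake-device"

assert mock_adb.ConnectDevice("serial") == "fake-device"
```

With a hand-rolled fake, a renamed library method would silently keep the tests green; the autospecced mock turns that drift into a test failure.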

@JeffLIrion (Contributor Author)

That makes sense, but I don't know how to do that.

As it is, I structured the androidtv package so that all ADB stuff is handled in adb_manager.py. In these tests, I'm patching two classes in this module that are imported from two ADB-related packages. If those packages changed, I would need to either update the androidtv package or pin its requirements. But for now, androidtv's requirements are actually my forks of those two ADB-related packages with necessary modifications, and unless/until that changes then I can guarantee that these tests won't break.


}


async def _test_reconnect(hass, caplog, config):
@MartinHjelmare (Member)

I think this is ok, but another approach that I would prefer, is to try to extract the set up and patching code into pytest fixtures and keep the assertions in the main test functions. It's easier to follow what each test should do, that way.

@JeffLIrion (Contributor Author)

I'm not familiar with pytest fixtures.

Also, it's basically the same two tests repeated four times each (Python ADB vs. ADB server, Android TV device vs. Fire TV device). I thought it was clearer to use those hidden helper functions than to have four nearly identical tests.

If the tests pass for one of those four configurations then they should pass for the other three, but I thought it better to test it and be sure than to assume.

@MartinHjelmare (Member)

Pytest fixtures are awesome:
https://docs.pytest.org/en/latest/fixture.html
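The suggestion above, moving set-up and patching into a fixture so the test body is pure assertions, might look like this sketch. AdbManager and the fixture name are hypothetical, not the integration's actual classes:

```python
import pytest
from unittest.mock import patch


class AdbManager:
    """Hypothetical stand-in for the integration's ADB layer."""

    def connect(self):
        raise ConnectionError("a real device would be required")


@pytest.fixture
def connected_manager():
    """Do the patching in the fixture; tests receive a ready-to-use object."""
    with patch.object(AdbManager, "connect", return_value=True):
        yield AdbManager()


def test_connect(connected_manager):
    # The fixture has already applied the patch, so the test body
    # contains only the assertions that describe expected behavior.
    assert connected_manager.connect() is True
```

Because the patch lives in a `with` block around the `yield`, it is applied for the duration of each test and undone during fixture teardown, which keeps the eight near-identical tests short while still exercising all four configurations.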

tests/components/androidtv/patchers.py (outdated; resolved)
@MartinHjelmare (Member)

Let me know if you're ready to merge.

@JeffLIrion (Contributor Author)

Let me know if you're ready to merge.

I replied to two of your comments. If my responses were satisfactory, then go ahead and merge.

And thanks for reviewing!

@MartinHjelmare MartinHjelmare merged commit 85473d2 into home-assistant:dev Sep 2, 2019
@lock lock bot locked and limited conversation to collaborators Sep 3, 2019
Labels: cla-signed, has-tests, small-pr (PRs with less than 30 lines)
3 participants