Bump androidtv to 0.0.26 and update tests #26340
Conversation
@MartinHjelmare you said (source):
Also, to check whether a device is available, do I simply check that …

Yes, but we need to also check that the state is not None.

How do I check that? Won't it always be true that

assert hass.states.get(entity_id).state == STATE_UNAVAILABLE  # this passes
assert hass.states.get(entity_id).state is None  # this fails

I was thinking something like this, without looking at further context:

state = hass.states.get(entity_id)
assert state is not None
assert state.state != STATE_UNAVAILABLE

I could add that if you'd like, but as it is the test would fail if …

Tests shouldn't fail because we cause an exception in the test. They should fail because the code under test doesn't behave as asserted in the tests.

OK, I updated the tests so that they always make sure the state object is not None.
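For reference, the availability check discussed above looks roughly like this inside a component test. This is a minimal sketch; the entity_id value and the setup step are placeholders, not code from this PR:

```python
from homeassistant.const import STATE_UNAVAILABLE


async def test_device_available(hass):
    """Sketch: assert that the entity exists and is not unavailable."""
    # ... set up the androidtv media_player platform with patched ADB classes ...
    entity_id = "media_player.android_tv"  # hypothetical entity id

    state = hass.states.get(entity_id)
    assert state is not None
    assert state.state != STATE_UNAVAILABLE
```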
class AdbCommandsFake:
    """A fake of the `adb.adb_commands.AdbCommands` class."""

    def ConnectDevice(self, *args, **kwargs):  # pylint: disable=invalid-name
This name is necessary because it needs to match the name in the adb.adb_commands.AdbCommands class. Same goes for the other names below.
Another approach would be to spec / autospec a Mock with the correct library class and then set side effects as needed on the mock methods. The benefit with that is that it will raise or fail tests if the library changes and we don't update accordingly.
That makes sense, but I don't know how to do that.

As it is, I structured the androidtv package so that all ADB stuff is handled in adb_manager.py. In these tests, I'm patching two classes in this module that are imported from two ADB-related packages. If those packages changed, I would need to either update the androidtv package or pin its requirements. But for now, androidtv's requirements are actually my forks of those two ADB-related packages with the necessary modifications, and unless/until that changes I can guarantee that these tests won't break.
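As an illustration of the patching approach described above, the general shape is something like the following; the patch target string is an assumption about androidtv's module layout, not taken verbatim from these tests:

```python
from unittest.mock import patch


class AdbCommandsFake:
    """Minimal stand-in for `adb.adb_commands.AdbCommands` (sketch only)."""

    def ConnectDevice(self, *args, **kwargs):  # pylint: disable=invalid-name
        """Pretend the ADB connection succeeded."""
        return self


# "androidtv.adb_manager.AdbCommands" is an assumed lookup path; per the
# comment above, the tests patch the classes where adb_manager.py imports them.
with patch("androidtv.adb_manager.AdbCommands", AdbCommandsFake):
    # Set up the media_player platform here; all ADB calls hit the fake.
    ...
```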
This explains more about autospeccing:
https://docs.python.org/3/library/unittest.mock.html#auto-speccing
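For comparison, a rough sketch of the autospec idea; the simulated failure and the surrounding setup are illustrative, not from this PR:

```python
from unittest.mock import patch

# autospec=True makes the mock mirror the real AdbCommands API, so if the
# library renames a method or changes a signature, the test errors out
# instead of silently passing against a stale fake.
with patch("adb.adb_commands.AdbCommands", autospec=True) as adb_mock:
    adb_mock.return_value.ConnectDevice.side_effect = RuntimeError("no device")
    # ... run the setup that is expected to handle the failed connection ...
    ...
```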
}


async def _test_reconnect(hass, caplog, config):
I think this is OK, but another approach that I would prefer is to extract the setup and patching code into pytest fixtures and keep the assertions in the main test functions. It's easier to follow what each test should do that way.
I'm not familiar with pytest fixtures.
Also, it's basically the same two tests repeated four times each (Python ADB vs. ADB server, Android TV device vs. Fire TV device). I thought it was clearer to use those hidden helper functions than to have four nearly identical tests.
If the tests pass for one of those four configurations then they should pass for the other three, but I thought it better to test it and be sure than to assume.
Pytest fixtures are awesome:
https://docs.pytest.org/en/latest/fixture.html
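For reference, a fixture-based version of the setup could look roughly like this; the fixture name and config values are made up for illustration, not taken from these tests:

```python
import pytest


@pytest.fixture
def androidtv_config():
    """Provide a sample platform configuration (hypothetical values)."""
    return {"platform": "androidtv", "host": "127.0.0.1", "name": "Android TV"}


async def test_reconnect(hass, caplog, androidtv_config):
    """The fixture is injected by name; the test body keeps the assertions."""
    # ... set up the media_player platform with androidtv_config,
    # simulate a dropped connection, then assert on caplog ...
    assert androidtv_config["platform"] == "androidtv"
```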
Let me know if you're ready to merge.

I replied to two of your comments. If my responses were satisfactory, then go ahead and merge. And thanks for reviewing!
Description:

Update the androidtv tests to use pytest instead of unittest.

Checklist:

- Local tests pass with tox. Your PR cannot be merged unless tests pass.

If the code communicates with devices, web services, or third-party tools:

- Updated and included derived files by running python3 -m script.hassfest.
- New or updated dependencies have been added to requirements_all.txt by running python3 -m script.gen_requirements_all.

If the code does not interact with devices: