Async version of Yr.no #4158
Conversation
@kellerza, thanks for your PR! By analyzing the history of the files in this pull request, we identified @balloob, @Danielhiversen and @fabaff to be potential reviewers.
@@ -57,7 +57,7 @@ def call_count(self):
         return len(self.mock_calls)

     @asyncio.coroutine
-    def match_request(self, method, url, *, auth=None):
+    def match_request(self, method, url, *, auth=None, **kwargs):
This allows the timeout parameter in the get request (and possible future arguments).
I don't want to use kwargs if we know exactly what keywords to expect. By adding uncertainty we add tech debt.
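Spelled out, the signature would list each expected keyword explicitly, as a later revision of this diff in fact does (a sketch, not the final code):

    @asyncio.coroutine
    def match_request(self, method, url, *, auth=None, timeout=None):
        """Match a request, accepting only the keywords we expect."""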
@@ -80,7 +80,7 @@ def __init__(self, method, url, status, response):

     def match_request(self, method, url):
         """Test if response answers request."""
-        return method == self.method and url == self.url
+        return method == self.method and url.startswith(self.url)
The non-async requests mock handled "startswith"-type functionality in the old tests.
That doesn't seem to be the case; see the source. The difference, however, is that the query string was tested separately.
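A hypothetical illustration of matching the path and query string independently (the comparison logic here is an assumption; _url_parts mirrors an attribute quoted later in this review):

    from urllib.parse import urlparse

    def match_request(self, method, url):
        """Compare method, path and query string independently."""
        req = urlparse(url.lower())
        return (method == self.method and
                req.path == self._url_parts.path and
                req.query == self._url_parts.query)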
        self.update()
        self.hass = hass
        async_track_time_change(self.hass, self.async_update, second=0,
                                minute=0, hour=3)
This schedules a once-per-day update in case there was a connection error or similar failure in the async_update method. It should effectively "restart" the updating if anything went wrong.
With this approach you are going to have a new connection run in parallel each day. This will only work if you keep track of the unsub functions that async_track_point_in_utc_time will return.
It's also bad style to just put a restart-everything piece in your code. Doesn't show much faith in your own abilities 😛
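A minimal sketch of that bookkeeping, assuming a new self._unsub_retry attribute and a one-day retry interval (neither is in the PR):

    from datetime import timedelta

    from homeassistant.helpers.event import async_track_point_in_utc_time
    import homeassistant.util.dt as dt_util

    def _schedule_retry(self):
        """Schedule a single daily retry, cancelling any earlier one."""
        if self._unsub_retry is not None:
            self._unsub_retry()  # unsubscribe the previous listener
        self._unsub_retry = async_track_point_in_utc_time(
            self.hass, self.async_update,
            dt_util.utcnow() + timedelta(days=1))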
👍 Not sure what I was thinking...
            if resp.status != 200:
                return
            _text = (yield from resp.text())
        except asyncio.TimeoutError:
            return
Any time you return you should reschedule another connection.
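For illustration, each early exit would first schedule the next attempt, e.g. via the try_again helper discussed further down (the message strings here are made up):

            if resp.status != 200:
                try_again('unexpected status code {}'.format(resp.status))
                return
            _text = (yield from resp.text())
        except asyncio.TimeoutError:
            try_again('timed out')
            return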
            data = response.text
        finally:
            self.hass.loop.create_task(resp.release())
            # ? yield from resp.release()
create_task is fine, no need to wait for it to release to continue processing this response.
        except requests.RequestException:
            if resp.status != 200:
                return
            _text = (yield from resp.text())
Prefixing a local variable with _ usually indicates that it is not going to be used.
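Since the value is used right afterwards, dropping the prefix matches the convention (sketch):

            text = (yield from resp.text())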
        # Check if new will be available
        if self._nextrun is not None and dt_util.utcnow() <= self._nextrun:
            return
        resp = yield from self.hass.websession.get(self._url, timeout=30)
This is not how aiohttp does timeouts. See their Getting Started docs.
Then I probably have a read timeout here and not a connection-setup timeout: http://aiohttp.readthedocs.io/en/stable/client.html#timeouts
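For reference, a sketch of the pattern aiohttp documented around that time: bound the whole request/read with a timeout context manager instead of a get() keyword (the exact API differs across aiohttp versions; the 30-second value is carried over from the quoted code):

    import asyncio

    import aiohttp

    @asyncio.coroutine
    def _fetch_text(websession, url):
        """Fetch a body with an overall 30-second deadline."""
        with aiohttp.Timeout(30):
            resp = yield from websession.get(url)
            try:
                return (yield from resp.text())
            finally:
                yield from resp.release()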
        # pylint: disable=protected-access
        if new_state != dev._state:
            dev._state = new_state
            yield from dev.async_update_ha_state()
Instead of yielding on a per-sensor basis, you can use gather.
tasks = []
for dev in y:
tasks.append(dev.async_update_ha_state())
yield from asyncio.gather(*tasks, loop=hass.loop)
    now = datetime(2016, 6, 9, 1, tzinfo=dt_util.UTC)

def _setup_platform(hass, config):
    """Setup a yr platform."""
    hass.allow_pool = True
You would not have to allow the pool if you converted the setup to async_setup_platform.
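A minimal sketch of what that conversion could look like, with entity construction elided (coroutine style matches the rest of the PR):

    import asyncio

    @asyncio.coroutine
    def async_setup_platform(hass, config, async_add_devices,
                             discovery_info=None):
        """Set up the yr platform without needing the worker pool."""
        dev = []  # build the YrSensor entities from config here
        yield from async_add_devices(dev)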
        state = hass.states.get('sensor.yr_symbol')

-        assert state.state == '3'
+        assert state.state.isnumeric()
This assert makes no sense: we know the state is '3', so we know it will be numeric.
Force-pushed from dba3cf2 to c7d5c1e
@asyncio.coroutine
def test_default_setup(hass, aioclient_mock):
    """Test the default setup."""
    aioclient_mock.get('http://api.yr.no/weatherapi/locationforecast/1.9/',
                       text=load_fixture('yr.no.json'))
    config = {'platform': 'yr',
              'elevation': 0}
    yield from hass.loop.run_in_executor(None, _setup_platform(hass, config))
    hass.allow_pool = True
Could not get rid of this yet...
        add_devices(dev)
        weather = YrData(hass, coordinates, dev)
        tasks = [async_add_devices(dev), weather.async_update()]
        yield from asyncio.gather(*tasks, loop=hass.loop)
Instead of this workaround you can just write

yield from asyncio.gather(async_add_devices(dev), weather.async_update(), loop=hass.loop)

The * operator in a function call will spread an iterable out over the positional arguments.
However, in this case it is actually important that async_add_devices runs first, because it sets the hass attribute and the entity_id, which are needed to call async_update_ha_state. However, without weather.async_update, you will write unknown to the state machine.
Do note that this way you will first get a bunch of unknown states that will resolve once weather.async_update comes through.
I guess the ultimate solution would be to make an async-friendly throttle.
For now I will yield in two statements: (1) to make sure the entities are created, and then (2) to update them.
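I.e. sequentially rather than gathered (sketch):

    yield from async_add_devices(dev)
    yield from weather.async_update()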
Not quite following the "throttling" comment. Delay/wait makes sense, but I guess yielding gives that. This should also fix the sometimes-failing Travis tests, which seem to pass consistently on my dev machine.
            model = model[0]
            next_run = dt_util.parse_datetime(model['@nextrun'])
        except (ExpatError, IndexError) as err:
            return try_again(err)
Why do you sometimes return something, but not in case of a success?
It is simply to exit the function when aborting the update; the return value is not important. try_again also returns nothing (None?). Should I rather split this statement over two lines?
Yes, split it over two lines. If people should not use the return value, it should have none.
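I.e. (sketch of the split):

        except (ExpatError, IndexError) as err:
            try_again(err)
            return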
@@ -15,7 +16,7 @@ def __init__(self):
         self.mock_calls = []

     def request(self, method, url, *,
-                auth=None,
+                auth=None,  # pylint: disable=unused-variable
Pretty sure this one is disabled in our pylintrc
I will make sure it is there; this raises a warning on my recent dev branch.
unused-argument is present in pylintrc, but not unused-variable. I decided against adding unused-variable since it seems too wide.
According to the documentation, unused-argument should be raised here, so I'm not sure why I'm getting unused-variable here.
@@ -57,7 +58,8 @@ def call_count(self):
         return len(self.mock_calls)

     @asyncio.coroutine
-    def match_request(self, method, url, *, auth=None):
+    def match_request(self, method, url, *, auth=None, timeout=None):
Do you still need to add timeout now that you are no longer using it?
Will remove. It is only applicable for long streaming-type requests, I guess; for most of our uses the default should be OK.
        self._url = url
        try:
            self._url_parts = urlparse(url.lower())
        except AttributeError:
Don't catch this, just blow up. It's usually pretty bad to swallow errors, because it means we won't notice that we messed up until way later, and it will be difficult to find the cause.
This mimics requests_mock. The reason for this is that you can pass in a regex object for url, but I will rather ask for permission with a test than ask for forgiveness.
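A hypothetical look-before-you-leap version (the isinstance test is the assumption; _url_parts mirrors the quoted code):

        self._url = url
        # A compiled regex has no .lower(); only parse plain strings.
        if isinstance(url, str):
            self._url_parts = urlparse(url.lower())
        else:
            self._url_parts = None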
        # Reuse Mock request's match function
        # pylint: disable=protected-access, attribute-defined-outside-init
        from requests_mock.adapter import _Matcher
I was considering eventually extracting this class as a standalone lib, so we can't depend on other mockers.
Ok, will revert to blatant plagiarism :-P
Makes sense if you want to contribute it back to aiohttp.
Woohoo 🐬
@kellerza, @balloob: The next_run says when new XML data are available. But each XML file can contain data for several time periods. Example:
Pretty sure it did not update more often in the past, but looking at the XML data it seems like it should update more often... I can work on a patch tomorrow evening. EDIT: Looking at the entity's update, this does seem to need some work.
@balloob: But each sensor should check if more recent data are available in the YrData class:
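Something along these lines in the sensor's update path, with every name hypothetical (the PR's real attributes may differ):

        # The fetched XML covers several periods; re-select the one
        # valid now instead of waiting for next_run to pass.
        now = dt_util.utcnow()
        if self._valid_to is None or now >= self._valid_to:
            self._select_current_period(now)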
Indeed, as noted in my edit above.
I'm sorry, you're right @Danielhiversen
I am also getting this error:
I think it comes from here: https://github.com/home-assistant/home-assistant/pull/4158/files#diff-08b07de97392e5a044466a28e35157c9R147. So for the moment, the yr sensor will never update.
Description:
Async version of yr.no. Continues the work of #3597.
The logic of YrSensor / YrData changed:
Example entry for configuration.yaml (if applicable):
Checklist:
If the code communicates with devices, web services, or third-party tools:
- tox runs successfully. Your PR cannot be merged unless tests pass.