Conversation

@TheJulianJES TheJulianJES commented Feb 2, 2026

Proposed change

This changes the OnOffClientClusterHandler to first process the event (updating the attribute cache and emitting attribute_updated ZHA events) and only then forward the cluster command to emit its ZHA event.
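A minimal, self-contained sketch of that ordering (the handler and method names below are stand-ins for illustration, not the actual zha implementation):

```python
class ClientClusterHandler:
    """Stand-in for zha's client cluster handler base class."""

    def cluster_command(self, tsn: int, command_id: int, args: tuple) -> None:
        # In zha, forwarding the command is what emits the "command" ZHA event.
        print(f"zha_event: command (tsn={tsn}, command_id={command_id}, args={args})")


class OnOffClientClusterHandler(ClientClusterHandler):
    """Illustration of the new ordering: process first, forward second."""

    def cluster_command(self, tsn: int, command_id: int, args: tuple) -> None:
        # 1. Process the command first: update the attribute cache, which is
        #    what emits the attribute_updated ZHA event.
        self._update_attribute_cache(on=command_id == 1)
        # 2. Only then forward the cluster command, so its ZHA event is emitted
        #    after the attribute_updated event instead of before it.
        super().cluster_command(tsn, command_id, args)

    def _update_attribute_cache(self, on: bool) -> None:
        # Hypothetical helper standing in for the real cache update.
        print(f"zha_event: attribute_updated (on_off={on})")


OnOffClientClusterHandler().cluster_command(tsn=42, command_id=1, args=())
```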

Additional information

Issue

There are a lot of blueprints available that use the restart automation mode and unfortunately trigger on all ZHA events. Even if the events are later ignored in the automation's "action" section, the automation still restarts for them, aborting the existing run.

Ideally, the automation should only trigger for events that actually need to be processed (by filtering in the trigger or condition section). But since this is not the case, we need to change our behavior so as not to break all of these blueprints.

Previous behavior & alternative solutions

The previous behavior did not emit attribute_updated events at all for remotes, as the normal ClusterHandler was used, not the ClientClusterHandler. Only the latter class includes behavior to emit ZHA events for attribute updates:

```python
def _handle_attribute_updated_event(
    self,
    event: AttributeReadEvent
    | AttributeReportedEvent
    | AttributeUpdatedEvent
    | AttributeWrittenEvent,
) -> None:
    """Handle an attribute updated on this cluster."""
    super()._handle_attribute_updated_event(event)
    self.emit_zha_event(
        SIGNAL_ATTR_UPDATED,
        {
            ATTRIBUTE_ID: event.attribute_id,
            ATTRIBUTE_NAME: event.attribute_name or "Unknown",
            ATTRIBUTE_VALUE: event.value,
            VALUE: event.value,
        },
    )
```

Alternatively, I'm wondering whether we should just call ClusterHandler._handle_attribute_updated_event() instead, skipping the ZHA event emitted in ClientClusterHandler. I think that should restore the old behavior completely?
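A rough sketch of that alternative, assuming the class layout from the excerpt above (not tested against the actual zha code):

```python
class OnOffClientClusterHandler(ClientClusterHandler):
    def _handle_attribute_updated_event(self, event) -> None:
        """Update the attribute cache without emitting an attribute_updated ZHA event."""
        # Deliberately skip ClientClusterHandler's override (which also calls
        # emit_zha_event) and run the plain ClusterHandler behaviour instead.
        ClusterHandler._handle_attribute_updated_event(self, event)
```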

As a third solution, the only option I can think of is skipping the emitted ZHA events depending on which entity is associated with that cluster handler (where the entity already depends on the device type). This likely won't work nicely though.

Previous TODO:

Investigate not emitting attribute_updated events for entire OnOff cluster (OnOffClientClusterHandler).
See "Previous behavior & alternative solutions".

-> EDIT: See: #642 (comment)

Reported on Discord:

Related:


codecov bot commented Feb 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 97.29%. Comparing base (3adf2db) to head (b0915b0).

Additional details and impacted files
@@           Coverage Diff           @@
##              dev     #642   +/-   ##
=======================================
  Coverage   97.29%   97.29%           
=======================================
  Files          62       62           
  Lines       10712    10712           
=======================================
  Hits        10422    10422           
  Misses        290      290           

@TheJulianJES
Contributor Author

For the alternative solutions, we (unfortunately) can't just remove all attribute_updated ZHA events for the OnOffClientClusterHandler, since some quirk device triggers rely on attribute_updated events for the OnOff cluster on some remotes, e.g. zhaquirks/xiaomi/aqara/switch_aq2.py#L93
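For illustration only (this is not the actual contents of switch_aq2.py, which uses zhaquirks constants rather than string literals, and the values are made up), such a quirk trigger matches on the attribute_updated ZHA event roughly like this:

```python
device_automation_triggers = {
    ("remote_button_short_press", "button"): {
        "command": "attribute_updated",
        "cluster_id": 6,  # OnOff cluster
        "args": {"attribute_id": 0, "value": True},  # on_off attribute
    },
}
```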

So, from my understanding, the old behavior was the following:

  • emit attribute_updated ZHA event for attribute reports on the OnOff cluster
  • do NOT emit attribute_updated ZHA event for cluster commands like on/off, for which we call update_attribute (and modify our local attribute cache)

I think the solution from this PR, just switching the order, should work for all blueprints I've looked at. I don't fully like it though.

Only emitting ZHA events for attribute updates we have not caused ourselves might get a bit weird/complicated now.
I think the previous behavior was just a side effect of having both the normal OnOffClientClusterHandler (which did not process cluster commands) and the "exception" OnOffClusterHandler for remotes, which did process cluster commands and called update_attribute, but did not emit ZHA events.

@TheJulianJES TheJulianJES marked this pull request as ready for review February 2, 2026 18:34