
app/cluster: Level Control transition time may be longer than expected #20775

Open

Kxuan opened this issue Jul 15, 2022 · 7 comments
Labels
app-clusters Application cluster work stale Stale issue or PR

Comments

@Kxuan
Contributor

Kxuan commented Jul 15, 2022

Problem

Most commands in the Level Control cluster have a transition time setting.

Currently, the code in level-control.cpp implements this transition as follows.

When the DUT receives a command, the command handler creates an EmberAfLevelControlState object and then schedules a timer on the main loop to call emberAfLevelControlClusterServerTickCallback.
emberAfLevelControlClusterServerTickCallback increases or decreases the current level by 1 per tick.

For example, suppose a Move command increases the current level from 1 to 10 at a rate of 1 level per second. What the current code expects is:

Time        |  0s  |  1s  |  2s  |  3s  |  4s  |  5s  |  6s  |  7s  |  8s  |  9s  |
Timer Tick  *      *      *      *      *      *      *      *      *      *
Level       1      2      3      4      5      6      7      8      9     10

So at 9s, the current level is 10.

But in the real world, the timer tick does not fire that precisely. There may be a random delay before emberAfLevelControlClusterServerTickCallback is called.

In practice, it may look like this:
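The per-tick behavior described above can be sketched as follows (the names here are illustrative, not the actual level-control.cpp code):

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch of the current behavior: each timer tick moves the current
// level exactly one step toward the target, so the total transition time
// depends entirely on how punctually the ticks fire.
struct LevelControlState
{
    uint8_t currentLevel;
    uint8_t targetLevel;
};

// Called once per timer tick; returns true while the transition is ongoing.
bool TickOnce(LevelControlState & state)
{
    if (state.currentLevel == state.targetLevel)
        return false;
    if (state.currentLevel < state.targetLevel)
        ++state.currentLevel;
    else
        --state.currentLevel;
    return state.currentLevel != state.targetLevel;
}
```

Because each tick advances the level by exactly one, every delayed tick stretches the whole transition by the same amount.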

Time        |  0s  |  1s  |  2s  |  3s  |  4s  |  5s  |  6s  |  7s  |  8s  |  9s  |  10s  |
Timer Tick  *         *      *        *        *        *        *         *        *      *
Level       1         2      3        4        5        6        7         8        9     10

So at 9s, the current level is only 8 or 9.

If the target level is 254, it is very likely that the Move command cannot finish in the expected time.

As a result, some test cases, such as TC-LVL-5.1, may fail.

Proposed Solution

emberAfLevelControlClusterServerTickCallback should calculate the new current level from the real elapsed time.
For example:

Time        |  0s  |  1s  |  2s  |  3s  |  4s  |  5s  |  6s  |  7s  |  8s  |  9s  |
Timer Tick  *         *      *        *        *        *        *         * 
Level       1         2      3        4        6        7        8        10
                                               ^                           ^

At the start of the 5s tick, instead of just increasing the current level by 1, emberAfLevelControlClusterServerTickCallback calculates:

current level = start level + (current time - start time) * change rate

Then, if the current level has reached the target level, it sets the current level to the target level and stops.
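The proposed calculation could be sketched like this (all names and the struct layout are illustrative assumptions, not the real EmberAfLevelControlState):

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of the proposed fix: the tick callback derives the current level
// from the real elapsed time rather than from the number of ticks, so late
// ticks no longer stretch the overall transition.
struct Transition
{
    uint8_t  startLevel;
    uint8_t  targetLevel;
    uint64_t startTimeMs;    // when the command was received
    double   ratePerSecond;  // signed change rate, in levels per second
};

// Level that should be reported at time `nowMs`, clamped at the target.
uint8_t LevelAt(const Transition & t, uint64_t nowMs)
{
    double elapsedSeconds = (nowMs - t.startTimeMs) / 1000.0;
    double level          = t.startLevel + elapsedSeconds * t.ratePerSecond;
    if (t.targetLevel >= t.startLevel)
        return static_cast<uint8_t>(std::min<double>(level, t.targetLevel));
    return static_cast<uint8_t>(std::max<double>(level, t.targetLevel));
}
```

With this approach a tick that arrives late simply computes a larger elapsed time and jumps the level accordingly, so the transition still completes on schedule.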

@bzbarsky-apple bzbarsky-apple added the app-clusters Application cluster work label Jul 15, 2022
@Kxuan
Contributor Author

Kxuan commented Aug 29, 2022

The same issue applies to OffWaitTime in the On/Off cluster.

@Kxuan
Contributor Author

Kxuan commented Aug 30, 2022

Will this issue be closed during SVE2?
It affects time-related test cases, such as TC-OO-2.3 and TC-LVL-5.1.

@raju-apple
Contributor

@Kxuan for a lot of the test scripts, we have added a 15% tolerance to help in such cases. Do you think that would suffice?
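As an illustration only (the actual test harness logic may differ), a 15% timing tolerance check amounts to:

```cpp
// Hypothetical illustration of a 15% timing tolerance: a measured transition
// time passes if it does not exceed the expected time by more than 15%.
// The function name and signature are assumptions, not real test script code.
bool WithinTolerance(double expectedSeconds, double measuredSeconds, double tolerance = 0.15)
{
    return measuredSeconds <= expectedSeconds * (1.0 + tolerance);
}
```

For the 9 s transition in the example above, this would accept any measured duration up to 10.35 s.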

@innovation-matters-iot

@raju-apple I am currently running into this issue during testing in SVE2. There was a test script issue that was closed because the 15% tolerance is already added.

@AlvinHsiao
Contributor

@raju-apple Should the tolerance be defined by us via PIXIT.OO.MaxCommunicationTurnaround instead of a hard-coded 15%?

@stale

stale bot commented Mar 7, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

1 similar comment
@stale

stale bot commented Oct 15, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale Stale issue or PR label Oct 15, 2023