Add window states API #2473

Merged: 301 commits into beeware:main on Dec 2, 2024

Conversation

@proneon267 (Contributor) commented Apr 2, 2024:

Fixes #1857

Design discussion: #1884

The following are the API changes (a brief usage sketch follows the list):

On the interface:

  • toga.constants.WindowState — new enum with the members WindowState.NORMAL, WindowState.MAXIMIZED, WindowState.MINIMIZED, WindowState.FULLSCREEN and WindowState.PRESENTATION
  • toga.Window — added the state getter and setter; deprecated the full_screen getter and setter
  • toga.App — added the is_in_presentation_mode getter, enter_presentation_mode() and exit_presentation_mode(); deprecated the is_full_screen getter, set_full_screen() and exit_full_screen()

On the backend:

  • toga.Window — added get_window_state() and set_window_state()
  • toga.App — added enter_presentation_mode() and exit_presentation_mode()
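For illustration, here is a minimal sketch of how the new interface could be used from an app's startup method. The app name, app ID, and window contents are placeholders, and the calls are illustrative rather than a tested program:

import toga
from toga.constants import WindowState


class ExampleApp(toga.App):
    def startup(self):
        self.main_window = toga.MainWindow(title=self.formal_name)
        self.main_window.show()

        # Read and change the window state through the new property.
        print(self.main_window.state)  # WindowState.NORMAL
        self.main_window.state = WindowState.MAXIMIZED

        # Presentation mode is an app-level operation; it accepts a list of
        # windows, or a dict mapping screens to windows.
        self.enter_presentation_mode([self.main_window])
        self.exit_presentation_mode()


# Placeholder formal name and app ID.
ExampleApp("Window States Demo", "org.example.windowstates").main_loop()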

However, I did encounter some issues, which I have noted as inline comments. I also have some other questions about Rubicon for implementing the new API, but I will post them later.

TODO: Write and modify documentation, fix issues with tests, implement for the other remaining platforms, etc.

PR Checklist:

  • All new features have been tested
  • All new features have been documented
  • I have read the CONTRIBUTING.md file
  • I will abide by the code of conduct

@proneon267 (Contributor Author):

@freakboy3742 Can you also review this when you are free?

@freakboy3742 (Member):

Unless there's a design or implementation issue that requires feedback, I generally don't review code until it's passing CI - this is currently failing coverage and Android tests, and I'm going to guess iOS tests as well (although #2476 is likely masking that problem).

Is there a specific design or implementation issue where you're seeking feedback prior to getting the tests to pass?

@proneon267 (Contributor Author):

I need some guidance regarding the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I have explained the issue in inline comments on test_app.py.

@freakboy3742 (Member):

I need some guidance regarding the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I have explained the issue in inline comments on test_app.py.

Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.
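As an illustration of that class of fix, here is a minimal sketch of an autouse cleanup fixture; the fixture and argument names (app, main_window) are assumptions for the sketch, not necessarily the testbed's actual fixtures:

import pytest

from toga.constants import WindowState


@pytest.fixture(autouse=True)
def restore_window_state(app, main_window):
    # Run the test, then undo anything it left behind so that later tests see
    # a consistent starting point: close stray windows, reset the window state.
    yield
    for window in list(app.windows):
        if window is not main_window:
            window.close()
    if main_window.state != WindowState.NORMAL:
        main_window.state = WindowState.NORMAL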

@proneon267 (Contributor Author):

Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Mostly fullscreen and presentation mode, where the navigation bar, title bar and menu bars remain hidden.

Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.

Yes, I realize that. In this case, the tests seem to fail because they don't exit presentation mode after testing. I thought adding a delay to the redraw method might work, but it doesn't. The tests only pass when I set the window state directly on the window object, or when I run the test in isolation.

I have also tried to identify any problems with the app interface but didn't find any. The same test logic is run in test_window, but there it works properly. The app implementation of presentation mode calls the window implementation of presentation mode. So, the behaviour should be identical.

@proneon267 (Contributor Author):

@freakboy3742 Could you also please take a quick look at the mobile platform testbed tests in test_app.py::test_full_screen and test_presentation_mode? Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.

@freakboy3742 (Member):

@freakboy3742 Could you also please take a quick look at the mobile platform testbed tests in test_app.py::test_full_screen and test_presentation_mode? Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.

I have, and I've already told you the general class of problem you're hitting. You've even hinted at the problem yourself:

Yes, I realize that. In this case, the tests seem to fail because they don't exit presentation mode after testing.

If a single test isn't leaving the app in the same state that it found it... well, that's the source of your problem. That's what you need to fix.

As for "the implementation is identical"... well, the golden rule of computers applies: if in doubt, the computer is right. The computer doesn't have opinions. It runs the code it has. If you think something is identical, but testing is producing inconsistent results... something about your implementation isn't identical.

You're the one proposing this PR; the onus is on you to solve problems with the implementation. If you've got a specific question about the way Toga or the testbed operates, then I can do what I can to provide an explanation, but I can only dedicate time to resolving open-ended "why doesn't it work?" questions when the problem is on my personal todo list - which usually means it's something on Toga's roadmap, or it's the source of a bug that is preventing people from using the published version of Toga.

@proneon267 (Contributor Author) commented May 23, 2024:

The coverage report complains about:

Name              Stmts   Miss Branch BrPart  Cover   Missing
-------------------------------------------------------------
src/toga/app.py     357      2     82      2  99.1%   846, 852
-------------------------------------------------------------
TOTAL              5067      2   1238      2  99.9%

toga/core/src/toga/app.py, lines 846 to 849 in 06b7f2a:

@property
def is_in_presentation_mode(self) -> bool:
    """Is the app currently in presentation mode?"""
    return any(window.state == WindowState.PRESENTATION for window in self.windows)

So, Line 846 is just @property. I first thought maybe the is_in_presentation_mode property is not being invoked in the test, but it is invoked several times in the tests:
assert not app.is_in_presentation_mode

assert app.is_in_presentation_mode

Next, it reports missing coverage for Line 852:

toga/core/src/toga/app.py, lines 851 to 854 in 06b7f2a:

def enter_presentation_mode(
    self,
    window_list_or_screen_window_dict: list[Window] | dict[Screen, Window],
) -> None:

So, line 852 is just self,. Here also, enter_presentation_mode() is invoked several times during the tests:

app.enter_presentation_mode({app.screens[0]: window1})

app.enter_presentation_mode([window1, window2])

So, neither of these reported coverage misses makes much sense to me.

Also, the tests fail on Python 3.13 as expected, due to Pillow.

@freakboy3742 (Member):

I can't work out what's going on with the coverage report in CI (perhaps a state cache somewhere?); but if I run the tests locally (tox -m test310), the lines in app.py that report as uncovered are 836 and 842, which correspond nicely to the return conditions in set_full_screen():

toga/core/src/toga/app.py, lines 830 to 843 in 06b7f2a:

# DeprecationWarning,
# stacklevel=2,
# )
if self.windows is not None:
    self.exit_full_screen()
    if windows is None:
        return
    screen_window_dict = dict()
    for window, screen in zip(windows, self.screens):
        screen_window_dict[screen] = window
    self.enter_presentation_mode(screen_window_dict)
else:
    warn("App doesn't have any windows")

@proneon267 (Contributor Author) commented Jun 1, 2024:

I can't work out what's going on with the coverage report in CI (perhaps a state cache somewhere?); but if I run the tests locally (tox -m test310), the lines in app.py that report as uncovered are 836 and 842, which correspond nicely to the return conditions in set_full_screen():

Thanks! I was running the test locally with tox -e py; maybe that's why it wasn't showing the missing coverage. I have added the missing coverage and this PR is ready for a review.

@freakboy3742 (Member) left a review:

I've done an initial pass review of the core API; there's still a bunch of little details to sort out here. I've flagged some areas of interest; in general, if I've flagged a topic once, you can take it as written that the same comment applies to all other analogous issues.

In particular, the biggest areas needing attention are:

  • Handling of deprecated APIs: they need to raise warnings when used, tests need to check that they raise warnings when used, and their implementations should not call other deprecated APIs.
  • Clarification of the state of full-screen and presentation mode.
  • Test parameterization. Rather than writing a single test that does a lot of things in sequence, it's preferable to have a parameterised test that does lots of individual changes. That way you get multiple failures when something unexpected changes, rather than a single test failure.

# ),
# DeprecationWarning,
# stacklevel=2,
# )
self.exit_full_screen()
Member:

Presumably this should be exit_presentation_mode()

Contributor Author:

Modified it.

Comment on lines 849 to 851
screen_window_dict = dict()
for window, screen in zip(windows, self.screens):
    screen_window_dict[screen] = window
Member:

Isn't this a duplicate of the logic in enter_presentation_mode?

Contributor Author:

Removed it.

# ---------------------------------------------------------------------

@property
def is_in_presentation_mode(self) -> bool:
Member:

is_in_ is two prepositions. in_presentation_mode() is sufficient.

Contributor Author:

Thanks, I've modified it.


def enter_presentation_mode(
    self,
    window_list_or_screen_window_dict: list[Window] | dict[Screen, Window],
Member:

windows would be a perfectly adequate argument name here. We don't need to do explicit typing in the variable name; _or_ in variable names is used in other APIs when there's an ambiguity between passing in other values.

Contributor Author:

Thanks, I've modified it.

##########################################################################


@unique
Member:

There's no need to use both unique and auto. The values are guaranteed to be unique.

Contributor Author:

Thanks, I've modified it.
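A sketch of the resulting enum, using the members from the PR description; with auto() the values are distinct by construction, so the unique decorator adds nothing:

from enum import Enum, auto


class WindowState(Enum):
    """The possible states a window can be in."""

    NORMAL = auto()
    MINIMIZED = auto()
    MAXIMIZED = auto()
    FULLSCREEN = auto()
    PRESENTATION = auto()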

assert_action_not_performed(app, "exit presentation mode")

# Trying to enter full screen with no windows is a no-op
app.set_full_screen(None)
Member:

There's a difference between set_full_screen(), set_full_screen([]), and set_full_screen(None). I can't see any reason why the latter of these should be legal at all.

window1 = toga.Window()
window2 = toga.Window()
app = toga.App(formal_name="Test App", app_id="org.example.test")

assert not app.is_full_screen

# If we're not full screen, exiting full screen is a no-op
app.exit_full_screen()
Member:

If this is a deprecated API, then it should be in a pytest.warns.

Contributor Author:

Thanks, I've modified it.
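A sketch of what that looks like with pytest.warns, continuing the quoted test above; the match strings are assumptions, not the exact deprecation messages:

import pytest

# Each use of the deprecated API is wrapped, so the warning is both silenced
# and verified.
with pytest.warns(DeprecationWarning, match="is_full_screen"):
    assert not app.is_full_screen

# If we're not full screen, exiting full screen is still a no-op, but the
# deprecated method should warn while doing so.
with pytest.warns(DeprecationWarning, match="exit_full_screen"):
    app.exit_full_screen()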

"""A window can have different states."""
assert window.state == WindowState.NORMAL

for state in WindowState:
Member:

For loops in tests are generally an anti-pattern - you should be using a parameterised test so that specific cases in the for loop can be identified as a problem. In this case, we're interested in all possible "initial state" to "final state" transitions.

Contributor Author:

I have parameterized the test.
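A sketch of that parameterization for the core test, where each initial-to-final transition becomes its own test case; the window fixture name is an assumption:

import pytest

from toga.constants import WindowState


@pytest.mark.parametrize("initial_state", list(WindowState))
@pytest.mark.parametrize("final_state", list(WindowState))
def test_window_state_transitions(window, initial_state, final_state):
    """A window can be moved between any pair of states."""
    # Make sure the window is visible before changing its state.
    window.show()

    window.state = initial_state
    assert window.state == initial_state

    window.state = final_state
    assert window.state == final_state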

######################################################################

# ------------------------Deprecated methods--------------------------
# Warnings are disabled as old API tests are still in testbed and warnings will cause error.
Member:

This is what pytest.warns is for. It both silences the warning, and validates that a warning has been raised.

Contributor Author:

Thanks, I've modified it.


assert not app.is_in_presentation_mode

# If we're not in presentation mode, exiting presentation mode is a no-op
Member:

Rather than one long test, it's better to have a large number of parameterised short tests - that way you can validate which specific pieces of behavior are failing, rather than getting a single "it's broken" test failure.

Contributor Author:

I have broken the presentation mode tests into smaller tests, but they are not parameterized. This is because doing:

@pytest.mark.parametrize(
    "windows",
    [
        [],
        [toga.Window()],
        [toga.Window(), toga.Window()],
        [toga.Window(), toga.Window(), toga.Window()],
        {},
        {toga.App.app.screens[0]: toga.Window()},
        {
            toga.App.app.screens[0]: toga.Window(),
            toga.App.app.screens[1]: toga.Window(),
        },
    ],
)
def test_presentation_mode_with_windows_list(event_loop, app, windows):

gives an error, since the app is not instantiated. I have tried several variations of the above, but all of them gave errors because the app is not instantiated. I also found out that I cannot use pytest fixtures like the app fixture inside the @pytest.mark.parametrize decorator.

Hence, I have instead broken down the presentation mode test into smaller tests.

@proneon267 (Contributor Author) commented Jun 17, 2024:

I have thought of another way to achieve parameterization of the test without creating additional tests, by:

@pytest.mark.parametrize(
    "num_of_windows, data_type",
    [
        (0, list),
        (1, list),
        (2, list),
        (3, list),
        (0, dict),
        (1, dict),
        (2, dict),
    ],
)
def test_presentation_mode_with_windows_list(event_loop, app, num_of_windows, data_type):

Then in the test body, I can dynamically create the number of required windows and the required data type of the windows parameter for the test.

Let me know if I should replace the current core tests with the above parametrized test. But I don't think the above parameterized test is any clearer; in fact, it would be more unclear and harder to debug on test failures.

Member:

In general, I'd advise away from this as an approach. Tests should have as little logic in them as possible - because otherwise, you've got additional logic that needs to be tested. Sometimes logic is unavoidable; but if the option is slightly more verbose code or logic, go with verbose code.

Also - when picking test cases, be wary of what it is you're actually testing. What does validating the 2 window case tell you that 3 windows doesn't? Unless there's a good reason why the 2 and 3 case is different, there's no need to include both.

)
],
)
async def test_window_state_rapid_assignment(
Contributor Author:

This new test ensures complete coverage of this PR on macOS and gtk. So, I have removed all #pragma: no cover from this PR.

Member:

ok... but it also seems like a direct reproduction of the previous test, but without the (implied) delay between state changes.

Prior to this PR, the testbed takes ~3:20 to complete on macOS ARM64. As of the most recent CI run on this PR, it took almost 7 minutes to run. Window state changes aren't fast - and there's a lot of tests needed here.

For a backend that doesn't have any delay issues (like Windows), this test is identical to the window state change test. And for some macOS/GTK states, it will also be identical.

In an ideal world - yes, we'd run all these tests, reflecting every possible permutation "done fast". However, these tests are slow, so we need to ensure we're only testing what we need to test.

There's even an argument to be made that the two tests are the same test, but with some transitions having a boolean "...and wait" condition between the two state changes.
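A rough sketch of that idea as a single parameterised testbed test; the wait mechanism and fixture names here are simplified placeholders rather than the testbed's actual helpers:

import asyncio

import pytest

from toga.constants import WindowState


@pytest.mark.parametrize(
    "initial_state, final_state, intermediate_wait",
    [
        # The same transition, exercised both "settled" and "rapid".
        (WindowState.NORMAL, WindowState.FULLSCREEN, True),
        (WindowState.NORMAL, WindowState.FULLSCREEN, False),
        (WindowState.FULLSCREEN, WindowState.PRESENTATION, False),
        # ... only the transitions that are actually interesting per backend.
    ],
)
async def test_window_state_change(main_window, initial_state, final_state, intermediate_wait):
    main_window.state = initial_state
    if intermediate_wait:
        # "...and wait": let the native window finish the first transition
        # before the next state is requested.
        await asyncio.sleep(0.5)

    main_window.state = final_state
    await asyncio.sleep(0.5)
    assert main_window.state == final_state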

@proneon267 (Contributor Author):

I am thinking of ignoring window state requests when the window is hidden, i.e., when Window.visible is False. This should prevent the window from unexpectedly showing up when the user has explicitly hidden it.

@proneon267 (Contributor Author):

I did a few more tests and found that resizing the window while in FULLSCREEN or PRESENTATION can cause glitches. So, I am also ignoring window resize requests while in the FULLSCREEN or PRESENTATION state.

@freakboy3742 (Member):

There's currently a merge conflict - I'm guessing this has been introduced by the landing of #2993.

@proneon267 (Contributor Author):

I have also found that changing the window position while in FULLSCREEN or PRESENTATION can cause glitches. So, I am also going to ignore window position requests while in the FULLSCREEN or PRESENTATION state.

@freakboy3742 (Member):

I did a few more tests and found that resizing the window while in FULLSCREEN or PRESENTATION can cause glitches. So, I am also ignoring window resize requests while in the FULLSCREEN or PRESENTATION state.

I don't deny this is a problem - but the last time I looked at this PR, it was really close, except for some clarification around some test cases. You've now added a new functionality, which requires new tests, and all of that code will need to go through a review process, and likely multiple revisions - so the date where this PR gets merged is pushed off a little more.

It's perfectly acceptable to land a PR that has known issues, as long as those issues are documented. In this case, I'm guessing the issue is pre-existing; so it's not even an issue with this PR - you've just discovered it because it's adjacent to window state issues.

The code is here now, so it's not worth pulling it out - but if you want this PR to eventually land, you have to draw a line somewhere. :-)

@proneon267 (Contributor Author):

I agree with you; I should have fixed them in separate PRs. From now on, if I find any more issues, I'm going to fix them in separate PRs. If it helps, the three "ignore window state requests" fixes that I have added are in the core. I will not make any more changes to this PR and will document new issues separately.

@freakboy3742 (Member) left a review:

As expected, the new additions need some modifications; and we're still not quite there with the "test that used to be the intermediate tests". As noted inline, it seems to me that the two tests are actually the same - just with a "do you want me to wait for completion before moving to the next state?" qualifier.

cocoa/src/toga_cocoa/window.py (resolved review thread)
@@ -390,6 +390,8 @@ def size(self) -> Size:

@size.setter
def size(self, size: SizeT) -> None:
    if self.state in {WindowState.FULLSCREEN, WindowState.PRESENTATION}:
        raise ValueError(f"Cannot resize window while in {self.state}")
Member:

This isn't really a ValueError - there's nothing wrong with the value. I'd suggest a RuntimeError, as the error is caused by the request being invalid given the window's state at runtime.

Member:

The docstring for the property should also describe that size can't be changed when in fullscreen/presentation mode (using a :raises: annotation).

Member:

These two comments also apply to the position and screen_position properties.

Contributor Author:

I have added docstrings to size, position, screen_position.
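A simplified sketch of the revised property along those lines (methods inside toga.Window; the docstring wording and getter body are illustrative, not the exact code in the PR):

@property
def size(self) -> Size:
    """Size of the window, in CSS pixels.

    :raises RuntimeError: if the size is set while the window is in a
        fullscreen or presentation state.
    """
    return Size(*self._impl.get_size())

@size.setter
def size(self, size: SizeT) -> None:
    if self.state in {WindowState.FULLSCREEN, WindowState.PRESENTATION}:
        raise RuntimeError(f"Cannot resize window while in {self.state} state.")
    self._impl.set_size(size)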

# Add slight delay to prevent glitching on wayland during rapid
# state switching.
# Add a 10ms delay to wait for the native window state
# operation to complete to prevent glitching on wayland
Member:

Minor typo here

Suggested change
# operation to complete to prevent glitching on wayland
# operation to complete to prevent glitching on wayland

Contributor Author:

Thanks.

@state.setter
def state(self, state: WindowState) -> None:
    if not self.visible:
        raise ValueError("Window state of a hidden window cannot be changed.")
Member:

As above - this should be a documented RuntimeError.

Contributor Author:

I have added docstrings to state.
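A sketch of a core test for the behaviour the review asks for (RuntimeError on a hidden window); the window fixture name and the match string are assumptions:

import pytest

from toga.constants import WindowState


def test_state_change_on_hidden_window(window):
    """Changing the state of a hidden window raises rather than un-hiding it."""
    window.hide()

    with pytest.raises(RuntimeError, match="hidden window"):
        window.state = WindowState.FULLSCREEN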


@proneon267 (Contributor Author):

Adding a boolean to indicate waiting between states can work, but I think it would introduce additional logic to the test and would create more confusion in case of a failure, as was happening when I had the tests combined. I agree that testing all rapid state changes would be impractical. Hence, I have identified and kept only the most complex states.

@freakboy3742 (Member):

Adding a boolean to indicate waiting between states can work, but I think it would introduce additional logic to the test and would create more confusion in case of a failure, as was happening when I had the tests combined. I agree that testing all rapid state changes would be impractical. Hence, I have identified and kept only the most complex states.

I'm uncertain what you're proposing here. Are you suggesting there are more changes required to this PR? I'm unclear what an additional boolean would give us that the pending handling doesn't already give us.

@proneon267 (Contributor Author):

No, I am not proposing any changes to the code. I was referring to this boolean:

There's even an argument to be made that the two tests are the same test, but with some transitions having a boolean "...and wait" condition between the two state changes

@freakboy3742 (Member) left a review:

Ok - I think we're finally there. Thanks for the persistence with this PR; it's taken a while, but I feel like the final product has ended up in a good place.

@freakboy3742 merged commit 1c8b7d4 into beeware:main on Dec 2, 2024
40 checks passed
@proneon267 (Contributor Author):

Thank you for your patience with me. Next, I am planning to break up #2484 into three PRs: one for Android, one for iOS, and one for Winforms, for easier reviewing. With any luck, I hope to complete them, or at least some of them, before the end of the year.

Development

Successfully merging this pull request may close these issues.

Window maximize & minimize API functionality
3 participants