
Agenda for Jan 27 meeting #50

Closed
foolip opened this issue Jan 26, 2022 · 4 comments
Labels
agenda Agenda item for the next meeting

Comments

@foolip
Member

foolip commented Jan 26, 2022

Here's the agenda for our meeting tomorrow:

Previous meeting: #47

foolip added the agenda label on Jan 26, 2022
@foolip
Member Author

foolip commented Jan 26, 2022

Since I'm not sure if I will be able to attend the meeting, here's an "update" on metrics. Finishing the metrics computation is on me, and in particular I've promised to look into #46 and to make sure we account for all non-OK harness statuses and subtest mismatches in scoring, making a deliberate decision about what to do with each. I've been sick so far this week and have no progress to show, but will work out a plan for completing it if I'm not back at work by Monday.
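
To make "account for all non-OK harness statuses and subtest mismatches" concrete, here is a minimal sketch that tallies both from a wptreport.json file (the format written by `wpt run --log-wptreport`); the file name and tallying logic are illustrative assumptions, not the project's actual scoring code:

```python
# Hedged sketch: tally harness statuses and subtest statuses from a wptreport.json
# file so every non-OK harness status and subtest mismatch gets an explicit decision.
import json
from collections import Counter

def tally_statuses(report_path):
    with open(report_path) as f:
        report = json.load(f)

    harness = Counter()   # e.g. OK, ERROR, TIMEOUT, CRASH, PASS, FAIL
    subtests = Counter()  # e.g. PASS, FAIL, TIMEOUT, NOTRUN

    for result in report.get("results", []):
        harness[result.get("status")] += 1
        for sub in result.get("subtests", []):
            subtests[sub.get("status")] += 1

    return harness, subtests

if __name__ == "__main__":
    harness, subtests = tally_statuses("wptreport.json")
    print("harness statuses:", dict(harness))
    print("subtest statuses:", dict(subtests))
```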

@chrishtr
Contributor

Additional agenda item:

  • List of organizations/people who sign on to supporting this effort?

@una

una commented Jan 27, 2022

Interop 2022 Browser Vendor Sync Notes

Jan 27, 2022
Attendees: Jen Simmons, James Graham, Anne van Kesteren, Eric Meyer, Chris Harrelson, Brian Kardell, Tantek Celik, Una Kravets

Topic 1: Score "Investigate" progress as part of the overall metric

  • James: a portion of the overall score should represent how we’re progressing on the “investigate” activities (4 items marked)
    Where we don’t know exactly what it’ll turn out to be, but we need to investigate more
    Should we pick some percentage of the overall score (e.g. 15%?) to dedicate to “investigate” issues, and score it some way, maybe manually, against some predetermined goals
  • Brian: Maybe a different way to solve this would be to have a separate score for this
  • Jen: Initial thinking (RE viewport unit issue) – would be good for manual testing to be a part of the score in addition to automated.
    What marking a position as “investigate” means changed over the course of the planning process
    In some parts of the process, investigate meant “not this year, but the future”
    Now if we decide that “investigate” means it does count towards the score, that changes the initial discussion and we might need to change some of our positions to “oppose” – we need time to discuss
  • Chris: Sounds like you’re saying we need to be precise about which things we need to investigate vs. leave out and investigate?
  • Jen: No, there are several different kinds of investigate (e.g. finishing standards, or figuring out how to improve automated testing in this particular area, or manual testing being the only way to do this) – shouldn’t look like browsers are failing at a particular technology just because the project isn’t finished
  • James: editing and mouse event stuff is pretty clear – needs a real effort from browser vendors to figure out what’s going on there. Punting it off to another score would make it easier to ignore those issues
    Spec work and things we see causing issues are just as important as launching new features like subgrid – that’s why we’re keen to have it be a part of the main score
    Whenever we try to convert anything to test pass rate it’s difficult
    What we want to see is progress and have a way of measuring progress for these areas
  • Una: we’ve had a lot of different interpretations of what “investigate” means – e.g. viewport test measurement, future-facing technologies to launch together
    Need to really nail down what “investigate” means and a plan to migrate those investigations back to the main score (e.g. research on viewport measurements is an investigation into how to measure – once that’s sorted out, it should be added to the main score)
  • Brian: The reason some of these aren’t in the main score is because we can’t measure and score them in the same way – maybe with some of the investigate things we shouldn’t aim to put them in a score but aim for a collective progress report – could we give ourselves a self-grade
  • Tantek: +1 to a lot of things Una said. What does it mean to investigate something? Key phrase in James’s proposal here: key areas where we see a lot of things in practice – consider these areas of investigation for the overall metric – can be a good filter for figuring out which areas meet the criteria
    RE: separate score vs. individual score – as soon as you separate the score, it’ll get lost – the single overall score is what is primary RE: sharing, I’d not want to separate it
    Maybe set aside a percent until we figure out how to evaluate it
  • Chris: Agree we need to do a lot of investigating in some topic areas, and those things are necessary to solve a problem for developers. Each group can grade themselves on how it’s going - encourage each other to keep going/motivate each other
  • Jen: Feelings changing as I’m looking at this more closely – e.g. viewport measurements, figuring out manual testing could easily be a part of that score, but the other 3 items Apple either opposed or said “investigate”
    Seems like Mozilla is re-litigating things we decided as a group wouldn’t be in the project but they want in the project. If we’re going to relitigate, what about container queries, which is a top developer request? The idea that we’ll relitigate those decisions isn’t a good idea because we made the decisions already
  • James: Oppose meant oppose and investigate meant neutral or commented on it. It shouldn’t be seen as trying to do something people have formerly objected to, but understand that this could be seen as outside the scope of the original conversation. We want to do things that make the most compat impact in the long term, and making progress on these issues is just as important as making progress on the things we’re including
    In terms of improving the web for developers, it’s really worthwhile – can re-align internally on positions, but shouldn’t reject the idea on process grounds (misrepresentation)
  • Una: Should “investigate” be a means to figure out a migration path to testing for the main score? Everything in “investigate” should have a goal to move to the main score with a clear path of how (tests, etc.). Key goal of investigate is to move to the main score
  • Jen: Investigate meant “investigate but not into 2022 score”
  • Una: Investigate needs a path/process from “investigate” to inclusion
  • Jen: Would like to not include investigate into total score
  • Brian: If we say 15% of the total score would be made up of some of these issues – if we all had veto power, would that be something you could take back and see how people feel about it?
  • Chris: to clarify: consider again the 15% score and take back for next week?
  • Brian: We have this list and won’t include things not on this list, but take back the idea that we will include some of these things, take out what we explicitly don’t want, and add some of those tests to the 2022 score after we have clarity on “investigate” definition
    I’d also like to take this back and discuss it some more
  • Jen: Right now we have 10 items we added to the group and 5 from the previous year, so we’ve already agreed that each area gets 1/15th of the total score – you’re proposing that this category gets more than twice that (see the sketch below). Apple marked “investigate” with the understanding that it didn’t get added to the score – might want to change position
  • Anne: thought we only included things that people agreed were investigate
  • Jen: contenteditable, and other items we wouldn’t want to add to the score
    Wanted to add a few tests for subgrid, and subgrid would be one of the 15
    Definite objection to these items being 15% of the total score
  • Chris: Need next steps – Tantek and James, how do you feel about this? Strongly?
  • Tantek: Do feel reasonably strongly about it – don’t want to ignore some of the less glamorous work for interop
  • James: Agree with that: it’s pretty important to us that we don’t neglect the areas that we don’t yet have the ability to score on tests. Completely reasonable for Apple to go back and discuss internally, and possibly change everything to oppose. The 15% proposal was a proposal, open to discussion. Can discuss the weighting of the score & various possibilities to balance the metric. But it is very important to us that we have the same kind of incentive structure for things where we already have the spec text as things we don’t
  • Chris: my proposal for the next step, if you’re okay with it Jen – Apple to review the oppose/investigate table more carefully and update to “oppose” if not okay to add to the score
  • Jen: sounds good
  • AI: each group to go back and re-vote using scoring mechanism that’s consistent with this discussion
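
To make the numbers in the discussion above concrete, here is a small sketch of the arithmetic behind James’s 15% proposal and Jen’s 1/15-per-area point; the blended formula is only illustrative, not something the group agreed on:

```python
# Illustrative arithmetic only: comparing the proposed 15% "investigate" bucket
# with the 1/15-per-area weighting mentioned above (10 new areas + 5 carried over).
focus_areas = 15
per_area_weight = 1 / focus_areas        # ~6.7% of the total score per focus area

investigate_weight = 0.15                # proposed share for "investigate" activities
test_weight = 1 - investigate_weight     # remainder driven by automated test results

def overall_score(test_pass_rate: float, investigate_progress: float) -> float:
    """One possible blended metric (both inputs in [0, 1]); not an agreed formula."""
    return test_weight * test_pass_rate + investigate_weight * investigate_progress

print(f"per focus area: {per_area_weight:.1%}")         # ~6.7%
print(f"investigate bucket: {investigate_weight:.0%}")  # 15%, i.e. more than 2x one area
```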

Topic 2: Test list review.

  • Comments on many of the issues, but not easy to see what has been addressed.
  • Proposal: assign each issue (linked in Focus area labels #42) to one person, who combs through comments and then summarizes what still needs action by the next meeting.
  • Chris: need to finish focus area labels
  • Form Controls #11
    • Brian
  • [meta] Web Compat Bucket #9
    • James
  • AI: Comment in the issue if you can take it – let Philip or Chris know
  • How to add labels to wpt.fyi tests? Need a demo
    • AI: James to document how to add a label to wpt.fyi tests

Topic 3: Update on dashboard.

Topic 4: Next week’s meeting (Feb 3)

  • Would conflict with the HTML triage meeting again. The intention was for the last conflict to be the last one, with a switch to monthly. How to resolve the conflict?
  • Chris: Need next week’s meeting to discuss item 1
  • Anne OOO next week, probably okay with folks to move it

Topic 5: List of organizations/people who sign on to supporting this effort? [Chris]

  • Chris: how do we make this better known? Should we have a markdown file?
    Would like a list of the organizations and people involved saying “yes, Interop 2022 is a go” so when we do public announcements we don’t forget to mention everyone who’s participated and is in support
  • Jen: Can we open a new issue where people can comment to add folks?
  • Una: might be easier with GitHub’s in-browser editor to edit a markdown file rather than track an issue
  • AI: Chris to create markdown file to track
  • Chris: Want to make sure everyone’s acknowledged for the work they do, please let me know if anyone’s missing or edit the file to add

Next week

  • Only pressing item is the rescoring (Topic 1) discussion for next week
  • Chris: All have to agree to put it in the score; there are definitely valuable and useful items in “investigate”
  • Jen: Also need to talk about final point ranking and how test percent is included in the total score
  • AI: Jen to summarize scoring issues in an issue to discuss next week
