Planning of next releases #132

Closed · 11 tasks done
JKRhb opened this issue Sep 3, 2022 · 24 comments

Comments

@JKRhb
Contributor

JKRhb commented Sep 3, 2022

Considering the number of open PRs, I wanted to open this issue to discuss which PRs could go into which new release version. The open PRs in question are the following:

I think #124 and #125 are trivial enough to actually become part of a 5.0.2 release, as they do not add a new feature or a breaking change. These could also be merged right away, I think.

#123 could become part of the next minor release, as it introduces new features without breaking changes.

The other PRs could become part of the next major release 6.0.0, as they contain breaking changes and also apply quite significant changes to the project.

Do you think this roadmap makes sense, or do you think a different approach would be more reasonable? CC @JosefWN

@shamblett
Owner

Sounds like a plan to me. PRs #124 and #125 merged as suggested and the package republished at version 5.0.2.

@JosefWN
Contributor

JosefWN commented Sep 4, 2022

Hey, sorry, been really busy. Sounds like a good plan.

@JKRhb
Contributor Author

JKRhb commented Oct 1, 2022

@shamblett Thank you for merging #123! As mentioned above, I think it could go into a version 5.1.0.

Regarding the PRs that are still open, I think the easiest ones to review would be #128 and #129. Maybe you could have a look at them? As all remaining PRs contain breaking changes, I think we should consider them for a new major 6.0.0 release.

@shamblett
Owner

Package republished at version 5.1.0; I'll look at #128 and #129.

@JKRhb
Contributor Author

JKRhb commented Oct 10, 2022

Thank you for merging these two! I think when it comes to the remaining PRs, it might make sense to review them in the following order (of increasing complexity): #130, #127, and #122. I just rebased them to the latest upstream version.

@shamblett
Owner

OK, #130 merged, #127 and #122 now have merge conflicts.

@JKRhb
Contributor Author

JKRhb commented Oct 15, 2022

Thanks! I'll rebase the two against the new upstream

@JKRhb
Contributor Author

JKRhb commented Oct 15, 2022

The two PRs are rebased and updated now :) I also included #133 as a potential addition for 6.0.0 in the checklist – in contrast to the other PRs, it's rather simple, though :)

@shamblett
Owner

OK, after merging #127 I'm seeing two examples hang, get_max_retransmit and ping. Not sure about get_max_retransmit, but I'm sure ping used to work OK (I tend to run all the examples after any large merge). Not saying this is a problem yet, just an observation for now; I'll dig deeper.

@JKRhb
Contributor Author

JKRhb commented Oct 16, 2022

Thanks for uncovering these problems; I will also investigate what is causing them.

@JKRhb
Contributor Author

JKRhb commented Oct 16, 2022

@shamblett I opened #135 as a fix for the ping problem :)

@shamblett
Owner

Merged #135, this fixes ping as you say, thanks. get_max_retransmit is still hanging; I know it says it could take a while, but I've run it for 45 mins or so with no response. Wondering if there is a problem with request timeouts somewhere; I'll dig deeper this week if I get a chance.

@shamblett
Owner

OK, I think I know what's happening with the get_max_retransmit example hanging.

It revolves around the _sendWithStreamResponse method in CoapClient. This method waits on a CoapRespondEvent; when we don't get a response this event is of course never fired, so although the request is correctly cancelled, the call in the client will still hang.

A quick fix is to fire a pseudo-response once we have determined the request has timed out, i.e. add the following code underneath the call to cancel() in the _timerElapsed method of ReliabilityLayer:

    // SJH pseudo response with same request token
    final response = CoapResponse(CoapCode.notFound, _message.type)
      ..token = _message.token;
    _exchange.fireRespond(response);

and update the _sendWithStreamResponse method to also check for timeouts:

    .takeWhile((final _) => request.isActive || request.isTimedOut);

The request now correctly completes on timeout with a 'CoAP encountered an exception: CoapRequestTimeoutException: Request timed out after 2 retransmits.' message.

I'm not saying this is the fix; I'm sure there are better and more elegant solutions, it just highlights the problem.

Maybe we should have a higher-level event, say a CoapCompleted event, that indicates a request has completed, either by getting a response, timing out, or being cancelled.
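
As a rough sketch of that idea (hypothetical names, not part of the package's current API), such an event could carry a completion reason plus an optional response, so that _sendWithStreamResponse would only ever have to wait on a single event type:

```dart
// Hypothetical sketch only: names and shapes here are assumptions,
// not the package's actual API.
enum CoapCompletionReason { responded, timedOut, cancelled }

class CoapCompletedEvent {
  CoapCompletedEvent(this.reason, {this.response});

  /// Why the request finished.
  final CoapCompletionReason reason;

  /// The response, only present when [reason] is
  /// [CoapCompletionReason.responded]. Typed loosely here; a real
  /// event would carry a CoapResponse.
  final Object? response;
}

void main() {
  // e.g. a timed-out request completes without a response attached.
  final event = CoapCompletedEvent(CoapCompletionReason.timedOut);
  print('completed: ${event.reason.name}, response: ${event.response}');
}
```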

@JKRhb
Contributor Author

JKRhb commented Oct 23, 2022

Thank you for investigating this error, @shamblett! I think I found a more or less elegant solution which I will open a PR for soon :)

@shamblett
Owner

#137 works fine and is an elegant implementation of what I was thinking of, thanks.

OK, all the examples are now running, so I'm happy to proceed with merging the other PRs. This is an opportune time to assess where we are, so remind me again of the order we want to look at these in.

@JKRhb
Contributor Author

JKRhb commented Oct 24, 2022

> #137 works fine and is an elegant implementation of what I was thinking of, thanks.

That's great :)

> OK, all the examples are now running, so I'm happy to proceed with merging the other PRs. This is an opportune time to assess where we are, so remind me again of the order we want to look at these in.

I think we could proceed with #122 now if it looks okay to you. #136 might need some discussion, as I am not sure if reworking the defined addresses as an enum is that much of an improvement, but it would also be ready to merge I think.

@shamblett
Owner

#122 merged, all looking OK.

@JKRhb
Contributor Author

JKRhb commented Oct 25, 2022

Awesome, thank you! Maybe you could have a quick look at #136? As mentioned above, I am not entirely sure if it is an improvement, so we could also skip it for 6.0.0.

@shamblett
Owner

On #136: it does add the ability to get the InternetAddress and the InternetAddressType directly from the multicast address itself, whereas the original implementation didn't, and this can be expanded upon if needed. It also doesn't stop the user defining any multicast address they wish as a host name. The more I look at it the more I like it; I'm happy to merge this unless anyone can think of any drawbacks.
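
As an illustration of the approach (a minimal sketch using the well-known "All CoAP Nodes" groups from RFC 7252; the names and exact shape are assumptions, not necessarily the code merged in #136), an enhanced enum can expose the InternetAddress and InternetAddressType directly:

```dart
import 'dart:io';

// Hypothetical sketch, not necessarily the #136 implementation.
enum CoapMulticastAddress {
  allCoapNodesIPv4('224.0.1.187'),
  allCoapNodesLinkLocalIPv6('ff02::fd'),
  allCoapNodesSiteLocalIPv6('ff05::fd');

  const CoapMulticastAddress(this._literal);

  final String _literal;

  /// The parsed address, usable directly when joining the multicast group.
  InternetAddress get address => InternetAddress(_literal);

  /// IPv4 or IPv6, derived from the parsed address.
  InternetAddressType get type => address.type;
}

void main() {
  final group = CoapMulticastAddress.allCoapNodesIPv4;
  print('${group.address.address} (${group.type})');
}
```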

@JKRhb
Contributor Author

JKRhb commented Oct 26, 2022

> On #136: it does add the ability to get the InternetAddress and the InternetAddressType directly from the multicast address itself, whereas the original implementation didn't, and this can be expanded upon if needed. It also doesn't stop the user defining any multicast address they wish as a host name. The more I look at it the more I like it; I'm happy to merge this unless anyone can think of any drawbacks.

Okay, great! :) Feel free to merge :)

@shamblett
Owner

OK, #136 merged. Are we ready for a 6.0.0? There's quite a lot gone in lately, so this is probably a good time.

@JKRhb
Contributor Author

JKRhb commented Oct 26, 2022

Thanks! Yeah, I think we are ready for 6.0.0 now :) Feel free to close this issue afterwards.

@shamblett
Owner

Hi guys, #138 has just been raised, to do with request timeouts again but a slightly different use case, if you could take a look. This won't affect the 6.0.0 release; we'll fix it as a subsequent revision, unless of course a quick solution can be found.

@shamblett
Owner

Version 6.0.0 of the package is now published; a lot of work went into this, a good effort. I've raised #140 to address the list of to-dos that are building up, just as a marker; these can be addressed as we go.
