fix: client connected InvokeOnClientConnectedCallback with scene management disabled #1123
Conversation
Fix for the client connected message not being invoked on the client if scene management is disabled. This includes an integration test, MultiClientConnectionApproval.ClientConnectedCallbackTest, to verify that the client connected callback is invoked when scene management is both enabled and disabled.
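For context, a minimal sketch of the scenario this covers, assuming the `Unity.Netcode` namespace, the `NetworkManager.OnClientConnectedCallback` event, and the `NetworkConfig.EnableSceneManagement` flag (treat the exact member names as assumptions about the package at this point in time):

```csharp
using Unity.Netcode;
using UnityEngine;

public class ClientConnectedExample : MonoBehaviour
{
    private void Start()
    {
        var networkManager = NetworkManager.Singleton;

        // The configuration that previously skipped the client-side callback
        // (flag name assumed from the package's NetworkConfig).
        networkManager.NetworkConfig.EnableSceneManagement = false;

        // With this fix the callback fires on the client as well, not only on the server.
        networkManager.OnClientConnectedCallback += clientId =>
        {
            Debug.Log($"Client {clientId} connected");
        };

        networkManager.StartClient();
    }
}
```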
// Increasing this value for various reasons:
// Some tests (i.e. MessageHandlerReceivedMessageServerClient) expect certain log messages and if we happen to hit the threshold it will cause those tests to fail
// This slows down tests having to log warnings repeatedly.
private const uint k_MaxBitPoolBuffers = 2048;
This feels like it's just hiding a problem in our tests, or worse, in the framework... Why are we using more than a thousand buffers simultaneously?
+1 :(
Ok, I can put it back to 1024. Our unit tests will consume much more log file space, and MessageHandlerReceivedMessageServerClient will fail because it detects the warning that we are exceeding 1024 buffers.
It is a terrible pooling system; I was just trying to provide plenty of room. It isn't the amount we are allocating, it is the threshold at which it starts spamming a warning message.
Just reverted the changes; this PR won't be able to pass the Yamato tests.
Well that's just crazy... it passed even though it fails on my end.
Go figure.
There is an old JIRA ticket for this: MTT-656
Yeah, increasing k_MaxBitPoolBuffers only really prevents the warning message spam during unit tests, which reduces processing overhead, time, and the size of the log files. It doesn't pre-allocate buffers based on that size... it just doesn't start warning until it reaches that threshold. Keeping it at 1024 just means lots of console spam during unit and integration tests.
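To make the distinction concrete, here is a purely hypothetical sketch of a pool whose constant only gates a warning rather than pre-allocating anything; all names are invented for illustration and do not match the real NetworkBufferPool:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical pool used only to illustrate the point above: buffers are
// created on demand regardless of the constant; the constant only decides
// when the pool starts logging a warning about its size.
public class IllustrativeBufferPool
{
    private const uint k_WarnThreshold = 1024; // raising this only silences the warning

    private readonly Stack<byte[]> m_Free = new Stack<byte[]>();
    private uint m_TotalCreated;

    public byte[] Rent()
    {
        if (m_Free.Count > 0)
        {
            return m_Free.Pop();
        }

        m_TotalCreated++;
        if (m_TotalCreated > k_WarnThreshold)
        {
            // This is the spam the tests were hitting; nothing is pre-allocated.
            Debug.LogWarning($"Pool exceeded {k_WarnThreshold} buffers ({m_TotalCreated} created)");
        }
        return new byte[1024];
    }

    public void Return(byte[] buffer) => m_Free.Push(buffer);
}
```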
…nsform

* develop: (26 commits)
  * fix: client connected InvokeOnClientConnectedCallback with scene management disabled (#1123)
  * fix: removed `public` class `NetcodeObserver` (MTT-1157) (#1122)
  * feat: add NetworkMessageSent/Received metrics (#1112)
  * feat: snapshot. MTU sizing option for Snapshot. MTT-1087 (#1111)
  * Add metrics for transport bytes sent and received (#1104)
  * fix: Missing end profiling sample (#1118)
  * chore: support standalone mode for netcode runtimetests (#1115)
  * feat: Change MetricNames for a more complex value type (#1109)
  * feat: Track scene event metrics (#1089)
  * style: whitespace fixes (#1117)
  * feat: replace scene registration with scenes in build list (#1080)
  * fix: mtt-857 GitHub issue 915 (#1099)
  * fix: NetworkSceneManager exception when DontDestroyOnLoad NetworkObjects are being synchronized (#1090)
  * feat: NetworkTransform Custom Editor Inspector UI (#1101)
  * refactor: remove TempGlobalObjectIdHashOverride (#1105)
  * fix: MTT-1124 Counters are now reported in sync with other metrics (#1096)
  * refactor: convert using var statements to using var declarations (#1100)
  * chore: updated all of the namespaces to match the tools package change (#1095)
  * refactor!: remove network variable settings, network behaviour cleanup (#1097)
  * fix: mtt-1088 review. Safer handling of out-of-order or old messages (#1091)
  * ...

* Conflicts: com.unity.netcode.gameobjects/Prototyping/NetworkTransform.cs
…gement disabled (Unity-Technologies#1123)

* fix: Fix for the client connected message not being invoked on the client if scene management is disabled. This includes an integration test, MultiClientConnectionApproval.ClientConnectedCallbackTest, to verify client connected is being invoked when scene management is both enabled and disabled.
* fix: This is a fix to reduce the amount of console spew due to network buffer pools exceeding 1024. This reduces unit test processing time and console log sizes, while also providing a more realistic buffer pool size.
* test: The unit test.
* refactor: Removing this minor LF as the file has nothing to do with this PR.
* style: Removing 2 LF that were not intended.
* refactor/style: Some straggler style requests from PR-1080 that are harmless and piggy-backing onto this PR.
* refactor: Reverting the NetworkBufferPool changes. This will make MessageHandlerReceivedMessageServerClient fail because of the warning message that we have exceeded 1024 buffers.
This fixes the issue where a client would not receive the OnClientConnectedCallback event when scene management was disabled.
It also slightly refactors when the callback is invoked: on the server it is now invoked after the client has notified the server that its synchronization is complete (as opposed to immediately when the client is approved), to ensure a user can start sending RPCs from the server to a client once this callback fires.
Side note: Users can also use the C2S_SyncComplete scene event notification to determine if it is ok to send RPCs to a client.
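As a sketch of both options on the server side (the callback timing from this PR and the scene event side note), assuming later-package names for the scene event API, i.e. `NetworkManager.SceneManager.OnSceneEvent` and `SceneEventType.SynchronizeComplete` standing in for the C2S_SyncComplete notification mentioned above:

```csharp
using Unity.Netcode;
using UnityEngine;

public class ServerSyncCompleteExample : MonoBehaviour
{
    private void Start()
    {
        var networkManager = NetworkManager.Singleton;
        networkManager.StartServer();

        // Option A (this PR): OnClientConnectedCallback now fires on the server
        // only after the client reports synchronization complete, so it is safe
        // to send RPCs to that client from here.
        networkManager.OnClientConnectedCallback += clientId =>
        {
            Debug.Log($"Safe to send RPCs to client {clientId}");
        };

        // Option B (side note above): watch the scene event stream for the
        // sync-complete notification. Event and enum names are assumptions based
        // on the C2S_SyncComplete notification mentioned in the description.
        networkManager.SceneManager.OnSceneEvent += sceneEvent =>
        {
            if (sceneEvent.SceneEventType == SceneEventType.SynchronizeComplete)
            {
                Debug.Log($"Client {sceneEvent.ClientId} finished synchronizing");
            }
        };
    }
}
```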
This includes an OnClientConnectedCallback integration test to verify this is working when scene management is enabled and disabled.
This also bumps the NetworkBufferPool warning threshold to remove the log spam, which was causing performance issues and had finally reached the point where it caused MessageHandlerReceivedMessageServerClient to fail due to the warning messages.