Run tests using network-transport-inmemory #19

Conversation
Is this documented? I don't remember anything of the sort.
We don't have any documentation for n-t semantics; this may be a good time to create it.
Shouldn't we have tests built and run with both inmemory and TCP for now?
I think the idea was to make CI reliable: with the TCP implementation, one run in four fails because of internal races (or possibly a poorly written test). I think I can keep both options but disable TCP by default until it works reliably. Does that make sense?
In that case, perhaps it is better to have the tests fail until the problems are fixed.
I think we should use inmemory by default, and have CI test both inmemory and TCP.
There are no semantics to refer to in this case, so we shouldn't be citing any.
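As a rough illustration of that CI setup, here is a sketch only: withBothTransports, the host and port, and the error handling are assumptions, not code from this patch, and the three-argument TCP.createTransport matches network-transport-tcp of this era (later versions changed the signature).

    module WithBothTransports (withBothTransports) where

    import Control.Exception (throwIO)
    import Network.Transport (Transport)
    import qualified Network.Transport.InMemory as InMemory
    import qualified Network.Transport.TCP as TCP

    -- Run the same backend-agnostic suite once per transport.
    withBothTransports :: (Transport -> IO ()) -> IO ()
    withBothTransports runSuite = do
      inmem <- InMemory.createTransport
      runSuite inmem
      etcp <- TCP.createTransport "127.0.0.1" "8080" TCP.defaultTCPParameters
      either throwIO runSuite etcp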
@@ -734,7 +745,8 @@ testReconnect TestTransport{..} = do
   -- Simulate network failure
-  liftIO $ testBreakConnection (nodeAddress nid1) (nodeAddress nid2)
+  liftIO $ do testBreakConnection (nodeAddress nid1) (nodeAddress nid2)
+              threadDelay 1000000
threadDelay is so evil: it doesn't guarantee the absence of races, and it slows tests down. Can we avoid this?
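One way to sidestep the sleep, sketched under the assumption that the waiting code runs inside Process (awaitDisconnect is a hypothetical helper, not part of this patch): monitor the remote node and block until the disconnect is actually observed. As noted further down in this thread, the failure is only detected once there has been traffic between the endpoints.

    module AwaitDisconnect (awaitDisconnect) where

    import Control.Distributed.Process

    -- Block until a monitor notification for this node arrives,
    -- instead of sleeping for a fixed interval.
    awaitDisconnect :: NodeId -> Process ()
    awaitDisconnect nid = do
      ref <- monitorNode nid
      receiveWait
        [ matchIf (\(NodeMonitorNotification ref' _ _) -> ref' == ref)
                  (\_ -> return ()) ]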
This patch depends on haskell-distributed/distributed-process#246.
Only TestCHInTCP uses TCP? What about TestStats, TestMx, etc.?
Those tests do not depend on the network-transport backend implementation; I see no reason to test them on multiple backends. Correct me if I'm wrong.
Alright, I'm not familiar with those. I gave them a look, and testing with inmemory looks OK for now. This could change, say, if we ever find that the things they test don't work over TCP.
@@ -11,6 +11,10 @@ category: Control, Cloud Haskell
 build-type: Simple
 cabal-version: >=1.8

+flag tcp-test
Or simply flag tcp, since this is a test package.
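For illustration, a cabal flag like this is typically wired to a CPP define that selects the backend at build time. USE_TCP and mkTestTransport are made-up names; this sketches the common pattern, not the actual patch:

    {-# LANGUAGE CPP #-}
    module TestTransportChoice (mkTestTransport) where

    import Control.Exception (throwIO)
    import Network.Transport (Transport)
    #ifdef USE_TCP
    import qualified Network.Transport.TCP as TCP
    #else
    import qualified Network.Transport.InMemory as InMemory
    #endif

    -- Pick the transport backend according to the build-time flag.
    mkTestTransport :: IO Transport
    #ifdef USE_TCP
    mkTestTransport =
      TCP.createTransport "127.0.0.1" "8080" TCP.defaultTCPParameters
        >>= either throwIO return
    #else
    mkTestTransport = InMemory.createTransport
    #endif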
Addressed all comments. Could this be merged?
@mboes, your comment was lost during the rewrite:
Yes, we could. There are two approaches here:
Should I do that? This is needed because handling a connection failure at the d-p layer is an asynchronous operation.
@@ -302,19 +302,25 @@ testMonitorRemoteDeadProcess TestTransport{..} mOrL un = do
 testMonitorDisconnect :: TestTransport -> Bool -> Bool -> Assertion
 testMonitorDisconnect TestTransport{..} mOrL un = do
   processAddr <- newEmptyMVar
   processAddr2 <- newEmptyMVar
   monitorSetup <- newEmptyMVar
   done <- newEmptyMVar

   forkIO $ do
     localNode <- newLocalNode testTransport initRemoteTable
     addr <- forkProcess localNode . liftIO $ threadDelay 1000000
I don't understand this threadDelay. Why not just block forever with expect?
Looks reasonable (it was not my code).
network-transport implementations are not guaranteed to emit a connection-failure event unless there has been communication between the local and remote endpoints. As a result, additional communication was introduced to provoke that event.
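For reference, the suggested change would look roughly like this; forkBlocked and the MVar plumbing are stand-ins for the surrounding test code, a sketch rather than the patch itself:

    module ForkBlocked (forkBlocked) where

    import Control.Concurrent.MVar (MVar, putMVar)
    import Control.Distributed.Process (Process, ProcessId, expect)
    import Control.Distributed.Process.Node (LocalNode, forkProcess)

    -- Park the forked process on 'expect' for a () that never arrives,
    -- so it blocks indefinitely rather than for a bounded delay.
    forkBlocked :: LocalNode -> MVar ProcessId -> IO ()
    forkBlocked localNode processAddr = do
      addr <- forkProcess localNode (expect :: Process ())
      putMVar processAddr addr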
Addressed all comments.
👍
This patch provides the ability to test distributed-process using the in-memory backend, which could reduce the chance of spurious network failures caused by problems in the n-t implementation.