
Run tests using network-transport-inmemory #19


Merged: qnikst merged 2 commits into master on Jul 6, 2015

Conversation

qnikst
Contributor

@qnikst qnikst commented Jul 3, 2015

This patch adds the ability to test distributed-process using the in-memory backend, which should reduce the chance of spurious network failures caused by problems in the n-t implementation.
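For context, a minimal sketch of wiring the in-memory backend into the shared test-suite record. The TestTransport fields are the ones used in the diffs below; the named network-transport-inmemory entry points (createTransportExposeInternals, breakConnection) are assumptions about that package's API, not something confirmed in this thread:

    -- Sketch only: TestTransport and its fields come from this PR's test code;
    -- the InMemory functions named here are assumptions about the
    -- network-transport-inmemory API.
    import qualified Network.Transport.InMemory as InMemory

    mkInMemoryTestTransport :: IO TestTransport
    mkInMemoryTestTransport = do
      (transport, internals) <- InMemory.createTransportExposeInternals
      return TestTransport
        { testTransport       = transport
        , testBreakConnection = InMemory.breakConnection internals
        }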

@facundominguez
Contributor

According to the semantics, network failures are guaranteed to be
noticed by network-transport iff communication happened after the
connection failure.

Is this documented? I don't remember anything of the sort.

@qnikst
Contributor Author

qnikst commented Jul 3, 2015

We don't have any documentation for the n-t semantics; this may be a good time to create it.
But this assumption is the only sane one.

@facundominguez
Contributor

Shouldn't we have tests built and run with both inmemory and TCP for now?

@qnikst
Contributor Author

qnikst commented Jul 3, 2015

I think the idea was to make CI reliable: with the TCP implementation, 1 out of 4 runs fails because of internal races (or possibly a poorly written test). I think I can keep both options but disable TCP by default until it works reliably. Does that make sense?

@facundominguez
Contributor

In that case, perhaps it is better to have the tests fail until the problems are fixed.
Wouldn't disabling tcp tests give a false sense of stability?

@mboes
Contributor

mboes commented Jul 3, 2015

I think we should use inmemory by default, and have CI test both inmemory and tcp.

@mboes
Contributor

mboes commented Jul 3, 2015

There is no semantics to refer to in this case. So we shouldn't be citing any.

@@ -734,7 +745,8 @@ testReconnect TestTransport{..} = do


   -- Simulate network failure
-  liftIO $ testBreakConnection (nodeAddress nid1) (nodeAddress nid2)
+  liftIO $ do testBreakConnection (nodeAddress nid1) (nodeAddress nid2)
+              threadDelay 1000000
Contributor

threadDelay is so evil: it doesn't guarantee the absence of races, and it slows tests down. Can we avoid this?

@qnikst changed the title from "Inmemory tests" to "Implement tests based on n-t-inmemory backend" on Jul 3, 2015
@qnikst
Contributor Author

qnikst commented Jul 3, 2015

This patch depends on haskell-distributed/distributed-process#246.

@mboes changed the title from "Implement tests based on n-t-inmemory backend" to "Run tests using network-transport-inmemory" on Jul 3, 2015
@facundominguez
Contributor

Only TestCHInTCP uses TCP? What about TestStats, TestMx, etc?

@qnikst
Contributor Author

qnikst commented Jul 3, 2015

Those tests do not depend on the network-backend implementation; I see no reason to test them on multiple backends. Correct me if I'm wrong.

@facundominguez
Contributor

Alright. I'm not familiar with those. I gave them a look and testing with inmemory looks ok for now. This could change, say, if we ever find that the things they test don't work with tcp.

@@ -11,6 +11,10 @@ category: Control, Cloud Haskell
build-type: Simple
cabal-version: >=1.8

flag tcp-test
Contributor

Or simply flag tcp, since this is a test package.
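For reference, a sketch of how such a flag could gate the TCP suite in the .cabal file. The stanza contents are illustrative (file and directory names are placeholders); only the flag idea and the TestCHInTCP name come from this PR:

    flag tcp
      description: Also build the TCP-backed test suite (off until it runs reliably on CI)
      default:     False

    test-suite TestCHInTCP
      type:           exitcode-stdio-1.0
      main-is:        TestCHInTCP.hs
      hs-source-dirs: tests
      build-depends:  base, distributed-process, network-transport-tcp
      if !flag(tcp)
        buildable: False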

@qnikst
Contributor Author

qnikst commented Jul 6, 2015

Addressed all comments; can this be merged?

@qnikst
Contributor Author

qnikst commented Jul 6, 2015

@mboes, your comment was lost during the rewrite:

threadDelay is so evil: it doesn't guarantee the absence of races, and it slows tests down. Can we avoid this?

Yes, we could; there are two approaches here:

  1. use node monitoring, and run registerRemoteAsync only after we have seen
     that the connection failed (a sketch of this follows below);
  2. use a loop around the registerRemoteAsync reply:

     ref <- registerRemoteAsync
     let loop = do mx <- receiveTimeout n [...]
                   maybe loop return mx
     loop

Should I do that? This is needed because handling a connection failure at the d-p layer is an asynchronous operation.
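For option 1, a minimal sketch (monitorNode, NodeMonitorNotification, matchIf and receiveWait are from Control.Distributed.Process; the helper name and how the test would call it are assumptions):

    import Control.Distributed.Process

    -- Block until d-p has noticed that the connection to nid is gone;
    -- only then issue registerRemoteAsync, so no threadDelay is needed.
    waitForNodeDown :: NodeId -> Process ()
    waitForNodeDown nid = do
      ref <- monitorNode nid
      receiveWait
        [ matchIf (\(NodeMonitorNotification ref' _ _) -> ref' == ref)
                  (\_ -> return ()) ]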

@@ -302,19 +302,25 @@ testMonitorRemoteDeadProcess TestTransport{..} mOrL un = do
testMonitorDisconnect :: TestTransport -> Bool -> Bool -> Assertion
testMonitorDisconnect TestTransport{..} mOrL un = do
  processAddr <- newEmptyMVar
  processAddr2 <- newEmptyMVar
  monitorSetup <- newEmptyMVar
  done <- newEmptyMVar

  forkIO $ do
    localNode <- newLocalNode testTransport initRemoteTable
    addr <- forkProcess localNode . liftIO $ threadDelay 1000000
Contributor

I don't understand this threadDelay. Why not just block forever with expect?

Contributor Author

Looks reasonable (it was not my code).
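A sketch of that suggestion, assuming the forked process only needs to stay alive until the node goes away:

    -- Block forever on a message that never arrives, instead of sleeping.
    addr <- forkProcess localNode (expect :: Process ())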

Network-transport is not guaranteed to emit a connection
failure event unless there has been communication between
the local and remote endpoints. As a result, additional
communication was introduced so that the failure is noticed.
@qnikst
Contributor Author

qnikst commented Jul 6, 2015

Addressed all comments.

@facundominguez
Contributor

👍

qnikst added a commit that referenced this pull request Jul 6, 2015
Run tests using network-transport-inmemory
@qnikst qnikst merged commit f403439 into master Jul 6, 2015
@qnikst qnikst deleted the inmemory-tests branch July 6, 2015 17:30