fix(test): close all open connections in abstract_client test #1917

Merged: 24 commits, Aug 8, 2024

Conversation

@MaximoLiberata (Contributor) commented Aug 1, 2024

Continues the work from #1911.

@robertsLando (Member)

@MaximoLiberata Let me know when this is ready for review

@MaximoLiberata (Contributor, Author) commented Aug 2, 2024

@robertsLando How can I configure my computer to run this test: https://github.com/mqttjs/MQTT.js/actions/runs/10217524082/job/28271502928?pr=1917? It's a new error for me.

I don't understand how this change (8d154b0) directly affects the new error.

@robertsLando (Member)

I have a feeling that's happening because I removed mqtt from deps in one of the last commits I pushed to master. It's just a thought, but I'm not at my PC right now, so I cannot say much more.

In order to run it, just check how the workflow is set up; if you're on Windows, you may need to use WSL.

@MaximoLiberata (Contributor, Author)

I was able to run the test. The test can throw an error when HiveMQ rejects the connection and the client never connects; sometimes the client manages to connect even after HiveMQ has rejected the connection. So whether the test occasionally fails depends on HiveMQ.

@robertsLando (Member) commented Aug 2, 2024

Could we switch to a self-hosted broker, like I did for the browser tests with the Aedes CLI?
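
For context, a minimal self-hosted Aedes broker looks roughly like this. This is only a sketch assuming the aedes package and Node's net module, not the exact setup used for the browser tests:

// start a bare MQTT broker on TCP port 1883
const aedes = require('aedes')();
const server = require('node:net').createServer(aedes.handle);
const port = 1883;

server.listen(port, () => {
  console.log('Aedes broker listening on port', port);
});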

@MaximoLiberata (Contributor, Author)

Honestly, I don't know. I could try moving it to a self-hosted broker later.

codecov bot commented Aug 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 81.24%. Comparing base (9fc79df) to head (2dc39f2).
Report is 50 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1917      +/-   ##
==========================================
+ Coverage   80.96%   81.24%   +0.27%     
==========================================
  Files          24       24              
  Lines        1408     1466      +58     
  Branches      331      348      +17     
==========================================
+ Hits         1140     1191      +51     
- Misses        185      191       +6     
- Partials       83       84       +1     


@MaximoLiberata (Contributor, Author) commented Aug 4, 2024

I couldn't reproduce the hanging bug of the keepalive test on my computer; I could only reproduce it in GitHub Actions.

It seems the test fails because the client never publishes the packets even though it's connected. Using this commit to log the test's states, I got the two cases below (a sketch of this kind of instrumentation follows the links):

Case 1 - GitHub Action
Case 2 - GitHub Action
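
For reference, a hedged reconstruction of the kind of instrumentation that could produce the state logs below; the event names are the MQTT.js client's 'connect', 'packetsend', and 'packetreceive' events, while reschedulePings, callCount, and received are variables from the test, and the actual commit may differ:

client.on('packetreceive', (packet) => {
  console.log('PACKETRECEIVE', { reschedulePings, callCount, received, packet });
});
client.on('connect', () => {
  console.log('CONNECT', { reschedulePings, callCount, received });
});
client.on('packetsend', (packet) => {
  if (packet.cmd === 'publish') {
    console.log('PUBLISH', { reschedulePings, callCount, received, packet });
  }
});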

Case 1 - Test Pass
   ▶ keepalive
    ✔ should send ping at keepalive interval (10.629001ms)
    ✔ should not shift ping on publish (3.967672ms)
PACKETRECEIVE {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'connack',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    sessionPresent: false,
    reasonCode: 0
  }
}
CONNECT { reschedulePings: true, callCount: 0, received: 0 }
PUBLISH {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'publish',
    retain: false,
    qos: 1,
    dup: false,
    length: 11,
    topic: 'foo',
    payload: <Buffer 62 61 72>,
    messageId: 46591
  }
}
PUBLISH {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'publish',
    retain: false,
    qos: 1,
    dup: false,
    length: 11,
    topic: 'foo',
    payload: <Buffer 62 61 72>,
    messageId: 46592
  }
}
PACKETRECEIVE {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'puback',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    messageId: 46591,
    reasonCode: 0
  }
}
PACKETRECEIVE {
  reschedulePings: true,
  callCount: 1,
  received: 1,
  packet: Packet {
    cmd: 'puback',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    messageId: 46592,
    reasonCode: 0
  }
}
    ✔ should reschedule pings if publishing at a higher rate than keepalive and reschedulePings===true (8.595212ms)
PACKETRECEIVE {
  reschedulePings: false,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'connack',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    sessionPresent: false,
    reasonCode: 0
  }
}
CONNECT { reschedulePings: false, callCount: 0, received: 0 }
PUBLISH {
  reschedulePings: false,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'publish',
    retain: false,
    qos: 1,
    dup: false,
    length: 11,
    topic: 'foo',
    payload: <Buffer 62 61 72>,
    messageId: 23409
  }
}
PUBLISH {
  reschedulePings: false,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'publish',
    retain: false,
    qos: 1,
    dup: false,
    length: 11,
    topic: 'foo',
    payload: <Buffer 62 61 72>,
    messageId: 23410
  }
}
PACKETRECEIVE {
  reschedulePings: false,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'puback',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    messageId: 23409,
    reasonCode: 0
  }
}
PACKETRECEIVE {
  reschedulePings: false,
  callCount: 0,
  received: 1,
  packet: Packet {
    cmd: 'puback',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    messageId: 23410,
    reasonCode: 0
  }
}
    ✔ should not reschedule pings if publishing at a higher rate than keepalive and reschedulePings===false (4.602472ms)
    ✔ should shift ping on pingresp when reschedulePings===true (3.483994ms)
PACKETRECEIVE {
  reschedulePings: false,
  callCount: 0,
  received: 2,
  packet: Packet {
    cmd: 'connack',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    sessionPresent: false,
    reasonCode: 0
  }
}
    ✔ should shift ping on pingresp when reschedulePings===false (3.745445ms)
  ▶ keepalive (35.51644ms)
Case 2 - Test Fails
  ▶ keepalive
    ✔ should send ping at keepalive interval (10.889386ms)
    ✔ should not shift ping on publish (4.545398ms)
PACKETRECEIVE {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'connack',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    sessionPresent: false,
    reasonCode: 0
  }
}
CONNECT { reschedulePings: true, callCount: 0, received: 0 }
PUBLISH {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'publish',
    retain: false,
    qos: 1,
    dup: false,
    length: 11,
    topic: 'foo',
    payload: <Buffer 62 61 72>,
    messageId: 62056
  }
}
PUBLISH {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'publish',
    retain: false,
    qos: 1,
    dup: false,
    length: 11,
    topic: 'foo',
    payload: <Buffer 62 61 72>,
    messageId: 62057
  }
}
PACKETRECEIVE {
  reschedulePings: true,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'puback',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    messageId: 62056,
    reasonCode: 0
  }
}
PACKETRECEIVE {
  reschedulePings: true,
  callCount: 1,
  received: 1,
  packet: Packet {
    cmd: 'puback',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    messageId: 62057,
    reasonCode: 0
  }
}
    ✔ should reschedule pings if publishing at a higher rate than keepalive and reschedulePings===true (9.137764ms)
PACKETRECEIVE {
  reschedulePings: false,
  callCount: 0,
  received: 0,
  packet: Packet {
    cmd: 'connack',
    retain: false,
    qos: 0,
    dup: false,
    length: 3,
    topic: null,
    payload: null,
    sessionPresent: false,
    reasonCode: 0
  }
}
CONNECT { reschedulePings: false, callCount: 0, received: 0 }
✖ /home/runner/work/MQTT-ReactNative-Support/MQTT-ReactNative-Support/test/client_mqtt5.ts (60001.160737ms)
  'test timed out after 60000ms'

@robertsLando Do you have any idea why the client does not publish the packets?

@robertsLando (Member) commented Aug 5, 2024

@robertsLando Do you have any idea why the client does not publish the packets?

Usually in such situations the reason is a race condition. Using fake timers there has been a pain; I had to put clock.tick calls here and there to prevent this kind of issue. My feeling is that on slower machines (like GH Actions runners) the stack trace can sometimes differ a bit, and the clock ticks are not enough to trigger the next tick from the call queue, so the event is never emitted. A possible (dirty) solution could be to call clock.tick inside an interval, so the process keeps moving when this happens.

Something like:

// Use a real timer to advance the fake clock
let tickInterval = setInterval(() => {
  clock.tick(1);
}, 100);

Also, I would add a timeout to the test so that if it hangs, the other tests can still run.
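
A hedged sketch of how both suggestions might look together, assuming node:test-style tests, a sinon-style fake clock stored in clock, and that setInterval is not among the faked timers; this is not the code that was eventually merged in this PR:

it('should reschedule pings if publishing at a higher rate than keepalive', { timeout: 10_000 }, (t, done) => {
  // a real interval keeps nudging the fake clock so callbacks that are
  // still queued on slow CI runners eventually get a chance to run
  const tickInterval = setInterval(() => {
    clock.tick(1);
  }, 100);

  client.once('connect', () => {
    clearInterval(tickInterval);
    done();
  });
});

The per-test timeout option means a hung test fails on its own after 10 seconds instead of blocking the whole run at the global 60 s limit.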

@MaximoLiberata (Contributor, Author)

I don't know if this will spark an idea for resolving the bug, but these are the mqtt logs (using console.log in the log option) that GH Actions produced:

GH Action - Test Pass
GH Action - Test Fails

And here is the diff between the two mqtt logs, passing vs. failing: https://www.diffchecker.com/902zCyLn/ (the left side is the passing run and the right side is the failing run).

Also, the client throws an error when the keepalive timeout is reached, and it only happens when reschedulePings===false:

Error: Keepalive timeout
    at onKeepaliveTimeout (/home/max/Downloads/MQTT/src/lib/client.ts:2143:22)
    at <anonymous> (/home/max/Downloads/MQTT/src/lib/KeepaliveManager.ts:108:17)
    at callTimer (/home/max/Downloads/MQTT/node_modules/@sinonjs/fake-timers/src/fake-timers-src.js:783:24)
    at doTickInner (/home/max/Downloads/MQTT/node_modules/@sinonjs/fake-timers/src/fake-timers-src.js:1380:29)
    at doTick (/home/max/Downloads/MQTT/node_modules/@sinonjs/fake-timers/src/fake-timers-src.js:1461:20)
    at Object.tick (/home/max/Downloads/MQTT/node_modules/@sinonjs/fake-timers/src/fake-timers-src.js:1469:20)
    at <anonymous> (/home/max/Downloads/MQTT/test/abstract_client.ts:2150:15)
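
For what it's worth, a hedged illustration of how the fake clock alone can trigger that error (keepalive and clock are assumed from the surrounding test; values are hypothetical): with reschedulePings===false nothing resets the keepalive timer, so one large synchronous tick can fire both the PINGREQ timer and the keepalive-timeout check before the mocked broker's PINGRESP is ever processed.

// one big synchronous jump: may throw 'Error: Keepalive timeout'
clock.tick(keepalive * 1000 * 2);

// tickAsync yields to the event loop between fired timers, giving the
// PINGRESP handler a chance to run first
await clock.tickAsync(keepalive * 1000 * 2);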

@robertsLando (Member)

FYI, you can check this box when running the tests in order to see debug logs:

[screenshot: 2024-08-05_14-18]

Anyway, I'm quite sure the issue is related to what I told you here: #1917 (comment)

@MaximoLiberata (Contributor, Author)

@robertsLando I think I managed to prevent or fix the hanging bug in the reschedulePings test. I ran 80 GH Actions runs to check it, and now I only get other, new bugs. Maybe they're related to this change, maybe not, but I hope this change is a step forward.

GitHub Actions runs 481 to 560

GH Action bug 1
GH Action bug 2
GH Action bug 3

@MaximoLiberata (Contributor, Author)

@robertsLando Could you check the changes, please?

I think we got it. I ran 100 GH Actions runs and not one of them failed. I removed or changed some setTimeout calls; they were probably affected by the fake clock because of processes running in parallel.

GH Actions runs 620 to 720
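
A hedged illustration of why a real-delay setTimeout misbehaves once a fake clock is installed (hypothetical test code with an assumed done callback and client; not the actual change in this PR):

const clock = sinon.useFakeTimers();

// under fake timers this callback never fires on its own, so a test
// waiting for it hangs until the runner's 60 s timeout
setTimeout(() => done(), 100);

// it only runs once the clock is advanced explicitly
clock.tick(100);

// waiting on an event the client actually emits avoids depending on
// wall-clock delays altogether
client.once('close', () => done());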

@robertsLando (Member)

@MaximoLiberata Sorry, but I have been super busy! Seems you found it, so well done! :)

Review thread on test/abstract_client.ts (outdated, resolved)
@robertsLando (Member) left a comment:

LGTM 🚀
