
Conversation

@pengz1 (Member) commented May 11, 2017

To decouple secure erase from Megaraid operations when possible, the following changes are made:

  • Non-RAID disks won't extend drive catalogs
  • getDriveIdCatalogExt is updated with a flag indicating whether the drive catalog should be extended for JBOD disks
  • the secure erase job is updated to get the drive protocol from existing driveId catalogs (a sketch follows below)
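As a rough illustration of the last bullet: a minimal sketch, not the actual on-tasks code, with the catalog field names (devName, protocol) and the helper name assumed. The idea is that the secure erase job reads the drive protocol from the driveId catalog it already has instead of issuing another Megaraid query for non-RAID disks.

// Minimal sketch only; field and helper names are assumptions.
function getDriveProtocolFromCatalog(driveIdCatalog, targetDevName) {
    for (var i = 0; i < driveIdCatalog.length; i += 1) {
        var drive = driveIdCatalog[i];
        if (drive.devName === targetDevName) {
            return drive.protocol; // hypothetical field, e.g. "SAS" or "SATA"
        }
    }
    throw new Error('Drive ' + targetDevName + ' not found in existing driveId catalog');
}

// Example: getDriveProtocolFromCatalog(catalog, '/dev/sdb') might return 'SATA'.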

@pengz1 (Member, Author) commented May 11, 2017

 * Non-RAID disks won't extend drive catalogs by default
 * getDriveIdCatalogExt is updated with a flag indicating whether we should extend the drive
   catalog for JBOD disks
 * the secure erase job is updated to get the drive protocol from existing
   driveId catalogs
The output parameter will be:
[
{
"disks": [
Member commented:

Missing a "["?

return Promise.all([
    foundVdHasValue ? getVirtualDiskCatalog(nodeId) : Promise.resolve(),
    extendJbod ? getPhysicalDiskCatalog(nodeId) : Promise.resolve(),
    getRaidControllerVendor(nodeId)
Member commented:

You could add "_" at the beginning of the above three function names to signify that they are used internally.
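For illustration, a sketch of what that convention could look like applied to the hunk above. The wrapper name collectMegaraidCatalogs and the stub bodies are hypothetical; only the leading underscore on the module-internal helpers is the point.

// Sketch of the reviewer's naming suggestion, not the merged code.
// Stubs stand in for the real catalog helpers shown in the hunk above.
function _getVirtualDiskCatalog(nodeId) { return Promise.resolve({ node: nodeId, virtualDisks: [] }); }
function _getPhysicalDiskCatalog(nodeId) { return Promise.resolve({ node: nodeId, physicalDisks: [] }); }
function _getRaidControllerVendor(nodeId) { return Promise.resolve('lsi'); }

function collectMegaraidCatalogs(nodeId, foundVdHasValue, extendJbod) {
    return Promise.all([
        foundVdHasValue ? _getVirtualDiskCatalog(nodeId) : Promise.resolve(),
        extendJbod ? _getPhysicalDiskCatalog(nodeId) : Promise.resolve(),
        _getRaidControllerVendor(nodeId)
    ]);
}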

* @param {Boolean} extendJbod - flag indicating whether JBOD physical disk information should be extended
* @return {Promise} Drive catalogs extended with Megaraid information
*/
function getDriveIdCatalogExt(nodeId, filter, extendJbod) {
Member commented:

It seems you added a parameter "extendJbod" here, but the secure erase job calls this function without the newly added argument. Is it left for future extension?

Member Author replied:

You are right. I was debating whether we should extend disks for JBODs; for secure erase we actually don't need it, but we can leave it as it is in case we use it in the future.
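To make that concrete (the call sites below are illustrative, not the actual job code): when the third argument is omitted, extendJbod arrives as undefined, which is falsy, so the ternary extendJbod ? getPhysicalDiskCatalog(nodeId) : Promise.resolve() takes the empty branch and the JBOD catalog is not extended.

// Illustrative call sites only.
getDriveIdCatalogExt(nodeId, filter);        // secure erase today: extendJbod is undefined, JBOD disks are not extended
getDriveIdCatalogExt(nodeId, filter, true);  // possible future caller: also extend drive catalogs for JBOD disks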

@JenkinsRHD (Contributor) commented:

BUILD on-tasks #95 : FAILURE

BUILD on-tasks #95 Error Logs

Test Name: test_nodes_discovery
Error Details: timeout waiting for task discovery

-------------------- >> begin captured logging << --------------------
tests.api.v2_0.nodes_tests: INFO: Wait start time: 2017-05-15 04:51:59.118976
amqp: DEBUG: Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2013 GoPivotal, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'connection.blocked': True, u'authentication_failure_close': True, u'basic.nack': True, u'consumer_priorities': True, u'consumer_cancel_notify': True, u'publisher_confirms': True}, u'platform': u'Erlang/OTP', u'version': u'3.2.4'}, mechanisms: [u'AMQPLAIN', u'PLAIN'], locales: [u'en_US']
amqp: DEBUG: Open OK!
kombu: INFO: Starting AMQP worker -> graph.finished.*>
amqp: DEBUG: Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2013 GoPivotal, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'connection.blocked': True, u'authentication_failure_close': True, u'basic.nack': True, u'consumer_priorities': True, u'consumer_cancel_notify': True, u'publisher_confirms': True}, u'platform': u'Erlang/OTP', u'version': u'3.2.4'}, mechanisms: [u'AMQPLAIN', u'PLAIN'], locales: [u'en_US']
amqp: DEBUG: Open OK!
kombu.mixins: INFO: Connected to amqp://guest:**@127.0.0.1:9091//
amqp: DEBUG: using channel_id: 1
amqp: DEBUG: Channel open
tests.api.v2_0.nodes_tests: INFO: {
    "duration": "0:01:49.728226",
    "graph_name": "Graph.SKU.Discovery",
    "route_id": "b6a198f4-6c7b-4a70-b534-664756b2d8e3",
    "status": "succeeded"
}
tests.api.v2_0.nodes_tests: INFO: {
    "duration": "0:01:50.280265",
    "graph_name": "Graph.SKU.Discovery",
    "route_id": "0ad61f1f-9871-41d2-8ac3-b51c53a43cfa",
    "status": "succeeded"
}
modules.worker: ERROR: subtask timeout after 1200 seconds, (id=discovery), stopping..
kombu: INFO: Stopping AMQP worker -> graph.finished.*>
modules.worker: INFO: stopping subtask for discovery
amqp: DEBUG: Closed channel #1
--------------------- >> end captured logging << ---------------------

Stack Trace:
File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
File "/usr/lib/python2.7/unittest/case.py", line 1043, in runTest
    self._testFunc()
File "/home/jenkins/workspace/on-tasks/RackHD/test/.venv/on-build-config/local/lib/python2.7/site-packages/proboscis/case.py", line 296, in testng_method_mistake_capture_func
    compatability.capture_type_error(s_func)
File "/home/jenkins/workspace/on-tasks/RackHD/test/.venv/on-build-config/local/lib/python2.7/site-packages/proboscis/compatability/exceptions_2_6.py", line 27, in capture_type_error
    func()
File "/home/jenkins/workspace/on-tasks/RackHD/test/.venv/on-build-config/local/lib/python2.7/site-packages/proboscis/case.py", line 350, in func
    func(test_case.state.get_state())
File "/home/jenkins/workspace/on-tasks/RackHD/test/tests/api/v2_0/nodes_tests.py", line 126, in test_nodes_discovery
    message='timeout waiting for task {0}'.format(self.__task.id))
File "/home/jenkins/workspace/on-tasks/RackHD/test/.venv/on-build-config/local/lib/python2.7/site-packages/proboscis/asserts.py", line 67, in assert_false
    raise ASSERTION_ERROR(message)
'timeout waiting for task discovery'

@iceiilin (Member) commented:
test this please

@iceiilin merged commit 01d13cb into RackHD:master May 19, 2017
