Adding support for Service Fabric client commands #3136

Merged
3 commits merged into master from feature_servicefabric_init on May 5, 2017

Conversation

@samedder (Contributor) commented May 2, 2017

This PR introduces the following set of changes:

  • New command module designed to manage Service Fabric clusters once they have been provisioned. This includes managing the cluster entities such as nodes and applications
  • New preparer for scenario tests that require an active cluster connection
  • Small set of unit tests that cover basic commands that do not actively alter the state of the Service Fabric cluster

@msftclas commented May 2, 2017

@samedder,
Thanks for your contribution as a Microsoft full-time employee or intern. You do not need to sign a CLA.
Thanks,
Microsoft Pull Request Bot

@samedder (Contributor, Author) commented May 2, 2017

CI is expected to fail as this depends on a package that is not currently publicly available.

@derekbekoe derekbekoe modified the milestone: Build Milestone May 3, 2017
@derekbekoe derekbekoe added the P1 label May 3, 2017
@troydai (Contributor) commented May 3, 2017

CI will fail because of code style issues as well. Please fix those errors first by running the following command:

check_style --module sf --pep8 --pylint


from azure.cli.core.help_files import helps

# pylint: disable=line-too-long
Contributor: no need.

# Custom commands

# cluster
cli_command(__name__, "sf cluster select", "azure.cli.command_modules.sf.custom#sf_select")
Contributor: Since your parameters definition uses the context module, the command's definition could adopt that pattern, too. Reference: https://github.com/Azure/azure-cli/blob/master/src/command_modules/azure-cli-sql/azure/cli/command_modules/sql/commands.py
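The sql module's context helpers aren't reproduced here, but as a minimal sketch of reducing the repeated module path using only the cli_command call already present in this file (the custom_path variable is just an illustration, not the pattern the reviewer links to):

from azure.cli.core.commands import cli_command

# Build the implementation path once instead of repeating the full string per command
custom_path = 'azure.cli.command_modules.sf.custom#{}'

cli_command(__name__, 'sf cluster select', custom_path.format('sf_select'))
# further registrations would reuse custom_path.format(...) the same way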

@@ -0,0 +1,154 @@
import os

from azure.cli.testsdk import ScenarioTest, JMESPathCheck, \
Contributor:

from azure.cli.testsdk import (ScenarioTest, JMESPathCheck,
                               JMESPathCheckExists, NoneCheck)


def create_resource(self, _, **kwargs):
    # Omit name here since there is no randomization required
    endpoint = os.environ.get(self.env_variable_name, self.endpoint)
Contributor: This will make the test unrunnable in CI unless a specific environment variable is set. We want to avoid special settings for one particular group of tests; otherwise parallel runs are prohibited. Can this endpoint be retrieved by other means?

Contributor (author): Unfortunately, unlike the mgmt command modules, our tests require that whoever makes a change provisions a Service Fabric cluster and then runs this small set of tests. Hence the use of the environment variable: right now we cannot provision a cluster from the CLI for every CI run, it would take far too long.

We could make these manual tests and omit them from CI, telling users that if they want to modify the sf commands, they need to provision a cluster and then run these tests.

I'm not sure what's best here; unfortunately we don't have a good lightweight testing mechanism, since these commands end up exercising long code paths in the Service Fabric cluster.
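One possible compromise, sketched here under assumptions (the SF_TEST_ENDPOINT variable name and the test class are hypothetical, not the PR's actual preparer), is to keep the tests checked in but skip them unless a cluster endpoint is configured:

import os
import unittest

# Hypothetical environment variable; the PR's preparer reads its own env_variable_name
SF_TEST_ENDPOINT = os.environ.get('SF_TEST_ENDPOINT')


@unittest.skipUnless(SF_TEST_ENDPOINT, 'requires a provisioned Service Fabric cluster endpoint')
class ServiceFabricScenarioTest(unittest.TestCase):
    def test_cluster_commands(self):
        # Would exercise the sf commands against SF_TEST_ENDPOINT here
        pass

With that shape, CI reports the tests as skipped rather than failed, while a developer with a provisioned cluster can still run them locally.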

def create_resource(self, _, **kwargs):
    # Omit name here since there is no randomization required
    endpoint = os.environ.get(self.env_variable_name, self.endpoint)
    print("endpoint {}".format(endpoint))
Contributor: Use logger instead. Stdout is usually suppressed in automation.
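A minimal sketch of the suggested change, mirroring the preparer method shown above and using the standard logging module (the CLI's own logger helper would slot in the same way):

import logging
import os

logger = logging.getLogger(__name__)


def create_resource(self, _, **kwargs):
    # Omit name here since there is no randomization required
    endpoint = os.environ.get(self.env_variable_name, self.endpoint)
    logger.info("endpoint %s", endpoint)  # hidden unless verbosity is raised, unlike print()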

t = os.stat(os.path.join(root, file))
total_files_size += t.st_size

def print_progress(size, rel_file_path):
Contributor: At least offer the user an option to run this command quietly. Respect the Rule of Silence (http://www.linfo.org/rule_of_silence.html).

import ApplicationCapacityDescription
from azure.servicefabric.models.application_metric_description import \
ApplicationMetricDescription
from azure.cli.command_modules.sf._factory import cf_sf_client
Contributor: import ServicePlacementRequireDomainDistributionPolicyDescription

r = None
if formatted_placement_policies is not None:
Contributor: I'd like to make sure this is your intended behavior here:

  1. formatted_placement_policies is None: the function returns None
  2. len(formatted_placement_policies) == 0: the function returns [], an empty list
  3. otherwise: the function returns a populated list

Remember that the expression bool([]) returns False, which offers a more useful check on a list than is not None.

Usually, I would write this function in a simpler manner:

if formatted_placement_policies:
    r = []
    # other logic
    return r
else:
    return None  # or []

f += 512
return f

def sf_create_service(app_id, name, type, stateful=False, stateless=False, # pylint: disable=R0913
Contributor: Use # pylint: disable=too-many-arguments for better readability.

"ExclusiveProcess"
]
if activation_mode not in valid_modes:
raise CLIError("Invalid activation mode specified")
Contributor:

if activation_mode not in ('SharedProcess', 'ExclusiveProcess', None):
    raise CLIError(...)

@derekbekoe (Member): @samedder Any update on this PR?

@samedder (Contributor, Author) commented May 4, 2017

@derekbekoe and @troydai, thanks for the review. The only issue I don't know how to address well is the testing side.

I'll push an update later tonight.

@samedder (Contributor, Author) commented May 4, 2017

New changes address the comments and should also be passing PEP8 and PyLint now. Let me know what else needs to be changed. I think the only outstanding issue is how to do the testing. @troydai, I just renamed the files for the time being so they won't be used in CI.

total_files_count += (len(files) + 1)
for file in files:
    t = os.stat(os.path.join(root, file))
    total_files_size += t.st_size
Contributor: It seems that the progress report has been removed, so the total file size and count are not used anywhere. Am I right?

Contributor (author): I added the logic to display file upload progress, but only on --verbose. Is that okay, or should we maybe include a --show-progress flag?

@troydai (Contributor) commented May 4, 2017:

Two things to be done regarding the progress indicator:

  1. Use a --show-progress flag; --verbose has its own meaning, it is associated with elevating the logging level.
  2. Print the progress information to stderr instead of stdout, as other Unix tools do.
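To illustrate both points, a minimal sketch of an opt-in progress callback that stays quiet by default and writes to stderr (the function and flag names are assumptions, not the PR's actual code):

import sys


def make_progress_reporter(total_size, show_progress=False):
    # Silent by default (Rule of Silence); when enabled, report to stderr, not stdout
    sent = {'bytes': 0}

    def report(chunk_size, rel_file_path):
        if not show_progress:
            return
        sent['bytes'] += chunk_size
        sys.stderr.write('[{}/{}] bytes, {}\n'.format(sent['bytes'], total_size, rel_file_path))
        sys.stderr.flush()

    return report

A --show-progress command argument would then simply be threaded through as show_progress.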

rel_path, file))
).replace("\\", "/")
# pylint: disable=line-too-long
with open(os.path.normpath(os.path.join(root, file)), 'rb') as file_opened: # noqa: justification, no way to shorten
Contributor: You can move the path expression out of the open function to shorten the line.
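For example, hoisting the path expression keeps the with statement short without needing the noqa justification (a sketch; os is already imported in this module):

file_path = os.path.normpath(os.path.join(root, file))
with open(file_path, 'rb') as file_opened:
    pass  # upload logic unchanged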

ServicePlacementRequireDomainDistributionPolicyDescription
)

if formatted_placement_policies:
Contributor: Thanks for making the changes. However, there are multiple similar cases in this module which I didn't comment on. Do the same fixes apply to them?

Contributor (author): They do. I changed it so we treat [] and None the same and return None.
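A minimal sketch of that normalization (the helper name is hypothetical):

def parse_placement_policies(formatted_policies):
    # Treat an empty list and None the same: anything falsy maps to None
    if not formatted_policies:
        return None
    parsed = []
    for policy in formatted_policies:
        parsed.append(policy)  # placeholder for building the real description objects
    return parsed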



# pylint: disable=too-many-arguments
# pylint: disable=R0915
Contributor: Duplication.


if activation_mode is not None:
    if activation_mode not in ["SharedProcess", "ExclusiveProcess"]:
        raise CLIError("Invalid activation mode specified")
Contributor: Isn't this a more concise form?

if activation_mode not in ('SharedProcess', 'ExclusiveProcess', None):
    raise CLIError(...)

"Cannot specify standby replica keep duration for stateless "
"service"
)
# pylint: disable=R0204
Contributor: Use the readable rule name (e.g. redefined-variable-type instead of R0204).

def sf_report_partition_health(partition_id, source_id, health_property,
                               health_state, ttl=None, description=None,
                               sequence_number=None, remove_when_expired=None,
                               timeout=60):
Contributor: Move the pylint disable comment inline. This way the disable statement won't leak into the file scope.

def sf_report_partition_health(partition_id,  # pylint: disable=too-many-arguments
                               source_id, health_property,
                               health_state, ttl=None, description=None,
                               sequence_number=None, remove_when_expired=None,
                               timeout=60):

The same applies to other places as well.

@samedder (Contributor, Author) commented May 4, 2017

One more question: should we do a rebase and squash before the final merge for this?

@troydai (Contributor) left a review:

Looks good overall as long as the progress indicator is addressed. Please also fix the CI.

def print_progress(size, rel_file_path):
    nonlocal current_file_size
    current_file_size += size
    logger.info("[%d/%d] files, [%d/%d] bytes, %s", current_files_count,
Contributor: Print to stderr.

@samedder (Contributor, Author) commented May 4, 2017

Changed the behavior to print to stderr; let me know if this is unexpected. The Python package now also exists on PyPI.

with open('HISTORY.rst', 'r', encoding='utf-8') as f:
    HISTORY = f.read()

setup(
Member: Please make sure that you are using the azure_bdist_wheel.py file to exclude the azure/__init__.py, azure/cli/__init__.py, and azure/cli/command_modules/__init__.py files (see the setup.py/setup.cfg for any of the other command packages).
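For reference, a sketch of the pattern the other command packages' setup.py files follow (an approximation of the common Azure packaging convention at the time, not this PR's exact file):

from setuptools import setup

try:
    from azure_bdist_wheel import cmdclass  # shipped alongside setup.py in the other command packages
except ImportError:
    from distutils import log as logger
    logger.warn("Wheel is not available, disabling bdist_wheel hook")
    cmdclass = {}

setup(
    name='azure-cli-sf',
    # ... other arguments as in the existing setup() call ...
    cmdclass=cmdclass,
)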

Contributor (author): Ah thanks, that seems to be the issue. I see the bdist_wheel after pulling the latest changes.

@troydai (Contributor) commented May 5, 2017

Fix the CI:

  1. Code style
  2. Files without license headers

@samedder samedder force-pushed the feature_servicefabric_init branch from e1ae36d to 02e97a9 Compare May 5, 2017 01:47
@samedder (Contributor, Author) commented May 5, 2017

@troydai can you help us figure out why the Python 3 builds are failing with missing licenses? I don't see any changes that could be related to this.

@troydai (Contributor) commented May 5, 2017

Because the headers you added are not an exact match.

It will fail on all platforms; it shows on 3.6 only because the builds fail on the other two platforms before the headers are checked. As I have repeatedly emphasized throughout this pull request, please run check_style --ci or check_style --module sf --pep8 --pylint before pushing your changes. It is quicker to discover issues locally.

@samedder (Contributor, Author) commented May 5, 2017

@troydai I do :) every time. I just found that bug as well. I see what I need to change now.
I promise I'm not cheating:

(env) E:\work_root\azure\cli>git reset --hard
HEAD is now at 02e97a90 Adding ServiceFabric commands for version 5.6

(env) E:\work_root\azure\cli>.\scripts\check_style --module sf --pep8 --pylint


Run flake8 for PEP8 compliance
Modules: sf
Flake8 passed


Run pylint
Modules: sf
Pylint passed

@troydai (Contributor) commented May 5, 2017

You may need to clean up your workspace and run on both Python 2 and 3.

git clean ./src -xdff

@codecov-io commented May 5, 2017

Codecov Report

Merging #3136 into master will decrease coverage by 0.98%.
The diff coverage is 26.72%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #3136      +/-   ##
==========================================
- Coverage   71.12%   70.14%   -0.99%     
==========================================
  Files         362      371       +9     
  Lines       23929    24586     +657     
  Branches     3658     3767     +109     
==========================================
+ Hits        17019    17245     +226     
- Misses       5812     6237     +425     
- Partials     1098     1104       +6
Impacted Files Coverage Δ
...command_modules/azure-cli-sf/azure/cli/__init__.py 100% <100%> (ø)
src/command_modules/azure-cli-sf/azure/__init__.py 100% <100%> (ø)
...azure-cli-sf/azure/cli/command_modules/__init__.py 100% <100%> (ø)
...re-cli-sf/azure/cli/command_modules/sf/__init__.py 100% <100%> (ø)
...re-cli-sf/azure/cli/command_modules/sf/commands.py 100% <100%> (ø)
...ure-cli-sf/azure/cli/command_modules/sf/_params.py 100% <100%> (ø)
...azure-cli-sf/azure/cli/command_modules/sf/_help.py 100% <100%> (ø)
...re-cli-sf/azure/cli/command_modules/sf/_factory.py 5.55% <5.55%> (ø)
...zure-cli-sf/azure/cli/command_modules/sf/custom.py 9.15% <9.15%> (ø)
...urce/azure/cli/command_modules/resource/_params.py 100% <0%> (ø) ⬆️
... and 13 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 49c281b...29053f3. Read the comment docs.

DEPENDENCIES = [
    'azure-servicefabric==5.6.130',
    'azure-cli-core',
    'adal==0.4.3'
Contributor: This is breaking CI. I have submitted #3183

@samedder samedder deleted the feature_servicefabric_init branch May 11, 2017 20:08