ADT perf tests #21944

Merged: 12 commits merged into main, Jun 22, 2021
Conversation


@azabbasi azabbasi commented Jun 17, 2021

This PR adds the project structure for performance tests, which you can read more about here:
https://github.com/Azure/azure-sdk-for-net/wiki/Writing-performance-tests-for-Client-libraries

We will only add the test for the Query API, to get some feedback from the SDK team and evaluate whether we need to add more perf tests for our SDK.

Quotes from discussions with the SDK team:

"
Performance Testing using our perf framework, in general, allows you to test throughput and latency offered to the customers via the SDKs.

Major Benefits include:

Performance Regressions are caught prior to release. Regressions can come in from new code changes that get merged between two releases, new dependencies that get introduced or old dependencies that are upgraded.

Performance against Track 1 gets evaluated and if Track 2 Digital Twins is slower than Track 1, then we also offer the guidance and support to investigate the bottlenecks which are slowing down the Track 2 SDK.

The Digital Twins performance tests will be plugged into our perf Automation pipelines automatically and will run the tests regularly to scan for any performance issues that should be fixed before releasing the SDK.
"

@ghost ghost added the Digital Twins label Jun 17, 2021
/// <summary>
/// The Digital Twins instance endpoint to run the tests against.
/// </summary>
public string DigitalTwinsUrl => GetVariable("DIGITALTWINS_URL");
azabbasi (Contributor, Author) commented:
These environment variables are set during the live test pipeline resource deployment. I assume we have to use the same ones for the perf tests (since it was mentioned that we write them just as we would write live tests).
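As context, and purely as an assumption about code not shown in this excerpt, such a property typically sits on a TestEnvironment-derived class, mirroring the live-test pattern where each property reads a variable provisioned by the test-resources deployment:

```csharp
using Azure.Core.TestFramework;

// Hypothetical sketch of the surrounding environment class; the class name and the
// static Instance property are illustrative, not taken from this PR.
public class DigitalTwinsPerfTestEnvironment : TestEnvironment
{
    public static DigitalTwinsPerfTestEnvironment Instance { get; } = new DigitalTwinsPerfTestEnvironment();

    /// <summary>
    /// The Digital Twins instance endpoint to run the tests against.
    /// </summary>
    public string DigitalTwinsUrl => GetVariable("DIGITALTWINS_URL");
}
```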

azabbasi (Contributor, Author) commented:
It turns out that there is currently no automation in the perf test framework, and the perf team will run the tests manually (with my help, of course).

{
}

public override async Task SetupAsync()
azabbasi (Contributor, Author) commented:
Ideally, we would use this method to populate the Digital Twins instance with resources that we can query.


// Global setup code that runs once at the beginning of test execution.
// Create the model globally so all tests can take advantage of it.
await AdtInstancePopulator.CreateRoomModelAsync(_digitalTwinsClient).ConfigureAwait(false);
azabbasi (Contributor, Author) commented:
This is a one-time setup for all parallel tests; we only need to do this once per run.
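To make the global setup step concrete, here is a hypothetical sketch of what a helper like AdtInstancePopulator.CreateRoomModelAsync might do; the PR's actual implementation is not shown in this excerpt, and the model id and DTDL payload are illustrative.

```csharp
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

public static partial class AdtInstancePopulator
{
    // Upload a small DTDL "room" model once so every parallel test instance can
    // create twins of that model. Illustrative sketch only.
    public static async Task CreateRoomModelAsync(DigitalTwinsClient client)
    {
        string roomModel = @"{
            ""@id"": ""dtmi:perf:Room;1"",
            ""@context"": ""dtmi:dtdl:context;2"",
            ""@type"": ""Interface"",
            ""displayName"": ""Room"",
            ""contents"": [
                { ""@type"": ""Property"", ""name"": ""TestId"", ""schema"": ""string"" }
            ]
        }";

        // CreateModelsAsync takes the DTDL documents as raw JSON strings; a real helper
        // would likely tolerate the model already existing (409 Conflict) on reruns.
        await client.CreateModelsAsync(new[] { roomModel }).ConfigureAwait(false);
    }
}
```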

public override async Task SetupAsync()
{
await base.SetupAsync();
await AdtInstancePopulator.CreateIndividualRoomTwins(_digitalTwinsClient, _testId, _size).ConfigureAwait(false);
azabbasi (Contributor, Author) commented:
We populate the instance with digital twins tagged with the test Id associated with this object.
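Similarly, a hypothetical sketch of what CreateIndividualRoomTwins might look like; the PR's actual helper is not shown here, and the twin id format and model id are illustrative.

```csharp
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

public static partial class AdtInstancePopulator
{
    // Create `count` room twins, each tagged with this test instance's test Id so its
    // queries only ever see its own twins. Illustrative sketch only.
    public static async Task CreateIndividualRoomTwins(
        DigitalTwinsClient client, string testId, int count)
    {
        for (int i = 0; i < count; i++)
        {
            var twin = new BasicDigitalTwin
            {
                Id = $"room-{testId}-{i}",
                Metadata = { ModelId = "dtmi:perf:Room;1" },
                Contents = { ["TestId"] = testId },
            };

            await client
                .CreateOrReplaceDigitalTwinAsync(twin.Id, twin)
                .ConfigureAwait(false);
        }
    }
}
```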

public override void Run(CancellationToken cancellationToken)
{
Pageable<BasicDigitalTwin> result = _digitalTwinsClient
.Query<BasicDigitalTwin>($"SELECT * FROM DIGITALTWINS WHERE TestId = '{_testId}'", CancellationToken.None);
azabbasi (Contributor, Author) commented:
We will only query for twins with the test Id that is associated with this object.
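For completeness, a plausible shape of the full Run method (the exact body is cut off in this excerpt, and the enumeration shown here is an assumption): enumerating the Pageable forces every page of results to be fetched, so the measured operation covers the whole query rather than just the first service call.

```csharp
public override void Run(CancellationToken cancellationToken)
{
    Pageable<BasicDigitalTwin> result = _digitalTwinsClient
        .Query<BasicDigitalTwin>($"SELECT * FROM DIGITALTWINS WHERE TestId = '{_testId}'", CancellationToken.None);

    foreach (BasicDigitalTwin twin in result)
    {
        // Enumeration alone exercises the remaining service round-trips.
    }
}
```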

@azabbasi azabbasi force-pushed the feature/adt/azabbasi/perfTests branch from 2508e34 to 7f5f869 on June 22, 2021 16:47
@jsquire jsquire (Member) left a comment:
Thanks!

@jsquire jsquire self-requested a review June 22, 2021 16:48

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

Please see our [contributing guide](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/CONTRIBUTING.md) for more information.
azabbasi (Contributor, Author) commented:
Updating the copy-paste mistake (the contributing guide link above points at the Event Hubs path).

@azabbasi azabbasi enabled auto-merge (squash) June 22, 2021 23:25
@check-enforcer commented:
This pull request is protected by Check Enforcer.

What is Check Enforcer?

Check Enforcer helps ensure all pull requests are covered by at least one check-run (typically an Azure Pipeline). When all check-runs associated with this pull request pass then Check Enforcer itself will pass.

Why am I getting this message?

You are getting this message because Check Enforcer did not detect any check-runs being associated with this pull request within five minutes. This may indicate that your pull request is not covered by any pipelines and so Check Enforcer is correctly blocking the pull request being merged.

What should I do now?

If the check-enforcer check-run is not passing and all other check-runs associated with this PR are passing (excluding license-cla) then you could try telling Check Enforcer to evaluate your pull request again. You can do this by adding a comment to this pull request as follows:
/check-enforcer evaluate
Typically, evaluation only takes a few seconds. If you know that your pull request is not covered by a pipeline and this is expected, you can override Check Enforcer using the following command:
/check-enforcer override
Note that using the override command triggers alerts so that follow-up investigations can occur (PRs still need to be approved as normal).

What if I am onboarding a new service?

Often, new services do not have validation pipelines associated with them. In order to bootstrap pipelines for a new service, please perform the following steps:

For data-plane/track 2 SDKs, issue the following command as a pull request comment:

/azp run prepare-pipelines
This will run a pipeline that analyzes the source tree and creates the pipelines necessary to build and validate your pull request. Once the pipeline has been created you can trigger the pipeline using the following comment:
/azp run net - [service] - ci

For track 1 management-plane SDKs

Please open a separate PR and add your service SDK path to this file. Once that PR has been merged, you can re-run the pipeline to trigger the verification.

@azabbasi azabbasi merged commit caf436c into main Jun 22, 2021
@azabbasi azabbasi deleted the feature/adt/azabbasi/perfTests branch June 22, 2021 23:42