ADT perf tests #21944
Conversation
```csharp
/// <summary>
/// The Digital Twins instance endpoint to run the tests against.
/// </summary>
public string DigitalTwinsUrl => GetVariable("DIGITALTWINS_URL");
```
These environment variables are set during the live test pipeline's resource deployment. I assume we have to use the same ones for the perf tests (since it was mentioned that we write them just as we would write live tests).
It turns out that there is currently no automation in the perf test framework, and the perf team will run the tests manually (with my help, of course).
...ins/Azure.DigitalTwins.Core/perf/Azure.DigitalTwins.Core.Perf/Scenarios/QueryDigitalTwins.cs (resolved)
```csharp
{
}

public override async Task SetupAsync()
```
Ideally we would use this method to populate the Digital Twins instance with resources that we can query.
```csharp
// Global setup code that runs once at the beginning of test execution.
// Create the model globally so all tests can take advantage of it.
await AdtInstancePopulator.CreateRoomModelAsync(_digitalTwinsClient).ConfigureAwait(false);
```
This is a one-time setup shared by all parallel tests; we only need to do it once per run.
```csharp
public override async Task SetupAsync()
{
    await base.SetupAsync();
    await AdtInstancePopulator.CreateIndividualRoomTwins(_digitalTwinsClient, _testId, _size).ConfigureAwait(false);
}
```
We populate the instance with digital twins tagged with the specific test Id associated with this object.
```csharp
public override void Run(CancellationToken cancellationToken)
{
    Pageable<BasicDigitalTwin> result = _digitalTwinsClient
        .Query<BasicDigitalTwin>($"SELECT * FROM DIGITALTWINS WHERE TestId = '{_testId}'", CancellationToken.None);
}
```
We will only query for twins with the test Id that is associated with this object.
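The perf framework also measures an async code path alongside the synchronous `Run`. A minimal sketch of what the async counterpart could look like, assuming the same `_digitalTwinsClient` and `_testId` fields shown in the diff (this is illustrative, not part of this PR):

```csharp
public override async Task RunAsync(CancellationToken cancellationToken)
{
    // QueryAsync returns an AsyncPageable; the same TestId filter keeps
    // each parallel test instance scoped to its own twins.
    AsyncPageable<BasicDigitalTwin> result = _digitalTwinsClient
        .QueryAsync<BasicDigitalTwin>($"SELECT * FROM DIGITALTWINS WHERE TestId = '{_testId}'", cancellationToken);

    // Enumerate the results so the service round-trips are actually measured.
    await foreach (BasicDigitalTwin twin in result.ConfigureAwait(false))
    {
    }
}
```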
sdk/eventhub/Azure.Messaging.EventHubs/samples/Sample09_ObservableEventBatch.md (outdated, resolved)
Force-pushed from 2508e34 to 7f5f869.
Thanks!
```markdown
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

Please see our [contributing guide](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/CONTRIBUTING.md) for more information.
```
Updating the copy-paste mistake.
...ins/Azure.DigitalTwins.Core/perf/Azure.DigitalTwins.Core.Perf/Scenarios/QueryDigitalTwins.cs (outdated, resolved)
...ins/Azure.DigitalTwins.Core/samples/DigitalTwinsClientSample/DigitalTwinsLifecycleSamples.cs (outdated, resolved)
This PR is to add the project structure for performance tests that you can read more about here:
https://github.com/Azure/azure-sdk-for-net/wiki/Writing-performance-tests-for-Client-libraries
We will only add a test for the Query API for now, to get feedback from the SDK team and evaluate whether we need to add more perf tests for our SDK.
Quotes from discussions with the SDK team:

> Performance testing using our perf framework, in general, allows you to test the throughput and latency offered to customers via the SDKs. Major benefits include:
>
> - Performance regressions are caught prior to release. Regressions can come from new code changes merged between two releases, from new dependencies being introduced, or from old dependencies being upgraded.
> - Performance against Track 1 gets evaluated, and if Track 2 Digital Twins is slower than Track 1, we also offer guidance and support to investigate the bottlenecks slowing down the Track 2 SDK.
> - The Digital Twins performance tests will be plugged into our perf automation pipelines automatically, which will run the tests regularly to scan for any performance issues that should be fixed before releasing the SDK.