[DAM analyzer] Check for unexpected analyzer warnings in linker tests #2523
Force-pushed from 2a14edb to f723d31
* Fix inconsistencies in Requires attribute checks
* Fix build and continue verifying attribute arguments
* PR feedback
  - Remove unnecessary check for ITypeSymbol
  - Clean up null check logic
* Rename containingSymbol -> symbol
* Rename symbol -> member
/azp run

Azure Pipelines failed to run 1 pipeline(s).

/azp run

Azure Pipelines failed to run 1 pipeline(s).
And check in correct suppression test generated sources
Force-pushed from 450f780 to 73218c8
- Support SandboxDependency with type argument
- Don't crash on SetupCompileResource with a text file
- Add analyzer ExpectedWarnings to work around lack of support for top-level ExpectedWarning with SourceLine/SourceColumn
I didn't quite understand the part about the generated analyzer tests passing even when the analyzer doesn't produce the expected warnings. Does omitting [ExpectedWarning] in the linker test ensure that the analyzer at least achieves parity with the linker via TestChecker?
Thanks for automating this process!
Looks good - only one question about the compiler/source control setup.
```xml
  <CompilerGeneratedFilesOutputPath>generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>

<ItemGroup>
  <Compile Remove="$(CompilerGeneratedFilesOutputPath)" />
</ItemGroup>
```
I don't really understand this - we ask the compiler to write the generated code to disk (I get that, at least for validation purposes and such), but then remove it from the compilation... is this because the source generator already added it?
Also, why do we commit the generated files to source control if they are always regenerated?
Yes, if we didn't remove them from the compilation they would be defined twice. I'm committing them to source control to make it easier to review changes, and because it sounds like the test UI may have trouble displaying generated tests otherwise. I don't have a strong opinion on it but this is per @agocke's suggestion - happy to remove them if we would rather not check in a bunch of generated sources.
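For context, the usual MSBuild pattern for persisting generator output pairs the output path with the `EmitCompilerGeneratedFiles` flag; a sketch of the full shape (the `**/*.cs` glob is an assumption here, the project's actual item may differ):

```xml
<PropertyGroup>
  <!-- Ask the compiler to also write each source generator's output to disk,
       under the ./generated directory. -->
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
  <CompilerGeneratedFilesOutputPath>generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>

<ItemGroup>
  <!-- Exclude the on-disk copies from the build; otherwise each generated
       source would be compiled twice (once from disk, once from the
       in-memory generator output) and produce duplicate-definition errors. -->
  <Compile Remove="$(CompilerGeneratedFilesOutputPath)/**/*.cs" />
</ItemGroup>
```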
For code that you're expected to interact with directly (e.g. tests), I'd prefer to have them checked in. VS Code can't run them individually otherwise.
Also, when it's time to properly enable these test cases, it will be easy to move the file into a non-generated directory: the generator will see that the test file is already present and stop generating it, and git will recognize and track the move operation.
Generally yes, but in this change the generated tests (and any tests which pass
…dotnet/linker#2523)

This adds a source generator which generates analyzer xunit facts for the linker testcases. It discovers linker testcases by examining the referenced assembly, and avoids generating cases which already exist as analyzer facts (this requires tests to follow the existing convention for naming testcase files and facts). The generated sources are checked in to make changes easier to review.

The generated testcases only validate that the analyzer produces no unexpected warnings on the input (that is, these tests may pass even if the analyzer doesn't produce some of the expected warnings). Any tests which are currently skipped don't get this validation, so before closing out the feature branch we will want to replace skipped facts with facts that run in the allowMissingWarnings mode. The failing tests have been disabled and are tracked in separate issues.

This also contains a number of small infrastructure tweaks to match the linker's test infrastructure enough to run all of these tests.

* Add test run to check for unexpected analyzer warnings
* Allow more flexible test file organization
* Check expected warning locations
* Respect ExpectedNoWarningsAttribute
* Write generated code to disk
* Add generated sources
* Don't produce warnings for typeof(Foo<>)
* Don't crash on InvalidOperation
* Compare symbols correctly
* Don't run analyzer on .il test dependencies
* Make LinkAttributesTests partial
* Fix test infra bugs with LogContains attributes, and check in correct suppression test generated sources
* Skip failing tests
  - dotnet/linker#2578
  - dotnet/linker#2415
  - dotnet/linker#2579
* Fix remaining test infrastructure issues
  - Support SandboxDependency with type argument
  - Don't crash on SetupCompileResource with a text file
  - Add analyzer ExpectedWarnings to work around lack of support for top-level ExpectedWarning with SourceLine/SourceColumn

Commit migrated from dotnet/linker@917830d
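As a rough sketch of what a generated xunit fact might look like: the class, fact, and helper names below are hypothetical stand-ins, not the PR's actual test infrastructure, and the helper is stubbed out so the snippet is self-contained.

```csharp
using System.Threading.Tasks;
using Xunit;

// Hypothetical names throughout; the real generated facts call into the
// repo's shared analyzer test infrastructure instead of a local stub.
public class RequiresCapabilityTests
{
    [Fact]
    public Task RequiresOnMethod() =>
        // The generated fact fails only on *unexpected* analyzer warnings;
        // with allowMissingWarnings, expected warnings that the analyzer
        // fails to produce are tolerated rather than failing the test.
        RunTestAsync(allowMissingWarnings: true);

    // Stub standing in for the helper that would compile the linker
    // testcase source and run the analyzer over it.
    private static Task RunTestAsync(bool allowMissingWarnings) =>
        Task.CompletedTask;
}
```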