
OLD: Code-based test generators (.NET)


Author: Dávid Honfi

NOTE (2018): This part is optional; the mandatory part is the Java-based tools.

Prerequisites

  • .NET Framework 4.5 or higher
  • Visual Studio 2015 Enterprise or higher

1. Introduction

In this exercise, students will get familiar with Microsoft IntelliTest (formerly known as Microsoft Pex), a state-of-the-art code-based test generator tool. IntelliTest automatically generates test inputs for source code written in C#. In the background, IntelliTest uses the symbolic execution technique, which attempts to derive inputs that execute the reachable parts of the source code.

IntelliTest uses Parameterized Unit Tests (PUTs), which are test methods extended with parameters. This idea was also introduced in the Developer Testing with JUnit exercise of this course. PUTs can contain the commonly used testing statements (Arrange/Act/Assert). With IntelliTest, there is no need to define inputs for the PUTs, because the tool generates them during a so-called exploration phase (this is where symbolic execution is applied). A minimal sketch of a PUT is shown below.
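
For orientation, a minimal PUT might look like the following sketch. It uses the Simple class and IfBranching method from the tutorial below and assumes that IfBranching takes and returns an int (an assumption; the exercise code may differ).

```csharp
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(Simple))]
public partial class SimpleTest
{
    // Parameterized unit test: the input 'x' is left open and
    // IntelliTest generates concrete values for it during exploration.
    [PexMethod]
    public int IfBranchingTest(int x)
    {
        // Arrange
        var target = new Simple();
        // Act
        int result = target.IfBranching(x);
        // Assert: PexAssert/Assert statements would go here;
        // returning the value lets the generated tests record it.
        return result;
    }
}
```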

2. Simple test generation

2.1 Tutorial

In this part of the exercise, the basic functions of IntelliTest are introduced through very simple code samples.

  1. Open the SWSV.Simple solution in Visual Studio.

  2. IntelliTest can generate inputs for a method simply by right-clicking on it, even without an existing parameterized test method (in the background, the tool creates temporary files).

  3. In the class Simple, right-click on the method IfBranching and select Run IntelliTest.

  4. Examine the output of IntelliTest! The tool presents detailed information about the test generation process and its results in the IntelliTest window:

  5. In the second row of the window (starting with a green tick), the aggregated results can be found.

  6. The number next to the tick indicates the number of tests that finished without errors, while the number next to the cross indicates the number of failing tests.

  7. Right after these, a coverage indicator shows the overall block coverage (see this thread for an explanation) achieved by the generated test cases.

  8. Below, a table contains the details of the generated test cases, with the generated test code shown on the right-hand side.

  9. The target column describes how the unit under test was constructed. In the current example, the class Simple was instantiated with its default constructor.

  10. The next column (condition) contains the values assigned to the only input parameter (in this case 11 and 0); see the hypothetical sketch after this list for why such a pair of values is generated.

  • The result(target) column contains the object state of the unit under test after the execution with the given input values. In this scenario, it did not change (since the class Simple does not have any properties or fields to modify).
  • The result column contains the return value of the method under test in the given test case.
  • The Summary/Exception column indicates any exception that occurred during the execution of the test case.
  11. The selected test cases can be saved for later use by clicking on the save icon in the top row of the IntelliTest window.
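
To illustrate why a pair of inputs like 11 and 0 appears, consider a purely hypothetical sketch of what a branching method such as IfBranching might look like (the actual code in SWSV.Simple may differ): symbolic execution solves the branch condition once for the true branch and once for the false branch.

```csharp
public class Simple
{
    // Hypothetical sketch only; not the actual exercise code.
    // With a condition such as x > 10, IntelliTest typically generates
    // one input satisfying it (e.g. 11) and one violating it (e.g. 0).
    public int IfBranching(int x)
    {
        if (x > 10)
        {
            return 1; // covered by an input like 11
        }
        return 0;     // covered by an input like 0
    }
}
```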

2.2 Exercises

Try to answer the following questions for the methods SwitchBranching, ForLoop and SwitchAndFor. See the required artefacts below.

  1. What input values would you choose for testing?
  2. Run IntelliTest. What kinds of inputs were generated? Do they match your expectations?
  3. Get an overview of the generated test code. What types of assertions does it contain? (A sketch of typical generated code follows this list.)
  4. Save the generated test cases, then right-click on the saved test cases and select Run Tests.
  5. Select Test in the Visual Studio menu bar and choose Analyze Code Coverage -> All Tests. What coverage does the test suite achieve?
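
For orientation, IntelliTest-generated tests are ordinary unit tests that call the PUT with the concrete values found during exploration. The following is only a sketch of their typical shape (method names, inputs and expected values will differ in your run):

```csharp
using Microsoft.Pex.Framework.Generated;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public partial class SimpleTest
{
    // Sketch of a generated test: the concrete input (here 0) and the
    // expected value were observed during exploration, so the assertion
    // acts as a regression oracle rather than a specification.
    [TestMethod]
    [PexGeneratedBy(typeof(SimpleTest))]
    public void IfBranchingTest42()
    {
        int result = this.IfBranchingTest(0);
        Assert.AreEqual<int>(0, result);
    }
}
```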

CHECK: Create a screenshot containing the generated test cases for one of the three methods.

CHECK: Create a screenshot about the coverage results of method SwitchBranching. The screenshot shall contain both the overall coverage metrics and the colored source code.

3. Advanced test generation

  1. Open the SWSV.Complex solution.

The solution contains a project with three classes: Service, ServiceManager and ServiceConsumer. In this exercise, the scenario is that the ServiceConsumer consumes a Service that can only be obtained through the ServiceManager. The testing goal is to reach high code coverage with generated tests for the classes ServiceConsumer and Service.

  1. The unit under test is the ServiceConsumer class and its one and only method ConsumeService.

  2. Create a PUT for ConsumeService by right-clicking on the method and selecting Create IntelliTest. This opens a dialog that asks for the details of the new test project. For this exercise, the default values are acceptable.

  3. Try running IntelliTest on the PUT by right-clicking on the newly generated method. What are your expectations? What are the results?

  4. Click on the yellow warning sign found in the top bar of the IntelliTest window. The warning message states the following: "could not guess how to create SWSV.Complex.ServiceManager".

  5. This indicates that IntelliTest could not create an instance of ServiceManager. Take a look at the ServiceManager class and try to guess what could be the problem.

  6. These kinds of issues can be resolved using factory methods, which tell IntelliTest how to create an instance of a class in a desired state.

Right-click on the warning message mentioned before and select Fix. This generates a new Factories folder in the test project and a new file called ServiceManagerFactory. This static class has a method annotated with a PexFactoryMethod attribute, which denotes the type produced by the factory. By default, the method throws an exception. Delete this statement and instead create and return an instance of ServiceManager using the appropriate method (or property). A hedged sketch of such a factory is shown below.
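
For illustration only, the completed factory might look like the sketch below. The ServiceManager.Instance accessor is an assumption; use whichever member the ServiceManager class actually exposes for obtaining an instance.

```csharp
using Microsoft.Pex.Framework;
using SWSV.Complex;

public static partial class ServiceManagerFactory
{
    // Tells IntelliTest how to construct a ServiceManager.
    // NOTE: ServiceManager.Instance is a hypothetical accessor;
    // substitute the method or property the exercise code provides.
    [PexFactoryMethod(typeof(ServiceManager))]
    public static ServiceManager Create()
    {
        ServiceManager manager = ServiceManager.Instance;
        return manager;
    }
}
```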

  1. Run IntelliTest again. What improvements do you notice? What problem can still be noticed?

CHECK: Create a screenshot of the factory method you created for ServiceManager.

Now IntelliTest is able to obtain an instance of ServiceManager. However, the documentation of the method places constraints on its behavior that must also be taken into account during test generation.

  1. Additional constraints can be specified as pre- and postconditions using special statements in the PUT method. These assumptions and assertions guide IntelliTest to generate relevant test cases.

  2. The documentation of method ConsumeService states that the parameter manager is known to be non-null. Moreover, the result of this method can only take one of the following values: -1, 0, 1, 2.

  3. Express these requirements in the PUT using assumptions (PexAssume.XXX()) and assertions (PexAssert.XXX()). Note that assumptions only take effect if they are placed before the call to the method under test; accordingly, assertions belong after that call. (A hedged sketch of such a PUT follows this list.)
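
Assuming the generated PUT takes a ServiceConsumer target and a ServiceManager manager and returns an int (an assumption; your generated signature may differ), the filled-in PUT could look roughly like this:

```csharp
using Microsoft.Pex.Framework;
using SWSV.Complex;

public partial class ServiceConsumerTest
{
    [PexMethod]
    public int ConsumeServiceTest(ServiceConsumer target, ServiceManager manager)
    {
        // Assumptions (preconditions) go before the call under test.
        PexAssume.IsNotNull(target);
        // Documented precondition: manager is known to be non-null.
        PexAssume.IsNotNull(manager);

        int result = target.ConsumeService(manager);

        // Assertion (postcondition) after the call: the documented
        // result values are -1, 0, 1 or 2.
        PexAssert.IsTrue(result >= -1 && result <= 2);
        return result;
    }
}
```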

CHECK: Create a screenshot of the PUT you filled with assumptions and assertions for method ConsumeService.

After running IntelliTest again, it can be noticed that only one test case is generated.

  1. Taking a closer look at class Service reveals that the service itself is stateful, and the behavior of method DoService() depends on that state. Moreover, this state can be changed only by calling certain methods.

  2. The only way to address this is to parameterize the previously created factory method with the required state of the service. This enables IntelliTest to choose a specific state for the service.

  3. Extend your factory method for class ServiceManager with new parameter(s). Based on the new parameter(s), the method should be able to return the ServiceManager and its Service instance in various states (a hedged sketch follows this list). NOTE: After changing the parameters of the factory, the already generated tests will no longer compile. To resolve this, delete the file containing the already generated tests (look for a .g.cs file nested under the file containing the PUT).
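
A possible shape for the extended factory is sketched below. The state-changing members used here (GetService(), Start(), Stop()) are purely hypothetical; substitute the methods the Service and ServiceManager classes actually offer.

```csharp
using Microsoft.Pex.Framework;
using SWSV.Complex;

public static partial class ServiceManagerFactory
{
    // The extra parameter lets IntelliTest explore different service states.
    // NOTE: ServiceManager.Instance, GetService(), Start() and Stop() are
    // hypothetical names used only for illustration.
    [PexFactoryMethod(typeof(ServiceManager))]
    public static ServiceManager Create(bool startService)
    {
        ServiceManager manager = ServiceManager.Instance;
        Service service = manager.GetService();
        if (startService)
        {
            service.Start();
        }
        else
        {
            service.Stop();
        }
        return manager;
    }
}
```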

CHECK: Create a screenshot of the extended factory method.

Run IntelliTest again on the PUT.

  1. What did you experience?

  2. What is the coverage of the generated test cases in classes ServiceConsumer and Service?

CHECK: Create a screenshot of the generated test cases that use the extended factory method.

4. Visualizing test generation

NOTE: This task is only for those, who are interested in the background of symbolic execution.

The previous parts of the lab exercise ignored the inner workings of IntelliTest. Taking a somewhat deeper look into IntelliTest may help in understanding the workflow of symbolic execution-based test generation. Thus, the final part of this exercise uses visualization of symbolic execution for that purpose.

A symbolic execution of a program can be represented as a so-called symbolic execution tree. The nodes represent places in the source code, while directed edges are drawn when a symbolic execution path passes through the connected nodes. The visual representation also contains additional information about the nodes and the execution; a small sketch of how path conditions build up along such a tree is given below.
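
As an intuition only (reusing the hypothetical IfBranching sketch from Section 2.1, so the concrete conditions are assumptions), path conditions accumulate one branch decision at a time along the tree:

```csharp
// Hypothetical tree for Simple.IfBranching(int x) with a branch x > 10:
//
//   root:      target != null                  (the Simple instance exists)
//   branch A:  target != null && x > 10        -> solver yields e.g. x = 11
//   branch B:  target != null && !(x > 10)     -> solver yields e.g. x = 0
//
// Each leaf with a satisfiable path condition yields one generated test case.
```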

OPTION A - Building from source: Download the source code of SEViz from here. This is a Visual Studio extension that is able to visualize the test generation process of IntelliTest. Follow the instructions provided on the website of the tool for building the solution and installing it into Visual Studio using its .vsix file. Alternatively, you can run SEViz in an experimental instance of Visual Studio.

OPTION B - Using the release: Another option is to download the release of the tool from here for VS2015 or here for VS2017.

  1. Open the SWSV.Simple solution again, which was used previously in the tutorial part.
  2. Create PUTs for the four methods found in class Simple by right-clicking on the class name and selecting Create IntelliTest.
  3. Add SEViz.Monitoring.dll as a reference to the previously created test project by right-clicking on References in the test project and selecting Add Reference. The dll can be found in the SEViz build directory (..\SEViz\SEViz.Monitoring\bin\Debug\).
  4. Add the [SEViz] attribute on top of the [PexMethod] attribute of the method IfBranchingTest (Visual Studio will require adding the using SEViz.Monitoring statement to the top of the file); a sketch of the attributed PUT is shown at the end of this section.
  5. Run IntelliTest on method IfBranchingTest. When the execution finishes, a popup will appear indicating that a new SEViz graph is available. Select Yes to load it.
  6. In the window that opens, the symbolic execution tree starting from the method IfBranching is visualized interactively.
  • The shape, color and border of each node convey different pieces of information.
    • Shape: Indicates whether a call to a constraint solver was made in that point of the execution.
    • Border: Indicates whether the node has source code place information attached.
    • Color: Leaf nodes are colored to indicate whether the corresponding execution path has a generated test case (red or green depending on its result [red: erroneous test case execution, green: error-free test case execution]) or not (orange).
  • Hovering over a node shows its path condition (in incremental form), the name of the containing method, and the source code mapping (if available).
  • Clicking on a node (selecting it) shows its detailed metadata, with descriptions of every essential piece of information, in the Properties window of Visual Studio.
  • Double-clicking on a source-mapped node will jump to and select the corresponding source code line in the editor.
  • Selecting source lines, then right-clicking and selecting "Show Selected Lines in SEViz" will select the corresponding nodes in the symbolic execution tree.
  7. Try out the features on the other three methods too by attributing them with [SEViz] and answer the following questions.
  • Why does every execution contain the target != null condition? (Hint: think of the unit under test as the target)
  • How are the path conditions built up in a branching?
  • Which branch is executed first in a branching? (Hint: look at the node identifiers and use the source mapping)
  • When do orange nodes appear? (Hint: Use the ForLoop method)
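
For reference, the attributed PUT from step 4 would look roughly like the following sketch (the body of the generated PUT may differ; the int parameter and return type are assumptions):

```csharp
using Microsoft.Pex.Framework;
using SEViz.Monitoring;

public partial class SimpleTest
{
    // The [SEViz] attribute makes SEViz record the symbolic execution tree
    // while IntelliTest explores this PUT.
    [SEViz]
    [PexMethod]
    public int IfBranchingTest(int x)
    {
        var target = new Simple();
        return target.IfBranching(x);
    }
}
```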