Hi, I'm in the process of switching from xUnit to TUnit and noticed an inconvenient behavior related to test initialization.
I have a relatively small project, but it contains 20+ integration test classes. Each class is a self-contained use case, and I intentionally use a separate WebApplicationFactory instance per class to isolate them. Here's my WebApplicationFactory implementation:
```csharp
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using TUnit.Core.Interfaces;

public class TestingServer<TProgram> : WebApplicationFactory<TProgram>, IAsyncInitializer
    where TProgram : class
{
    private TestServer? _testServer;

    public Task InitializeAsync()
    {
        // Touching Server starts the host (and creates the database) once, in a thread-safe way.
        _testServer = Server;
        return Task.CompletedTask;
    }
}
```
And here's an example of a test class:
```csharp
public class MyTestClass
{
    // "Program" is the entry point of the API under test.
    [ClassDataSource<TestingServer<Program>>(Shared = SharedType.PerClass)]
    public required TestingServer<Program> TestingServer { get; set; }

    [Test, NotInParallel(Order = 1)]
    public async Task Test1()
    {
        // Test logic
    }

    [Test, NotInParallel(Order = 2)]
    public async Task Test2()
    {
        // Test logic
    }

    [Test, NotInParallel(Order = 99)]
    public async Task CleanUp()
    {
        // Deletes the database
    }
}
```
The problem I’m encountering is that when I run the tests (even for a single class), the test servers for all classes are initialized upfront, each with its own database. Since each API start-up triggers database creation, the overall start-up time becomes significant and grows with the number of classes.
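For illustration, this is roughly the kind of deferred start-up I was hoping for: the factory instance may be constructed early, but the host (and therefore the database) is only created when a test actually touches it. This is only a sketch, not TUnit's API or my current code, and `StartedServer` is a hypothetical helper:

```csharp
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;

// Sketch only: no IAsyncInitializer, so nothing forces the host to start upfront.
public class LazyTestingServer<TProgram> : WebApplicationFactory<TProgram> where TProgram : class
{
    private TestServer? _testServer;

    // WebApplicationFactory.Server is lazy; reading it here starts the host
    // (and creates the database) only when a test first asks for it.
    public TestServer StartedServer => _testServer ??= Server;
}
```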
Additionally, if I stop the test run before it finishes (e.g., while debugging), the CleanUp method does not execute, and all those databases remain. While this also happened occasionally in xUnit, it was limited to a single database. With TUnit, I’m now left with 20+ orphaned databases after an interrupted run.
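For reference, this is the kind of cleanup hook I would reach for instead of a final ordered test: tying the database deletion to the factory's disposal. It still wouldn't run if the process is killed mid-debug, but it keeps the cleanup with the resource that created the database. `DropDatabaseAsync` is a placeholder for my actual cleanup code:

```csharp
using Microsoft.AspNetCore.Mvc.Testing;

public class TestingServerWithCleanup<TProgram> : WebApplicationFactory<TProgram> where TProgram : class
{
    public override async ValueTask DisposeAsync()
    {
        // Delete this class's database before tearing down the host.
        await DropDatabaseAsync();
        await base.DisposeAsync();
    }

    private Task DropDatabaseAsync()
    {
        // Placeholder: e.g. connect to the database server and drop the per-class database.
        return Task.CompletedTask;
    }
}
```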
This behavior seems to scale poorly, and I’m wondering:

- Is this test server initialization behavior by design?
- Are there recommended patterns in TUnit to avoid initializing unused test servers?
- Am I doing something wrong in the way I structure or configure my tests?
Any guidance or examples on how others handle similar scenarios would be greatly appreciated.