Open
Labels
deferred (Feature deferred for future consideration), demo (Example/demo plugin implementation), enhancement (New feature or request)
Description
Testing Plugin for SmartSwitch
Summary
Add a built-in TestingPlugin to enable declarative testing with automatic validation of method calls, arguments, and return values.
Motivation
Currently, when testing code that uses SmartSwitch, developers must:
- Manually assert on return values
- Write repetitive validation code
- Track call counts manually
A testing plugin would provide:
- Declarative expectations: Define what you expect upfront
- Auto-validation: SmartSwitch validates automatically
- Spy pattern: Record all calls for inspection
- Mock support: Override return values for testing
Proposed Solution
Basic API
```python
from smartswitch import Switcher
from smartswitch.testing import TestingPlugin

# Setup
registry = MyRegistry()
registry.api = Switcher()

test_plugin = TestingPlugin()
registry.api.add_plugin(test_plugin)

# Set expectations
test_plugin.expect('create_user',
    args=('john@example.com',),
    kwargs={'role': 'admin'},
    returns={'id': 1, 'email': 'john@example.com', 'role': 'admin'}
)

# Execute (plugin validates automatically)
result = registry.api('create_user')('john@example.com', role='admin')

# Verify all expectations were met
test_plugin.verify_all_called()
```

Advanced Features
Argument Matchers:

```python
test_plugin.expect('create_user',
    args_matcher=lambda args: '@' in args[0],  # any valid email
    returns_matcher=lambda r: 'id' in r        # has an ID field
)
```

Mock Mode:

```python
# Override the real implementation
test_plugin.mock('get_user', returns={'id': 999, 'name': 'Mock User'})
result = registry.api('get_user')(123)
# Returns mock data without calling the real method
```

Performance Assertions:

```python
test_plugin.expect('fetch_data',
    max_time_ms=100.0  # must complete in < 100 ms
)
```

Call Count Validation:

```python
test_plugin.expect('notify', min_calls=1, max_calls=3)
# ... perform operations
test_plugin.verify()  # fails if notify() was called < 1 or > 3 times
```

Use Cases
1. Testing Registry Operations
```python
def test_mount_registry():
    storage = StorageManager()
    plugin = TestingPlugin()
    storage.mount_registry.api.add_plugin(plugin)

    # Expect a specific flow
    plugin.expect('add', args=('uploads', 's3_aws', {...}, None))
    plugin.expect('list', returns=[{'name': 'uploads', ...}])
    plugin.expect('getNode',
        args=('uploads', 'file.pdf'),
        returns={'mount': 'uploads', 'path': 'file.pdf', ...}
    )

    # Execute test
    storage.mount_registry.add('uploads', 's3_aws', {...}, None)
    mounts = storage.mount_registry.list()
    node = storage.mount_registry.getNode('uploads', 'file.pdf')

    # All validations happened automatically!
    plugin.verify_all_called()
```

2. Spy Pattern for Call Inspection
```python
plugin = TestingPlugin(mode='spy')  # record but don't validate
# ... perform operations

# Inspect the recorded calls
assert len(plugin.calls) == 3
assert plugin.calls[0]['handler'] == 'add'
assert plugin.calls[1]['args'] == ('uploads',)
```

3. Integration with Pytest
```python
import pytest

@pytest.fixture
def registry_with_testing():
    registry = MyRegistry()
    plugin = TestingPlugin()
    registry.api.add_plugin(plugin)
    yield registry, plugin

def test_user_creation(registry_with_testing):
    registry, plugin = registry_with_testing
    plugin.expect('create', args=('john@test.com',))
    registry.api('create')('john@test.com')
    plugin.verify()
```

Implementation Notes
Core Plugin Structure
```python
class TestingPlugin:
    def __init__(self, mode='validate'):
        self.mode = mode  # 'validate', 'spy', 'mock'
        self.expectations = {}
        self.calls = []
        self.mocks = {}

    def expect(self, method_name, **conditions):
        """Set expectations for a method."""
        self.expectations[method_name] = conditions

    def mock(self, method_name, returns=None, raises=None):
        """Mock a method's return value."""
        self.mocks[method_name] = {'returns': returns, 'raises': raises}

    def __call__(self, handler_name, args, kwargs, result):
        """Called by SmartSwitch on each method invocation."""
        # Record the call
        call_info = {
            'handler': handler_name,
            'args': args,
            'kwargs': kwargs,
            'result': result
        }
        self.calls.append(call_info)

        # Mock mode: override the result
        if handler_name in self.mocks:
            mock = self.mocks[handler_name]
            # Only raise when an exception was actually configured;
            # mock() always stores the 'raises' key, so a key check is not enough
            if mock['raises'] is not None:
                raise mock['raises']
            return mock['returns']

        # Validate mode: check expectations
        if self.mode == 'validate' and handler_name in self.expectations:
            self._validate_call(handler_name, args, kwargs, result)

        return result

    def _validate_call(self, handler, args, kwargs, result):
        """Validate a call against the expectations."""
        exp = self.expectations[handler]
        if 'args' in exp:
            assert args == exp['args'], \
                f"{handler}: expected args {exp['args']}, got {args}"
        if 'args_matcher' in exp:
            assert exp['args_matcher'](args), \
                f"{handler}: args matcher failed for {args}"
        if 'kwargs' in exp:
            assert kwargs == exp['kwargs'], \
                f"{handler}: expected kwargs {exp['kwargs']}, got {kwargs}"
        if 'returns' in exp:
            assert result == exp['returns'], \
                f"{handler}: expected {exp['returns']}, got {result}"
        if 'returns_matcher' in exp:
            assert exp['returns_matcher'](result), \
                f"{handler}: return matcher failed for {result}"

    def verify(self):
        """Verify all expectations, including call-count bounds.

        Referenced in the examples above; combines the presence check
        with any min_calls/max_calls conditions set via expect().
        """
        self.verify_all_called()
        for name, exp in self.expectations.items():
            self.verify_call_count(name,
                                   min_calls=exp.get('min_calls'),
                                   max_calls=exp.get('max_calls'))

    def verify_all_called(self):
        """Verify all expected methods were called."""
        called = {c['handler'] for c in self.calls}
        expected = set(self.expectations.keys())
        not_called = expected - called
        if not_called:
            raise AssertionError(f"Expected methods not called: {not_called}")

    def verify_call_count(self, method_name, min_calls=None, max_calls=None):
        """Verify the call count for a method."""
        count = sum(1 for c in self.calls if c['handler'] == method_name)
        if min_calls is not None and count < min_calls:
            raise AssertionError(
                f"{method_name} called {count} times, expected >= {min_calls}"
            )
        if max_calls is not None and count > max_calls:
            raise AssertionError(
                f"{method_name} called {count} times, expected <= {max_calls}"
            )

    def reset(self):
        """Clear all calls, expectations, and mocks."""
        self.calls.clear()
        self.expectations.clear()
        self.mocks.clear()
```

Benefits
- Less Boilerplate: Write less test code
- More Readable: Tests become declarative
- Better Errors: Clear assertion messages
- Spy Pattern: Built-in call recording
- Mock Support: Easy to mock dependencies
- Performance Testing: Built-in timing validation
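The performance assertion (`max_time_ms`) implies the plugin can observe call duration, which the draft class above does not yet do. A minimal sketch of the check itself, as a hypothetical standalone helper (`check_max_time` is not part of any existing SmartSwitch API; in the real plugin this logic would sit inside SmartSwitch's call path, around the handler invocation):

```python
import time

def check_max_time(func, args=(), kwargs=None, max_time_ms=None):
    """Run func and assert it finished within max_time_ms.

    Hypothetical helper: illustrates the timing check only; the plugin
    would need SmartSwitch to surface the elapsed time per invocation.
    """
    kwargs = kwargs or {}
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    if max_time_ms is not None and elapsed_ms > max_time_ms:
        raise AssertionError(
            f"{func.__name__} took {elapsed_ms:.1f} ms, "
            f"expected <= {max_time_ms} ms"
        )
    return result
```

Because the check wraps the invocation, it needs a before/after hook rather than the single post-call `__call__` shown above; that is one design point the implementation would have to settle.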
Package Structure
```
smartswitch/
├── __init__.py
├── switcher.py
└── testing/
    ├── __init__.py
    ├── plugin.py     # TestingPlugin
    ├── matchers.py   # Argument matchers (any(), contains(), etc.)
    └── fixtures.py   # Pytest fixtures
```
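As a sketch of what `matchers.py` might export: small factories returning predicates that plug into `args_matcher`/`returns_matcher`. The names `any_`, `contains`, and `instance_of` are illustrative, not a settled API (`any_` takes a trailing underscore to avoid shadowing the built-in `any`):

```python
def any_():
    """Matcher that accepts any value."""
    return lambda value: True

def contains(item):
    """Matcher that accepts any container holding `item`."""
    return lambda value: item in value

def instance_of(cls):
    """Matcher that accepts instances of `cls`."""
    return lambda value: isinstance(value, cls)

# Hypothetical usage with the expect() API proposed above:
#   test_plugin.expect('create_user',
#       args_matcher=lambda args: contains('@')(args[0]))
```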
Related
This proposal was inspired by real-world usage in the genro-storage refactoring, where SmartSwitch logging was used to validate the architecture. A dedicated testing plugin would make that pattern even more powerful.
Questions
- Should this be part of core smartswitch or a separate package (smartswitch-testing)?
- Should we support async validation?
- Integration with other test frameworks beyond pytest?