A robust, maintainable, and DRY API test automation framework for a Book API using Python, pytest, and requests.
🔗 GitHub Page with Allure report
- Custom APIClient: Centralized HTTP client (`helpers/api_client.py`, sketched after this feature list) with:
  - Built-in retry logic for transient errors (HTTP 429, 5xx) using urllib3's `Retry`.
  - Logging of all requests, responses, and retry attempts (INFO/DEBUG level).
  - Automatic use of the `Retry-After` header for backoff.
  - URL building and session management.
- Reusable Validators: Assertion helpers (`helpers/validator.py`) for:
  - Status code validation with logging on pass/fail.
  - Error message and book response validation.
  - All assertions log both pass and fail for traceability.
- Pytest Fixtures: Clean setup/teardown in `conftest.py` and `BaseTest.py`:
  - Session/module/class-scoped fixtures for API client and test data.
  - DRY test setup using base classes and autouse fixtures.
- Parameterization: Use of `@pytest.mark.parametrize` for edge cases and error scenarios.
- Parallel Test Execution: Supports pytest-xdist for parallel test runs by file/module:
  - `dist=loadfile` ensures all tests in a file run sequentially while files run in parallel.
  - Test data is made unique per worker to avoid collisions.
- Comprehensive Logging:
  - All requests, responses, assertions, and retries are logged.
  - Logging configuration in `pytest.ini` supports both CLI and file output.
  - DEBUG/INFO logs are written to the log file; INFO logs appear in the CLI (with xdist, some logs may only appear in the file).
- Test Reports: Generates HTML reports with pytest-html for easy review.
- Dependency Management: Handles test dependencies and ordering with pytest-dependency.
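As a rough illustration of the Custom APIClient idea, here is a minimal sketch of a `requests.Session` subclass with urllib3 retries and request/response logging. The class name, constructor signature, default base URL, and the default `/books` path are assumptions for illustration; the actual `helpers/api_client.py` may differ.

```python
# helpers/api_client.py (sketch; names and signatures are assumptions)
import logging
from urllib.parse import urljoin

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

logger = logging.getLogger(__name__)


class APIClient(requests.Session):
    def __init__(self, base_url="http://localhost:3000"):
        super().__init__()
        self.base_url = base_url
        retry = Retry(
            total=3,
            status_forcelist=[429, 500, 502, 503, 504],
            backoff_factor=1,
            respect_retry_after_header=True,  # honor Retry-After on 429/503
            allowed_methods=None,             # retry on all HTTP methods
        )
        adapter = HTTPAdapter(max_retries=retry)
        self.mount("http://", adapter)
        self.mount("https://", adapter)

    def build_url(self, path="/books"):
        # Join the base URL with a resource path, e.g. /books or /books/1.
        return urljoin(self.base_url, path)

    def request(self, method, url, **kwargs):
        # Log every outgoing request and the resulting status.
        logger.info("%s %s", method, url)
        response = super().request(method, url, **kwargs)
        logger.info("-> %s %s", response.status_code, response.reason)
        return response
```

With this setup, retries happen transparently inside the mounted adapter, so test code only sees the final response.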
```
book-api-pytest-automation/
├── helpers/
│   ├── api_client.py        # Custom API client with retry and logging
│   └── validator.py         # Assertion and validation helpers
│
├── tests/
│   ├── BaseTest.py          # Base test class with client fixture
│   ├── test_create_book.py
│   ├── test_update_book.py
│   ├── test_delete_book.py
│   └── test_get_book.py
│
├── .pylintrc                # pylint overrides
├── conftest.py              # Global fixtures and setup
├── pytest.ini               # Pytest configuration
└── requirements.txt         # Python dependencies
```
pip install -r requirements.txt
Ensure your Book API (Node.js or other) is running at http://localhost:3000.
pytest
To view a live Allure report:
allure serve test-results/allure-results
To generate a static HTML Allure report:
allure generate test-results/allure-results --clean -o test-results/allure-report
pytest -n auto --dist loadfile # or -n 4 for 4 workers
- helpers/api_client.py: Custom requests.Session subclass with retry, logging, and URL helpers.
- helpers/validator.py: Assertion helpers for status codes, error messages, and book validation.
- conftest.py: Global fixtures for API client and test setup.
- tests/BaseTest.py: Base class for all test classes, injects the API client (see the fixture sketch below).
- tests/test_*.py: Test modules for create, update, delete, and get book scenarios.
- pytest.ini: Pytest configuration (logging, parallelism, test discovery, reporting).
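A minimal sketch of how this fixture wiring might look is shown below. The fixture name, session scope, and the `APIClient` constructor arguments are assumptions inferred from the descriptions above, not the project's exact code.

```python
# conftest.py (sketch)
import pytest

from helpers.api_client import APIClient  # assumed import path


@pytest.fixture(scope="session")
def api_client():
    # One shared client (and underlying HTTP session) for the whole test session.
    client = APIClient(base_url="http://localhost:3000")
    yield client
    client.close()


# tests/BaseTest.py (sketch)
class BaseTest:
    @pytest.fixture(autouse=True)
    def _setup(self, api_client):
        # Expose the shared client as self.client in every subclassing test class.
        self.client = api_client
```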
- DRY Principle: Use helpers and fixtures to avoid code duplication.
- Parameterize: Use `@pytest.mark.parametrize` for edge cases and error scenarios (see the sketch after this list).
- Logging: All requests, responses, and assertions are logged for traceability.
- Retry Logic: Built-in retry for transient HTTP errors (429, 5xx) with logging.
- Parallel Safety: Test data is made unique per worker for parallel runs.
- Test Dependencies: Use `pytest-dependency` for ordered/conditional tests.
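For example, a negative-scenario test can be parameterized over several invalid payloads. The payloads, expected status codes, and import paths below are hypothetical; adapt them to the Book API's actual validation rules.

```python
import pytest

from helpers import validator          # assumed import path
from tests.BaseTest import BaseTest    # assumed import path


class TestCreateBookValidation(BaseTest):
    HEADERS = {"authorization": "Bearer user-token"}

    @pytest.mark.negative
    @pytest.mark.parametrize(
        "payload, expected_status",
        [
            ({"author": "Author"}, 400),   # missing title
            ({"title": "New Book"}, 400),  # missing author
            ({}, 400),                     # empty body
        ],
    )
    def test_should_reject_invalid_book(self, payload, expected_status):
        response = self.client.post(self.client.build_url(), json=payload, headers=self.HEADERS)
        validator.validate_status_code(response, expected_status)
```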
Example test class using the shared client and validator helpers:

```python
class TestCreateBook(BaseTest):
    HEADERS = {"authorization": "Bearer user-token"}

    def test_should_create_book_when_title_and_author_are_valid(self):
        book = {"title": "New Book", "author": "Author"}
        response = self.client.post(self.client.build_url(), json=book, headers=self.HEADERS)
        validator.validate_status_code(response, 201)
        validator.validate_response_book(response, book)
```
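The validator helpers used above could look roughly like this. The signatures are inferred from the example test; the real `helpers/validator.py` may structure its pass/fail logging differently.

```python
# helpers/validator.py (sketch)
import logging

logger = logging.getLogger(__name__)


def validate_status_code(response, expected):
    actual = response.status_code
    try:
        assert actual == expected, f"expected HTTP {expected}, got {actual}"
        logger.info("PASS: status code is %s", actual)
    except AssertionError:
        logger.error("FAIL: expected HTTP %s, got %s (body: %s)", expected, actual, response.text)
        raise


def validate_response_book(response, expected_book):
    body = response.json()
    try:
        for key, value in expected_book.items():
            assert body.get(key) == value, f"field '{key}': expected {value!r}, got {body.get(key)!r}"
        logger.info("PASS: response book matches %s", expected_book)
    except AssertionError:
        logger.error("FAIL: response book %s does not match %s", body, expected_book)
        raise
```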
This framework supports parallel test execution using pytest-xdist for faster test runs:
Run tests in parallel with verbose output:
pytest -n auto --dist loadfile -v
- `--dist loadfile`: All tests in a file run sequentially; files run in parallel (recommended).
- `--dist loadscope`: Tests grouped by scope (class/module) run together.
- `--dist worksteal`: Dynamic work distribution (fastest but less predictable).
- Unique Test Data: Test data is made unique per worker to avoid collisions (see the fixture sketch below).
- Worker-Safe Fixtures: API client and test setup fixtures are worker-safe.
- Isolated Test Runs: Each worker operates independently with separate test data.
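One common way to keep test data unique per worker is to fold the `worker_id` fixture provided by pytest-xdist into generated payloads. The fixture below is a hedged sketch, not the project's exact implementation.

```python
import pytest


@pytest.fixture
def unique_book(worker_id):
    # worker_id comes from pytest-xdist: "gw0", "gw1", ... ("master" when not running in parallel).
    return {
        "title": f"Parallel Book [{worker_id}]",
        "author": f"Author {worker_id}",
    }
```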
This project uses pytest markers to organize and categorize tests for flexible execution (a registration sketch follows the list below):
- smoke: Quick smoke tests for core functionality
- regression: Comprehensive regression tests for business logic
- negative: Error handling and negative scenario tests
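If these markers are not already declared in `pytest.ini`, they can also be registered programmatically in `conftest.py`. The snippet below is just one hedged way to do it; the marker descriptions are taken from the list above.

```python
# conftest.py (sketch)
def pytest_configure(config):
    # Register custom markers so pytest does not warn (or fail under --strict-markers).
    for marker in (
        "smoke: quick smoke tests for core functionality",
        "regression: comprehensive regression tests for business logic",
        "negative: error handling and negative scenario tests",
    ):
        config.addinivalue_line("markers", marker)
```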
Run only regression tests:
pytest -m regression
Run both smoke and regression tests:
pytest -m "smoke or regression"
Run regression tests but exclude negative scenarios:
pytest -m "regression and not negative"
Run smoke tests in parallel:
pytest -m smoke -n auto --dist loadfile
Each test function is decorated with appropriate markers:
```python
@pytest.mark.smoke
@pytest.mark.regression
@allure.title("Should create book successfully")
def test_should_create_book_successfully(self):
    # Test implementation
    ...
```
- Smoke Tests: Core functionality validation (create, get, update, delete basic scenarios)
- Regression Tests: Complex business scenarios, pagination, search functionality, edge cases
- Negative Tests: Error handling, invalid authentication, resource not found scenarios
This marker system enables flexible test execution strategies, from quick smoke tests during development to comprehensive regression testing in CI/CD pipelines.
- Test Case Mapping: Each test function is mapped to a unique test case ID using the `test-plan-suite.json` file. This enables traceability between automated tests and business requirements.
- Result Collection: During test execution, results are collected for each test case, including outcome, duration (in milliseconds), and iteration details (for parameterized tests). Results are aggregated and written to `test-results/test-results-report.json` after the test run, supporting both serial and parallel execution (pytest-xdist). A minimal collection sketch follows below.
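At a high level, the collection can be built on standard pytest hooks, as in the sketch below. The real plugin also maps node IDs to test case IDs from `test-plan-suite.json` and merges per-worker results under pytest-xdist, which this minimal version omits.

```python
# conftest.py (sketch)
import json
from pathlib import Path

_results = []


def pytest_runtest_logreport(report):
    # Record only the "call" phase, i.e. the test body itself (not setup/teardown).
    if report.when == "call":
        _results.append({
            "nodeid": report.nodeid,
            "outcome": report.outcome,
            "duration_ms": round(report.duration * 1000, 2),
        })


def pytest_sessionfinish(session, exitstatus):
    # Write the aggregated results once the run is complete.
    out = Path("test-results") / "test-results-report.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(_results, indent=2))
```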
- Connection errors: Ensure the Book API server is running at the correct URL.
- Parallel test issues: Make sure test data is unique per worker or run tests serially.
- HTML report not generated: Ensure `pytest-html` is installed and use the `--html` option.
This project uses GitHub Actions for automated workflows:
- Code Analysis: Runs linting and static analysis on every push and pull request.
- Test Execution: Runs all tests and generates reports for every push and pull request, including parallel execution and publishing Allure/HTML reports.
- Badges: See the top of this README for live status of these workflows.
Workflow files are located in `.github/workflows/`.
- The workflow runs automatically on every push and pull request to the repository.
- You can also trigger it manually from the GitHub Actions tab by selecting the `Test Execution & Publish Report` workflow and clicking 'Run workflow'.
- Installs Python and all dependencies from `requirements.txt`.
- Runs all tests using pytest (including parallel execution with xdist).
- Publishes test results as HTML and Allure reports (if configured).
- Uploads the reports as workflow artifacts for download and review.
- Updates status badges at the top of the README to reflect the latest run.
- After test execution, the workflow now includes a step to post the test results to Azure (e.g., Azure DevOps, Azure Storage, or a custom API endpoint).
- This enables centralized reporting, dashboard integration, or further automation in your Azure environment.
- The step uses secure credentials and API endpoints configured in your repository secrets.
- You can customize the target Azure service and payload format as needed for your organization (a hedged posting sketch appears below).
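As an illustration only, the posting step could call a small helper like the one below. The script name, environment variable names, and endpoint shape are assumptions, since the actual Azure target depends on your configuration and repository secrets.

```python
# scripts/post_results_to_azure.py (hypothetical helper)
import json
import os

import requests


def post_results(report_path="test-results/test-results-report.json"):
    url = os.environ["AZURE_RESULTS_URL"]      # assumed secret: target endpoint
    token = os.environ["AZURE_RESULTS_TOKEN"]  # assumed secret: access token
    with open(report_path, encoding="utf-8") as fh:
        payload = json.load(fh)
    response = requests.post(
        url,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()


if __name__ == "__main__":
    post_results()
```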
- Go to the 'Actions' tab in your GitHub repository.
- Select the latest run of `Test Execution & Publish Report`.
- Download the HTML/Allure report artifacts for detailed results.