
Book API Python Requests Pytest Automation

A robust, maintainable, and DRY API test automation framework for a Book API using Python, pytest, and requests.

Status badges: Code Analysis · Test Execution & Publish Report

🔗 GitHub Page with Allure report

Features

  • Custom APIClient: Centralized HTTP client (helpers/api_client.py; sketched after this feature list) with:
    • Built-in retry logic for transient errors (HTTP 429, 5xx) using urllib3's Retry.
    • Logging of all requests, responses, and retry attempts (INFO/DEBUG level).
    • Automatic use of Retry-After header for backoff.
    • URL building and session management.
  • Reusable Validators: Assertion helpers (helpers/validator.py) for:
    • Status code validation with logging on pass/fail.
    • Error message and book response validation.
    • All assertions log both pass and fail for traceability.
  • Pytest Fixtures: Clean setup/teardown in conftest.py and BaseTest.py:
    • Session/module/class-scoped fixtures for API client and test data.
    • DRY test setup using base classes and autouse fixtures.
  • Parameterization: Use of @pytest.mark.parametrize for edge cases and error scenarios.
  • Parallel Test Execution: Supports pytest-xdist for parallel test runs by file/module:
    • dist=loadfile ensures all tests in a file run sequentially while different files run in parallel.
    • Test data is made unique per worker to avoid collisions.
  • Comprehensive Logging:
    • All requests, responses, assertions, and retries are logged.
    • Logging configuration in pytest.ini supports both CLI and file output.
    • DEBUG/INFO logs available in log file; INFO logs in CLI (with xdist, some logs may only appear in file).
  • Test Reports: Generates HTML reports with pytest-html for easy review.
  • Dependency Management: Handles test dependencies and ordering with pytest-dependency.
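
A minimal sketch of the client described in the first bullet, assuming a requests.Session subclass; the class name, defaults, and log format are illustrative rather than the exact contents of helpers/api_client.py:

import logging

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

logger = logging.getLogger(__name__)


class APIClient(requests.Session):
    """Illustrative client: retries transient errors and logs all traffic."""

    def __init__(self, base_url="http://localhost:3000"):
        super().__init__()
        self.base_url = base_url
        # Retry 429/5xx responses with exponential backoff; urllib3 honours the
        # Retry-After header for 429/503 by default (respect_retry_after_header=True).
        retry = Retry(
            total=3,
            backoff_factor=1,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=None,  # retry every HTTP method, including POST
        )
        adapter = HTTPAdapter(max_retries=retry)
        self.mount("http://", adapter)
        self.mount("https://", adapter)

    def build_url(self, *parts):
        """Join path segments onto the base URL."""
        return "/".join([self.base_url.rstrip("/"), *[str(p).strip("/") for p in parts]])

    def request(self, method, url, **kwargs):
        logger.info("%s %s", method, url)
        response = super().request(method, url, **kwargs)
        logger.info("<- %s %s", response.status_code, response.reason)
        return response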

Project Structure

    book-api-pytest-automation/
    ├── helpers/
    │   ├── api_client.py               # Custom API client with retry and logging
    │   └── validator.py                # Assertion and validation helpers
    │
    ├── tests/
    │   ├── BaseTest.py                 # Base test class with client fixture
    │   ├── test_create_book.py
    │   ├── test_update_book.py
    │   ├── test_delete_book.py
    │   └── test_get_book.py
    │  
    ├── .pylintrc                       # pylint overrides
    ├── conftest.py                     # Global fixtures and setup
    ├── pytest.ini                      # Pytest configuration
    └── requirements.txt                # Python dependencies

Getting Started

1. Install Dependencies

pip install -r requirements.txt

2. Start the Book API Server

Ensure your Book API (Node.js or other) is running at http://localhost:3000.

3. Run Tests

pytest

4. Generate Allure Report

To view a live Allure report:

allure serve test-results/allure-results

To generate a static HTML Allure report:

allure generate test-results/allure-results --clean -o test-results/allure-report

5. Run Tests in Parallel

pytest -n auto --dist loadfile  # or -n 4 for 4 workers

Key Files

  • helpers/api_client.py: Custom requests.Session subclass with retry, logging, and URL helpers.
  • helpers/validator.py: Assertion helpers for status codes, error messages, and book validation (sketched below).
  • conftest.py: Global fixtures for API client and test setup.
  • tests/BaseTest.py: Base class for all test classes, injects API client.
  • tests/test_*.py: Test modules for create, update, delete, and get book scenarios.
  • pytest.ini: Pytest configuration (logging, parallelism, test discovery, reporting).
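
For orientation, the validator helpers can be imagined roughly as follows; the function names match the example test further down, but the message wording and logging format are assumptions, not the actual helpers/validator.py:

import logging

logger = logging.getLogger(__name__)


def validate_status_code(response, expected):
    """Assert the response status code, logging both pass and fail."""
    actual = response.status_code
    if actual != expected:
        logger.error("Status code mismatch: expected %s, got %s", expected, actual)
        raise AssertionError(f"Expected status {expected}, got {actual}")
    logger.info("Status code %s matches expected %s", actual, expected)


def validate_response_book(response, expected_book):
    """Assert the returned book contains the expected field values."""
    body = response.json()
    for key, value in expected_book.items():
        if body.get(key) != value:
            logger.error("Book field %r mismatch: expected %r, got %r", key, value, body.get(key))
            raise AssertionError(f"Field {key!r}: expected {value!r}, got {body.get(key)!r}")
    logger.info("Book response matches expected fields: %s", list(expected_book))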

Best Practices

  • DRY Principle: Use helpers and fixtures to avoid code duplication.
  • Parameterize: Use @pytest.mark.parametrize for edge cases and error scenarios (example after this list).
  • Logging: All requests, responses, and assertions are logged for traceability.
  • Retry Logic: Built-in retry for transient HTTP errors (429, 5xx) with logging.
  • Parallel Safety: Test data is made unique per worker for parallel runs.
  • Test Dependencies: Use pytest-dependency for ordered/conditional tests.
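
A hedged example of this parametrization style, written as a method on a BaseTest subclass like the TestCreateBook example in the next section; the class name, payloads, expected messages, and the validate_error_message helper name are illustrative assumptions:

import pytest

from helpers import validator            # assumed import path
from tests.BaseTest import BaseTest      # assumed import path


class TestCreateBookValidation(BaseTest):   # hypothetical test class
    HEADERS = {"authorization": "Bearer user-token"}

    @pytest.mark.negative
    @pytest.mark.parametrize(
        "payload, expected_error",
        [
            ({"title": "", "author": "Author"}, "Title is required"),
            ({"title": "Only Title"}, "Author is required"),
            ({}, "Title is required"),
        ],
    )
    def test_should_reject_invalid_book_payload(self, payload, expected_error):
        response = self.client.post(
            self.client.build_url(), json=payload, headers=self.HEADERS)
        validator.validate_status_code(response, 400)
        # validate_error_message is an assumed helper name from helpers/validator.py
        validator.validate_error_message(response, expected_error)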

Example Test

from helpers import validator          # assumed import path
from tests.BaseTest import BaseTest    # assumed import path


class TestCreateBook(BaseTest):
    HEADERS = {"authorization": "Bearer user-token"}

    def test_should_create_book_when_title_and_author_are_valid(self):
        book = {"title": "New Book", "author": "Author"}
        response = self.client.post(self.client.build_url(), json=book, headers=self.HEADERS)
        validator.validate_status_code(response, 201)
        validator.validate_response_book(response, book)
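
The self.client attribute used above is injected by the base class; a minimal sketch of how conftest.py and tests/BaseTest.py might wire this together (fixture and class names are assumptions, not the repository's exact code):

import pytest

from helpers.api_client import APIClient  # assumed import path


# conftest.py (sketch): one API client shared across the test session
@pytest.fixture(scope="session")
def api_client():
    client = APIClient(base_url="http://localhost:3000")
    yield client
    client.close()


# tests/BaseTest.py (sketch): autouse fixture exposes the client on each test
class BaseTest:
    @pytest.fixture(autouse=True)
    def _inject_client(self, api_client):
        self.client = api_client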

Parallel Test Execution

This framework supports parallel test execution using pytest-xdist for faster test runs:

Basic Parallel Execution

# Run tests in parallel with verbose output
pytest -n auto --dist loadfile -v

Parallel Execution Modes

  • --dist loadfile: All tests in a file run sequentially while files run in parallel (recommended)
  • --dist loadscope: Tests grouped by scope (class/module) run together
  • --dist worksteal: Dynamic work distribution (fastest but less predictable)

Parallel Safety Features

  • Unique Test Data: Test data is made unique per worker to avoid collisions (see the fixture sketch below)
  • Worker-Safe Fixtures: API client and test setup fixtures are worker-safe
  • Isolated Test Runs: Each worker operates independently with separate test data
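
For the unique-test-data point, one common approach is to fold pytest-xdist's built-in worker_id fixture into generated payloads; the fixture below is an illustrative sketch, not the framework's actual implementation:

import pytest


@pytest.fixture
def unique_book(worker_id):
    # worker_id is provided by pytest-xdist ("gw0", "gw1", ...; it is "master"
    # when running without -n), so generated titles never collide across workers.
    return {"title": f"New Book [{worker_id}]", "author": "Author"}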

Pytest Markers

This project uses pytest markers to organize and categorize tests for flexible execution:

Available Markers

  • smoke: Quick smoke tests for core functionality
  • regression: Comprehensive regression tests for business logic
  • negative: Error handling and negative scenario tests
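
pytest warns about unknown marks unless custom markers are registered, typically in pytest.ini. For illustration, the equivalent registration can be sketched as a conftest.py hook (the marker descriptions here are illustrative):

# conftest.py (sketch): register the custom markers so pytest (and
# --strict-markers) does not flag them as unknown.
def pytest_configure(config):
    config.addinivalue_line("markers", "smoke: quick smoke tests for core functionality")
    config.addinivalue_line("markers", "regression: comprehensive regression tests for business logic")
    config.addinivalue_line("markers", "negative: error handling and negative scenario tests")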

Usage Examples

Run only regression tests:

pytest -m regression

Run both smoke and regression tests:

pytest -m "smoke or regression"

Run regression tests but exclude negative scenarios:

pytest -m "regression and not negative"

Run smoke tests in parallel:

pytest -m smoke -n auto --dist loadfile

Marker Usage in Tests

Each test function is decorated with appropriate markers:

@pytest.mark.smoke
@pytest.mark.regression
@allure.title("Should create book successfully")
def test_should_create_book_successfully(self):
    ...  # Test implementation

Test Organization by Marker

  • Smoke Tests: Core functionality validation (create, get, update, delete basic scenarios)
  • Regression Tests: Complex business scenarios, pagination, search functionality, edge cases
  • Negative Tests: Error handling, invalid authentication, resource not found scenarios

This marker system enables flexible test execution strategies, from quick smoke tests during development to comprehensive regression testing in CI/CD pipelines.

Test Case Mapping & Result Collection

  • Test Case Mapping: Each test function is mapped to a unique test case ID using the test-plan-suite.json file. This enables traceability between automated tests and business requirements.
  • Result Collection: During test execution, results are collected for each test case, including outcome, duration (in milliseconds), and iteration details (for parameterized tests). Results are aggregated and written to test-results/test-results-report.json after the run, supporting both serial and parallel (pytest-xdist) execution; a sketch of this collection follows.
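
A hedged sketch of how such collection could be wired with standard pytest hooks; the hook bodies below are illustrative and not the project's actual conftest.py (which also merges per-worker results under xdist):

# conftest.py (sketch): collect per-test outcomes and dump them as JSON
import json
from pathlib import Path

_results = []


def pytest_runtest_logreport(report):
    # One entry per test "call" phase; setup/teardown failures are ignored
    # here for brevity.
    if report.when == "call":
        _results.append({
            "test": report.nodeid,
            "outcome": report.outcome,
            "duration_ms": round(report.duration * 1000, 2),
        })


def pytest_sessionfinish(session, exitstatus):
    # Under pytest-xdist each worker runs this hook; the real framework also
    # merges per-worker results before writing the final report.
    out_dir = Path("test-results")
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "test-results-report.json").write_text(
        json.dumps(_results, indent=2), encoding="utf-8")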

Troubleshooting

  • Connection errors: Ensure the Book API server is running at the correct URL.
  • Parallel test issues: Make sure test data is unique per worker or run tests serially.
  • HTML report not generated: Ensure pytest-html is installed and pass the --html option (for example, pytest --html=test-results/report.html --self-contained-html).

CI/CD & GitHub Actions

This project uses GitHub Actions for automated workflows:

  • Code Analysis: Runs linting and static analysis on every push and pull request.
  • Test Execution: Runs all tests and generates reports for every push and pull request, including parallel execution and publishing Allure/HTML reports.
  • Badges: See the top of this README for live status of these workflows.

Workflow files are located in .github/workflows/.

Running the Test Execution Workflow (GitHub Actions)

How to Trigger

  • The workflow runs automatically on every push and pull request to the repository.
  • You can also trigger it manually from the GitHub Actions tab by selecting the Test Execution & Publish Report workflow and clicking 'Run workflow'.

What the Workflow Does

  • Installs Python and all dependencies from requirements.txt.
  • Runs all tests using pytest (including parallel execution with xdist).
  • Publishes test results as HTML and Allure reports (if configured).
  • Uploads the reports as workflow artifacts for download and review.
  • Updates status badges at the top of the README to reflect the latest run.

New: Post Results to Azure

  • After test execution, the workflow now includes a step to post the test results to Azure (e.g., Azure DevOps, Azure Storage, or a custom API endpoint).
  • This enables centralized reporting, dashboard integration, or further automation in your Azure environment.
  • The step uses secure credentials and API endpoints configured in your repository secrets.
  • You can customize the target Azure service and payload format as needed for your organization.

Viewing Results

  • Go to the 'Actions' tab in your GitHub repository.
  • Select the latest run of Test Execution & Publish Report.
  • Download the HTML/Allure report artifacts for detailed results.