
Conversation

@zMynxx
Contributor

@zMynxx zMynxx commented Oct 17, 2025

User description

  • feat: pydantic added.
  • feat: upgraded coverage.
  • feat: mypy fixes and typing.
  • feat: mypy in checks.

PR Type

Enhancement, Tests


Description

  • Integrated Pydantic for runtime input validation with CalculationRequest and CalculationResult models

  • Updated calculate() function to return CalculationResult object instead of tuple

  • Added comprehensive test coverage for edge cases, error handling, and model validation

  • Enabled mypy type checking in CI/CD pipeline with proper configuration

  • Fixed type hints throughout codebase for better static analysis
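The first two bullets can be sketched with Pydantic v2. The model names and the total_cost/calculation_steps fields appear in the PR's diff; the request fields shown here are the subset visible in calculate()'s signature (with its defaults), not the PR's full 95-line models.py:

```python
from typing import Literal

from pydantic import BaseModel, Field


class CalculationRequest(BaseModel):
    """Validated input for calculate(); field subset taken from its signature."""
    region: str = "us-east-1"
    architecture: Literal["x86", "arm64"] = "x86"
    number_of_requests: int = Field(default=1_000_000, gt=0)
    memory: float = Field(default=128, gt=0)
    memory_unit: Literal["MB", "GB"] = "MB"


class CalculationResult(BaseModel):
    """Structured return value replacing the old (total, steps) tuple."""
    total_cost: float
    calculation_steps: list[str]


# Callers now access named attributes instead of unpacking a tuple.
result = CalculationResult(total_cost=12.5, calculation_steps=["step 1"])
print(result.total_cost)  # 12.5
```

Constructing CalculationRequest raises pydantic.ValidationError for out-of-range or wrongly typed inputs, which is what gives calculate() its runtime validation.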


Diagram Walkthrough

flowchart LR
  A["Input Parameters"] -->|"Pydantic Validation"| B["CalculationRequest Model"]
  B -->|"Valid Data"| C["calculate() Function"]
  C -->|"Computation"| D["CalculationResult Model"]
  D -->|"Return Object"| E["Client Code"]
  F["mypy Type Checker"] -.->|"Static Analysis"| C

File Walkthrough

Relevant files

Enhancement (5 files)
  models.py: New Pydantic models for validation and typing (+95/-0)
  calculator.py: Integrate Pydantic validation and update return types (+36/-13)
  pricing_scraper.py: Add type hints to dictionary declaration (+1/-1)
  aws_lambda.py: Update Lambda handler to use CalculationResult (+3/-3)
  cli.py: Update CLI to use CalculationResult and fix request unit (+4/-4)

Tests (6 files)
  test_calculator.py: Update tests to use CalculationResult object (+6/-6)
  test_calculator_coverage.py: New comprehensive coverage tests for calculator functions (+166/-0)
  test_models.py: New tests for Pydantic model validation (+241/-0)
  test_pytest_generate_tests_sample_code.py: Update dynamic tests to use CalculationResult (+4/-4)
  test_edge_cases.py: New edge case and boundary condition tests (+280/-0)
  test_error_handling.py: New error handling tests for Lambda and CLI (+305/-0)

Configuration changes (4 files)
  mypy.ini: New mypy configuration for type checking (+15/-0)
  code-checks.yaml: Enable mypy type checking in CI/CD pipeline (+5/-4)
  poetry.just: Uncomment and enable mypy type checking command (+2/-3)
  mypy.ini: Root-level mypy configuration file (+15/-0)

Dependencies (2 files)
  pyproject.toml: Add Pydantic dependency and update mypy version (+3/-1)
  pyproject.toml: Add Pydantic and update mypy dependencies (+3/-1)
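The two mypy.ini files listed above are 15 lines each; based on the Python 3.13 setting mentioned in Copilot's file summary, they plausibly look something like the sketch below. The specific flags and the per-module override are assumptions for illustration, not the PR's actual contents:

```ini
[mypy]
python_version = 3.13
strict = True
warn_return_any = True
warn_unused_ignores = True

# Hypothetical override: silence a third-party package that ships no type stubs.
[mypy-some_untyped_pkg.*]
ignore_missing_imports = True
```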

@codiumai-pr-agent-free
Contributor

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Semantic Commit Message Check

Failed stage: Check PR for Semantic Commit Message [❌]

Failed test name: amannn/action-semantic-pull-request

Failure summary:

The action failed because the pull request title "feat/pydantic" does not follow the conventional
commits format. The semantic-pull-request action expects the PR title to have a valid release type
prefix (like "feat:", "fix:", etc.) according to the Conventional Commits specification, but the
title uses a slash instead of a colon after "feat".
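The fix is mechanical (retitle the PR), but the rule being enforced can be sketched as a minimal regex check, roughly mirroring what the action validates. The real action implements the full Conventional Commits grammar; the type list here is the conventional default set, an assumption:

```python
import re

# Default Conventional Commits release types (assumed set, not the action's config).
TYPES = ("build", "chore", "ci", "docs", "feat", "fix",
         "perf", "refactor", "revert", "style", "test")

# type, optional (scope), optional ! for breaking change, then ": description".
PATTERN = re.compile(rf"^({'|'.join(TYPES)})(\([\w.-]+\))?!?: .+")


def is_semantic_title(title: str) -> bool:
    """Return True if the PR title has a valid release-type prefix."""
    return bool(PATTERN.match(title))


print(is_semantic_title("feat/pydantic"))   # False: slash instead of colon
print(is_semantic_title("feat: pydantic"))  # True: the corrected title
```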

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

100:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
101:  [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
102:  http.https://github.com/.extraheader
103:  [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
104:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
105:  ##[endgroup]
106:  ##[group]Run amannn/action-semantic-pull-request@48f256284bd46cdaab1048c3721360e808335d50
107:  with:
108:  requireScope: false
109:  validateSingleCommit: true
110:  ignoreLabels: release merge
111:  githubBaseUrl: https://api.github.com
112:  env:
113:  GITHUB_TOKEN: ***
114:  ##[endgroup]
115:  ##[error]No release type found in pull request title "feat/pydantic". Add a prefix to indicate what kind of release this pull request corresponds to. For reference, see https://www.conventionalcommits.org/
116:  

@codiumai-pr-agent-free
Contributor

codiumai-pr-agent-free bot commented Oct 17, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢 No security concerns identified. No security vulnerabilities were detected by AI analysis; human verification is advised for critical code.
Ticket Compliance
🎫 No ticket provided
- [ ] Create ticket/issue

Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided

Follow the guide to enable custom compliance check.

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@zMynxx zMynxx changed the title feat/pydantic feat: pydantic Oct 17, 2025
@github-actions
Contributor

github-actions bot commented Oct 17, 2025

☂️ Python Coverage

current status: ✅

Overall Coverage

Lines  Covered  Coverage  Threshold  Status
430    429      100%      0%         🟢

New Files

No new covered files...

Modified Files

No covered modified files...

updated for commit: 1107410 by action🐍

Contributor

Copilot AI left a comment


Pull Request Overview

This PR introduces Pydantic for runtime validation, upgrades test coverage, adds comprehensive mypy type checking, and refactors the calculate function to return a structured result object instead of a tuple.

  • Added Pydantic models (CalculationRequest and CalculationResult) for input validation and structured output
  • Expanded test coverage with new test files for error handling, edge cases, model validation, and calculator function coverage
  • Enabled mypy type checking in CI/CD pipeline and added type hints throughout the codebase

Reviewed Changes

Copilot reviewed 17 out of 19 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
tests/test_error_handling.py New comprehensive error handling tests for Lambda handler and CLI
tests/test_edge_cases.py New edge case tests covering boundary conditions, unit combinations, and extreme values
src/cli.py Fixed typo in request unit and updated to use CalculationResult object
src/aws_lambda.py Updated to use CalculationResult object instead of tuple unpacking
pyproject.toml Added pydantic dependency and types-requests for type checking
mypy.ini New mypy configuration file with Python 3.13 settings
justfiles/poetry.just Enabled mypy type checking command
aws-lambda-calculator/tests/test_pytest_generate_tests_sample_code.py Updated to use CalculationResult object
aws-lambda-calculator/tests/test_models.py New comprehensive tests for Pydantic models validation
aws-lambda-calculator/tests/test_calculator_coverage.py New tests covering calculator utility functions
aws-lambda-calculator/tests/test_calculator.py Updated to use CalculationResult object
aws-lambda-calculator/src/aws_lambda_calculator/pricing_scraper.py Added type hint for region_dict
aws-lambda-calculator/src/aws_lambda_calculator/models.py New Pydantic models for request validation and result structure
aws-lambda-calculator/src/aws_lambda_calculator/calculator.py Added type hints, Pydantic validation, and changed return type to CalculationResult
aws-lambda-calculator/pyproject.toml Added pydantic and types-requests dependencies, reordered mypy
aws-lambda-calculator/mypy.ini New mypy configuration file
.github/workflows/code-checks.yaml Enabled mypy type checking step in CI/CD pipeline


@github-actions
Contributor

github-actions bot commented Oct 17, 2025

Coverage report

Click to see where and how coverage changed

File  Statements  Missing  Coverage  Coverage (new stmts)  Lines missing
  aws-lambda-calculator/src/aws_lambda_calculator
  calculator.py
  models.py
  aws-lambda-calculator/tests
  test_calculator.py
  test_calculator_coverage.py
  test_models.py
  test_pytest_generate_tests_sample_code.py
Project Total  

This report was generated by python-coverage-comment-action

@codiumai-pr-agent-free
Contributor

codiumai-pr-agent-free bot commented Oct 17, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category  Suggestion  Impact
Possible issue
Eliminate global state for thread safety

To ensure thread safety, refactor the code to use a local steps variable within
the calculate function instead of a global one, and pass it as an argument to
helper functions.

aws-lambda-calculator/src/aws_lambda_calculator/calculator.py [11-388]

 logger = logging.getLogger(__name__)
-steps = []
 
 
 def open_json_file(region: str) -> dict[str, Any]:
     """Open a JSON file containing cost factors for a specific region."""
     base_dir = os.path.dirname(__file__)
     file_path = os.path.join(base_dir, "jsons", f"{region}.json")
     if not os.path.exists(file_path):
         logger.error(f"Cost factors file for region '{region}' not found.")
         return {}
 
     with open(file_path, "r") as file:
         data = json.load(file)
         logger.debug(f"Loaded cost factors for region '{region}': {data}")
         return data
 
 
-def unit_conversion_requests(number_of_requests: int, request_unit: str) -> int:
+def unit_conversion_requests(number_of_requests: int, request_unit: str, steps: list[str]) -> int:
     """
     @brief Convert requests based on the unit provided.
     @param number_of_requests: number of requests.
     @param request_unit: per second, per minute, per hour, per day, per month, million per month.
     @return: The number of requests per month.
     """
     # Average hours in a month (365.25 days/year * 24 hours/day / 12 months)
     HOURS_IN_MONTH = 730
     match request_unit:
         case "per second":
             requests_per_month = number_of_requests * 60 * 60 * HOURS_IN_MONTH
             logger.debug(
                 f"{number_of_requests} requests per second * 60 seconds * 60 minutes * {HOURS_IN_MONTH} hours = {requests_per_month} requests per month"
             )
             steps.append(
                 f"{number_of_requests} requests per second * 60 seconds * 60 minutes * {HOURS_IN_MONTH} hours = {requests_per_month} requests per month"
             )
             return requests_per_month
         case "per minute":
             requests_per_month = number_of_requests * 60 * HOURS_IN_MONTH
             logger.debug(
                 f"{number_of_requests} requests per minute * 60 minutes * {HOURS_IN_MONTH} hours = {requests_per_month} requests per month"
             )
             steps.append(
                 f"{number_of_requests} requests per minute * 60 minutes * {HOURS_IN_MONTH} hours = {requests_per_month} requests per month"
             )
             return requests_per_month
         case "per hour":
             requests_per_month = number_of_requests * HOURS_IN_MONTH
             logger.debug(
                 f"{number_of_requests} requests per hour * {HOURS_IN_MONTH} hours = {requests_per_month} requests per month"
             )
             steps.append(
                 f"{number_of_requests} requests per hour * {HOURS_IN_MONTH} hours = {requests_per_month} requests per month"
             )
             return requests_per_month
         case "per day":
             requests_per_month = int(number_of_requests * (HOURS_IN_MONTH / 24))
             logger.debug(
                 f"{number_of_requests} requests per day * ({HOURS_IN_MONTH} / 24) days = {requests_per_month} requests per month"
             )
             steps.append(
                 f"{number_of_requests} requests per day * ({HOURS_IN_MONTH} / 24) days = {requests_per_month} requests per month"
             )
             return requests_per_month
         case "per month":
             return number_of_requests
         case "million per month":
             requests_per_month = number_of_requests * 1000000
             logger.debug(
                 f"{number_of_requests} million requests per month * 1,000,000 = {requests_per_month} requests per month"
             )
             steps.append(
                 f"{number_of_requests} million requests per month * 1,000,000 = {requests_per_month} requests per month"
             )
             return requests_per_month
         case _:
             raise ValueError(f"Unknown request unit: {request_unit}")
 
 
-def unit_conversion_memory(memory: float, memory_unit: str) -> float:
+def unit_conversion_memory(memory: float, memory_unit: str, steps: list[str]) -> float:
     """
     @brief Convert memory based on the unit provided.
     @param memory: amount of memory.
     @param memory_unit: per MB, per GB.
     @return: The memory in GB.
     """
     match memory_unit:
         case "MB":
             logger.debug(
                 f"Amount of memory allocated: {memory} MB * 0.0009765625 GB in MB = {memory * 0.0009765625} GB"
             )
             steps.append(
                 f"Amount of memory allocated: {memory} MB * 0.0009765625 GB in MB = {memory * 0.0009765625} GB"
             )
             return memory * 0.0009765625
         case "GB":
             return memory
         case _:
             raise ValueError(f"Unknown memory unit: {memory_unit}")
 
 
 def unit_conversion_ephemeral_storage(
-    ephemeral_storage_mb: float, storage_unit: str
+    ephemeral_storage_mb: float, storage_unit: str, steps: list[str]
 ) -> float:
     """
     @brief Convert ephemeral storage based on the unit provided.
     @param ephemeral_storage_mb: amount of ephemeral storage.
     @param storage_unit: per MB, per GB.
     @return: The ephemeral storage in GB.
     """
     match storage_unit:
         case "MB":
             logger.debug(
                 f"Amount of ephemeral storage allocated: {ephemeral_storage_mb} MB * 0.0009765625 GB in MB = {ephemeral_storage_mb * 0.0009765625} GB"
             )
             steps.append(
                 f"Amount of ephemeral storage allocated: {ephemeral_storage_mb} MB * 0.0009765625 GB in MB = {ephemeral_storage_mb * 0.0009765625} GB"
             )
             return ephemeral_storage_mb * 0.0009765625
         case "GB":
             return ephemeral_storage_mb
         case _:
             raise ValueError(f"Unknown storage unit: {storage_unit}")
 
 
 def calculate_tiered_cost(
     total_compute_gb_sec: float,
     tier_cost_factor: dict[str, float],
     overflow_rate: float,
+    steps: list[str],
 ) -> float:
     """
     @brief Calculate the tiered cost based on total compute GB-seconds.
     @param total_compute_gb_sec: The total compute GB-seconds.
     @param tier_cost_factor: A dictionary containing tier cost factors.
     @param overflow_rate: The overflow rate for compute GB-seconds beyond the tiers.
     @return: The tiered cost.
     """
     remaining_compute = total_compute_gb_sec
     total_cost = 0.0
 
     # Sort tiers by usage limit (integer key)
     sorted_tiers = sorted(tier_cost_factor.items(), key=lambda x: int(x[0]))
 
     for limit_str, rate_str in sorted_tiers:
         limit = float(limit_str)
         rate = float(rate_str)
         if remaining_compute > limit:
             total_cost += limit * rate
             remaining_compute -= limit
             logger.debug(
                 f"Tiered cost: {limit} GB-seconds * {rate} USD = {limit * rate} USD"
             )
             steps.append(
                 f"Tiered cost: {limit} GB-seconds * {rate} USD = {limit * rate} USD"
             )
         else:
             total_cost += remaining_compute * rate
             logger.debug(
                 f"Tiered cost: {remaining_compute} GB-seconds * {rate} USD = {remaining_compute * rate} USD"
             )
             steps.append(
                 f"Tiered cost: {remaining_compute} GB-seconds * {rate} USD = {remaining_compute * rate} USD"
             )
             remaining_compute = 0
             break
 
     if remaining_compute > 0:
         total_cost += remaining_compute * overflow_rate
         logger.debug(
             f"Overflow cost: {remaining_compute} GB-seconds * {overflow_rate} USD = {remaining_compute * overflow_rate} USD"
         )
         steps.append(
             f"Overflow cost: {remaining_compute} GB-seconds * {overflow_rate} USD = {remaining_compute * overflow_rate} USD"
         )
 
     return total_cost
 
 
 # 3. Calculate the monthly compute charges.
 def calc_monthly_compute_charges(
     requests_per_month: int,
     duration_of_each_request_in_ms: int,
     memory_in_gb: float,
     tier_cost_factor: dict[str, float],
+    steps: list[str],
 ) -> tuple[float, float]:
     """
     @brief Calculate the monthly compute charges based on requests per month, duration of each request in ms, and memory in GB.
     @param requests_per_month: The number of requests per month.
     @param duration_of_each_request_in_ms: The duration of each request in milliseconds.
     @param memory_in_gb: The amount of memory allocated in GB.
     @return: The monthly compute charges.
     """
     total_compute_sec = requests_per_month * (duration_of_each_request_in_ms * 0.001)
     logger.debug(
         f"{requests_per_month} requests x {duration_of_each_request_in_ms} ms x 0.001 ms to sec conversion factor = {total_compute_sec} total compute (seconds)"
     )
     steps.append(
         f"{requests_per_month} requests x {duration_of_each_request_in_ms} ms x 0.001 ms to sec conversion factor = {total_compute_sec} total compute (seconds)"
     )
 
     total_compute_gb_sec = memory_in_gb * total_compute_sec
     logger.debug(
         f"{memory_in_gb} GB x {total_compute_sec} seconds = {total_compute_gb_sec} total compute (GB-seconds)"
     )
     steps.append(
         f"{memory_in_gb} GB x {total_compute_sec} seconds = {total_compute_gb_sec} total compute (GB-seconds)"
     )
 
     monthly_compute_charges = calculate_tiered_cost(
-        total_compute_gb_sec, tier_cost_factor, 0.0000133334
+        total_compute_gb_sec, tier_cost_factor, 0.0000133334, steps
     )
     return monthly_compute_charges, total_compute_gb_sec
 
 
 # 4. Calculate the monthly request charges.
 def calc_monthly_request_charges(
-    requests_per_month: float, requests_cost_factor: float
+    requests_per_month: float, requests_cost_factor: float, steps: list[str]
 ) -> float:
     """
     @brief Calculate the monthly request charges based on requests per month and requests cost factor.
     @param requests_per_month: The number of requests per month.
     @param requests_cost_factor: The cost factor for requests.
     @return: The monthly request charges.
     """
     monthly_request_charges = requests_per_month * requests_cost_factor
     logger.debug(
         f"{requests_per_month} requests * {requests_cost_factor} USD = {monthly_request_charges} USD"
     )
     steps.append(
         f"{requests_per_month} requests * {requests_cost_factor} USD = {monthly_request_charges} USD"
     )
     return monthly_request_charges
 
 
 # 5. Calculate the monthly ephemeral storage charges.
 def calc_monthly_ephemeral_storage_charges(
     storage_in_gb: float,
     ephemeral_storage_cost_factor: float,
     total_compute_gb_sec: float,
+    steps: list[str],
 ) -> float:
     """
     @brief Calculate the monthly ephemeral storage charges based on storage in GB, ephemeral storage cost factor, and total compute GB-seconds.
     @param storage_in_gb: The amount of ephemeral storage in GB.
     @param ephemeral_storage_cost_factor: The cost factor for ephemeral storage.
     @param total_compute_gb_sec: The total compute GB-seconds.
     @return: The monthly ephemeral storage charges.
     """
     # Subtract 512MB from storage_in_gb because the first 512MB are free
     chargeable_storage_gb = max(0, storage_in_gb - 0.5)
     logger.debug(
         f"Chargeable ephemeral storage: max(0, {storage_in_gb} GB - 0.5 GB) = {chargeable_storage_gb} GB"
     )
     steps.append(
         f"Chargeable ephemeral storage: max(0, {storage_in_gb} GB - 0.5 GB) = {chargeable_storage_gb} GB"
     )
 
     # Calculate the total storage GB-seconds
     total_storage_gb_sec = chargeable_storage_gb * total_compute_gb_sec
     logger.debug(
         f"Total storage GB-seconds: {chargeable_storage_gb} GB * {total_compute_gb_sec} GB-seconds = {total_storage_gb_sec} GB-seconds"
     )
     steps.append(
         f"Total storage GB-seconds: {chargeable_storage_gb} GB * {total_compute_gb_sec} GB-seconds = {total_storage_gb_sec} GB-seconds"
     )
 
     # Calculate the monthly ephemeral storage charges
     monthly_ephemeral_storage_charges = (
         total_storage_gb_sec * ephemeral_storage_cost_factor
     )
     logger.debug(
         f"Monthly ephemeral storage charges: {total_storage_gb_sec} GB-seconds * {ephemeral_storage_cost_factor} USD = {monthly_ephemeral_storage_charges} USD"
     )
     steps.append(
         f"Monthly ephemeral storage charges: {total_storage_gb_sec} GB-seconds * {ephemeral_storage_cost_factor} USD = {monthly_ephemeral_storage_charges} USD"
     )
 
     return monthly_ephemeral_storage_charges
 
 
 # 6. Calculate the total monthly cost by summing up the monthly compute charges, monthly request charges, and monthly ephemeral storage charges.
 def calculate(
     region: str = "us-east-1",
     architecture: Literal["x86", "arm64"] = "x86",
     number_of_requests: int = 1000000,
     request_unit: Literal["per second", "per minute", "per hour", "per day", "per month", "million per month"] = "per day",
     duration_of_each_request_in_ms: int = 1500,
     memory: float = 128,
     memory_unit: Literal["MB", "GB"] = "MB",
     ephemeral_storage: float = 512,
     storage_unit: Literal["MB", "GB"] = "MB",
 ) -> CalculationResult:
     """Calculate the total cost of execution."""
     
     # Validate inputs using pydantic
     request = CalculationRequest(
         region=region,
         architecture=architecture,
         number_of_requests=number_of_requests,
         request_unit=request_unit,
         duration_of_each_request_in_ms=duration_of_each_request_in_ms,
         memory=memory,
         memory_unit=memory_unit,
         ephemeral_storage=ephemeral_storage,
         storage_unit=storage_unit,
     )
 
-    global steps
     steps = []
 
     logger.info("Starting cost calculation...")
 
     # Step 2
     cost_factors = open_json_file(region)
     if not cost_factors:
         raise ValueError(f"Could not load cost factors for region '{region}'")
 
     # Step 1
     requests_per_month = unit_conversion_requests(
-        number_of_requests, request_unit
-    )
-    memory_in_gb = unit_conversion_memory(memory, memory_unit)
+        number_of_requests, request_unit, steps
+    )
+    memory_in_gb = unit_conversion_memory(memory, memory_unit, steps)
     storage_in_gb = unit_conversion_ephemeral_storage(
-        ephemeral_storage, storage_unit
+        ephemeral_storage, storage_unit, steps
     )
 
     # Step 3
     monthly_compute_charges, total_compute_gb_sec = calc_monthly_compute_charges(
         requests_per_month,
         duration_of_each_request_in_ms,
         memory_in_gb,
         cost_factors[architecture]["Tier"],
+        steps,
     )
 
     # Step 4
     monthly_request_charges = calc_monthly_request_charges(
-        requests_per_month, float(cost_factors["Requests"])
+        requests_per_month, float(cost_factors["Requests"]), steps
     )
 
     # Step 5
     monthly_ephemeral_storage_charges = calc_monthly_ephemeral_storage_charges(
         storage_in_gb,
         float(cost_factors["EphemeralStorage"]),
         total_compute_gb_sec,
+        steps,
     )
 
     # Step 6
     total = (
         monthly_compute_charges
         + monthly_request_charges
         + monthly_ephemeral_storage_charges
     )
     logger.debug(
         f"{monthly_compute_charges} USD + {monthly_request_charges} USD + {monthly_ephemeral_storage_charges} USD = {total} USD"
     )
     steps.append(
         f"{monthly_compute_charges} USD + {monthly_request_charges} USD + {monthly_ephemeral_storage_charges} USD = {total} USD\n"
     )
     logger.debug(f"Lambda cost (monthly): {total} USD")
     steps.append(f"Lambda cost (monthly): {total} USD")
     
     return CalculationResult(
         total_cost=total,
         calculation_steps=steps
     )

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a critical concurrency bug due to the use of a global steps variable, which can lead to incorrect results in an AWS Lambda environment, and proposes a valid solution.

High
General
Simplify validation by normalizing units

Simplify the validation logic in validate_aws_lambda_limits by converting memory
and storage values to a common unit (MB) before checking them against AWS Lambda
limits.

aws-lambda-calculator/src/aws_lambda_calculator/models.py [64-83]

 @model_validator(mode='after')
 def validate_aws_lambda_limits(self) -> 'CalculationRequest':
     """Validate memory and ephemeral storage are within AWS Lambda limits."""
-    # Validate memory
-    if self.memory_unit == 'MB':
-        if self.memory < 128 or self.memory > 10240:
-            raise ValueError('Memory must be between 128 MB and 10,240 MB')
-    elif self.memory_unit == 'GB':
-        if self.memory < 0.125 or self.memory > 10.24:
-            raise ValueError('Memory must be between 0.125 GB and 10.24 GB')
+    # Convert memory to MB for validation
+    memory_mb = self.memory
+    if self.memory_unit == 'GB':
+        memory_mb *= 1024
     
-    # Validate ephemeral storage
-    if self.storage_unit == 'MB':
-        if self.ephemeral_storage < 512 or self.ephemeral_storage > 10240:
-            raise ValueError('Ephemeral storage must be between 512 MB and 10,240 MB')
-    elif self.storage_unit == 'GB':
-        if self.ephemeral_storage < 0.5 or self.ephemeral_storage > 10.24:
-            raise ValueError('Ephemeral storage must be between 0.5 GB and 10.24 GB')
+    if not (128 <= memory_mb <= 10240):
+        raise ValueError('Memory must be between 128 MB (0.125 GB) and 10,240 MB (10.24 GB)')
+
+    # Convert ephemeral storage to MB for validation
+    storage_mb = self.ephemeral_storage
+    if self.storage_unit == 'GB':
+        storage_mb *= 1024
+
+    if not (512 <= storage_mb <= 10240):
+        raise ValueError('Ephemeral storage must be between 512 MB (0.5 GB) and 10,240 MB (10.24 GB)')
     
     return self
Suggestion importance[1-10]: 6


Why: The suggestion offers a good refactoring to improve code readability and maintainability by simplifying the validation logic, although the existing code is functionally correct.

Low



-def unit_conversion_memory(memory: int, memory_unit: str) -> float:
+def unit_conversion_memory(memory: float, memory_unit: str) -> float:

Check notice

Code scanning / CodeQL

Explicit returns mixed with implicit (fall through) returns Note

Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
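The pattern CodeQL is flagging can be illustrated with a hypothetical pair of helpers (not the PR's actual code, whose match statements end in a raising `case _`): a function whose branches sometimes return a value and sometimes fall off the end, silently returning None despite a float annotation, versus one that is explicit on every path.

```python
def convert_bad(memory: float, unit: str) -> float:
    """Flawed: for any unit other than MB/GB, execution falls through
    and the function implicitly returns None, despite the float annotation."""
    if unit == "MB":
        return memory / 1024
    elif unit == "GB":
        return memory
    # implicit return None here


def convert_good(memory: float, unit: str) -> float:
    """Fixed: every code path either returns a float or raises explicitly."""
    if unit == "MB":
        return memory / 1024
    elif unit == "GB":
        return memory
    raise ValueError(f"Unknown memory unit: {unit}")
```

The implicit None typically surfaces later as a confusing TypeError in arithmetic, which is why both CodeQL and mypy (with strict return checking) flag the first form.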

Copilot Autofix

Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or, if the problem persists, contact support.

@sonarqubecloud

Quality Gate failed

Failed conditions
0.0% Coverage on New Code (required ≥ 80%)

See analysis details on SonarQube Cloud

@zMynxx zMynxx merged commit b6936c0 into main Oct 18, 2025
12 of 13 checks passed
@zMynxx zMynxx deleted the feat/pydantic branch October 18, 2025 00:02
