
HYPERFLEET-485 - test: add test cases for adapter, API validation, async processing, concurrent processing, and nodepool#29

Open
tzhou5 wants to merge 1 commit into openshift-hyperfleet:main from tzhou5:testcase

Conversation


@tzhou5 tzhou5 commented Feb 12, 2026

Summary by CodeRabbit

  • Tests
    • Added comprehensive test designs covering adapter resilience (job failures, crash recovery), API validation scenarios, asynchronous processing workflows, concurrent operations, and NodePool resource management.


openshift-ci bot commented Feb 12, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign xueli181114 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


coderabbitai bot commented Feb 12, 2026

Walkthrough

This PR adds comprehensive test design documentation across six test specification files. It introduces new test scenarios covering adapter job failures and health reporting, crash recovery with message redelivery, incomplete adapter status report handling, API input validation, asynchronous processing with generation tracking, concurrent cluster creation, nodepool list/get operations and coexistence validation, and negative test cases for nodepools under non-existent clusters. All changes are documentation-only, consisting of test preconditions, step-by-step actions, and expected results organized in Markdown format with no modifications to executable code or public API signatures.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested labels

lgtm, approved

Suggested reviewers

  • yingzhanredhat
  • aredenba-rh
  • Mischulee
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|-----------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly and accurately summarizes the main change: adding comprehensive test cases across multiple functional areas (adapter, API validation, async processing, concurrent processing, and nodepool). |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@test-design/testcases/async-processing.md`:
- Around line 41-71: The create-cluster step doesn't show how to set CLUSTER_ID
for later steps; after the POST to ${API_URL}/api/hyperfleet/v1/clusters
(payload name "generation-test") capture the created resource id from the
response or Location header and export it to CLUSTER_ID so subsequent commands
using ${CLUSTER_ID} work (e.g., capture the JSON response with jq to extract .id
or read the Location header and parse the ID, then export CLUSTER_ID for later
steps).
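
The capture step this prompt describes can be sketched as below. The response JSON and the `c-123` id are simulated stand-ins for the real POST output, and `jq` is assumed available, as it is throughout these test designs:

```shell
# Simulated create response; in the real step this would be:
#   response=$(curl -s -X POST ${API_URL}/api/hyperfleet/v1/clusters ...)
response='{"id":"c-123","name":"generation-test","generation":1}'

# Extract the id field and guard against an empty or null capture
CLUSTER_ID=$(printf '%s' "$response" | jq -r '.id')
if [ -z "$CLUSTER_ID" ] || [ "$CLUSTER_ID" = "null" ]; then
  echo "cluster creation did not return an id" >&2
  exit 1
fi
export CLUSTER_ID
echo "CLUSTER_ID=$CLUSTER_ID"
```

Reading the `Location` header instead of the body is an equally valid capture strategy; which one applies depends on what the API actually returns.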

In `@test-design/testcases/concurrent-processing.md`:
- Around line 39-67: Step 1 currently fires 5 parallel POSTs but never records
returned IDs; modify the loop that posts to /api/hyperfleet/v1/clusters to
capture each response ID (e.g., parse with jq -r '.id' into a variable like ID)
and append it to an array CLUSTER_IDS so later steps can iterate
${CLUSTER_IDS[@]}; ensure the curl runs in background still but store IDs safely
(collect stdout from each backgrounded curl, parse with jq to extract the id,
and append to CLUSTER_IDS) so subsequent polling uses the captured IDs.

In `@test-design/testcases/nodepool_lifecycle_negative.md`:
- Around line 59-94: Step 3's expected result is non-deterministic ("HTTP 404 OR
empty list"); pick the intended API contract (e.g., "API returns HTTP 404 Not
Found when cluster doesn't exist") and update the test to assert only that
behavior: change the Expected Result to a single deterministic assertion (for
example "API returns HTTP 404 with Problem Details response") and remove the
alternative, and ensure the curl call using ${FAKE_CLUSTER_ID} is validated
against that single status and response shape in the test case for Step 3.

In `@test-design/testcases/nodepool.md`:
- Around line 212-245: The create step doesn't record the new NodePool's ID, so
update the Step 1 POST that creates the NodePool to capture the response id into
NODEPOOL_ID (e.g., assign NODEPOOL_ID from the create curl output using jq -r
'.id'); use a silent curl (-s) to avoid noise and then reference NODEPOOL_ID in
Step 3's GET call; also consider checking the captured value is non-empty before
proceeding.
- Around line 306-343: Step 1 currently creates nodepools but doesn't populate
NODEPOOL_IDS used in Step 3; modify the POST loop that creates np-coexist-1..3
to capture each response's nodepool ID (use curl -s and jq -r '.id' on the POST
response) and append it to an array variable NODEPOOL_IDS (e.g.,
NODEPOOL_IDS+=("$ID")) so the subsequent for NODEPOOL_ID in ${NODEPOOL_IDS[@]}
loop can iterate the created IDs reliably.
🧹 Nitpick comments (3)
test-design/testcases/nodepool_lifecycle_negative.md (1)

59-82: Add RFC 9457 response headers/fields to validation.

To fully validate RFC 9457, include Content-Type: application/problem+json and optionally instance if supported.

💡 Suggested edit
```diff
 **Expected Result:**
 - API returns HTTP 404 Not Found
 - Response follows RFC 9457 Problem Details format:
+- Response header `Content-Type` is `application/problem+json`

 {
   "type": "https://api.hyperfleet.io/errors/not-found",
   "title": "Not Found",
   "status": 404,
   "detail": "Cluster not found"
+  // "instance": "..." (if supported)
 }
```
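
Checking the header alongside the status can be scripted as one probe; the hypothetical `fake_probe` below stands in for `curl -s -o /dev/null -w '%{http_code} %{content_type}'` against the nodepool URL:

```shell
# fake_probe simulates one curl call reporting status code and content type
fake_probe() { echo '404 application/problem+json'; }

set -- $(fake_probe)
STATUS=$1
CTYPE=$2

[ "$STATUS" = "404" ] || { echo "expected 404, got $STATUS" >&2; exit 1; }
# curl may append a charset, so match on the prefix
case "$CTYPE" in
  application/problem+json*) echo "problem+json content type confirmed" ;;
  *) echo "unexpected content type: $CTYPE" >&2; exit 1 ;;
esac
```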
test-design/testcases/api-validation.md (2)

108-111: Specify the expected error schema for name-length validation.

Other tests include structured error payloads; aligning this one improves consistency and assertiveness.

💡 Suggested edit
```diff
 **Expected Result:**
 - API returns HTTP 400 Bad Request
-- Response contains validation error about name length
+- Response contains validation error about name length (code/title/status/detail)
```

264-307: Clarify error response shape for invalid Kind / JSON.

Consider specifying the expected error code/title/status (and Content-Type for JSON parsing errors) to make the tests deterministic.

💡 Suggested edit
```diff
 **Expected Result:**
 - API returns HTTP 400 Bad Request
-- Response contains validation error about invalid Kind
+- Response contains validation error about invalid Kind (code/title/status/detail)

 ...

 **Expected Result:**
 - API returns HTTP 400 Bad Request
-- Response contains JSON parsing error
+- Response contains JSON parsing error with problem-details or parser error fields
```

Comment on lines +41 to +71
#### Step 1: Create a cluster and wait for Ready state
**Action:**
```bash
# Create cluster
curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \
  -H "Content-Type: application/json" \
  -d '{
    "kind": "Cluster",
    "name": "generation-test",
    "spec": {"platform": {"type": "gcp", "gcp": {"projectID": "test", "region": "us-central1"}}}
  }'

# Wait for Ready state
```

**Expected Result:**
- Cluster created with generation=1
- Cluster reaches Ready state

---

#### Step 2: Record initial generation and observed_generation
**Action:**
```bash
# Get cluster generation
curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID} | jq '.generation'

# Get adapter observed_generation
curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}/statuses | jq '.items[0].observed_generation'
```


⚠️ Potential issue | 🟡 Minor

Capture CLUSTER_ID after creation.

Several steps rely on ${CLUSTER_ID} but the create request doesn’t show how it’s obtained.

💡 Suggested edit
```diff
-# Create cluster
-curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \
+# Create cluster
+CLUSTER_ID=$(curl -s -X POST ${API_URL}/api/hyperfleet/v1/clusters \
   -H "Content-Type: application/json" \
   -d '{
     "kind": "Cluster",
     "name": "generation-test",
     "spec": {"platform": {"type": "gcp", "gcp": {"projectID": "test", "region": "us-central1"}}}
-  }'
+  }' | jq -r '.id')
```

Comment on lines +39 to +67
#### Step 1: Submit 5 cluster creation requests simultaneously
**Action:**
- Submit 5 POST requests in parallel using background processes:
```bash
for i in $(seq 1 5); do
  curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \
    -H "Content-Type: application/json" \
    -d "{
      \"kind\": \"Cluster\",
      \"name\": \"concurrent-test-${i}\",
      \"spec\": {\"platform\": {\"type\": \"gcp\", \"gcp\": {\"projectID\": \"test\", \"region\": \"us-central1\"}}}
    }" &
done
wait
```

**Expected Result:**
- All 5 requests return successful responses (HTTP 200/201)
- Each response contains a unique cluster ID
- No request is rejected or fails due to concurrency

#### Step 2: Wait for all clusters to be processed
**Action:**
- Poll each cluster's status until all reach Ready state or a timeout is reached:
```bash
for CLUSTER_ID in ${CLUSTER_IDS[@]}; do
  curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID} | jq '.conditions'
done
```

⚠️ Potential issue | 🟡 Minor

Capture CLUSTER_IDS from create responses.

Later steps depend on ${CLUSTER_IDS[@]} but Step 1 doesn’t record them.

💡 Suggested edit (running the requests sequentially, since an array append from a backgrounded subshell would be lost)
````diff
-```bash
-for i in $(seq 1 5); do
-  curl -X POST ${API_URL}/api/hyperfleet/v1/clusters \
+```bash
+CLUSTER_IDS=()
+for i in $(seq 1 5); do
+  ID=$(curl -s -X POST ${API_URL}/api/hyperfleet/v1/clusters \
     -H "Content-Type: application/json" \
     -d "{
       \"kind\": \"Cluster\",
       \"name\": \"concurrent-test-${i}\",
       \"spec\": {\"platform\": {\"type\": \"gcp\", \"gcp\": {\"projectID\": \"test\", \"region\": \"us-central1\"}}}
-    }" &
+    }" | jq -r '.id')
+  CLUSTER_IDS+=("$ID")
 done
-wait
````

Comment on lines +59 to +94
**Expected Result:**
- API returns HTTP 404 Not Found
- Response follows RFC 9457 Problem Details format:
```json
{
  "type": "https://api.hyperfleet.io/errors/not-found",
  "title": "Not Found",
  "status": 404,
  "detail": "Cluster not found"
}
```

---

#### Step 2: Verify error response format (RFC 9457)
**Action:**
- Parse the error response and verify required fields

**Expected Result:**
- Response contains `type` field
- Response contains `title` field
- Response contains `status` field with value 404
- Optional: Response contains `detail` field with descriptive message

---

#### Step 3: Verify no nodepool was created
**Action:**
```bash
curl -X GET ${API_URL}/api/hyperfleet/v1/clusters/${FAKE_CLUSTER_ID}/nodepools
```

**Expected Result:**
- API returns HTTP 404 (cluster doesn't exist, so nodepools list is not accessible)
- OR returns empty list if API allows listing nodepools for non-existent clusters


⚠️ Potential issue | 🟡 Minor

Make the Step 3 expected result deterministic.

“HTTP 404 OR empty list” makes the test non-assertive. Pick the intended API behavior and validate only that to avoid false positives.

💡 Suggested edit
```diff
-**Expected Result:**
-- API returns HTTP 404 (cluster doesn't exist, so nodepools list is not accessible)
-- OR returns empty list if API allows listing nodepools for non-existent clusters
+**Expected Result:**
+- API returns HTTP 404 (cluster doesn't exist, so nodepools list is not accessible)
```

Comment on lines +212 to +245
#### Step 1: Create a nodepool under the cluster
**Action:**
- Submit a POST request to create a NodePool resource:
```bash
curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \
  -H "Content-Type: application/json" \
  -d '{
    "kind": "NodePool",
    "name": "list-get-test",
    "cluster_id": "'${CLUSTER_ID}'",
    "spec": { ... }
  }'
```

**Expected Result:**
- API returns successful response with nodepool ID

#### Step 2: List all nodepools belonging to the cluster
**Action:**
```bash
curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}/nodepools | jq '.'
```

**Expected Result:**
- Response returns HTTP 200 (OK) status code
- Response contains an array of nodepools
- The created nodepool appears in the list
- Each nodepool entry includes: id, name, cluster_id, conditions, spec

#### Step 3: Get individual nodepool details by ID
**Action:**
```bash
curl -s ${API_URL}/api/hyperfleet/v1/nodepools/${NODEPOOL_ID} | jq '.'
```

⚠️ Potential issue | 🟡 Minor

Define how NODEPOOL_ID is captured.

The test uses NODEPOOL_ID in Step 3 without recording it in Step 1. Add a step or command snippet to capture the created nodepool ID from the create response.

💡 Suggested edit
````diff
 **Action:**
 - Submit a POST request to create a NodePool resource:
 ```bash
-curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \
+NODEPOOL_ID=$(curl -s -X POST ${API_URL}/api/hyperfleet/v1/nodepools \
   -H "Content-Type: application/json" \
   -d '{
     "kind": "NodePool",
     "name": "list-get-test",
     "cluster_id": "'${CLUSTER_ID}'",
     "spec": { ... }
-  }'
+  }' | jq -r '.id')
````

Comment on lines +306 to +343
#### Step 1: Create multiple nodepools under the same cluster
**Action:**
- Create 3 nodepools with different names:
```bash
for i in 1 2 3; do
  curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \
    -H "Content-Type: application/json" \
    -d "{
      \"kind\": \"NodePool\",
      \"name\": \"np-coexist-${i}\",
      \"cluster_id\": \"${CLUSTER_ID}\",
      \"spec\": { ... }
    }"
done
```

**Expected Result:**
- All 3 nodepools are created successfully
- Each returns a unique nodepool ID

#### Step 2: Verify all nodepools appear in the list
**Action:**
```bash
curl -s ${API_URL}/api/hyperfleet/v1/clusters/${CLUSTER_ID}/nodepools | jq '.items | length'
```

**Expected Result:**
- List contains all 3 nodepools
- Each nodepool has a distinct ID and name

#### Step 3: Verify each nodepool reaches Ready state independently
**Action:**
- Check each nodepool's conditions:
```bash
for NODEPOOL_ID in ${NODEPOOL_IDS[@]}; do
  curl -s ${API_URL}/api/hyperfleet/v1/nodepools/${NODEPOOL_ID} | jq '.conditions'
done
```

⚠️ Potential issue | 🟡 Minor

Clarify how NODEPOOL_IDS is populated.

Step 3 uses ${NODEPOOL_IDS[@]} but Step 1 doesn’t capture the IDs. Add collection logic so the loop is reproducible.

💡 Suggested edit
````diff
-```bash
-for i in 1 2 3; do
-  curl -X POST ${API_URL}/api/hyperfleet/v1/nodepools \
+```bash
+NODEPOOL_IDS=()
+for i in 1 2 3; do
+  ID=$(curl -s -X POST ${API_URL}/api/hyperfleet/v1/nodepools \
     -H "Content-Type: application/json" \
     -d "{
       \"kind\": \"NodePool\",
       \"name\": \"np-coexist-${i}\",
       \"cluster_id\": \"${CLUSTER_ID}\",
       \"spec\": { ... }
-    }"
+    }" | jq -r '.id')
+  NODEPOOL_IDS+=("$ID")
 done
````


1. [Nodepools Resource Type - Workflow Validation](#test-title-nodepools-resource-type---workflow-validation)
2. [Nodepools Resource Type - K8s Resource Check Aligned with Preinstalled NodePool Related Adapters Specified](#test-title-nodepools-resource-type---k8s-resource-check-aligned-with-preinstalled-nodepool-related-adapters-specified)
3. [Nodepools Resource Type - List and Get API Operations](#test-title-nodepools-resource-type---list-and-get-api-operations)
Contributor:

Is this one duplicated with Nodepools Resource Type - Workflow Validation?

Author:

You are right. The "Workflow Validation" test already covers a GET operation in Step 3. The only additional coverage here is the List endpoint. I'll merge the List/Get verification into the "Workflow Validation" test as additional steps and remove this separate test case.

```bash
kubectl set env deployment -n hyperfleet -l app.kubernetes.io/instance=example-adapter SIMULATE_RESULT=failure
kubectl rollout status deployment/example-adapter-hyperfleet-adapter -n hyperfleet --timeout=60s
```
Contributor:

We may like there is a failure-model adapters rather than updating the origin adapter's configuration. Every update/rollback will have the risk to make the environment become dirty.

Author:

Agreed. Modifying the existing adapter's environment variable has the risk of leaving the environment dirty if cleanup fails. I'll update the test to deploy a separate failure-adapter Helm release with SIMULATE_RESULT=failure pre-configured, and delete it after the test completes.

-H "Content-Type: application/json" \
-d '{
"kind": "Cluster",
"name": "failure-test",
Contributor:

It is better to have a random string. And also I'd like the cluster should have a label with the test run id.

Author:

Agreed. I'll update it.

---


## Test Title: Adapter Job failure correctly reports Health=False
Contributor:

I think this should be the basic case that validates the workflow when the adapter status is false; it is not specific to the adapter itself.

| **Field** | **Value** |
|-----------|-----------|
| **Pos/Neg** | Negative |
| **Priority** | Tier1 |
Contributor:

It should be Tier2

Author:
Agreed. I'll update it.


---

## Test Title: API validates Kind field
Contributor:

I think this should be covered by the component unit/integration tests.


---

## Test Title: API handles database connection failure gracefully
Contributor:

I think this testing is out of scope of the E2E testing.

@@ -0,0 +1,108 @@
# Feature: NodePool Lifecycle - Negative Tests
Contributor:

Do we need an individual file for this?

1. [Nodepools Resource Type - Workflow Validation](#test-title-nodepools-resource-type---workflow-validation)
2. [Nodepools Resource Type - K8s Resource Check Aligned with Preinstalled NodePool Related Adapters Specified](#test-title-nodepools-resource-type---k8s-resource-check-aligned-with-preinstalled-nodepool-related-adapters-specified)
3. [Nodepools Resource Type - List and Get API Operations](#test-title-nodepools-resource-type---list-and-get-api-operations)
4. [Nodepools Resource Type - Multiple NodePools Coexistence](#test-title-nodepools-resource-type---multiple-nodepools-coexistence)
Contributor:

Is it better to move this to concurrent-processing.md?

@@ -0,0 +1,386 @@
# Feature: Asynchronous Processing and Generation Tracking
Contributor:

For asynchronous processing, most of the behavior is a black box to us. Except for the disaster cases, I do not think we need to break open the structure and dig into every component's behavior; that belongs in component-level unit/integration testing, not E2E.
