The Compute service is essentially a cut-down version of the Kubernetes service: it provisions its own compute servers using the hardware abstraction provided by the Region service.
Because the Compute service is so similar to the Kubernetes service, we maintain type and API parity with it wherever possible, to ease the creation of UX tools and services.
To use the Compute service you first need to install:
- The identity service to provide API authentication and authorization.
- The region service to provide provider-agnostic cloud services (e.g. images, flavors and identity management).
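If you are installing these services from source checkouts, the invocation typically mirrors the compute chart install described in the Helm section; the chart paths, release names and values files below are illustrative assumptions, not canonical:

```shell
# Illustrative only: adjust chart paths (or repositories) and values files
# to wherever the identity and region charts live in your environment.
helm install unikorn-identity charts/identity \
  --namespace unikorn-identity --create-namespace --values identity-values.yaml
helm install unikorn-region charts/region \
  --namespace unikorn-region --create-namespace --values region-values.yaml
```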
The compute server component has a couple of prerequisites that are required for correct functionality. If you are not installing the server component, skip to the next section.
You'll need to install:
- cert-manager (used to generate keying material for JWE/JWS and for ingress TLS)
- nginx-ingress (to perform routing, avoiding CORS, and TLS termination)
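Both are available as upstream Helm charts; a typical installation sketch might look like the following (release names and flags are illustrative, consult each project's documentation for production settings):

```shell
# cert-manager from the Jetstack chart repository; installCRDs lets the
# chart manage its own CRDs.
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# ingress-nginx from the upstream Kubernetes chart repository.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```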
### Helm

Create a `values.yaml` for the server component. A typical `values.yaml` that uses cert-manager, ACME, and external DNS might look like:

```yaml
global:
  identity:
    host: https://identity.unikorn-cloud.org
  region:
    host: https://region.unikorn-cloud.org
  compute:
    host: https://compute.unikorn-cloud.org
```

Then install the chart:

```shell
helm install unikorn-compute charts/compute --namespace unikorn-compute --create-namespace --values values.yaml
```

### ArgoCD
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: unikorn-compute
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://unikorn-cloud.github.io/compute
    chart: compute
    targetRevision: v0.1.0
  destination:
    namespace: unikorn
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```

The Identity Service documentation describes how to configure a service organization, groups and role mappings for services that require them.
This service requires asynchronous access to the Region API in order to poll cloud identity and physical network status during cluster creation, and to delete those resources on cluster deletion.
This service defines the `unikorn-compute` user that will need to be added to a group in the service organization.
It will need the built-in role `infra-manager-service`, which allows:

- Read access to the `region` endpoints to access external networks
- Read/delete access to the `identities` endpoints to poll and delete cloud identities
- Read/delete access to the `physicalnetworks` endpoints to poll and delete physical networks
- Create/read/delete access to the `servers` endpoints to manage compute instances
The compute service includes comprehensive API integration tests that validate cluster lifecycle management, machine operations, security, and metadata discovery endpoints.
Tests are configured via environment variables. It's recommended you create a .env file in the test/ directory; there is a template .env.example you can copy and adapt.
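Because the tests read plain environment variables, one way to load your `.env` into the shell before invoking `make` is `set -a` sourcing. This is a sketch: the `.env` written here is throwaway demo data, whereas in practice you would copy `test/.env.example` and fill in real values.

```shell
# Create a throwaway .env purely for demonstration; in practice copy
# test/.env.example and fill in real values instead.
cat > .env <<'EOF'
API_BASE_URL=https://compute.your-domain.org
DEBUG_LOGGING=false
EOF

set -a      # auto-export every variable assigned while sourcing
. ./.env
set +a

echo "API_BASE_URL=$API_BASE_URL"   # prints API_BASE_URL=https://compute.your-domain.org
```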
Required Environment Variables:

```shell
# API endpoints
API_BASE_URL=https://compute.your-domain.org
IDENTITY_BASE_URL=https://identity.your-domain.org

# Authentication
API_AUTH_TOKEN=your-auth-token-here

# Test resources
TEST_ORG_ID=your-organization-id
TEST_PROJECT_ID=your-project-id
TEST_SECONDARY_PROJECT_ID=secondary-project-id
TEST_REGION_ID=your-region-id
TEST_SECONDARY_REGION_ID=secondary-region-id
TEST_FLAVOR_ID=your-flavor-id
TEST_IMAGE_ID=your-image-id

# Optional configuration
REQUEST_TIMEOUT=30s   # Default: 30s
TEST_TIMEOUT=20m      # Default: 20m
DEBUG_LOGGING=false   # Default: false
LOG_REQUESTS=false    # Default: false
LOG_RESPONSES=false   # Default: false
```

Run all tests:
```shell
make test-api
```

Run all tests in parallel (not yet implemented):

```shell
make test-api-parallel
```

Run a specific test suite using focus:

```shell
# Example: run only the cluster management tests ("Core Cluster Management" is the suite name)
make test-api-focus FOCUS="Core Cluster Management"
```

Run a specific test spec using focus:

```shell
# Example: run only the "return all clusters" test spec, using the test spec name
make test-api-focus FOCUS="should return all clusters for the organization"
```

Advanced Ginkgo options:
```shell
# Run with different parallel workers
cd test/api/suites && ginkgo run --procs=8 --json-report=test-results.json

# Run with verbose output
cd test/api/suites && ginkgo run -v --show-node-events

# Skip specific tests
cd test/api/suites && ginkgo run --skip="Machine Operations"

# Randomize test order
cd test/api/suites && ginkgo run --randomize-all
```

The API tests can be triggered manually via GitHub Actions using `workflow_dispatch`:
Workflow Inputs:
| Input | Type | Description | Default |
|---|---|---|---|
| `focus` | choice | Test suite to run | `All` |
| `parallel` | boolean | Run tests in parallel | `false` |
Available Test Suite Options:
- `All` - Run all test suites
- `Core Cluster Management` - Cluster CRUD operations and lifecycle tests
- `Discovery and Metadata` - Region, flavor, and image discovery tests
- `Security and Authentication` - Authentication and input validation tests
- `Machine Operations` - Machine power operations and eviction tests
Triggering Manually:
- Navigate to Actions tab in GitHub
- Select API Tests workflow
- Click Run workflow
- Choose test suite
- Click Run workflow
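If you prefer the command line, the same dispatch can be triggered with GitHub's `gh` CLI. The workflow file name below is an assumption; substitute the actual file under `.github/workflows/`:

```shell
# Hypothetical workflow file name; check .github/workflows/ for the real one.
gh workflow run api-tests.yaml -f focus="Machine Operations" -f parallel=false
```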
Automatic Triggers:
Tests run automatically on pushes to the main branch.
Test Artifacts:
After each run, test results are uploaded as artifacts:
- `api-test-results` - JSON format test results
- `api-test-junit` - JUnit XML format for CI integration
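These artifacts can also be fetched locally with the `gh` CLI; the artifact names match the list above, and you may need to supply an explicit run ID:

```shell
# Download artifacts from a workflow run; without a run ID, gh prompts
# interactively to choose one.
gh run download --name api-test-results
gh run download --name api-test-junit
```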
Clean up generated test artifacts:

```shell
make test-api-clean
```