@tkan145 commented Jan 19, 2026

What

Fix THREESCALE-12133

NOTE:

  • Try downloading the flamegraph file and opening it in your browser (interactive graph).
  • Latency was measured while reloading from a static file; add 2-3 seconds to the results when loading from a portal.
  • JSON encoding/decoding still consumes a significant portion of the CPU, but latency has decreased to an acceptable level. Fixing it requires more extensive changes, so I decided to defer that refactor to the next PR.

Before:

File read

bash-5.2# accessed-files -p `pgrep -f 'nginx: worker'` -t 60        
Neither -r nor -w options are specified.                            
bash-5.2# accessed-files -p `pgrep -f 'nginx: worker'` -r -t 60     
Tracing 3599829 (/usr/local/openresty/nginx/sbin/nginx)...          
Please wait for 60 seconds.                                         
                                                                    
=== Top 10 file reads ===                                           
#1: 117280 times, 233684323 bytes reads in file apicast-policy.json.
#2: 44 times, 31357920 bytes reads in file config.json.             

Latency

HTTP                                                                                
http_req_duration..............: med=42.46ms p(95)=48.14ms p(99)=2.37s p(99.9)=2.76s
  { expected_response:true }...: med=42.46ms p(95)=48.14ms p(99)=2.37s p(99.9)=2.76s
http_req_failed................: 0.00%  0 out of 9000                               
http_reqs......................: 9000   47.464397/s

Flamegraph

(flamegraph: reload-current)

After:

File read

bash-5.2# accessed-files -p `pgrep -f 'nginx: worker'` -r -t 60
Tracing 3605605 (/usr/local/openresty/nginx/sbin/nginx)...     
Please wait for 60 seconds.                                    
                                                               
=== Top 10 file reads ===                                      
#1: 44 times, 31357920 bytes reads in file config.json.        

Latency

HTTP                                                                                
http_req_duration..............: med=42.66ms p(95)=44.6ms p(99)=68.14ms p(99.9)=205.02ms
  { expected_response:true }...: med=42.66ms p(95)=44.6ms p(99)=68.14ms p(99.9)=205.02ms
http_req_failed................: 0.00%  0 out of 9147                                   
http_reqs......................: 9147   49.677724/s  

Flamegraph

(flamegraph: reload-after)

Verification steps

  • Install 3scale
  • Set up products, applications, backends, etc. using the script below
Details
#!/bin/bash

# Get the ADMIN_URL and ADMIN_ACCESS_TOKEN from apimanager and system-seed secret
DOMAIN=$(oc get routes console -n openshift-console -o json | jq -r '.status.ingress[0].routerCanonicalHostname' | sed 's/router-default.//')
ADMIN_ACCESS_TOKEN=$(oc get secret system-seed -n 3scale-test -o jsonpath="{.data.ADMIN_ACCESS_TOKEN}"| base64 --decode)
oc project 3scale-test
# Create the required secrets for Accounts, products and backends. 
oc apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: mytenant
type: Opaque
stringData:
  adminURL: https://3scale-admin.$DOMAIN
  token: $ADMIN_ACCESS_TOKEN
EOF
# user secret
oc apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: myusername01
stringData:
  password: "123456"
EOF
# Developer User
oc apply -f - <<EOF
---
apiVersion: capabilities.3scale.net/v1beta1
kind: DeveloperUser
metadata:
  name: developeruser01
  namespace: 3scale-test
  annotations:
    "insecure_skip_verify": "true"
spec:
  developerAccountRef:
    name: developeraccount01
  email: myusername01@example.com
  passwordCredentialsRef:
    name: myusername01
  providerAccountRef:
    name: mytenant
  role: admin
  username: myusername01
EOF
sleep 30
# TODO check for developer user completed
oc apply -f - <<EOF
---
apiVersion: capabilities.3scale.net/v1beta1
kind: DeveloperAccount
metadata:
  name: developeraccount01
  namespace: 3scale-test
  annotations:
    "insecure_skip_verify": "true"
spec:
  orgName: 3scale-test
  providerAccountRef:
    name: mytenant
EOF
# Deploy httpbin and use it as the backend
oc new-project httpbin
oc new-app quay.io/trepel/httpbin
oc get svc
oc scale deployment/httpbin --namespace httpbin --replicas=1 
oc project 3scale-test

# create backend
oc apply -f - <<EOF
---
apiVersion: capabilities.3scale.net/v1beta1
kind: Backend
metadata:
  name: backend1-cr
  namespace: 3scale-test
  annotations:
    "insecure_skip_verify": "true"
spec:
  mappingRules:
    - httpMethod: GET
      increment: 1
      last: true
      metricMethodRef: hits
      pattern: /
    - httpMethod: POST
      pattern: "/"
      metricMethodRef: hits
      increment: 1    
  name: backend1
  privateBaseURL: 'http://httpbin.httpbin.svc:8080'
  systemName: backend1
EOF

TOTAL_PRODUCTS=600

for ((n=1; n<=$TOTAL_PRODUCTS; n++))
do
# Product
oc apply -f - <<EOF
---
apiVersion: capabilities.3scale.net/v1beta1
kind: Product
metadata:
  name: product$n-cr
  namespace: 3scale-test
  annotations:
    "insecure_skip_verify": "true"
spec:
  applicationPlans:
    plan01:
      name: "My Plan 01"
  deployment:
    apicastHosted:
      authentication:
        userkey:
          authUserKey: token
  name: product$n
  backendUsages:
    backend1:
      path: /
  mappingRules:
    - httpMethod: GET
      pattern: "/"
      metricMethodRef: hits
      increment: 1
    - httpMethod: POST
      pattern: "/"
      metricMethodRef: hits
      increment: 1
  policies:
    - name: headers
      version: builtin
      enabled: true
      configuration:
        set_headers:
          - name: "echo1"
            value: "test"
            value_type: "plain"
    - name: headers
      version: builtin
      enabled: true
      configuration:
        set_headers:
          - name: "echo2"
            value: "test"
            value_type: "plain"
    - name: headers
      version: builtin
      enabled: true
      configuration:
        set_headers:
          - name: "echo3"
            value: "test"
            value_type: "plain"
    - name: logging
      version: builtin
      enabled: true
      configuration:
        custom_logging: "[{{time_local}}] {{host}}:{{server_port}} {{remote_addr}}:{{remote_port}} \"{{request}}\" {{status}} {{body_bytes_sent}} ({{request_time}}) {{post_action_impact}} AND {{upstream_response_time}}"
    - name: headers
      version: builtin
      enabled: true
      configuration:
        set_headers:
          - name: "echo4"
            value: "test"
            value_type: "plain"
    - name: headers
      version: builtin
      enabled: true
      configuration:
        set_headers:
          - name: "echo5"
            value: "test"
            value_type: "plain"
    - name: apicast
      version: builtin
      enabled: true
      configuration: {}
EOF

# application
oc apply -f - <<EOF
---
apiVersion: capabilities.3scale.net/v1beta1
kind: Application
metadata:
  name: application$n-cr
  namespace: 3scale-test
  annotations:
    "insecure_skip_verify": "true"
spec:
  accountCR: 
    name: developeraccount01
  applicationPlanName: plan01
  productCR: 
    name: product$n-cr
  name: testApp
  description: further testing
EOF
# TODO proxy promote
oc apply -f - <<EOF
---
apiVersion: capabilities.3scale.net/v1beta1
kind: ProxyConfigPromote
metadata:
  name: product$n-v1-production
  namespace: 3scale-test
  annotations:
    "insecure_skip_verify": "true"
spec:
  productCRName: product$n-cr
  production: true
  deleteCR: true
EOF
done


sleep 30
echo Product Route: 
echo "https://$(oc get routes | grep product1 |grep production| awk '{print $2}')" 
echo
echo User_key: 
echo $(curl -s -X 'GET' "https://3scale-admin.$DOMAIN/admin/api/applications.xml?access_token=$ADMIN_ACCESS_TOKEN&page=1&per_page=500&service_id=3" -H 'accept: */*' | grep -oP '<user_key>\K[^<]+' | sed 's/\s//g')
  • Once the script finishes, visit the admin portal and wait for all products to be populated (this is going to take a while)
  • Create an access token
  • Inside the APIcast repo, move into the dev-environments folder
cd dev-environments/http-proxy-plain-http-upstream/
  • Update the docker-compose.yml file as follows, replacing <token> and <3scale_admin_portal> with the values from the previous steps.
diff --git a/dev-environments/http-proxy-plain-http-upstream/docker-compose.yml b/dev-environments/http-proxy-plain-http-upstream/docker-compose.yml
index f7b311a5..5032bc13 100644                                                                                                                     
--- a/dev-environments/http-proxy-plain-http-upstream/docker-compose.yml                                                                            
+++ b/dev-environments/http-proxy-plain-http-upstream/docker-compose.yml                                                                            
@@ -12,11 +12,12 @@ services:                                                                                                                       
     - two.upstream                                                                                                                                 
     environment:                                                                                                                                   
       THREESCALE_CONFIG_FILE: /tmp/config.json                                                                                                     
-      THREESCALE_DEPLOYMENT_ENV: staging                                                                                                           
-      APICAST_CONFIGURATION_LOADER: lazy                                                                                                           
+      THREESCALE_PORTAL_ENDPOINT: "https://<token>@<3scale_admin_portal>"
+      THREESCALE_DEPLOYMENT_ENV: production                                                                                                        
+      APICAST_CONFIGURATION_LOADER: boot                                                                                                           
       APICAST_WORKERS: 1                                                                                                                           
-      APICAST_LOG_LEVEL: debug                                                                                                                     
-      APICAST_CONFIGURATION_CACHE: "0"                                                                                                             
+      APICAST_LOG_LEVEL: notice                                                                                                                    
+      APICAST_CONFIGURATION_CACHE: "15"                                                                                                            
     expose:                                                                                                                                        
       - "8080"                                                                                                                                     
       - "8090"                                                                                                                                     
  • Start the gateway
make gateway IMAGE_NAME=quay.io/3scale/apicast:latest 
  • Get the gateway's IP
docker inspect \
  -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' http-proxy-plain-http-upstream-gateway-1
  • In another terminal, create a k6 script with the following content
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '3m', target: 100 },
    { duration: '6m', target: 200 },
    { duration: '2m', target: 0 },
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], 
    http_req_failed: ['rate<0.01'], 
  },
};

export function setup() {
  // This runs once for the whole test run, after the init context and before default
  const msg = `[${new Date().toISOString()}] Starting load test`;
  console.log(msg);
}

export default function () {
  const url = 'http://<APICast_gateway_IP>:8080/echo?user_key=<user_key>';
  const params = {
    headers: {
      'Host': '<hostname>',
      'Content-Type': 'application/json',
    },
  };

  const res = http.get(url, params);
  if(res.timings.duration > 1000) {
    const msg = `[${new Date().toISOString()}] Response time above 1s: ${String(res.timings.duration)} ms`;
    console.log(msg);
  }

  check(res, {
    'status 200': (r) => r.status === 200
  });

  sleep(1);
}

Replace:

  • APICast_gateway_IP and user_key with the values from the previous steps

  • hostname with the hostname of a product of your choice (look it up in the admin portal)

  • Start the k6 test

k6 run --summary-trend-stats="med,p(95),p(99),p(99.9)" script.js 
  • Once the test is finished, stop the gateway
CTRL-C
  • Check out this branch and build a new runtime image
make runtime-image IMAGE_NAME=quay.io/3scale/apicast:reload-patch
  • Start the gateway again with the new image
make gateway IMAGE_NAME=quay.io/3scale/apicast:reload-patch 
  • Start the test again; you should see the tail latency drop below 1s
k6 run --summary-trend-stats="med,p(95),p(99),p(99.9)" script.js 

Previously, every time the policy chain was rebuilt, the gateway would look
for the policy manifest files on disk. This caused a delay for incoming
requests.

Since the manifests are static and do not change at runtime, it is more
efficient to cache them at the module level. This caching speeds up the
lookup.
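The shape of the fix, as a minimal sketch (names like manifest_cache and
load_manifest are illustrative, not the exact identifiers in this PR):

local cjson = require('cjson')

-- Module-level table: it lives for the lifetime of the Lua VM (one per
-- nginx worker), so it survives policy-chain rebuilds.
local manifest_cache = {}

local function load_manifest(path)
  local cached = manifest_cache[path]
  if cached then return cached end   -- cache hit: no disk I/O

  -- Cache miss: read and decode the manifest once per worker.
  local file = io.open(path, 'r')
  if not file then return nil end
  local contents = file:read('*a')
  file:close()

  local manifest = cjson.decode(contents)
  manifest_cache[path] = manifest
  return manifest
end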
Creating a JSON schema validator is also somewhat expensive, so we cache
that step with a local LRU cache.
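A sketch of that memoization using OpenResty's resty.lrucache (the
build_validator helper and the cache size of 100 are hypothetical
stand-ins, not the exact code in this PR):

local cjson = require('cjson')
local lrucache = require('resty.lrucache')

-- Hypothetical stand-in for the real JSON-schema compiler; this is the
-- expensive call being memoized.
local function build_validator(schema)
  return function(config) return true end
end

-- Worker-local LRU cache; old validators are evicted once more than 100
-- distinct schemas have been seen.
local validators = assert(lrucache.new(100))

local function validator_for(schema)
  local key = cjson.encode(schema)  -- serialized schema as the cache key
  local validator = validators:get(key)
  if not validator then
    validator = build_validator(schema)
    validators:set(key, validator)
  end
  return validator
end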
@tkan145 requested a review from a team as a code owner · Jan 19, 2026 04:01