
Prometheus metrics are not collected #488

Closed

yujinchoi-94 opened this issue Feb 28, 2023 · 8 comments

yujinchoi-94 (Contributor) commented Feb 28, 2023

Hi, I'm trying to collect metrics using the APISIX Prometheus plugin.

I found this PR and followed the document.
(I manually configured the Prometheus port based on this.)

However, Prometheus is not collecting any metrics from the APISIX ServiceMonitor.

I did find that a kubernetes_sd_config has been generated for APISIX in the Prometheus UI.

Prometheus configuration (as shown in the Prometheus UI):

scrape_configs:
- job_name: serviceMonitor/ingress-apisix/dev-apisix/0
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 5s
  metrics_path: /apisix/prometheus/metrics
  scheme: http
  follow_redirects: true
  enable_http2: true
  relabel_configs:
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_instance, __meta_kubernetes_service_labelpresent_app_kubernetes_io_instance]
    separator: ;
    regex: (dev-apisix);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_managed_by,
      __meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by]
    separator: ;
    regex: (Helm);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name, __meta_kubernetes_service_labelpresent_app_kubernetes_io_name]
    separator: ;
    regex: (apisix);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_service, __meta_kubernetes_service_labelpresent_app_kubernetes_io_service]
    separator: ;
    regex: (apisix-gateway);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_version, __meta_kubernetes_service_labelpresent_app_kubernetes_io_version]
    separator: ;
    regex: (3.1.0);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_helm_sh_chart, __meta_kubernetes_service_labelpresent_helm_sh_chart]
    separator: ;
    regex: (apisix-1.1.0);true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    separator: ;
    regex: prometheus
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: (.*)
    target_label: container
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_phase]
    separator: ;
    regex: (Failed|Succeeded)
    replacement: $1
    action: drop
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: prometheus
    action: replace
  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    modulus: 3
    target_label: __tmp_hash
    replacement: $1
    action: hashmod
  - source_labels: [__tmp_hash]
    separator: ;
    regex: "2"
    replacement: $1
    action: keep
  kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: true
    enable_http2: true
    namespaces:
      own_namespace: false
      names:
      - ingress-apisix

By the way, there was no target for APISIX.
Moreover, on the Service Discovery page some targets were discovered, but they were all dropped.
[Screenshots: Prometheus Service Discovery page (2023-02-28 18:17) showing the discovered targets being dropped]

FYI, Prometheus is deployed in another namespace, so I set the serviceMonitorSelectorNilUsesHelmValues option to false.
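
For reference, a minimal sketch of where that option sits in the kube-prometheus-stack values (the key path is assumed from the chart's standard layout; only the relevant keys are shown):

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      # When false and serviceMonitorSelector is left empty, the generated
      # Prometheus resource selects ServiceMonitors regardless of their Helm
      # release label, instead of only those created by this release.
      serviceMonitorSelectorNilUsesHelmValues: false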

Here's what I've configured.

APISIX Helm values

apisix:

  gateway:
    type: NodePort
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internal
        alb.ingress.kubernetes.io/certificate-arn: ***
        alb.ingress.kubernetes.io/security-groups: 'common-internal-alb-sg'
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
        alb.ingress.kubernetes.io/healthcheck-path: /
        alb.ingress.kubernetes.io/success-codes: '404' # 404 Route Not Found
        alb.ingress.kubernetes.io/backend-protocol: HTTP
      hosts:
        - host: ***
          paths:
            - /*

  discovery:
    enabled: true
    registry:
      eureka:
        host:
          - ***
          - ***
        prefix: "/eureka/"
        fetch_interval: 30
        weight: 100
        timeout:
          connect: 2000
          send: 2000
          read: 5000

  # Observability configuration.
  # ref: https://apisix.apache.org/docs/apisix/plugins/prometheus/
  serviceMonitor:
    enabled: true
    labels:
      release: kube-prometheus-stack

  dashboard:
    enabled: true

    config:
      conf:
        etcd:
          endpoints:
            - dev-apisix-etcd:2379

    service:
      type: NodePort
      port: 80

    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internal
        alb.ingress.kubernetes.io/certificate-arn: ***
        alb.ingress.kubernetes.io/security-groups: 'common-internal-alb-sg'
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
        alb.ingress.kubernetes.io/healthcheck-path: /ping
        alb.ingress.kubernetes.io/success-codes: '200'
        alb.ingress.kubernetes.io/backend-protocol: HTTP
      hosts:
        - host: ***
          paths:
            - /*

  ingress-controller:
    enabled: true
    config:
      apisix:
        serviceName: dev-apisix-admin
        serviceNamespace: ingress-apisix
  plugins:
    - api-breaker
    - prometheus
    - forward-auth
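
For reference, a quick way to double-check that the metrics endpoint itself responds (a sketch; it assumes the chart exposes the plugin on container port 9091, which matches the prometheus port on the gateway Service below):

# forward the gateway Service's prometheus port to localhost
kubectl -n ingress-apisix port-forward svc/dev-apisix-gateway 9091:9091

# in another shell, fetch the metrics the ServiceMonitor is expected to scrape
curl http://127.0.0.1:9091/apisix/prometheus/metrics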

Gateway Service

kubectl get service dev-apisix-gateway -o json
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"dev-apisix\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"apisix\",\"app.kubernetes.io/service\":\"apisix-gateway\",\"app.kubernetes.io/version\":\"3.1.0\",\"argocd.argoproj.io/instance\":\"dev-apisix\",\"helm.sh/chart\":\"apisix-1.1.0\"},\"name\":\"dev-apisix-gateway\",\"namespace\":\"ingress-apisix\"},\"spec\":{\"externalTrafficPolicy\":\"Cluster\",\"ports\":[{\"name\":\"apisix-gateway\",\"port\":80,\"protocol\":\"TCP\",\"targetPort\":9080}],\"selector\":{\"app.kubernetes.io/instance\":\"dev-apisix\",\"app.kubernetes.io/name\":\"apisix\"},\"type\":\"NodePort\"}}\n"
        },
        "creationTimestamp": "2023-01-30T08:01:53Z",
        "labels": {
            "app.kubernetes.io/instance": "dev-apisix",
            "app.kubernetes.io/managed-by": "Helm",
            "app.kubernetes.io/name": "apisix",
            "app.kubernetes.io/service": "apisix-gateway",
            "app.kubernetes.io/version": "3.1.0",
            "argocd.argoproj.io/instance": "dev-apisix",
            "helm.sh/chart": "apisix-1.1.0"
        },
        "name": "dev-apisix-gateway",
        "namespace": "ingress-apisix",
        "resourceVersion": "520319969",
        "uid": "c30b9ffb-feab-46a9-a174-96e7e19c91ae"
    },
    "spec": {
        "clusterIP": "172.20.54.80",
        "clusterIPs": [
            "172.20.54.80"
        ],
        "externalTrafficPolicy": "Cluster",
        "ipFamilies": [
            "IPv4"
        ],
        "ipFamilyPolicy": "SingleStack",
        "ports": [
            {
                "name": "apisix-gateway",
                "nodePort": 31768,
                "port": 80,
                "protocol": "TCP",
                "targetPort": 9080
            },
            {
                "name": "prometheus",
                "nodePort": 31040,
                "port": 9091,
                "protocol": "TCP",
                "targetPort": 9091
            }
        ],
        "selector": {
            "app.kubernetes.io/instance": "dev-apisix",
            "app.kubernetes.io/name": "apisix"
        },
        "sessionAffinity": "None",
        "type": "NodePort"
    },
    "status": {
        "loadBalancer": {}
    }
}

ServiceMonitor

kubectl get servicemonitor dev-apisix -o json
{
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "ServiceMonitor",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"monitoring.coreos.com/v1\",\"kind\":\"ServiceMonitor\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"dev-apisix\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"apisix\",\"app.kubernetes.io/version\":\"3.1.0\",\"argocd.argoproj.io/instance\":\"dev-apisix\",\"helm.sh/chart\":\"apisix-1.1.0\",\"release\":\"kube-prometheus-stack\"},\"name\":\"dev-apisix\",\"namespace\":\"ingress-apisix\"},\"spec\":{\"endpoints\":[{\"interval\":\"15s\",\"path\":\"/apisix/prometheus/metrics\",\"scheme\":\"http\",\"targetPort\":\"prometheus\"}],\"namespaceSelector\":{\"matchNames\":[\"ingress-apisix\"]},\"selector\":{\"matchLabels\":{\"app.kubernetes.io/instance\":\"dev-apisix\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"apisix\",\"app.kubernetes.io/service\":\"apisix-gateway\",\"app.kubernetes.io/version\":\"3.1.0\",\"helm.sh/chart\":\"apisix-1.1.0\"}}}}\n"
        },
        "creationTimestamp": "2023-01-30T08:01:53Z",
        "generation": 6,
        "labels": {
            "app.kubernetes.io/instance": "dev-apisix",
            "app.kubernetes.io/managed-by": "Helm",
            "app.kubernetes.io/name": "apisix",
            "app.kubernetes.io/version": "3.1.0",
            "argocd.argoproj.io/instance": "dev-apisix",
            "helm.sh/chart": "apisix-1.1.0",
            "release": "kube-prometheus-stack"
        },
        "name": "dev-apisix",
        "namespace": "ingress-apisix",
        "resourceVersion": "525298855",
        "uid": "3199f803-fc45-4450-88bc-300068e05cce"
    },
    "spec": {
        "endpoints": [
            {
                "interval": "15s",
                "path": "/apisix/prometheus/metrics",
                "scheme": "http",
                "targetPort": "prometheus"
            }
        ],
        "namespaceSelector": {
            "matchNames": [
                "ingress-apisix"
            ]
        },
        "selector": {
            "matchLabels": {
                "app.kubernetes.io/instance": "dev-apisix",
                "app.kubernetes.io/managed-by": "Helm",
                "app.kubernetes.io/name": "apisix",
                "app.kubernetes.io/service": "apisix-gateway",
                "app.kubernetes.io/version": "3.1.0",
                "helm.sh/chart": "apisix-1.1.0"
            }
        }
    }
}
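
For reference, a quick way to confirm that these labels line up with the selectors involved (a sketch using plain kubectl):

# the ServiceMonitor must carry release=kube-prometheus-stack (or dev-kube-prometheus-stack)
# to match the serviceMonitorSelector on the Prometheus resource below
kubectl -n ingress-apisix get servicemonitor dev-apisix --show-labels

# the Service must carry every label listed in spec.selector.matchLabels above
kubectl -n ingress-apisix get service dev-apisix-gateway --show-labels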

Prometheus

kubectl get prometheus kube-prometheus-stack-prometheus -o json
{
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "Prometheus",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"monitoring.coreos.com/v1\",\"kind\":\"Prometheus\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kube-prometheus-stack-prometheus\",\"app.kubernetes.io/instance\":\"dev-kube-prometheus-stack\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/part-of\":\"kube-prometheus-stack\",\"app.kubernetes.io/version\":\"44.2.1\",\"argocd.argoproj.io/instance\":\"dev-kube-prometheus-stack\",\"chart\":\"kube-prometheus-stack-44.2.1\",\"heritage\":\"Helm\",\"release\":\"dev-kube-prometheus-stack\"},\"name\":\"kube-prometheus-stack-prometheus\",\"namespace\":\"monitor\"},\"spec\":{\"additionalScrapeConfigs\":{\"key\":\"additional-scrape-configs.yaml\",\"name\":\"kube-prometheus-stack-prometheus-scrape-confg\"},\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"spec\",\"operator\":\"In\",\"values\":[\"4xlarge\"]}]}]}}},\"alerting\":{\"alertmanagers\":[{\"apiVersion\":\"v2\",\"name\":\"kube-prometheus-stack-alertmanager\",\"namespace\":\"monitor\",\"pathPrefix\":\"/\",\"port\":\"http-web\"}]},\"configMaps\":[\"bucket\"],\"enableAdminAPI\":false,\"externalUrl\":\"http://kube-prometheus-stack-prometheus.monitor:9090\",\"hostNetwork\":false,\"image\":\"quay.io/prometheus/prometheus:v2.41.0\",\"listenLocal\":false,\"logFormat\":\"logfmt\",\"logLevel\":\"info\",\"paused\":false,\"podMonitorNamespaceSelector\":{},\"podMonitorSelector\":{\"matchLabels\":{\"release\":\"dev-kube-prometheus-stack\"}},\"portName\":\"http-web\",\"probeNamespaceSelector\":{},\"probeSelector\":{\"matchLabels\":{\"release\":\"dev-kube-prometheus-stack\"}},\"replicas\":1,\"resources\":{\"limits\":{\"cpu\":8,\"memory\":\"8G\"},\"requests\":{\"cpu\":8,\"memory\":\"8G\"}},\"retention\":\"10d\",\"routePrefix\":\"/\",\"ruleNamespaceSelector\":{},\"ruleSelector\":{\"matchLabels\":{\"release\":\"dev-kube-prometheus-stack\"}},\"scrapeInterval\":\"1m\",\"scrapeTimeout\":\"5s\",\"securityContext\":{\"fsGroup\":2000,\"runAsGroup\":2000,\"runAsNonRoot\":true,\"runAsUser\":1000},\"serviceAccountName\":\"kube-prometheus-stack-prometheus\",\"serviceMonitorNamespaceSelector\":{},\"serviceMonitorSelector\":{\"matchExpressions\":[{\"key\":\"release\",\"operator\":\"In\",\"values\":[\"kube-prometheus-stack\",\"dev-kube-prometheus-stack\"]}]},\"shards\":3,\"storage\":{\"volumeClaimTemplate\":{\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"50Gi\"}},\"storageClassName\":\"efs\"}}},\"thanos\":{\"baseImage\":\"quay.io/thanos/thanos\",\"objectStorageConfigFile\":\"/etc/prometheus/configmaps/bucket/bucket.yaml\",\"version\":\"v0.30.1\",\"volumeMounts\":[{\"mountPath\":\"/etc/prometheus/configmaps/bucket\",\"name\":\"configmap-bucket\"}]},\"version\":\"v2.41.0\",\"walCompression\":true}}\n"
        },
        "creationTimestamp": "2022-04-14T05:37:13Z",
        "generation": 24,
        "labels": {
            "app": "kube-prometheus-stack-prometheus",
            "app.kubernetes.io/instance": "dev-kube-prometheus-stack",
            "app.kubernetes.io/managed-by": "Helm",
            "app.kubernetes.io/part-of": "kube-prometheus-stack",
            "app.kubernetes.io/version": "44.2.1",
            "argocd.argoproj.io/instance": "dev-kube-prometheus-stack",
            "chart": "kube-prometheus-stack-44.2.1",
            "heritage": "Helm",
            "release": "dev-kube-prometheus-stack"
        },
        "name": "kube-prometheus-stack-prometheus",
        "namespace": "monitor",
        "resourceVersion": "525328723",
        "uid": "a89b443b-7c98-4594-8ffe-66d963cfbb29"
    },
    "spec": {
        "additionalScrapeConfigs": {
            "key": "additional-scrape-configs.yaml",
            "name": "kube-prometheus-stack-prometheus-scrape-confg"
        },
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "spec",
                                    "operator": "In",
                                    "values": [
                                        "4xlarge"
                                    ]
                                }
                            ]
                        }
                    ]
                }
            }
        },
        "alerting": {
            "alertmanagers": [
                {
                    "apiVersion": "v2",
                    "name": "kube-prometheus-stack-alertmanager",
                    "namespace": "monitor",
                    "pathPrefix": "/",
                    "port": "http-web"
                }
            ]
        },
        "configMaps": [
            "bucket"
        ],
        "enableAdminAPI": false,
        "evaluationInterval": "30s",
        "externalUrl": "http://kube-prometheus-stack-prometheus.monitor:9090",
        "hostNetwork": false,
        "image": "quay.io/prometheus/prometheus:v2.41.0",
        "listenLocal": false,
        "logFormat": "logfmt",
        "logLevel": "debug",
        "paused": false,
        "podMonitorNamespaceSelector": {},
        "podMonitorSelector": {
            "matchLabels": {
                "release": "dev-kube-prometheus-stack"
            }
        },
        "portName": "http-web",
        "probeNamespaceSelector": {},
        "probeSelector": {
            "matchLabels": {
                "release": "dev-kube-prometheus-stack"
            }
        },
        "replicas": 1,
        "resources": {
            "limits": {
                "cpu": 8,
                "memory": "8G"
            },
            "requests": {
                "cpu": 8,
                "memory": "8G"
            }
        },
        "retention": "10d",
        "routePrefix": "/",
        "ruleNamespaceSelector": {},
        "ruleSelector": {
            "matchLabels": {
                "release": "dev-kube-prometheus-stack"
            }
        },
        "scrapeInterval": "1m",
        "scrapeTimeout": "5s",
        "securityContext": {
            "fsGroup": 2000,
            "runAsGroup": 2000,
            "runAsNonRoot": true,
            "runAsUser": 1000
        },
        "serviceAccountName": "kube-prometheus-stack-prometheus",
        "serviceMonitorNamespaceSelector": {},
        "serviceMonitorSelector": {
            "matchExpressions": [
                {
                    "key": "release",
                    "operator": "In",
                    "values": [
                        "kube-prometheus-stack",
                        "dev-kube-prometheus-stack"
                    ]
                }
            ]
        },
        "shards": 3,
        "storage": {
            "volumeClaimTemplate": {
                "spec": {
                    "accessModes": [
                        "ReadWriteOnce"
                    ],
                    "resources": {
                        "requests": {
                            "storage": "50Gi"
                        }
                    },
                    "storageClassName": "efs"
                }
            }
        },
        "thanos": {
            "baseImage": "quay.io/thanos/thanos",
            "objectStorageConfigFile": "/etc/prometheus/configmaps/bucket/bucket.yaml",
            "version": "v0.30.1",
            "volumeMounts": [
                {
                    "mountPath": "/etc/prometheus/configmaps/bucket",
                    "name": "configmap-bucket"
                }
            ]
        },
        "version": "v2.41.0",
        "walCompression": true
    },
    "status": {
        "availableReplicas": 3,
        "conditions": [
            {
                "lastTransitionTime": "2023-02-28T08:09:33Z",
                "observedGeneration": 24,
                "status": "True",
                "type": "Available"
            },
            {
                "lastTransitionTime": "2023-01-31T08:00:15Z",
                "observedGeneration": 24,
                "status": "True",
                "type": "Reconciled"
            }
        ],
        "paused": false,
        "replicas": 3,
        "shardStatuses": [
            {
                "availableReplicas": 1,
                "replicas": 1,
                "shardID": "0",
                "unavailableReplicas": 0,
                "updatedReplicas": 1
            },
            {
                "availableReplicas": 1,
                "replicas": 1,
                "shardID": "1",
                "unavailableReplicas": 0,
                "updatedReplicas": 1
            },
            {
                "availableReplicas": 1,
                "replicas": 1,
                "shardID": "2",
                "unavailableReplicas": 0,
                "updatedReplicas": 1
            }
        ],
        "unavailableReplicas": 0,
        "updatedReplicas": 3
    }
}

Thank you in advance :)

Gallardot (Member) commented:

@yujinchoi-94
Could you please provide your Kubernetes version? Is the Prometheus Operator installed? And what parameters did you use in your Helm chart installation command?

yujinchoi-94 (Contributor, Author) commented Feb 28, 2023

@Gallardot
Yes, the Prometheus Operator has been installed.

Here's my configuration for the Prometheus Operator:

kube-prometheus-stack:

  fullnameOverride: "kube-prometheus-stack"

  grafana:
    service:
      type: NodePort
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internal
        alb.ingress.kubernetes.io/certificate-arn: ***
        alb.ingress.kubernetes.io/security-groups: 'common-internal-alb-sg'
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
        alb.ingress.kubernetes.io/healthcheck-path: /api/health
        alb.ingress.kubernetes.io/success-codes: '200'
        alb.ingress.kubernetes.io/backend-protocol: HTTP
      hosts:
        - ***
      pathType: Prefix
      path: /*


    defaultDashboardsTimezone: Asia/Seoul


    additionalDataSources:
    - name: Thanos
      type: prometheus
      url: http://thanos-query-frontend.monitor:9090
      uid: Thanos
    - name: Thanos(auto-resolution)
      type: prometheus
      url: http://thanos-query-frontend.monitor:9090
      uid: ThanosAutoResolution
      jsonData: 
        customQueryParameters: max_source_resolution=auto

    persistence:
      enabled: true
      type: pvc
      size: 1Gi
      storageClassName: efs
      accessModes: ["ReadWriteOnce"]

    plugins: []
  kubeApiServer:
    enabled: false
  kubelet:
    enabled: true
  kubeControllerManager:
    enabled: false
  coreDns:
    enabled: false
  kubeEtcd:
    enabled: false
  kubeScheduler:
    enabled: false
  kubeProxy:
    enabled: false
  kubeStateMetrics:
    enabled: true
  nodeExporter:
    enabled: false
  prometheus:
    serviceAccount:
      create: true
      name: "kube-prometheus-stack-prometheus"
      annotations:
        eks.amazonaws.com/role-arn: ***

    thanosService:
      enabled: true

    thanosServiceMonitor:
      enabled: true

    prometheusSpec:
      scrapeInterval: "1m"

      scrapeTimeout: "5s"

      configMaps:
        - bucket

      serviceMonitorSelector:
        matchExpressions:
          - {key: release, operator: In, values: [kube-prometheus-stack, dev-kube-prometheus-stack]}

      shards: 3

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: spec
                    operator: In
                    values:
                      - 4xlarge

      storageSpec:
        volumeClaimTemplate:
          spec:
            storageClassName: efs
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 50Gi

      ## Resource limits & requests
      ##
      resources:
        requests:
          cpu: 8
          memory: 8G
        limits:
          cpu: 8
          memory: 8G


      additionalScrapeConfigs:
      ###
      thanos:
        baseImage: quay.io/thanos/thanos
        version: v0.30.1
        objectStorageConfigFile: /etc/prometheus/configmaps/bucket/bucket.yaml

        volumeMounts:
          - name: configmap-bucket # references the volume registered by the prometheus.prometheusSpec.configMaps setting
            mountPath: /etc/prometheus/configmaps/bucket

Kubernetes Version

❯ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.14-eks-ffeb93d", GitCommit:"f76e2b475d1433cdb6bd546e9e8f129fde938fb7", GitTreeState:"clean", BuildDate:"2022-11-29T18:41:00Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

(For your information, I've installed APISIX and Prometheus using Argo CD.)

Gallardot (Member) commented Feb 28, 2023

@yujinchoi-94 Thanks. I also need the Helm chart parameters you used when installing APISIX, so I can try to reproduce it.

yujinchoi-94 (Contributor, Author) commented Feb 28, 2023

@Gallardot You can check below. (This is what I provided in the first comment.)

apisix:

  gateway:
    type: NodePort
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internal
        alb.ingress.kubernetes.io/certificate-arn: ***
        alb.ingress.kubernetes.io/security-groups: 'common-internal-alb-sg'
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
        alb.ingress.kubernetes.io/healthcheck-path: /
        alb.ingress.kubernetes.io/success-codes: '404' # 404 Route Not Found
        alb.ingress.kubernetes.io/backend-protocol: HTTP
      hosts:
        - host: ***
          paths:
            - /*

  discovery:
    enabled: true
    registry:
      eureka:
        host:
          - ***
          - ***
        prefix: "/eureka/"
        fetch_interval: 30
        weight: 100
        timeout:
          connect: 2000
          send: 2000
          read: 5000

  # Observability configuration.
  # ref: https://apisix.apache.org/docs/apisix/plugins/prometheus/
  serviceMonitor:
    enabled: true
    labels:
      release: kube-prometheus-stack

  dashboard:
    enabled: true

    config:
      conf:
        etcd:
          endpoints:
            - dev-apisix-etcd:2379

    service:
      type: NodePort
      port: 80

    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internal
        alb.ingress.kubernetes.io/certificate-arn: ***
        alb.ingress.kubernetes.io/security-groups: 'common-internal-alb-sg'
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
        alb.ingress.kubernetes.io/healthcheck-path: /ping
        alb.ingress.kubernetes.io/success-codes: '200'
        alb.ingress.kubernetes.io/backend-protocol: HTTP
      hosts:
        - host: ***
          paths:
            - /*

  ingress-controller:
    enabled: true
    config:
      apisix:
        serviceName: dev-apisix-admin
        serviceNamespace: ingress-apisix
  plugins:
    - api-breaker
    - prometheus
    - forward-auth

Gallardot (Member) commented Feb 28, 2023

@yujinchoi-94
I can't reproduce it. It works fine in my environment.

An example:

helm install kube-prometheus-stack -n apisix prometheus-community/kube-prometheus-stack --create-namespace
helm upgrade --install apisix apisix/apisix --create-namespace  --namespace apisix --set serviceMonitor.enabled=true --set serviceMonitor.labels.release=kube-prometheus-stack
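
To confirm the target shows up after that, something like the following should work (a sketch; the Service name is assumed from what the chart creates under this release name):

kubectl -n apisix port-forward svc/kube-prometheus-stack-prometheus 9090:9090
# then open http://127.0.0.1:9090/targets in a browser, or query the API:
curl -s http://127.0.0.1:9090/api/v1/targets | grep apisix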

yujinchoi-94 (Contributor, Author) commented:

@Gallardot Thank you for responding so quickly :)

I've installed them in different namespaces: kube-prometheus-stack is in the monitor namespace and APISIX is in the ingress-apisix namespace.
The ServiceMonitor for APISIX is also installed in the ingress-apisix namespace.
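
For reference, cross-namespace discovery is governed by serviceMonitorNamespaceSelector on the Prometheus resource; in the dump above it is {} (empty), which should allow ServiceMonitors from any namespace to be matched. A quick check (a sketch, using the resource names from my earlier comments):

# an empty ({}) selector means ServiceMonitors in any namespace can be matched
kubectl -n monitor get prometheus kube-prometheus-stack-prometheus \
  -o jsonpath='{.spec.serviceMonitorNamespaceSelector}'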

yujinchoi-94 (Contributor, Author) commented:

@Gallardot
I don't know why, but after I completely deleted the namespace where the APISIX Helm chart is installed (ingress-apisix) and created the namespace again, it works fine. I guess there were some installation-related issues, since I had installed and deleted it frequently.
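
A sketch of the equivalent manual steps (the release name dev-apisix is assumed from the resource names above; in my case the reinstall itself was driven by Argo CD):

helm -n ingress-apisix uninstall dev-apisix
kubectl delete namespace ingress-apisix
kubectl create namespace ingress-apisix
# then reinstall via an Argo CD sync (or the equivalent helm upgrade --install)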

Thank you for your help :)

Gallardot (Member) commented:

@yujinchoi-94
OK, I will close this issue. If you still have questions, please feel free to reopen it.
