
request help: AWS NLB Integration with APISIX Ingress controller in EKS #2315

Open
pavankumar-go opened this issue Oct 29, 2024 · 3 comments


pavankumar-go commented Oct 29, 2024

Issue description

Getting 400s while accessing an APISIX route via the DNS of the NLB, which is targeting the APISIX ingress controller NodePort service.

I have manually set up the NLB and its target groups to target the NodePorts of the controller service:

apisix-ingress-controller-apisix-gateway   NodePort    172.16.172.162   <none>        80:31570/TCP,443:30636/TCP   40m
(Screenshot: 2024-10-29 at 10:12:52 PM)
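
For clarity, the manual wiring is roughly the following (a sketch reconstructed from the port mapping above; the target group names are hypothetical):

# NLB listener TCP :80  -> target group "apisix-http"  -> TCP 31570 on the EKS worker nodes (instance targets)
# NLB listener TCP :443 -> target group "apisix-https" -> TCP 30636 on the EKS worker nodes (instance targets)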

I have created an Ingress that uses the apisix IngressClass:

spec:
  ingressClassName: apisix
  rules:
  - host: sample-app.dev.vida.id
    http:
      paths:
      - backend:
          service:
            name: sample-app
            port:
              number: 80
        path: /
        pathType: Prefix
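
(For completeness: the backend this points to is a plain Service named sample-app in the dev namespace. A minimal sketch; the selector and target port are assumptions, since the app manifest isn't shown here.)

apiVersion: v1
kind: Service
metadata:
  name: sample-app
  namespace: dev
spec:
  selector:
    app: sample-app   # assumed label
  ports:
  - port: 80          # matches the Ingress backend port above
    targetPort: 8080  # assumed container port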

I was able to verify the route creation by port-forwarding to the controller gateway service:

10723 ◯  curl 0:9080/healthz -v  -H 'Host:sample-app.dev.vida.id' -k                                                                                
*   Trying 0.0.0.0:9080...
* Connected to 0.0.0.0 (0.0.0.0) port 9080
> GET /healthz HTTP/1.1
> Host:sample-app.dev.vida.id
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Content-Length: 0
< Connection: keep-alive
< Date: Tue, 29 Oct 2024 16:23:26 GMT
< Server: APISIX/3.5.0
<
* Connection #0 to host 0.0.0.0 left intact

But I'm getting a 400 when accessing it via the DNS of the NLB, which is targeting the controller NodePort service:
(Screenshot: 2024-10-29 at 10:16:48 PM)

Also, I noticed that the sample-app Ingress ADDRESS field is empty.

10750 ◯  k get ingress -n dev sample-app
NAME                            CLASS    HOSTS                                ADDRESS   PORTS   AGE
sample-app                   apisix   sample-app.dev.vida.id                                80      1d
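
(The ADDRESS column is presumably populated by the controller from ingressPublishService or ingressStatusAddress, both of which are commented out in the values below. A minimal sketch, assuming the gateway Service shown above is the one to publish; the namespace/name and the NLB hostname are placeholders to adjust.)

config:
  # Publish this Service's address into the status of managed Ingresses.
  ingressPublishService: "ingress-apisix/apisix-ingress-controller-apisix-gateway"
  # Or publish a fixed address instead, e.g. the NLB DNS name (placeholder value):
  # ingressStatusAddress:
  # - "my-nlb-xxxxxxxx.elb.<region>.amazonaws.com"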

I'm using the apisix-ingress-controller chart.

Here are the Helm values that I have used:

nameOverride: ""

fullnameOverride: "apisix-ingress-controller"

labelsOverride: {}

annotations: {}

rbac:
  create: true

serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: true

replicaCount: 3

image:
  repository: apache/apisix-ingress-controller
  pullPolicy: IfNotPresent
  tag: "1.8.3"

podAnnotations: {}

priorityClassName: ""

imagePullSecrets: []
clusterDomain: cluster.local

service:
  port: 80

config:
  etcdserver:
    enabled: true
  logLevel: "info"
  logOutput: "stderr"
  httpListen: ":8080"
  httpsListen: ":8443"
  # ingressPublishService: "ingress-apisix/apisix-ingress-controller"
  ingressStatusAddress: []
  # - "108.136.185.81"
  enableProfiling: false
  apisixResourceSyncInterval: "1h"
  pluginMetadataCM: ""
  kubernetes:
    # -- the Kubernetes configuration file path, default is "", so the in-cluster
    # configuration will be used.
    kubeconfig: ""
    # -- how often apisix-ingress-controller re-synchronizes with Kubernetes;
    # the default is 6h.
    resyncInterval: "6h"
    # -- namespace_selector is the basis for selecting managed namespaces.
    # This field has been supported since version 1.4.0.
    # For example, "apisix.ingress=watching" makes the controller watch only namespaces labeled "apisix.ingress=watching".
    namespaceSelector: [""]
    # -- the election id for the controller leader campaign;
    # only the leader will watch and deliver resource changes,
    # while the other instances (as candidates) stand by.
    electionId: "ingress-apisix-leader"
    # -- The class of an Ingress object is set using the field IngressClassName in
    # Kubernetes clusters version v1.18.0 or higher or the annotation
    # "kubernetes.io/ingress.class" (deprecated).
    ingressClass: "apisix"
    # -- the supported ingress api group version, can be "networking/v1beta1",
    # "networking/v1" (for Kubernetes version v1.19.0 or higher), and
    # "extensions/v1beta1", default is "networking/v1".
    ingressVersion: "networking/v1"
    # -- whether to watch EndpointSlices rather than Endpoints.
    watchEndpointSlices: false
    # -- the supported apisixroute api group version, can be "apisix.apache.org/v2"
    # "apisix.apache.org/v2beta3" or "apisix.apache.org/v2beta2"
    apisixRouteVersion: "apisix.apache.org/v2"
    # -- whether to enable support for the Gateway API.
    # Note: This feature is currently under development and may not work as expected.
    # It is not recommended for production use until support reaches Beta level or GA.
    enableGatewayAPI: false
    # -- the resource API version, support "apisix.apache.org/v2beta3" and "apisix.apache.org/v2".
    # default is "apisix.apache.org/v2"
    apiVersion: "apisix.apache.org/v2"


  # -- APISIX related configurations.
  apisix:
    # -- Setting this value overrides serviceName and serviceNamespace.
    # serviceFullname: "apisix-admin.apisix.svc.local"
    serviceNamespace: ingress-apisix
    servicePort: 9180
    adminKey: REDACTED
    clusterName: "dev"
    adminAPIVersion: "v3"
    # -- The APISIX Helm chart supports storing user credentials in a secret.
    # The secret needs to contain a single key for admin token with key adminKey by default.
    existingSecret: ""
    # -- Name of the admin token key in the secret, overrides the default key name "adminKey"
    existingSecretAdminKeyKey: ""

resources: {}

initContainer:
  image: busybox
  tag: 1.28

autoscaling:
  enabled: false

# -- Update strategy for apisix ingress controller deployment
updateStrategy:
  type: RollingUpdate

nodeSelector:
  nodesrole: ingress
tolerations:
- key: noderole
  value: ingress
  operator: Equal
  effect: NoSchedule
affinity: {}
# -- Topology Spread Constraints for pod assignment spread across your cluster among failure-domains
# ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
topologySpreadConstraints: []

serviceMonitor:
  enabled: false

podDisruptionBudget:
  enabled: false

podSecurityContext: {}

securityContext: {}

gateway:
  # -- Apache APISIX service type for user access itself
  type: NodePort
  externalTrafficPolicy: Cluster
  externalIPs: []
  nginx:
    # -- Nginx workerRlimitNoFile
    workerRlimitNofile: "20480"
    # -- Nginx worker connections
    workerConnections: "10620"
    # -- Nginx worker processes
    workerProcesses: auto
    # -- Nginx error logs path
    errorLog: stderr
    # -- Nginx error logs level
    errorLogLevel: warn
  resources: {}
  securityContext: {}
  tls:
    enabled: true
    http2:
      enabled: true
    # -- TLS protocols allowed to be used.
    sslProtocols: "TLSv1.2 TLSv1.3"
    # -- Define the SNI to fall back to if none is presented by the client.
    fallbackSNI: ""

Environment

  • your apisix-ingress-controller version (output of apisix-ingress-controller version --long):
  • your Kubernetes cluster version (output of kubectl version):
  • if you run apisix-ingress-controller in Bare-metal environment, also show your OS version (uname -a):
pavankumar-go changed the title from "request help: How to integrate AWS NLB to APISIX Ingress controller in EKS?" to "request help: AWS NLB Integration with APISIX Ingress controller in EKS" on Oct 29, 2024
@ProfessorMDA

@pavankumar-go Have you figured out how to manage the deployment of apisix-ingress to EKS (AWS)?

Currently I'm trying to move from ingress-nginx to apisix-ingress, but I'm still stuck.

@pavankumar-go (Author)

Yeah, I was able to set up the APISIX controller in parallel with the Kubernetes ingress-nginx controller (I'm using a Terraform helm_release to manage both).

@pavankumar-go (Author)

It works with the APISIX service type LoadBalancer, but not with NodePort + a self-managed NLB.
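
For reference, when the gateway Service is type LoadBalancer in EKS, the AWS Load Balancer Controller provisions the NLB from Service annotations rather than from a hand-built target group. A minimal sketch of what that Service can look like (the annotation values, selector, and target ports are assumptions based on APISIX defaults, not taken from this cluster):

apiVersion: v1
kind: Service
metadata:
  name: apisix-ingress-controller-apisix-gateway
  namespace: ingress-apisix
  annotations:
    # AWS Load Balancer Controller annotations (adjust scheme/target type to your VPC setup)
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app.kubernetes.io/name: apisix-ingress-controller   # assumed selector
  ports:
  - name: http
    port: 80
    targetPort: 9080   # APISIX default HTTP listen port
  - name: https
    port: 443
    targetPort: 9443   # APISIX default HTTPS listen port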
