
frontend started at random port #1001

Closed
sahilgupta3 opened this issue Jul 22, 2023 · 6 comments
Labels
question Further information is requested

Comments

@sahilgupta3

sahilgupta3 commented Jul 22, 2023

Question


Hi, I used Docker Compose and it seems like the frontend service is starting at a random port. Has anyone seen a similar issue?
Any idea what I am doing wrong?

Commands used:
git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo/
sudo docker-compose up --no-build -d

OS: AmazonLinux - EC2
docker --version --> Docker version 20.10.25, build b82b9f3
docker-compose --version --> Docker Compose version v2.20.2


Thanks,
Sahil

@sahilgupta3 added the question label on Jul 22, 2023
@noMoreCLI

This is expected: the frontendproxy service exposes port 8080, and all access to the demo app goes through the Envoy proxy. See https://opentelemetry.io/docs/demo/architecture/
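
A quick way to check which host port was actually published (assuming the Compose service name frontendproxy and the default ENVOY_PORT=8080 from the repo's .env file):

# Print the host port Compose published for the proxy's port 8080
docker-compose port frontendproxy 8080

# Or list every container's port mappings at a glance
sudo docker ps --format "table {{.Names}}\t{{.Ports}}"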

@leonardo3791

leonardo3791 commented Aug 8, 2023

Hello,
I've encountered the same issue on an AWS EC2 instance. I started the demo with the docker-compose command as you mentioned. According to the documentation, after startup you can access the web UI at localhost:8080, but 8080 is the frontend container port, not the host port, which is random. In fact, port 8080 appears as a listening port in the netstat output inside the container, not in the netstat output on the host. The same applies to the other services. Does anyone have an explanation for this behavior, and a solution to fix the frontend port?
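
One way to see the container-port vs host-port difference from the host side (assuming the frontend container is named frontend, as in the demo's compose file):

# netstat inside a container shows *container* ports; from the host,
# ask Docker which host port each container port was mapped to:
docker port frontend

# Host-side listening sockets, for comparison with the in-container view
sudo netstat -tlnp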

Thanks,
Leonardo

@leonardo3791

Hello Sahil,
If you're interested, I found the root cause of the issue and a solution. The problem was caused by the frontend-proxy container, which doesn't start; if you inspect with the command:
docker ps -a
you can see that the container is in an "exited" status. This container is responsible for opening port 8080 of the web store and takes care of all the redirect rules, which is why you can't reach port 8080 on your host. So it is correct that all the other containers, including the frontend, show a random port: it is the frontend-proxy container that opens the correct ports on localhost.
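
To see why it exited, these standard Docker commands help (the container_name is frontend-proxy, per the compose file below):

# Confirm the exit and read Envoy's startup errors
docker ps -a --filter name=frontend-proxy
docker logs frontend-proxy

# Just the exit code, for a quick check
docker inspect --format '{{.State.ExitCode}}' frontend-proxy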
Regarding the error that causes the frontend-proxy container to crash, I found a solution in the following GitHub issue:
#971
Basically, there is a mismatch in this container's environment variables. The fix is to re-map one variable by adding the following line to the "environment" section of the "frontend-proxy" definition in docker-compose.yml:
- OTEL_COLLECTOR_PORT=${OTEL_COLLECTOR_PORT_GRPC}
After doing this, the section looks like this:

  # Frontend Proxy (Envoy)
  frontendproxy:
    image: ${IMAGE_NAME}:${IMAGE_VERSION}-frontendproxy
    container_name: frontend-proxy
    build:
      context: ./
      dockerfile: src/frontendproxy/Dockerfile
    deploy:
      resources:
        limits:
          memory: 50M
    ports:
      - "${ENVOY_PORT}:${ENVOY_PORT}"
      - 10000:10000
    environment:
      - FRONTEND_PORT
      - FRONTEND_HOST
      - FEATURE_FLAG_SERVICE_PORT
      - FEATURE_FLAG_SERVICE_HOST
      - LOCUST_WEB_HOST
      - LOCUST_WEB_PORT
      - GRAFANA_SERVICE_PORT
      - GRAFANA_SERVICE_HOST
      - JAEGER_SERVICE_PORT
      - JAEGER_SERVICE_HOST
      - OTEL_COLLECTOR_HOST
      - OTEL_COLLECTOR_PORT_GRPC
      - OTEL_COLLECTOR_PORT_HTTP
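      # Workaround from #971: re-map the variable the proxy expects to the gRPC port value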
      - OTEL_COLLECTOR_PORT=${OTEL_COLLECTOR_PORT_GRPC}
      - ENVOY_PORT
    depends_on:
      frontend:
        condition: service_started
      featureflagservice:
        condition: service_started
      loadgenerator:
        condition: service_started
      jaeger:
        condition: service_started
      grafana:
        condition: service_started

This worked for me.
I hope this helps you or someone else facing the same issue.
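
To verify the fix without restarting everything (assuming the default ENVOY_PORT=8080):

# Recreate only the proxy so it picks up the new environment
sudo docker-compose up -d --force-recreate frontendproxy

# The container should now stay "Up", and the web store should respond
docker ps --filter name=frontend-proxy
curl -I http://localhost:8080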

@wa-spare

wa-spare commented Aug 9, 2023

This worked for me: frontend-proxy is no longer crashing and I'm able to get to the frontend on http://localhost:8080. Now on to the 'ConnectionResetError: [Errno 104] Connection reset by peer' errors I'm getting from load-generator.

@szberko

szberko commented Aug 23, 2023

> This worked for me: frontend-proxy is no longer crashing and I'm able to get to the frontend on http://localhost:8080. Now on to the 'ConnectionResetError: [Errno 104] Connection reset by peer' errors I'm getting from load-generator.

I'm getting the same error from load-generator as you. Did you find out what the problem could be? TIA!

@puckpuck
Contributor

Closing this as the initial issue is resolved
