DockerSwarmOperator does not work well with Python's tqdm when displaying logs #40571

@bladerail

Apache Airflow Provider(s)

docker

Versions of Apache Airflow Providers

apache-airflow-providers-docker==3.10.0

Apache Airflow version

2.9.1-python3.11

Operating System

CentOS 7

Deployment

Other Docker-based deployment

Deployment details

I am running a Docker Swarm environment consisting of 3 manager hosts and 10 worker hosts, all running CentOS 7 with Docker version 24.0.7.

What happened

We run our DAGs as Python scripts embedded inside Docker containers. When such a script uses the tqdm progress bar, an exception is raised in DockerSwarmOperator's stream_new_logs method, which expects every log line to begin with a timestamp, and the DAG is marked as failed. The Docker container actually continues running to completion, but the task has already failed because of the raised exception.

As a result, the container/service does not get cleaned up, the logs for a successful run are not displayed, and the DAG mistakenly shows as a failed run.
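
For illustration, here is a minimal, self-contained sketch of the failure mode as I understand it; the regex and parsing loop are simplified stand-ins for stream_new_logs, not the provider's actual code. tqdm redraws its bar with carriage returns, Python's splitlines() splits on \r as well as \n, and the resulting fragments no longer begin with a timestamp:

import re

# Simplified stand-in for the timestamp parsing in stream_new_logs
# (assumption: the provider's real code differs in its details).
TIMESTAMP_RE = re.compile(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z) (.*)")

# A timestamped Docker log chunk while tqdm is running: the bar is
# redrawn with carriage returns, so only the first fragment carries
# the timestamp prefix.
raw = "2024-07-03T08:15:00.000000Z  10%|#\r 20%|##\r 30%|###\n"

for line in raw.splitlines():  # splitlines() also splits on \r
    match = TIMESTAMP_RE.match(line)
    # match is None for the \r-separated tqdm fragments, so unpacking
    # raises AttributeError and the task is marked failed mid-stream.
    timestamp, message = match.groups()
    print(timestamp, message)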

An example error log is attached (error.txt).

What you think should happen instead

Since the Python script inside the Docker container actually runs to completion successfully, Airflow should report this as a successful run instead of a failure.
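
A minimal sketch of the kind of tolerant parsing that would achieve this, assuming the fix lands in the log-line handling (the function name is illustrative, not the provider's API): fall back to the raw line when the timestamp prefix is missing, instead of raising.

import re

TIMESTAMP_RE = re.compile(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z) (.*)")


def extract_message(line: str) -> str:
    """Return the message part of a timestamped Docker log line, or the
    raw line itself when no timestamp prefix is present (for example, a
    tqdm frame split out by a carriage return)."""
    match = TIMESTAMP_RE.match(line)
    if match is None:
        return line
    return match.group(2)


# Both kinds of lines get logged instead of crashing the task:
print(extract_message("2024-07-03T08:15:00.000000Z hello"))  # hello
print(extract_message(" 20%|##"))                            #  20%|##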

How to reproduce

  1. Create a Docker Swarm.
  2. Start Airflow with default settings.
  3. Create a Docker image that runs the following Python code:
from tqdm import tqdm


if __name__ == "__main__":
    for i in tqdm(range(10000), total=10000):
        if i % 5 == 0:
            print(i)
  4. Create a DAG to run the Docker image created in step 3 (a minimal sketch is given below this list).
  5. Run the DAG created in step 4.
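
For step 4, a minimal DAG sketch; the dag_id and image tag are hypothetical placeholders for whatever was built in step 3:

from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker_swarm import DockerSwarmOperator

with DAG(
    dag_id="tqdm_swarm_repro",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
):
    DockerSwarmOperator(
        task_id="run_tqdm_image",
        image="tqdm-repro:latest",  # hypothetical tag for the step-3 image
        enable_logging=True,  # default; this is the log streaming that fails
    )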

Anything else

I am attaching several screenshots below. They show different runs of the same DAG failing at different points, which means that for processes where the tqdm progress bar is short enough, the DAG can actually pass unaffected by the stream_new_logs issue. I created a Python script that runs tqdm from 0 to 10000, printing on every 5th iteration. The screenshots show that Airflow generally displays the logs up to the point of failure, which varies from run to run: as early as iteration 775 and as late as 8170. This DAG has never succeeded, but I can verify via the dead containers that the scripts all ran to completion (docker logs ${container_id} shows tqdm reaching 10000).

[Screenshots attached: Sample Run 1 (Demo1) through Sample Run 4 (Demo4), each failing at a different iteration]

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct