
Create guide for Machine Learning Engine operators #8207 #8968

Closed

Conversation

U-Ozdemir

This is my first time working with an open source project, and I am posting here a first attempt at an ML operator guide. It is still a work in progress, but any feedback is welcome.

Guide for ML operator (Work in progress)

AI Platform:

  • The AI Platform is used to train your machine learning models at scale, to host your trained model in the cloud, and to use your model to make predictions about new data.

  • Machine learning (ML) is a subfield of artificial intelligence (AI). The goal of ML is to make computers learn from the data that you give them. Instead of writing code that describes the action the computer should take, your code provides an algorithm that adapts based on examples of intended behavior. The resulting program, consisting of the algorithm and associated learned parameters, is called a trained model.

Prerequisite Tasks

To use these operators, install Airflow with the GCP extras:

pip install 'apache-airflow[gcp]'

Detailed information is available in the Installation documentation.

Operators

  • You will need to set up a Python dictionary, default_args, containing the arguments applied to all the tasks in your workflow.
  • start_date determines the execution date of the first DAG task instance.
  • params is a dictionary of DAG-level parameters that are made accessible in templates, namespaced under params. These params can be overridden at the task level.
from airflow.utils.dates import days_ago

default_args = {
    "start_date": days_ago(1),
    "params": {
        "model_name": MODEL_NAME
    }
}
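
The snippet below is a minimal sketch, not part of the original guide, showing how these defaults might be attached to a DAG so that every task defined in the sections below inherits them. The DAG id "example_gcp_mlengine" and the manual-only schedule are assumptions, and PROJECT_ID, MODEL_NAME and JOB_DIR are expected to be defined elsewhere (for example, from environment variables).

from airflow import models

# Hypothetical DAG definition; tasks created inside this block inherit
# start_date and params from default_args above.
with models.DAG(
    "example_gcp_mlengine",      # assumed DAG id
    default_args=default_args,
    schedule_interval=None,      # assumption: runs are triggered manually
) as dag:
    # the operator tasks shown in the sections below would be defined here
    ...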

MLEngineManageModelOperator

  • Use the MLEngineManageModelOperator to create an ML model. The task_id is the name of the task (creating a model in this case); it is essentially a description of what your task does. project_id refers to the ID of your project, and the model dictionary defines the model, including the name you have given it.
create_model = MLEngineManageModelOperator(
    task_id="create-model",
    project_id=PROJECT_ID,
    operation="create",
    model={
        "name": MODEL_NAME,
    },
)

MLEngineCreateVersionOperator

  • With the MLEngineCreateVersionOperator a version of a model can be created. The task_id is a unique, meaningful id for the task, project_id is the ID of the project it belongs to, model_name refers to the name of your model, and version contains the configuration you have given to this version of the model. (Do I need to explain the arguments in version? I don't see this in other examples.)
create_version = MLEngineCreateVersionOperator(
    task_id="create-version",
    project_id=PROJECT_ID,
    model_name=MODEL_NAME,
    version={
        "name": "v1",
        "description": "First-version",
        "deployment_uri": "{}/keras_export/".format(JOB_DIR),
        "runtime_version": "1.14",
        "machineType": "mls1-c1-m2",
        "framework": "TENSORFLOW",
        "pythonVersion": "3.5"
    }
)

MLEngineDeleteVersionOperator

  • Use the MLEngineDeleteVersionOperator to delete a version of your ML model. The task_id refers to what your task does (it is just a name), project_id refers to the name you have given your project, model_name refers to the name you have given your model, and version_name refers to the version of the model you want to delete.
delete_version = MLEngineDeleteVersionOperator(
    task_id="delete-version",
    project_id=PROJECT_ID,
    model_name=MODEL_NAME,
    version_name="v1"
)

MLEngineDeleteModelOperator

  • The MLEngineDeleteModelOperator deletes your whole ML model. The task_id is a descriptive name given to the task, project_id refers to the name you have given your project, model_name refers to the name you have given your model, and delete_contents, when set to True, will delete everything within your ML model (all of its versions).
delete_model = MLEngineDeleteModelOperator(
    task_id="delete-model",
    project_id=PROJECT_ID,
    model_name=MODEL_NAME,
    delete_contents=True
)
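
As an illustration (an assumption, not part of the original draft), the tasks above can be chained inside the DAG so the model is created, versioned, and cleaned up in order:

# Hypothetical task ordering; the delete steps run only after the version exists.
create_model >> create_version >> delete_version >> delete_model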

Make sure to mark the boxes below before creating PR: [x]

  • Description above provides context of the change
  • Unit tests coverage for changes (not needed for documentation changes)
  • Target Github ISSUE in description if exists
  • Commits follow "How to write a good git commit message"
  • Relevant documentation is updated including usage instructions.
  • I will engage committers as explained in Contribution Workflow Example.

In case of fundamental code change, Airflow Improvement Proposal (AIP) is needed.
In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
In case of backwards incompatible changes please leave a note in UPDATING.md.
Read the Pull Request Guidelines for more information.

oxymor0n and others added 30 commits April 30, 2020 10:16
…8625)

PostgresHook's parent class, DbApiHook, implements upsert in its insert_rows() method
with the replace=True flag. However, the underlying generated SQL is specific to MySQL's
"REPLACE INTO" syntax and is not applicable to PostgreSQL.

This pulls the SQL generation code for insert/upsert out into a method that is then
overridden in the PostgreSQL subclass to generate the "INSERT ... ON CONFLICT DO
UPDATE" syntax ("new" since Postgres 9.5)
Right now requirements will only be checked during the
CI build if setup.py has changed, and if so, clear instructions
will be given. The diff will still be printed otherwise but
it will not cause the job to fail
Their response format is like {"example_dag_id": [{"state": "success", "dag_id": "example_dag_id"}, ...], ...}

The dag_id is already used as the "key", but still repeatedly appears in each element,
which makes the response payload size unnecessarily big
Allow EmrCreateJobFlowOperator and EmrAddStepsOperator
to receive their 'job_flow_overrides', and 'steps'
arguments respectively as Jinja template filenames.
This is similar to BashOperator's capability of
receiving a filename as its 'bash_command' argument.
Changes deprecated config check rules. Now uses regex to look for an old pattern in the val. Updates 'hostname_callable'.

This lets us pull the change back into 1.10.x, so that by the time 2.0 is around people will have had time and notice to update, without reading (the now quite long) UPDATING.md.

Depends on #8463
…8663)

Otherwise the behaviour in UI is incorrect
Addressing issue #8662
When using KubernetesExecutor without any centralized PV for log storage, one has to wait until the logs get uploaded to cloud storage before viewing them on UI. With this change, the webserver will try to fetch logs from running worker pods and display them.
connection add/edit UI pages were not working correctly for Spark connections. 

The root-cause is that "spark" is not listed in models.Connection._types.
So when www/forms.py tries to produce the UI,
"spark" is not available and it always falls back to the option list
whose first entry is "Docker"

In addition, we should hide irrelevant entries for
spark connections ("schema", "login", and "password")
Currently the connection type list in the UI is sorted in the original order of
`Connection._types`, which may be a bit inconvenient for users.

It would be better if it can be sorted alphabetically.
* Remove config side effects

* Fix LatestOnlyOperator return type to be json serializable

* Fix tests/test_configuration.py

* Fix tests/executors/test_dask_executor.py

* Fix tests/jobs/test_scheduler_job.py

* Fix tests/models/test_cleartasks.py

* Fix tests/models/test_taskinstance.py

* Fix tests/models/test_xcom.py

* Fix tests/security/test_kerberos.py

* Fix tests/test_configuration.py

* Fix tests/test_logging_config.py

* Fix tests/utils/test_dag_processing.py

* Apply isort

* Fix tests/utils/test_email.py

* Fix tests/utils/test_task_handler_with_custom_formatter.py

* Fix tests/www/api/experimental/test_kerberos_endpoints.py

* Fix tests/www/test_views.py

* Code refactor

* Fix tests/www/api/experimental/test_kerberos_endpoints.py

* Fix requirements

* fixup! Fix tests/www/test_views.py
Co-authored-by: Ace Haidrey <ahaidrey@pinterest.com>
Co-authored-by: James Timmins <james@astronomer.io>
Co-authored-by: michalslowikowski00 <michal.slowikowski@polidea.com>
kaxil and others added 14 commits May 21, 2020 10:51
…8910)

Currently there is no way to determine the state of a TaskInstance in the graph view or tree view for people with colour blindness

Approximately 4.5% of people experience some form of colour vision deficiency
The singularity operator tests _have always_ used mocking, so we were
adding 700MB to our docker image for nothing.

Fixes #8774
CSRF_ENABLED does nothing.

Thankfully, due to sensible defaults in flask-wtf, CSRF is on by
default, but we should set this correctly.

Fixes #8915
All PRs will use the cached "latest good" version of the python
base images from our GitHub registry. The python versions in
the Github Registry will only get updated after a master
build (which pulls latest Python image from DockerHub) builds
and passes tests correctly.

This is to avoid problems that we had recently with Python
patchlevel releases breaking our Docker builds.
…c dag (#8952)

The scheduler_dag_execution_timing script wants to run _n_ dag runs to
completion. However since the start date of those dags is Dynamic (`now
- delta`) we can't pre-compute the execution_dates like we were before.
(This is because the execution_date of the very first dag run would be
`now()` of the parser process, but if we try to pre-compute that in
the benchmark process it would see a different value of now().)

This PR changes it to instead watch for the first _n_ dag runs to be
completed. This should make it work with more dags with fewer changes to
them.
* Push CI images to Docker package cache for v1-10 branches

This is done as a commit to master so that we can keep the two branches
in sync

Co-Authored-By: Ash Berlin-Taylor <ash_github@firemirror.com>

* Run Github Actions against v1-10-stable too

Co-authored-by: Ash Berlin-Taylor <ash_github@firemirror.com>
`field_path` was renamed to `tag_template_field_path` in >=0.8 and there might be other unknown errors
* [AIRFLOW-5262] Update timeout exception to include dag

* PR comment: extract dag id in log to variable
@boring-cyborg boring-cyborg bot added area:CLI area:dev-tools area:Scheduler including HA (high availability) scheduler labels May 22, 2020
@boring-cyborg

boring-cyborg bot commented May 22, 2020

Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst)
Here are some useful points:

  • Pay attention to the quality of your code (flake8, pylint and type annotations). Our pre-commits will help you with that.
  • In case of a new feature add useful documentation (in docstrings or in the docs/ directory). Adding a new operator? Check this short guide. Consider adding an example DAG that shows how users should use it.
  • Consider using the Breeze environment for testing locally; it is a heavy Docker environment, but it ships with a working Airflow and a lot of integrations.
  • Be patient and persistent. It might take some time to get a review or get the final approval from Committers.
  • Be sure to read the Airflow Coding style.
    Apache Airflow is a community-driven project and together we are making it better 🚀.
    In case of doubts contact the developers at:
    Mailing List: dev@airflow.apache.org
    Slack: https://apache-airflow-slack.herokuapp.com/

@ad-m
Contributor

ad-m commented May 23, 2020

Could you fix source & target branch?

@kaxil
Member

kaxil commented May 25, 2020

@U-Ozdemir Can you create a new PR against Master, please?
