This repository was archived by the owner on Sep 3, 2022. It is now read-only.

Commit 2295860

Cloudmlmerge (#239)
* Adding evaluationanalysis API to generate evaluation stats from an eval source CSV file and an eval results CSV file (#99). The resulting stats file will be fed to a visualization component, which will come in a separate change. Follow up CR comments.
* Feature slicing view visualization component. (#109)
* Datalab Inception (image classification) solution (#117). Fix dataflow URL.
* Datalab "ml" magics for running a solution package; update Inception package (#121):
  - Dump function args and docstrings; run functions.
  - Added docstrings on face functions; added batch prediction.
  - Use datalab's lib for talking to the cloud training and prediction service; more minor fixes and changes.
  - Follow up code review comments. Fix a PackageRunner issue where the temp installation is done multiple times unnecessarily.
* Update the feature-slice-view supporting file, which fixes some UI stability issues. (#126)
* Remove the old feature-slicing pipeline implementation (replaced by BigQuery); add the confusion matrix magic (#129). Also fix an Inception issue where the eval loss is NaN when the eval size is smaller than the batch size. Fix set union.
* Mergemaster/cloudml (#134) — merge of master into the cloudml branch, bringing in:
  - Add gcs_copy_file(), which is missing but referenced in a couple of places (#110). Add DataFlow to the pydatalab dependency list; fix Travis test errors by reimplementing the GCS copy; remove an unnecessary shutil import.
  - Flake8 configuration: set max line length to 100; ignore E111, E114. (#102)
  - Add the Datalab user agent to CloudML trainer and predictor requests. (#112)
  - Update oauth2client to 2.2.0 to satisfy cloudml in Cloud Datalab. (#111)
  - Update README.md: added docs link. (#114)
  - Generate reST documentation for magic commands (#113): auto-generate docs for any added magics by searching the source files for lines with register_line_cell_magic, capturing the names of those magics, calling them inside an IPython kernel with the -h argument, and storing the output in a generated datalab.magics.rst file.
  - Fix an issue where %%chart failed with a UDF query (#116): the query was submitted to BigQuery without replacing variable values from the user namespace. Fix the chart tests by adding an ip.user_ns mock and the missing "mock" import.
  - Fix a "%%bigquery schema" issue — the command generated nothing in output. (#119)
  - Add some missing dependencies and remove some unused ones (#122): remove scikit-learn and scipy as dependencies, add more required packages (including psutil), and update package versions.
  - Cleanup (#123): remove unnecessary semicolons, unused imports, and an unnecessarily defined variable.
  - Fix the query_metadata tests. (#128)
  - Make the library pip-installable (#125): this adds tensorflow and cloudml to setup.py. They are installed explicitly with pip from inside the setup.py script; it is not a clean way to do it, but it gets around two issues we have at the moment with these two packages: PyPI has TensorFlow 0.12 while the current version of pydatalab needs 0.11, which according to the Cloud ML docs exists as a pip package for three supported platforms; and the Cloud ML SDK exists as a pip package but not on PyPI, and while it could be added as a dependency link, an unrelated PyPI package also called cloudml ends up being installed instead (see #124).
  - Set the command description so it is displayed in --help; argparse's format_help() prints the description but not the help. (#131)
* Fix an issue where prediction right after preprocessing fails in the Inception package local run. (#135)
* Add structured data preprocessing and training (#132), merging the preprocessing and training parts.
* First full-feature version of structured data (#139):
  - Added the preprocessing/training files; preprocessing is connected with Datalab, training is not yet fully connected.
  - Added the training interface; local/cloud training and cloud online prediction are done.
  - Split the config file into two (schema/transforms) and updated the unit tests.
  - Merged --model_type and --problem_type; online, local, and batch prediction are all done.
  - Update _package.py: removed some whitespace and added a print statement to local_predict.
  - Preprocessing puts a copy of the schema in the output dir, so there is no need to pass the schema to training in Datalab.
  - Tests can be run from any folder above the test folder with "python -m unittest discover"; the training test parses the output of training and checks that the loss is small.
* Inception package improvements (#138):
  - Fix an issue where prediction right after preprocessing fails in the Inception package local run.
  - Remove the "labels_file" parameter from inception preprocess/train/predict; labels are now taken from the training data, and the prediction graph returns labels.
  - Make online prediction work with GCS images; "%%ml alpha deploy" now also checks for a "/model" subdir if needed.
  - Make local batch prediction really batched; batch prediction input no longer has to include the target column; sort labels so they are consistent between preprocessing and training.
  - Follow up code review comments and other minor improvements.
* Cloudmlm (#152) — another merge from master; in addition to the changes listed under #134, it brings in:
  - Fix an issue where setting the project id from Datalab does not set the gcloud default project. (#136)
  - Add future==0.16.0 as a dependency, since the latest release of the CloudML Python SDK requires it; keep it as a dependency until that is fixed. (#143)
  - Remove TensorFlow and the CloudML SDK from setup.py (#144); install TensorFlow 0.12.1; add comments on why errors are ignored when importing mlalpha.
* Remove the old DataSet implementation and create new DataSets (#151). The new DataSets are used as data sources for packages; all DataSets can sample to a DataFrame so feature exploration can be done with other libraries. Raise an error when the sample is larger than the number of rows.
* Inception package improvements (#155): take DataSets as input instead of CSV files and support a BigQuery source; changes to make the latest DataFlow and TensorFlow happy; remove partial support for multiple labels in preprocessing; other minor improvements.
* Update the feature slice view UI; added Slices Overview. (#161)
* Move TensorBoard and TensorFlow Events UI rendering to Python functions to deprecate the magics (#163). Use matplotlib for TF events plotting so it displays well in static HTML pages (such as GitHub); improve the TensorFlow Events list/get APIs.
* New preprocessing and training for structured data (#160): local and cloud (CSV and BigQuery) preprocessing, gcloud cloud training, the exported graph added back in, CloudML SDK usage removed, and lint fixes.
* Move job, models, and feature_slice_view plotting to the API. (#167)
* A util function to repackage the package and copy it to a staging location (#169), so packages can use the staging URL as the package URL in cloud training.
* Move the confusion matrix from %%ml to the library (#159). This is part of the effort to move %%ml magic functionality into the library to provide a consistent (Python-only) experience.
* Improve the Inception package so there is no need to keep a GCS copy of the package; cloud training and preprocessing repackage it from the local installation and upload it to staging. (#175)
* Cloudmlsdp (#177): structured data batch prediction is done, including the ',' graph hack.
* Add a CloudTrainingConfig namedtuple to wrap cloud training configurations. (#178)
* Prediction update (#183): updated the prediction graph keys and made the CSV coder not need any other file.
* Inception package improvements (#186): implement Inception cloud batch prediction; support explicit eval data in preprocessing; address changes from the latest DataFlow.
* Cloudmlmerge (#188) — another merge of the same master changes listed under #134 and #152.
* CsvDataSet no longer globs files in __init__ (#187); the file_io fix will be done later; sample uses .file; fixed CsvDataSet's files(); update _dataset.py.
* Move the cloud trainer and predictor from their own classes to Job and Model respectively (#192); the old cloud trainer and predictor will be cleaned up in a separate change. Rename CloudModels to Models and CloudModelVersions to ModelVersions, and move their iteration from self to a get_iterator() method. Switch to the CloudML v1 endpoint. Fix a bug in the Datalab iterator where the count kept incrementing incorrectly.
* Removed the feature types file (#199): preprocessing no longer writes it and training no longer needs it; cloud batch prediction works now; updated the tests; local_train checks that the target column is the first column; the transforms file is no longer optional on the DL side.
* Make Inception work with TF 1.0. (#204)
* Work around a TF summary issue; force online prediction to use TF 1.0. (#209)
* Structured data package: everything works locally. (#211)
* Remove the TF dependency from the structured data setup.py. (#212)
* Cloudmld (#213): cloud uses 0.12.0rc? while local uses whatever is in Datalab; master_setup is a copy of ../../setup.py.
* Add a resize option to the Inception package to avoid sending large data to online prediction (#215); update the Lantern browser; follow up on code review comments and fix an Inception bug.
* Clean up mlalpha APIs that are no longer needed. (#218)
* Inception package updates (#219): instead of hard-coding the setup.py path, duplicate it along with all py files, just like the structured data package; use pip-installable TensorFlow 1.0 for packages; fix some TF warnings.
* Cloudml Branch Merge From Master (#222) — in addition to the master changes listed above, it brings in:
  - Fix project_id from `gcloud config` in py3 (#194): `Popen.stdout` is `bytes` in py3 and needs `.decode()`. Before the fix, python3's get_project_id() returned `b'foo-bar'` and "%%sql -d standard" queries failed with "Invalid project ID 'b'foo-bar''"; after the fix, both python2 and python3 return `foo-bar` and the query returns its QueryResultsTable.
  - Use HTTP Keep-Alive, else BigQuery queries are ~seconds slower than necessary (#195). Before (without Keep-Alive): ~3-7s for a BigQuery `select 3` with an already cached result; after (with Keep-Alive): ~1.5-3s. The query sends six HTTP requests, and runtime appears to be dominated by network RTT.
  - Cast string to int (#217): `table.insert_data(df)` inserts data correctly but raises "TypeError: unorderable types: str() > int()".
* Remove the CloudML SDK as a dependency for PyDatalab. (#227)
* Remove the CloudML dependency from Inception. (#225)
* TensorFlow's save_model no longer creates export.meta, so disable that check when deploying models (#228); also check for saved_model.pb for deployment.
* Cloudmlsm (#229): CSV prediction graph done (CSV works, but not JSON yet); local and cloud training working; finished the census sample and cleaned up the interface.
* Small fixes to structured data. (#231)
* Rename from mlalpha to ml. (#232)
* Fixed prediction. (#235)
* Small fixes (#236): prediction 'key_from_input' is now the true key name; DataFlow prediction now writes a csv_schema.json file (also updated in _package); removed a function that was not used.
* Cloudmlmerge (#238) — a final merge from master; beyond the changes already listed, it brings in:
  - bigquery.Api: remove the unused _DEFAULT_PAGE_SIZE (#221). Test plan: unit tests still pass.
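The #194 fix amounts to decoding the bytes that `gcloud` writes to stdout before using them as a project id. A minimal sketch of that pattern, assuming a `gcloud config` invocation similar to the one Datalab uses (the helper name here is illustrative; the real code lives in `datalab.context._utils.get_project_id`):

```python
import subprocess

def get_default_project_id():
  """Illustrative py2/py3-safe lookup of the gcloud default project."""
  proc = subprocess.Popen(['gcloud', 'config', 'list', '--format', 'value(core.project)'],
                          stdout=subprocess.PIPE)
  stdout, _ = proc.communicate()
  # Under Python 3, Popen.stdout yields bytes; decode so callers see 'foo-bar'
  # rather than the repr of a bytes object ("b'foo-bar'").
  if isinstance(stdout, bytes):
    stdout = stdout.decode('utf-8')
  project_id = stdout.strip()
  return project_id if project_id else None
```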
1 parent 52939ee commit 2295860


59 files changed: +12742, -2634 lines

datalab/mlalpha/__init__.py renamed to datalab/ml/__init__.py

Lines changed: 7 additions & 13 deletions
@@ -14,20 +14,14 @@

 from __future__ import absolute_import

-from ._local_runner import LocalRunner
-from ._cloud_runner import CloudRunner
-from ._metadata import Metadata
-from ._local_predictor import LocalPredictor
-from ._cloud_predictor import CloudPredictor
-from ._job import Jobs
+from ._job import Jobs, Job
 from ._summary import Summary
-from ._tensorboard import TensorBoardManager
-from ._dataset import DataSet
-from ._package import Packager
-from ._cloud_models import CloudModels, CloudModelVersions
+from ._tensorboard import TensorBoard
+from ._dataset import CsvDataSet, BigQueryDataSet
+from ._cloud_models import Models, ModelVersions
 from ._confusion_matrix import ConfusionMatrix
+from ._feature_slice_view import FeatureSliceView
+from ._cloud_training_config import CloudTrainingConfig
+from ._util import *

-from plotly.offline import init_notebook_mode
-
-init_notebook_mode()
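For orientation, a hypothetical notebook cell touching the names re-exported by the new `datalab.ml` package; nothing below is specific to a real project, it only shows which symbols the rename exposes:

```python
import datalab.ml as ml  # formerly datalab.mlalpha

print(ml.Jobs, ml.Job)
print(ml.TensorBoard)                     # replaces the old TensorBoardManager
print(ml.CsvDataSet, ml.BigQueryDataSet)  # the new DataSet implementations
print(ml.Models, ml.ModelVersions)        # renamed from CloudModels / CloudModelVersions
print(ml.ConfusionMatrix, ml.FeatureSliceView, ml.CloudTrainingConfig)
```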

datalab/ml/_cloud_models.py

Lines changed: 274 additions & 0 deletions
@@ -0,0 +1,274 @@
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.

"""Implements Cloud ML Model Operations"""

from googleapiclient import discovery
import os
import yaml

import datalab.context
import datalab.storage
import datalab.utils

from . import _util

class Models(object):
  """Represents a list of Cloud ML models for a project."""

  def __init__(self, project_id=None):
    """
    Args:
      project_id: project_id of the models. If not provided, default project_id will be used.
    """
    if project_id is None:
      project_id = datalab.context.Context.default().project_id
    self._project_id = project_id
    self._credentials = datalab.context.Context.default().credentials
    self._api = discovery.build('ml', 'v1', credentials=self._credentials)

  def _retrieve_models(self, page_token, page_size):
    list_info = self._api.projects().models().list(
        parent='projects/' + self._project_id, pageToken=page_token, pageSize=page_size).execute()
    models = list_info.get('models', [])
    page_token = list_info.get('nextPageToken', None)
    return models, page_token

  def get_iterator(self):
    """Get iterator of models so it can be used as "for model in Models().get_iterator()".
    """
    return iter(datalab.utils.Iterator(self._retrieve_models))

  def get_model_details(self, model_name):
    """Get details of the specified model from CloudML Service.

    Args:
      model_name: the name of the model. It can be a model full name
          ("projects/[project_id]/models/[model_name]") or just [model_name].
    Returns: a dictionary of the model details.
    """
    full_name = model_name
    if not model_name.startswith('projects/'):
      full_name = ('projects/%s/models/%s' % (self._project_id, model_name))
    return self._api.projects().models().get(name=full_name).execute()

  def create(self, model_name):
    """Create a model.

    Args:
      model_name: the short name of the model, such as "iris".
    Returns:
      If successful, returns information of the model, such as
      {u'regions': [u'us-central1'], u'name': u'projects/myproject/models/mymodel'}
    Raises:
      If the model creation failed.
    """
    body = {'name': model_name}
    parent = 'projects/' + self._project_id
    # Model creation is instant. If anything goes wrong, Exception will be thrown.
    return self._api.projects().models().create(body=body, parent=parent).execute()

  def delete(self, model_name):
    """Delete a model.

    Args:
      model_name: the name of the model. It can be a model full name
          ("projects/[project_id]/models/[model_name]") or just [model_name].
    """
    full_name = model_name
    if not model_name.startswith('projects/'):
      full_name = ('projects/%s/models/%s' % (self._project_id, model_name))
    response = self._api.projects().models().delete(name=full_name).execute()
    if 'name' not in response:
      raise Exception('Invalid response from service. "name" is not found.')
    _util.wait_for_long_running_operation(response['name'])

  def list(self, count=10):
    """List models under the current project in a table view.

    Args:
      count: upper limit of the number of models to list.
    Raises:
      Exception if it is called in a non-IPython environment.
    """
    import IPython
    data = []
    # Add range(count) to the loop so it stops either when it reaches count or when iteration
    # on self is exhausted. "self" is iterable (see __iter__() method).
    for _, model in zip(range(count), self):
      element = {'name': model['name']}
      if 'defaultVersion' in model:
        version_short_name = model['defaultVersion']['name'].split('/')[-1]
        element['defaultVersion'] = version_short_name
      data.append(element)

    IPython.display.display(
        datalab.utils.commands.render_dictionary(data, ['name', 'defaultVersion']))

  def describe(self, model_name):
    """Print information of a specified model.

    Args:
      model_name: the name of the model to print details on.
    """
    model_yaml = yaml.safe_dump(self.get_model_details(model_name), default_flow_style=False)
    print model_yaml


class ModelVersions(object):
  """Represents a list of versions for a Cloud ML model."""

  def __init__(self, model_name, project_id=None):
    """
    Args:
      model_name: the name of the model. It can be a model full name
          ("projects/[project_id]/models/[model_name]") or just [model_name].
      project_id: project_id of the models. If not provided and model_name is not a full name
          (not including project_id), default project_id will be used.
    """
    if project_id is None:
      self._project_id = datalab.context.Context.default().project_id
    self._credentials = datalab.context.Context.default().credentials
    self._api = discovery.build('ml', 'v1', credentials=self._credentials)
    if not model_name.startswith('projects/'):
      model_name = ('projects/%s/models/%s' % (self._project_id, model_name))
    self._full_model_name = model_name
    self._model_name = self._full_model_name.split('/')[-1]

  def _retrieve_versions(self, page_token, page_size):
    parent = self._full_model_name
    list_info = self._api.projects().models().versions().list(parent=parent,
        pageToken=page_token, pageSize=page_size).execute()
    versions = list_info.get('versions', [])
    page_token = list_info.get('nextPageToken', None)
    return versions, page_token

  def get_iterator(self):
    """Get iterator of versions so it can be used as
       "for v in ModelVersions(model_name).get_iterator()".
    """
    return iter(datalab.utils.Iterator(self._retrieve_versions))

  def get_version_details(self, version_name):
    """Get details of a version.

    Args:
      version_name: the name of the version in short form, such as "v1".
    Returns: a dictionary containing the version details.
    """
    name = ('%s/versions/%s' % (self._full_model_name, version_name))
    return self._api.projects().models().versions().get(name=name).execute()

  def deploy(self, version_name, path):
    """Deploy a model version to the cloud.

    Args:
      version_name: the name of the version in short form, such as "v1".
      path: the Google Cloud Storage path (gs://...) which contains the model files.

    Raises: Exception if the path is invalid or does not contain expected files.
            Exception if the service returns an invalid response.
    """
    if not path.startswith('gs://'):
      raise Exception('Invalid path. Only Google Cloud Storage path (gs://...) is accepted.')

    # If there is no "export.meta" or "saved_model.pb" under path but there is
    # path/model/export.meta or path/model/saved_model.pb, then append /model to the path.
    if (not datalab.storage.Item.from_url(os.path.join(path, 'export.meta')).exists() and
        not datalab.storage.Item.from_url(os.path.join(path, 'saved_model.pb')).exists()):
      if (datalab.storage.Item.from_url(os.path.join(path, 'model', 'export.meta')).exists() or
          datalab.storage.Item.from_url(os.path.join(path, 'model', 'saved_model.pb')).exists()):
        path = os.path.join(path, 'model')
      else:
        print('Cannot find export.meta or saved_model.pb, but continue with deployment anyway.')

    body = {'name': self._model_name}
    parent = 'projects/' + self._project_id
    try:
      self._api.projects().models().create(body=body, parent=parent).execute()
    except:
      # Trying to create an already existing model gets an error. Ignore it.
      pass
    body = {
        'name': version_name,
        'deployment_uri': path,
        'runtime_version': '1.0',
    }
    response = self._api.projects().models().versions().create(body=body,
        parent=self._full_model_name).execute()
    if 'name' not in response:
      raise Exception('Invalid response from service. "name" is not found.')
    _util.wait_for_long_running_operation(response['name'])

  def delete(self, version_name):
    """Delete a version of the model.

    Args:
      version_name: the name of the version in short form, such as "v1".
    """
    name = ('%s/versions/%s' % (self._full_model_name, version_name))
    response = self._api.projects().models().versions().delete(name=name).execute()
    if 'name' not in response:
      raise Exception('Invalid response from service. "name" is not found.')
    _util.wait_for_long_running_operation(response['name'])

  def predict(self, version_name, data):
    """Get prediction results from feature instances.

    Args:
      version_name: the name of the version used for prediction.
      data: typically a list of instances to be submitted for prediction. The format of an
          instance depends on the model. For example, a structured data model may require
          a csv line for each instance.
          Note that online prediction only works on models that take one placeholder value,
          such as a string encoding a csv line.
    Returns:
      A list of prediction results for the given instances. Each element is a dictionary
      representing the output mapping from the graph.
      An example:
        [{"predictions": 1, "score": [0.00078, 0.71406, 0.28515]},
         {"predictions": 1, "score": [0.00244, 0.99634, 0.00121]}]
    """
    full_version_name = ('%s/versions/%s' % (self._full_model_name, version_name))
    request = self._api.projects().predict(body={'instances': data},
                                           name=full_version_name)
    request.headers['user-agent'] = 'GoogleCloudDataLab/1.0'
    result = request.execute()
    if 'predictions' not in result:
      raise Exception('Invalid response from service. Cannot find "predictions" in response.')

    return result['predictions']

  def describe(self, version_name):
    """Print information of a specified version.

    Args:
      version_name: the name of the version in short form, such as "v1".
    """
    version_yaml = yaml.safe_dump(self.get_version_details(version_name),
                                  default_flow_style=False)
    print version_yaml

  def list(self):
    """List versions under the current model in a table view.

    Raises:
      Exception if it is called in a non-IPython environment.
    """
    import IPython

    # "self" is iterable (see __iter__() method).
    data = [{'name': version['name'].split()[-1],
             'deploymentUri': version['deploymentUri'], 'createTime': version['createTime']}
            for version in self]
    IPython.display.display(
        datalab.utils.commands.render_dictionary(data, ['name', 'deploymentUri', 'createTime']))
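A hedged usage sketch of the two classes above as they might be called from a Datalab notebook; the model name, version name, GCS path, and CSV instance below are placeholders, not values from this commit:

```python
import datalab.ml as ml

models = ml.Models()              # uses the default project from the Datalab context
for m in models.get_iterator():
  print(m['name'])

versions = ml.ModelVersions('census')  # short name; expanded to projects/<project>/models/census
versions.deploy('v1', 'gs://my-bucket/census/model')  # creates the model if needed, then the version

# Online prediction takes one instance per element; for the structured data
# package each instance is a single CSV line.
print(versions.predict('v1', ['39,State-gov,77516,Bachelors']))
```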

datalab/ml/_cloud_training_config.py

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from collections import namedtuple

_CloudTrainingConfig = namedtuple("CloudConfig",
                                  ['region', 'scale_tier', 'master_type', 'worker_type',
                                   'parameter_server_type', 'worker_count', 'parameter_server_count'])
_CloudTrainingConfig.__new__.__defaults__ = ('BASIC', None, None, None, None, None)


class CloudTrainingConfig(_CloudTrainingConfig):
  """A config namedtuple containing cloud specific configurations for CloudML training.

  Fields:
    region: the region of the training job to be submitted. For example, "us-central1".
        Run "gcloud compute regions list" to get a list of regions.
    scale_tier: Specifies the machine types, the number of replicas for workers and
        parameter servers. For example, "STANDARD_1". See
        https://cloud.google.com/ml/reference/rest/v1beta1/projects.jobs#scaletier
        for list of accepted values.
    master_type: specifies the type of virtual machine to use for your training
        job's master worker. Must set this value when scale_tier is set to CUSTOM.
        See the link in "scale_tier".
    worker_type: specifies the type of virtual machine to use for your training
        job's worker nodes. Must set this value when scale_tier is set to CUSTOM.
    parameter_server_type: specifies the type of virtual machine to use for your training
        job's parameter server. Must set this value when scale_tier is set to CUSTOM.
    worker_count: the number of worker replicas to use for the training job. Each
        replica in the cluster will be of the type specified in "worker_type".
        Must set this value when scale_tier is set to CUSTOM.
    parameter_server_count: the number of parameter server replicas to use. Each
        replica in the cluster will be of the type specified in "parameter_server_type".
        Must set this value when scale_tier is set to CUSTOM.
  """
  pass
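Since CloudTrainingConfig is a namedtuple whose defaults cover everything except region, constructing one is lightweight. A small sketch with placeholder values (the machine type and replica count choices are illustrative, not taken from this commit):

```python
from datalab.ml import CloudTrainingConfig

# Only region is required; scale_tier defaults to 'BASIC'.
basic = CloudTrainingConfig(region='us-central1')

# With scale_tier='CUSTOM', machine types and replica counts must be supplied.
custom = CloudTrainingConfig(region='us-central1', scale_tier='CUSTOM',
                             master_type='large_model', worker_type='standard',
                             parameter_server_type='standard',
                             worker_count=4, parameter_server_count=2)

print(basic.scale_tier)   # 'BASIC'
print(custom._asdict())   # all seven fields as an OrderedDict
```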
