
Commit ac0ad8d

Merge pull request #1228 from Azure/release_update/Release-76
update samples from Release-76 as a part of SDK release
2 parents 41a2ebd + 5019ad6

File tree

2 files changed: +7, -9 lines changed


README.md

Lines changed: 0 additions & 2 deletions
@@ -1,7 +1,5 @@
 # Azure Machine Learning service example notebooks
 
-> a community-driven repository of examples using mlflow for tracking can be found at https://github.com/Azure/azureml-examples
-
 This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
 
 ![Azure ML Workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/media/concept-azure-machine-learning-architecture/workflow.png)

how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb

Lines changed: 7 additions & 7 deletions
@@ -460,8 +460,8 @@
 " name=\"Merge Taxi Data\",\n",
 " script_name=\"merge.py\", \n",
 " arguments=[\"--output_merge\", merged_data],\n",
-" inputs=[cleansed_green_data.parse_parquet_files(file_extension=None),\n",
-" cleansed_yellow_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[cleansed_green_data.parse_parquet_files(),\n",
+" cleansed_yellow_data.parse_parquet_files()],\n",
 " outputs=[merged_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig=aml_run_config,\n",

@@ -497,7 +497,7 @@
 " name=\"Filter Taxi Data\",\n",
 " script_name=\"filter.py\", \n",
 " arguments=[\"--output_filter\", filtered_data],\n",
-" inputs=[merged_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[merged_data.parse_parquet_files()],\n",
 " outputs=[filtered_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",

@@ -533,7 +533,7 @@
 " name=\"Normalize Taxi Data\",\n",
 " script_name=\"normalize.py\", \n",
 " arguments=[\"--output_normalize\", normalized_data],\n",
-" inputs=[filtered_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[filtered_data.parse_parquet_files()],\n",
 " outputs=[normalized_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",

@@ -574,7 +574,7 @@
 " name=\"Transform Taxi Data\",\n",
 " script_name=\"transform.py\", \n",
 " arguments=[\"--output_transform\", transformed_data],\n",
-" inputs=[normalized_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[normalized_data.parse_parquet_files()],\n",
 " outputs=[transformed_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",

@@ -614,7 +614,7 @@
 " script_name=\"train_test_split.py\", \n",
 " arguments=[\"--output_split_train\", output_split_train,\n",
 " \"--output_split_test\", output_split_test],\n",
-" inputs=[transformed_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[transformed_data.parse_parquet_files()],\n",
 " outputs=[output_split_train, output_split_test],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",

@@ -690,7 +690,7 @@
 " \"n_cross_validations\": 5\n",
 "}\n",
 "\n",
-"training_dataset = output_split_train.parse_parquet_files(file_extension=None).keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
+"training_dataset = output_split_train.parse_parquet_files().keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
 "\n",
 "automl_config = AutoMLConfig(task = 'regression',\n",
 " debug_log = 'automated_ml_errors.log',\n",
