Fixing linting issues with broken links
promisinganuj committed Nov 21, 2023
1 parent 974bde8 commit 70ab5be
Showing 7 changed files with 10 additions and 10 deletions.
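
The broken links addressed here are the kind of thing a docs lint step surfaces automatically. The repository's actual link-check tooling is not shown in this commit, so the snippet below is only a minimal sketch of such a check, assuming a plain Python script that scans the markdown files for outbound links and reports any that no longer resolve:

```python
# check_links.py -- hypothetical minimal link check; the repo's real lint
# tooling is not part of this commit and may differ.
import re
import sys
import urllib.request
from pathlib import Path

# Matches the URL portion of inline markdown links: [text](https://...)
LINK_RE = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")


def broken_links(path: Path) -> list[str]:
    """Return every outbound URL in a markdown file that fails to load."""
    failures = []
    for url in LINK_RE.findall(path.read_text(encoding="utf-8")):
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "doc-link-check"}
        )
        try:
            urllib.request.urlopen(request, timeout=10)
        except Exception:
            failures.append(url)
    return failures


if __name__ == "__main__":
    bad = {md: broken_links(md) for md in sorted(Path(".").rglob("*.md"))}
    for md, urls in bad.items():
        for url in urls:
            print(f"{md}: broken link {url}")
    sys.exit(1 if any(bad.values()) else 0)
```

A stricter variant could also fail on permanent redirects, which would flag renamed pages (for example the old /services/ paths changed below) even when the old address still resolves.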
2 changes: 1 addition & 1 deletion README.md
@@ -18,7 +18,7 @@ description: "Code samples showcasing how to apply DevOps concepts to the Modern

# DataOps for the Modern Data Warehouse

-This repository contains numerous code samples and artifacts on how to apply DevOps principles to data pipelines built according to the [Modern Data Warehouse (MDW)](https://azure.microsoft.com/en-au/solutions/architecture/modern-data-warehouse/) architectural pattern on Microsoft Azure.
+This repository contains numerous code samples and artifacts on how to apply DevOps principles to data pipelines built according to the [Modern Data Warehouse (MDW)](https://learn.microsoft.com/en-au/azure/architecture/solution-ideas/articles/enterprise-data-warehouse) architectural pattern on Microsoft Azure.

The samples are either focused on a single azure service (**Single Tech Samples**) or showcases an end to end data pipeline solution as a reference implementation (**End to End Samples**). Each sample contains code and artifacts relating one or more of the following

6 changes: 3 additions & 3 deletions e2e_samples/parking_sensors/README.md
@@ -1,6 +1,6 @@
# DataOps - Parking Sensor Demo <!-- omit in toc -->

-The sample demonstrate how DevOps principles can be applied end to end Data Pipeline Solution built according to the [Modern Data Warehouse (MDW)](https://azure.microsoft.com/en-au/solutions/architecture/modern-data-warehouse/) pattern.
+The sample demonstrate how DevOps principles can be applied end to end Data Pipeline Solution built according to the [Modern Data Warehouse (MDW)](https://learn.microsoft.com/en-au/azure/architecture/solution-ideas/articles/enterprise-data-warehouse) pattern.

## Contents <!-- omit in toc -->

@@ -58,7 +58,7 @@ The sample demonstrate how DevOps principles can be applied end to end Data Pipe

## Solution Overview

-The solution pulls near realtime [Melbourne Parking Sensor data](https://www.melbourne.vic.gov.au/about-council/governance-transparency/open-data/Pages/on-street-parking-data.aspx) from a publicly available REST api endpoint and saves this to [Azure Data Lake Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction). It then validates, cleanses, and transforms the data to a known schema using [Azure Databricks](https://azure.microsoft.com/en-us/products/databricks/). A second Azure Databricks job then transforms these into a [Star Schema](https://en.wikipedia.org/wiki/Star_schema) which are then loaded into [Azure Synapse Analytics (formerly SQLDW)](https://azure.microsoft.com/products/synapse-analytics/) using [Polybase](https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-guide?view=sql-server-ver15). The entire pipeline is orchestrated with [Azure Data Factory](https://azure.microsoft.com/en-us/products/data-factory/).
+The solution pulls near realtime [Melbourne Parking Sensor data](https://www.melbourne.vic.gov.au/about-council/governance-transparency/open-data/Pages/on-street-parking-data.aspx) from a publicly available REST api endpoint and saves this to [Azure Data Lake Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction). It then validates, cleanses, and transforms the data to a known schema using [Azure Databricks](https://azure.microsoft.com/en-us/products/databricks/). A second Azure Databricks job then transforms these into a [Star Schema](https://en.wikipedia.org/wiki/Star_schema) which are then loaded into [Azure Synapse Analytics (formerly SQLDW)](https://azure.microsoft.com/en-us/products/synapse-analytics/) using [Polybase](https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-guide?view=sql-server-ver15). The entire pipeline is orchestrated with [Azure Data Factory](https://azure.microsoft.com/en-us/products/data-factory/).
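
For orientation on the paragraph above: the validate/cleanse/standardize step amounts to a small Databricks job that reads the raw REST payload from the lake, coerces it to a known schema, and writes the result back out for the Star Schema job. A minimal PySpark sketch, with purely illustrative paths and column names rather than the sample's actual code:

```python
# Hypothetical sketch of the "validate and standardize" Databricks step.
# Storage account, container, paths, and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://datalake@<storageaccount>.dfs.core.windows.net/land/parking_bays/"
out_path = "abfss://datalake@<storageaccount>.dfs.core.windows.net/interim/parking_bays/"

raw = spark.read.json(raw_path)

standardized = (
    raw.select(
        F.col("bay_id").cast("int"),
        F.col("st_marker_id").alias("marker_id"),
        F.col("status"),
        F.to_timestamp("last_updated").alias("last_updated"),
    )
    .where(F.col("bay_id").isNotNull())            # basic validation: drop malformed rows
    .withColumn("load_id", F.lit("<adf_run_id>"))  # lineage column stamped per pipeline run
)

standardized.write.mode("overwrite").parquet(out_path)
```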

### Architecture

@@ -85,7 +85,7 @@ It makes use of the following azure services:
- [Azure Data Factory](https://azure.microsoft.com/en-us/products/data-factory/)
- [Azure Databricks](https://azure.microsoft.com/en-us/products/databricks/)
- [Azure Data Lake Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)
-- [Azure Synapse Analytics (formerly SQLDW)](https://azure.microsoft.com/products/synapse-analytics/)
+- [Azure Synapse Analytics (formerly SQLDW)](https://azure.microsoft.com/en-us/products/synapse-analytics/)
- [Azure DevOps](https://azure.microsoft.com/en-us/products/devops/)
- [Application Insights](https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview)
- [PowerBI](https://www.microsoft.com/en-us/power-platform/products/power-bi/)
4 changes: 2 additions & 2 deletions e2e_samples/parking_sensors_synapse/README.md
@@ -1,6 +1,6 @@
# DataOps - Parking Sensor (Synapse) <!-- omit in toc -->

-The sample demonstrate how DevOps principles can be applied to an end to end Data Pipeline Solution built according to the [Modern Data Warehouse (MDW)](https://azure.microsoft.com/en-au/solutions/architecture/modern-data-warehouse/) pattern, implemented in Azure Synapse.
+The sample demonstrate how DevOps principles can be applied to an end to end Data Pipeline Solution built according to the [Modern Data Warehouse (MDW)](https://learn.microsoft.com/en-au/azure/architecture/solution-ideas/articles/enterprise-data-warehouse) pattern, implemented in Azure Synapse.

## Contents <!-- omit in toc -->

@@ -59,7 +59,7 @@ See [here](#build-and-release-pipeline) for details.

It makes use of the following azure services:

-- [Azure Synapse Analytics](https://azure.microsoft.com/products/synapse-analytics/)
+- [Azure Synapse Analytics](https://azure.microsoft.com/en-us/products/synapse-analytics/)
- [Azure Data Lake Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)
- [Azure DevOps](https://azure.microsoft.com/en-us/products/devops/)
- [PowerBI](https://www.microsoft.com/en-us/power-platform/products/power-bi/)
2 changes: 1 addition & 1 deletion single_tech_samples/databricks/sample4_ci_cd/README.md
@@ -80,7 +80,7 @@ The following are the prerequisites for deploying this template :
1. [Github account](https://github.com/)
2. [Azure DevOps account](https://dev.azure.com)
3. [Azure account](https://portal.azure.com)
-4. [Azure Databricks Workspace](https://azure.microsoft.com/en-us/services/databricks/)
+4. [Azure Databricks Workspace](https://azure.microsoft.com/en-us/products/databricks/)

### 2.2. Infrastructure as Code (IaC)

@@ -102,7 +102,7 @@ The following technologies are used to build this sample:

- [Azure Data Factory](https://azure.microsoft.com/en-us/products/data-factory/)
- [Azure Batch](https://azure.microsoft.com/en-us/products/batch)
-- [Azure Storage(ADLS)](https://azure.microsoft.com/services/storage/data-lake-storage/)
+- [Azure Data Lake Storage](https://azure.microsoft.com/en-us/products/storage/data-lake-storage/)
- [NFS Mounts](https://learn.microsoft.com/azure/storage/blobs/network-file-system-protocol-support-how-to)

## How to use this sample
2 changes: 1 addition & 1 deletion single_tech_samples/streamanalytics/README.md
@@ -2,7 +2,7 @@

![introductory diagram](./docs/images/ASA-job.PNG)

-[Azure Stream Analytics](https://azure.microsoft.com/products/stream-analytics/) is a serverless real-time analytics service. The goal of this sample is to demonstrate how to develop a streaming pipeline, with IaC and testability in mind.
+[Azure Stream Analytics](https://azure.microsoft.com/en-us/products/stream-analytics/) is a serverless real-time analytics service. The goal of this sample is to demonstrate how to develop a streaming pipeline, with IaC and testability in mind.

## Prerequisites

@@ -28,7 +28,7 @@ The solution runs a flow triggered on a storage file upload. It then runs a samp

It makes use of the following azure services:

-- [Azure Synapse Analytics](https://azure.microsoft.com/products/synapse-analytics/)
+- [Azure Synapse Analytics](https://azure.microsoft.com/en-us/products/synapse-analytics/)
- [Azure Data Lake Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)

## Key Concepts
