Commit e13c56d

Updating hyperlinks
1 parent 33d1294 commit e13c56d

File tree

33 files changed: +88 -99 lines


AI-and-Analytics/End-to-end-Workloads/Census/README.md (2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-# End-to-end machine learning workload: `Census` Sample
+# End-to-end machine learning workload: `Census` Sample

 This sample code illustrates how to use Intel® Distribution of Modin for ETL operations and the ridge regression algorithm from the Intel® oneAPI Data Analytics Library (oneDAL) accelerated scikit-learn library to build and run an end-to-end machine learning workload. Both the Intel Distribution of Modin and the oneDAL-accelerated scikit-learn library are available together in the [Intel AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html). This sample code demonstrates how to seamlessly run the end-to-end census workload using the toolkit, without any external dependencies.

@@ -37,7 +37,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

 ### Activate conda environment With Root Access

-Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and Intel® Distribution of Modin environment installation (https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
+Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and the [Intel® Distribution of Modin environment installation](https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

 Activate the conda environment with the following command:
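For context on the workload this README describes, here is a minimal sketch of the Modin-plus-accelerated-scikit-learn pattern. It is not code from the sample: the input file name, the `income` target column, and the `sklearnex` patching entry point (which in older AI Kit releases was exposed through `daal4py`) are all assumptions.

```python
# Sketch only: Modin handles ETL on partitioned DataFrames, and the patched
# scikit-learn routes Ridge regression to oneDAL kernels.
import modin.pandas as pd                      # drop-in replacement for pandas

from sklearnex import patch_sklearn            # assumed patching entry point
patch_sklearn()                                # call before importing sklearn APIs

from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

df = pd.read_csv("census_data.csv")            # hypothetical input file
df = df.dropna()                               # ETL step runs on Modin partitions
X = df.drop(columns=["income"]).to_numpy()     # "income" is a made-up target
y = df["income"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)          # solved by oneDAL when patched
print("R^2 on held-out data:", model.score(X_test, y_test))
```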

AI-and-Analytics/Features-and-Functionality/IntelPyTorch_Extensions_AutoMixedPrecision/README.md (1 addition & 1 deletion)

@@ -2,7 +2,7 @@

 Intel Extension for PyTorch is a Python package that extends the official PyTorch. It is designed to improve the out-of-the-box user experience of PyTorch on CPU while achieving good performance. The extension will also serve as the PR (pull request) buffer for the Intel PyTorch framework dev team. The PR buffer will contain functions and optimizations (for example, taking advantage of Intel's new hardware features).

-For comprehensive instructions regarding Intel Extension for PyTorch, go to https://github.com/intel/intel-extension-for-pytorch.
+For comprehensive instructions, go to the GitHub repo for [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).

 | Optimized for | Description
 |:--- |:---
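A hedged sketch of how the extension is typically applied follows. The module name matches the current pip package, and `ipex.optimize` is the present-day API, which may postdate the README version shown in this diff.

```python
import torch
import intel_extension_for_pytorch as ipex  # assumed current package name

model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU()).eval()
data = torch.randn(8, 64)

model = ipex.optimize(model)   # apply Intel CPU optimizations to the module
with torch.no_grad():
    output = model(data)
print(output.shape)
```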

AI-and-Analytics/Features-and-Functionality/IntelPyTorch_TorchCCL_Multinode_Training/README.md (4 additions & 2 deletions)

@@ -1,10 +1,12 @@
-# `Intel Extension for PyTorch Getting Started` Sample
+# `Intel Extension for PyTorch Getting Started` Sample

 torch-ccl holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library (oneCCL).

 Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training that implements collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation.

-For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to https://github.com/intel/torch-ccl and https://github.com/intel/optimized-models/tree/master/pytorch/distributed.
+For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to the following GitHub repos:
+* [PyTorch and CCL](https://github.com/intel/torch-ccl)
+* [PyTorch distributed](https://github.com/intel/optimized-models/tree/master/pytorch/distributed)

 | Optimized for | Description
 |:--- |:---
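To illustrate the collectives mentioned above, here is a single-process sketch of registering the oneCCL backend with `torch.distributed`. The binding module name has changed across releases (`torch_ccl` earlier, `oneccl_bindings_for_pytorch` later), so treat the import, and the localhost rendezvous settings, as assumptions.

```python
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" backend

# Minimal single-process rendezvous for demonstration purposes.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="ccl", rank=0, world_size=1)

t = torch.ones(4)
dist.all_reduce(t)   # oneCCL allreduce; a no-op sum with a single process
print(t)
dist.destroy_process_group()
```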

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Horovod_Multinode_Training/README.md (2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-# `Distributed TensorFlow with Horovod` Sample
+# `Distributed TensorFlow with Horovod` Sample
 Today's modern computer systems are becoming heavily distributed. It is important to capitalize on scaling techniques to maximize the efficiency and performance of neural network training, a resource-intensive process.

 | Optimized for | Description
@@ -33,7 +33,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
 ## Build and Run the Sample

 ### Running Samples In DevCloud (Optional)
-If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/).
+If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/).

 ### Pre-requirement
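For reference, the core Horovod-with-TensorFlow pattern this sample builds on looks roughly like the sketch below (standard Horovod Keras API; the model and optimizer are placeholders, not the sample's own network).

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                      # one process per worker

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
# Scale the learning rate by the worker count, a common Horovod practice.
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)             # wrap with allreduce averaging

model.compile(loss="mse", optimizer=opt)
# When fitting, hvd.callbacks.BroadcastGlobalVariablesCallback(0) keeps the
# initial weights consistent across all workers.
```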

AI-and-Analytics/Getting-Started-Samples/IntelModin_GettingStarted/README.md (2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-# `Intel Modin Getting Started` Sample
+# `Intel Modin Getting Started` Sample
 This Getting Started sample code shows how to use distributed pandas via the Modin package. It demonstrates how to use software products that can be found in the [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
@@ -41,7 +41,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

 ### Activate conda environment With Root Access

-Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and Intel Distribution of Modin environment installation (https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
+Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and the [Intel Distribution of Modin environment installation](https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

 Activate the conda environment with the following command:
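The pattern this sample demonstrates is essentially a one-line change from stock pandas, as in this minimal sketch:

```python
# Only the import differs from ordinary pandas code; DataFrame operations are
# transparently partitioned and distributed across available cores.
import modin.pandas as pd

df = pd.DataFrame({"a": range(1000), "b": range(1000)})
print(df.describe())
```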

AI-and-Analytics/Getting-Started-Samples/IntelPyTorch_GettingStarted/README.md (2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-# `PyTorch HelloWorld` Sample
+# `PyTorch HelloWorld` Sample
 PyTorch* is a very popular framework for deep learning. Intel and Facebook* have collaborated for years to boost PyTorch* CPU performance. The official PyTorch has been optimized using oneAPI Deep Neural Network Library (oneDNN) primitives by default. This sample demonstrates how to train a PyTorch model and shows how Intel-optimized PyTorch* enables Intel® DNNL calls by default.

 | Optimized for | Description
@@ -32,7 +32,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https

 ## How to Build and Run
 ### Running Samples In DevCloud (Optional)
-If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/).
+If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/).

 1. Pre-requirement
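A quick way to confirm the oneDNN-backed build this README refers to: `torch.backends.mkldnn` is stock PyTorch's flag for the oneDNN (formerly MKL-DNN) backend.

```python
import torch

# True on builds with oneDNN (mkldnn) support, the default in official PyTorch.
print(torch.backends.mkldnn.is_available())

# Setting DNNL_VERBOSE=1 in the environment before running prints each oneDNN
# primitive call, which is how samples like this typically surface the calls.
```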

AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted/README.md (2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-# `TensorFlow HelloWorld` Sample
+# `TensorFlow HelloWorld` Sample
 TensorFlow* is a widely used machine learning framework in the deep learning arena, demanding efficient computational resource utilization. To take full advantage of Intel® architecture and to extract maximum performance, the TensorFlow framework has been optimized using Intel® Deep Neural Network Library (Intel® DNNL) primitives. This sample demonstrates how to train an example neural network and shows how Intel-optimized TensorFlow enables Intel® DNNL calls by default.

 | Optimized for | Description
@@ -42,7 +42,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
 ## Build and Run the Sample

 ### Running Samples In DevCloud (Optional)
-If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/).
+If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/).

 ### Pre-requirement
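As with the PyTorch sample, oneDNN activity in Intel-optimized TensorFlow can be observed through oneDNN's standard verbose-logging switch; the sketch below assumes an Intel-optimized build where dense ops route through oneDNN.

```python
import os
os.environ["DNNL_VERBOSE"] = "1"   # set before TensorFlow executes any kernels

import tensorflow as tf

x = tf.random.normal([64, 64])
y = tf.matmul(x, x)                # emits oneDNN primitive logs on Intel builds
print(y.shape)
```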

AI-and-Analytics/Getting-Started-Samples/iLiT-Sample-for-Tensorflow/README.md (1 addition & 1 deletion)

@@ -54,7 +54,7 @@ We will learn how to train a CNN model based on Keras with TensorFlow, use iLiT

 ### Running in DevCloud

-If running a sample in the Intel DevCloud, please follow the below steps to build the python environment. Also, remember that you must specify the compute node (CPU) as well as whether to run in batch or interactive mode. For more information, see the [Intel(R) oneAPI AI Analytics Toolkit Get Started Guide] https://devcloud.intel.com/oneapi/get-started/analytics-toolkit/)
+If running a sample in the Intel DevCloud, please follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU) as well as whether to run in batch or interactive mode. For more information, see the [Intel(R) oneAPI AI Analytics Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/analytics-toolkit/).

 ### Running in Local Server

DirectProgramming/C++/CompilerInfrastructure/Intrinsics/README.md (1 addition & 1 deletion)

@@ -15,7 +15,7 @@ The intrinsic samples are designed to show how to utilize the intrinsics support

 Intrinsics are assembly-coded functions that allow you to use C++ function calls and variables in place of assembly instructions. Intrinsics are expanded inline, eliminating function call overhead. While providing the same benefits as inline assembly, intrinsics improve code readability, assist instruction scheduling, and help when debugging. They provide access to instructions that cannot be generated using the standard constructs of the C and C++ languages and allow code to leverage performance-enhancing features unique to specific processors.

-Further information on intrinsics can be found here: https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/intrinsics.html#intrinsics_GUID-D70F9A9A-BAE1-4242-963E-C3A12DE296A1
+Further information on intrinsics can be found in the [Intel® C++ Compiler Developer Guide and Reference](https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/intrinsics.html#intrinsics_GUID-D70F9A9A-BAE1-4242-963E-C3A12DE296A1).

 ## Key Implementation Details

DirectProgramming/C++/GraphTraversal/MergesortOMP/README.md (1 addition & 1 deletion)

@@ -2,7 +2,7 @@

 The merge sort algorithm is a comparison-based sorting algorithm. In this sample, we use a top-down implementation, which recursively splits the list into two halves (called sublists) until each sublist is of size 1. We then merge sublists two at a time to produce a sorted list. This sample can run in serial or in parallel with OpenMP* tasking (`#pragma omp task` and `#pragma omp taskwait`).

-For more details about the merge sort algorithm and top-down implementation, please refer to http://en.wikipedia.org/wiki/Merge_sort.
+For more details on the [merge sort](http://en.wikipedia.org/wiki/Merge_sort) algorithm and its top-down implementation, see the Wikipedia article.

 | Optimized for | Description
 |:--- |:---
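To make the top-down scheme concrete, here is a plain Python sketch of the recursive split-and-merge; it shows the algorithm only, whereas the sample's OpenMP tasking runs the two recursive calls as parallel tasks.

```python
def merge_sort(a):
    # Base case: a list of size 1 (or 0) is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # The sample executes these two calls as parallel OpenMP tasks.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge two sorted sublists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))
```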

DirectProgramming/C++/ParallelPatterns/openmp_reduction/README.md (1 addition & 1 deletion)

@@ -2,7 +2,7 @@

 The `openmp_reduction` code sample is a simple program that calculates pi. This program is implemented using C++ and OpenMP for Intel(R) CPUs and accelerators.

-For comprehensive instructions regarding DPC++ Programming, go to https://software.intel.com/en-us/oneapi-programming-guide and search based on relevant terms noted in the comments.
+For comprehensive instructions, see the [DPC++ Programming Guide](https://software.intel.com/en-us/oneapi-programming-guide) and search based on relevant terms noted in the comments.


 | Optimized for | Description
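The pi calculation is a classic reduction: integrate 4/(1+x²) over [0, 1] with the midpoint rule and sum the slices. A plain Python sketch of the math follows; the sample performs the same sum as an OpenMP reduction.

```python
def compute_pi(num_steps=100_000):
    step = 1.0 / num_steps
    total = 0.0
    for i in range(num_steps):
        x = (i + 0.5) * step          # midpoint of slice i
        total += 4.0 / (1.0 + x * x)  # integrand whose integral on [0,1] is pi
    return total * step

print(compute_pi())  # ~3.14159...
```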

DirectProgramming/C++/StructuredGrids/iso3dfd_omp_offload/README.md (2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-# `ISO3DFD OpenMP Offload` Sample
+# `ISO3DFD OpenMP Offload` Sample

 The ISO3DFD sample refers to Three-Dimensional Finite-Difference Wave Propagation in Isotropic Media. It is a three-dimensional stencil that simulates a wave propagating through a 3D isotropic medium, and it shows some of the more common challenges and techniques for achieving good performance when targeting OpenMP offload devices (GPUs) in more complex applications.

@@ -49,7 +49,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
 ## Building the `ISO3DFD` Program for GPU

 ### Running Samples In DevCloud
-If running a sample in the Intel DevCloud, remember that you must specify the compute node (CPU, GPU) and run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/) and Intel® oneAPI HPC Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/hpc-toolkit/)
+If running a sample in the Intel DevCloud, remember that you must specify the compute node (CPU, GPU) and run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/) and the [Intel® oneAPI HPC Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/hpc-toolkit/).

 ### On a Linux* System
 Perform the following steps:

DirectProgramming/DPC++/CombinationalLogic/mandelbrot/README.md (1 addition & 1 deletion)

@@ -2,7 +2,7 @@

 Mandelbrot is an infinitely complex fractal pattern that is derived from a simple formula. This sample demonstrates using DPC++ to offload computation to a GPU (or other devices) and shows how processing time can be optimized and improved with parallelism.

-For comprehensive instructions regarding DPC++ Programming, go to https://software.intel.com/en-us/oneapi-programming-guide and search based on relevant terms noted in the comments.
+For comprehensive instructions, see the [DPC++ Programming Guide](https://software.intel.com/en-us/oneapi-programming-guide) and search based on relevant terms noted in the comments.

 | Optimized for | Description
 |:--- |:---
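The "simple formula" is the escape-time iteration z ← z² + c. A brief Python sketch of the per-pixel computation follows; the sample evaluates the same iteration inside a DPC++ kernel, one work-item per pixel.

```python
def mandelbrot_iters(c, max_iters=100):
    """Count iterations before z = z*z + c escapes |z| > 2."""
    z = 0j
    for n in range(max_iters):
        z = z * z + c
        if abs(z) > 2.0:
            return n          # escaped: the point is outside the set
    return max_iters          # never escaped: assumed inside the set

print(mandelbrot_iters(complex(-0.5, 0.5)))
```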

DirectProgramming/DPC++/CombinationalLogic/sepia-filter/README.md (1 addition & 1 deletion)

@@ -1,7 +1,7 @@
 # `Sepia-filter` Sample
 The sepia filter is a program that converts a color image to a sepia-tone image, which is a monochromatic image with a distinctive brown-gray color. The program works by offloading the compute-intensive conversion of each pixel to sepia tone, and it is implemented using DPC++ for CPU and GPU.

-For comprehensive instructions regarding DPC++ Programming, go to https://software.intel.com/en-us/oneapi-programming-guide and search based on relevant terms noted in the comments.
+For comprehensive instructions, see the [DPC++ Programming Guide](https://software.intel.com/en-us/oneapi-programming-guide) and search based on relevant terms noted in the comments.

 | Optimized for | Description
 |:--- |:---
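The per-pixel conversion is a small weighted mix of the RGB channels. The coefficients below are a commonly used sepia weighting, not necessarily the sample's exact values.

```python
def sepia_pixel(r, g, b):
    """Map one RGB pixel to sepia tone, clamping to the 0-255 range."""
    new_r = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
    new_g = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
    new_b = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
    return new_r, new_g, new_b

print(sepia_pixel(120, 90, 200))
```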

DirectProgramming/DPC++/DenseLinearAlgebra/complex_mult/README.md (5 additions & 4 deletions)

@@ -5,10 +5,11 @@ Complex numbers in parallel and verifies the results. It also implements
 a custom device selector to target a specific vendor device. This program is
 implemented using C++ and the DPC++ language for Intel CPUs and accelerators.
 The Complex class is a custom class, and this program shows how we can use
-custom types of classes in a DPC++ program
-
-| Optimized for | Description
-ii|:--- |:---
+custom types of classes in a DPC++ program.
+
+
+| Optimized for | Description
+|:--- |:---
 | OS | Linux Ubuntu 18.04, Windows 10
 | Hardware | Skylake with GEN9 or newer
 | Software | Intel® oneAPI DPC++/C++ Compiler

DirectProgramming/DPC++/DenseLinearAlgebra/matrix_mul/README.md (3 additions & 2 deletions)

@@ -1,10 +1,11 @@
-# `matrix_mul` Sample
+# `matrix_mul` Sample
 matrix_mul is a simple program that multiplies together two large matrices and
 verifies the results. This program is implemented in two ways:
 1. Data Parallel C++ (DPC++)
 2. OpenMP (OMP)

-For comprehensive instructions regarding DPC++ Programming, go to https://software.intel.com/en-us/oneapi-programming-guide and search based on relevant terms noted in the comments.
+For comprehensive instructions, see the [DPC++ Programming Guide](https://software.intel.com/en-us/oneapi-programming-guide) and search based on relevant terms noted in the comments.
+

 | Optimized for | Description
 |:--- |:---

DirectProgramming/DPC++/DenseLinearAlgebra/simple-add/README.md (3 additions & 2 deletions)

@@ -1,8 +1,9 @@
-# `simple-add-dpc++` Sample
+# `simple-add-dpc++` Sample

 `simple-add-dpc++` provides the simplest example of DPC++ while providing an example of using both buffers and Unified Shared Memory.

-For comprehensive instructions regarding DPC++ Programming, go to https://software.intel.com/en-us/oneapi-programming-guide and search based on relevant terms noted in the comments.
+For comprehensive instructions, see the [DPC++ Programming Guide](https://software.intel.com/en-us/oneapi-programming-guide) and search based on relevant terms noted in the comments.
+

 | Optimized for | Description
 |:--- |:---

DirectProgramming/DPC++/DenseLinearAlgebra/vector-add/README.md (3 additions & 2 deletions)

@@ -1,8 +1,9 @@
-# `vector-add` Sample
+# `vector-add` Sample

 Vector Add is the equivalent of a ‘Hello, World!’ sample for data parallel programs. Building and running the code sample verifies that your development environment is set up correctly and demonstrates the use of the core features of DPC++.

-For comprehensive instructions regarding DPC++ Programming, go to https://software.intel.com/en-us/oneapi-programming-guide and search based on relevant terms noted in the comments.
+For comprehensive instructions, see the [DPC++ Programming Guide](https://software.intel.com/en-us/oneapi-programming-guide) and search based on relevant terms noted in the comments.
+

 | Optimized for | Description
 |:--- |:---
