Commit 9369b27

Typo fix: Global replace of "Github" with "GitHub"
1 parent f570830 commit 9369b27

7 files changed: +29 -29 lines changed

_layouts/post.html

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ <h1 class="post-title">{{ page.title }}</h1>
 
 <h2>Want to contribute?</h2>
 <div class='contributing'>
-This site is hosted entirely on <a href="{{ site.github.code }}">Github</a>. This site is no longer being actively contributed to by the original author (Wes Kendall), but it was placed on Github in the hopes that others would write high-quality MPI tutorials. Click <a href="{{ site.baseurl }}/about/">here</a> for more information about how you can contribute.
+This site is hosted entirely on <a href="{{ site.github.code }}">GitHub</a>. This site is no longer being actively contributed to by the original author (Wes Kendall), but it was placed on GitHub in the hopes that others would write high-quality MPI tutorials. Click <a href="{{ site.baseurl }}/about/">here</a> for more information about how you can contribute.
 </div>
 <br/>
 

about.md

Lines changed: 4 additions & 4 deletions
@@ -7,13 +7,13 @@ This site is a collaborative space for providing tutorials about MPI (the Messag
 
 ## Contributing
 
-This site is hosted as a static page on [Github]({{ site.github.code }}). It is no longer actively contributed to by the original author, and any potential authors are encouraged to fork the repository [here]({{ site.github.code }}) and start writing a lesson!
+This site is hosted as a static page on [GitHub]({{ site.github.code }}). It is no longer actively contributed to by the original author, and any potential authors are encouraged to fork the repository [here]({{ site.github.code }}) and start writing a lesson!
 
-Github uses Jekyll, a markdown-based blogging framework for producing static HTML pages. For an introduction on using Jekyll with Github, checkout [this article](https://help.github.com/articles/using-jekyll-with-pages/).
+GitHub uses Jekyll, a markdown-based blogging framework for producing static HTML pages. For an introduction on using Jekyll with GitHub, checkout [this article](https://help.github.com/articles/using-jekyll-with-pages/).
 
 All lessons are self-contained in their own directories in the *tutorials* directory of the main repository. New tutorials should go under this directory, and any code for the tutorials should go in the *code* directory of the tutorial and provide a Makefile with executable examples. Similarly, the structure of the posts should match other tutorials.
 
-For those that have never used Github or may feel overwhelmed about contributing a tutorial, contact Wes Kendall first at wesleykendall AT gmail DOT com. If you wish to write a tutorial with images as a Microsoft Word document or PDF, I'm happy to translate the lesson into the proper format for the site.
+For those that have never used GitHub or may feel overwhelmed about contributing a tutorial, contact Wes Kendall first at wesleykendall AT gmail DOT com. If you wish to write a tutorial with images as a Microsoft Word document or PDF, I'm happy to translate the lesson into the proper format for the site.
 
 > **Note** - The tutorials on this site need to remain as informative as possible and encompass useful topics related to MPI. Before writing a tutorial, collaborate with me through email (wesleykendall AT gmail DOT com) if you want to propose a lesson to the beginning MPI tutorial. Similarly, we can also start an advanced MPI tutorial page for more advanced topics.
 
@@ -28,7 +28,7 @@ Disappointed with the amount of freely-available content on parallel programming
 
 Dwaraka Nath is a masters graduate from Birla Institute of Technology and Science, Pilani, India. He loves blogging and occasionally does some code contributions as well.
 
-You can find more about him on his [personal website](https://www.dwarak.in) and follow him on Github at [@dtsdwarak](https://github.com/dtsdwarak).
+You can find more about him on his [personal website](https://www.dwarak.in) and follow him on GitHub at [@dtsdwarak](https://github.com/dtsdwarak).
 
 ### Wesley Bland
 Wesley Bland is a researcher in High Performance Computing and a contributor to both MPICH and Open MPI. He graduated from the University of Tennessee, Knoxville with his PhD under Dr. Jack Dongarra. His research involved fault tolerance at scale using MPI. After leaving the university, he went to Argonne National Laboratory where he worked under Dr. Pavan Balaji as a postdoctoral appointee and continued his fault tolerance research while working on MPICH directly. He currently works at Intel Corporation on high performance runtimes, including MPI.

index.md

Lines changed: 1 addition & 1 deletion
@@ -12,4 +12,4 @@ Wanting to get started learning MPI? Head over to the [MPI tutorials]({{ site.ba
 Recommended books for learning MPI are located [here]({{ site.baseurl }}/recommended-books/).
 
 ## About
-This site is no longer being actively contributed to by its original author (Wes Kendall). However, mpitutorial.com has been placed on [Github]({{ site.github.code}}) so that others can contribute high-quality content. Click [here]({{ site.baseurl }}/about/) for more details on how to contribute.
+This site is no longer being actively contributed to by its original author (Wes Kendall). However, mpitutorial.com has been placed on [GitHub]({{ site.github.code}}) so that others can contribute high-quality content. Click [here]({{ site.baseurl }}/about/) for more details on how to contribute.

tutorials/introduction-to-groups-and-communicators/index.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ redirect_from: '/introduction-to-groups-and-communicators/'
 
 In all previous tutorials, we have used the communicator `MPI_COMM_WORLD`. For simple applications, this is sufficient as we have a relatively small number of processes and we usually either want to talk to one of them at a time or all of them at a time. When applications start to get bigger, this becomes less practical and we may only want to talk to a few processes at once. In this lesson, we show how to create new communicators to communicate with a subset of the original group of processes at once.
 
-> **Note** - All of the code for this site is on [Github]({{ site.github.repo }}). This tutorial's code is under [tutorials/introduction-to-groups-and-communicators/code]({{ site.github.code }}/tutorials/introduction-to-groups-and-communicators/code).
+> **Note** - All of the code for this site is on [GitHub]({{ site.github.repo }}). This tutorial's code is under [tutorials/introduction-to-groups-and-communicators/code]({{ site.github.code }}/tutorials/introduction-to-groups-and-communicators/code).
 
 ## Overview of communicators
 As we have seen when learning about collective routines, MPI allows you to talk to all processes in a communicator at once to do things like distribute data from one process to many processes using `MPI_Scatter` or perform a data reduction using `MPI_Reduce`. However, up to now, we have only used the default communicator, `MPI_COMM_WORLD`.
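
The lesson this hunk touches is about carving `MPI_COMM_WORLD` into smaller communicators. A quick, self-contained illustration of that idea follows, assuming the standard `MPI_Comm_split` routine; this sketch is not part of the commit or the repository's code:

```
#include <mpi.h>
#include <stdio.h>

// Illustrative sketch only -- not from the mpitutorial repository.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Group processes into "rows" of up to four: processes with the
    // same color land in the same new communicator, and the key
    // (world_rank here) orders the ranks inside it.
    int color = world_rank / 4;
    MPI_Comm row_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &row_comm);

    int row_rank, row_size;
    MPI_Comm_rank(row_comm, &row_rank);
    MPI_Comm_size(row_comm, &row_size);
    printf("World rank %d/%d became row rank %d/%d\n",
           world_rank, world_size, row_rank, row_size);

    MPI_Comm_free(&row_comm);
    MPI_Finalize();
    return 0;
}
```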

tutorials/launching-an-amazon-ec2-mpi-cluster/index.md

Lines changed: 1 addition & 1 deletion
@@ -133,7 +133,7 @@ starcluster sshmaster mpicluster
 
 Once you are logged into the cluster, your current working directory will be `/root`. Change into the `/home/ubuntu` or `/home/sgeadmin` areas to compile code. These directories are mounted on a network file system and are viewable by all nodes in your cluster.
 
-While you are in one of the mounted home directories, go ahead and check out the MPI tutorial code from its Github repository. The code is used by every lesson on this site:
+While you are in one of the mounted home directories, go ahead and check out the MPI tutorial code from its GitHub repository. The code is used by every lesson on this site:
 
 ```
 git clone git://github.com/wesleykendall/mpitutorial.git

tutorials/mpi-hello-world/index.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ redirect_from: '/mpi-hello-world/'
 
 In this lesson, I will show you a basic MPI hello world application and also discuss how to run an MPI program. The lesson will cover the basics of initializing MPI and running an MPI job across several processes. This lesson is intended to work with installations of MPICH2 (specifically 1.4). If you have not installed MPICH2, please refer back to the [installing MPICH2 lesson]({{ site.baseurl }}/tutorials/installing-mpich2/).
 
-> **Note** - All of the code for this site is on [Github]({{ site.github.repo }}). This tutorial's code is under [tutorials/mpi-hello-world/code]({{ site.github.code }}/tutorials/mpi-hello-world/code).
+> **Note** - All of the code for this site is on [GitHub]({{ site.github.repo }}). This tutorial's code is under [tutorials/mpi-hello-world/code]({{ site.github.code }}/tutorials/mpi-hello-world/code).
 
 ## Hello world code examples
 Let's dive right into the code from this lesson located in [mpi_hello_world.c]({{ site.github.code }}/tutorials/mpi-hello-world/code/mpi_hello_world.c). Below are some excerpts from the code.
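
For readers viewing this commit without the lesson open: a minimal hello world of the shape the lesson describes looks roughly like the following sketch (not a verbatim copy of the repository's mpi_hello_world.c):

```
#include <mpi.h>
#include <stdio.h>

// Illustrative sketch only -- see mpi_hello_world.c in the repository
// for the lesson's actual version.
int main(int argc, char** argv) {
    // Initialize the MPI environment.
    MPI_Init(&argc, &argv);

    // Find out how many processes were launched and which one we are.
    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    printf("Hello world from rank %d out of %d processes\n",
           world_rank, world_size);

    // Clean up the MPI environment before exiting.
    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc` and launch with something like `mpirun -n 4 ./a.out` to see one line printed per process.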

tutorials/mpi-send-and-receive/index.md

Lines changed: 20 additions & 20 deletions
@@ -7,14 +7,14 @@ tags: MPI_Recv, MPI_Send
 redirect_from: '/mpi-send-and-receive/'
 ---
 
-Sending and receiving are the two foundational concepts of MPI. Almost every single function in MPI can be implemented with basic send and receive calls. In this lesson, I will discuss how to use MPI's blocking sending and receiving functions, and I will also overview other basic concepts associated with transmitting data using MPI.
+Sending and receiving are the two foundational concepts of MPI. Almost every single function in MPI can be implemented with basic send and receive calls. In this lesson, I will discuss how to use MPI's blocking sending and receiving functions, and I will also overview other basic concepts associated with transmitting data using MPI.
 
-> **Note** - All of the code for this site is on [Github]({{ site.github.repo }}). This tutorial's code is under [tutorials/mpi-send-and-receive/code]({{ site.github.code }}/tutorials/mpi-send-and-receive/code).
+> **Note** - All of the code for this site is on [GitHub]({{ site.github.repo }}). This tutorial's code is under [tutorials/mpi-send-and-receive/code]({{ site.github.code }}/tutorials/mpi-send-and-receive/code).
 
 ## Overview of sending and receiving with MPI
 MPI's send and receive calls operate in the following manner. First, process *A* decides a message needs to be sent to process *B*. Process A then packs up all of its necessary data into a buffer for process B. These buffers are often referred to as *envelopes* since the data is being packed into a single message before transmission (similar to how letters are packed into envelopes before transmission to the post office). After the data is packed into a buffer, the communication device (which is often a network) is responsible for routing the message to the proper location. The location of the message is defined by the process's rank.
 
-Even though the message is routed to B, process B still has to acknowledge that it wants to receive A's data. Once it does this, the data has been transmitted. Process A is acknowledged that the data has been transmitted and may go back to work.
+Even though the message is routed to B, process B still has to acknowledge that it wants to receive A's data. Once it does this, the data has been transmitted. Process A is acknowledged that the data has been transmitted and may go back to work.
 
 Sometimes there are cases when A might have to send many different types of messages to B. Instead of B having to go through extra measures to differentiate all these messages, MPI allows senders and receivers to also specify message IDs with the message (known as *tags*). When process B only requests a message with a certain tag number, messages with different tags will be buffered by the network until B is ready for them.
 
@@ -44,28 +44,28 @@ MPI_Recv(
 Although this might seem like a mouthful when reading all of the arguments, they become easier to remember since almost every MPI call uses similar syntax. The first argument is the data buffer. The second and third arguments describe the count and type of elements that reside in the buffer. `MPI_Send` sends the exact count of elements, and `MPI_Recv` will receive **at most** the count of elements (more on this in the next lesson). The fourth and fifth arguments specify the rank of the sending/receiving process and the tag of the message. The sixth argument specifies the communicator and the last argument (for `MPI_Recv` only) provides information about the received message.
 
 ## Elementary MPI datatypes
-The `MPI_Send` and `MPI_Recv` functions utilize MPI Datatypes as a means to specify the structure of a message at a higher level. For example, if the process wishes to send one integer to another, it would use a count of one and a datatype of `MPI_INT`. The other elementary MPI datatypes are listed below with their equivalent C datatypes.
+The `MPI_Send` and `MPI_Recv` functions utilize MPI Datatypes as a means to specify the structure of a message at a higher level. For example, if the process wishes to send one integer to another, it would use a count of one and a datatype of `MPI_INT`. The other elementary MPI datatypes are listed below with their equivalent C datatypes.
 
 | MPI datatype | C equivalent |
 | --- | --- |
 | MPI_SHORT | short int |
-| MPI_INT | int |
-| MPI_LONG | long int |
-| MPI_LONG_LONG | long long int |
-| MPI_UNSIGNED_CHAR | unsigned char |
-| MPI_UNSIGNED_SHORT | unsigned short int |
-| MPI_UNSIGNED | unsigned int |
-| MPI_UNSIGNED_LONG | unsigned long int |
-| MPI_UNSIGNED_LONG_LONG | unsigned long long int |
-| MPI_FLOAT | float |
-| MPI_DOUBLE | double |
-| MPI_LONG_DOUBLE | long double |
-| MPI_BYTE | char |
+| MPI_INT | int |
+| MPI_LONG | long int |
+| MPI_LONG_LONG | long long int |
+| MPI_UNSIGNED_CHAR | unsigned char |
+| MPI_UNSIGNED_SHORT | unsigned short int |
+| MPI_UNSIGNED | unsigned int |
+| MPI_UNSIGNED_LONG | unsigned long int |
+| MPI_UNSIGNED_LONG_LONG | unsigned long long int |
+| MPI_FLOAT | float |
+| MPI_DOUBLE | double |
+| MPI_LONG_DOUBLE | long double |
+| MPI_BYTE | char |
 
 For now, we will only make use of these datatypes in the following MPI tutorials in the beginner category. Once we have covered enough basics, you will learn how to create your own MPI datatypes for characterizing more complex types of messages.
 
 ## MPI send / recv program
-As stated in the beginning, the code for this is available on [Github]({{ site.github.repo }}), and this tutorial's code is under [tutorials/mpi-send-and-receive/code]({{ site.github.code }}/tutorials/mpi-send-and-receive/code).
+As stated in the beginning, the code for this is available on [GitHub]({{ site.github.repo }}), and this tutorial's code is under [tutorials/mpi-send-and-receive/code]({{ site.github.code }}/tutorials/mpi-send-and-receive/code).
 
 The first example in the tutorial code is in [send_recv.c]({{ site.github.code }}/tutorials/mpi-send-and-receive/code/send_recv.c). Some of the major parts of the program are shown below.
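
Before the next hunk picks up inside send_recv.c, here is a minimal blocking send/receive pair in the spirit of that file, tying together the argument list and the `MPI_INT` datatype discussed above (an illustrative sketch, not the repository's exact code):

```
#include <mpi.h>
#include <stdio.h>

// Illustrative sketch in the spirit of send_recv.c -- not the
// repository's exact code.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    int number;
    if (world_rank == 0) {
        // Send one MPI_INT with tag 0 to rank 1.
        number = -1;
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (world_rank == 1) {
        // Receive at most one MPI_INT with tag 0 from rank 0.
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process 1 received number %d from process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}
```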

@@ -91,7 +91,7 @@ if (world_rank == 0) {
 `MPI_Comm_rank` and `MPI_Comm_size` are first used to determine the world size along with the rank of the process. Then process zero initializes a number to the value of negative one and sends this value to process one. As you can see in the `else if` statement, process one is calling `MPI_Recv` to receive the number. It also prints off the received value.
 Since we are sending and receiving exactly one integer, each process requests that one `MPI_INT` be sent/received. Each process also uses a tag number of zero to identify the message. The processes could have also used the predefined constant `MPI_ANY_TAG` for the tag number since only one type of message was being transmitted.
 
-You can run the example code by checking it out on [Github]({{ site.github.repo }}) and using the `run.py` script.
+You can run the example code by checking it out on [GitHub]({{ site.github.repo }}) and using the `run.py` script.
 
 ```
 >>> git clone {{ site.github.repo }}
@@ -119,7 +119,7 @@ while (ping_pong_count < PING_PONG_LIMIT) {
            "%d to %d\n", world_rank, ping_pong_count,
            partner_rank);
   } else {
-    MPI_Recv(&ping_pong_count, 1, MPI_INT, partner_rank, 0,
+    MPI_Recv(&ping_pong_count, 1, MPI_INT, partner_rank, 0,
              MPI_COMM_WORLD, MPI_STATUS_IGNORE);
     printf("%d received ping_pong_count %d from %d\n",
            world_rank, ping_pong_count, partner_rank);
@@ -153,7 +153,7 @@ This example is meant to be executed with only two processes. The processes firs
 1 received ping_pong_count 10 from 0
 ```
 
-The output of the programs on other machines will likely be different because of process scheduling. However, as you can see, process zero and one are both taking turns sending and receiving the ping pong counter to each other.
+The output of the programs on other machines will likely be different because of process scheduling. However, as you can see, process zero and one are both taking turns sending and receiving the ping pong counter to each other.
 
 ## Ring Program
 I have included one more example of `MPI_Send` and `MPI_Recv` using more than two processes. In this example, a value is passed around by all processes in a ring-like fashion. Take a look at [ring.c]({{ site.github.code }}/tutorials/mpi-send-and-receive/code/ring.c). The major portion of the code looks like this.
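
The hunk ends just before the ring code it references; the pattern the paragraph describes looks roughly like the following sketch (illustrative only; the canonical version is ring.c in the repository):

```
#include <mpi.h>
#include <stdio.h>

// Illustrative sketch of the ring pattern -- see ring.c in the
// repository for the lesson's actual version.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int token;
    if (world_rank != 0) {
        // Everyone except rank 0 receives from the left neighbor first;
        // receiving before sending keeps the ring deadlock-free.
        MPI_Recv(&token, 1, MPI_INT, world_rank - 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process %d received token %d from process %d\n",
               world_rank, token, world_rank - 1);
    } else {
        token = -1;  // Rank 0 originates the token.
    }
    MPI_Send(&token, 1, MPI_INT, (world_rank + 1) % world_size, 0,
             MPI_COMM_WORLD);

    if (world_rank == 0) {
        // Rank 0 closes the ring by receiving from the last rank.
        MPI_Recv(&token, 1, MPI_INT, world_size - 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 0 received token %d from process %d\n",
               token, world_size - 1);
    }

    MPI_Finalize();
    return 0;
}
```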
