
Commit

Merge pull request nanomsg#304 from waywardmonkeys/typo-fixes
Fix many typos.
djc committed Aug 20, 2014
2 parents 57ee570 + c35b00c commit 7b5318b
Showing 23 changed files with 58 additions and 59 deletions.
2 changes: 1 addition & 1 deletion doc/nn_cmsg.txt
@@ -25,7 +25,7 @@ DESCRIPTION

These functions can be used to iterate over ancillary data attached to a message.

-Structure 'nn_cmsghdr' represents a single anciallary property and contains following members:
+Structure 'nn_cmsghdr' represents a single ancillary property and contains following members:

size_t cmsg_len;
int cmsg_level;
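
(For context, the iteration this page describes can be sketched roughly as follows, assuming a struct nn_msghdr whose control buffer was filled in by a successful nn_recvmsg() call; the dump_cmsgs() helper is illustrative only.)

    #include <nanomsg/nn.h>
    #include <stdio.h>

    /* Sketch: walk the ancillary properties attached to a received message. */
    static void dump_cmsgs (struct nn_msghdr *hdr)
    {
        struct nn_cmsghdr *cmsg;
        for (cmsg = NN_CMSG_FIRSTHDR (hdr); cmsg != NULL;
             cmsg = NN_CMSG_NXTHDR (hdr, cmsg)) {
            printf ("level=%d type=%d len=%zu\n",
                cmsg->cmsg_level, cmsg->cmsg_type, cmsg->cmsg_len);
        }
    }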
2 changes: 1 addition & 1 deletion doc/nn_poll.txt
@@ -71,7 +71,7 @@ nn_poll is a convenience function. You can achieve same behaviour by using
NN_RCVFD and NN_SNDFD socket options. However, using the socket options
allows for usage that's not possible with nn_poll, such as simultaneous polling
for both SP and OS-level sockets, integration of SP sockets with external event
-loops et c.
+loops etc.

EXAMPLE
-------
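
(The NN_RCVFD integration mentioned above can be sketched like this; a POSIX poll() loop is assumed and error handling is trimmed.)

    #include <nanomsg/nn.h>
    #include <poll.h>
    #include <stddef.h>

    /* Wait until SP socket 's' has a message to receive, using an
       external event loop instead of nn_poll(). */
    int wait_readable (int s, int timeout_ms)
    {
        int rcvfd;
        size_t sz = sizeof (rcvfd);
        /* NN_RCVFD yields an OS-level descriptor that becomes readable
           whenever a message can be received from the SP socket. */
        if (nn_getsockopt (s, NN_SOL_SOCKET, NN_RCVFD, &rcvfd, &sz) < 0)
            return -1;
        struct pollfd pfd = { .fd = rcvfd, .events = POLLIN, .revents = 0 };
        return poll (&pfd, 1, timeout_ms);
    }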
2 changes: 1 addition & 1 deletion doc/nn_pubsub.txt
@@ -36,7 +36,7 @@ Topic with zero length matches any message.
If the socket is subscribed to multiple topics, message matching any of them
will be delivered to the user.

-The entire message, including the the topic, is delivered to the user.
+The entire message, including the topic, is delivered to the user.

Socket Types
~~~~~~~~~~~~
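
(To illustrate the delivery behaviour noted in this hunk, here is a rough subscriber sketch; the topic string and address are placeholders and error checks are omitted.)

    #include <nanomsg/nn.h>
    #include <nanomsg/pubsub.h>
    #include <stdio.h>

    int main (void)
    {
        int sub = nn_socket (AF_SP, NN_SUB);
        /* Receive only messages whose body starts with "weather.". */
        nn_setsockopt (sub, NN_SUB, NN_SUB_SUBSCRIBE, "weather.", 8);
        nn_connect (sub, "tcp://127.0.0.1:5555");

        char *msg = NULL;
        int n = nn_recv (sub, &msg, NN_MSG, 0);
        if (n >= 0) {
            /* The buffer holds the whole message, topic prefix included. */
            printf ("%.*s\n", n, msg);
            nn_freemsg (msg);
        }
        nn_close (sub);
        return 0;
    }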
2 changes: 1 addition & 1 deletion doc/nn_symbol_info.txt
@@ -27,7 +27,7 @@ struct nn_symbol_properties {
/* The constant value */
int value;

-/* The contant name */
+/* The constant name */
const char* name;

/* The constant namespace, or zero for namespaces themselves */
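
(A short sketch of enumerating these properties with nn_symbol_info(), assuming the usual loop-until-zero idiom and the 'ns' member name from the nanomsg headers; the printed fields are illustrative.)

    #include <nanomsg/nn.h>
    #include <stdio.h>

    int main (void)
    {
        struct nn_symbol_properties prop;
        int i;
        /* nn_symbol_info() returns 0 once the index runs past the last symbol. */
        for (i = 0; nn_symbol_info (i, &prop, sizeof (prop)) != 0; i++)
            printf ("%-24s value=%d ns=%d\n", prop.name, prop.value, prop.ns);
        return 0;
    }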
2 changes: 1 addition & 1 deletion rfc/sp-protocol-ids-01.txt
@@ -72,7 +72,7 @@ Internet-Draft List of SP protocol IDs June 2014
Protocol IDs denote the SP protocol used (such as request/reply or
publish/subscribe), while endpoint role determines the role of the
endpoint within the topology (requester vs. replier, publisher vs.
-subscriber et c.) Both numbers are in network byte order.
+subscriber etc.) Both numbers are in network byte order.

Protocol IDs are global, while endpoint roles are specific to any
given protocol. As such, protocol IDs are defined in this document,
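
(Purely to illustrate the byte-order remark above, writing the two numbers in network byte order might look like the sketch below. The 16-bit widths and the buffer layout are assumptions made for the example, not taken from the draft.)

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    /* Serialise a protocol ID and an endpoint role as big-endian
       (network byte order) 16-bit integers into 'buf' (4 bytes). */
    static void pack_ids (uint8_t *buf, uint16_t protocol_id, uint16_t endpoint_role)
    {
        uint16_t id_be = htons (protocol_id);
        uint16_t role_be = htons (endpoint_role);
        memcpy (buf, &id_be, 2);
        memcpy (buf + 2, &role_be, 2);
    }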
2 changes: 1 addition & 1 deletion rfc/sp-protocol-ids-01.xml
@@ -50,7 +50,7 @@
<t>Protocol IDs denote the SP protocol used (such as request/reply or
publish/subscribe), while endpoint role determines the role of the
endpoint within the topology (requester vs. replier, publisher vs.
-subscriber et c.) Both numbers are in network byte order.</t>
+subscriber etc.) Both numbers are in network byte order.</t>

<t>Protocol IDs are global, while endpoint roles are specific to any given
protocol. As such, protocol IDs are defined in this document, while
3 changes: 1 addition & 2 deletions rfc/sp-publish-subscribe-01.txt
@@ -71,8 +71,7 @@ Internet-Draft Publish/Subscribe SP May 2014
arbitrarily complex topology rather than of a single node-to-node
communication, several underlying protocols can be used in parallel.
For example, publisher can send a message to intermediary node via
-TCP. The intermediate node can then forward the message via PGM et
-c.
+TCP. The intermediate node can then forward the message via PGM etc.

+---+ TCP +---+ PGM +---+
| |----------->| |---------->| |
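
(An intermediary like the one in this paragraph is typically built from raw sockets and nn_device() in nanomsg. The sketch below is illustrative: the addresses are placeholders, and whether the raw SUB side needs the explicit empty subscription is an assumption, not something stated in the draft.)

    #include <nanomsg/nn.h>
    #include <nanomsg/pubsub.h>

    int main (void)
    {
        /* Side facing publishers: raw SUB socket bound over TCP. */
        int from_pubs = nn_socket (AF_SP_RAW, NN_SUB);
        nn_setsockopt (from_pubs, NN_SUB, NN_SUB_SUBSCRIBE, "", 0);
        nn_bind (from_pubs, "tcp://*:5555");

        /* Side facing subscribers: raw PUB socket re-publishing over PGM. */
        int to_subs = nn_socket (AF_SP_RAW, NN_PUB);
        nn_bind (to_subs, "pgm://eth0;239.192.0.1:5556");

        /* Forward every message from one side to the other until an error. */
        return nn_device (from_pubs, to_subs);
    }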
2 changes: 1 addition & 1 deletion rfc/sp-publish-subscribe-01.xml
@@ -53,7 +53,7 @@
arbitrarily complex topology rather than of a single node-to-node
communication, several underlying protocols can be used in parallel.
For example, publisher can send a message to intermediary node via TCP.
-The intermediate node can then forward the message via PGM et c.</t>
+The intermediate node can then forward the message via PGM etc.</t>

<figure>
<artwork>
28 changes: 14 additions & 14 deletions rfc/sp-request-reply-01.txt
@@ -85,7 +85,7 @@ Internet-Draft Request/Reply SP August 2013
no matter what instance of the service have computed it.

Service that accepts empty requests and produces the number of
-requests processed so far (1, 2, 3 et c.), on the other hand, is not
+requests processed so far (1, 2, 3 etc.), on the other hand, is not
stateless. To prove it you can run two instances of the service.
First reply, no matter which instance produces it is going to be 1.
Second reply though is going to be either 2 (if processed by the same
@@ -153,7 +153,7 @@ Internet-Draft Request/Reply SP August 2013
"enterprise service bus" model. In the simplest case the bus can
be implemented as a simple hub-and-spokes topology. In complex
cases the bus can span multiple physical locations or multiple
-oraganisations with intermediate nodes at the boundaries
+organisations with intermediate nodes at the boundaries
connecting different parts of the topology.

In addition to distributing tasks to processing nodes, request/reply
@@ -183,18 +183,18 @@ Internet-Draft Request/Reply SP August 2013
As can be seen from the above, one request may be processed multiple
times. For example, reply may be lost on its way back to the client.
Client will assume that the request was not processed yet, it will
-resend it and thus cause duplicit execution of the task.
+resend it and thus cause duplicate execution of the task.

-Some applications may want to prevent duplicit execution of tasks.
+Some applications may want to prevent duplicate execution of tasks.
It often turns out that hardening such applications to be idempotent
is relatively easy as they already possess the tools to do so. For
example, a payment processing server already has access to a shared
database which it can use to verify that the payment with specified
ID was not yet processed.

On the other hand, many applications don't care about occasional
-duplicitly processed tasks. Therefore, request/reply protocol does
-not require the service to be idempotent. Instead, the idempotancy
+duplicate processed tasks. Therefore, request/reply protocol does
+not require the service to be idempotent. Instead, the idempotence
issue is left to the user to decide on.

Finally, it should be noted that this specification discusses several
@@ -213,7 +213,7 @@ Internet-Draft Request/Reply SP August 2013
communication, several underlying protocols can be used in parallel.
For example, a client may send a request via WebSocket, then, on the
edge of the company network an intermediary node may retransmit it
-using TCP et c.
+using TCP etc.



@@ -288,14 +288,14 @@ Internet-Draft Request/Reply SP August 2013

Thus, when a node is about to send a request, it can choose to send
it only to one of the channels that don't report pushback at the
-moment. To implement approximately fair distibution of the workload
+moment. To implement approximately fair distribution of the workload
the node choses a channel from that pool using the round-robin
algorithm.

As for delivering replies back to the clients, it should be
understood that the client may not be directly accessible (say using
TCP/IP) from the processing node. It may be beyond a firewall, have
-no static IP address et c. Furthermore, the client and the processing
+no static IP address etc. Furthermore, the client and the processing
may not even speak the same transport protocol -- imagine client
connecting to the topology using WebSockets and processing node via
SCTP.
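
(The selection rule described above, round-robin over channels that are not reporting pushback, can be sketched as below; the channel structure is hypothetical and exists only to show the rule.)

    #include <stddef.h>

    /* Hypothetical bookkeeping for one outbound channel. */
    struct channel {
        int id;
        int pushback;   /* non-zero while the peer reports pushback */
    };

    /* Pick the next channel for a request, continuing round-robin from
       *cursor and skipping channels that currently report pushback.
       Returns an index into chans[], or -1 if every channel is blocked. */
    int choose_channel (const struct channel *chans, size_t n, size_t *cursor)
    {
        for (size_t i = 0; i < n; i++) {
            size_t idx = (*cursor + i) % n;
            if (!chans[idx].pushback) {
                *cursor = idx + 1;
                return (int) idx;
            }
        }
        return -1;   /* all channels report pushback; the caller should wait */
    }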
@@ -317,7 +317,7 @@ Internet-Draft Request/Reply SP August 2013

The upside, on the other hand, is that the nodes in the topology
don't have to maintain any routing tables beside the simple table of
-adjacent channels along with thier IDs. There's also no need for any
+adjacent channels along with their IDs. There's also no need for any
additional protocols for distributing routing information within the
topology.

@@ -381,7 +381,7 @@ Internet-Draft Request/Reply SP August 2013
tag. That allows the algorithm to find out where the tags end and
where the message payload begins.

-As for the reamining 31 bits, they are either request ID (in the last
+As for the remaining 31 bits, they are either request ID (in the last
tag) or a channel ID (in all the remaining tags). The first channel
ID is added and processed by the REP endpoint closest to the
processing node. The last channel ID is added and processed by the
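
(A sketch of scanning the tag stack described above. It assumes the draft's convention that the tag with the most significant bit set is the final one and carries the request ID in its lower 31 bits, while earlier tags carry channel IDs.)

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    /* Scan the 32-bit tags prefixed to a request. On success, store the
       request ID and return the offset where the payload begins; return
       -1 if the data runs out before the terminating tag is found. */
    long find_payload (const uint8_t *msg, size_t len, uint32_t *request_id)
    {
        size_t off = 0;
        while (off + 4 <= len) {
            uint32_t tag;
            memcpy (&tag, msg + off, 4);
            tag = ntohl (tag);
            off += 4;
            if (tag & 0x80000000u) {          /* final tag: request ID */
                *request_id = tag & 0x7fffffffu;
                return (long) off;
            }
            /* Otherwise the lower 31 bits are a channel ID; keep scanning. */
        }
        return -1;
    }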
@@ -511,7 +511,7 @@ Internet-Draft Request/Reply SP August 2013
responsive. It can be thought of as a crude scheduling algorithm.
However crude though, it's probably still the best you can get
without knowing estimates of execution time for individual tasks, CPU
-capacity of individual processing nodes et c.
+capacity of individual processing nodes etc.

Alternatively, backpressure can be thought of as a congestion control
mechanism. When all available processing nodes are busy, it slows
@@ -694,7 +694,7 @@ Internet-Draft Request/Reply SP August 2013
If the request is successfully sent, the endpoint stores the request
including its request ID, so that it can be resent later on if
needed. At the same time it sets up a timer to trigger the re-
-transimission in case the reply is not received within a specified
+transmission in case the reply is not received within a specified
timeout. The user MUST be allowed to specify the timeout interval.
The default timeout interval must be 60 seconds.
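
(nanomsg exposes this interval as the NN_REQ_RESEND_IVL socket option, in milliseconds with a 60000 ms default; a brief sketch of overriding it, error handling omitted.)

    #include <nanomsg/nn.h>
    #include <nanomsg/reqrep.h>

    int make_req_socket (void)
    {
        int s = nn_socket (AF_SP, NN_REQ);
        /* Resend an unanswered request after 30 s instead of the default 60 s. */
        int ivl = 30000;
        nn_setsockopt (s, NN_REQ, NN_REQ_RESEND_IVL, &ivl, sizeof (ivl));
        return s;
    }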

@@ -795,7 +795,7 @@ Internet-Draft Request/Reply SP August 2013
legitimate setups can cause loop to be created.

With no additional guards against the loops, it's likely that
-requests will be caugth inside the loop, rotating there forever, each
+requests will be caught inside the loop, rotating there forever, each
message gradually growing in size as new prefixes are added to it by
each REP endpoint on the way. Eventually, a loop can cause
congestion and bring the whole system to a halt.
28 changes: 14 additions & 14 deletions rfc/sp-request-reply-01.xml
@@ -67,7 +67,7 @@
no matter what instance of the service have computed it.</t>

<t>Service that accepts empty requests and produces the number
-of requests processed so far (1, 2, 3 et c.), on the other hand, is
+of requests processed so far (1, 2, 3 etc.), on the other hand, is
not stateless. To prove it you can run two instances of the service.
First reply, no matter which instance produces it is going to be 1.
Second reply though is going to be either 2 (if processed by the same
@@ -123,7 +123,7 @@
The "enterprise service bus" model. In the simplest case the bus
can be implemented as a simple hub-and-spokes topology. In complex
cases the bus can span multiple physical locations or multiple
-oraganisations with intermediate nodes at the boundaries connecting
+organisations with intermediate nodes at the boundaries connecting
different parts of the topology.</t>
</list>

@@ -149,18 +149,18 @@
<t>As can be seen from the above, one request may be processed multiple
times. For example, reply may be lost on its way back to the client.
Client will assume that the request was not processed yet, it will
-resend it and thus cause duplicit execution of the task.</t>
+resend it and thus cause duplicate execution of the task.</t>

-<t>Some applications may want to prevent duplicit execution of tasks. It
+<t>Some applications may want to prevent duplicate execution of tasks. It
often turns out that hardening such applications to be idempotent is
relatively easy as they already possess the tools to do so. For
example, a payment processing server already has access to a shared
database which it can use to verify that the payment with specified ID
was not yet processed.</t>

<t>On the other hand, many applications don't care about occasional
-duplicitly processed tasks. Therefore, request/reply protocol does not
-require the service to be idempotent. Instead, the idempotancy issue
+duplicate processed tasks. Therefore, request/reply protocol does not
+require the service to be idempotent. Instead, the idempotence issue
is left to the user to decide on.</t>

<t>Finally, it should be noted that this specification discusses several
@@ -182,7 +182,7 @@
communication, several underlying protocols can be used in parallel.
For example, a client may send a request via WebSocket, then, on the
edge of the company network an intermediary node may retransmit it
-using TCP et c.</t>
+using TCP etc.</t>

<figure>
<artwork>
@@ -248,14 +248,14 @@

<t>Thus, when a node is about to send a request, it can choose to send
it only to one of the channels that don't report pushback at the
-moment. To implement approximately fair distibution of the workload
+moment. To implement approximately fair distribution of the workload
the node choses a channel from that pool using the round-robin
algorithm.</t>

<t>As for delivering replies back to the clients, it should be understood
that the client may not be directly accessible (say using TCP/IP) from
the processing node. It may be beyond a firewall, have no static IP
-address et c. Furthermore, the client and the processing may not even
+address etc. Furthermore, the client and the processing may not even
speak the same transport protocol -- imagine client connecting to the
topology using WebSockets and processing node via SCTP.</t>

@@ -276,7 +276,7 @@

<t>The upside, on the other hand, is that the nodes in the topology don't
have to maintain any routing tables beside the simple table of
-adjacent channels along with thier IDs. There's also no need for any
+adjacent channels along with their IDs. There's also no need for any
additional protocols for distributing routing information within
the topology.</t>

@@ -334,7 +334,7 @@
That allows the algorithm to find out where the tags end and where
the message payload begins.</t>

-<t>As for the reamining 31 bits, they are either request ID (in the last
+<t>As for the remaining 31 bits, they are either request ID (in the last
tag) or a channel ID (in all the remaining tags). The first channel ID
is added and processed by the REP endpoint closest to the processing
node. The last channel ID is added and processed by the REP endpoint
@@ -445,7 +445,7 @@
responsive. It can be thought of as a crude scheduling algorithm.
However crude though, it's probably still the best you can get
without knowing estimates of execution time for individual tasks,
-CPU capacity of individual processing nodes et c.</t>
+CPU capacity of individual processing nodes etc.</t>

<t>Alternatively, backpressure can be thought of as a congestion control
mechanism. When all available processing nodes are busy, it slows
@@ -613,7 +613,7 @@
<t>If the request is successfully sent, the endpoint stores the request
including its request ID, so that it can be resent later on if
needed. At the same time it sets up a timer to trigger the
-re-transimission in case the reply is not received within a specified
+re-transmission in case the reply is not received within a specified
timeout. The user MUST be allowed to specify the timeout interval.
The default timeout interval must be 60 seconds.</t>

@@ -704,7 +704,7 @@
legitimate setups can cause loop to be created.</t>

<t>With no additional guards against the loops, it's likely that
-requests will be caugth inside the loop, rotating there forever,
+requests will be caught inside the loop, rotating there forever,
each message gradually growing in size as new prefixes are added to it
by each REP endpoint on the way. Eventually, a loop can cause
congestion and bring the whole system to a halt.</t>
6 changes: 3 additions & 3 deletions rfc/sp-tcp-mapping-01.txt
@@ -98,7 +98,7 @@ Internet-Draft TCP mapping for SPs March 2014

The fact that the first byte of the protocol header is binary zero
eliminates any text-based protocols that were accidentally connected
-to the endpiont. Subsequent two bytes make the check even more
+to the endpoint. Subsequent two bytes make the check even more
rigorous. At the same time they can be used as a debugging hint to
indicate that the connection is supposed to use one of the
scalability protocols -- ASCII representation of these bytes is 'SP'
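
(The sanity check described here can be sketched as follows; only the leading 0x00 'S' 'P' bytes visible in this hunk are checked, and the remaining header fields are left to the full mapping.)

    #include <stdint.h>
    #include <stddef.h>

    /* Return non-zero if the received protocol header starts like an SP
       endpoint rather than an accidentally connected text-based protocol. */
    int looks_like_sp (const uint8_t *hdr, size_t len)
    {
        return len >= 3 && hdr[0] == 0x00 && hdr[1] == 'S' && hdr[2] == 'P';
    }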
@@ -143,7 +143,7 @@ Internet-Draft TCP mapping for SPs March 2014
+------------+-----------------+

It may seem that 64 bit message size is excessive and consumes too
-much of valueable bandwidth, especially given that most scenarios
+much of valuable bandwidth, especially given that most scenarios
call for relatively small messages, in order of bytes or kilobytes.

Variable length field may seem like a better solution, however, our
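
(A sketch of the framing discussed here: an 8-byte message size followed by the payload. Network byte order for the size field is assumed in this example, and the 'out' buffer must have room for 8 + len bytes.)

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Frame one message: write the 64-bit size big-endian, then the body.
       Returns the total number of bytes placed into 'out'. */
    size_t frame_message (uint8_t *out, const uint8_t *payload, uint64_t len)
    {
        for (int i = 0; i < 8; i++)
            out[i] = (uint8_t) (len >> (8 * (7 - i)));
        memcpy (out + 8, payload, (size_t) len);
        return (size_t) (8 + len);
    }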
@@ -154,7 +154,7 @@ Internet-Draft TCP mapping for SPs March 2014
portion of the message and the performance impact is not even
measurable.

-For small messages, the overal throughput is heavily CPU-bound, never
+For small messages, the overall throughput is heavily CPU-bound, never
I/O-bound. In other words, CPU processing associated with each
individual message limits the message rate in such a way that network
bandwidth limit is never reached. In the future we expect it to be
8 changes: 4 additions & 4 deletions rfc/sp-tcp-mapping-01.xml
@@ -27,7 +27,7 @@
<abstract>
<t>This document defines the TCP mapping for scalability protocols.
The main purpose of the mapping is to turn the stream of bytes
-into stream of messages. Additionaly, the mapping provides some
+into stream of messages. Additionally, the mapping provides some
additional checks during the connection establishment phase.</t>
</abstract>

@@ -80,7 +80,7 @@

<t>The fact that the first byte of the protocol header is binary zero
eliminates any text-based protocols that were accidentally connected
-to the endpiont. Subsequent two bytes make the check even more
+to the endpoint. Subsequent two bytes make the check even more
rigorous. At the same time they can be used as a debugging hint to
indicate that the connection is supposed to use one of the scalability
protocols -- ASCII representation of these bytes is 'SP' that can
@@ -123,7 +123,7 @@
</figure>

<t>It may seem that 64 bit message size is excessive and consumes too much
-of valueable bandwidth, especially given that most scenarios call for
+of valuable bandwidth, especially given that most scenarios call for
relatively small messages, in order of bytes or kilobytes.</t>

<t>Variable length field may seem like a better solution, however, our
@@ -133,7 +133,7 @@
<t>For large messages, 64 bits used by the field form a negligible portion
of the message and the performance impact is not even measurable.</t>

-<t>For small messages, the overal throughput is heavily CPU-bound, never
+<t>For small messages, the overall throughput is heavily CPU-bound, never
I/O-bound. In other words, CPU processing associated with each
individual message limits the message rate in such a way that network
bandwidth limit is never reached. In the future we expect it to be