Commit 0bf39bd

c++ -> cpp
1 parent 506863d commit 0bf39bd

1 file changed: +6 -6 lines changed

blog/2021-03-29-send-block-lifecycle.md

Lines changed: 6 additions & 6 deletions
@@ -36,7 +36,7 @@ The node then will send this message to its peers.
For each peer there is an already established TCP connection and after a message is processed a new message listener is created.
This is how the listener is installed in `bootstrap_server.cpp:151`

-```c++
+```cpp
void nano::bootstrap_server::receive ()
{
// ...
@@ -51,7 +51,7 @@ void nano::bootstrap_server::receive ()
Which will put whatever we receive through the TCP connection into the `receive_buffer`.
The function `receive_header_action` is immediately after and reads like this

-```c++
+```cpp
void nano::bootstrap_server::receive_header_action (boost::system::error_code const & ec, size_t size_a)
{
if (!ec)
@@ -74,7 +74,7 @@ void nano::bootstrap_server::receive_header_action (boost::system::error_code co
```

What happens above is that the head of the `receive_buffer` is assigned to `type_stream` and `type_stream` is used to instantiate a `message_header` class. The logic in the constructor will deserialize the stream and, in particular, will fill the `header.type` attribute. This is because, provided no error happened, the next thing we do will depend on the `header.type` (the switch construct). Let's see the case for a publish message.
-```c++
+```cpp
case nano::message_type::publish:
{
socket->async_read (receive_buffer, header.payload_length_bytes (), [this_l, header](boost::system::error_code const & ec, size_t size_a) {
@@ -86,7 +86,7 @@ case nano::message_type::publish:
It's installing another listener on the same buffer. The handler will call the `receive_publish_action` function in the same file, which validates the work in the carried block. It then adds the message to the `requests` deque. This will ultimately be processed by the `request_response_visitor`, which in turn puts the message into the `entries` deque of the `tcp_message_manager`.
### Processing message entries
At this point the `network` class enters the stage. When initialized, this class runs the `process_messages` loop at `tcp.cpp:279`.
-```c++
+```cpp
void nano::transport::tcp_channels::process_messages ()
{
while (!stopped) // while we are not shutting down the node
@@ -100,7 +100,7 @@ void nano::transport::tcp_channels::process_messages ()
}
```
Internally, `process_message` makes sure we have a channel open with the message originator. Then it creates a `network_message_visitor` relative to the channel and processes the publish message according to the following function in `network.cpp`:
-```c++
+```cpp
void publish (nano::publish const & message_a) override
{
// ... logging and monitoring logic ...
@@ -118,7 +118,7 @@ Whenever a `node` class is instantiated it spawns a block processor thread. This
The full logic can be found in `ledger.cpp` in the `send_block` function. At its core it's a pyramid of ifs which try to account for all possible things that might go wrong, for example whether the work of the block is sufficient (note that we already checked this when we received the block from another node).

At the top of the pyramid we finally execute the instruction
-```c++
+```cpp
ledger.store.block_put (transaction, hash, block_a);
```

