gnrc_gomach: a traffic adaptive MAC protocol for IEEE 802.15.4 networks (GoMacH). #5618
Conversation
Cool! I just skimmed over the code and found some parts that really look familiar to me 😄 I think it would be a good idea to refactor the commonalities of our code into more generic modules or MAC helpers. It looks like you also need to track neighbours and timeouts. Regarding the packet queue, there is gnrc/pktqueue.h, which is a lot simpler. I only found it after I had already implemented mine.
@daniel-k Exactly, I agree with that 😄:
Actually, as I wrote above, after I read your code (Lw-MAC), I found that many modules and functionalities of Lw-MAC are very useful and helpful, so why not build my own work on them 😄. So, to facilitate the iQueue-MAC implementation, I took parts of your work (like the timeouts and the neighbour list, etc.). Thanks for the nice work!
Yes, iQueue-MAC also tracks neighbours and timeouts. But during my experiments I found that the RTT has strong time drift. In my experiments, the drift between two nodes can be 1 ms per 20 min. So, as stated in Lw-MAC, it will be good to:
in the future.
@zhuoshuguo You're very welcome to use as much of my code as you need! It's just unfortunate to have redundant code.
I guess that this won't help at all. It's an inherent problem of drifting clocks in distributed systems. 1 ms in 20 min is roughly in the range of what I would expect (I did some calculations back then). Therefore, this was an upper bound on my sleep duration / the time that nodes don't communicate. The comment about using xtimer instead of the RTT was to simplify the code, as having two timer systems makes it unnecessarily complex.
That looks rather big (~830 ppm). A typical cheap crystal on these systems has a drift of about ±30 or ±50 ppm.
Yes, that looks big. I have tested this drift effect in a simple experiment using xtimer. Node-A:
Node-B:
You can see that the two nodes count different RTT ticks for the same 9 seconds (set via xtimer), and the gap is not small. I am not sure which module (xtimer or RTT) is more accurate (maybe xtimer is more accurate than RTT), but I guess at least one of them has a relatively big drift effect. Also, I am not sure whether the OS has introduced some effect on this. And yes, distributed systems have to face the timer drift effect. In iQueue-MAC, there is also a re-phase-lock scheme to fix this if the sender loses its receiver's phase (wake-up period) due to timer drift.
What are you using for the RTT test? The 29.4 kHz frequency seems odd. |
I used the samr21-xpro board for the RTT test. By the way, the tick count shown above is not for 10 seconds, it is for 9 seconds, i.e., 294925 ticks for 9 s (in that case the frequency would be about 32.769 kHz).
A curious question: is this part of a PhD thesis or similar work? The level of detail in the description of the algorithm is very good.
Thanks. ;-) Actually, the original idea of GoMacH comes from this paper, iQueue-MAC, which was indeed part of my PhD thesis when I was in China. Now the design and implementation of GoMacH (which is very different from iQueue-MAC) is part of my RIOT-ADT project at INRIA.
With the ICN conference CCN-lite tutorial session going on, you can apply and play with GoMacH on
On samr-21 board-1:
On samr-21 board-2:
Tested successfully on IoT-lab with
(okay, after some warm-up period the latency gets a little better)
Yep, that's great!~
Some minor nits: the static inline functions are internal, so their mis-characterization shouldn't be reflected in code size, but it is weird to have functions of more than 2 lines (or 5 lines if they contain an if-else or variables) be inline. Please squash changes immediately.
sys/include/net/gnrc/gomach/gomach.h
Outdated
@@ -0,0 +1,353 @@
/*
 * Copyright (C) 2016 INRIA
2017
sys/include/net/gnrc/gomach/hdr.h
Outdated
@@ -0,0 +1,143 @@
/*
 * Copyright (C) 2016 INRIA
2017
@@ -0,0 +1,100 @@
/*
 * Copyright (C) 2016 INRIA
2017
@@ -0,0 +1,2203 @@
/*
 * Copyright (C) 2016 INRIA
2017
@@ -0,0 +1,1429 @@
/*
 * Copyright (C) 2016 INRIA
2017
    return true;
}

static inline void _cp_tx_default(gnrc_netif_t *netif)
Very big function for a static inline
    gnrc_gomach_set_update(netif, false);
}

static inline void _t2k_wait_vtdma_tx_success(gnrc_netif_t *netif)
Very big function for a static inline
    gnrc_gomach_set_update(netif, true);
}

static inline void _t2k_wait_vtdma_tx_default(gnrc_netif_t *netif)
Very big function for a static inline
#endif
}

static inline void gomach_t2k_update(gnrc_netif_t *netif)
Very big function for a static inline
    gnrc_gomach_set_update(netif, false);
}

static inline void _t2u_data_tx_success(gnrc_netif_t *netif)
etc.
Will address immediately!~
I've got to go in about 20 min. Can you manage to do it by then?
You mean you will go out? 😄 No problem, take your time. I will get everything ready as fast as possible and wait for your action.
No, I'll be gone for the weekend in 20 min! ;-) |
(now 15 min ;-)) |
OK, no problem!~ Take your time~ |
Ah, maybe I got you wrong! You mean you will leave immediately! OK, I will update immediately! in 2 minutes |
Yes! |
Immediately!! |
10 min ;-) |
(maybe I have some time tonight to look at it again, but I can't promise) |
I hope we finish it now!~
Squashed!~
ACK
Let's go!~ |
All green!~ |
Then let's go! |
Congratulations! 🎉
Huge thanks to @miri64 !!!~~~ Thanks a lot for helping with this!!~ 😸
And huge thanks to all of you who provided comments and help on GoMacH!~ In the following days and in the future, I will keep maintaining and updating the GoMacH (and LWMAC) stuff. Many thanks again!~ Especially @miri64: next time we meet, beers and meals are on me!~ 😄
This PR is an implementation of GoMacH, (we believe) "a General, nearly Optimal MAC protocol for multi-Hop communications", for IEEE 802.15.4 IoT/WSN networks.
Its main features are:
Note: source of all the following figures is from our IEEE-LCN-2017 conference paper (GoMacH: A Traffic Adaptive Multi-channel MAC Protocol for IoT).
TODO
gnrc_netif2
Display the achieved radio duty-cycle of GoMacH
You can print out the achieved radio duty-cycle (a rough estimate) of GoMacH by setting the
GNRC_GOMACH_ENABLE_DUTYCYLE_RECORD
flag to "1". By doing so, each time a device sends or receives a packet, it will print out its achieved duty-cycle (counted from power-up or reboot).
For instance:
On the sender side:
On the receiver side, you will see:
The following figure shows the energy consumption rate measured on iotlab-m3 nodes when applying GoMacH as the MAC layer protocol, given an example duty-cycle of 30% (the duty-cycle can certainly be set much lower; this value is just to show the power consumption rate more clearly when the radio is on).

Generally, the power consumption rate on iotlab-m3 nodes can be brought down from 0.12 W (radio on) to 0.08 W (radio off). In other words, in a nearly optimal case (with normal light traffic), the duty-cycle scheme currently manages to cut about 1/3 of the power consumption on iotlab-m3 nodes. Notably, once the MAC is combined with RIOT's future deep-sleep mode, the power saving will be even more pronounced.
Big picture
GoMacH is a completely asynchronous and distributed solution for IEEE 802.15.4 IoT networks, which doesn't require global synchronization to maintain its operation. The backbone of GoMacH is a typical low duty-cycle scheme that supports basic transmissions. To tackle traffic variations and make GoMacH suitable for a larger range of applications (e.g., with bursty traffic), GoMacH adopts an efficient dynamic slot-allocation scheme which utilizes the sender's MAC queue-length information to accurately allocate transmission slots with nearly no extra overhead. Furthermore, GoMacH adopts a novel multi-channel communication scheme that spreads its communications onto different channels. Packet transmissions that fail on one channel due to interference will soon be recovered on another channel (as in TSCH).
GoMacH’s low duty-cycle scheme
In GoMacH, each node locally maintains its own superframe. The superframe starts with a short wake-up period (WP). Each node periodically wakes up in its own WP and listens for potential incoming packets. The WP has an initially small duration, but is variable as in T-MAC: right after each packet reception in WP, the receiver automatically extends its WP by another basic WP duration to receive more potential incoming packets. On the other hand, after phase-locking with the receiver (the phase-lock scheme is illustrated later), each sender wakes up right at the beginning of the receiver's WP, uses CSMA/CA to send packets, and turns off right after the transmission if it has no pending packets for the receiver, as illustrated above. With CSMA/CA, GoMacH allows multiple senders to transmit packets to the same receiver in the latter's WP. Notably, in WP, a sender is restricted to sending only a single packet to the receiver (even if it has multiple pending packets). Buffered packets for the receiver will be transmitted in the following slots period.
Provide instantaneous high throughput: dynamic slots allocation
To truly provide sufficient throughput for varying network situations, the protocol adopts an efficient queue-length based slots allocation scheme, which has been previously proposed in iQueue-MAC.
In short, right before triggering each data packet transmission, each sender embeds its current queue-length value (the number of buffered packets for the targeted receiver, also called the queue-length indicator) into the packet's MAC header. In WP (actually, in each packet reception), the receiver collects these queue-length indicators from all the senders. Then, right after WP, if the collected queue-length indicators are not all zero (indicating pending packets), the receiver immediately generates a slot-allocation scheduling beacon and broadcasts it. The dynamically allocated transmission slots compose the variable TDMA period (vTDMA) in GoMacH.
Slot application is done through data piggybacking, and the beacon only exists when there are slots to allocate. In light-traffic scenarios with no pending packets, GoMacH reduces to a typical low duty-cycle MAC, as shown in the first figure.
The above figure shows a typical WP+vTDMA transmission procedure in GoMacH (after phase-locking with the receiver). Sender-1 and sender-2 both have pending packets for the same receiver. After phase-locking, they wait for the wake-up period (WP) and use CSMA to send their first packets, at the same time telling the receiver the exact number of pending packets. Right after the WP, the receiver immediately broadcasts a beacon carrying the vTDMA slot-allocation list, which schedules the following transmissions in a well-ordered TDMA manner on the receiver's sub-channel. Note that the vTDMA procedure is not shown in this figure since it is carried out on another channel.
High Robustness: multi-channel operation
Channel usage in GoMacH
GoMacH is robust against external interference. The protocol adopts a multi-channel technique to spread its transmissions onto different channels. In short, GoMacH adopts:
1 ) a dual public channel scheme for transmissions in WP, and
2 ) carries all the vTDMA slotted transmissions on different sub-channels of different receivers.
By default, IEEE 802.15.4 channels 11 and 26 are selected as the dual public channels (public channels 1 and 2) for WP communications, while the rest of the channels are used as sub-channels for vTDMA transmissions. Of course, the channel sequences of public channels 1 and 2 can be reconfigured by users.
1. Dual public channel communication in WP
To guarantee the reliability of WP communication, GoMacH adopts a dual public channel scheme. All nodes in GoMacH adopt the same dual public channel sequences. Each node alternately switches its radio between public channel 1 and 2 during consecutive WPs, as the cycle count increases. Senders that track the WP phase of the receiver also track the latter's public channel phase. The idea of the dual public channel scheme is simple: as depicted in the figure above, once transmissions in WP on one public channel fail due to external interference in the receiver's cycle-N, they can soon be recovered (retransmitted) in the next cycle's (cycle-(N+1)) WP on the other public channel, as in TSCH.
2. vTDMA on sub-channels
When a receiver initiates vTDMA transmissions after WP (i.e., allocates transmission slots to senders), it carries out all the slotted transmissions on its locally unique sub-channel. Through a sub-channel selection procedure right after power-up (GoMacH initialization), each node maintains a locally unique sub-channel sequence among its one-hop neighbors; e.g., in the figure above, receiver-1's sub-channel is channel-15 and receiver-2's sub-channel is channel-21. Each time a receiver initiates vTDMA communications, it embeds its sub-channel sequence into its beacon and carries out the vTDMA procedure on its sub-channel. After extracting the sub-channel sequence from the received beacon, the senders switch to the indicated sub-channel and send their packets. Since nearby nodes have locally distinct sub-channels, their vTDMA transmissions can be carried out in parallel without collisions.
GoMacH’s phase-lock and broadcast schemes
phase-lock scheme
To save power on the sender side, GoMacH adopts a phase-lock scheme that allows senders to track the receiver's wake-up phase and the related public channel phase of WP. Since each node's WP is alternately located on public channel 1 or public channel 2, to catch one of the receiver's WPs during the preamble period (which is slightly longer than the superframe cycle), the preamble stream is actually composed of two parallel sub-streams on the two public channels.
The above figure shows the "preamble stream + preamble-ACK + data + ACK" phase-lock procedure in GoMacH. Notably, this figure only shows the communication procedure on one public channel; another preamble stream is broadcast simultaneously on the other public channel, until the sender gets the preamble-ACK from the receiver.
broadcast scheme
The broadcast scheme is exactly the same as the preamble (phase-lock) scheme, except that each broadcast lasts for a full cycle duration.
How to test:
Currently, GoMacH requires the RTT module to run, which provides the underlying timer source. As a future plan, to make GoMacH available for more devices, I will replace RTT with a more general timer source such as the xtimer-based
gnrc_mac
timeout module. For now, GoMacH fully supports the samr21-xpro and iotlab-m3 boards (which have RTT). If you don't have those boards with you, you can still try GoMacH with iotlab-m3 nodes in the FIT IoT-LAB remotely for free.
Of course, you can also try it with other boards that have the RTT module (but remember to add the board's name to the whitelist of the test Makefile).
Test by using the
gomach
test example: I have simply copied the default example from RIOT/examples to build a test example in RIOT/tests/gomach for testing this protocol.
Manually send a packet from one board to the other via the interactive shell:
You can also broadcast a packet by typing:
Some evaluation results
For performance comparison, I compared GoMacH to an adapted X-MAC. A test-bed of SAMR21-xpro nodes was set up in my office environment, and several real-world experiments were carried out. The above table shows the key parameters of GoMacH and X-MAC and some experimental settings adopted throughout all the experiments.
1. Impact of transmission hops
We tested GoMacH and X-MAC on a multi-hop test-bed to evaluate the impact of the number of transmission hops on their performance. The above figure shows the linear multi-hop test-bed, which has a maximum of 6-hop transmissions. Packets are generated at the left-end node (sender) and relayed to the right-end node (receiver). Relay nodes don't generate packets. We fix the data rate of the sender to 2 packets per second. The two protocols adopt the same channel-check rate of 2 Hz (i.e., a cycle duration of 500 ms).
We vary the number of transmission hops (number of relay nodes) between the sender and receiver from 1 to 6 to generate different test scenarios. Each test scenario lasts for 300 seconds and was repeated three times, and we averaged the results.
2. Impact of parallel sender number
This experiment investigated the impact of the number of parallel senders on the performance of GoMacH and X-MAC. There is one receiver and several senders that all generate packets for the receiver. The data rate of all senders is fixed to 2 packets per second, and both GoMacH and X-MAC adopt a cycle duration of 500 ms. We vary the number of senders to generate different test scenarios. Each test scenario lasts for 300 seconds.
PS: this experiment is based on the
RIOT/examples/default
application.3. Impact of data rate
This experiment investigated the effectiveness of GoMacH's traffic-adaptation capability (i.e., the dynamic slot-allocation scheme). A tree network, shown above, was built: data packets are generated at the senders and relayed to the sink. Relay nodes don't generate packets. In this experiment, both GoMacH and X-MAC adopt a cycle duration of 500 ms. We varied the data rate of the senders from one packet per 10 seconds up to an intensive 5 packets per second to generate different test scenarios. Each test scenario lasts for 300 seconds.
PS: this experiment is based on the
RIOT/examples/default
application.4. Robustness against external interference
GoMacH adopts multi-channel technique to enhance its robustness against external interferences.
This experiment investigated the effectiveness of GoMacH's multi-channel scheme, using the tree test-bed shown above, in the presence of wireless interference.
In this experiment, all the senders adopt a fixed data rate of 1 packet per second. Relay nodes don't generate packets. Both GoMacH and X-MAC adopt a cycle duration of 250 ms. We use one SAMR21-xpro node as the jamming source, which continuously generates a busy-tone covering the whole network. The busy-tone is generated on channel-26, which is used as the communication channel of X-MAC and one of the public channels of GoMacH. The busy-tone has a cycle of 1 second, and we vary the active ratio of the jamming signal (from 0% to 100%) to generate different test scenarios.
PS: this experiment is based on the
RIOT/examples/default
application.5. Bursty experiment with RPL
This test experiment intends to evaluate/verify that GoMacH can cooperate well with popular upper layer protocols like RPL and UDP in bursty traffic conditions. Two test-beds of 11 SAMR21-xpro nodes were deployed over one layer of our office building. The above figure shows the test-beds and the communication stack we applied to all the nodes which includes UDP, RPL, IPv6 and 6LoWPAN (PS: this experiment is based on
RIOT/examples/gnrc_networking
application).We deploy two different test-beds of "Mesh" and "Local", with different topologies as shown above.
In the "Local" test scenario, the test-bed is deployed in one office room that all nodes are in each other's communication range, simulating a dense network. While in the "Mesh" test scenario, nodes are scattered over the corridors, simulating a multi-hop network. In each node, an application layer generates data packets and uses UDP to send the packets to the sink. RPL is used to automatically build the routing paths. In all the tests, each node generates bursty data packets with an interval of 30 seconds. In each bursty data period, each node generates a bunch of 6 packets simultaneously. Each node adopts a cycle duration of 800ms and generates a total number of 300 packets. Notably, in each test, we extract our results from a recording period of the last 33 minutes of the experiment (started from 3 minutes ahead of the bursty data period, as shown in the following figure) where the topology is stable. We run both "Mesh" and "Local" scenarios for three times and average the results.
This figure shows the dynamic slot-allocation procedure in one "Mesh" test scenario, i.e., GoMacH dynamically allocates transmission slots to tackle bursty traffic loads.
6. Stability evaluation
Stability evaluation 1
Finished a first long-time experiment to test GoMacH's stability.

Settings:
Topology: one sink (receiver) and 5 senders:
This experiment is based on
examples/default
5 senders adopt a data rate of 1 packet/s (each packet carries a raw payload of 80 bytes) and transmit the generated data to the sink;
MAC cycle duration for all nodes: 200ms;
MAC Tx-Queue size for each node: 10 packets;
Experiment duration: 64 hours (more than two and a half days);
Results:
Packet delivery ratio of the network: 1146117 / 1146229 = 99.99022883%
Stability evaluation 2
Evaluated GoMacH through a second long-time stability experiment:
A multi-hop test-bed of SAMR21-xpro nodes with at most 3-hop transmissions. All the nodes were deployed in my office environment (all nodes are in each other's radio/interference range). All nodes (except the sink) generate data packets and send/relay them to the sink.
Application: based on the default application in RIOT/examples/default
Cycle duration of all nodes: 200ms;
Data rate: 1 packet per second per node.
Maximum link-layer packet retransmissions: 6.
Experiment duration: 48 hours (two days).
Results:
Packet delivery ratio: 898816/898830 = 99.9984424195899%. Only 14 packets were dropped.
Stability evaluation 3
Evaluated GoMacH through a third long-time stability experiment, settings:

This experiment is based on the
examples/gnrc_networking
which includes UDP and RPL.There are one sink and 5 senders.
The sink initiates the RPL network and sets itself as the root. All the senders send packets to sink using UDP.
In the experiment, there are two types of traffic running simultaneously:
1 ) upward traffic generated by the child nodes (senders), heading to the sink, and
2 ) downward traffic generated by the sink, heading to all the senders.
For the upward traffic, all senders use UDP to send packets to the link-local address of the sink.
For the downward traffic, the sink uses UDP to send packets to the configured IPv6 addresses of all the senders, which are made known to the sink through RPL, i.e., the downward traffic is based on RPL routing.
PS: In this experiment, the application layer didn't have a packet retransmission scheme to recover packet drops.
Other key MAC settings:
Cycle duration of GoMacH for all nodes: 200ms;
Maximum link layer packet retransmission: 6 times.
Data rate:
The experiment lasted about 48 hours (two days).
Results:
For upward traffic: 431425 (received) / 431528 (generated) =99.976%
For downward traffic : 34594 (received) / 34604 (generated) = 99.971%
Stability evaluation 4
Finished a fourth long-time experiment (the final one) for evaluating GoMacH's stability:

Settings:
Topology: one sink (receiver) and 5 senders:
This experiment is based on
examples/default
5 senders adopt an intensive data rate, continuously generating data packets and sending them to the sink;
Data rate for each sender: 10 packets/second;
Data size for each packet: 80 bytes (raw payload);
MAC cycle duration for all nodes: 200ms;
MAC Tx-Queue size for each node: 8 packets;
Experiment duration: 49 hours (> two days);
Notably, the traffic of the network (the traffic load from all the senders) is deliberately beyond the offered throughput of GoMacH. By applying such overwhelming traffic, the goal of this experiment is to check that:
Results:
Actually, with some adaptation/optimization (of GoMacH's WP period, or a larger TX-queue size), GoMacH's throughput can be further improved, i.e., achieving better traffic adaptation. That will be done in the future (maybe in follow-up PRs). ;-)