CoralRing

CoralRing is an ultra-low-latency, lock-free, garbage-free, batching and concurrent circular queue (ring) in off-heap shared memory for inter-process communication (IPC) in Java across different JVMs using memory-mapped files. It uses memory barriers through volatile operations instead of locks to allow messages to be sent as fast as possible.

An interesting characteristic of memory-mapped files is that they allow your shared memory to exceed the size of your machine's physical memory (RAM) by relying on the OS's virtual memory mechanism. Therefore your shared memory is limited not by your RAM but by the size of your hard drive (HDD/SSD). The trade-off of a large memory-mapped file is performance, as the OS needs to swap pages back and forth between the hard drive and memory in a process called paging.

For maximum performance (lowest possible latency) you should place your memory-mapped file inside the Linux /dev/shm/ folder so that the contents of your file are kept entirely in RAM. Of course by doing so you are back to being limited by your available RAM. CoralRing uses a circular queue (ring) in shared memory, so even with a small piece of memory you can transmit an unlimited number of messages to the other process.

For some performance numbers you can check this link.

Note: CoralRing allows the exchange of messages inside the same machine only. For a multicast message middleware solution to build distributed systems across multiple machines using the sequencer architecture, you should refer to this article.

Waiting Ring

(WaitingRing diagram)

Because the ring is a bounded circular queue, the first approach is to have a waiting producer and consumer. In other words, the ring producer will wait when the ring is full and the ring consumer will wait when the ring is empty. Basically a slow consumer will cause the producer to wait for space to become available in the ring. The consumer reads the messages (all the messages) in the same order that they were sent by the producer.

  • Click here for a minimal example of using a waiting ring producer
  • Click here for a minimal example of using a waiting ring consumer

  • Click here for a basic example of using a waiting ring producer
  • Click here for a basic example of using a waiting ring consumer

Note that for maximum performance the producer and the consumer should busy spin when waiting, polling the ring. However, you can also choose to use a wait strategy from CoralQueue.
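
For illustration, the consumer side of this busy-spin pattern looks roughly like the sketch below. It mirrors the availableToFetch()/fetch() calls shown later in this document, but the surrounding names (ringConsumer, doneFetching()) are assumptions for this sketch, so check the linked examples for the actual API. The producer busy spins in an analogous way when the ring is full.

// Busy-spin consumer loop (sketch; ringConsumer is an already-created waiting ring consumer)
while (true) {

    long avail = ringConsumer.availableToFetch(); // how many messages can be fetched right now

    if (avail == 0) continue; // ring is empty => busy spin instead of blocking

    for (long i = 0; i < avail; i++) {
        MutableLong ml = ringConsumer.fetch(); // read the next message, in order
        // process the message here
    }

    ringConsumer.doneFetching(); // hypothetical call to release the fetched batch
}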

Waiting Broadcast Ring

(WaitingBroadcastRing diagram)

You can also have a single producer broadcasting messages to multiple consumers so that each consumer gets all the messages in the same order that they were sent by the producer. Any slow consumer can cause the ring to get full and the producer to wait. As the slow consumer makes progress so will the producer.

  • Click here for a minimal example of using a waiting broadcast ring producer
  • Click here for a minimal example of using a waiting broadcast ring consumer

  • Click here for a basic example of using a waiting broadcast ring producer
  • Click here for a basic example of using a waiting broadcast ring consumer
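
Conceptually, each broadcast consumer keeps its own read position over the same shared ring, and the producer only has to wait for the slowest one. A rough sketch is given below; the class name and constructor arguments are illustrative only, not the actual API, so refer to the linked examples for the real signatures.

// Two independent consumers over the same memory-mapped ring (class name and arguments are illustrative)
WaitingBroadcastRingConsumer<MutableLong> consumer0 =
        new WaitingBroadcastRingConsumer<>(0, MutableLong.class, "/dev/shm/broadcast-ring.mmap"); // consumer index 0

WaitingBroadcastRingConsumer<MutableLong> consumer1 =
        new WaitingBroadcastRingConsumer<>(1, MutableLong.class, "/dev/shm/broadcast-ring.mmap"); // consumer index 1

// Each consumer runs its own availableToFetch()/fetch() busy-spin loop in its own process and sees
// every message in the same order; the producer waits only when the slowest consumer has not yet
// freed space in the ring.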

Non-Waiting Ring

(NonWaitingRing diagram)

Things get more interesting when we allow the ring producer to write as fast as possible without ever waiting on a full ring. Because the ring is a circular queue, the producer can just keep writing forever, overwriting the oldest messages at the head of the queue with the newest ones. In this new scenario, a lagging consumer that falls behind and loses messages will simply disconnect (give up) instead of causing the producer to wait. It has to disconnect because it must never skip messages from the producer.

long avail = ringConsumer.availableToFetch();

if (avail == 0) continue; // busy spin as the ring is empty

if (avail == -1) throw new RuntimeException("The consumer fell behind too much! (ring wrapped)");

This lagging consumer problem can be mitigated by creating a large memory-mapped file so that your shared memory ring is big enough to give the consumer room to fall behind and catch up. However, there is a more important issue we need to address with a non-waiting ring: the consumer can fall so far behind that it hits the edge of the circular ring. When that happens there is a small chance that the consumer will be reading the oldest message in the ring at the same time that the producer is overwriting it with the newest message. In other words, the consumer can trip over the producer.

Using a fall behind tolerance

The tripping over problem can only happen when the consumer falls behind by N messages, where N is the capacity of the ring. If it falls behind a little more, it simply disconnects. If it falls behind a little less, it should still be able to read the next message without any issues. So the bigger the capacity of the ring, the less likely it is for the consumer to trip over the producer, because it has more room to fall behind safely. Therefore, to reduce the chances of the consumer getting close to the edge, we can introduce a fall behind tolerance; in other words, we can make the consumer give up and disconnect early when it falls behind by a percentage P of the capacity of the ring.

The constructor of NonWaitingRingConsumer can take a float argument fallBehindTolerance to specify the percentage of the ring capacity the consumer is allowed to fall behind before disconnecting. When the consumer falls behind beyond that threshold, its availableToFetch() method returns -1.
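
As a sketch, a consumer that gives up once it falls behind by half of the ring capacity could be created as below. The fallBehindTolerance argument is the one described above; the other constructor arguments (capacity, message class and file name) are assumptions for this illustration, so check the linked examples for the exact signature.

// Disconnect early: availableToFetch() returns -1 once the consumer is more than 50% of the capacity behind.
// Capacity, message class and file name are illustrative, not the actual constructor signature.
NonWaitingRingConsumer<MutableLong> ringConsumer =
        new NonWaitingRingConsumer<>(1024, MutableLong.class, "/dev/shm/non-waiting-ring.mmap", 0.5f); // fallBehindTolerance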

Unfortunately, although this further reduces the chances of the consumer reading a corrupt message, it does not make them zero. Theoretically, the slowness of the consumer is so unpredictable that while it is reading a message there will always be a small chance that the producer is overwriting it. If we really want to eliminate this possibility completely we must use a checksum for each message.

Using a checksum for each message

To completely solve the corrupt message consumer problem, we can make the producer write a checksum together with each message so that the consumer can check the integrity of the message after it reads it. Although we use a fast hash algorithm (xxHash - https://xxhash.com/) to calculate the checksum, there is a small performance penalty to pay when you choose this approach.

The constructor of NonWaitingRingProducer can take a boolean argument writeChecksum to tell the producer to write the checksum. The constructor of NonWaitingRingConsumer can take a boolean argument checkChecksum to tell the consumer to check the checksum. The consumer can check for a checksum error by checking for a null value returned from fetch():

for(long i = 0; i < avail; i++) {
      
    MutableLong ml = ringConsumer.fetch();
      
    if (ml == null) {
        throw new RuntimeException("The consumer tripped over the producer! (checksum failed)");
    }
      
    // (...)
}

Note that when using the checksum approach there is no reason to also use a fall behind tolerance. You can catch the exception, assume that the consumer has fallen behind too much and disconnect (give up).
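
As a sketch, enabling the checksum on both ends could look like the snippet below. The writeChecksum and checkChecksum flags are the ones described above; the other constructor arguments are illustrative only, so check the linked examples for the exact signatures.

// Producer writes a checksum with every message; consumer verifies it on fetch().
// Capacity, message class and file name are illustrative, not the actual constructor signatures.
NonWaitingRingProducer<MutableLong> ringProducer =
        new NonWaitingRingProducer<>(1024, MutableLong.class, "/dev/shm/non-waiting-ring.mmap", true); // writeChecksum

NonWaitingRingConsumer<MutableLong> ringConsumer =
        new NonWaitingRingConsumer<>(1024, MutableLong.class, "/dev/shm/non-waiting-ring.mmap", true); // checkChecksum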

Using a very large memory-mapped file

There is also another simple approach to solve the tripping over problem: just allocate a very large memory-mapped file so that the producer never has to wrap around the ring. For example, let's say you want to send 100 million messages per day, with a maximum size of 1024 bytes. If you do the math you will see that this is only 95 gigabytes of hard drive space. And as a bonus, as long as you don't go above your predicted maximum number of messages (no wrapping around the ring), you will also have all your messages persisted to disk at the end of your daily session. Then to begin a new session you can move the session file someplace else for archiving, reset the message sequence back to 1, and start over again.
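
The arithmetic behind that figure is simple: 100 million messages of at most 1024 bytes each add up to about 102.4 billion bytes, i.e. roughly 95 gigabytes (GiB) of disk space.

long messagesPerDay = 100_000_000L;                   // 100 million messages per day
long maxMessageSize = 1024L;                          // maximum message size in bytes
long totalBytes = messagesPerDay * maxMessageSize;    // 102,400,000,000 bytes
double gib = totalBytes / (1024.0 * 1024.0 * 1024.0); // ≈ 95.4 GiB of hard drive space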

  • Click here for a minimal example of using a non-waiting ring producer
  • Click here for a minimal example of using a non-waiting ring consumer

  • Click here for a basic example of using a non-waiting ring producer
  • Click here for a basic example of using a non-waiting ring consumer

Non-Waiting Multicast Ring

A non-waiting ring can be used naturally to implement multicast consumers. That means that you can have multiple non-waiting ring consumers reading from the same non-waiting ring producer. All consumers will read all messages in the exact same order. A consumer can still fall behind and disconnect, but it will never miss a message or process a message out of order. It will also never make the producer wait as the producer does not even know how many consumers it is multicasting to. In other words, consumers can leave and join the ring at any moment without impacting the producer.

(NonWaitingMulticastRing diagram)

CoralQueue

CoralRing is great for threads running in different JVMs. But what about threads running inside the same JVM? For that you can check our CoralQueue project which is a collection of circular data structures for inter-thread communication in Java.

