Feat: Documentation for Monolake #1165

Merged 3 commits on Nov 15, 2024
9 changes: 9 additions & 0 deletions content/en/docs/monolake/Architecture/_index.md
@@ -0,0 +1,9 @@
---
title: "Architecture"
linkTitle: "Architecture"
weight: 3
keywords: ["Proxy", "Rust", "io_uring", "Architecture"]
description: "Architecture and design deep dive"
---


55 changes: 55 additions & 0 deletions content/en/docs/monolake/Architecture/connector.md
@@ -0,0 +1,55 @@
---
title: "Connectors"
linkTitle: "Connectors"
weight: 3
description: "Deep dive into monoio-transport's modular connector architecture, connection composition patterns, and layered network protocols"
---

# Connector Trait

The core of the [monoio-transports](https://docs.rs/monoio-transports/latest/monoio_transports/) crate is its modular and composable connector architecture, which allows developers to easily build complex, high-performance network communication solutions.

At the heart of this design is the [Connector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/trait.Connector.html) trait, which defines a common interface for establishing network connections:

```rust
pub trait Connector<K> {
    type Connection;
    type Error;
    fn connect(&self, key: K) -> impl Future<Output = Result<Self::Connection, Self::Error>>;
}
```
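To make the trait concrete, here is a minimal sketch of a custom connector that always dials a single fixed address. It is purely illustrative: `FixedAddrConnector` is a hypothetical type, and the published trait may carry additional bounds beyond the simplified definition shown above.

```rust
use std::future::Future;

use monoio::net::TcpStream;
use monoio_transports::connectors::Connector;

/// Hypothetical connector that always dials one fixed address.
struct FixedAddrConnector {
    addr: &'static str,
}

impl Connector<()> for FixedAddrConnector {
    type Connection = TcpStream;
    type Error = std::io::Error;

    fn connect(&self, _key: ()) -> impl Future<Output = Result<Self::Connection, Self::Error>> {
        // Copy the address out so the returned future does not borrow `self`.
        let addr = self.addr;
        async move { TcpStream::connect(addr).await }
    }
}
```

Because every connector exposes the same `connect` interface, a custom connector like this can sit underneath TLS or HTTP layers just like the built-in `TcpConnector`.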

## Stacking Connectors

Connectors can be composed and stacked to create more complex connection setups. For example, suppose you want to build an HTTPS connector that supports both the HTTP/1.1 and HTTP/2 protocols:

```rust
use monoio_transports::{
    connectors::{TcpConnector, TlsConnector},
    HttpConnector,
};

// Create a TCP connector
let tcp_connector = TcpConnector::default();

// Create a TLS connector on top of the TCP connector, with custom ALPN protocols
let tls_connector = TlsConnector::new_with_tls_default(tcp_connector, Some(vec!["http/1.1", "h2"]));

// Wrap the TLS stack with an HTTP connector that negotiates HTTP/1.1 or HTTP/2.
// (The constructor that wraps an existing inner connector is assumed here as
// `HttpConnector::new`; check the crate docs for the exact name.)
let https_connector: HttpConnector<TlsConnector<TcpConnector>, _, _> =
    HttpConnector::new(tls_connector);
```

In this example, we start with a basic [TcpConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TcpConnector.html), add a [TlsConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TlsConnector.html) on top of it to provide TLS encryption, and then wrap the whole stack with an HttpConnector to handle both HTTP/1.1 and HTTP/2 protocols. This modular approach allows you to easily customize the connector stack to suit your specific needs.

# Connector Types

The [monoio-transports](https://docs.rs/monoio-transports/latest/monoio_transports/) crate provides several pre-built connector types that you can use as building blocks for your own solutions. The table below outlines the available connectors, followed by a short example of dialing with the plain TCP connector:

| Connector Type | Description |
|---------------|-------------|
| [TcpConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TcpConnector.html) | Establishes TCP connections |
| [UnixConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.UnixConnector.html) | Establishes Unix domain socket connections |
| [TlsConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TlsConnector.html) | Adds TLS encryption to an underlying L4 connector, supporting both native-tls and rustls backends |
| [HttpConnector](https://docs.rs/monoio-transports/latest/monoio_transports/http/struct.HttpConnector.html) | Handles HTTP protocol negotiation and connection setup |
| [PooledConnector](https://docs.rs/monoio-transports/latest/monoio_transports/pool/struct.PooledConnector.html) | Provides connection pooling capabilities for any underlying connector |
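As a usage sketch, dialing with the plain `TcpConnector` looks like the following. The exact key type accepted by `connect` is an assumption here (a socket address); consult the connector's docs for the precise bound.

```rust
use std::net::SocketAddr;

use monoio_transports::connectors::{Connector, TcpConnector};

#[monoio::main]
async fn main() {
    let connector = TcpConnector::default();

    // For a plain TCP connector the connection key is simply the address to
    // dial; keys become more interesting once pooling sits on top.
    let addr: SocketAddr = "127.0.0.1:8080".parse().unwrap();
    let _conn = connector
        .connect(addr)
        .await
        .expect("failed to establish TCP connection");
}
```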

74 changes: 74 additions & 0 deletions content/en/docs/monolake/Architecture/context.md
@@ -0,0 +1,74 @@
---
title: "Context Management"
linkTitle: "Context"
weight: 4
---

# `certain_map`

In a service-oriented architecture, managing the context data that flows between different services is a critical aspect of the system design. The [`certain_map`](https://docs.rs/certain-map/latest/certain_map/) crate provides a powerful way to define and work with typed context data, ensuring the existence of required information at compile-time.

## The Problem `certain_map` Solves

When building modular services, it's common to have indirect data dependencies between components. For example, a downstream service may require information that was originally provided in an upstream request, but the intermediate services don't directly use that data. Traditionally, this would involve passing all potentially relevant data through the request/response types, which can quickly become unwieldy and error-prone.

Alternatively, you might use a `HashMap` to manage the context data, but this approach has a significant drawback: you cannot ensure at compile-time that the required key-value pairs have been set when the data is read. This can lead to unnecessary error handling branches or even panics in your program.
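To make the drawback concrete, here is a sketch of the dynamic-map approach (the `PeerAddr` type is hypothetical): every read is fallible, and a forgotten insert only shows up at runtime.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Hypothetical field type, standing in for data set by an upstream component.
struct PeerAddr(std::net::SocketAddr);

// A dynamically typed context: nothing guarantees a key was ever inserted.
type DynContext = HashMap<TypeId, Box<dyn Any>>;

fn peer_addr(ctx: &DynContext) -> Option<&PeerAddr> {
    // Every read needs a fallible lookup plus a downcast.
    ctx.get(&TypeId::of::<PeerAddr>())?.downcast_ref::<PeerAddr>()
}
```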

## How `certain_map` Helps

The `certain_map` crate solves this problem by providing a typed-map-like struct that ensures the existence of specific items at compile-time. When you define a `Context` struct using `certain_map`, the compiler will enforce that certain fields are present, preventing runtime errors and simplifying the implementation of your services.

Here's an example of how you might set up the context for your project:

```rust
certain_map::certain_map! {
    #[derive(Debug, Clone)]
    #[empty(EmptyContext)]
    #[full(FullContext)]
    pub struct Context {
        peer_addr: PeerAddr,
        remote_addr: Option<RemoteAddr>,
    }
}
```

In this example, the `Context` struct has two fields: `peer_addr` of type `PeerAddr`, and `remote_addr` of type `Option<RemoteAddr>`. The `#[empty(EmptyContext)]` and `#[full(FullContext)]` attributes define the type aliases for the empty and full versions of the context, respectively.
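As a rough sketch of how the generated types are used (assuming the `param_set`/`param_ref` methods from `certain_map`'s `ParamSet`/`ParamRef` traits, a generated `Context::new()` constructor, and hypothetical `PeerAddr`/`RemoteAddr` newtypes), filling the context upgrades its type step by step:

```rust
use certain_map::{ParamRef, ParamSet};

// Hypothetical field types matching the definition above.
#[derive(Debug, Clone)]
pub struct PeerAddr(pub std::net::SocketAddr);
#[derive(Debug, Clone)]
pub struct RemoteAddr(pub std::net::SocketAddr);

fn demo(addr: std::net::SocketAddr) {
    // Start from the empty context and attach the peer address. `param_set`
    // consumes the context and returns a new type that statically records
    // which fields are now present.
    let ctx = Context::new();
    let ctx = ctx.param_set(PeerAddr(addr));

    // Reading cannot fail: the compiler knows `PeerAddr` has been set.
    let peer: &PeerAddr = ctx.param_ref();
    println!("peer = {}", peer.0);
}
```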

The key benefits of using `certain_map` for your context management are:

1. **Compile-time Guarantees**: The compiler ensures that required fields are present in the `Context` before they are read, turning missing-data bugs into compile errors rather than runtime failures.

2. **Modularity and Composability**: By defining a clear context structure, you can more easily compose services together, as each service can specify the context data it requires using trait bounds.

3. **Flexibility**: The `certain_map` crate provides a set of traits (`ParamSet`, `ParamRef`, `ParamTake`, etc.) that allow you to easily manipulate the context data, such as adding, removing, or modifying fields.

4. **Reduced Boilerplate**: Instead of manually creating and managing structs to hold the context data, the `certain_map` crate generates the necessary code for you, reducing the amount of boilerplate in your project.

## Using `certain_map` in Your Services

Once you've defined your `Context` struct, you can use it in your services to ensure that the required data is available. For example, consider the following `UpstreamHandler` service:

```rust
impl<CX, B> Service<(Request<B>, CX)> for UpstreamHandler
where
    CX: ParamRef<PeerAddr> + ParamMaybeRef<Option<RemoteAddr>>,
    B: Body<Data = Bytes, Error = HttpError>,
    HttpError: From<B::Error>,
{
    type Response = ResponseWithContinue<HttpBody>;
    type Error = Infallible;

    async fn call(&self, (mut req, ctx): (Request<B>, CX)) -> Result<Self::Response, Self::Error> {
        add_xff_header(req.headers_mut(), &ctx);
        #[cfg(feature = "tls")]
        if req.uri().scheme() == Some(&http::uri::Scheme::HTTPS) {
            return self.send_https_request(req).await;
        }
        self.send_http_request(req).await
    }
}
```

In this example, the `UpstreamHandler` service expects the `Context` to contain the `PeerAddr` and optionally the `RemoteAddr`. The trait bounds `ParamRef<PeerAddr>` and `ParamMaybeRef<Option<RemoteAddr>>` ensure that these fields are available at compile-time, preventing potential runtime errors.

By using `certain_map` to manage your context data, you can improve the modularity, maintainability, and robustness of your service-oriented architecture.
115 changes: 115 additions & 0 deletions content/en/docs/monolake/Architecture/runtime.md
@@ -0,0 +1,115 @@
---
title: "Runtime"
linkTitle: "Runtime"
weight: 1
description: "Deep dive into Monolake's io_uring-based runtime and performance characteristics compared to traditional event-based runtimes"
---

## Runtime

In asynchronous Rust programs, a runtime serves as the backbone for executing asynchronous tasks. It manages the scheduling, execution, and polling of these tasks while handling I/O operations efficiently. A well-designed runtime is crucial for achieving optimal performance, particularly in I/O-bound workloads.

Monoio is a pure io_uring-based Rust asynchronous runtime designed to maximize efficiency and performance for I/O-bound tasks. By driving io_uring directly, Monoio stands apart from runtimes like Tokio-uring, which layer io_uring support on top of an existing runtime. This direct integration allows Monoio to take full advantage of the kernel's asynchronous I/O interface, resulting in higher throughput and reduced latency.

## Thread-Per-Core Model

One of the defining characteristics of Monoio is its thread-per-core architecture. Each core of the CPU runs a dedicated thread, allowing the runtime to avoid the complexities associated with shared data across multiple threads. This design choice means that users do not need to worry about whether their tasks implement Send or Sync, as data does not escape the thread at await points. This significantly simplifies concurrent programming. In contrast, Tokio utilizes a multi-threaded work-stealing scheduler. In this model, tasks can be migrated between threads, introducing complexities related to synchronization and data sharing. For example, a task scheduled in Tokio might be executed on any available thread, leading to potential context switching overhead.
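The following sketch shows what this looks like in practice: one Monoio runtime per thread, with tasks freely capturing non-`Send` data such as `Rc`. It assumes a Linux host with io_uring support and is illustrative rather than a production setup (no core pinning, fixed thread count).

```rust
use std::rc::Rc;

fn main() {
    // One runtime per thread; a real deployment would spawn one thread per
    // core and usually pin each thread to its core.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            std::thread::spawn(|| {
                let mut rt = monoio::RuntimeBuilder::<monoio::IoUringDriver>::new()
                    .build()
                    .expect("failed to build monoio runtime");
                rt.block_on(async {
                    // `Rc` is neither Send nor Sync, yet spawned tasks can
                    // capture it: tasks never leave the thread they start on.
                    let shared = Rc::new(42u32);
                    let task = monoio::spawn({
                        let shared = shared.clone();
                        async move { *shared * 2 }
                    });
                    assert_eq!(task.await, 84);
                });
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```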

## Event Notification vs. Completion-Based Runtimes

When working with asynchronous I/O in Rust, understanding the underlying mechanisms of different runtimes is crucial. Two prominent approaches are completion-based, io_uring-backed runtimes (Monoio) and traditional event-notification runtimes (Tokio, async-std), which rely on readiness mechanisms like kqueue and epoll. The fundamental difference between these two models lies in how they manage resource ownership and I/O operations.

io_uring operates on a submission-based model, where the ownership of resources (such as buffers) is transferred to the kernel upon submission of an I/O request. This model allows for high performance and reduced context switching, as the kernel can process the requests asynchronously. When an I/O operation is completed, the ownership of the buffers is returned to the caller. This ownership transfer leads to several implications:

1. **Ownership Semantics**: In io-uring, since the kernel takes ownership of the buffers during the operation, it allows for more efficient memory management. The caller does not need to manage the lifecycle of the buffers while the operation is in progress.

2. **Concurrency Model**: The submission-based model allows for a more straightforward handling of concurrency, as multiple I/O operations can be submitted without waiting for each to complete. This can lead to improved throughput, especially in I/O-bound applications.

In contrast, Tokio relies on readiness-based mechanisms like kqueue and epoll. In this model, the application retains ownership of its buffers throughout the lifetime of the I/O operation. Instead of taking ownership, Tokio merely borrows the buffers, which has several implications:

1. **Buffer Management**: Since Tokio borrows buffers, the application is responsible for managing their lifecycle. This can introduce complexity, especially when dealing with concurrent I/O operations, as developers must ensure that buffers are not inadvertently reused while still in use.

2. **Polling Mechanism**: The readiness model in Tokio requires the application to wait for readiness notifications and then perform the I/O call itself, which can result in more syscalls and context switches and potentially less efficient use of system resources than the submission-based model of io_uring (see the sketch below).
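For contrast, here is a minimal sketch of the borrowed-buffer style used with Tokio's `AsyncReadExt` extension trait; the caller owns the buffer for the whole operation.

```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

async fn read_once(stream: &mut TcpStream) -> std::io::Result<usize> {
    // The caller keeps ownership of the buffer; the runtime only borrows it
    // while polling the read, so it must stay alive (and not be reused) until
    // the operation resolves.
    let mut buf = vec![0u8; 4096];
    stream.read(&mut buf).await
}
```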

## Async I/O Trait Divergence

Due to these fundamental differences in how I/O operations are managed, the async I/O traits for Tokio and Monoio diverge significantly. Tokio's APIs are built around futures and asynchronous borrowing, while Monoio's io_uring APIs follow a submission and completion model that emphasizes ownership transfer. In Tokio's read/write traits, buffers are borrowed or mutably borrowed; in Monoio's async traits, ownership of the buffers is transferred and then returned to the caller when the operation completes.

To achieve this level of efficiency, Monoio relies on certain unstable Rust features and introduces a new I/O abstraction that is not compatible with Tokio's async I/O traits, the de facto standard in Rust. This abstraction is expressed through the AsyncReadRent and AsyncWriteRent traits:

<div class="code-compare">
<div class="code-block">
<h4>Monoio traits</h4>
{{< highlight rust >}}
pub trait AsyncWriteRent {
    // Required methods
    fn write<T: IoBuf>(
        &mut self,
        buf: T
    ) -> impl Future<Output = BufResult<usize, T>>;
    fn writev<T: IoVecBuf>(
        &mut self,
        buf_vec: T
    ) -> impl Future<Output = BufResult<usize, T>>;
    fn flush(&mut self) -> impl Future<Output = Result<()>>;
    fn shutdown(&mut self) -> impl Future<Output = Result<()>>;
}

pub trait AsyncReadRent {
    // Required methods
    fn read<T: IoBufMut>(
        &mut self,
        buf: T
    ) -> impl Future<Output = BufResult<usize, T>>;
    fn readv<T: IoVecBufMut>(
        &mut self,
        buf: T
    ) -> impl Future<Output = BufResult<usize, T>>;
}
{{< /highlight >}}
</div>
<div class="code-block">
<h4>Tokio traits</h4>
{{< highlight rust >}}
pub trait AsyncRead {
    // Required method
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>
    ) -> Poll<Result<()>>;
}

pub trait AsyncWrite {
    // Required methods
    fn poll_write(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8]
    ) -> Poll<Result<usize, Error>>;
    fn poll_flush(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>
    ) -> Poll<Result<(), Error>>;
    fn poll_shutdown(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>
    ) -> Poll<Result<(), Error>>;
}
{{< /highlight >}}
</div>
</div>
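To see what the rent-style traits look like at a call site, here is a minimal echo-style sketch using Monoio's `AsyncReadRent` and `AsyncWriteRentExt`. The address is illustrative and assumes something is listening locally.

```rust
use monoio::io::{AsyncReadRent, AsyncWriteRentExt};
use monoio::net::TcpStream;

#[monoio::main]
async fn main() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await.unwrap();

    // Ownership of `buf` moves into the runtime/kernel for the duration of
    // the read and is handed back together with the result on completion.
    let buf = vec![0u8; 4096];
    let (res, buf) = stream.read(buf).await;
    let n = res.unwrap();

    // Write the received bytes back; `write_all` follows the same contract.
    let (res, _buf) = stream.write_all(buf[..n].to_vec()).await;
    res.unwrap();
}
```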

<style>
.code-compare {
    display: grid;
    grid-template-columns: 1fr 1fr;
    gap: 1rem;
}

.code-block {
    min-width: 0;
}

.code-block h4 {
    margin-top: 0;
    margin-bottom: 0.5rem;
}
</style>