Merged
README.md: 2 additions, 0 deletions

```diff
@@ -121,6 +121,7 @@ it is manageable via Nephio.
 Please use the documentation links below to learn more about Nephio, or check out our [Learning Nephio](https://wiki.nephio.org/display/HOME/Learning+with+Nephio+R1) series. The video series includes a [demo video](https://youtu.be/mFl71sy2Pdc) and short articles about different aspects of Nephio.
 
 ## User Documentation
+
 * [Release Notes for each Nephio release](https://docs.nephio.org/docs/release-notes/)
 * [Demo Sandbox Environment Installation](https://docs.nephio.org/docs/guides/install-guides/)
 * [User Guide](https://docs.nephio.org/docs/guides/user-guides/)
@@ -131,3 +132,4 @@ Please use the documentation links below to learn more about Nephio, or check ou
 * [Developer Guide](https://docs.nephio.org/docs/guides/contributor-guides/)
 * [Project Resources](https://github.com/nephio-project/docs/blob/main/resources.md)
 
+
```
content/en/docs/_index.md: 66 additions, 64 deletions

```diff
@@ -5,89 +5,91 @@ menu: {main: {weight: 20}}
 weight: 1
 ---
 
-Our mission is "to deliver carrier-grade, simple, open, Kubernetes-based cloud
-native intent automation and common automation templates that materially
-simplify the deployment and management of multi-vendor cloud infrastructure and
-network functions across large scale edge deployments." But what does that mean?
+Our mission is "to deliver carrier-grade, simple, open, Kubernetes-based
+cloud-native intent automation and common automation templates that materially
+simplify the deployment and management of multivendor cloud infrastructure and
+network functions across large-scale edge deployments." But what does that mean?
 With this release and the accompanying documentation, we hope to make that
 clear.
 
-The mission outlines the basic goals and the [About Nephio
+Our mission outlines the basic goals. The [About Nephio
 page](https://nephio.org/about/) describes the high-level architecture of
 Nephio. It is important to understand that Nephio is about managing complex,
-inter-related workloads *at scale*. If we simply want to deploy a network
-function, existing methods like Helm charts and scripts are sufficient.
-Similarly, if we want to deploy some infrastructure, then using existing
-Infrastructure-as-Code tools can accomplish that. Configuring running network
-functions can already be done today with element managers.
+interrelated workloads *at scale*. If we simply want to deploy a network
+function, then existing methods, such as Helm charts and scripts, are
+sufficient. Similarly, if we want to deploy some infrastructure, then using
+existing Infrastructure-as-Code tools can accomplish that. Configuring running
+network functions can already be done today with element managers.
 
-So, why do we need Nephio? The problems Nephio wants to solve start only
-once we try to operate at scale. "Scale" here does not simply mean "large number
-of sites". It can be across many different dimensions: number of sites, number
-of services, number of workloads, size of the individual workloads, number of
-machines needed to operate the workloads, complexity of the organization running
-the workloads, and other factors. The fact that our infrastructure, workloads,
-and workload configurations are all interconnected dramatically increases
-the difficulty in managing these architectures at scale.
+Why do we need Nephio? The problems Nephio aims to solve start only
+once we try to operate at scale. "Scale", in this case, does not simply mean
+"a large number of sites". "Scale" can encompass many different areas: the
+number of sites, services, and workloads, the size of each individual workload,
+the number of machines needed to operate the workloads, the complexity of the
+organization running the workloads, and other factors. The fact that our
+infrastructure, workloads, and workload configurations are all interconnected
+greatly increases the difficulty in managing these architectures at scale.
 
 To address these challenges, Nephio follows a [few basic
 principles](https://cloud.google.com/blog/topics/telecommunications/network-automation-csps-linus-nephio-cloud-native)
-that experience has shown to enable higher scaling with less management overhead:
-- *Intent-driven* to enable the user to specify "what they want" and let the
-system figure out "how to make that happen".
-- *Distributed actuation* to increase reliability across widely distributed
-fleets.
-- *Uniformity in systems* to reduce redundant tooling and processes, and
-simplify operations.
+that experience has shown to enable higher scaling with lower management
+overhead. These principles are as follows:
+- It is *intent-driven*, enabling the user to specify what they want and
+letting the system figure out how to make that happen.
+- It has *distributed actuation* to increase reliability across widely
+distributed fleets.
+- It has *uniformity in systems* to reduce redundant tooling and processes,
+and to simplify operations.
 
-Additionally, Nephio leverages the "configuration as data" principle. This
+Nephio also leverages the "configuration as data" principle. This
 methodology means that the "intent" is captured in a machine-manageable format
 that we can treat as data, rather than code. In Nephio, we use the Kubernetes
-Resource Model (KRM) to capture intent. As Kubernetes itself is already an
+Resource Model (KRM) to capture the intent. As Kubernetes itself is already an
 intent-driven system, this model is well suited to our needs.
 
-To understand why we need to treat configuration as data, let's consider an
-example. In a given instance, a network function may have, say, 100 parameters
-that need to be decided upon. When we have 100 such network functions, across
-10,000 clusters, this results in 100,000,000 inputs we need to define and
-manage. Handling that sheer number of values, with their interdependencies and a need
-for consistency management between them, requires *data management* techniques,
-not *code* management techniques. This is why existing methodologies begin to
-break down at scale, particular edge-level scale.
+To understand why we need to treat configuration as data, let us consider an
+example. In a given instance, a network function may have, for example, 100
+parameters that need to be decided upon. 100 such network functions, across
+10,000 clusters, results in 100,000,000 inputs that need to be defined and
+managed. Handling such a large number of values, with their interdependencies
+and a need for consistency management between them, requires *data management*
+techniques, rather than *code* management techniques. This is why existing
+methodologies begin to break down at scale, particularly at the edge level.
 
-Consider as well that no single human will understand all of those values. Those
-values relate not only to workloads, but also to the infrastructure we need to
-support those workloads. They require different areas of expertise and different
-organizational boundaries of control. For example, you will need input from
-network planning (IP address, VLAN tags, ASNs, etc.), input from
-compute infrastructure teams (types of hardware or VMs available, OS available),
-Kubernetes platform teams, security teams, network function experts, and many,
-many other individuals and teams. Each of those teams will have their own
-systems for tracking the values they control, and processes for allocating and
-distributing those values. This coordination between teams is a fundamental
-*organizational* problem with operating at scale. The existing tools and methods
-do not even attempt to address these parts of the problem; they *start* once all
-of the "input" decisions are made.
+It should also be considered that no individual human will be able to understand
+all of these values. These values relate not only to the workloads, but also to
+the infrastructure that is required to support the workloads. They require
+different areas of expertise and different organizational boundaries of control.
+For example, you will need input from network planning (IP addresses, VLAN tags,
+ASNs, and so on), input from the compute infrastructure teams (types of hardware,
+or available VMs or OSs), the Kubernetes platform teams, the security teams, the
+network function experts, and many other individuals and teams. Each of these
+teams will have their own systems for tracking the values they control, as well
+as processes for allocating and distributing those values. This coordination
+between teams is a fundamental *organizational* problem with operating at scale.
+The existing tools and methods do not even attempt to address these parts of the
+problem. They *start* once all of the "input" decisions are made.
 
-The Nephio project believes the organizational challenge around figuring out
-these values is actually one of the primary limiting factors to achieving
-efficient management of large, complex systems at scale. This gets even harder
-when we realize that we need to manage changes to these values over time, and
-understand how changes to some values implies the need to change other values.
-This challenge is currently left to ad hoc processes that differ across
+The Nephio project believes that the organizational challenge of figuring out
+these values is one of the primary limiting factors to achieving the efficient
+management of large, complex systems at scale. This challenge becomes even
+greater when we understand the need to manage the changes to these values over
+time, and how changes to some values imply the need to change other values.
+This challenge is currently left to ad hoc processes that differ across
 organizations. Nephio is working on how to structure the intent to make it
 manageable using data management techniques.
 
-This release of Nephio focuses:
-- Demonstrating the core Nephio principles such as Configuration-as-Data and
+This Nephio release focuses on the following:
+- Demonstrating the core Nephio principles, such as Configuration-as-Data and
 leveraging the intent-driven, active-reconciliation nature of Kubernetes.
 - Infrastructure orchestration/automation using controllers based on
-the Cluster API. At this time only KIND cluster creation is supported.
-- Orchestration/automation of 5G Core and RAN network functions deployment and
-management.
+the Cluster API. Currently, only Kubernetes in Docker (KIND) cluster creation
+is supported.
+- Orchestration/automation of the deployment and management of 5G Core and RAN
+network functions.
 
-While the current releases uses Cluster API, KIND, and free5gc/OAI for demonstration
-purposes, the exact same principles and even code can be used for managing other
-infrastructure and network functions. The *uniformity in systems* principle
-means that as long as something is manageable via the Kubernetes Resource Model,
-it is manageable via Nephio.
+While the current release uses Cluster API, KIND, and free5gc/OAI for
+demonstration purposes, the same principles and code can be used for managing
+other infrastructure and network functions. The *uniformity in systems*
+principle means that as long as something is manageable via the Kubernetes
+Resource Model, it is manageable via Nephio.
```
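
The 100,000,000 figure in the configuration-as-data example above is simple multiplication across the three dimensions the text names. A minimal sketch, using only the illustrative numbers from the text (not measurements from any real deployment):

```python
# Illustrative scale arithmetic from the configuration-as-data example.
# All numbers come from the document's hypothetical, not a real deployment.
params_per_network_function = 100    # parameters to decide per NF instance
network_functions_per_cluster = 100  # NF instances per cluster
clusters = 10_000                    # e.g. edge sites in the fleet

total_inputs = (params_per_network_function
                * network_functions_per_cluster
                * clusters)

print(total_inputs)  # -> 100000000
```

The point of the sketch is that the total grows multiplicatively: doubling any single dimension doubles the number of values to define and keep consistent, which is why the document argues for data management rather than code management techniques.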