19 changes: 10 additions & 9 deletions README.md
@@ -16,15 +16,16 @@ in all of this that Nephio is about managing complex, inter-related workloads
like Helm charts and scripts are sufficient. Similarly, if we want to deploy
some infrastructure, then using existing Infrastructure-as-Code tools can
accomplish that. Configuring running network functions can already be done today
with element managers.
with element managers.

So, why do we need Nephio? The problems Nephio wants to solve start only
once we try to operate at scale. "Scale" here does not simply mean "large number
of sites". It can be across many different dimensions: number of sites, number
of services, number of workloads, size of the individual workloads, number of
machines needed to operate the workloads, complexity of the organization running
the workloads, and other factors. The fact that our infrastructure, workloads,
and the workload configurations are all interconnected dramatically increases
the difficulty in managing these architectures at scale.

To address these challenges, Nephio follows a [few basic
principles](https://cloud.google.com/blog/topics/telecommunications/network-automation-csps-linus-nephio-cloud-native)
2 changes: 1 addition & 1 deletion install-guide/PackageTransformations.md
@@ -242,8 +242,8 @@ nephio-example-packages-dc0b55fb7a17d107e834417a2c9d8fb37f36d7cb vlanindex

</details>

<details>
<summary>To see the versions of a particular package:</summary>

```
$ kpt alpha rpkg get --name nephio-workload-cluster
223 changes: 186 additions & 37 deletions install-guide/README.md
@@ -1,38 +1,187 @@
# Demonstration Environment Installation

## Table of Contents

- [Introduction](#introduction)
- [Installing on GCE](#installing-on-gce)
  - [GCE Prerequisites](#gce-prerequisites)
  - [Create a Virtual Machine on GCE](#create-a-virtual-machine-on-gce)
  - [Follow installation on GCE](#follow-installation-on-gce)
- [Installing on a pre-provisioned VM](#installing-on-a-pre-provisioned-vm)
  - [VM Prerequisites](#vm-prerequisites)
  - [Kick off installation on VM](#kick-off-installation-on-vm)
  - [Follow installation on VM](#follow-installation-on-vm)
- [Access to the User Interfaces](#access-to-the-user-interfaces)
- [Open terminal](#open-terminal)

## Introduction

This installation guide will get you up and running with a Nephio demonstration
environment. This environment is a single VM that will be used in the exercises
to simulate a topology with a Nephio management cluster, a regional workload
cluster, and two edge workload clusters.


## Installing on GCE

### GCE Prerequisites

You need an account in GCP and the `gcloud` CLI available in your local
environment.

### Create a Virtual Machine on GCE

```bash
gcloud compute instances create --machine-type e2-standard-8 \
    --boot-disk-size 200GB \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --metadata=startup-script-url=https://raw.githubusercontent.com/nephio-project/test-infra/main/e2e/provision/init.sh \
    nephio-r1-e2e
```
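To confirm the VM came up, you can check its status (a quick sketch; it assumes
your default project and zone are configured, otherwise add `--project` and
`--zone` flags):

```shell
# Print the instance status; a healthy instance reports RUNNING.
gcloud compute instances describe nephio-r1-e2e --format='value(status)'
```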

### Follow installation on GCE

If you want to watch the progress of the installation, give the VM about 30
seconds to reach a network-accessible state, then ssh in and tail the startup
script execution:

Googlers (you also need to run `gcert`):
```bash
gcloud compute ssh ubuntu@nephio-r1-e2e -- \
    -o ProxyCommand='corp-ssh-helper %h %p' \
    sudo journalctl -u google-startup-scripts.service --follow
```

Everyone else:
```bash
gcloud compute ssh ubuntu@nephio-r1-e2e -- \
    sudo journalctl -u google-startup-scripts.service --follow
```

## Installing on a pre-provisioned VM

This install has been verified on VMs running on vSphere, OpenStack, AWS, and
Azure.

### VM Prerequisites

Order or create a VM with the following specification:

- Linux Flavour: Ubuntu-20.04-focal
- 8 cores
- 32 GB memory
- 200 GB disk size
- a default user with passwordless sudo permissions

**Configure a route for Kubernetes**

In some installations, the IP range used by Kubernetes in the sandbox can clash
with the IP range used by your VPN. In such cases, the VM becomes unreachable
during the sandbox installation. If you hit this situation, add the route below
on your VM.

Log onto your VM and run the following commands,
replacing **\<interface-name\>** and **\<interface-gateway-ip\>** with your
VM's values:

```bash
sudo bash -c 'cat << EOF > /etc/netplan/99-cloud-init-network.yaml
network:
  ethernets:
    <interface-name>:
      routes:
        - to: 172.18.2.6/32
          via: <interface-gateway-ip>
          metric: 100
  version: 2
EOF'

sudo netplan apply
```
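You can then check which route the kernel will use for the sandbox address (an
illustrative check; `172.18.2.6` matches the route added above):

```shell
# The reported route should go via the gateway you configured,
# not via the VPN interface.
ip route get 172.18.2.6
```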

### Kick off installation on VM

Log onto your VM and run the following command:

```bash
wget -O - https://raw.githubusercontent.com/nephio-project/test-infra/main/e2e/provision/init.sh | \
    sudo NEPHIO_DEBUG=false \
    NEPHIO_USER=ubuntu \
    bash
```

The following environment variables can be used to configure the installation:

| Variable               | Values           | Default Value      | Description                                                                  |
| ---------------------- | ---------------- | ------------------ | ---------------------------------------------------------------------------- |
| NEPHIO_USER            | userid           | ubuntu             | The user to install the sandbox on (must have passwordless sudo permissions) |
| NEPHIO_DEBUG           | false or true    | false              | Controls debug output from the install                                       |
| NEPHIO_HOME            | path             | /home/$NEPHIO_USER | The directory to check out the install scripts into                          |
| NEPHIO_DEPLOYMENT_TYPE | r1 or one-summit | r1                 | Controls the type of installation to be carried out                          |
| RUN_E2E                | false or true    | false              | Specifies whether end-to-end tests should be executed                        |
| NEPHIO_REPO            | URL              | https://github.com/nephio-project/test-infra.git | URL of the repository to be used for installation              |
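As a rough sketch of how such variables typically take effect — this
illustrates bash's `${VAR:-default}` expansion and is an assumption about how
`init.sh` handles them, not its actual contents:

```shell
# Hypothetical reconstruction of the installer's default handling:
# unset variables fall back to the documented defaults.
NEPHIO_USER="${NEPHIO_USER:-ubuntu}"
NEPHIO_DEBUG="${NEPHIO_DEBUG:-false}"
NEPHIO_HOME="${NEPHIO_HOME:-/home/${NEPHIO_USER}}"
NEPHIO_DEPLOYMENT_TYPE="${NEPHIO_DEPLOYMENT_TYPE:-r1}"
RUN_E2E="${RUN_E2E:-false}"
echo "user=${NEPHIO_USER} home=${NEPHIO_HOME} type=${NEPHIO_DEPLOYMENT_TYPE}"
```

Setting, for example, `NEPHIO_DEBUG=true RUN_E2E=true` in the environment of
the `sudo ... bash` invocation overrides just those two values and leaves the
rest at their defaults.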

### Follow installation on VM

The installation output appears on the terminal where you kicked it off.

To follow along in more detail, log onto your VM using ssh in another terminal
and use *docker* and *kubectl* to monitor the installation.
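For example (typical commands; exact resource names will vary while the
sandbox converges):

```shell
# Watch the KIND node containers come up and the cluster components start.
docker ps             # one container per KIND cluster node
kubectl get nodes     # nodes should eventually report Ready
kubectl get pods -A   # pods across all namespaces; rerun, or add --watch
```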

## Access to the User Interfaces

Once the installation completes, ssh in and port-forward the Web UI port
(7007) and, if you want it, Gitea's HTTP port (3000):

Googlers (you also need to run `gcert`):

```bash
gcloud compute ssh ubuntu@nephio-r1-e2e -- \
    -o ProxyCommand='corp-ssh-helper %h %p' \
    -L 7007:localhost:7007 \
    -L 3000:172.18.0.200:3000 \
    kubectl port-forward --namespace=nephio-webui svc/nephio-webui 7007
```

Others using GCE:

```bash
gcloud compute ssh ubuntu@nephio-r1-e2e -- \
    -L 7007:localhost:7007 \
    -L 3000:172.18.0.200:3000 \
    kubectl port-forward --namespace=nephio-webui svc/nephio-webui 7007
```

Others on VMs:

```bash
ssh <user>@<vm-address> \
    -L 7007:localhost:7007 \
    -L 3000:172.18.0.200:3000 \
    kubectl port-forward --namespace=nephio-webui svc/nephio-webui 7007
```

You can now navigate to:
- [http://localhost:7007/config-as-data](http://localhost:7007/config-as-data) to
browse the Nephio Web UI
- [http://localhost:3000/nephio](http://localhost:3000/nephio) to browse the Gitea UI
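If a page does not load, a quick reachability check from your local machine
(assumes the port-forwarding ssh session above is still running):

```shell
# Both should print an HTTP status code rather than a connection error.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:7007/config-as-data
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/nephio
```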

## Open terminal

You probably want a second ssh window open to run `kubectl` commands, etc.,
without the port forwarding (opening a second ssh connection with the same
forwarding options would fail).

Googlers:

```bash
gcloud compute ssh ubuntu@nephio-r1-e2e -- -o ProxyCommand='corp-ssh-helper %h %p'
```

Others on GCE:

```bash
gcloud compute ssh ubuntu@nephio-r1-e2e
```
Others on VMs:

```bash
ssh <user>@<vm-address>
```
File renamed without changes.
61 changes: 39 additions & 22 deletions release-notes/README.md
@@ -4,19 +4,25 @@

## Prerequisites

Refer to [install
guide](https://github.com/nephio-project/docs/blob/main/install-guide/README.md)
for the prerequisites on supported environments.

## Support Matrix

The sandbox environment requires a physical or virtual machine with:
- Linux Flavour: Ubuntu-20.04-focal
- 8 cores
- 32 GB memory
- 200 GB disk size
- a default user with passwordless sudo permissions

This install has been verified on VMs running on Google Cloud, OpenStack, AWS,
vSphere, and Azure. It has been verified on Vagrant VMs running on Windows and
Linux.

For non-sandbox installations, any conforming Kubernetes cluster is sufficient
for the management cluster.

## Features

@@ -37,39 +43,50 @@ Basic web UI to view and manage packages and the resources within them.

### Functionalities

* Create Kubernetes clusters. This functionality is based on Cluster API. At
this time, only KIND cluster creation is supported.
* Fully automated deployment of the UPF, SMF, and AMF services of
[free5GC](https://free5gc.org/). These are deployed on multiple clusters
based on the user's intent expressed via CRDs.
* Inter-cluster networking setup.
* Deployment of other free5gc functions.
* Automatic scale-up of the UPF, SMF, and AMF services based on changes to
capacity requirements expressed as user intent.

## Limitations

* In terms of infrastructure automation, only creation of KIND clusters is
supported.
* Deployment of free5gc functions other than SMF, UPF and AMF may need some
manual configuration such as IP addresses.
* Inter-cluster networking is not dynamic, so as more clusters are deployed,
some manual tweaks will be needed for inter-cluster communications.
* Provisioning of VLAN interfaces on nodes is manual at this time.
* Feedback of workload deployments from workload clusters to the management
cluster is limited. You may need to directly connect to the workload cluster
using kubectl to debug the deployment issues.
* Web UI features are limited to view/edit of packages and resources in those
packages, and the deployment of those packages. More features will be added
in subsequent releases.
* When the capacity of UPF, SMF, or AMF is changed, the free5gc operator on the
workload cluster will instantiate a new pod with correspondingly modified
resources (CPU, memory, etc.). The pod restarts during this process; this is a
limitation of free5gc.
* Only Gitea works with automated cluster provisioning to create new
repositories and join them to Nephio. To use a different Git provider, you
must manually provision cluster repositories, register them to the Nephio
management server, and set up Config Sync on the workload cluster.
* The WebUI does not require authentication in the current demo configuration.
Testing of the WebUI with authentication configured has not been done at this
time.
* The WebUI only shows resources in the default namespace.
* While many types of Git authentication are supported, the testing was only
done with token-based Git authentication in Gitea.

## Known Issues and Workarounds

* When deploying the sandbox environment on an Ubuntu VM running on OpenStack,
the deployment may fail. Reinstall the packages to get around this issue.
* Occasionally packages may take a long time to be approved by the auto-approval
controller.
* Occasionally calls to `kpt alpha rpkg copy` may fail with a message like
`Error: Internal error occurred: error applying patch: conflict: fragment line
does not match src line`. Try again in a little while; this may clear up on
its own.
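A simple retry loop can paper over this transient failure (a sketch;
`<package-name>` and `<namespace>` are placeholders for your own values):

```shell
# Retry the flaky copy a few times with a pause between attempts.
# <package-name> and <namespace> are placeholders, not real values.
for attempt in 1 2 3; do
    if kpt alpha rpkg copy <package-name> -n <namespace>; then
        break
    fi
    echo "rpkg copy failed (attempt ${attempt}), retrying in 30s" >&2
    sleep 30
done
```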
3 changes: 2 additions & 1 deletion user-guide/README.md
@@ -45,7 +45,6 @@ standards.

![nephio-overview.png](nephio-overview.png)


## Overview of underlying technologies

### Custom Resources and Controllers
@@ -196,6 +195,8 @@ propagated via controlled automation down the tree.
### API

CRDs are provided for the UPF, SMF, and AMF 5G core services.
Specialization CRDs are provided for integrating with IP address and VLAN
allocation backends.

### Web UI
