diff --git a/OWNERS b/OWNERS index 0258c32f387a7..02a5269ff2efe 100644 --- a/OWNERS +++ b/OWNERS @@ -1,7 +1,6 @@ # Reviewers can /lgtm /approve but not sufficient for auto-merge without an # approver reviewers: -- tengqm - zhangxiaoyu-zidif - xiangpengzhao # Approvers have all the ability of reviewers but their /approve makes @@ -15,3 +14,4 @@ approvers: - zacharysarah - chenopis - mistyhacks +- tengqm diff --git a/content/en/_index.html b/content/en/_index.html index abeb666558a2f..f957d245bbb36 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -42,12 +42,17 @@
Sam Ghods, Co-Founder and Services Architect at Box, gives a passionate talk at KubeCon Seattle 2016 showing that with Kubernetes, we have for the first time a universal interface that one can build real deployment tooling against.
+By Sarah Wells, Technical Director for Operations and Reliability, Financial Times
+— Xinwei, Staff Engineer at Alibaba Cloud
+ +# Architecture Improvements +The Kubernetes containerd integration architecture has evolved twice. Each evolution has made the stack more stable and efficient. + +## Containerd 1.0 - CRI-Containerd (end of life) + + +For containerd 1.0, a daemon called cri-containerd was required to operate between Kubelet and containerd. Cri-containerd handled the [Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) service requests from Kubelet and used containerd to manage containers and container images correspondingly. Compared to the Docker CRI implementation ([dockershim](https://github.com/kubernetes/kubernetes/tree/v1.10.2/pkg/kubelet/dockershim)), this eliminated one extra hop in the stack. + +However, cri-containerd and containerd 1.0 were still 2 different daemons which interacted via grpc. The extra daemon in the loop made it more complex for users to understand and deploy, and introduced unnecessary communication overhead. + +## Containerd 1.1 - CRI Plugin (current) + + +In containerd 1.1, the cri-containerd daemon is now refactored to be a containerd CRI plugin. The CRI plugin is built into containerd 1.1, and enabled by default. Unlike cri-containerd, the CRI plugin interacts with containerd through direct function calls. This new architecture makes the integration more stable and efficient, and eliminates another grpc hop in the stack. Users can now use Kubernetes with containerd 1.1 directly. The cri-containerd daemon is no longer needed. + +# Performance +Improving performance was one of the major focus items for the containerd 1.1 release. Performance was optimized in terms of pod startup latency and daemon resource usage. + +The following results are a comparison between containerd 1.1 and Docker 18.03 CE. The containerd 1.1 integration uses the CRI plugin built into containerd; and the Docker 18.03 CE integration uses the dockershim. + +The results were generated using the Kubernetes node performance benchmark, which is part of [Kubernetes node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-node-tests.md). Most of the containerd benchmark data is publicly accessible on the [node performance dashboard](http://node-perf-dash.k8s.io/). + +### Pod Startup Latency +The "105 pod batch startup benchmark" results show that the containerd 1.1 integration has lower pod startup latency than Docker 18.03 CE integration with dockershim (lower is better). + + + +### CPU and Memory +At the steady state, with 105 pods, the containerd 1.1 integration consumes less CPU and memory overall compared to Docker 18.03 CE integration with dockershim. The results vary with the number of pods running on the node, 105 is chosen because it is the current default for the maximum number of user pods per node. + +As shown in the figures below, compared to Docker 18.03 CE integration with dockershim, the containerd 1.1 integration has 30.89% lower kubelet cpu usage, 68.13% lower container runtime cpu usage, 11.30% lower kubelet resident set size (RSS) memory usage, 12.78% lower container runtime RSS memory usage. + + + +# crictl +Container runtime command-line interface (CLI) is a useful tool for system and application troubleshooting. When using Docker as the container runtime for Kubernetes, system administrators sometimes login to the Kubernetes node to run Docker commands for collecting system and/or application information. 
For example, one may use _docker ps_ and _docker inspect_ to check application process status, _docker images_ to list images on the node, and _docker info_ to identify container runtime configuration, etc. + +For containerd and all other CRI-compatible container runtimes, e.g. dockershim, we recommend using _crictl_ as a replacement CLI over the Docker CLI for troubleshooting pods, containers, and container images on Kubernetes nodes. + +_crictl_ is a tool providing a similar experience to the Docker CLI for Kubernetes node troubleshooting and _crictl_ works consistently across all CRI-compatible container runtimes. It is hosted in the [kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools) repository and the current version is [v1.0.0-beta.1](https://github.com/kubernetes-incubator/cri-tools/releases/tag/v1.0.0-beta.1). _crictl_ is designed to resemble the Docker CLI to offer a better transition experience for users, but it is not exactly the same. There are a few important differences, explained below. + +## Limited Scope - crictl is a Troubleshooting Tool +The scope of _crictl_ is limited to troubleshooting, it is not a replacement to docker or kubectl. Docker's CLI provides a rich set of commands, making it a very useful development tool. But it is not the best fit for troubleshooting on Kubernetes nodes. Some Docker commands are not useful to Kubernetes, such as _docker network_ and _docker build_; and some may even break the system, such as _docker rename_. _crictl_ provides just enough commands for node troubleshooting, which is arguably safer to use on production nodes. + +## Kubernetes Oriented +_crictl_ offers a more kubernetes-friendly view of containers. Docker CLI lacks core Kubernetes concepts, e.g. _pod_ and _[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)_, so it can't provide a clear view of containers and pods. One example is that _docker ps_ shows somewhat obscure, long Docker container names, and shows pause containers and application containers together: + + + +However, [pause containers](https://www.ianlewis.org/en/almighty-pause-container) are a pod implementation detail, where one pause container is used for each pod, and thus should not be shown when listing containers that are members of pods. + +_crictl_, by contrast, is designed for Kubernetes. It has different sets of commands for pods and containers. For example, _crictl pods_ lists pod information, and _crictl ps_ only lists application container information. All information is well formatted into table columns. + + + + +As another example, _crictl pods_ includes a _--namespace_ option for filtering pods by the namespaces specified in Kubernetes. + + + +For more details about how to use _crictl_ with containerd: + +* [Document](https://github.com/containerd/cri/blob/master/docs/crictl.md) +* [Demo video](https://asciinema.org/a/179047) + +# What about Docker Engine? +"Does switching to containerd mean I can't use Docker Engine anymore?" We hear this question a lot, the short answer is NO. + +Docker Engine is built on top of containerd. The next release of [Docker Community Edition (Docker CE)](https://www.docker.com/community-edition) will use containerd version 1.1. Of course, it will have the CRI plugin built-in and enabled by default. 
This means users will have the option to continue using Docker Engine for other purposes typical for Docker users, while also being able to configure Kubernetes to use the underlying containerd that came with and is simultaneously being used by Docker Engine on the same node. See the architecture figure below showing the same containerd being used by Docker Engine and Kubelet: + + + +Since containerd is being used by both Kubelet and Docker Engine, this means users who choose the containerd integration will not just get new Kubernetes features, performance, and stability improvements, they will also have the option of keeping Docker Engine around for other use cases. + +A containerd [namespace](https://github.com/containerd/containerd/blob/master/docs/namespaces.md) mechanism is employed to guarantee that Kubelet and Docker Engine won't see or have access to containers and images created by each other. This makes sure they won't interfere with each other. This also means that: + +* Users won't see Kubernetes created containers with the _docker ps_ command. Please use _crictl ps_ instead. And vice versa, users won't see Docker CLI created containers in Kubernetes or with _crictl ps_ command. The _crictl create_ and _crictl runp_ commands are only for troubleshooting. Manually starting pod or container with _crictl_ on production nodes is not recommended. +* Users won't see Kubernetes pulled images with the _docker images_ command. Please use the _crictl images_ command instead. And vice versa, Kubernetes won't see images created by _docker pull_, _docker load_ or _docker build_ commands. Please use the _crictl pull_ command instead, and _[ctr](https://github.com/containerd/containerd/blob/master/docs/man/ctr.1.md) cri load_ if you have to load an image. + +# Summary +* Containerd 1.1 natively supports CRI. It can be used directly by Kubernetes. +* Containerd 1.1 is production ready. +* Containerd 1.1 has good performance in terms of pod startup latency and system resource utilization. +* _crictl_ is the CLI tool to talk with containerd 1.1 and other CRI-conformant container runtimes for node troubleshooting. +* The next stable release of Docker CE will include containerd 1.1. Users have the option to continue using Docker for use cases not specific to Kubernetes, and configure Kubernetes to use the same underlying containerd that comes with Docker. + +We'd like to thank all the contributors from Google, IBM, Docker, ZTE, ZJU and many other individuals who made this happen! + +For a detailed list of changes in the containerd 1.1 release, please see the release notes here: [https://github.com/containerd/containerd/releases/tag/v1.1.0](https://github.com/containerd/containerd/releases/tag/v1.1.0) + +# Try it out +To setup a Kubernetes cluster using containerd as the container runtime: + +* For a production quality cluster on GCE brought up with kube-up.sh, see [here](https://github.com/containerd/cri/blob/v1.0.0/docs/kube-up.md). +* For a multi-node cluster installer and bring up steps using ansible and kubeadm, see [here](https://github.com/containerd/cri/blob/v1.0.0/contrib/ansible/README.md). +* For creating a cluster from scratch on Google Cloud, see [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way). +* For a custom installation from release tarball, see [here](https://github.com/containerd/cri/blob/v1.0.0/docs/installation.md). +* To install using LinuxKit on a local VM, see [here](https://github.com/linuxkit/linuxkit/tree/master/projects/kubernetes). 
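
Whichever installation route you pick, the node needs the kubelet and _crictl_ pointed at containerd's CRI socket. The flags and paths below are a minimal sketch assuming containerd 1.1 with its default socket location; adjust them to match whatever your installer actually sets up.

```
# Kubelet: use the remote runtime integration and point both the runtime
# and image services at containerd's socket (default path assumed).
kubelet --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --image-service-endpoint=unix:///run/containerd/containerd.sock

# crictl: point the CLI at the same socket, then inspect the pods,
# containers, and images managed through the CRI plugin.
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
sudo crictl pods
sudo crictl ps
sudo crictl images
```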
+ +# Contribute +The containerd CRI plugin is an open source github project within containerd [https://github.com/containerd/cri](https://github.com/containerd/cri). Any contributions in terms of ideas, issues, and/or fixes are welcome. The [getting started guide for developers](https://github.com/containerd/cri#getting-started-for-developers) is a good place to start for contributors. + +# Community +The project is developed and maintained jointly by members of the Kubernetes SIG-Node community and the containerd community. We'd love to hear feedback from you. To join the communities: + +* [sig-node community site](https://github.com/kubernetes/community/tree/master/sig-node) +* Slack: + * #sig-node channel in [kubernetes.slack.com](http://kubernetes.slack.com) + * #containerd channel in [https://dockr.ly/community](https://dockr.ly/community) +* Mailing List: [https://groups.google.com/forum/#!forum/kubernetes-sig-node](https://groups.google.com/forum/#!forum/kubernetes-sig-node) diff --git a/content/en/blog/_posts/2018-05-29-announcing-kustomize.md b/content/en/blog/_posts/2018-05-29-announcing-kustomize.md new file mode 100644 index 0000000000000..3ca0b54401ee8 --- /dev/null +++ b/content/en/blog/_posts/2018-05-29-announcing-kustomize.md @@ -0,0 +1,241 @@ +--- +layout: blog +title: Introducing kustomize; Template-free Configuration Customization for Kubernetes +date: 2018-05-29 +--- + +**Authors:** Jeff Regan (Google), Phil Wittrock (Google) + +[**kustomize**]: https://github.com/kubernetes-sigs/kustomize +[hello world]: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/helloWorld +[kustomization]: https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#kustomization +[mailing list]: https://groups.google.com/forum/#!forum/kustomize +[open an issue]: https://github.com/kubernetes-sigs/kustomize/issues/new +[subproject]: https://github.com/kubernetes/community/blob/master/keps/sig-cli/0008-kustomize.md +[SIG-CLI]: https://github.com/kubernetes/community/tree/master/sig-cli +[workflow]: https://github.com/kubernetes-sigs/kustomize/blob/master/docs/workflows.md + +If you run a Kubernetes environment, chances are you’ve +customized a Kubernetes configuration — you've copied +some API object YAML files and editted them to suit +your needs. + +But there are drawbacks to this approach — it can be +hard to go back to the source material and incorporate +any improvements that were made to it. Today Google is +announcing [**kustomize**], a command-line tool +contributed as a [subproject] of [SIG-CLI]. The tool +provides a new, purely *declarative* approach to +configuration customization that adheres to and +leverages the familiar and carefully designed +Kubernetes API. + +Here’s a common scenario. Somewhere on the internet you +find someone’s Kubernetes configuration for a content +management system. It's a set of files containing YAML +specifications of Kubernetes API objects. Then, in some +corner of your own company you find a configuration for +a database to back that CMS — a database you prefer +because you know it well. + +You want to use these together, somehow. Further, you +want to customize the files so that your resource +instances appear in the cluster with a label that +distinguishes them from a colleague’s resources who’s +doing the same thing in the same cluster. +You also want to set appropriate values for CPU, memory +and replica count. 
+ +Additionally, you’ll want *multiple variants* of the +entire configuration: a small variant (in terms of +computing resources used) devoted to testing and +experimentation, and a much larger variant devoted to +serving outside users in production. Likewise, other +teams will want their own variants. + +This raises all sorts of questions. Do you copy your +configuration to multiple locations and edit them +independently? What if you have dozens of development +teams who need slightly different variations of the +stack? How do you maintain and upgrade the aspects of +configuration that they share in common? Workflows +using **kustomize** provide answers to these questions. + +## Customization is reuse + +Kubernetes configurations aren't code (being YAML +specifications of API objects, they are more strictly +viewed as data), but configuration lifecycle has many +similarities to code lifecycle. + +You should keep configurations in version +control. Configuration owners aren’t necessarily the +same set of people as configuration +users. Configurations may be used as parts of a larger +whole. Users will want to *reuse* configurations for +different purposes. + +One approach to configuration reuse, as with code +reuse, is to simply copy it all and customize the +copy. As with code, severing the connection to the +source material makes it difficult to benefit from +ongoing improvements to the source material. Taking +this approach with many teams or environments, each +with their own variants of a configuration, makes a +simple upgrade intractable. + +Another approach to reuse is to express the source +material as a parameterized template. A tool processes +the template—executing any embedded scripting and +replacing parameters with desired values—to generate +the configuration. Reuse comes from using different +sets of values with the same template. The challenge +here is that the templates and value files are not +specifications of Kubernetes API resources. They are, +necessarily, a new thing, a new language, that wraps +the Kubernetes API. And yes, they can be powerful, but +bring with them learning and tooling costs. Different +teams want different changes—so almost every +specification that you can include in a YAML file +becomes a parameter that needs a value. As a result, +the value sets get large, since all parameters (that +don't have trusted defaults) must be specified for +replacement. This defeats one of the goals of +reuse—keeping the differences between the variants +small in size and easy to understand in the absence of +a full resource declaration. + +## A new option for configuration customization + +Compare that to **kustomize**, where the tool’s +behavior is determined by declarative specifications +expressed in a file called `kustomization.yaml`. + +The **kustomize** program reads the file and the +Kubernetes API resource files it references, then emits +complete resources to standard output. This text output +can be further processed by other tools, or streamed +directly to **kubectl** for application to a cluster. + +For example, if a file called `kustomization.yaml` +containing + +``` + commonLabels: + app: hello + resources: + - deployment.yaml + - configMap.yaml + - service.yaml +``` + +is in the current working directory, along with +the three resource files it mentions, then running + +``` +kustomize build +``` + +emits a YAML stream that includes the three given +resources, and adds a common label `app: hello` to +each resource. 
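
As a rough sketch of that output, suppose `service.yaml` contains a minimal Service named `the-service` whose selector is `deployment: hello`. The emitted copy of that Service would look something like the following; note that **kustomize** propagates common labels into selectors as well as into metadata.

```
apiVersion: v1
kind: Service
metadata:
  name: the-service
  labels:
    app: hello          # added by commonLabels
spec:
  selector:
    deployment: hello
    app: hello          # commonLabels are propagated into selectors too
  ports:
  - port: 8666
    targetPort: 8080
```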
+ +Similarly, you can use a *commonAnnotations* field to +add an annotation to all resources, and a *namePrefix* +field to add a common prefix to all resource +names. This trivial yet common customization is just +the beginning. + +A more common use case is that you’ll need multiple +variants of a common set of resources, e.g., a +*development*, *staging* and *production* variant. + +For this purpose, **kustomize** supports the idea of an +*overlay* and a *base*. Both are represented by a +kustomization file. The base declares things that the +variants share in common (both resources and a common +customization of those resources), and the overlays +declare the differences. + +Here’s a file system layout to manage a *staging* and +*production* variant of a given cluster app: + +``` + someapp/ + ├── base/ + │ ├── kustomization.yaml + │ ├── deployment.yaml + │ ├── configMap.yaml + │ └── service.yaml + └── overlays/ + ├── production/ + │ └── kustomization.yaml + │ ├── replica_count.yaml + └── staging/ + ├── kustomization.yaml + └── cpu_count.yaml +``` + +The file `someapp/base/kustomization.yaml` specifies the +common resources and common customizations to those +resources (e.g., they all get some label, name prefix +and annotation). + +The contents of +`someapp/overlays/production/kustomization.yaml` could +be + +``` + commonLabels: + env: production + bases: + - ../../base + patches: + - replica_count.yaml +``` + +This kustomization specifies a *patch* file +`replica_count.yaml`, which could be: + +``` + apiVersion: apps/v1 + kind: Deployment + metadata: + name: the-deployment + spec: + replicas: 100 +``` + +A patch is a partial resource declaration, in this case +a patch of the deployment in +`someapp/base/deployment.yaml`, modifying only the +*replicas* count to handle production traffic. + +The patch, being a partial deployment spec, has a clear +context and purpose and can be validated even if it’s +read in isolation from the remaining +configuration. It’s not just a context free *{parameter +name, value}* tuple. + +To create the resources for the production variant, run + +``` +kustomize build someapp/overlays/production +``` + +The result is printed to stdout as a set of complete +resources, ready to be applied to a cluster. A +similar command defines the staging environment. + +## In summary + +With **kustomize**, you can manage an arbitrary number +of distinctly customized Kubernetes configurations +using only Kubernetes API resource files. Every +artifact that **kustomize** uses is plain YAML and can +be validated and processed as such. kustomize encourages +a fork/modify/rebase [workflow]. + +To get started, try the [hello world] example. +For discussion and feedback, join the [mailing list] or +[open an issue]. Pull requests are welcome. diff --git a/content/en/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md b/content/en/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md new file mode 100644 index 0000000000000..23ad393515c59 --- /dev/null +++ b/content/en/blog/_posts/2018-05-30-say-hello-to-discuss-kubernetes.md @@ -0,0 +1,24 @@ +--- +layout: blog +title: Say Hello to Discuss Kubernetes +date: Wednesday, May 30, 2018 +--- + +**Author**: Jorge Castro (Heptio) + +Communication is key when it comes to engaging a community of over 35,000 people in a global and remote environment. Keeping track of everything in the Kubernetes community can be an overwhelming task. 
On one hand we have our official resources, like Stack Overflow, GitHub, and the mailing lists, and on the other we have more ephemeral resources like Slack, where you can hop in, chat with someone, and then go on your merry way. + +Slack is great for casual and timely conversations and keeping up with other community members, but communication can't be easily referenced in the future. Plus it can be hard to raise your hand in a room filled with 35,000 participants and find a voice. Mailing lists are useful when trying to reach a specific group of people with a particular ask and want to keep track of responses on the thread, but can be daunting with a large amount of people. Stack Overflow and GitHub are ideal for collaborating on projects or questions that involve code and need to be searchable in the future, but certain topics like "What's your favorite CI/CD tool" or "[Kubectl tips and tricks](https://discuss.kubernetes.io/t/kubectl-tips-and-tricks/192)" are offtopic there. + +While our current assortment of communication channels are valuable in their own rights, we found that there was still a gap between email and real time chat. Across the rest of the web, many other open source projects like Docker, Mozilla, Swift, Ghost, and Chef have had success building communities on top of [Discourse](https://www.discourse.org/features), an open source discussion platform. So what if we could use this tool to bring our discussions together under a modern roof, with an open API, and perhaps not let so much of our information fade into the ether? There's only one way to find out: Welcome to [discuss.kubernetes.io](https://discuss.kubernetes.io) + +![discuss_screenshot](/images/blog/2018-05-30-say-hello-to-discuss-kubernetes.png) + + +Right off the bat we have categories that users can browse. Checking and posting in these categories allow users to participate in things they might be interested in without having to commit to subscribing to a list. Granular notification controls allow the users to subscribe to just the category or tag they want, and allow for responding to topics via email. + +Ecosystem partners and developers now have a place where they can [announce projects](https://discuss.kubernetes.io/c/announcements) that they're working on to users without wondering if it would be offtopic on an official list. We can make this place be not just about core Kubernetes, but about the hundreds of wonderful tools our community is building. + +This new community forum gives people a place to go where they can discuss Kubernetes, and a sounding board for developers to make announcements of things happening around Kubernetes, all while being searchable and easily accessible to a wider audience. + +Hop in and take a look. We're just getting started, so you might want to begin by [introducing yourself](https://discuss.kubernetes.io/t/introduce-yourself-here/56) and then browsing around. Apps are also available for [Android ](https://play.google.com/store/apps/details?id=com.discourse&hl=en_US&rdid=com.discourse&pli=1)and [iOS](https://itunes.apple.com/us/app/discourse-app/id1173672076?mt=8). 
diff --git a/content/en/docs/concepts/architecture/cloud-controller.md b/content/en/docs/concepts/architecture/cloud-controller.md index 1d76435975b39..81b1012579692 100644 --- a/content/en/docs/concepts/architecture/cloud-controller.md +++ b/content/en/docs/concepts/architecture/cloud-controller.md @@ -244,7 +244,7 @@ rules: The following cloud providers have implemented CCMs: * Digital Ocean -* Oracle +* [Oracle](https://github.com/oracle/oci-cloud-controller-manager) * Azure * GCE * AWS diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index f15d9659404ae..9fd80a3837961 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -43,7 +43,7 @@ The `conditions` field describes the status of all `Running` nodes. | Node Condition | Description | |----------------|-------------| | `OutOfDisk` | `True` if there is insufficient free space on the node for adding new pods, otherwise `False` | -| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last 40 seconds | +| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) | | `MemoryPressure` | `True` if pressure exists on the node memory -- that is, if the node memory is low; otherwise `False` | | `PIDPressure` | `True` if pressure exists on the processes -- that is, if there are too many processes on the node; otherwise `False` | | `DiskPressure` | `True` if pressure exists on the disk size -- that is, if the disk capacity is low; otherwise `False` | diff --git a/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md index 5f6c7e167a644..489c26ffc40e7 100644 --- a/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -66,7 +66,7 @@ If you are using a guide involving Salt, see [Configuring Kubernetes with Salt]( ## Optional Cluster Services -* [DNS Integration with SkyDNS](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service. +* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service. * [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it. diff --git a/content/en/docs/concepts/cluster-administration/federation.md b/content/en/docs/concepts/cluster-administration/federation.md index 92467f70c6adf..45c05da8e5000 100644 --- a/content/en/docs/concepts/cluster-administration/federation.md +++ b/content/en/docs/concepts/cluster-administration/federation.md @@ -4,6 +4,9 @@ content_template: templates/concept --- {{% capture overview %}} + +{{< include "federation-current-state.md" >}} + This page explains why and how to manage multiple Kubernetes clusters using federation. 
{{% /capture %}} @@ -96,7 +99,7 @@ The following guides explain some of the resources in detail: * [Services](/docs/concepts/cluster-administration/federation-service-discovery/) -The [API reference docs](/docs/reference/generated/federation/) list all the +The [API reference docs](/docs/reference/federation/) list all the resources supported by federation apiserver. ## Cascading deletion diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index 607716b68e783..61457b45469bd 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -106,6 +106,18 @@ imply any preferential status. [Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers. [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf). +### AOS from Apstra + +[AOS](http://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs. + +The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment. + +AOS has a rich set of REST API endpoints that enable Kubernetes to quickly change the network policy based on application requirements. Further enhancements will integrate the AOS Graph model used for the network design with the workload provisioning, enabling an end to end management system for both private and public clouds. + +AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux. + +Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/ + ### Big Cloud Fabric from Big Switch Networks [Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premise environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring. @@ -122,6 +134,12 @@ containers. Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing. 
+### CNI-Genie from Huawei + +[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://git.k8s.io/website/docs/concepts/cluster-administration/networking.md#kubernetes-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/). + +CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin. + ### Contiv [Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced. @@ -247,11 +265,6 @@ Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-pl or stand-alone. In either version, it doesn't require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes. -### CNI-Genie from Huawei - -[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://git.k8s.io/website/docs/concepts/cluster-administration/networking.md#kubernetes-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/). - -CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin. ## Other reading diff --git a/content/en/docs/concepts/configuration/assign-pod-node.md b/content/en/docs/concepts/configuration/assign-pod-node.md index 2e21fb59cc46c..4b0a310483013 100644 --- a/content/en/docs/concepts/configuration/assign-pod-node.md +++ b/content/en/docs/concepts/configuration/assign-pod-node.md @@ -16,7 +16,7 @@ that a pod ends up on a machine with an SSD attached to it, or to co-locate pods services that communicate a lot into the same availability zone. You can find all the files for these examples [in our docs -repo here](https://github.com/kubernetes/website/tree/{{< param "docsbranch" >}}/docs/user-guide/node-selection). +repo here](https://github.com/kubernetes/website/tree/{{< param "docsbranch" >}}/docs/concepts/configuration/). {{< toc >}} @@ -134,7 +134,7 @@ value is `another-node-label-value` should be preferred. You can see the operator `In` being used in the example. 
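
The example referred to above is not reproduced in this diff; for orientation, a pod spec combining a required and a preferred node affinity term looks roughly like this (the label keys, values, and image are placeholders):

```
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name     # placeholder node label
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key        # placeholder node label
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
```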
The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`. You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use -[node taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to repel pods from specific nodes. +[node taints](/docs/concepts/configuration/taint-and-toleration/) to repel pods from specific nodes. If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod to be scheduled onto a candidate node. @@ -323,7 +323,7 @@ web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3 The above example uses `PodAntiAffinity` rule with `topologyKey: "kubernetes.io/hostname"` to deploy the redis cluster so that no two instances are located on the same host. -See [ZooKeeper tutorial](https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) +See [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique. For more information on inter-pod affinity/anti-affinity, see the diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md index 2db15bf3f32ea..b1ba81939b90a 100644 --- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md +++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md @@ -446,7 +446,7 @@ extender. { "urlPrefix":"For our first Deployment, we'll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Basics.
+ +For our first Deployment, we'll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Basics.
Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!
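
In the interactive tutorial, creating that first Deployment comes down to one kubectl command plus a check that it exists. The image name and port below are illustrative placeholders, not something this page prescribes:

```
# Create a Deployment running a single instance of the sample app,
# then list Deployments to confirm it was created.
kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
kubectl get deployments
```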
diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html index 1cbeccdc1b80d..785503f8a8b9d 100644 --- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -86,7 +86,7 @@Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling in will reduce the number of Pods to the new desired state. Kubernetes also supports autoscaling of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.
+Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling in will reduce the number of Pods to the new desired state. Kubernetes also supports autoscaling of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.
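
Concretely, the scaling described above is a single kubectl command; the Deployment name here is a placeholder from the tutorial:

```
# Scale the Deployment out to 4 replicas, check the Pods, then scale back in.
kubectl scale deployments/kubernetes-bootcamp --replicas=4
kubectl get pods -o wide
kubectl scale deployments/kubernetes-bootcamp --replicas=2
```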
Running multiple instances of an application will require a way to distribute the traffic to all of them. Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment. Services will monitor continuously the running Pods using endpoints, to ensure the traffic is sent only to available Pods.
diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index a47c589055c59..398b3b9e143cf 100644 --- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -68,26 +68,29 @@ When a PersistentVolumeClaim is created, a PersistentVolume is dynamically provi A [Secret](/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a password or key. The manifest files are already configured to use a Secret, but you have to create your own Secret. -1. Create the Secret object from the following command: +1. Create the Secret object from the following command. You will need to replace + `YOUR_PASSWORD` with the password you want to use. - kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD + ``` + kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD + ``` - {{< note >}} - **Note:** Replace `YOUR_PASSWORD` with the password you want to apply. - {{< /note >}} - 2. Verify that the Secret exists by running the following command: - kubectl get secrets + ``` + kubectl get secrets + ``` - The response should be like this: + The response should be like this: - NAME TYPE DATA AGE - mysql-pass Opaque 1 42s + ``` + NAME TYPE DATA AGE + mysql-pass Opaque 1 42s + ``` - {{< note >}} - **Note:** To protect the Secret from exposure, neither `get` nor `describe` show its contents. - {{< /note >}} +{{< note >}} +**Note:** To protect the Secret from exposure, neither `get` nor `describe` show its contents. +{{< /note >}} ## Deploy MySQL @@ -97,77 +100,96 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c 1. Deploy MySQL from the `mysql-deployment.yaml` file: - kubectl create -f mysql-deployment.yaml - -2. Verify that a PersistentVolume got dynamically provisioned: + ``` + kubectl create -f mysql-deployment.yaml + ``` - kubectl get pvc +2. Verify that a PersistentVolume got dynamically provisioned. Note that it can + It can take up to a few minutes for the PVs to be provisioned and bound. - {{< note >}} - **Note:** It can take up to a few minutes for the PVs to be provisioned and bound. - {{< /note >}} + ``` + kubectl get pvc + ``` - The response should be like this: + The response should be like this: - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - mysql-pv-claim Bound pvc-91e44fbf-d477-11e7-ac6a-42010a800002 20Gi RWO standard 29s + ``` + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + mysql-pv-claim Bound pvc-91e44fbf-d477-11e7-ac6a-42010a800002 20Gi RWO standard 29s + ``` 3. Verify that the Pod is running by running the following command: - kubectl get pods + ``` + kubectl get pods + ``` - {{< note >}} - **Note:** It can take up to a few minutes for the Pod's Status to be `RUNNING`. - {{< /note >}} + **Note:** It can take up to a few minutes for the Pod's Status to be `RUNNING`. - The response should be like this: + The response should be like this: - NAME READY STATUS RESTARTS AGE - wordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s + ``` + NAME READY STATUS RESTARTS AGE + wordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s + ``` ## Deploy WordPress -The following manifest describes a single-instance WordPress Deployment and Service. 
It uses many of the same features like a PVC for persistent storage and a Secret for the password. But it also uses a different setting: `type: NodePort`. This setting exposes WordPress to traffic from outside of the cluster. +The following manifest describes a single-instance WordPress Deployment and Service. It uses many of the same features like a PVC for persistent storage and a Secret for the password. But it also uses a different setting: `type: LoadBalancer`. This setting exposes WordPress to traffic from outside of the cluster. {{< code file="mysql-wordpress-persistent-volume/wordpress-deployment.yaml" >}} 1. Create a WordPress Service and Deployment from the `wordpress-deployment.yaml` file: - kubectl create -f wordpress-deployment.yaml + ``` + kubectl create -f wordpress-deployment.yaml + ``` 2. Verify that a PersistentVolume got dynamically provisioned: - kubectl get pvc + ``` + kubectl get pvc + ``` - {{< note >}} - **Note:** It can take up to a few minutes for the PVs to be provisioned and bound. - {{< /note >}} + **Note:** It can take up to a few minutes for the PVs to be provisioned and bound. - The response should be like this: + The response should be like this: - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - wp-pv-claim Bound pvc-e69d834d-d477-11e7-ac6a-42010a800002 20Gi RWO standard 7s + ``` + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + wp-pv-claim Bound pvc-e69d834d-d477-11e7-ac6a-42010a800002 20Gi RWO standard 7s + ``` 3. Verify that the Service is running by running the following command: - kubectl get services wordpress + ``` + kubectl get services wordpress + ``` - The response should be like this: + The response should be like this: - NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE - wordpress 10.0.0.89+ Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. + The open source project is hosted by the Cloud Native Computing Foundation (CNCF). +
diff --git a/layouts/partials/templates/blocks.html b/layouts/partials/templates/blocks.html index 280aa6aa5d14b..765b884ac70c2 100644 --- a/layouts/partials/templates/blocks.html +++ b/layouts/partials/templates/blocks.html @@ -9,8 +9,10 @@ {{ $section := $.ctx.Scratch.Get "section" }} {{ $headers := findRE "