Merge remote-tracking branch 'origin' into release-1.12

zparnold committed Sep 9, 2018
2 parents 28eb1cf + e79d0cc commit 55f8e53
Showing 33 changed files with 2,086 additions and 185 deletions.
@@ -0,0 +1,36 @@
---
layout: blog
title: '2018 Steering Committee Election Cycle Kicks Off'
date: 2018-09-06
---

**Author**: Paris Pittman (Google), Jorge Castro (Heptio), Ihor Dvoretskyi (CNCF)

Having a clear, definable governance model is crucial for the health of open source projects. Governance is especially critical for a project as large and active as Kubernetes, one of the highest-velocity projects in the open source world. A clear structure helps users trust that the project will be nurtured and will progress forward. Initially, this structure was laid down by the former 7-member bootstrap committee, composed of founders and senior contributors, with the goal of creating the foundational governance building blocks.

The [initial charter](https://git.k8s.io/steering/charter.md) and establishment of an election process to seat a full Steering Committee was a part of those first building blocks. Last year, the bootstrap committee [kicked off](https://groups.google.com/d/msg/kubernetes-dev/piPuoqFkJwA/mCjwLH81BgAJ) the first Kubernetes Steering Committee election which brought forth 6 new members from the community as voted on by contributors. These new members plus the bootstrap committee formed the [Steering Committee that we know today](https://github.com/kubernetes/steering). This yearly election cycle will continue to ensure that new representatives get cycled through to add different voices and thoughts on the Kubernetes project strategy.

The committee has worked hard on topics that streamline the project and how we operate. SIG (Special Interest Group) governance was an overarching recurring theme this year: the Kubernetes community is not a monolithic organization but a huge, distributed community, in which [Special Interest Groups (SIGs) and Working Groups (WGs)](https://github.com/kubernetes/community/blob/master/sig-list.md) are the atomic community units that make Kubernetes so successful from the ground up.

### Contributors - this is where you come in.

There are three seats up for election this year. The [voters guide](https://git.k8s.io/community/events/elections/2018) will get you up to speed on the specifics of this year's election, including candidate bios, which are updated in real time. The [elections process doc](https://github.com/kubernetes/steering/blob/master/elections.md) will steer you toward eligibility, operations, and the fine print.

1) Nominate yourself or someone else, and/or lend your support to others.

Want to help chart our course? Interested in governance and community topics? Add your name! _The nomination process is optional_.

2) Vote.

On September 19th, eligible voters will receive an email poll invite conducted by [CIVS](https://civs.cs.cornell.edu). The newly elected will be announced at the [weekly community meeting](https://github.com/kubernetes/community/tree/master/communication#weekly-meeting) on Thursday, October 4th at 5pm UTC.

To those who are running:

<img src="/images/blog/2018-09-06-2018-steering-committee-election-cycle-kicks-off/sc-elections.png" width="400">

### Helpful resources

* [Steering Committee](https://github.com/kubernetes/steering) - who sits on the committee, their terms, and info on their projects and meetings
* [Steering Committee Charter](https://git.k8s.io/steering/charter.md) - this is a great read if you’re interested in running (or in assessing the candidates!)
* [Election Process](https://git.k8s.io/steering/elections.md)
* [Voters Guide!](https://git.k8s.io/community/events/elections/2018) - Updated on a rolling basis. This guide will always have the latest information throughout the election cycle. The complete schedule of events and candidate bios will be housed here.
16 changes: 8 additions & 8 deletions content/en/case-studies/adform/index.html
@@ -5,9 +5,9 @@
cid: caseStudies
css: /css/style_case_studies.css
logo: adform_featured_logo.png
draft: true
draft: false
featured: true
weight: 1
weight: 47
quote: >
Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
---
@@ -35,15 +35,15 @@ <h2>Challenge</h2>

<h2>Solution</h2>
The team, which had already been using <a href="https://prometheus.io/">Prometheus</a> for monitoring, embraced <a href="https://kubernetes.io/">Kubernetes</a> and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."


</div>

<div class="col2">

<h2>Impact</h2>
"Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in <a href="https://grafana.com/">Grafana</a> dashboards provides great insight on your systems."


</div>

@@ -73,9 +73,9 @@ <h2>Adform made <a href="https://www.wsj.com/articles/fake-ad-operation-used-to-
</div>
<section class="section3">
<div class="fullcol">

The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."<br><br>
A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.
Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they’re still doing it."
<br><br>
The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."
Expand Down Expand Up @@ -106,7 +106,7 @@ <h2>Adform made <a href="https://www.wsj.com/articles/fake-ad-operation-used-to-
</div>

<div class="fullcol">
All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to re-start some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they’re happy because they can easily inspect the containers."
The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it’s cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we’re interested in is the <a href="https://github.com/virtual-kubelet/virtual-kubelet">Virtual Kubelet</a> that lets you spin up the working nodes on different clouds to do some computing."
<br><br>
Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we’re really happy that we joined it."
4 changes: 2 additions & 2 deletions content/en/case-studies/homeoffice/index.html
@@ -1,4 +1,4 @@
---
title: UK Home Office
title: Home Office UK
content_url: https://www.youtube.com/watch?v=F3iMkz_NSvU
---
---
2 changes: 1 addition & 1 deletion content/en/case-studies/ing/index.html
@@ -3,7 +3,7 @@
linkTitle: ING
case_study_styles: true
cid: caseStudies
weight: 20
weight: 50
featured: true
css: /css/style_case_studies.css
quote: >
4 changes: 2 additions & 2 deletions content/en/case-studies/pearson/index.html
@@ -4,7 +4,7 @@
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: true
featured: false
quote: >
We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online.
---
@@ -84,4 +84,4 @@ <h2>Impact</h2>
So far, about 15 production products are running on the new platform, including Pearson’s new flagship digital education service, the Global Learning Platform. The Cloud Platform team continues to prepare, onboard and support customers that are a good fit for the platform. Some existing products will be refactored into 12-factor apps, while others are being developed so that they can live on the platform from the get-go. "There are challenges with bringing in new customers of course, because we have to help them to see a different way of developing, a different way of building," says Shirley. <br><br>
But, he adds, "It is our corporate motto: Always Learning. We encourage those teams that haven’t started a cloud native journey, to see the future of technology, to learn, to explore. It will pique your interest. Keep learning."
</div>
</section>
</section>
14 changes: 7 additions & 7 deletions content/en/case-studies/pinterest/index.html
@@ -4,7 +4,7 @@
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: true
featured: false
weight: 30
quote: >
We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do.
@@ -32,15 +32,15 @@ <h2>Challenge</h2>
<br>

<h2>Solution</h2>
The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.

</div>

<div class="col2">

<h2>Impact</h2>
"By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. "We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."


</div>

@@ -55,7 +55,7 @@ <h2>Impact</h2>
<section class="section2">
<div class="fullcol">
<h2>Pinterest was born on the cloud—running on <a href="https://aws.amazon.com/">AWS</a> since day one in 2010—but even cloud native companies can experience some growing pains.</h2> Since its launch, Pinterest has become a household name, with more than 200 million active monthly users and 100 billion objects saved. Underneath the hood, there are 1,000 microservices running and hundreds of thousands of data jobs.<br><br>
With such growth came layers of infrastructure and diverse set-up tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience, and ultimately less velocity to get to production.
So in 2016, the company launched a roadmap toward a new compute platform, led by the vision of having the fastest path from an idea to production, without making engineers worry about the underlying infrastructure. <br><br>
The first phase involved moving to Docker. "Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. "To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn’t want to boil the ocean at that point."

@@ -68,7 +68,7 @@ <h2></h2>Pinterest was born on the cloud—running on <a href="https://aws.amazo
</div>
<section class="section3">
<div class="fullcol">

The first service that was migrated was the monolith API fleet that powers most of Pinterest. At the same time, Benedict’s infrastructure governance team built chargeback and capacity planning systems to analyze how the company uses its virtual machines on AWS. "It became clear that running on VMs is just not sustainable with what we’re doing," says Benedict. "A lot of resources were underutilized. There were efficiency efforts, which worked fine at a certain scale, but now you have to move to a more decentralized way of managing that. So orchestration was something we thought could help solve that piece."<br><br>
That led to the second phase of the roadmap. In July 2017, after an eight-week evaluation period, the team chose Kubernetes over other orchestration platforms. "Kubernetes lacked certain things at the time—for example, we wanted Spark on Kubernetes," says Benedict. "But we realized that the dev cycles we would put in to even try building that is well worth the outcome, both for Pinterest as well as the community. We’ve been in those conversations in the Big Data SIG. We realized that by the time we get to productionizing many of those things, we’ll be able to leverage what the community is doing."<br><br>
At the beginning of 2018, the team began onboarding its first use case into the Kubernetes system: Jenkins workloads. "Although we have builds happening during a certain period of the day, we always need to allocate peak capacity," says Benedict. "They don’t have any auto-scaling capabilities, so that capacity stays constant. It is difficult to speed up builds because ramping up takes more time. So given those kind of concerns, we thought that would be a perfect use case for us to work on."
@@ -86,7 +86,7 @@ <h2></h2>Pinterest was born on the cloud—running on <a href="https://aws.amazo
<div class="fullcol">
They ramped up the cluster, and working with a team of four people, got the Jenkins Kubernetes cluster ready for production. "We still have our static Jenkins cluster," says Benedict, "but on Kubernetes, we are doing similar builds, testing the entire pipeline, getting the artifact ready and just doing the comparison to see, how much time did it take to build over here. Is the SLA okay, is the artifact generated correct, are there issues there?" <br><br>
"So far it’s been good," he adds, "especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."<br><br>
By the end of Q1 2018, the team successfully migrated Jenkins Master to run natively on Kubernetes and also collaborated on the <a href="https://github.com/jenkinsci/kubernetes-plugin">Jenkins Kubernetes Plugin</a> to manage the lifecycle of workers. "We’re currently building the entire Pinterest JVM stack (one of the larger monorepos at Pinterest which was recently bazelized) on this new cluster," says Benedict. "At peak, we run thousands of pods on a few hundred nodes. Overall, by moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins. We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."


</div>
