Fixes issue where if the audit log is enabled and anonymous authentication is disabled, then an unauthenticated user request will cause a panic and crash the kube-apiserver. (#38717, @deads2k)
Known Issues for v1.5.1
hack/local-up-cluster.sh script times out waiting for apiserver to answer, see #38847.
To workaround this, modify the script to pass --anonymous-auth=true to sudo -E "${GO_OUT}/hyperkube" apiserver ... when starting kube-apiserver.
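As a sketch, the relevant line in hack/local-up-cluster.sh would be changed to something like the following (only the added flag is shown; the script's other apiserver flags stay as they are):
sudo -E "${GO_OUT}/hyperkube" apiserver --anonymous-auth=true   # plus the flags already present in the script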
StatefulSets are beta now (fixes and stabilization)
Improved Federation Support
New command: kubefed
DaemonSets
Deployments
ConfigMaps
Simplified Cluster Deployment
Improvements to kubeadm
HA Setup for Master
Node Robustness and Extensibility
Windows Server Container support
CRI for pluggable container runtimes
kubelet API supports authentication and authorization
Features
Features for this release were tracked via the use of the kubernetes/features issues repo. Each Feature issue is owned by a Special Interest Group from kubernetes/community
API Machinery
[beta] kube-apiserver support for the OpenAPI spec is moving from alpha to beta. The first non-go client is based on it (kubernetes/features#53)
Apps
[stable] When replica sets cannot create pods, they will now report detail via the API about the underlying reason (kubernetes/features#120)
[stable] kubectl apply is now able to delete resources you no longer need with --prune (kubernetes/features#128)
[beta] Deployments that cannot make progress in rolling out the newest version will now indicate via the API they are blocked (docs) (kubernetes/features#122)
[beta] StatefulSets allow workloads that require persistent identity or per-instance storage to be created and managed on Kubernetes. (docs) (kubernetes/features#137)
[beta] In order to preserve safety guarantees the cluster no longer force deletes pods on un-responsive nodes and users are now warned if they try to force delete pods via the CLI. (docs) (kubernetes/features#119)
Auth
[alpha] Further polishing of the Role-based access control alpha API including a default set of cluster roles. (docs) (kubernetes/features#2)
[beta] Added ability to authenticate/authorize access to the Kubelet API (docs) (kubernetes/features#89)
[alpha] Improved UX and usability for the kubeadm binary that makes it easy to get a new cluster running. (docs) (kubernetes/features#11)
Cluster Ops
[alpha] Added ability to create/remove clusters w/highly available (replicated) masters on GCE using kube-up/kube-down scripts. (docs) (kubernetes/features#48)
[alpha] Cluster federation: Added support for DeleteOptions.OrphanDependents for federation resources. (docs) (kubernetes/features#99)
[alpha] Introducing kubefed, a new command line tool to simplify federation control plane. (docs) (kubernetes/features#97)
Network
[stable] Services can reference another service by DNS name, rather than being hosted in pods (kubernetes/features#33)
[beta] Opt-in source IP preservation for Services with type NodePort or LoadBalancer (docs) (kubernetes/features#27)
[stable] Enable DNS Horizontal Autoscaling with beta ConfigMap parameters support (docs)
Node
[alpha] Added ability to preserve access to host userns when userns remapping is enabled in container runtime (kubernetes/features#127)
[alpha] Introducing the v1alpha1 CRI API to allow pluggable container runtimes; an experimental docker-CRI integration is ready for testing and feedback. (docs) (kubernetes/features#54)
[alpha] Kubelet launches containers in a per-pod cgroup hierarchy based on quality of service tier (kubernetes/features#126)
[beta] Kubelet integrates with memcg notification API to detect when a hard eviction threshold is crossed (kubernetes/features#125)
[beta] Introducing the beta version containerized node conformance test gcr.io/google_containers/node-test:0.2 for users to verify node setup. (docs) (kubernetes/features#84)
[beta] PodDisruptionBudget has been promoted to beta, can be used to safely drain nodes while respecting application SLO's (docs) (kubernetes/features#85)
UI
[stable] Dashboard UI now shows all user facing objects and their resource usage. (docs) (kubernetes/features#136)
Windows
[alpha] Added support for Windows Server 2016 nodes and scheduling Windows Server Containers (docs) (kubernetes/features#116)
getDeviceNameFromMount() function doesn't return the volume path correctly when the volume path contains spaces [#37712](kubernetes#37712)
Federation alpha features do not have feature gates defined and are hence enabled by default. This will be fixed in a future release. [#38593](kubernetes#38593)
Federation control plane can be upgraded by updating the image fields in the Deployment specs of the control plane components. However, federation control plane upgrades were not tested in this release (#38537).
For StatefulSet (previously PetSet), this change means creation of
replacement pods is blocked until old pods are definitely not running
(indicated either by the kubelet returning from partitioned state,
deletion of the Node object, deletion of the instance in the cloud provider,
or force deletion of the pod from the api-server).
This helps prevent "split brain" scenarios in clustered applications by
ensuring that unreachable pods will not be presumed dead unless some
"fencing" operation has provided one of the above indications.
For all other existing controllers except StatefulSet, this has no effect on
the ability of the controller to replace pods because the controllers do not
reuse pod names (they use generate-name).
User-written controllers that reuse names of pod objects should evaluate this change.
When deleting an object with kubectl delete ... --grace-period=0, the client will
begin a graceful deletion and wait until the resource is fully deleted. To force
deletion immediately, use the --force flag. This prevents users from accidentally
allowing two StatefulSet pods to share the same persistent volume, which could lead to data
corruption [#37263](kubernetes#37263)
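For example (hypothetical pod name), an immediate force deletion now looks like:
kubectl delete pod my-pod --grace-period=0 --force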
kube-apiserver learned the '--anonymous-auth' flag, which defaults to true. When enabled, requests to the secure port that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of 'system:anonymous' and a group of 'system:unauthenticated'.
Authenticated users are decorated with a 'system:authenticated' group.
IMPORTANT: See Action Required for important actions related to this change.
kubectl get -o jsonpath=... will now throw an error if the path is to a field not present in the json, even if the path is for a field valid for the type. This is a change from the pre-1.5 behavior, which would return the default value for some fields even if they were not present in the json. ([#37991](kubernetes#37991), [@pwittrock](http://github.com/pwittrock))
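For example (hypothetical pod name), the following now returns an error if the pod does not yet have a pod IP, instead of printing an empty default value:
kubectl get pod my-pod -o jsonpath='{.status.podIP}'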
The strategicmerge patchMergeKey for VolumeMounts was changed from "name" to "mountPath". This was necessary because the name field refers to the name of the Volume, and is not a unique key for the VolumeMount. Multiple VolumeMounts will have the same Volume name if mounting the same volume more than once. The "mountPath" is verified to be unique and can act as the merge key. ([#35071](https://github.com/kubernetes/kubernetes/pull/35071), [@pwittrock](http://github.com/pwittrock))
**Important security-related changes before upgrading**
You MUST set the --anonymous-auth=false flag on your kube-apiserver unless you are a developer testing this feature and understand it.
If you do not, you risk allowing unauthorized users to access your apiserver.
You MUST set the --anonymous-auth=false flag on your federation apiserver unless you are a developer testing this feature and understand it.
If you do not, you risk allowing unauthorized users to access your federation apiserver.
You do not need to adjust this flag on the kubelet: there was no authorization for the kubelet APIs in 1.4.
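As a sketch, the flag is simply added to your existing server invocations (all other flags unchanged):
kube-apiserver --anonymous-auth=false          # plus your existing kube-apiserver flags
federation-apiserver --anonymous-auth=false    # plus your existing federation-apiserver flags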
PetSet has been renamed to StatefulSet.
If you have existing PetSets, you must perform extra migration steps both
before and after upgrading to convert them to StatefulSets. (docs) ([#35663](kubernetes#35663), [@janetkuo](https://github.com/janetkuo))
The deprecated kubelet --configure-cbr0 flag has been removed, and with that the "classic" networking mode as well. If you depend on this mode, please investigate whether the other network plugins kubenet or cni meet your needs. ([#34906](kubernetes#34906), [@luxas](https://github.com/luxas))
If you used the PodDisruptionBudget feature in 1.4 (i.e. created PodDisruptionBudget objects), then BEFORE upgrading from 1.4 to 1.5, you must delete all PodDisruptionBudget objects (policy/v1alpha1/PodDisruptionBudget) that you have created. It is not possible to delete these objects after you upgrade, and their presence will prevent you from using the beta PodDisruptionBudget feature in 1.5 (which uses policy/v1beta1/PodDisruptionBudget). If you have already upgraded, you will need to downgrade the master to 1.4 to delete the policy/v1alpha1/PodDisruptionBudget objects.
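A minimal sketch of that cleanup, assuming kubectl has access to every namespace (it iterates namespaces because kubectl delete operates per namespace):
for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
  kubectl delete poddisruptionbudgets --all --namespace="$ns"
done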
External Dependency Version Information
Continuous integration builds have used the following versions of external dependencies; however, this is not a strong recommendation, and users should consult an appropriate installation or upgrade guide before deciding which versions of etcd, docker, or rkt to use.
kubelet: don't reject pods without adding them to the pod manager (#37661, @yujuhong)
Fix photon controller plugin to construct with correct PdID (#37167, @luomiao)
Fix the equality checks for numeric values in cluster/gce/util.sh. (#37638, @roberthbailey)
federation service controller: stop deleting services from underlying clusters when federated service is deleted. (#37353, @nikhiljindal)
Set Dashboard UI version to v1.5.0 (#37684, @rf232)
When deleting an object with --grace-period=0, the client will begin a graceful deletion and wait until the resource is fully deleted. To force deletion, use the --force flag. (#37263, @smarterclayton)
Removes shorthand flag -w from kubectl apply (#37345, @MrHohn)
Fix issue in converting AWS volume ID from mount paths (#36840, @jingxu97)
fix leaking memory backed volumes of terminated pods (#36779, @sjenning)
Default logging subsystem's resiliency was greatly improved, fluentd memory consumption and OOM error probability was reduced. (#37021, @Crassirostris)
Federation: allow specification of dns zone by ID (#36336, @justinsb)
K8s 1.5 keeps container-vm as the default node image on GCE for backwards compatibility reasons. Please be aware that container-vm is officially deprecated (supported with security patches only) and you should replace it with GCI if at all possible. You can review the migration guide here for more detail: https://cloud.google.com/container-engine/docs/node-image-migration (#36822, @mtaufen)
Add a flag allowing contention profiling of the API server (#36756, @gmarek)
Rename --cgroups-per-qos to --experimental-cgroups-per-qos in Kubelet (#36767, @vishh)
Implement CanMount() for gfsMounter for linux (#36686, @rkouj)
Default host user namespace via experimental flag (#31169, @pweil-)
Use generous limits in the resource usage tracking tests (#36623, @yujuhong)
Update Dashboard UI version to 1.4.2 (#35895, @rf232)
Add support for service load balancer source ranges to Azure load balancers. (#36696, @brendandburns)
Fix fetching pids running in a cgroup, which caused problems with OOM score adjustments & setting the /system cgroup ("misc" in the summary API). (#36551, @timstclair)
federation: Adding support for DeleteOptions.OrphanDependents for federated replicasets and deployments. Setting it to false while deleting a federated replicaset or deployment also deletes the corresponding resource from all registered clusters. (#36476, @nikhiljindal)
Migrates addons from RCs to Deployments (#36008, @MrHohn)
Avoid setting S_ISGID on files in volumes (#36386, @sjenning)
federation: Adding support for DeleteOptions.OrphanDependents for federated daemonsets and ingresses. Setting it to false while deleting a federated daemonset or ingress also deletes the corresponding resource from all registered clusters. (#36330, @nikhiljindal)
Node Conformance Test: Containerize the node e2e test (#31093, @Random-Liu)
federation: Adding support for DeleteOptions.OrphanDependents for federated secrets. Setting it to false while deleting a federated secret also deletes the corresponding secrets from all registered clusters. (#36296, @nikhiljindal)
Deploy kube-dns with cluster-proportional-autoscaler (#33239, @MrHohn)
Adds support for StatefulSets in kubectl drain. (#35483, @ymqytw)
Switches to using the eviction sub-resource instead of deletion in kubectl drain, if the server supports it.
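For example (hypothetical node name), draining a node that runs StatefulSet pods now works and will use the eviction API when the server offers it:
kubectl drain my-node --ignore-daemonsets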
azure: load balancer preserves destination ip address (#36256, @colemickens)
[AppArmor] Hold bad AppArmor pods in pending rather than rejecting (#35342, @timstclair)
Federation: separate notion of zone-name & dns-suffix (#35372, @justinsb)
In order to bypass graceful deletion of pods (to immediately remove the pod from the API) the user must now provide the --force flag in addition to --grace-period=0. This prevents users from accidentally force deleting pods without being aware of the consequences of force deletion. Force deleting pods for resources like StatefulSets can result in multiple pods with the same name having running processes in the cluster, which may lead to data corruption or data inconsistency when using shared storage or common API endpoints. (#35484, @smarterclayton)
have basic kubectl crud agnostic of registered types (#36085, @deads2k)
Fix how we iterate over active jobs when removing them for Replace policy (#36161, @soltysh)
Adds TCPCloseWaitTimeout option to kube-proxy for sysctl nf_conntrack_tcp_timeout_time_wait (#35919, @bowei)
Pods that are terminating due to eviction by the node controller (typically due to an unresponsive kubelet or a network partition) now surface in kubectl get output as being in state "Unknown", along with a longer description in kubectl describe output. (#36017, @foxish)
The hostname of the node (as autodetected by the kubelet, specified via --hostname-override, or determined by the cloudprovider) is now recorded as an address of type "Hostname" in the status of the Node API object. The hostname is expected to be resolvable from the apiserver. (#25532, @mkulke)
[Kubelet] Add alpha support for --cgroups-per-qos using the configured --cgroup-driver. Disabled by default. (#31546, @derekwaynecarr)
Move Statefulset (previously PetSet) to v1beta1 (#35731, @janetkuo)
The error handling behavior of pkg/client/restclient.Result has changed. Calls to Result.Raw() will no longer parse the body, although they will still return errors that react to pkg/api/errors.Is*() as in previous releases. Callers of Get() and Into() will continue to receive errors that are parsed from the body if the kind and apiVersion of the body match the Status object. (#36001, @smarterclayton)
This more closely aligns rest client as a generic RESTful client, while preserving the special Kube API extended error handling for the Get and Into methods (which most Kube clients use).
Making the pod.alpha.kubernetes.io/initialized annotation optional in PetSet pods (#35739, @foxish)
The main kubernetes repository stops hosting archived version of released clients. Please use client-go. (#35928, @caesarxuchao)
Correct the article in generated documents (#32557, @asalkeld)
Update PodAntiAffinity to ignore calls to subresources (#35608, @soltysh)
The apiserver can now select which type of kubelet-reported address to use for apiserver->node communications, using the --kubelet-preferred-address-types flag. (#35497, @liggitt)
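A sketch of the new flag (the address-type order shown here is only an example):
kube-apiserver --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP   # plus your existing flags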
update the list of available resources (#32687, @jouve)
Remove stale volumes if endpoint/svc creation fails. (#35285, @humblec)
Fix issue in reconstruct volume data when kubelet restarts (#36616, @jingxu97)
Add sync state loop in master's volume reconciler (#34859, @jingxu97)
AWS: strong-typing for k8s vs aws volume ids (#35883, @justinsb)
Bump GCI version to gci-beta-55-8872-47-0 (#36679, @mtaufen)
gci-beta-55-8872-47-0:
Date: Nov 11, 2016
Kernel: ChromiumOS-4.4
Kubernetes: v1.4.5
Docker: v1.11.2
Changelog (vs 55-8872-18-0)
* Cherry-pick runc PR#608: Eliminate redundant parsing of mountinfo
* Updated kubernetes to v1.4.5
* Fixed a bug in e2fsprogs that caused mke2fs to take a very long time. Upstream fix: http://git.kernel.org/cgit/fs/ext2/e2fsprogs.git/commit/?h=next&id=d33e690fe7a6cbeb51349d9f2c7fb16a6ebec9c2
Fix fetching pids running in a cgroup, which caused problems with OOM score adjustments & setting the /system cgroup ("misc" in the summary API). (#36614, @timstclair)
DELETE requests can now pass in their DeleteOptions as a query parameter or a body parameter, rather than just as a body parameter. (#35806, @bdbauer)
rkt: Convert image name to be a valid ACI identifier (#34375, @euank)
Remove stale volumes if endpoint/svc creation fails. (#35285, @humblec)
Remove Job also from .status.active for Replace strategy (#35420, @soltysh)
Update PodAntiAffinity to ignore calls to subresources (#35608, @soltysh)
Adds TCPCloseWaitTimeout option to kube-proxy for sysctl nf_conntrack_tcp_timeout_time_wait (#35919, @bowei)
Fix how we iterate over active jobs when removing them for Replace policy (#36161, @soltysh)
Bump GCI version to latest m55 version in GCE for K8s 1.4 (#36302, @mtaufen)
Add a check for file size if the reading content returns empty (#33976, @jingxu97)
Add a retry when reading a file content from a container (#35560, @jingxu97)
Skip CLOSE_WAIT e2e test if server is 1.4.5 (#36404, @bowei)
Avoid overriding system and kubelet cgroups on GCI (#35319, @vishh)
* Make the kubectl from k8s release the default on GCI
kubelet summary rootfs now refers to the filesystem that contains the Kubelet RootDirectory (/var/lib/kubelet) instead of cAdvisor's rootfs (/), since they may be different filesystems. (#35136, @dashpole)
Fix cadvisor_unsupported and the crossbuild (#35817, @luxas)
kubenet: SyncHostports for both running and ready to run pods. (#31388, @yifan-gu)
Remove scheduler flags that were marked as deprecated 2+ releases ago. (#34471, @timothysc)
Other notable changes
Make the fake RESTClient usable by all the API groups, not just core. (#35492, @madhusudancs)
Adding support for DeleteOptions.OrphanDependents for federated namespaces. Setting it to false while deleting a federated namespace also deletes the corresponding namespace from all registered clusters. (#34648, @nikhiljindal)
Kubelet flag '--mounter-path' renamed to '--experimental-mounter-path' (#35646, @vishh)
Node status updater should SetNodeStatusUpdateNeeded if it fails to update status (#34368, @jingxu97)
Deprecate OpenAPI spec for GroupVersion endpoints in favor of single spec /swagger.json (#35388, @mbohlool)
fixed typo in script which made setting custom cidr in gce using kube-up impossible (#35267, @tommywo)
The podGC controller will now always run, irrespective of the value supplied to the "terminated-pod-gc-threshold" flag supplied to the controller manager. (#35476, @foxish)
The specific behavior of the podGC controller to clean up terminated pods is still governed by the flag, but the podGC's responsibilities have evolved beyond just cleaning up terminated pods.
Update grafana version used by default in kubernetes to 3.1.1 (#35435, @Crassirostris)
vSphere Kube-up: resolve vm-names on all nodes (#35365, @kerneltime)
bootstrap: Start hostNetwork pods even if network plugin not ready (#33347, @justinsb)
Factor out post-init swagger and OpenAPI routes (#32590, @sttts)
pvc.Spec.Resources.Requests min and max can be enforced with a LimitRange of type "PersistentVolumeClaim" in the namespace (#30145, @markturansky)
Federated DaemonSet controller. Supports all the API that regular DaemonSet has. (#34319, @mwielgus)
New federation deployment mechanism now allows non-GCP clusters. (#34620, @madhusudancs)
* Writes the federation kubeconfig to the local kubeconfig file.
Update the series and the README to reflect the change. (#30374, @mbruzek)
Fix non-starting node controller in 1.4 branch (#34895, @wojtek-t)
Cherrypick #34851 "Only wait for cache syncs once in NodeController" (#34861, @jessfraz)
NodeController waits for informer sync before doing anything (#34809, @gmarek)
Make NodeController recognize deletion tombstones (#34786, @davidopp)
Fix panic in NodeController caused by receiving DeletedFinalStateUnknown object from the cache. (#34694, @gmarek)
Update GlusterFS provisioning readme with endpoint/service details (#31854, @humblec)
Add logging for enabled/disabled API Groups (#32198, @deads2k)
New federation deployment mechanism now allows non-GCP clusters. (#34620, @madhusudancs)
* Writes the federation kubeconfig to the local kubeconfig file.
Cherrypick #34851 "Only wait for cache syncs once in NodeController" (#34861, @jessfraz)
NodeController waits for informer sync before doing anything (#34809, @gmarek)
Make NodeController recognize deletion tombstones (#34786, @davidopp)
Fix panic in NodeController caused by receiving DeletedFinalStateUnknown object from the cache. (#34694, @gmarek)
Update GlusterFS provisioning readme with endpoint/service details (#31854, @humblec)
Add logging for enabled/disabled API Groups (#32198, @deads2k)
New federation deployment mechanism now allows non-GCP clusters. (#34620, @madhusudancs)
* Writes the federation kubeconfig to the local kubeconfig file.
Alpha JWS Discovery API for locating an apiserver securely (#32203, @dgoodwin)
Action Required
kube-apiserver learned the '--anonymous-auth' flag, which defaults to true. When enabled, requests to the secure port that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of 'system:anonymous' and a group of 'system:unauthenticated'. (#32386, @liggitt)
Authenticated users are decorated with a 'system:authenticated' group.
NOTE: anonymous access is enabled by default. If you rely on authentication alone to authorize access, change to use an authorization mode other than AlwaysAllow, or set '--anonymous-auth=false'.
The NamespaceExists and NamespaceAutoProvision admission controllers have been removed. (#31250, @derekwaynecarr)
All cluster operators should use NamespaceLifecycle.
Federation binaries and their corresponding docker images - federation-apiserver and federation-controller-manager are now folded in to the hyperkube binary. If you were using one of these binaries or docker images, please switch to using the hyperkube version. Please refer to the federation manifests - federation/manifests/federation-apiserver.yaml and federation/manifests/federation-controller-manager-deployment.yaml for examples. (#29929, @madhusudancs)
Other notable changes
The kube-apiserver --service-account-key-file option can be specified multiple times, or can point to a file containing multiple keys, to enable rotation of signing keys. (#34029, @liggitt)
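For example (hypothetical key paths), both keys are accepted while rotating the signing key:
kube-apiserver --service-account-key-file=/etc/kubernetes/sa-old.pub --service-account-key-file=/etc/kubernetes/sa-new.pub   # plus your existing flags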
The apiserver now uses addresses reported by the kubelet in the Node object's status for apiserver->kubelet communications, rather than the name of the Node object. The address type used defaults to InternalIP, ExternalIP, and LegacyHostIP address types, in that order. (#33718, @justinsb)
Federated deployment controller that supports the same api as the regular kubernetes deployment controller. (#34109, @mwielgus)
Match GroupVersionKind against specific version (#34010, @soltysh)
kubectl: Add external ip information to node when '-o wide' is used (#33552, @floreks)
Update GCI base image: (#34156, @adityakali)
* Enabled VXLAN and IP_SET config options in kernel to support some networking tools (ebtools)
* OpenSSL CVE fixes
ContainerVm/GCI image: try to use ifdown/ifup if available (#33595, @freehan)
Use manifest digest (as docker-pullable://) as ImageID when available (exposes a canonical, pullable image ID for containers). (#33014, @DirectXMan12)
Add kubelet awareness to taint tolerant match calculator. (#26501, @resouer)
Fix nil pointer issue when getting metrics from volume mounter (#34251, @jingxu97)
Enforce Disk based pod eviction with GCI base image in Kubelet (#33520, @vishh)
Remove headers that are unnecessary for proxy target (#34076, @mbohlool)
Add missing argument to log message in federated ingress controller. (#34158, @quinton-hoole)
The kubelet --eviction-minimum-reclaim option can now take percentages as well as absolute values for resource quantities (#33392, @sjenning)
The implicit registration of Prometheus metrics for workqueue has been removed, and a plug-able interface was added. If you were using workqueue in your own binaries and want these metrics, add the following to your imports in the main package: "k8s.io/pkg/util/workqueue/prometheus". (#33792, @caesarxuchao)
Add kubectl --node-port option for specifying the service nodeport (#33319, @juanvallejo)
To reduce memory usage to reasonable levels in smaller clusters, kube-apiserver now sets the deserialization cache size based on the target memory usage. (#34000, @wojtek-t)
use service accounts as clients for controllers (#33310, @deads2k)
Add a new option "--local" to the kubectl annotate (#34074, @asalkeld)
Add a new option "--local" to the kubectl label (#33990, @asalkeld)
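For example (hypothetical file and label), --local evaluates the change against a local manifest without contacting the server:
kubectl label --local -f ./pod.yaml environment=test -o yaml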
Initialize podsWithAffinity to avoid scheduler panic (#33967, @xiang90)
Fix base image pinning during upgrades via cluster/gce/upgrade.sh (#33147, @vishh)
Remove the flannel experimental overlay (#33862, @luxas)
CRI: Remove the mount name and port name. (#33970, @yifan-gu)
Enable kubectl describe rs to work when apiserver does not support pods (#33794, @nikhiljindal)
Fixes in HPA: consider only running pods; proper denominator in avg request calculations. (#33735, @jszczepkowski)
When CORS Handler is enabled, we now add a new HTTP header named "Access-Control-Expose-Headers" with a value of "Date". This allows the "Date" HTTP header to be accessed from XHR/JavaScript. (#33242, @dims)
Add port forwarding for rkt with kvm stage1 (#32126, @jjlakis)
The value of the versioned.Event object (returned by watch APIs) in the Swagger 1.2 schemas has been updated from *versioned.Event which was not expected by many client tools. The new value is consistent with other structs returned by the API. (#33007, @smarterclayton)
Remove cpu limits for dns pod to avoid CPU starvation (#33227, @vishh)
Allow secure access to apiserver from Admission Controllers (#31491, @dims)
Resolves x509 verification issue with masters dialing nodes when started with --kubelet-certificate-authority (#33141, @liggitt)
Fix possible panic in PodAffinityChecker (#33086, @ivan4th)
Upgrading Container-VM base image for k8s on GCE. Brief changelog as follows: (#32738, @Amey-D)
- Fixed performance regression in veth device driver
- Docker and related binaries are statically linked
- Fixed the issue of systemd being oom-killable
Move HighWaterMark to the top of the struct in order to fix arm (#33117, @luxas)
kubenet: SyncHostports for both running and ready to run pods. (#31388, @yifan-gu)
Limit the number of names per image reported in the node status (#32914, @yujuhong)
Some components like kube-dns and kube-proxy could fail to load the service account token when started within a pod. Properly handle empty configurations to try loading the service account config. (#31947, @smarterclayton)
Removed comments in json config when using kubectl edit with -o json (#31685, @jellonek)
fixes invalid null selector issue in sysdig example yaml (#31393, @baldwinSPC)
Rescheduler, which ensures that critical pods are always scheduled, is now enabled by default in GCE. (#31974, @piosz)
Added liveness probe to Heapster service. (#31878, @mksalawa)
Adding clusters to the list of valid resources printed by kubectl help (#31719, @nikhiljindal)
Kubernetes server components using kubeconfig files no longer default to http://localhost:8080. Administrators must specify a server value in their kubeconfig files. (#30808, @smarterclayton)
Include security options in the container created event (#31557, @timstclair)
Federation can now be deployed using the federation/deploy/deploy.sh script. This script does not depend on any of the development environment shell library/scripts. This is an alternative to the current federation-up.sh/federation-down.sh scripts. Both the scripts are going to co-exist in this release, but the federation-up.sh/federation-down.sh scripts might be removed in a future release in favor of federation/deploy/deploy.sh script. (#30744, @madhusudancs)
Add get/delete cluster, delete context to kubectl config (#29821, @alexbrand)
rkt: Force rkt fetch to fetch from remote to conform to the image pull policy. (#31378, @yifan-gu)
Allow services which use same port, different protocol to use the same nodePort for both (#30253, @AdoHe)
Remove environment variables and internal Kubernetes Docker labels from cAdvisor Prometheus metric labels. (#31064, @grobie)
Old behavior:
environment variables explicitly whitelisted via --docker-env-metadata-whitelist were exported as container_env_*=*. Default is zero, so by default none were exported
all docker labels were exported as container_label_*=*
New behavior:
Only container_name, pod_name, namespace, id, image, and name labels are exposed
no environment variables will be exposed ever via /metrics, even if whitelisted
Update GCI base image: (#34156, @adityakali)
* Enabled VXLAN and IP_SET config options in kernel to support some networking tools (ebtools)
* OpenSSL CVE fixes
ContainerVm/GCI image: try to use ifdown/ifup if available (#33595, @freehan)
Make the informer library available for the go client library. (#32718, @mikedanese)
Enforce Disk based pod eviction with GCI base image in Kubelet (#33520, @vishh)
Fix nil pointer issue when getting metrics from volume mounter (#34251, @jingxu97)
Enable kubectl describe rs to work when apiserver does not support pods (#33794, @nikhiljindal)
Add missing argument to log message in federated ingress controller. (#34158, @quinton-hoole)
Fix issue in updating device path when volume is attached multiple times (#33796, @jingxu97)
To reduce memory usage to reasonable levels in smaller clusters, kube-apiserver now sets the deserialization cache size based on the target memory usage. (#34000, @wojtek-t)
Fix possible panic in PodAffinityChecker (#33086, @ivan4th)
Fix race condition in setting node statusUpdateNeeded flag (#32807, @jingxu97)
kube-proxy: Add a lower-bound for conntrack (128k default) (#33051, @thockin)
Use patched golang1.7.1 for cross-builds targeting darwin (#33803, @ixdy)
Move HighWaterMark to the top of the struct in order to fix arm (#33117, @luxas)
Move HighWaterMark to the top of the struct in order to fix arm, second time (#33376, @luxas)
This is the first release tracked via the use of the kubernetes/features issues repo. Each Feature issue is owned by a Special Interest Group from kubernetes/community
API Machinery
[alpha] Generate audit logs for every request user performs against secured API server endpoint. (docs) (kubernetes/features#22)
[beta] kube-apiserver now publishes a swagger 2.0 spec in addition to a swagger 1.2 spec (kubernetes/features#53)
[beta] Server-side garbage collection is enabled by default. See user-guide
Apps
[alpha] Introducing 'ScheduledJobs', which allow running time-based Jobs, either once at a specified time or repeatedly at a specified interval. (docs) (kubernetes/features#19)
Auth
[alpha] Container Image Policy allows an access controller to determine whether a pod may be scheduled based on a policy (docs) (kubernetes/features#59)
[alpha] Access Review APIs expose authorization engine to external inquiries for delegation, inspection, and debugging (docs) (kubernetes/features#37)
Cluster Lifecycle
[alpha] Ensure critical cluster infrastructure pods (Heapster, DNS, etc.) can schedule by evicting regular pods when necessary to make the critical pods schedule. (docs) (kubernetes/features#62)
[alpha] Simplifies bootstrapping of TLS secured communication between the API server and kubelet. (docs) (kubernetes/features#43)
[alpha] Creating a Federated Ingress is as simple as submitting
an Ingress creation request to the Federation API Server. The
Federation control system then creates and maintains a single
global virtual IP to load balance incoming HTTP(S) traffic across
some or all the registered clusters, across all regions. Google's
GCE L7 LoadBalancer is the first supported implementation, and
is available in this release.
(docs)
(kubernetes/features#82)
[beta] Federated Replica Sets create and maintain matching
Replica Sets in some or all clusters in a federation, with the
desired replica count distributed equally or according to
specified per-cluster weights.
(docs)
(kubernetes/features#46)
[beta] Federated Secrets are created and kept consistent across all clusters in a federation.
(docs)
(kubernetes/features#68)
[beta] Federation API server gained support for events and many
federation controllers now report important events.
(docs)
(kubernetes/features#70)
[alpha] Creating a Federated Namespace causes matching
Namespaces to be created and maintained in all the clusters registered with that federation. (docs) (kubernetes/features#69)
[alpha] ingress has alpha support for a single master multi zone cluster (docs) (kubernetes/features#52)
Network
[alpha] Service LB now has alpha support for preserving client source IP (docs) (kubernetes/features#27)
[alpha] Pods now have alpha support for setting whitelisted, safe sysctls. Unsafe sysctls can be whitelisted on the kubelet. (docs) (kubernetes/features#34)
[alpha] Allows pods to require or prohibit (or prefer or prefer not) co-scheduling on the same node (or zone or other topology domain) as another set of pods. (docs) (kubernetes/features#51)
Storage
[beta] Persistent Volume provisioning now supports multiple provisioners using StorageClass configuration. (docs) (kubernetes/features#36)
[stable] Kubernetes Dashboard UI - a great looking Kubernetes Dashboard UI with 90% CLI parity for at-a-glance management. docs
[stable] kubectl no longer applies defaults before sending objects to the server in create and update requests, allowing the server to apply the defaults. (kubernetes/features#55)
Known Issues
Completed pods lose logs across node upgrade (#32324)
non-hostNetwork daemonsets will almost always have a pod that fails to schedule (#32900)
Service loadBalancerSourceRanges doesn't respect updates (#33033)
disallow user to update loadbalancerSourceRanges (#33346)
Notable Changes to Existing Behavior
Deployments
ReplicaSets of paused Deployments are now scaled while the Deployment is paused. This is retroactive to existing Deployments.
When scaling a Deployment during a rollout, the ReplicaSets of that Deployment are now scaled proportionally based on the number of replicas each has, instead of only scaling the newest ReplicaSet.
kubectl rolling-update: < v1.4.0 client vs >=v1.4.0 cluster
Old version kubectl's rolling-update command is compatible with Kubernetes 1.4 and higher only if you specify a new replication controller name. You will need to update to kubectl 1.4 or higher to use the rolling update command against a 1.4 cluster if you want to keep the original name, or you'll have to do two rolling updates.
If you do happen to use old version kubectl's rolling update against a 1.4 cluster, it will fail, usually with an error message that will direct you here. If you saw that error, then don't worry, the operation succeeded except for the part where the new replication controller is renamed back to the old name. You can just do another rolling update using kubectl 1.4 or higher to change the name back: look for a replication controller that has the original name plus a random suffix.
Unfortunately, there is a much rarer second possible failure mode: the replication controller gets renamed to the old name, but there is a duplicated set of pods in the cluster. kubectl will not report an error since it thinks its job is done.
If this happens to you, you can wait at most 10 minutes for the replication controller to start a resync; the extra pods will then be deleted. Or, you can manually trigger a resync by changing the replicas in the spec of the replication controller.
kubectl delete: < v1.4.0 client vs >=v1.4.0 cluster
If you use an old version kubectl to delete a replication controller or replicaset, then after the delete command has returned, the replication controller or the replicaset will continue to exist in the key-value store for a short period of time (<1s). You probably will not notice any difference if you use kubectl manually, but you might notice it if you are using kubectl in a script.
DELETE operation in REST API
Replication controller & Replicaset: the DELETE request of a replication controller or a replicaset becomes asynchronous by default. The object will continue to exist in the key-value store for some time. The API server will set its metadata.deletionTimestamp, add the "orphan" finalizer to its metadata.finalizers. The object will be deleted from the key-value store after the garbage collector orphans its dependents. Please refer to this user-guide for more information regarding the garbage collection.
Other objects: no changes unless you explicitly request orphaning.
Action Required Before Upgrading
If you are using Kubernetes to manage docker containers, please be aware Kubernetes has been validated to work with docker 1.9.1, docker 1.11.2 (#23397), and docker 1.12.0 (#28698)
If you upgrade your apiserver to 1.4.x but leave your kubelets at 1.3.x, they will not report init container status, but init containers will work properly. Upgrading kubelets to 1.4.x fixes this.
The NamespaceExists and NamespaceAutoProvision admission controllers have been removed, use the NamespaceLifecycle admission controller instead (#31250, @derekwaynecarr)
If upgrading Cluster Federation components from 1.3.x, the federation-apiserver and federation-controller-manager binaries have been folded into hyperkube. Please switch to using that instead. (#29929, @madhusudancs)
If you are using the PodSecurityPolicy feature (eg: kubectl get podsecuritypolicy does not error, and returns one or more objects), be aware that init containers have moved from alpha to beta. If there are any pods with the key pods.beta.kubernetes.io/init-containers, then that pod may not have been filtered by the PodSecurityPolicy. You should find such pods and either delete them or audit them to ensure they do not use features that you intend to be blocked by PodSecurityPolicy. (#31026, @erictune)
If upgrading Cluster Federation components from 1.3.x, please ensure your cluster name is a valid DNS label (#30956, @nikhiljindal)
kubelet's --config flag has been deprecated, use --pod-manifest-path instead (#29999, @mtaufen)
If upgrading Cluster Federation components from 1.3.x, be aware the federation-controller-manager now looks for a different secret name. Run the following to migrate (#28938, @madhusudancs)
kubectl --namespace=federation get secret federation-apiserver-secret -o json | sed 's/federation-apiserver-secret/federation-apiserver-kubeconfig/g' | kubectl create -f -
# optionally, remove the old secret
kubectl delete secret --namespace=federation federation-apiserver-secret
Kubernetes components no longer handle panics, and instead actively crash. All Kubernetes components should be run by something that actively restarts them. This is true of the default setups, but those with custom environments may need to double-check (#28800, @lavalamp)
kubelet now defaults to --cloud-provider=auto-detect, use --cloud-provider='' to preserve previous default of no cloud provider (#28258, @vishh)
Previous Releases Included in v1.4.0
For a detailed list of all changes that were included in this release, please refer to the following CHANGELOG entries:
AWS: Add ap-south-1 to list of known AWS regions (#28428, @justinsb)
Back porting critical vSphere bug fixes to release 1.3 (#31993, @dagnello)
Back port - Openstack provider allowing more than one service port for lbaas v2 (#32001, @dagnello)
Fix a bug in kubelet hostport logic which flushes KUBE-MARK-MASQ iptables chain (#32413, @freehan)
Fixes the panic that occurs in the federation controller manager when registering a GKE cluster to the federation. Fixes issue #30790. (#30940, @madhusudancs)
Behavior changes caused by enabling the garbage collector
kubectl rolling-update
Old version kubectl's rolling-update command is compatible with Kubernetes 1.4 and higher only if you specify a new replication controller name. You will need to update to kubectl 1.4 or higher to use the rolling update command against a 1.4 cluster if you want to keep the original name, or you'll have to do two rolling updates.
If you do happen to use old version kubectl's rolling update against a 1.4 cluster, it will fail, usually with an error message that will direct you here. If you saw that error, then don't worry, the operation succeeded except for the part where the new replication controller is renamed back to the old name. You can just do another rolling update using kubectl 1.4 or higher to change the name back: look for a replication controller that has the original name plus a random suffix.
Unfortunately, there is a much rarer second possible failure mode: the replication controller gets renamed to the old name, but there is a duplicate set of pods in the cluster. kubectl will not report an error since it thinks its job is done.
If this happens to you, you can wait at most 10 minutes for the replication controller to start a resync; the extra pods will then be deleted. Or, you can manually trigger a resync by changing the replicas in the spec of the replication controller.
kubectl delete
If you use an old version kubectl to delete a replication controller or a replicaset, then after the delete command has returned, the replication controller or the replicaset will continue to exist in the key-value store for a short period of time (<1s). You probably will not notice any difference if you use kubectl manually, but you might notice it if you are using kubectl in a script. To fix it, you can poll the API server to confirm the object is deleted.
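A minimal sketch of such a poll, assuming a hypothetical replication controller name:
while kubectl get replicationcontroller my-rc >/dev/null 2>&1; do sleep 1; done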
DELETE operation in REST API
Replication controller & Replicaset: the DELETE request of a replication controller or a replicaset becomes asynchronous by default. The object will continue to exist in the key-value store for some time. The API server will set its metadata.deletionTimestamp, add the "orphan" finalizer to its metadata.finalizers. The object will be deleted from the key-value store after the garbage collector orphans its dependents. Please refer to this user-guide for more information regarding the garbage collection.
Other objects: no changes unless you explicitly request orphaning.
AWS: Change default networking for kube-up to kubenet (#32239, @zmerlynn)
Make sure finalizers prevent deletion on storage that supports graceful deletion (#32351, @caesarxuchao)
Some components like kube-dns and kube-proxy could fail to load the service account token when started within a pod. Properly handle empty configurations to try loading the service account config. (#31947, @smarterclayton)
Use federated namespace instead of the bootstrap cluster's namespace in Ingress e2e tests. (#32105, @madhusudancs)
The NamespaceExists and NamespaceAutoProvision admission controllers have been removed. (#31250, @derekwaynecarr)
All cluster operators should use NamespaceLifecycle.
Federation binaries and their corresponding docker images - federation-apiserver and federation-controller-manager are now folded in to the hyperkube binary. If you were using one of these binaries or docker images, please switch to using the hyperkube version. Please refer to the federation manifests - federation/manifests/federation-apiserver.yaml and federation/manifests/federation-controller-manager-deployment.yaml for examples. (#29929, @madhusudancs)
Use upgraded container-vm by default on worker nodes for GCE k8s clusters (#31023, @vishh)
Other notable changes
Enable kubelet eviction whenever inodes free is < 5% on GCE (#31545, @vishh)
Some components like kube-dns and kube-proxy could fail to load the service account token when started within a pod. Properly handle empty configurations to try loading the service account config. (#31947, @smarterclayton)
Removed comments in json config when using kubectl edit with -o json (#31685, @jellonek)
fixes invalid null selector issue in sysdig example yaml (#31393, @baldwinSPC)
Rescheduler, which ensures that critical pods are always scheduled, is now enabled by default in GCE. (#31974, @piosz)
Added liveness probe to Heapster service. (#31878, @mksalawa)
Adding clusters to the list of valid resources printed by kubectl help (#31719, @nikhiljindal)
Kubernetes server components using kubeconfig files no longer default to http://localhost:8080. Administrators must specify a server value in their kubeconfig files. (#30808, @smarterclayton)
Include security options in the container created event (#31557, @timstclair)
Federation can now be deployed using the federation/deploy/deploy.sh script. This script does not depend on any of the development environment shell library/scripts. This is an alternative to the current federation-up.sh/federation-down.sh scripts. Both the scripts are going to co-exist in this release, but the federation-up.sh/federation-down.sh scripts might be removed in a future release in favor of federation/deploy/deploy.sh script. (#30744, @madhusudancs)
Add get/delete cluster, delete context to kubectl config (#29821, @alexbrand)
rkt: Force rkt fetch to fetch from remote to conform to the image pull policy. (#31378, @yifan-gu)
Allow services which use same port, different protocol to use the same nodePort for both (#30253, @AdoHe)
Remove environment variables and internal Kubernetes Docker labels from cAdvisor Prometheus metric labels. (#31064, @grobie)
Old behavior:
environment variables explicitly whitelisted via --docker-env-metadata-whitelist were exported as container_env_*=*. Default is zero, so by default none were exported
all docker labels were exported as container_label_*=*
New behavior:
Only container_name, pod_name, namespace, id, image, and name labels are exposed
no environment variables will be exposed ever via /metrics, even if whitelisted
Increase request timeout based on termination grace period (#31275, @dims)
Skip safe to detach check if node API object no longer exists (#30737, @saad-ali)
Nodecontroller doesn't flip readiness on pods if kubeletVersion < 1.2.0 (#30828, @bprashanth)
Update cadvisor to v0.23.9 to fix a problem where attempting to gather container filesystem usage statistics could result in corrupted devicemapper thin pool storage for Docker. (#30307, @sjenning)
Moved init-container feature from alpha to beta. (#31026, @erictune)
Security Action Required:
This only applies to you if you use the PodSecurityPolicy feature. You are using that feature if kubectl get podsecuritypolicy returns one or more objects. If it returns an error, you are not using it.
If there are any pods with the key pods.beta.kubernetes.io/init-containers, then that pod may not have been filtered by the PodSecurityPolicy. You should find such pods and either delete them or audit them to ensure they do not use features that you intend to be blocked by PodSecurityPolicy.
Explanation of Feature
In 1.3, an init container is specified with this annotation key
on the pod or pod template: pods.alpha.kubernetes.io/init-containers.
In 1.4, either that key or this key: pods.beta.kubernetes.io/init-containers,
can be used.
When you GET an object, you will see both annotation keys with the same values.
You can safely roll back from 1.4 to 1.3, and things with init-containers
will still work (pods, deployments, etc).
If you are running 1.3, only use the alpha annotation, or it may be lost when
rolling forward.
The status has moved from annotation key
pods.alpha.kubernetes.io/init-container-statuses to
pods.beta.kubernetes.io/init-container-statuses.
Any code that inspects this annotation should be changed to use the new key.
State of Initialization will continue to be reported in both pods.alpha.kubernetes.io/initialized
and in podStatus.conditions.{status: "True", type: Initialized}
Action required: federation-only: Please update your cluster name to be a valid DNS label. (#30956, @nikhiljindal)
Updating federation.v1beta1.Cluster API to disallow subdomains as valid cluster names. Only DNS labels are allowed as valid cluster names now.
[Kubelet] Rename --config to --pod-manifest-path. --config is deprecated. (#29999, @mtaufen)
Other notable changes
rkt: Improve support for privileged pod (pod whose all containers are privileged) (#31286, @yifan-gu)
The pod annotation security.alpha.kubernetes.io/sysctls now allows customization of namespaced and well isolated kernel parameters (sysctls), starting with kernel.shm_rmid_forced, net.ipv4.ip_local_port_range and net.ipv4.tcp_syncookies for Kubernetes 1.4. (#27180, @sttts)
The pod annotation security.alpha.kubernetes.io/unsafe-sysctls allows customization of namespaced sysctls where isolation is unclear. Unsafe sysctls must be enabled at-your-own-risk on the kubelet with the --experimental-allowed-unsafe-sysctls flag. Future versions will improve on resource isolation and more sysctls will be considered safe.
Increase request timeout based on termination grace period (#31275, @dims)
Fixed two issues of kubectl bash completion. (#31135, @xingzhou)
Action required: If you have a running federation control plane, you will have to ensure that for all federation resources, the corresponding namespace exists in federation control plane. (#31139, @nikhiljindal)
federation-apiserver now supports NamespaceLifecycle admission control, which is enabled by default. Set the --admission-control flag on the server to change that.
The implicit registration of Prometheus metrics for request count and latency have been removed, and a plug-able interface was added. If you were using our client libraries in your own binaries and want these metrics, add the following to your imports in the main package: "k8s.io/pkg/client/metrics/prometheus". (#30638, @krousey)
Add support for --image-pull-policy to 'kubectl run' (#30614, @AdoHe)
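For example (using the public nginx image):
kubectl run nginx --image=nginx --image-pull-policy=IfNotPresent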
x509 authenticator: get groups from subject's organization field (#30392, @ericchiang)
Add initial support for TokenFile to the client config file. (#29696, @brendandburns)
update kubectl help output for better organization (#25524, @AdoHe)
Implement TLS bootstrap for kubelet using --experimental-bootstrap-kubeconfig (2nd take) (#30922, @yifan-gu)
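A sketch of the kubelet invocation, assuming hypothetical file paths:
kubelet --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig   # plus your existing kubelet flags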
rkt: Support subPath volume mounts feature (#30934, @yifan-gu)
Return container command exit codes in kubectl run/exec (#26541, @sttts)
Fix kubectl describe to display a container's resource limit env vars as node allocatable when the limits are not set (#29849, @aveshagarwal)
The valueFrom.fieldRef.name field on environment variables in pods and objects with pod templates now allows two additional fields to be used: (#27880, @smarterclayton)
* spec.nodeName will return the name of the node this pod is running on
* spec.serviceAccountName will return the name of the service account this pod is running under
Add Events for operation_executor to show status of mounts, failed/successful to show in describe events (#27778, @screeley44)
Alpha support for OpenAPI (aka. Swagger 2.0) specification served on /swagger.json (enabled by default) (#30233, @mbohlool)
Disable linux/ppc64le compilation by default (#30659, @ixdy)
Implement dynamic provisioning (beta) of PersistentVolumes via StorageClass (#29006, @jsafrane)
Allow setting permission mode bits on secrets, configmaps and downwardAPI files (#28936, @rata)
Skip safe to detach check if node API object no longer exists (#30737, @saad-ali)
The Kubelet now supports the --require-kubeconfig option which reads all client config from the provided --kubeconfig file and will cause the Kubelet to exit with error code 1 on error. It also forces the Kubelet to use the server URL from the kubeconfig file rather than the --api-servers flag. Without this flag set, a failure to read the kubeconfig file would only result in a warning message. (#30798, @smarterclayton)
In a future release, this flag will default to true.
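A sketch of the option, assuming a hypothetical kubeconfig path:
kubelet --require-kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig   # plus your existing kubelet flags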
Set pod state as "unknown" when CNI plugin fails (#30137, @nhlfr)
Cluster Federation components can now be built and deployed using the make command. Please see federation/README.md for details. (#29515, @madhusudancs)
Modified influxdb petset to provision persistent volume. (#28840, @jszczepkowski)
Allow service names up to 63 characters (RFC 1035) (#29523, @fraenkel)
Change eviction policies in NodeController: (#28897, @gmarek)
add a "partialDisruption" mode, when more than 33% of Nodes in the zone are not Ready
add "fullDisruption" mode, when all Nodes in the zone are not Ready
Eviction behavior depends on the mode in which NodeController is operating:
if the new state is "partialDisruption" or "fullDisruption" we call a user defined function that returns a new QPS to use (default 1/10 of the default rate, and the default rate respectively),
if the new state is "normal" we resume normal operation (go back to default limiter settings),
if all zones in the cluster are in "fullDisruption" state we stop all evictions.
Add a flag for kubectl expose to set ClusterIP and allow headless services (#28239, @ApsOps)
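For example (hypothetical replication controller name), either of the following can be requested at expose time:
kubectl expose rc my-app --port=80 --cluster-ip=None        # headless service
kubectl expose rc my-app --port=80 --cluster-ip=10.0.0.50   # alternative: a specific ClusterIP within the service CIDR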
Federation API server kubeconfig secret consumed by federation-controller-manager has a new name. (#28938, @madhusudancs)
If you are upgrading your Cluster Federation components from v1.3.x, please run this command to migrate the federation-apiserver-secret to the federation-apiserver-kubeconfig secret:
$ kubectl --namespace=federation get secret federation-apiserver-secret -o json | sed 's/federation-apiserver-secret/federation-apiserver-kubeconfig/g' | kubectl create -f -
You might also want to delete the old secret using this command:
$ kubectl delete secret --namespace=federation federation-apiserver-secret
If a service of type node port declares multiple ports, quota on "services.nodeports" will charge for each port in the service. (#29457, @derekwaynecarr)
Change setting "kubectl --record=false" to stop updating the change-cause when a previous change-cause is found. (#28234, @damemi)
Add "kubectl --overwrite" flag to automatically resolve conflicts between the modified and live configuration using values from the modified configuration. (#26136, @AdoHe)
Make discovery summarizer call servers in parallel (#26705, @nebril)
Don't recreate lb cloud resources on kcm restart (#29082, @bprashanth)
List all nodes and occupy cidr map before starting allocations (#29062, @bprashanth)
An alpha implementation of the TLS bootstrap API described in docs/proposals/kubelet-tls-bootstrap.md. (#25562, @gtank)
Action Required
[kubelet] Allow opting out of automatic cloud provider detection in kubelet. By default kubelet will auto-detect cloud providers (#28258, @vishh)
If you use one of the kube-dns replication controller manifests in cluster/saltbase/salt/kube-dns, i.e. cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}, either substitute one of __PILLAR__FEDERATIONS__DOMAIN__MAP__ or {{ pillar['federations_domain_map'] }} with the corresponding federation name to domain name value or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for {{ pillar['federations_domain_map'] }} (#28132, @madhusudancs)
Adding loadBalancer services and nodeports services to quota system
Known Issues and Important Steps before Upgrading
The following versions of Docker Engine are supported - v1.10, v1.11
Although v1.9 is still compatible, we recommend upgrading to one of the supported versions.
Earlier versions of Docker are not supported.
ThirdPartyResource
If you use ThirdPartyResource objects, they have moved from being namespaced-scoped to be cluster-scoped. Before upgrading to 1.3.0, export and delete any existing ThirdPartyResource objects using a 1.2.x client:
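A minimal sketch of that export-and-delete, assuming a hypothetical backup file name and a 1.2.x kubectl:
kubectl get thirdpartyresources --all-namespaces -o yaml > tpr-backup.yaml   # export first
kubectl delete -f tpr-backup.yaml                                            # then delete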
[kubelet] Allow opting out of automatic cloud provider detection in kubelet. By default kubelet will auto-detect cloud providers (#28258, @vishh)
If you use one of the kube-dns replication controller manifests in cluster/saltbase/salt/kube-dns, i.e. cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}, either substitute one of __PILLAR__FEDERATIONS__DOMAIN__MAP__ or {{ pillar['federations_domain_map'] }} with the corresponding federation name to domain name value or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for {{ pillar['federations_domain_map'] }} (#28132, @madhusudancs)
Init containers enable pod authors to perform tasks before their normal containers start. Each init container is started in order, and failing containers will prevent the application from starting. (#23666, @smarterclayton)
Other notable changes
GCE provider: Limit Filter calls to regexps rather than large blobs (#27741, @zmerlynn)
Show LASTSEEN, the sorting key, as the first column in kubectl get event output (#27549, @therc)
Change default value of deleting-pods-burst to 1 (#27422, @gmarek)
A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if attach/detach controller is not enabled). (#26801, @saad-ali)
This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the syncPod() path so volume clean up never blocks the syncPod loop.
This fixes environments where CPU and memory accounting were not enabled on the unit that launched the kubelet or docker, which previously caused the root cgroup to be reported when monitoring usage stats for those components.
New default horizontalpodautoscaler/v1 generator for kubectl autoscale. (#26775, @piosz)
Use autoscaling/v1 in kubectl by default.
federation: Adding dnsprovider flags to federation-controller-manager (#27158, @nikhiljindal)
federation service controller: fixing a bug so that existing services are created in newly registered clusters (#27028, @mfanjie)
Rename environment variables (KUBE_)ENABLE_NODE_AUTOSCALER to (KUBE_)ENABLE_CLUSTER_AUTOSCALER. (#27117, @mwielgus)
Kubernetes v1.3 introduces a new Attach/Detach Controller. This controller manages attaching and detaching of volumes on behalf of nodes. (#26351, @saad-ali)
This ensures that attachment and detachment of volumes is independent of any single node's availability. Meaning, if a node or kubelet becomes unavailable for any reason, the volumes attached to that node will be detached so they are free to be attached to other nodes.
Specifically the new controller watches the API server for scheduled pods. It processes each pod and ensures that any volumes that implement the volume Attacher interface are attached to the node their pod is scheduled to.
When a pod is deleted, the controller waits for the volume to be safely unmounted by kubelet. It does this by waiting for the volume to no longer be present in the node's Node.Status.VolumesInUse list. If the volume is not safely unmounted by kubelet within a pre-configured duration (3 minutes in Kubernetes v1.3), the controller unilaterally detaches the volume (this prevents volumes from getting stranded on nodes that become unavailable).
In order to remain backwards compatible, the new controller only manages attach/detach of volumes that are scheduled to nodes that opt-in to controller management. Nodes running v1.3 or higher of Kubernetes opt-in to controller management by default by setting the "volumes.kubernetes.io/controller-managed-attach-detach" annotation on the Node object on startup. This behavior is gated by a new kubelet flag, "enable-controller-attach-detach" (default true).
In order to safely upgrade an existing Kubernetes cluster without interruption of volume attach/detach logic:
First upgrade the master to Kubernetes v1.3.
This will start the new attach/detach controller.
The new controller will initially ignore volumes for all nodes since they lack the "volumes.kubernetes.io/controller-managed-attach-detach" annotation.
Then upgrade nodes to Kubernetes v1.3.
As nodes are upgraded, they will automatically, by default, opt-in to attach/detach controller management, which will cause the controller to start managing attaches/detaches for volumes that get scheduled to those nodes.
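For example, to check whether an upgraded node has opted in to controller management (node name is a placeholder):
$ kubectl get node my-node-1 -o yaml | grep controller-managed-attach-detach
# To opt a node out, start its kubelet with --enable-controller-attach-detach=false.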
Move shell completion generation into 'kubectl completion' command (#23801, @sttts)
Fix strategic merge diff list diff bug (#26418, @AdoHe)
Setting TLS1.2 minimum because TLS1.0 and TLS1.1 are vulnerable (#26169, @victorgp)
Kubelet: Periodically reporting image pulling progress in log (#26145, @Random-Liu)
The federation service controller is one key component of the federation controller manager; it watches federation services, creates/updates services in all registered clusters, and updates DNS records in the global DNS server. (#26034, @mfanjie)
With this PR, kubectl and other RestClients using the AuthProvider framework can make OIDC-authenticated requests, and, if a refresh token is present, the tokens will be refreshed as needed. (#25270, @bobbyrullo)
Make addon-manager cross-platform and use it with hyperkube (#25631, @luxas)
kubelet: Optionally, have kubelet exit if lock file contention is observed, using --exit-on-lock-contention flag (#25596, @derekparker)
kubectl "rm" will suggest using "delete"; "ps" and "list" will suggest "get". (#25181, @janetkuo)
Add IPv6 address support for pods - does NOT include services (#23090, @tgraf)
Use local disk for ConfigMap volume instead of tmpfs (#25306, @pmorie)
Alpha support for scheduling pods on machines with NVIDIA GPUs whose kubelets use the --experimental-nvidia-gpus flag, using the alpha.kubernetes.io/nvidia-gpu resource (#24836, @therc)
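A hedged sketch of requesting a GPU (assumes a node whose kubelet was started with --experimental-nvidia-gpus=1; the image is a placeholder):
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  containers:
  - name: cuda
    image: nvidia/cuda
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
EOF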
AWS: SSL support for ELB listeners through annotations (#23495, @therc)
Implement kubectl rollout status that can be used to watch a deployment's rollout status (#19946, @janetkuo)
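For example (deployment name is a placeholder):
$ kubectl rollout status deployment/my-app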
Significant scale improvements. Increased cluster scale by 400% to 1000 nodes with 30,000 pods per cluster.
Kubelet supports 100 pods per node with 4x reduced system overhead.
Simplified application deployment and management.
Dynamic Configuration (ConfigMap API in the core API group) enables application
configuration to be stored as a Kubernetes API object and pulled dynamically on
container startup, as an alternative to baking in command-line flags when a
container is built.
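As a brief, hedged illustration (map name and key are placeholders), configuration can be created and inspected with kubectl; containers can then pull individual keys via configMapKeyRef env entries or mount the map as a volume:
$ kubectl create configmap example-config --from-literal=log-level=debug
$ kubectl get configmap example-config -o yaml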
Turnkey Deployments (Deployment API (Beta) in the Extensions API group)
automate deployment and rolling updates of applications, specified
declaratively. It handles versioning, multiple simultaneous rollouts,
aggregating status across all pods, maintaining application availability, and
rollback.
Automated cluster management:
Kubernetes clusters can now span zones within a cloud provider. Pods from a
service will be automatically spread across zones, enabling applications to
tolerate zone failure.
Simplified way to run a container on every node (DaemonSet API (Beta) in the
Extensions API group): Kubernetes can schedule a service (such as a logging
agent) that runs one, and only one, pod per node.
TLS and L7 support (Ingress API (Beta) in the Extensions API group): Kubernetes
is now easier to integrate into custom networking environments by supporting
TLS for secure communication and L7 http-based traffic routing.
Graceful Node Shutdown (aka drain) - The new “kubectl drain” command gracefully
evicts pods from nodes in preparation for disruptive operations like kernel
upgrades or maintenance.
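For example (node name is a placeholder):
$ kubectl drain my-node-1      # evict pods and mark the node unschedulable
$ # ... perform the kernel upgrade or other maintenance ...
$ kubectl uncordon my-node-1   # make the node schedulable again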
Custom Metrics for Autoscaling (HorizontalPodAutoscaler API in the Autoscaling
API group): The Horizontal Pod Autoscaling feature now supports custom metrics
(Alpha), allowing you to specify application-level metrics and thresholds to
trigger scaling up and down the number of pods in your application.
New GUI (dashboard) allows you to get started quickly and enables the same
functionality found in the CLI as a more approachable and discoverable way of
interacting with the system. Note: the GUI is enabled by default in 1.2 clusters.
Job is now available as apiVersion: batch/v1. The previous version, apiVersion: extensions/v1beta1, is still supported. Even if you roll back to 1.1, the objects created using
the new apiVersion will still be accessible, using the old version. You can
continue to use your existing JSON and YAML files until you are ready to switch
to batch/v1. We may remove support for Jobs with apiVersion: extensions/v1beta1 in 1.3 or 1.4.
HorizontalPodAutoscaler was Beta in 1.1 and is GA in 1.2.
apiVersion: autoscaling/v1 is now available. Changes in this version are:
Field CPUUtilization, which was a nested structure CPUTargetUtilization in
HorizontalPodAutoscalerSpec, was replaced by TargetCPUUtilizationPercentage,
which is an integer.
ScaleRef of type SubresourceReference in HorizontalPodAutoscalerSpec, which
referred to the scale subresource of the resource being scaled, was replaced by
ScaleTargetRef, which points just to the resource being scaled.
In extensions/v1beta1, if CPUUtilization in HorizontalPodAutoscalerSpec was not
specified, it was set to 80 by default, while in autoscaling/v1 an HPA object
without TargetCPUUtilizationPercentage specified is a valid object. The pod
autoscaler controller will apply a default scaling policy in this case, which is
equivalent to the previous one but may change in the future.
The previous version, apiVersion: extensions/v1beta1, is still supported. Even if you roll back to 1.1, the objects created using
the new apiVersions will still be accessible, using the old version. You can
continue to use your existing JSON and YAML files until you are ready to switch
to autoscaling/v1. We may remove support for HorizontalPodAutoscalers with apiVersion: extensions/v1beta1 in 1.3 or 1.4.
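A hedged sketch of an autoscaling/v1 object using the renamed fields described above (the target Deployment name is a placeholder):
$ kubectl create -f - <<'EOF'
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
EOF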
Kube-Proxy now defaults to an iptables-based proxy. If the --proxy-mode flag is
specified while starting kube-proxy (‘userspace’ or ‘iptables’), the flag value
will be respected. If the flag value is not specified, the kube-proxy respects
the Node object annotation: ‘net.beta.kubernetes.io/proxy-mode’. If the
annotation is not specified, then ‘iptables’ mode is the default. If kube-proxy
is unable to start in iptables mode because system requirements are not met
(kernel or iptables versions are insufficient), the kube-proxy will fall-back
to userspace mode. Kube-proxy is much more performant and less
resource-intensive in ‘iptables’ mode.
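For example, a node can be pinned to the userspace proxy with the annotation named above, or the mode can be forced explicitly on the proxy itself (node name is a placeholder; the annotation is read when kube-proxy starts without an explicit flag):
$ kubectl annotate node my-node-1 net.beta.kubernetes.io/proxy-mode=userspace
$ kube-proxy --proxy-mode=iptables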
Node stability can be improved by reserving resources for the base operating system using --system-reserved and --kube-reserved Kubelet flags
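A hedged sketch of those flags (the values are placeholders and should be sized for your OS daemons and Kubernetes components):
$ kubelet --system-reserved=cpu=200m,memory=512Mi --kube-reserved=cpu=200m,memory=512Mi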
Liveness and readiness probes now support more configuration parameters:
periodSeconds, successThreshold, failureThreshold
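A hedged sketch of a readiness probe using the new parameters (image and endpoint are placeholders):
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5        # probe every 5 seconds
      successThreshold: 1     # one success marks the container ready again
      failureThreshold: 3     # three consecutive failures mark it unready
EOF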
The new ReplicaSet API (Beta) in the Extensions API group is similar to
ReplicationController, but its selector is more general (supports set-based selector; whereas ReplicationController
only supports equality-based selector).
Scale subresource support is now expanded to ReplicaSets along with
ReplicationControllers and Deployments. Scale now supports two different types
of selectors to accommodate both equality-based selectors supported by ReplicationControllers and set-based selectors supported by Deployments and ReplicaSets.
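A hedged sketch of a ReplicaSet with a set-based selector (labels and image are placeholders):
$ kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: web
        image: nginx
EOF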
“kubectl run” now produces Deployments (instead of ReplicationControllers) and
Jobs (instead of Pods) by default.
Pods can now consume Secret data in environment variables and inject those
environment variables into a container’s command-line args.
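A hedged sketch (secret, key, and image names are placeholders): the secret key becomes an environment variable, which is then expanded in the container's args via the $(VAR) syntax:
$ kubectl create secret generic db-creds --from-literal=password=s3cr3t
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c"]
    args: ["echo password is $(DB_PASSWORD) && sleep 3600"]
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-creds
          key: password
EOF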
Stable version of Heapster which scales up to 1000 nodes: more metrics, reduced
latency, reduced cpu/memory consumption (~4mb per monitored node).
Pods now have a security context which allows users to specify:
attributes which apply to the whole pod:
User ID
Whether all containers should be non-root
Supplemental Groups
FSGroup - a special supplemental group
SELinux options
If a pod defines an FSGroup, that Pod’s system (emptyDir, secret, configMap,
etc) volumes and block-device volumes will be owned by the FSGroup, and each
container in the pod will run with the FSGroup as a supplemental group
Volumes that support SELinux labelling are now automatically relabeled with the
Pod’s SELinux context, if specified
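A hedged sketch of a pod-level security context exercising these fields (all values are placeholders):
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000              # UID for all containers in the pod
    runAsNonRoot: true           # refuse to start containers that would run as root
    supplementalGroups: [2000]   # extra groups applied to each container
    fsGroup: 3000                # owning group for pod system volumes and block devices
    seLinuxOptions:
      level: "s0:c123,c456"
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id && sleep 3600"]
EOF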
A stable client library release_1_2 is added. The library is here, and detailed doc is here. We will keep the interface of this go client stable.
New Azure File Service Volume Plugin enables mounting Microsoft Azure File
Volumes (SMB 2.1 and 3.0) into a Pod. See example for details.
Log usage and root filesystem usage of a container, volume usage of a pod, and node disk usage are exposed through the new Kubelet metrics API.
Experimental Features
Dynamic Provisioning of PersistentVolumes: Kubernetes previously required all
volumes to be manually provisioned by a cluster administrator before use. With
this feature, volume plugins that support it (GCE PD, AWS EBS, and Cinder) can
automatically provision a PersistentVolume to bind to an unfulfilled
PersistentVolumeClaim.
Run multiple schedulers in parallel, e.g. one or more custom schedulers
alongside the default Kubernetes scheduler, using pod annotations to select
among the schedulers for each pod. Documentation is here, design doc is here.
More expressive node affinity syntax, and support for “soft” node affinity.
Node selectors (to constrain pods to schedule on a subset of nodes) now support
the operators {In, NotIn, Exists, DoesNotExist, Gt, Lt} instead of just conjunction of exact match on node label values. In
addition, we’ve introduced a new “soft” kind of node selector that is just a
hint to the scheduler; the scheduler will try to satisfy these requests but it
does not guarantee they will be satisfied. Both the “hard” and “soft” variants
of node affinity use the new syntax. Documentation is here (see section “Alpha feature in Kubernetes v1.2: Node Affinity“). Design doc is here.
A pod can specify its own Hostname and Subdomain via annotations (pod.beta.kubernetes.io/hostname, pod.beta.kubernetes.io/subdomain). If the Subdomain matches the name of a headless service in the same namespace, a DNS A record is also created for the pod’s FQDN. More
details can be found in the DNS README. Changes were introduced in PR #20688.
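For example (names are placeholders; a headless service named default-subdomain must exist in the same namespace for the FQDN record to be created):
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  annotations:
    pod.beta.kubernetes.io/hostname: busybox-1
    pod.beta.kubernetes.io/subdomain: default-subdomain
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF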
New SchedulerExtender enables users to implement custom
out-of-(the-scheduler)-process scheduling predicates and priority functions,
for example to schedule pods based on resources that are not directly managed
by Kubernetes. Changes were introduced in PR #13580. Example configuration and documentation is available here. This is an alpha feature and may not be supported in its current form at beta
or GA.
New Flex Volume Plugin enables users to use out-of-process volume plugins that
are installed to “/usr/libexec/kubernetes/kubelet-plugins/volume/exec/” on
every node, instead of being compiled into the Kubernetes binary. See example for details.
This plugin allows vendor-provided volumes to be mounted into a pod. It expects vendor drivers to be installed in the
volume plugin path on each kubelet node. This is an alpha feature and may
change in future.
Kubelet exposes a new Alpha metrics API - /stats/summary in a user friendly format with reduced system overhead. The measurement is done in PR #22542.
Action required
Docker v1.9.1 is officially recommended. Docker v1.8.3 and Docker v1.10 are
supported. If you are using an older release of Docker, please upgrade. Known
issues with Docker 1.9.1 can be found below.
CPU hardcapping will be enabled by default for containers with CPU limit set,
if supported by the kernel. You should either adjust your CPU limit, or set CPU
request only, if you want to avoid hardcapping. If the kernel does not support
CPU Quota, NodeStatus will contain a warning indicating that CPU Limits cannot
be enforced.
The following applies only if you use the Go language client (/pkg/client/unversioned) to create Job by defining Go variables of type "k8s.io/kubernetes/pkg/apis/extensions".Job. We think this is not common, so if you are not sure what this means, you probably aren't doing this. If
you do this, then, at the time you re-vendor the "k8s.io/kubernetes/" code, you will need to set job.Spec.ManualSelector = true, or else set job.Spec.Selector = nil. Otherwise, the jobs you create may be rejected. See Specifying your own pod selector.
Deployment was Alpha in 1.1 (though it had apiVersion extensions/v1beta1) and
was disabled by default. Due to some non-backward-compatible API changes, any
Deployment objects you created in 1.1 won't work in the 1.2 release.
Before upgrading to 1.2, delete all Deployment alpha-version resources, including the Replication Controllers and Pods the Deployment manages. Then
create Deployment Beta resources after upgrading to 1.2. Not deleting the
Deployment objects may cause the deployment controller to mistakenly match
other pods and delete them, due to the selector API change.
Client (kubectl) and server versions must match (both 1.1 or both 1.2) for any
Deployment-related operations.
Behavior change:
Deployment creates ReplicaSets instead of ReplicationControllers.
Scale subresource now has a new targetSelector field in its status. This field supports the new set-based selectors supported
by Deployments, but in a serialized format.
Spec change:
Deployment’s selector is now more general (supports set-based selector; it only supported
equality-based selector in 1.1).
.spec.uniqueLabelKey is removed -- users can’t customize unique label key --
and its default value is changed from
“deployment.kubernetes.io/podTemplateHash” to “pod-template-hash”.
.spec.strategy.rollingUpdate.minReadySeconds is moved to .spec.minReadySeconds
DaemonSet was Alpha in 1.1 (though it had apiVersion extensions/v1beta1) and
was disabled by default. Due to some non-backward-compatible API changes, any
DaemonSet objects you created in 1.1 won't work in the 1.2 release.
Before upgrading to 1.2, delete all DaemonSet alpha-version resources. If you do not want to disrupt the pods, use kubectl delete daemonset
--cascade=false. Then create DaemonSet Beta resources after upgrading to 1.2.
Client (kubectl) and server versions must match (both 1.1 or both 1.2) for any
DaemonSet-related operations.
Behavior change:
DaemonSet pods will be created on nodes with .spec.unschedulable=true and will
not be evicted from nodes whose Ready condition is false.
Updates to the pod template are now permitted. To perform a rolling update of a
DaemonSet, update the pod template and then delete its pods one by one; they
will be replaced using the updated template.
Spec change:
DaemonSet’s selector is now more general (supports set-based selector; it only supported
equality-based selector in 1.1).
Running against a secured etcd requires these flags to be passed to
kube-apiserver (instead of --etcd-config):
--etcd-certfile, --etcd-keyfile (if using client cert auth)
--etcd-cafile (if not using system roots)
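A hedged sketch of the relevant flags (endpoint and paths are placeholders; the rest of the usual apiserver flags are omitted):
$ kube-apiserver \
    --etcd-servers=https://etcd.example.com:2379 \
    --etcd-certfile=/etc/kubernetes/pki/etcd-client.crt \
    --etcd-keyfile=/etc/kubernetes/pki/etcd-client.key \
    --etcd-cafile=/etc/kubernetes/pki/etcd-ca.crt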
As part of preparation in 1.2 for adding support for protocol buffers (and the
direct YAML support in the API available today), the Content-Type and Accept
headers are now properly handled as per the HTTP spec. As a consequence, if
you had a client that was sending an invalid Content-Type or Accept header to
the API, in 1.2 you will either receive a 415 or 406 error.
The only client known to be affected is curl: if you use -d with JSON but
don't set a content type, curl helpfully sends "application/x-www-form-urlencoded",
which is not correct.
Other client authors should double-check that you are sending proper
Accept and Content-Type headers, or set no value (in which case JSON is the
default).
An example using curl:
curl -H "Content-Type: application/json" -XPOST -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://127.0.0.1:8080/api/v1/namespaces"
The version of InfluxDB is bumped from 0.8 to 0.9 which means storage schema
change. More details here.
We have renamed “minions” to “nodes”. If you were specifying NUM_MINIONS or
MINION_SIZE to kube-up, you should now specify NUM_NODES or NODE_SIZE.
Known Issues
Paused deployments can't be resized and don't clean up old ReplicaSets.
Minimum memory limit is 4MB; this is a Docker limitation.
Minimum CPU limit is 10m; this is a Linux kernel limitation.
“kubectl rollout undo” (i.e. rollback) will hang on paused deployments, because
paused deployments can’t be rolled back (this is expected), and the command
waits for rollback events to return the result. Users should use “kubectl
rollout resume” to resume a deployment before rolling back.
“kubectl edit <list of resources>” will open the editor multiple times, once for each
resource in the list.
If you create HPA object using autoscaling/v1 API without specifying
targetCPUUtilizationPercentage and read it using kubectl it will print default
value as specified in extensions/v1beta1 (see details in #23196).
If a node or kubelet crashes with a volume attached, the volume will remain
attached to that node. If that volume can only be attached to one node at a
time (GCE PDs attached in RW mode, for example), then the volume must be
manually detached before Kubernetes can attach it to other nodes.
If a volume is already attached to a node any subsequent attempts to attach it
again (due to kubelet restart, for example) will fail. The volume must either
be manually detached first or the pods referencing it deleted (which would
trigger automatic volume detach).
In very large clusters it may happen that a few nodes don't register with the
API server in a given timeframe for whatever reason (networking issue, machine
failure, etc.). Normally, when the kube-up script encounters even one NotReady
node it fails, even though the cluster most likely will be working. We added an
environment variable to kube-up, ALLOWED_NOTREADY_NODES, that defines the
number of nodes that, if not Ready in time, won't cause kube-up to fail.
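For example (assuming the standard kube-up entry point):
$ ALLOWED_NOTREADY_NODES=3 ./cluster/kube-up.sh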
“kubectl rolling-update” only supports Replication Controllers (it doesn't
support Replica Sets). If you want to perform rolling updates of Replica Sets,
it's recommended to use Deployment in 1.2 with the “kubectl rollout” commands instead.
When live upgrading Kubelet to 1.2 without draining the pods running on the node,
the containers will be restarted by Kubelet (see details in #23104).
Docker Known Issues
1.9.1
Listing containers can be slow at times which will affect kubelet performance.
More information here
Docker daemon restarts can fail. Docker checkpoints have to be deleted between
restarts. More information here
Pod IP allocation-related issues. Deleting the docker checkpoint prior to
restarting the daemon alleviates this issue, but hasn’t been verified to
completely eliminate the IP allocation issue. More information here
Daemon becomes unresponsive (rarely) due to kernel deadlocks. More information here
Provider-specific Notes
Various
Core changes:
Support for load balancers with source ranges
AWS
Core changes:
Support for ELBs with complex configurations: better subnet selection with
multiple subnets, and internal ELBs
Support for VPCs with private DNS names
Multiple fixes to EBS volume mounting code for robustness, and to support
mounting the full number of AWS recommended volumes.
Multiple fixes to avoid hitting AWS rate limits, and to throttle if we do
Support for the EC2 Container Registry (currently in us-east-1 only)
With kube-up:
Automatically install updates on boot & reboot
Use optimized image based on Jessie by default
Add support for Ubuntu Wily
Master is configured with automatic restart-on-failure, via CloudWatch
Bootstrap reworked to be more similar to GCE; better supports reboots/restarts
Use an elastic IP for the master by default
Experimental support for node spot instances (set NODE_SPOT_PRICE=0.05)