Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
The k8s.gcr.io image registry will be redirected to registry.k8s.io on Monday, March 20th.
All images available in k8s.gcr.io are available at registry.k8s.io.
Please read the announcement for more details.
This guide also provides instructions on how to identify images to mirror and how to use mirrored images.
This book documents Cluster API v1.7. For other Cluster API versions, please see the corresponding documentation.
Kubernetes is a complex system that relies on several components being configured correctly to have a working cluster. Recognizing this as a potential stumbling block for users, the community focused on simplifying the bootstrapping process. Today, over 100 Kubernetes distributions and installers have been created, each with different default configurations for clusters and supported infrastructure providers. SIG Cluster Lifecycle saw a need for a single tool to address a set of common overlapping installation concerns and started kubeadm.
Kubeadm was designed as a focused tool for bootstrapping a best-practices Kubernetes cluster. The core tenet behind the kubeadm project was to create a tool that other installers can leverage, ultimately reducing the amount of configuration that an individual installer needs to maintain. Since it began, kubeadm has become the underlying bootstrapping tool for several other applications, including Kubespray, minikube, and kind.
However, while kubeadm and other bootstrap providers reduce installation complexity, they don’t address how to manage a cluster day-to-day or a Kubernetes environment long term. You are still faced with several questions when setting up a production environment, including:
How can I consistently provision machines, load balancers, VPCs, etc., across multiple infrastructure providers and locations?
How can I automate cluster lifecycle management, including things like upgrades and cluster deletion?
How can I scale these processes to manage any number of clusters?
SIG Cluster Lifecycle began the Cluster API project as a way to address these gaps by building declarative, Kubernetes-style APIs that automate cluster creation, configuration, and management. Using this model, Cluster API can also be extended to support any infrastructure provider (AWS, Azure, vSphere, etc.) or bootstrap provider (kubeadm is the default) you need. See the growing list of available providers.
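As a sketch of what this declarative model looks like, a minimal Cluster object references an infrastructure resource and a control plane resource that the Cluster API controllers reconcile into a running cluster. The resource names and the choice of the Docker (CAPD) infrastructure provider below are illustrative assumptions, not prescriptions; real manifests are usually generated by `clusterctl` for your provider:

```yaml
# A minimal Cluster manifest (Cluster API v1beta1 API group).
# The names and the Docker (CAPD) infrastructure kinds are example
# choices; substitute the kinds from your own provider.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: my-cluster
```

Applying such a manifest (e.g. with `kubectl apply -f`) asks the management cluster's controllers to create and maintain the workload cluster; deleting the Cluster object tears it down, which is what "declarative lifecycle management" means in practice.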
The goals of Cluster API are:

To manage the lifecycle (create, scale, upgrade, destroy) of Kubernetes-conformant clusters using a declarative API.
To work in different environments, both on-premises and in the cloud.
To define common operations, provide a default implementation, and provide the ability to swap out implementations for alternative ones.
To reuse and integrate existing ecosystem components rather than duplicating their functionality (e.g. node-problem-detector, cluster autoscaler, SIG-Multi-cluster).
To provide a transition path for Kubernetes lifecycle products to adopt Cluster API incrementally. Specifically, existing cluster lifecycle management tools should be able to adopt Cluster API in a staged manner, over the course of multiple releases, or even adopt only a subset of Cluster API.
Cluster API's non-goals are:

To add these APIs to Kubernetes core (kubernetes/kubernetes).
This API should live in a namespace outside the core and follow the best practices defined by api-reviewers, but it is not subject to core-API constraints.
To manage the lifecycle of infrastructure unrelated to the running of Kubernetes-conformant clusters.
To force all Kubernetes lifecycle products (kOps, Kubespray, GKE, AKS, EKS, IKS, etc.) to support or use these APIs.
To manage non-Cluster API provisioned Kubernetes-conformant clusters.
To manage a single cluster spanning multiple infrastructure providers.
To configure a machine at any time other than creation or upgrade.
To duplicate functionality that exists or is coming to other tooling, e.g., updating kubelet configuration (cf. dynamic kubelet configuration) or updating apiserver, controller-manager, and scheduler configuration (cf. the component-config effort) after the cluster is deployed.
Cluster API is developed in the open, and is constantly being improved by our users, contributors, and maintainers. It is because of you that we are able to automate cluster lifecycle management for the community. Join us!
If you have questions or want to get the latest project news, you can connect with us in the following ways:
Chat with us on the Kubernetes Slack in the #cluster-api channel
Subscribe to the SIG Cluster Lifecycle Google Group for access to documents and calendars
Join our Cluster API working group sessions, where we share the latest project news and demos, answer questions, and triage issues
Pull Requests and feedback on issues are very welcome!
See the issue tracker if you're unsure where to start, especially the Good first issue and Help wanted labels, and feel free to reach out to discuss.
See also our contributor guide and the Kubernetes community page for more details on how to get involved.
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.