Let’s talk about CRI vs CRI-containerd…
Container Runtime Interface (a.k.a. CRI) is a standard way to integrate container runtimes with Kubernetes. It is a new plugin interface that enables the kubelet to use a wide variety of container runtimes without the need to recompile. Prior to CRI, container runtimes (e.g., Docker, rkt) were integrated with the kubelet by implementing an internal, high-level interface inside the kubelet. Formerly known as OCID, CRI-O is one such CRI implementation, strictly focused on OCI-compliant runtimes and container images.
Last month, the new CRI-O version 1.2 was announced; it is a minor release of the v1.0.x cycle supporting Kubernetes 1.7. With CRI, Kubernetes can be container runtime-agnostic. This gives providers of container runtimes flexibility: they don’t need to implement features that Kubernetes already provides. CRI-O allows you to run containers directly from Kubernetes, without any unnecessary code or tooling.
For those who want to compare the Docker runtime engine with CRI-O, here is an important note:
CRI-O is not really a competitor to the Docker project; in fact, it shares the same OCI runc container runtime used by the Docker engine and the same image format, and it allows the use of docker build and related tooling. Through this new runtime, developers are expected to gain more flexibility as other image builders and tooling are added in the future. Please remember that CRI is not an interface for a full-fledged, all-inclusive container runtime.
What does the CRI-O workflow look like?
When Kubernetes needs to run a container, it talks to CRI-O, and the CRI-O daemon works with the container runtime to start the container. When Kubernetes needs to stop the container, CRI-O handles that too. Everything just works behind the scenes to manage Linux containers.
The kubelet is a node agent with a gRPC client that talks to a gRPC server, aptly called a shim; the shim in turn talks to the container runtime. Today the default implementation is the Docker shim, which talks to the Docker daemon using the classic Docker APIs. This works really well.
CRI consists of a protocol buffers specification, a gRPC API, and libraries, with additional specifications and tooling under active development.
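As an aside that is not part of the original workflow description: because every CRI implementation exposes the same gRPC API, a debugging client such as crictl can drive any of them over their local socket. The socket paths below are typical defaults and are assumptions that may differ on your system:
# list pod sandboxes via containerd's CRI endpoint (default socket path assumed)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
# the same command against CRI-O only changes the endpoint
crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods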
Introducing CRI-containerd
CRI-containerd is a containerd-based implementation of CRI. The project started in April 2017. To let Kubernetes consume containerd as its container runtime, the containerd team implemented the CRI interface. CRI is responsible for the distribution and lifecycle of pods and containers running on a cluster. The scope of containerd 1.0 aligns with the requirements of CRI. In case you want to deep-dive into it, don’t miss this link.
Below is what the CRI-containerd architecture looks like:
In my last blog post, I talked about how to set up a multi-node Kubernetes cluster using LinuxKit. That post used Docker Engine to build minimal and immutable Kubernetes OS images with LinuxKit. In this blog post, we will see how to build them using CRI-containerd.
Infrastructure Setup:
- OS – Ubuntu 17.04
- System – ESXi VM
- Memory – 8 GB
- SSH key generated using ssh-keygen -t rsa and placed under the $HOME/.ssh/ folder
Caution – Please note that this is still experimental. It is currently under active development, so don’t expect it to work as a full-fledged K8s cluster.
Copy the script below to your Linux system and execute it:
View the code on Gist.
Did you notice the parameter KUBE_RUNTIME=cri-containerd?
This parameter specifies that we want to build the minimal and immutable K8s ISO image using CRI-containerd. If you don’t specify it, Docker Engine is used to build the ISO image.
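For readers who want to see where this switch plugs in, here is a rough, hypothetical manual equivalent of the scripted build. The repository is the linuxkit/kubernetes project, but the Makefile target name is an assumption, so treat the Gist above as the authoritative sequence:
# hypothetical manual build (target name assumed; the Gist script is authoritative)
git clone https://github.com/linuxkit/kubernetes.git
cd kubernetes
# KUBE_RUNTIME=cri-containerd selects cri-containerd; leaving it unset builds with Docker Engine
make build-vm-images KUBE_RUNTIME=cri-containerd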
The above script is going to take some time to finish.
It’s always a good idea to see what CRI-containerd specific files are present.
Let us look into the cri-containerd directory:
Looking at the content of cri-containerd.yml, it defines a service called cri-containerd.
View the code on Gist.
The list of files below gets created, with output files like kube-master-efi.iso right under the kubernetes directory:
By the end of this script, you should see the Kubernetes LinuxKit OS booting up:
CRI-containerd lets the user containers in the same sandbox share the network namespace, hence you will see the message “This system is namespaced”.
You can find the overall screen logs showing how it boots up:
View the code on Gist.
Next, let us try to see what container services are running:
You will notice that cri-containerd service is up and running.
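If you want to check this yourself from the LinuxKit console, one way (a sketch, using the same getty shell as the exec command below) is to list the containerd tasks and look for the cri-containerd and kubelet services:
(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks ls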
Let us enter one of the task containers (the kubelet) and initialize the master node with the kubeadm-init.sh script, which comes by default:
(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks exec -t --exec-id 654 kubelet sh
Execute the below script –
/ # kubeadm-init.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Authorization modes: [Node RBAC]
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [linuxkit-ee342f3aebd6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
Now you should be able to join the other worker nodes (I have discussed the further steps under this link).
Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina.
If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.
kubeadm helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass the Kubernetes Conformance tests. kubeadm also supports other cluster lifecycle functions, such as upgrades, downgrades, and managing bootstrap tokens.
Because you can install kubeadm on various types of machine (e.g. laptop, server, Raspberry Pi, etc.), it’s well suited for integration with provisioning systems such as Terraform or Ansible.
kubeadm’s simplicity means it can serve a wide range of use cases:
- New users can start with kubeadm to try Kubernetes out for the first time.
- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications.
- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools.
kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their applications on and stitch together a cluster easily, and also to be a building block in other ecosystem and/or installer tools with a larger scope.
You can install kubeadm very easily on operating systems that support installing deb or rpm packages. The responsible SIG for kubeadm, SIG Cluster Lifecycle, provides these packages pre-built for you, but you may also build them from source for other OSes.
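As an illustration, on a deb-based system the install usually boils down to the following once the Kubernetes apt repository has been configured (repository setup is omitted here; follow the “Installing kubeadm” page for the complete, current steps):
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl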
kubeadm maturity
kubeadm’s overall feature state is GA. Some sub-features, like the configuration file API, are still under active development. The implementation of creating the cluster may change slightly as the tool evolves, but the overall implementation should be pretty stable. Any commands under kubeadm alpha are, by definition, supported on an alpha level.
Support timeframes
Kubernetes releases are generally supported for nine months, and during that period a patch release may be issued from the release branch if a severe bug or security issue is found. Here are the latest Kubernetes releases and the support timeframe, which also applies to kubeadm.
Kubernetes version | Release month | End-of-life month |
---|---|---|
v1.13.x | December 2018 | September 2019 |
v1.14.x | March 2019 | December 2019 |
v1.15.x | June 2019 | March 2020 |
v1.16.x | September 2019 | June 2020 |
Before you begin
- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
- 2 GB or more of RAM per machine. Any less leaves little room for your apps.
- 2 CPUs or more on the control-plane node
- Full network connectivity among all machines in the cluster. A public or private network is fine.
Objectives
- Install a single control-plane Kubernetes cluster or high-availability cluster
- Install a Pod network on the cluster so that your Pods can talk to each other
Instructions
Installing kubeadm on your hosts
See “Installing kubeadm”.
Note: If you have already installed kubeadm, run apt-get update && apt-get upgrade or yum update to get the latest version of kubeadm.
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal. After you initialize your control-plane, the kubelet runs normally.
Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).
- (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load balancer.
- Choose a pod network add-on, and verify whether it requires any arguments to be passed to kubeadm initialization. Depending on which third-party provider you choose, you might need to set the --pod-network-cidr to a provider-specific value. See Installing a pod network add-on.
- (Optional) Since version 1.14, kubeadm will try to detect the container runtime on Linux by using a list of well-known domain socket paths. To use a different container runtime, or if there is more than one installed on the provisioned node, specify the --cri-socket argument to kubeadm init. See Installing runtime.
- (Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node’s API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=fd00::101.
- (Optional) Run kubeadm config images pull prior to kubeadm init to verify connectivity to the gcr.io registries.
To initialize the control-plane node run:
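kubeadm init <args>
(where <args> stands for any of the optional flags discussed above)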
Considerations about apiserver-advertise-address and ControlPlaneEndpoint
While --apiserver-advertise-address can be used to set the advertise address for this particular control-plane node’s API server, --control-plane-endpoint can be used to set the shared endpoint for all control-plane nodes.
--control-plane-endpoint allows IP addresses but also DNS names that can map to IP addresses. Please contact your network administrator to evaluate possible solutions with respect to such mapping. Here is an example mapping:
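# /etc/hosts
192.168.0.102 cluster-endpoint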
Where 192.168.0.102 is the IP address of this node and cluster-endpoint is a custom DNS name that maps to this IP. This will allow you to pass --control-plane-endpoint=cluster-endpoint to kubeadm init and pass the same DNS name to kubeadm join. Later you can modify cluster-endpoint to point to the address of your load balancer in a high-availability scenario.
Turning a single control-plane cluster created without --control-plane-endpoint into a highly available cluster is not supported by kubeadm.
More information
For more information about kubeadm init arguments, see the kubeadm reference guide. For a complete list of configuration options, see the configuration file documentation.
To customize control plane components, including optional IPv6 assignment to liveness probe for control plane components and etcd server, provide extra arguments to each component as documented in custom arguments.
To run kubeadm init again, you must first tear down the cluster. If you join a node with a different architecture to your cluster, make sure that your deployed DaemonSets have container image support for this architecture.
kubeadm init first runs a series of prechecks to ensure that the machine is ready to run Kubernetes. These prechecks expose warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components. This may take several minutes, and the output ends with a kubeadm join command for adding nodes.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
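mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config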
Alternatively, if you are the root user, you can run:
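export KUBECONFIG=/etc/kubernetes/admin.conf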
Make a record of the kubeadm join command that kubeadm init outputs. You need this command to join nodes to your cluster.
The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster. These tokens can be listed, created, and deleted with the kubeadm token command. See the kubeadm reference guide.
Installing a pod network add-on
Caution: This section contains important information about installation and deployment order. Read it carefully before proceeding.
You must install a pod network add-on so that your pods can communicate with each other.
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
Several projects provide Kubernetes pod networks using CNI, some of which also support Network Policy. See the add-ons page for a complete list of available network add-ons.
- IPv6 support was added in CNI v0.6.0.
- CNI bridge and local-ipam are the only supported IPv6 network plugins in Kubernetes version 1.9.
Note that kubeadm sets up a more secure cluster by default and enforces the use of RBAC. Make sure that your network manifest supports RBAC.
Also, beware that your Pod network must not overlap with any of the host networks, as this can cause issues. If you find a collision between your network plugin’s preferred Pod network and some of your host networks, you should think of a suitable CIDR replacement and use that during kubeadm init with --pod-network-cidr and as a replacement in your network plugin’s YAML.
You can install a pod network add-on with the following command on the control-plane node or a node that has the kubeconfig credentials:
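kubectl apply -f <add-on.yaml>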
You can install only one pod network per cluster.
Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider.
AWS VPC CNI provides native AWS VPC networking to Kubernetes clusters.
For installation, please refer to the AWS VPC CNI setup guide.
For more information about using Calico, see Quickstart for Calico on Kubernetes, Installing Calico for policy and networking, and other related resources.
For Calico to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init or update the calico.yml file to match your Pod network. Note that Calico works on amd64, arm64, and ppc64le only.
Canal uses Calico for policy and Flannel for networking. Refer to the Calico documentation for the official getting started guide.
For Canal to work correctly, --pod-network-cidr=10.244.0.0/16 has to be passed to kubeadm init. Note that Canal works on amd64 only.
For more information about using Cilium with Kubernetes, see the Kubernetes Install guide for Cilium.
For Cilium to work correctly, you must pass --pod-network-cidr=10.217.0.0/16 to kubeadm init. These commands will deploy Cilium with its own etcd managed by the etcd operator.
Note: If you are running kubeadm on a single node, please untaint it so that etcd-operator pods can be scheduled on the control-plane node.
To deploy Cilium you just need to run:
Once all Cilium pods are marked as READY, you can start using your cluster. The output is similar to this:
Contiv-VPP employs a programmable CNF vSwitch based on FD.io VPP, offering feature-rich and high-performance cloud-native networking and services.
It implements k8s services and network policies in the user space (on VPP).
Please refer to this installation guide: Contiv-VPP Manual Installation
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network; see here.
Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented. For more information about flannel, see the CoreOS flannel repository on GitHub.
JuniperContrail/TungstenFabric provides an overlay SDN solution, delivering multicloud networking, hybrid cloud networking, simultaneous overlay-underlay support, network policy enforcement, network isolation, service chaining and flexible load balancing.
There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI.
Kindly refer to this quickstart: TungstenFabric
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.
Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use kubeadm init with the --pod-network-cidr flag.
Kube-router provides pod networking, network policy, and a high-performance IP Virtual Server (IPVS)/Linux Virtual Server (LVS) based service proxy.
For information on setting up Kubernetes cluster with Kube-router using kubeadm, please see official setup guide.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.
The official Romana set-up guide is here.
Romana works on amd64 only.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.
The official Weave Net set-up guide is here.
Weave Net works on amd64, arm, arm64 and ppc64le without any extra action required. Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address if they don’t know their PodIP.
Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of kubectl get pods --all-namespaces. Once the CoreDNS pod is up and running, you can continue by joining your nodes. If your network is not working or CoreDNS is not in the Running state, check out our troubleshooting docs.
Control plane node isolation
By default, your cluster will not schedule pods on the control-plane node for security reasons. If you want to be able to schedule pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:
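kubectl taint nodes --all node-role.kubernetes.io/master-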
With output looking something like:
This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule pods everywhere.
Joining your nodes
The nodes are where your workloads (containers and pods, etc) run. To add new nodes to your cluster do the following for each machine:
- SSH to the machine
- Become root (e.g. sudo su -)
- Run the command that was output by kubeadm init. For example:
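# <token>, <control-plane-host>:<control-plane-port> and <hash> are placeholders from your own kubeadm init output
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>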
If you do not have the token, you can get it by running the following command on the control-plane node:
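kubeadm token list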
The output is similar to this:
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:
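kubeadm token create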
The output is similar to this:
If you don’t have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:
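openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
The output is a single sha256 hex digest.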
Note: To specify an IPv6 tuple for <control-plane-host>:<control-plane-ip>, the IPv6 address must be enclosed in square brackets, for example: [fd00::101]:2073.
The output should look something like:
A few seconds later, you should notice this node in the output from kubectl get nodes when run on the control-plane node.
(Optional) Controlling your cluster from machines other than the control-plane node
In order to get a kubectl on some other computer (e.g. laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your control-plane node to your workstation like this:
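scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes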
Note: The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.
The admin.conf file gives the user superuser privileges over the cluster. This file should be used sparingly. For normal users, it’s recommended to generate a unique credential to which you whitelist privileges. You can do this with the kubeadm alpha kubeconfig user --client-name <CN> command. That command will print out a KubeConfig file to STDOUT, which you should save to a file and distribute to your user. After that, whitelist privileges by using kubectl create (cluster)rolebinding.
(Optional) Proxying API Server to localhost
If you want to connect to the API Server from outside the cluster, you can use kubectl proxy:
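scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy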
You can now access the API Server locally at http://localhost:8001/api/v1.
Tear down
To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.
Talking to the control-plane node with the appropriate credentials, run:
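kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>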
Then, on the node being removed, reset all kubeadm installed state:
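kubeadm reset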
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
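iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X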
If you want to reset the IPVS tables, you must run the following command:
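ipvsadm -C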
If you wish to start over, simply run kubeadm init or kubeadm join with the appropriate arguments.
More options and information about the kubeadm reset command.
Maintaining a cluster
Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades, etc.) can be found here.
Explore other add-ons
See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization and control of your Kubernetes cluster.
What’s next
- Verify that your cluster is running properly with Sonobuoy
- Learn about kubeadm’s advanced usage in the kubeadm reference documentation
- Learn more about Kubernetes concepts and kubectl.
- Configure log rotation. You can use logrotate for that. When using Docker, you can specify log rotation options for the Docker daemon, for example --log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5. See Configure and troubleshoot the Docker daemon for more details.
Feedback
- For bugs, visit kubeadm GitHub issue tracker
- For support, visit the kubeadm Slack channel: #kubeadm
- General SIG Cluster Lifecycle development Slack channel: #sig-cluster-lifecycle
- SIG Cluster Lifecycle SIG information
- SIG Cluster Lifecycle mailing list: kubernetes-sig-cluster-lifecycle
Version skew policy
The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1). kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
Because we can’t see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to v1.8.
These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:
- Kubernetes version and version-skew policy
- Kubeadm-specific installation guide
kubeadm works on multiple platforms
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the multi-platform proposal.
Multiplatform container images for the control plane and addons are also supported since v1.12.
Only some of the network providers offer solutions for all platforms. Please consult the list of network providers above or the documentation from each provider to figure out whether the provider supports your chosen platform.
Limitations
The cluster created here has a single control-plane node, with a single etcd database running on it. This means that if the control-plane node fails, your cluster may lose data and may need to be recreated from scratch.
Workarounds:
- Regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the control-plane node.
- Use multiple control-plane nodes by completing the HA setup instead.
Troubleshooting
If you are running into difficulties with kubeadm, please consult our troubleshooting docs.