Kubernetes Install Guide On Linux: A Step-by-Step Tutorial


Are you ready to dive into the world of container orchestration? If so, you're likely looking at Kubernetes (often shortened to K8s), the leading platform for automating deployment, scaling, and management of containerized applications. This guide will walk you through a comprehensive, step-by-step process to install Kubernetes on Linux, ensuring you have a robust and functional cluster ready for your workloads. Whether you're a developer, system administrator, or just a tech enthusiast, this tutorial is designed to get you up and running with Kubernetes on your Linux environment.

Prerequisites

Before we begin the Kubernetes installation process, let's make sure you have everything you need. These prerequisites are essential for a smooth and successful setup. First, you'll need a Linux machine. This could be a physical server, a virtual machine (VM), or a cloud instance. Popular choices include Ubuntu, CentOS, and Debian. Ensure your Linux distribution is up-to-date by running the appropriate update commands for your system (e.g., sudo apt update && sudo apt upgrade for Ubuntu/Debian, or sudo yum update for CentOS). Next, ensure you have root or sudo privileges. Most of the commands we'll be using require administrative access to install packages and configure system settings. A basic understanding of Linux command-line operations is also highly recommended. You'll be interacting with the system through the terminal, so familiarity with commands like cd, ls, and mkdir, and with a text editor such as nano or vim, will be beneficial.

We'll be using kubectl, the Kubernetes command-line tool, to interact with your cluster. Make sure you have curl or wget installed to download the necessary binaries. Lastly, you will need a container runtime such as Docker or containerd. While Docker was long the de facto standard, containerd is increasingly popular for its simplicity and performance, and it's what we'll install in this guide. Ensure that your machine meets the minimum hardware requirements for Kubernetes. While these requirements vary with the size and complexity of your applications, a good starting point is at least 2 CPUs and 4GB of RAM for a basic cluster. If you plan to run resource-intensive applications, you'll need more. With these prerequisites in check, you're well-prepared to embark on your Kubernetes journey on Linux!
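As a quick sanity check against those numbers, you can read the CPU and memory counts straight from the system. This is a minimal sketch; the 2-CPU/4GB thresholds are this guide's suggested starting point, not hard Kubernetes limits:

```shell
# Quick preflight against the suggested minimums (2 CPUs, 4 GB RAM).
# These thresholds are this guide's suggestions, not hard limits.
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
echo "CPUs: $cpus"
echo "Memory: ${mem_gb} GB"
if [ "$cpus" -ge 2 ] && [ "$mem_gb" -ge 4 ]; then
  echo "Meets the suggested minimums"
else
  echo "Below the suggested minimums; a basic cluster may still run, but tightly"
fi
```

Run this on every machine you plan to use as a node, not just the first one.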

Step 1: Install Container Runtime (containerd)

In this section, we will guide you through the installation of containerd, a popular and lightweight container runtime. Containerd is a CNCF (Cloud Native Computing Foundation) graduated project and is designed to be embedded into larger systems. First, let's update the package index and install the necessary packages to allow apt to use a repository over HTTPS. Run the following commands:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release

Next, add Docker's official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Now, set up the stable repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the package index again:

sudo apt-get update

Install containerd:

sudo apt-get install -y containerd.io

Once containerd is installed, you need to configure it. Generate the default containerd configuration file:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Now, you need to edit the /etc/containerd/config.toml file to set the SystemdCgroup option to true. Open the file with your favorite text editor (e.g., nano or vim):

sudo nano /etc/containerd/config.toml

Find the following section:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]

And add the following line under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]:

SystemdCgroup = true

Save the file and exit the text editor. Finally, restart containerd to apply the changes:

sudo systemctl restart containerd
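If you'd rather script the config change than open an editor, the same substitution can be done with sed. It's demonstrated here on a scratch copy so you can see the effect safely; this assumes the generated default config contains the exact line SystemdCgroup = false, and against the real file the command would be run with sudo on /etc/containerd/config.toml:

```shell
# Demonstrated on a scratch copy; against the real file it would be:
#   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
printf '            SystemdCgroup = false\n' > /tmp/config-snippet.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/config-snippet.toml
cat /tmp/config-snippet.toml
```

Either way, remember to restart containerd afterward so the change takes effect.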

Enable containerd to start on boot:

sudo systemctl enable containerd

To verify that containerd is running correctly, check its status:

sudo systemctl status containerd

If everything is set up correctly, you should see that containerd is active and running. Congratulations, you have successfully installed and configured containerd!
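Before moving on, note one prerequisite that kubeadm's preflight checks commonly enforce: the overlay and br_netfilter kernel modules must be loaded, and IP forwarding must be enabled. The file paths below are the usual convention (create them with sudo, then run sudo modprobe overlay, sudo modprobe br_netfilter, and sudo sysctl --system to apply without rebooting):

```
# /etc/modules-load.d/k8s.conf
overlay
br_netfilter

# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```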

Step 2: Install kubeadm, kubelet, and kubectl

Now that our container runtime is set up, it's time to install the core Kubernetes components: kubeadm, kubelet, and kubectl. These tools are essential for creating and managing your Kubernetes cluster. kubeadm is a tool that simplifies the process of bootstrapping a Kubernetes cluster. kubelet is the agent that runs on each node in the cluster and ensures that containers are running in a Pod. kubectl is the command-line tool that allows you to interact with the Kubernetes API server to manage your cluster. First, let's add the Kubernetes package repository. This involves adding the Kubernetes signing key and the repository URL to your system's package manager. Run the following commands:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Note that the pkgs.k8s.io repositories are versioned per minor release; replace v1.30 in both URLs with the Kubernetes minor version you want to install.

Next, update the package index again:

sudo apt-get update

Now, install kubeadm, kubelet, and kubectl:

sudo apt-get install -y kubelet kubeadm kubectl

To prevent accidental upgrades, hold the packages at their current version:

sudo apt-mark hold kubelet kubeadm kubectl

Verify the installation by checking the versions of the installed components:

kubeadm version
kubelet --version
kubectl version --client

These commands should output the versions of kubeadm, kubelet, and kubectl respectively. With these components installed, you're one step closer to having a fully functional Kubernetes cluster. In the next step, we'll use kubeadm to initialize the cluster.
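A quick way to double-check that all three binaries actually landed on your PATH is a small loop. This is only a presence check, not a version check:

```shell
# Report whether each expected binary is on PATH; prints found/missing per tool.
found=0
for bin in kubeadm kubelet kubectl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
    found=$((found + 1))
  else
    echo "$bin: missing"
  fi
done
echo "present: $found/3"
```

If any tool reports missing, revisit the repository setup above before continuing.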

Step 3: Initialize the Kubernetes Cluster

With kubeadm, kubelet, and kubectl installed, we can now initialize the Kubernetes cluster. This process sets up the control plane, which is the brain of your cluster, managing the worker nodes and orchestrating the deployment of your applications. You'll also need to choose a pod network add-on for your cluster: a software component that provides networking between pods. Popular options include Calico, Flannel, and Cilium. In this guide, we'll use Calico.

Before initializing the cluster, disable swap. Kubernetes requires swap to be disabled to function correctly. You can disable swap temporarily with the following command:

sudo swapoff -a

To disable swap permanently, you need to edit the /etc/fstab file. Open the file with your favorite text editor:

sudo nano /etc/fstab

Comment out any lines that refer to swap partitions by adding a # at the beginning of the line. Save the file and exit the text editor.

Now, initialize the Kubernetes cluster with kubeadm. Specify the pod network CIDR that Calico will use:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

This command will take a few minutes to complete. During the initialization process, kubeadm performs several tasks, including generating certificates, creating the necessary Kubernetes control plane components, and configuring kubelet. Once the initialization is complete, kubeadm will output a set of commands that you need to run to configure kubectl to connect to your cluster. Copy these commands and run them as your regular user:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands copy the Kubernetes configuration file to your home directory and set the correct permissions, allowing you to use kubectl to interact with your cluster. Now, apply the Calico manifest. This installs the pod network components so that your pods can communicate with each other:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

The manifest URL is pinned to a specific Calico release; check the Calico documentation for the latest version and substitute it for v3.27.0.

It might take a few minutes for all the Calico pods to start up and become ready. You can monitor the status of the pods with the following command:

kubectl get pods -n kube-system

Once all the pods are running, you have successfully initialized your Kubernetes cluster and configured the pod network. In the next step, we'll add a worker node to the cluster.

Step 4: Join Worker Nodes to the Cluster

Now that the control plane is up and running, let's add worker nodes to the cluster. Worker nodes are the machines where your containerized applications will run. To join a worker node to the cluster, you'll need to run a kubeadm join command on the worker node. This command was output by kubeadm init when you initialized the cluster in the previous step. If you don't have the command, you can regenerate it on the control plane node with the following command:

sudo kubeadm token create --print-join-command

This command will output a kubeadm join command that looks something like this:

kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Copy this command and run it on your worker node. Make sure that the worker node has the same prerequisites as the control plane node, including containerd, kubelet, and kubeadm installed. The kubeadm join command will configure the worker node to connect to the control plane and join the cluster. It will also install the necessary components to run pods on the worker node. Once the kubeadm join command completes successfully, the worker node will be part of the Kubernetes cluster. You can verify that the worker node has joined the cluster by running the following command on the control plane node:

kubectl get nodes

This command will output a list of all the nodes in the cluster, including the control plane node and the worker node. The status of the worker node should be Ready. If the status is not Ready, it might take a few minutes for the node to become ready. You can check the logs of the kubelet service on the worker node (for example, with sudo journalctl -u kubelet) to troubleshoot any issues. Repeat this process for any additional worker nodes that you want to add to the cluster. With worker nodes added to the cluster, you're ready to deploy your applications to Kubernetes!

Step 5: Deploy a Sample Application

Now that you have a running Kubernetes cluster with at least one worker node, let's deploy a sample application to test the cluster. We'll deploy a simple Nginx web server as a Kubernetes Deployment and expose it as a Service. First, create a Deployment for the Nginx web server. A Deployment is a Kubernetes object that manages a set of identical Pods. Create a file named nginx-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

This Deployment creates two replicas of the Nginx web server. Now, apply the Deployment to your cluster:

kubectl apply -f nginx-deployment.yaml

This command creates the Deployment in your cluster. You can check the status of the Deployment with the following command:

kubectl get deployments

It might take a few minutes for the Deployment to be fully deployed. Once the Deployment is ready, you can expose it as a Service. A Service is a Kubernetes object that provides a stable IP address and DNS name for accessing your application. Create a file named nginx-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

This Service exposes the Nginx web server on port 80 and uses a LoadBalancer to provide external access to the application. Note that the LoadBalancer type might not work in all environments, such as Minikube or on-premises clusters. In those cases, you might need to use a NodePort or Ingress instead. Apply the Service to your cluster:

kubectl apply -f nginx-service.yaml

This command creates the Service in your cluster. You can check the status of the Service with the following command:

kubectl get services

It might take a few minutes for the Service to be fully provisioned. Once the Service is ready, you can access the Nginx web server by visiting the external IP address of the Service in your web browser. Congratulations, you have successfully deployed a sample application to your Kubernetes cluster!
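If your environment has no LoadBalancer implementation, as noted above, a NodePort Service is the simplest alternative. Here is a sketch of that variant; the Service name is hypothetical, and the nodePort value is just an example from the default 30000-32767 NodePort range. The application would then be reachable at http://<node-ip>:30080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport      # hypothetical name for this variant
spec:
  selector:
    app: nginx              # matches the Deployment's pod labels above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080         # example value; must fall in the node-port range
  type: NodePort
```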

Conclusion

In this guide, we've covered the step-by-step process of installing Kubernetes on Linux, from setting up the container runtime to deploying a sample application. We started by installing containerd, a lightweight container runtime, and configuring it to work with Kubernetes. Then, we installed kubeadm, kubelet, and kubectl, the core Kubernetes components. We used kubeadm to initialize the Kubernetes cluster and configured the pod network with Calico. We then added worker nodes to the cluster and deployed a sample Nginx web server to test the cluster. By following these steps, you should now have a fully functional Kubernetes cluster on your Linux environment. Kubernetes is a powerful platform for managing containerized applications, and this guide is just the beginning. There's much more to explore, including advanced networking, storage, security, and application deployment strategies. As you continue your Kubernetes journey, remember to consult the official Kubernetes documentation and community resources for more information and support. Happy orchestrating!