Kubernetes Cluster Setup On Ubuntu 20.04: A Step-by-Step Guide
Setting up a Kubernetes cluster on Ubuntu 20.04 might seem daunting at first, but with a systematic approach, it can be a smooth and rewarding experience. This guide will walk you through the entire process, ensuring you have a fully functional cluster ready for your deployments. Whether you're a seasoned DevOps engineer or just starting with Kubernetes, this tutorial aims to provide clarity and practical steps to get your cluster up and running.
Prerequisites
Before we dive into the setup, let's ensure you have everything you need:
- Ubuntu 20.04 Servers: You'll need at least two Ubuntu 20.04 servers – one for the master node and one or more for worker nodes. For a production environment, consider using at least three master nodes for high availability.
- User with sudo Privileges: Ensure you have a user account with sudo privileges on all servers.
- Internet Connection: All servers should have a stable internet connection to download necessary packages.
- Basic Linux Knowledge: Familiarity with basic Linux commands will be helpful.
Step 1: Update and Upgrade Packages
First, log in to each of your Ubuntu servers and update the package lists and upgrade the installed packages. This ensures you have the latest versions and security patches. Run the following commands:
```
sudo apt update
sudo apt upgrade -y
```
These commands update the package lists from the repositories and then upgrade all installed packages to their newest versions. The -y flag automatically answers "yes" to any prompts, streamlining the process.
Step 2: Install Docker
Docker is a containerization platform that Kubernetes uses to run applications. To install Docker, follow these steps:
- Install required packages: These packages allow apt to use a repository over HTTPS:
```
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
```
- Add Docker's official GPG key: This verifies the integrity of the packages you'll be downloading.
```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
- Add the Docker repository: This tells apt where to find the Docker packages.
```
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
- Update the package list: To include the Docker packages.
```
sudo apt update
```
- Install Docker Engine: This is the core Docker runtime.
```
sudo apt install docker-ce docker-ce-cli containerd.io -y
```
- Verify Docker installation: Check if Docker is running correctly.
```
sudo docker run hello-world
```
If everything is set up correctly, you should see a "Hello from Docker!" message.
- Enable Docker to start on boot: This ensures Docker starts automatically whenever the server restarts.
```
sudo systemctl enable docker
sudo systemctl start docker
```
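Note: depending on the Kubernetes version you install in the next step, the kubelet may expect Docker to use the systemd cgroup driver rather than its default. The snippet below is a minimal sketch of a /etc/docker/daemon.json that makes that switch; verify against the Kubernetes and Docker documentation for your versions before applying it.
```
# Sketch only: switch Docker to the systemd cgroup driver so it matches the kubelet.
# Confirm this is required for your Kubernetes/Docker versions before using it.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
```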
Step 3: Install kubeadm, kubelet, and kubectl
kubeadm, kubelet, and kubectl are essential tools for setting up and managing a Kubernetes cluster.
- kubeadm: A command-line tool for bootstrapping Kubernetes clusters.
- kubelet: An agent that runs on each node in the cluster. It listens for instructions from the Kubernetes control plane and manages the containers on the node.
- kubectl: A command-line tool for interacting with the Kubernetes cluster.
Follow these steps to install these tools:
- Add the Kubernetes signing key: Similar to Docker, you first add the GPG key used to sign the Kubernetes packages.
```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```
- Add the Kubernetes repository: Add the Kubernetes apt repository for Ubuntu 20.04.
```
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
- Update the package list: To include the Kubernetes packages.
```
sudo apt update
```
- Install kubeadm, kubelet, and kubectl: Install the necessary Kubernetes tools, then hold them at the current version to prevent automatic upgrades.
```
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
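To confirm the tools installed correctly, a quick version check like the following should print version information on each server without errors:
```
# Sanity check: each command should report an installed version
kubeadm version
kubectl version --client
kubelet --version
```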
Step 4: Initialize the Kubernetes Cluster (Master Node)
Now, let's initialize the Kubernetes cluster on the master node. This process sets up the control plane components. Run the following command on your designated master node:
```
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```
- --pod-network-cidr: Specifies the IP address range for pods. 10.244.0.0/16 is a commonly used range; whichever CIDR block you choose must match your network add-on's configuration and must not overlap with your existing network.
After running this command, you'll see output with instructions on how to configure kubectl to connect to the cluster and how to join worker nodes. It’s crucial to save these instructions.
- Configure kubectl: Run the following commands as a regular user (not root) to configure kubectl:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- Apply a Pod Network Add-on: Kubernetes requires a pod network add-on to enable communication between pods. We'll use Calico in this example. Apply the Calico manifest:
```
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
This command deploys Calico to your cluster, enabling networking policies and pod-to-pod communication. Alternatively, you can use other network add-ons like Flannel or Weave Net based on your requirements.
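For example, if you prefer Flannel, applying it looks similar; the URL below is the location currently documented by the Flannel project and may change, so check its README first. Flannel expects the 10.244.0.0/16 pod CIDR used in the kubeadm init command above.
```
# Example only: apply Flannel instead of Calico.
# Confirm the current manifest URL in the Flannel documentation.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```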
Step 5: Join Worker Nodes to the Cluster
Now, let's join the worker nodes to the cluster. On each worker node, run the kubeadm join command that was provided in the output of the kubeadm init command on the master node. It should look something like this:
```
sudo kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
- <master-ip>: The IP address of your master node.
- <master-port>: The port used by the Kubernetes API server (usually 6443).
- <token>: The token used for authenticating the worker node with the master node.
- <hash>: The SHA256 hash of the CA certificate.
If you've lost the kubeadm join command, you can regenerate it on the master node with:
```
sudo kubeadm token create --print-join-command
```
This prints the full kubeadm join command you need to run on the worker nodes.
Step 6: Verify the Cluster
After joining the worker nodes, verify that all nodes are correctly registered in the cluster. Run the following command on the master node:
```
kubectl get nodes
```
You should see a list of all nodes, including the master node and the worker nodes. Ensure that the status of all nodes is Ready.
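You can also check that the control plane components and the network add-on pods are healthy before moving on:
```
# Show nodes with extra details such as internal IPs and kubelet versions
kubectl get nodes -o wide

# All pods in kube-system (including the Calico pods) should be Running
kubectl get pods --all-namespaces
```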
Step 7: Deploy a Sample Application
To ensure your cluster is working correctly, let's deploy a simple application. We'll deploy a basic Nginx deployment.
- Create a deployment: Create an Nginx deployment using kubectl.
```
kubectl create deployment nginx --image=nginx
```
- Expose the deployment: Expose the deployment as a service so you can access it.
```
kubectl expose deployment nginx --port=80 --type=NodePort
```
- Check the status of the deployment: Check if the deployment and service are running correctly.
```
kubectl get deployments
kubectl get services
```
- Access the application: Access the Nginx application through the NodePort. Find the NodePort number:
```
kubectl get service nginx
```
Look for the port number under the `PORT(S)` column. Then, access the application using the IP address of one of your worker nodes and the NodePort number in your web browser (e.g., `http://<worker-node-ip>:<nodeport>`).
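As a quick command-line check, you can also fetch the default Nginx page with curl, substituting the worker node IP and NodePort you found above:
```
# Should return the default "Welcome to nginx!" HTML page
curl http://<worker-node-ip>:<nodeport>
```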
Step 8: Securing Your Cluster
Securing your Kubernetes cluster is a critical aspect of maintaining a healthy and reliable environment. Here are some fundamental best practices to implement:
Role-Based Access Control (RBAC)
RBAC is a method of regulating access to computer or network resources based on the roles of individual users within your organization. In Kubernetes, RBAC allows you to define who can access what resources. By creating roles and role bindings, you can restrict access to sensitive resources, ensuring that only authorized users or services can perform specific actions.
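As a minimal sketch, the following creates a Role that only allows reading pods in the default namespace and binds it to a hypothetical user named jane; adjust the names, namespace, and verbs to match your environment.
```
cat <<'EOF' | kubectl apply -f -
# Role: read-only access to pods in the "default" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding: grant the "pod-reader" Role to the hypothetical user "jane"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```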
Network Policies
Network Policies provide control over the communication between pods. By default, all pods within a Kubernetes cluster can communicate with each other without any restrictions. Network Policies allow you to define rules that specify which pods can communicate with each other, enhancing the security of your cluster by isolating sensitive applications and preventing unauthorized access.
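As an illustration, the following policy only allows ingress to pods labeled app: backend from pods labeled app: frontend in the default namespace; the label names are hypothetical, so adapt them to your workloads. Calico, used earlier in this guide, enforces these policies.
```
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: default
spec:
  # Apply the policy to pods labeled app: backend (hypothetical label)
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  # Only pods labeled app: frontend may connect
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```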
Regular Security Audits
Regular security audits are crucial for identifying and addressing potential vulnerabilities in your Kubernetes cluster. These audits should include reviewing RBAC configurations, network policies, and other security-related settings. Tools like kube-bench can help automate some of these checks, ensuring your cluster adheres to security best practices.
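A common way to run kube-bench is as a one-off Kubernetes Job using the manifest from the project's repository; the URL below reflects the project's documented location and may change, so confirm it in the kube-bench README first.
```
# Example only: run kube-bench as a Job and read its report from the logs
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench
```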
Keeping Kubernetes Updated
Regularly updating Kubernetes is essential for patching security vulnerabilities and benefiting from the latest features and improvements. Security patches are often released to address newly discovered vulnerabilities, so staying up-to-date is critical for maintaining a secure cluster. Be sure to follow the official Kubernetes release notes and upgrade guides to ensure a smooth update process.
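A typical control-plane upgrade with kubeadm follows the pattern below; the version is a placeholder, so consult the official upgrade guide for the exact version and the per-node steps for the kubelet and worker nodes.
```
# Sketch of upgrading the control plane with kubeadm (<target-version> is a placeholder)
sudo apt-mark unhold kubeadm
sudo apt update && sudo apt install -y kubeadm=<target-version>
sudo apt-mark hold kubeadm

# Review and apply the upgrade plan on the master node
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v<target-version>
```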
Using Admission Controllers
Admission Controllers are Kubernetes components that intercept requests to the API server prior to persistence of the object, but after the request is authenticated and authorized. They can be used to enforce various policies, such as restricting the types of images that can be deployed or requiring specific labels on resources. By using Admission Controllers, you can add an extra layer of security and policy enforcement to your Kubernetes cluster.
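Admission plugins are enabled through the kube-apiserver's --enable-admission-plugins flag; on a kubeadm cluster this lives in the API server's static pod manifest on the master node. The plugin list below is only an example; check which plugins your Kubernetes version enables by default before editing.
```
# Inspect the current admission plugin configuration in the kubeadm static pod manifest
sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml

# Example flag value; editing this manifest causes the API server pod to restart automatically
#   --enable-admission-plugins=NodeRestriction,LimitRanger,ResourceQuota
```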
Conclusion
Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu 20.04. You've covered the essential steps, from installing Docker and Kubernetes tools to initializing the cluster, joining worker nodes, and deploying a sample application. Remember, this is just the beginning. As you become more comfortable with Kubernetes, explore more advanced features, networking options, and security practices to optimize your cluster for your specific needs. Keep experimenting, keep learning, and enjoy the journey into the world of container orchestration!