Kubernetes Cluster On Ubuntu 20.04: A Step-by-Step Guide
Alright, guys! Let's dive into the awesome world of Kubernetes and get a cluster up and running on Ubuntu 20.04. This guide is designed to be super straightforward, so even if you're relatively new to Kubernetes, you'll be able to follow along without any issues. We'll cover everything from prepping your machines to deploying your first application. So, buckle up and let's get started!
Prerequisites
Before we jump into the installation process, let's make sure we have all the necessary prerequisites covered. These are crucial for a smooth and successful Kubernetes deployment. Trust me, spending a little time here will save you a lot of headaches later on.
- Ubuntu 20.04 Servers: You'll need at least two Ubuntu 20.04 servers. One will act as the master node, and the other will be a worker node. For a production environment, it's highly recommended to have at least three master nodes for high availability. However, for this guide, we'll keep it simple with one master and one worker node. Ensure each server has a static IP address and proper DNS resolution.
- Sudo Privileges: Make sure you have sudo privileges on all the servers. This allows you to run commands as an administrator, which is essential for installing and configuring Kubernetes components.
- Internet Connection: All servers should have a stable internet connection. This is required to download the necessary packages and dependencies.
- Container Runtime: Kubernetes requires a container runtime to run containers. We'll be using Docker in this guide, but you can also use other runtimes like containerd or CRI-O. Docker is widely used and well-supported, making it a great choice for most users. Keep in mind that Kubernetes 1.24 removed the built-in dockershim, so Docker Engine on its own only works with older Kubernetes releases (or via the separate cri-dockerd adapter); on newer releases, containerd, which gets installed alongside Docker anyway, is the usual choice.
- Basic Linux Knowledge: A basic understanding of Linux commands and concepts will be helpful. You should be comfortable navigating the command line, editing files, and managing services.
Having these prerequisites in place will ensure that you're ready to proceed with the Kubernetes installation. If you're missing any of these, take a moment to set them up before moving on. It's always better to be prepared than to run into issues later!
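One thing the steps below don't call out explicitly is basic host preparation: kubeadm's preflight checks fail by default when swap is enabled, and pod networking needs the bridge/forwarding sysctls turned on. Here's a minimal prep sketch, assuming stock Ubuntu 20.04 servers; run it on every node and double-check /etc/fstab afterwards.

```bash
# Disable swap now and comment it out of /etc/fstab so it stays off after a reboot
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab

# Load the kernel modules the container runtime and kube-proxy rely on
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Enable bridged traffic filtering and IP forwarding, persistently
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```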
Step 1: Installing Docker
First things first, we need to install Docker on all of our machines – both the master and worker nodes. Docker will be our container runtime, which Kubernetes uses to manage and run containers. Here’s how to get it done:
- Update Package Index: Start by updating the package index to make sure you have the latest versions of the packages.

  ```bash
  sudo apt update
  ```

- Install Required Packages: Install packages that allow apt to use a repository over HTTPS:

  ```bash
  sudo apt install apt-transport-https ca-certificates curl software-properties-common
  ```

- Add Docker GPG Key: Add Docker's official GPG key to your system. This verifies the authenticity of the packages you're about to install.

  ```bash
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  ```

- Set Up Docker Repository: Add the Docker repository to your apt sources.

  ```bash
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  ```

- Update Package Index Again: Update the package index again to include the new Docker repository.

  ```bash
  sudo apt update
  ```

- Install Docker Engine: Finally, install Docker Engine, containerd, and the Docker Compose plugin.

  ```bash
  sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
  ```

- Verify Docker Installation: Check that Docker is installed correctly by printing its version.

  ```bash
  docker --version
  ```

- Start and Enable Docker: Start the Docker service and enable it to start on boot.

  ```bash
  sudo systemctl start docker
  sudo systemctl enable docker
  ```

- Verify Docker Service Status: Verify the status of the Docker service to ensure it's running without any issues.

  ```bash
  sudo systemctl status docker
  ```
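One extra step worth doing before moving on: Docker defaults to the cgroupfs cgroup driver, while the kubelet defaults to systemd, and kubeadm setups are happier when the two match. The snippet below is a sketch that mirrors the daemon.json the kubeadm documentation used to recommend for Docker; adjust it if you already manage /etc/docker/daemon.json.

```bash
# Optional sanity check: run a throwaway container
sudo docker run --rm hello-world

# Switch Docker to the systemd cgroup driver so it matches the kubelet's default
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
sudo docker info | grep -i "cgroup driver"   # should report: systemd
```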
Repeat these steps on all your servers (both master and worker nodes). With Docker installed, you're one step closer to having your Kubernetes cluster up and running. Next, we'll install kubeadm, kubelet, and kubectl.
Step 2: Installing kubeadm, kubelet, and kubectl
Okay, now that we have Docker installed, let's move on to installing the Kubernetes components: kubeadm, kubelet, and kubectl. These are essential for setting up and managing your Kubernetes cluster. Here's how to install them:
- Update Package Index: Start by updating the package index:

  ```bash
  sudo apt update
  ```

- Install Required Packages: Install packages that allow apt to use a repository over HTTPS:

  ```bash
  sudo apt install apt-transport-https ca-certificates curl
  ```

- Add Kubernetes GPG Key: Add the Kubernetes GPG key to your system:

  ```bash
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  ```

- Add Kubernetes Repository: Add the Kubernetes repository to your apt sources. Note that this legacy apt.kubernetes.io repository has since been deprecated and frozen in favor of pkgs.k8s.io; if these steps fail for you, use the alternative shown right after this list.

  ```bash
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  ```

- Update Package Index Again: Update the package index again to include the new Kubernetes repository:

  ```bash
  sudo apt update
  ```

- Install kubeadm, kubelet, and kubectl: Install the kubeadm, kubelet, and kubectl packages:

  ```bash
  sudo apt install -y kubelet kubeadm kubectl
  ```

- Hold Package Versions: Hold the package versions to prevent accidental upgrades:

  ```bash
  sudo apt-mark hold kubelet kubeadm kubectl
  ```

- Verify Installation: Check the versions of kubeadm, kubelet, and kubectl to ensure they are installed correctly:

  ```bash
  kubeadm version
  kubelet --version
  kubectl version --client
  ```
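As mentioned above, the old community apt repository (apt.kubernetes.io, backed by packages.cloud.google.com) was deprecated in 2023 and no longer receives packages. Here's a sketch of the current pkgs.k8s.io setup, assuming you're targeting the v1.28 minor release; substitute whichever supported version you actually want, since each minor release has its own repository.

```bash
# Add the community-owned Kubernetes apt repository (one repo per minor release)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```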
Repeat these steps on all your servers (both master and worker nodes). Now that you have kubeadm, kubelet, and kubectl installed, you're ready to initialize the Kubernetes cluster on the master node.
Step 3: Initializing the Kubernetes Cluster (Master Node)
Alright, let's get the Kubernetes cluster up and running! This step focuses on initializing the cluster on the master node. This is where the control plane components will be set up.
- Initialize the Kubernetes Cluster: Use the `kubeadm init` command to initialize the cluster. You'll need to specify the pod network CIDR. We'll use Calico as our network plugin, so we'll use its recommended CIDR. Important: Replace `YOUR_MASTER_NODE_IP` with the actual IP address of your master node.

  ```bash
  sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=YOUR_MASTER_NODE_IP
  ```

- Configure kubectl: After the initialization is complete, you'll see instructions on how to configure kubectl to interact with the cluster. Follow these instructions to set up your user context.

  ```bash
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```

- Save the Join Command: The `kubeadm init` command will also output a `kubeadm join` command. Save this command, as you'll need it to join the worker nodes to the cluster. It will look something like this:

  ```bash
  kubeadm join YOUR_MASTER_NODE_IP:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  ```

  Keep this command handy. You'll need it in the next step.

- Install a Pod Network Add-on: Kubernetes requires a pod network add-on to enable communication between pods. We'll use Calico, a popular and powerful networking solution. Apply the Calico manifest:

  ```bash
  kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  ```

- Verify Node Status: Check the status of the nodes to ensure the master node is ready:

  ```bash
  kubectl get nodes
  ```

  The master node should be in the `Ready` state. It might take a few minutes for the status to become `Ready` as the Calico pods get deployed.
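Two small extras that tend to save time at this point: if you lose the join command (or the token expires, which it does after 24 hours by default), you can regenerate it on the master at any time, and watching the kube-system pods is the quickest way to see the control plane and Calico come up.

```bash
# Regenerate the worker join command at any time (tokens expire after 24 hours by default)
sudo kubeadm token create --print-join-command

# Watch the control-plane and Calico pods start; the node flips to Ready once calico-node is Running
kubectl get pods -n kube-system -o wide --watch
```

If the docs.projectcalico.org URL in the step above has moved by the time you read this, the same calico.yaml manifest is published with each Calico release in the projectcalico GitHub repository.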
With the Kubernetes cluster initialized on the master node and the pod network add-on installed, you're ready to join the worker nodes to the cluster.
Step 4: Joining Worker Nodes to the Cluster
Now that the master node is set up, let's add the worker nodes to the cluster. This is where the magic happens, as the worker nodes will run your applications.
- Run the Join Command: On each worker node, run the `kubeadm join` command that you saved in the previous step (as root, or prefixed with sudo). This command will configure the worker node to connect to the master node.

  ```bash
  sudo kubeadm join YOUR_MASTER_NODE_IP:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  ```

- Verify Node Status (Master Node): Back on the master node, check the status of the nodes again to ensure the worker nodes have joined the cluster:

  ```bash
  kubectl get nodes
  ```

  You should see all the worker nodes in the `Ready` state. Again, it might take a few minutes for the status to become `Ready` as the necessary components get deployed on the worker nodes.
If the worker nodes don't join the cluster, double-check the kubeadm join command for any typos or errors. Also, make sure that the worker nodes can communicate with the master node over the network.
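In practice, the most common cause of a failed join is a firewall blocking the required ports. The port list below follows the kubeadm documentation; the ufw commands are only needed if ufw is active on your servers, and the nc check assumes netcat is installed.

```bash
# From a worker node: confirm the API server port on the master is reachable
nc -vz YOUR_MASTER_NODE_IP 6443

# If ufw is enabled, open the ports kubeadm expects.
# On the master:
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler
# On the workers:
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services
# Calico may additionally need 179/tcp (BGP) and/or 4789/udp (VXLAN), depending on its mode.
```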
With the worker nodes joined to the cluster, you now have a fully functional Kubernetes cluster ready to deploy applications.
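One cosmetic touch before deploying anything: freshly joined workers show up with ROLES set to <none> in `kubectl get nodes`. If you'd like them labeled, you can add a node-role label yourself; `worker-1` below is a placeholder for your actual node name.

```bash
# Label a worker so "kubectl get nodes" shows a role for it (worker-1 is a placeholder name)
kubectl label node worker-1 node-role.kubernetes.io/worker=worker
kubectl get nodes
```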
Step 5: Deploying a Sample Application
Let's put our new Kubernetes cluster to the test by deploying a sample application. We'll deploy a simple Nginx deployment to verify that everything is working correctly.
- Create a Deployment: Create a deployment YAML file named `nginx-deployment.yaml` with the following content:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      app: nginx
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
  ```

- Apply the Deployment: Apply the deployment using kubectl:

  ```bash
  kubectl apply -f nginx-deployment.yaml
  ```

- Create a Service: Create a service to expose the Nginx deployment. Create a service YAML file named `nginx-service.yaml` with the following content:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-service
  spec:
    selector:
      app: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    type: LoadBalancer
  ```

- Apply the Service: Apply the service using kubectl:

  ```bash
  kubectl apply -f nginx-service.yaml
  ```

- Verify the Deployment and Service: Check the status of the deployment and service:

  ```bash
  kubectl get deployments
  kubectl get services
  ```

  The deployment should show the desired number of replicas and the current number of replicas. The service should get an external IP address if you're running on a cloud provider with a load balancer integration; it might take a few minutes for that IP to appear. On a bare-metal cluster like the one built in this guide, the EXTERNAL-IP will stay pending; see the note after this list for ways to reach the application anyway.

- Access the Application: Access the Nginx application using the external IP address of the service in your web browser. You should see the default Nginx welcome page.
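A quick note for bare-metal clusters: without a cloud controller (or an add-on such as MetalLB), nothing will ever fill in that EXTERNAL-IP. Two easy alternatives, sketched below, are the NodePort that a LoadBalancer service allocates anyway, or a temporary port-forward.

```bash
# Option 1: use the NodePort allocated for the service (shown as 80:<nodeport>/TCP)
kubectl get service nginx-service
# then browse to http://<any-node-ip>:<nodeport>

# Option 2: temporary port-forward from wherever kubectl is configured
kubectl port-forward service/nginx-service 8080:80
# then browse to http://localhost:8080

# Bonus: scale the deployment and confirm pods land on the worker node(s)
kubectl scale deployment nginx-deployment --replicas=4
kubectl get pods -o wide
```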
Congratulations! You've successfully deployed a sample application on your Kubernetes cluster.
Conclusion
And there you have it, folks! You've successfully installed a Kubernetes cluster on Ubuntu 20.04 and deployed a sample application. This is just the beginning of your Kubernetes journey. There's a whole world of possibilities waiting for you to explore, from deploying complex applications to managing your infrastructure at scale.
Remember to keep practicing and experimenting with different Kubernetes features and tools. The more you use it, the more comfortable you'll become with it. And don't hesitate to reach out to the Kubernetes community for help and guidance. They're a super friendly and supportive bunch.
Happy Kuberneting!