Kubernetes Cluster on Ubuntu 24.04: A Step-by-Step Guide

Hey guys! So, you're looking to dive into the world of Kubernetes and want to set up your own cluster on the latest Ubuntu 24.04? Awesome choice! Ubuntu 24.04 LTS (Noble Numbat) is a fantastic OS, and setting up a Kubernetes cluster on it is totally doable and a super valuable skill to have in your tech arsenal. Whether you're a seasoned pro or just dipping your toes in, this guide is going to walk you through everything you need to know to get your Kubernetes cluster up and running smoothly. We'll break down all the nitty-gritty details, from prerequisites to the final deployment, making sure you understand each step. Get ready to build your own powerful container orchestration system!

Understanding Kubernetes and Why You Need a Cluster

Alright, let's kick things off by talking about what Kubernetes actually is and why building a cluster is such a big deal. At its core, Kubernetes, often abbreviated as K8s, is an open-source system designed for automating the deployment, scaling, and management of containerized applications. Think of it as the ultimate manager for your applications running in containers like Docker. Instead of manually starting, stopping, and updating individual containers, Kubernetes handles all of that for you. It ensures your applications are always running, accessible, and can scale up or down based on demand. It's like having a super-smart assistant that keeps your apps in perfect working order, no matter what.

Now, why a cluster? A cluster is essentially a group of machines (physical or virtual) that work together as a single system to run your containerized applications. This isn't just about having multiple machines; it's about them collaborating. A Kubernetes cluster consists of two types of nodes: a control plane (or master) node and one or more worker nodes. The control plane is the brain of the operation, managing the overall state of the cluster and scheduling applications to run on the worker nodes. The worker nodes are where your actual applications (your containers) live and run. By using a cluster, you gain immense benefits: high availability (if one worker node goes down, your applications can be moved to another), scalability (easily add more worker nodes to handle more traffic), and resilience (Kubernetes can automatically restart failed containers and reschedule pods off failed nodes). Setting up your own cluster, especially on a robust platform like Ubuntu 24.04, gives you full control and a deep understanding of how these powerful systems operate. It’s your own playground to experiment, learn, and deploy with confidence.

Prerequisites for Your Ubuntu 24.04 Kubernetes Cluster

Before we get our hands dirty with installing Kubernetes on Ubuntu 24.04, let's make sure you've got all your ducks in a row. Having the right prerequisites in place will make the whole setup process a breeze, trust me. So, what do you need? First off, you'll need at least two Ubuntu 24.04 machines. One will act as your control plane node (the boss), and the other(s) will be your worker nodes (the employees). These can be physical servers, virtual machines (like those you'd run with VirtualBox or VMware), or even cloud instances. For this guide, we'll assume you're using VMs, which is super common for testing and learning.

Each of these machines needs to meet some basic requirements. They should have at least 2 GB of RAM and 2 CPU cores each. While you can sometimes get away with less, this is a good starting point to avoid headaches — kubeadm's preflight checks actually require 2 CPUs on the control plane node. Also, ensure they have a stable internet connection, as we'll be downloading packages and images. Crucially, all nodes must be able to communicate with each other over the network. This means they should be on the same network, or have appropriate firewall rules configured to allow communication on specific ports (see the list just below).
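Speaking of ports, here's a rough firewall sketch based on the upstream Kubernetes port reference, using ufw. Treat it as a lab-grade starting point, not a hardened policy — adjust it to your topology, and skip it entirely if your VMs share a trusted private network with no firewall in between:

sudo ufw allow 6443/tcp # Kubernetes API server (control plane)
sudo ufw allow 2379:2380/tcp # etcd client/peer (control plane)
sudo ufw allow 10250/tcp # kubelet API (all nodes)
sudo ufw allow 10257/tcp # kube-controller-manager (control plane)
sudo ufw allow 10259/tcp # kube-scheduler (control plane)
sudo ufw allow 30000:32767/tcp # NodePort services (worker nodes)
sudo ufw allow 8472/udp # Flannel VXLAN overlay (all nodes, if you use Flannel)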

On the software side, you'll need SSH access to all your nodes. This allows you to remotely manage them, which is essential for installing and configuring Kubernetes. Make sure you can ssh into each machine from your main workstation. Another critical step is disabling swap on all nodes. The kubelet doesn't play well with swap enabled, as it can lead to unexpected scheduling behavior and performance issues. You'll need to run a command to turn it off and make sure it stays off after reboots. Finally, you'll need a container runtime, because Kubernetes needs one to manage the lifecycle of containers. containerd is a popular, lightweight choice and is what we'll install in this guide; Docker Engine can also work, but since Kubernetes dropped built-in Docker support (dockershim) you'd need the separate cri-dockerd shim, so containerd is the simpler path. Don't sweat it if this sounds like a lot; we'll guide you through each of these steps with clear commands. Just make sure your Ubuntu 24.04 installations are up-to-date with sudo apt update && sudo apt upgrade -y before we begin!

Installing Container Runtime: containerd on Ubuntu 24.04

Alright team, time to get our hands dirty with the first piece of crucial software: the container runtime. Kubernetes needs something to actually run your containers, and containerd is a fantastic, lightweight choice that integrates seamlessly. Ubuntu 24.04 makes installing it super straightforward. Let's get this done on all your nodes – both the one you'll designate as your control plane and all your worker nodes. This step ensures that every machine in your cluster is ready to host containers.

First, let's make sure your system is fully updated. This is a good habit for any new setup. Open up your terminal on each node and run:

sudo apt update && sudo apt upgrade -y

Now, we need to install the containerd package itself. Ubuntu 24.04 has it readily available in its repositories. So, on each node, execute:

sudo apt install -y containerd

This command will download and install containerd along with any necessary dependencies. Once it's installed, we need to configure it properly for Kubernetes — specifically, containerd must use the systemd cgroup driver, which is what the kubelet expects on a systemd-based distro like Ubuntu. We'll generate a default configuration file first, and then we'll edit it.

Run this command to create the default configuration file:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Now, we need to edit this config.toml file. The most important change is to set the SystemdCgroup option to true. This tells containerd to use the systemd cgroup driver, which is what Kubernetes expects. Open the file with your favorite text editor (like nano or vim):

sudo nano /etc/containerd/config.toml

Inside the file, find the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section. You'll see a line SystemdCgroup = false. Change it to true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
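By the way, if you'd rather script this change than edit by hand, a sed one-liner like the following should flip the flag in the freshly generated default config — verify with grep SystemdCgroup /etc/containerd/config.toml afterwards:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml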

If you edited the file by hand, save it and exit the editor. Either way, restart the containerd service to apply the new configuration:

sudo systemctl restart containerd

And finally, let's enable the containerd service so it starts automatically when your nodes boot up:

sudo systemctl enable containerd
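While we're prepping nodes, there's one more piece of setup that kubeadm's preflight checks expect: the overlay and br_netfilter kernel modules must be loaded, and IP forwarding must be enabled. The snippet below, adapted from the upstream kubeadm installation docs, persists both settings across reboots — run it on every node:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system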

Boom! You've successfully installed and configured containerd on all your nodes. This is a huge step, and it means your machines are now ready to pull and run container images, which is exactly what Kubernetes will ask them to do. Give yourselves a pat on the back, guys!

Installing Kubernetes Components: kubeadm, kubelet, and kubectl

With containerd all set up, we're ready to install the core Kubernetes components: kubeadm, kubelet, and kubectl. These are the tools that will allow us to bootstrap and manage our cluster. We need to install these on all nodes in the cluster, just like we did with containerd.

Kubernetes components are packaged in a way that's easily installable on Ubuntu. We'll be adding the official Kubernetes package repository to your system. First, let's ensure our system is ready to accept packages from new repositories by installing some necessary tools:

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg

Next, we need to download the public signing key for the community-hosted Kubernetes package repositories (pkgs.k8s.io). This key is used to verify the authenticity of the Kubernetes packages. (If /etc/apt/keyrings doesn't exist on your system, create it first with sudo mkdir -p /etc/apt/keyrings.)

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Now, we add the Kubernetes repository itself to our APT sources. Note the v1.30 in the path: each Kubernetes minor release gets its own repository on pkgs.k8s.io, so this pins you to the v1.30 package line (swap in a newer version if you want one). This tells your system where to find the Kubernetes packages:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

It's essential to update your package list again so that APT knows about the new repository:

sudo apt update

Now, we can install the actual Kubernetes tools. We'll install kubeadm (for bootstrapping the cluster), kubelet (the agent that runs on each node), and kubectl (the command-line tool for interacting with the cluster). We'll also hold back their versions to prevent accidental upgrades that might break the cluster.

sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The apt-mark hold command is super important here. It tells APT not to automatically upgrade these packages. This is crucial for cluster stability, as breaking changes can occur between Kubernetes versions. If you ever do want to upgrade, you'll need to explicitly unhold them first.
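For reference, when you do decide to upgrade deliberately, the unhold is simply:

sudo apt-mark unhold kubelet kubeadm kubectl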

After installation, you might notice the kubelet service restarting every few seconds in a crash loop. This is normal: kubelet is waiting for kubeadm to hand it a configuration, which happens when you initialize the control plane (or join a node). You can verify the installation by checking the versions:

kubeadm version
kubectl version --client

And that's it! You've now got the essential Kubernetes building blocks installed on all your nodes. This sets the stage perfectly for the next, most exciting step: initializing our control plane and bringing the cluster to life!

Initializing the Kubernetes Control Plane

Alright folks, this is where the magic happens! We're about to initialize the control plane on our designated master node. This node will coordinate all the activities in your cluster. We'll be using kubeadm for this, which is the official tool designed to get a Kubernetes cluster up and running quickly.

Before we run the kubeadm init command, there's one crucial step we need to complete on all nodes, including the worker nodes. We need to disable swap. Kubernetes relies on the operating system's memory management, and swap can interfere with its scheduling and performance. If you haven't done this already, run the following commands on every node:

sudo swapoff -a

To make this change permanent across reboots, you need to edit the /etc/fstab file. Open it with your favorite editor:

sudo nano /etc/fstab

Comment out the line that refers to swap (it usually looks something like /swap.img ...). Just add a # at the beginning of the line. Save and exit.
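If you'd like to script that edit instead, a sed one-liner along these lines comments out any swap entry — it assumes a typical fstab layout, so eyeball the file afterwards to confirm it did what you wanted:

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab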

Now, back to our control plane node! Let's initialize it. You'll need to specify a Pod network CIDR. This is a private IP address range that Kubernetes will use to assign IPs to your pods. A common choice is 10.244.0.0/16 for Flannel, or 192.168.0.0/16 if you plan to use Calico. We'll use 10.244.0.0/16 for this example, assuming you'll use Flannel later.

Execute the following command on your control plane node only:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
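A quick note on an optional flag: if your control plane node has more than one network interface (common with VirtualBox NAT plus host-only adapters), kubeadm may advertise the wrong IP. You can pin it explicitly with --apiserver-advertise-address — the address below is just a placeholder for your control plane node's actual IP:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.10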

This command will take a few minutes to complete. kubeadm will set up all the necessary components for the control plane, configure kubelet, and generate a join command for your worker nodes. Once it's done, you'll see a success message, and crucially, a command to configure kubectl for your user, and another command with a token to join your worker nodes to the cluster.

Save that kubeadm join command! It's unique to your cluster and you'll need it very soon. It will look something like:

sudo kubeadm join <control-plane-ip>:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-hash>

After kubeadm init finishes, you'll need to configure kubectl to talk to your new cluster. The easiest way is to run the following as your regular (non-root) user — sudo appears inside the commands only to copy the admin config and hand ownership over to you:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
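Alternatively, if you happen to be working as the root user, you can skip the copy entirely and point kubectl straight at the admin config:

export KUBECONFIG=/etc/kubernetes/admin.conf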

Now you can test if kubectl is working:

kubectl get nodes

You should see your control plane node listed, but it will likely be in a NotReady state. That's because we haven't installed a pod network add-on yet. That's our next mission!

Deploying a Pod Network Add-on (Flannel)

Your control plane is up and running, but your nodes are in a NotReady state, right? That's totally normal because, as we mentioned, Kubernetes needs a pod network to allow communication between pods on different nodes. Think of it as the internal highway system for your applications. Without it, pods can't talk to each other effectively across the cluster. One of the most popular and straightforward choices for this is Flannel.

Flannel provides a simple overlay network that covers all nodes in your cluster. It's easy to set up and works great for most use cases. We'll deploy Flannel using a Kubernetes manifest file, which is a YAML file that describes the desired state of your cluster objects.

First, you need to make sure you're still on your control plane node and that your kubectl is configured correctly (you should be able to run kubectl get nodes and see your control plane node). Now, let's download the Flannel manifest file:

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

This command downloads the latest version of the Flannel configuration directly from its GitHub repository. It's a good practice to review this file to understand what it does. One thing worth checking: the net-conf.json section of its ConfigMap defaults to the 10.244.0.0/16 network, which matches the --pod-network-cidr we passed to kubeadm init. If you chose a different range, edit that value before applying.

Now, apply the manifest to your cluster using kubectl:

kubectl apply -f kube-flannel.yml

This single command tells Kubernetes to create all the necessary resources (a Namespace, a ServiceAccount, RBAC rules, a ConfigMap, and a DaemonSet) to get Flannel running. The DaemonSet places a Flannel pod on every node in your cluster, and these pods configure the network interfaces.

It might take a minute or two for Flannel to initialize and for the network to become fully functional. Recent Flannel manifests install into their own kube-flannel namespace (older ones used kube-system), so monitor the pods there:

kubectl get pods -n kube-flannel

You should see pods related to Flannel starting up and eventually reaching a Running state. Once Flannel is running, your nodes should transition from NotReady to Ready. Let's check!

kubectl get nodes

Success! You should now see your control plane node listed with a Ready status. If you have multiple worker nodes, you'll see them here too once they've joined. This step is critical because it enables inter-node communication, which is fundamental for a functioning Kubernetes cluster. You've just set up the networking that allows your applications to scale and communicate seamlessly across your cluster. Pretty neat, huh?

Joining Worker Nodes to the Cluster

We're in the home stretch, guys! We've got our control plane humming, and our network is set up. Now it's time to bring our worker nodes into the fold. These are the machines that will actually run your application pods. Remember that kubeadm join command we saved earlier? This is where we use it!

Log into each of your worker nodes one by one. Ensure that you've performed the prerequisites on them as well: updated packages, installed containerd, disabled swap, and installed kubeadm, kubelet, and kubectl (though kubectl is mainly for the control plane, kubelet and containerd are essential on workers).

Now, on each worker node, paste the kubeadm join command you copied from the output of kubeadm init on your control plane. It will look something like this:

sudo kubeadm join <control-plane-ip>:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-hash>

Run this command with sudo. This command tells the worker node to connect to the control plane, register itself as a worker, and start running the kubelet agent.

This join process usually takes less than a minute. Once it completes successfully, the worker node will be ready to receive pods.
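One gotcha worth knowing: bootstrap tokens expire after 24 hours by default. If you add a worker later (or simply lose the join command), you can generate a fresh one on the control plane node:

sudo kubeadm token create --print-join-command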

After you've joined all your worker nodes, head back to your control plane node. You can now run kubectl get nodes again. You should see all your worker nodes appearing in the list, and they should all be in a Ready state. If they don't appear immediately, give it a minute or two, as it can take a little time for the control plane to recognize new nodes.

kubectl get nodes -o wide

The -o wide flag is super useful here, as it shows you the internal and external IP addresses of your nodes, which can be handy for debugging.

Congratulations! You have successfully joined your worker nodes to the Kubernetes cluster. You now have a multi-node Kubernetes cluster running on Ubuntu 24.04. This means you can start deploying applications, and Kubernetes will intelligently distribute them across your available worker nodes. You've built your own foundation for scalable and resilient applications. Awesome job, team!

Testing Your Kubernetes Cluster: Deploying a Sample Application

Alright, you've built it, and now it's time to test it! What good is a Kubernetes cluster if we don't deploy something to it, right? We'll deploy a simple, classic application: a small web server, like Nginx. This will prove that your cluster is functioning correctly, pods are starting, and you can access your application.

We'll use kubectl from your control plane node to deploy Nginx. We'll create a Deployment object, which tells Kubernetes how many replicas (copies) of our Nginx pod we want, and a Service object, which provides a stable IP address and DNS name to access our application, even if the pods underneath change.

First, let's create a Deployment for Nginx. We want, say, 3 replicas of the Nginx pod. Save this as nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Now, apply this deployment to your cluster:

kubectl apply -f nginx-deployment.yaml

This will create the Deployment object, and Kubernetes will start creating 3 Nginx pods. You can check their status:

kubectl get pods -l app=nginx

You should see 3 pods running. If any are stuck in Pending or CrashLoopBackOff, inspect them with kubectl describe pod <pod-name> and check the container logs with kubectl logs <pod-name>.
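This is also a fun moment to watch Kubernetes reconcile desired state. Scale the Deployment up, watch extra pods appear, then scale it back down:

kubectl scale deployment nginx-deployment --replicas=5
kubectl get pods -l app=nginx
kubectl scale deployment nginx-deployment --replicas=3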

Next, let's create a Service to expose our Nginx deployment. A LoadBalancer-type Service is the usual choice on cloud providers, but on a local cluster there's nothing to provision the external load balancer, so NodePort is the easier option: it exposes the service on a fixed port on every node. That's what we'll use here.

Save this as nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080 # You can choose a port between 30000-32767
  type: NodePort

Apply this service:

kubectl apply -f nginx-service.yaml

Now, you can access your Nginx web server! Open a web browser and go to http://<your-node-ip>:30080. Replace <your-node-ip> with the IP address of any of your nodes (control plane or worker). You should see the default Nginx welcome page! This confirms that your pods are running, the network is working, and your service is correctly routing traffic.
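You can also check from the command line of any machine that can reach your nodes, using the same node IP placeholder:

curl http://<your-node-ip>:30080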

To clean up later, you can delete the deployment and service:

kubectl delete deployment nginx-deployment
kubectl delete service nginx-service

Testing your cluster with a sample application is the best way to ensure everything is configured correctly. You've successfully deployed and accessed an application on your brand-new Kubernetes cluster on Ubuntu 24.04! High five!

Conclusion: Your Kubernetes Journey Begins!

And there you have it, folks! You've just successfully set up your very own Kubernetes cluster on Ubuntu 24.04 LTS. We've covered a lot of ground, from understanding the basics of Kubernetes and its architecture to installing containerd, configuring the core Kubernetes components (kubeadm, kubelet, kubectl), initializing the control plane, setting up the crucial pod network with Flannel, and finally, joining your worker nodes and deploying a sample application. It's a comprehensive journey, and you should feel incredibly proud of what you've accomplished!

Having your own Kubernetes cluster is an invaluable asset for learning, experimenting, and even for small-scale production environments. You now have a robust platform to deploy, manage, and scale your containerized applications with confidence. The skills you've gained here are highly sought after in the tech industry, and this is just the beginning of your cloud-native adventure.

Remember, the world of Kubernetes is vast and constantly evolving. This setup is a fantastic starting point. You can now explore more advanced topics like persistent storage, advanced networking, security, monitoring, and different deployment strategies. Don't be afraid to experiment, break things (and fix them!), and keep learning. The Kubernetes community is huge and incredibly supportive, so leverage resources like the official documentation, online forums, and community meetups.

Thanks for following along on this guide. Setting up a Kubernetes cluster can seem daunting at first, but with a structured approach and a bit of perseverance, it's totally achievable. Now go forth and orchestrate some amazing applications! Happy containerizing, everyone!