Kubernetes Cluster On VirtualBox: A Step-by-Step Guide


Setting up a Kubernetes cluster on VirtualBox is a fantastic way to learn and experiment with Kubernetes without needing dedicated hardware. This guide will walk you through the entire process, from installing the necessary tools to deploying your first application. Let's dive in, guys!

Prerequisites

Before we start, make sure you have the following installed and configured:

  • VirtualBox: You'll need VirtualBox to create and manage the virtual machines that will form your cluster. Download and install the latest version from the VirtualBox website.
  • kubectl: The Kubernetes command-line tool, kubectl, allows you to interact with your cluster. You can download it from the Kubernetes website.
  • Minikube (optional): Minikube is a tool that makes it easy to run a single-node Kubernetes cluster locally. We won't use it for the multi-node cluster in this guide, but it's a handy way to get familiar with Kubernetes concepts. You can install it by following the instructions on the Minikube website.
  • Helm: Helm is a package manager for Kubernetes, making it easier to deploy and manage applications. Download and install Helm from the Helm website.
  • A Package Manager: Make sure you have access to a package manager for your operating system, such as apt for Ubuntu/Debian or brew for macOS. We'll use this to install some necessary tools.
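
For reference, here's one way to install the client-side tools on a macOS host with Homebrew (a sketch; on Ubuntu/Debian the equivalent apt packages work the same way):

    # Install the tools used throughout this guide on the host machine
    brew install --cask virtualbox   # hypervisor for the cluster VMs
    brew install kubectl             # Kubernetes CLI
    brew install helm                # Helm package manager
    brew install minikube            # optional, for single-node experiments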

Step 1: Creating the Virtual Machines

Let's get started by creating the virtual machines that will form our Kubernetes cluster. In this setup, we'll create one master node and two worker nodes. The master node will manage the cluster, and the worker nodes will run our applications.
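
If you'd rather script the VM creation than click through the GUI, VBoxManage can do the same thing. Here's a rough sketch for the master node (the bridged interface name en0 is a placeholder for whatever your host actually uses; the sizes match the steps below):

    # Create and register the master VM
    VBoxManage createvm --name k8s-master --ostype Ubuntu_64 --register
    # 4GB RAM, 2 CPUs, bridged networking on the host interface (placeholder: en0)
    VBoxManage modifyvm k8s-master --memory 4096 --cpus 2 --nic1 bridged --bridgeadapter1 en0
    # 40GB dynamically allocated disk, attached via a SATA controller
    VBoxManage createmedium disk --filename k8s-master.vdi --size 40960
    VBoxManage storagectl k8s-master --name SATA --add sata
    VBoxManage storageattach k8s-master --storagectl SATA --port 0 --device 0 --type hdd --medium k8s-master.vdi

Repeat with adjusted names and memory for k8s-worker-1 and k8s-worker-2 if you go this route.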

  1. Create the Master Node VM:
    • Open VirtualBox and click on "New".
    • Name the VM "k8s-master". Choose Linux as the type and Ubuntu (64-bit) as the version.
    • Allocate at least 4GB of RAM to the VM. Kubernetes can be resource-intensive, and 4GB will give you a comfortable margin.
    • Create a virtual hard disk. VDI (VirtualBox Disk Image) is a good choice. Dynamically allocated is fine.
    • Allocate at least 40GB of storage for the virtual hard disk. This will give you plenty of space for Kubernetes, Docker images, and applications.
  2. Create the Worker Node VMs:
    • Repeat the process above to create two more VMs. Name them "k8s-worker-1" and "k8s-worker-2".
    • Allocate at least 2GB of RAM to each worker node. You can adjust this based on the applications you plan to run.
    • Allocate at least 40GB of storage for each worker node.
  3. Configure Network Settings:
    • For each VM, go to Settings -> Network.
    • Under Adapter 1, select "Bridged Adapter". This will allow your VMs to access your local network and the internet. Make sure to select the correct network interface that is connected to the internet.
  4. Install Ubuntu Server on Each VM:
    • Download the Ubuntu Server ISO image from the Ubuntu website.
    • In VirtualBox, select each VM, go to Settings -> Storage, and add the ISO image to the virtual DVD drive.
    • Start each VM and follow the on-screen instructions to install Ubuntu Server. During the installation, make sure to:
      • Create a user account.
      • Enable OpenSSH server for remote access.

Step 2: Configuring the Nodes

Now that we have our virtual machines set up with Ubuntu Server, we need to configure them to work with Kubernetes. This involves installing Docker, Kubernetes components, and configuring networking.
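
Before diving into the individual steps, it's worth taking care of two kubeadm prerequisites on every VM: swap must be disabled (the kubelet refuses to start with swap on by default), and the kernel must be set up to pass bridged traffic through iptables and to forward IPv4. A minimal sketch, assuming a default Ubuntu Server install:

    # Turn off swap now and keep it off across reboots
    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab

    # Load the kernel modules needed for container networking
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    sudo modprobe overlay
    sudo modprobe br_netfilter

    # Let iptables see bridged traffic and enable IPv4 forwarding
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    sudo sysctl --system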

  1. SSH into Each VM:
    • Find the IP address of each VM. You can usually find this information in your router's administration panel or by using the ip addr command within the VM.
    • Use SSH to connect to each VM from your host machine:
      ssh username@<vm_ip_address>
      
  2. Install Docker:
  • On each VM, run the following commands to install Docker:

    sudo apt update
    sudo apt install docker.io -y
    sudo systemctl start docker
    sudo systemctl enable docker
    sudo usermod -aG docker $USER
    newgrp docker
    

    The usermod command adds your user to the docker group, allowing you to run Docker commands without sudo. The newgrp docker command updates your current session to reflect this change.
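
    One more note before installing the Kubernetes packages: since Kubernetes 1.24 the kubelet no longer talks to Docker directly, it needs a CRI runtime such as containerd (which the docker.io package conveniently installs alongside Docker). On Ubuntu, that containerd ships with its CRI plugin disabled, so regenerate a default config and switch it to the systemd cgroup driver. A sketch, assuming the containerd that came with docker.io:

    # Regenerate containerd's default config (this re-enables the CRI plugin)
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
    # Use the systemd cgroup driver, which is what kubeadm expects on Ubuntu
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd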

  3. Install Kubernetes Components:
  • On each VM, run the following commands to install kubeadm, kubelet, and kubectl. Note that the old apt.kubernetes.io repository has been retired; the packages now live at pkgs.k8s.io, and the minor version in the URL (v1.30 here) should match the Kubernetes release you want to install:

    sudo apt update
    sudo apt install apt-transport-https ca-certificates curl gpg -y
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update
    sudo apt install kubelet kubeadm kubectl -y
    sudo apt-mark hold kubelet kubeadm kubectl


    These commands add the Kubernetes package repository, install the necessary components, and put them on hold so routine apt upgrades don't move them to an incompatible version.

  4. Initialize the Master Node:
  • On the k8s-master VM, initialize the Kubernetes cluster using kubeadm:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    

    The --pod-network-cidr specifies the IP address range for pods in the cluster. It needs to line up with the network plugin we'll install later: Calico's manifest defaults to 192.168.0.0/16, so either use that range here instead, or uncomment the CALICO_IPV4POOL_CIDR setting in calico.yaml in Step 3 and set it to 10.244.0.0/16.

    • After the command completes, it will output a kubeadm join command. Copy this command; you'll need it to join the worker nodes to the cluster.

    • Also, follow the instructions to configure kubectl to connect to the cluster:

    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  5. Join the Worker Nodes:
  • On each worker node (k8s-worker-1 and k8s-worker-2), run the kubeadm join command that you copied from the master node. It will look something like this:

    sudo kubeadm join <master_ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    

    This command tells the worker nodes to connect to the master node and join the cluster.

Step 3: Installing a Network Plugin

Kubernetes requires a network plugin to enable communication between pods. We'll use Calico in this guide, but there are other options available. Calico provides network policy enforcement and efficient networking.

  1. Install Calico:
  • On the k8s-master VM, run the following command to install Calico:

    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    

    This command applies the Calico manifest, which creates the necessary resources in your Kubernetes cluster. It may take a few minutes for all the Calico pods to become ready.

  2. Verify the Cluster:
  • On the k8s-master VM, run the following command to check the status of your nodes:

    kubectl get nodes
    

    You should see your master node and both worker nodes listed, and their status should be Ready.

  • Run the following command to check the status of the pods:

    kubectl get pods --all-namespaces
    

    You should see pods related to Kubernetes system components and Calico. Make sure all the pods are in the Running state.
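
If you'd rather not re-run kubectl get pods until everything settles, kubectl can block until the system pods report Ready (a convenience sketch, assuming the Calico manifest deployed everything into kube-system as it does by default):

    # Wait up to five minutes for all kube-system pods, including Calico, to become Ready
    kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s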

Step 4: Deploying Your First Application

Now that your Kubernetes cluster is up and running, let's deploy a simple application to test it out. We'll deploy a simple Nginx web server.
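
The steps below use quick imperative kubectl commands. The same Deployment could also be written declaratively; here's a minimal sketch you could apply from a heredoc instead of running the create command in step 1:

    # Declarative equivalent of "kubectl create deployment nginx --image=nginx"
    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF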

  1. Create a Deployment:
  • On the k8s-master VM, create a deployment using the following command:

    kubectl create deployment nginx --image=nginx
    

    This command creates a deployment named nginx that uses the nginx Docker image. A deployment ensures that a specified number of pod replicas are running at all times.

  2. Expose the Deployment:
  • Expose the deployment as a service using the following command:

    kubectl expose deployment nginx --port=80 --type=NodePort
    

    This command creates a service that exposes the nginx deployment on port 80. The --type=NodePort option makes the service accessible from outside the cluster on a specific port on each node.

  3. Find the Service Port:
  • Find the port that the service is exposed on using the following command:

    kubectl get service nginx
    

    Look for the NodePort value in the output. It will be a port number between 30000 and 32767.

  4. Access the Application:
  • Open a web browser and navigate to http://<any_node_ip>:<node_port>, replacing <any_node_ip> with the IP address of any of your nodes (master or worker) and <node_port> with the NodePort you found in the previous step. You should see the default Nginx welcome page.
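
If you prefer the command line, you can pull the NodePort out with kubectl's jsonpath output and hit it with curl (a sketch; replace <any_node_ip> as above):

    # Grab the NodePort assigned to the nginx service and fetch the welcome page
    NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
    curl http://<any_node_ip>:${NODE_PORT}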

Step 5: Using Helm to Deploy Applications

Helm makes it much easier to deploy and manage applications on Kubernetes. Let's use Helm to deploy a more complex application.

  1. Verify Helm:
  • On the k8s-master VM, confirm the Helm client is installed:

    helm version


    Helm v3 is a client-only tool: the old helm init command and the Tiller server-side component from Helm v2 are gone, so there's nothing to initialize on the cluster. Helm simply uses your kubectl configuration to talk to the API server.

  2. Add a Helm Repository:
  • Add a Helm repository. The old stable repository has been archived, so we'll use the Bitnami repository, which hosts actively maintained charts:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update


    This adds the Bitnami repository to your Helm configuration, allowing you to install charts from it.

  3. Deploy a Chart:
  • Deploy a chart, such as the WordPress chart:

    helm install my-wordpress bitnami/wordpress


    This command installs the WordPress chart with the release name my-wordpress. Helm downloads the chart from the Bitnami repository and deploys it to your Kubernetes cluster. One caveat: the chart requests persistent volumes, and on a bare VirtualBox cluster with no default StorageClass its pods may sit in Pending; check the chart's documentation for values that disable persistence if that happens.

  4. Get the Application Information:
  • Get the information about the deployed application:

    helm status my-wordpress
    

    This command shows the status of the my-wordpress release, including any notes provided by the chart. The notes often contain instructions on how to access the application.
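
A couple of other standard Helm v3 commands are worth knowing at this point: helm list shows the releases in the current namespace, and a release can be removed cleanly when you're done experimenting:

    # See what Helm has deployed, then tear the release down
    helm list
    helm uninstall my-wordpress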

Troubleshooting

  • Nodes Not Ready: If your nodes are not showing as Ready, check the kubelet logs on each node for errors:

    sudo journalctl -u kubelet
    

    Common issues include networking problems, incorrect kubeadm join command, or resource constraints.

  • Pods Not Running: If your pods are not running, check the pod logs for errors:

    kubectl logs <pod_name> -n <namespace>
    

    Replace <pod_name> with the name of the pod and <namespace> with the namespace the pod is in. You can also describe the pod to see more information:

    kubectl describe pod <pod_name> -n <namespace>
    
  • Networking Issues: If you are having trouble accessing your applications, check the network policies and firewall rules. Make sure that traffic is allowed to the NodePort on your nodes.
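  • Lost or Expired Join Token: The token printed by kubeadm init expires after 24 hours by default. If you need to join a worker later, generate a fresh join command on the master:

    sudo kubeadm token create --print-join-command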

Conclusion

Congratulations! You've successfully set up a Kubernetes cluster on VirtualBox. You've learned how to create virtual machines, install Kubernetes components, configure networking, deploy applications, and use Helm. This is a great foundation for exploring Kubernetes further and experimenting with different features and applications on your own local cluster. Keep exploring, and have fun, guys!