Running Kubernetes on Proxmox is a powerful way to manage container workloads in your own virtual environment. This guide shows you how to build a stable Kubernetes setup from scratch using virtual machines hosted on Proxmox. Whether you’re setting up a dev lab, a CI pipeline or a small-scale production cluster, we’ll walk you through everything from prerequisites to load balancer configuration.

Step 1: What you need before you start

Before diving into the setup, make sure your environment meets a few technical requirements. Starting with a clean setup saves you a lot of time and helps avoid configuration errors later.

You’ll need a working Proxmox VE installation. For best performance, Proxmox should be set up as a bare-metal installation. Make sure both the web interface and SSH access are enabled. You’ll need them to run commands, upload images, and automate configurations.

To build a stable Kubernetes cluster, you’ll also need several virtual machines, ideally set up as dedicated Kubernetes nodes:

  • One master node (for the control plane)
  • At least two worker nodes

This setup gives you redundancy and mirrors real-world Kubernetes architecture. For testing, a smaller cluster with one master and one worker is fine.

Your Proxmox host should also have a working bridge interface that lets your virtual machines (VMs) connect to the LAN and the internet. This is essential for downloading updates and installing Kubernetes components.
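For reference, a typical bridge definition on the Proxmox host looks like the following. This is only a sketch of `/etc/network/interfaces`; the addresses and the NIC name `eno1` are examples you’d replace with your own values. The standard Proxmox installer usually creates a bridge like this by default, so you often only need to verify it exists:

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```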

Tip

For production environments, it’s a good idea to automate VM backups using Proxmox Backup Server. This lets you restore nodes quickly and keep downtime to a minimum.
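Proxmox can schedule backups from the web UI (Datacenter → Backup), but the same job can be scripted with vzdump. A minimal sketch, assuming the VM IDs used later in this guide and a hypothetical storage named backup-store:

```bash
#!/usr/bin/env bash
# Sketch: build vzdump backup commands for the cluster VMs.
# "backup-store" is a placeholder - use your own backup storage/datastore name.
build_backup_cmd() {
  local vmid="$1"
  echo "vzdump ${vmid} --storage backup-store --mode snapshot --compress zstd"
}

for vmid in 101 102 103; do
  build_backup_cmd "$vmid"   # on the Proxmox host, run these directly or via cron
done
```

Snapshot mode backs up running VMs without downtime, which is usually what you want for cluster nodes.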

Step 2: Download the cloud image and create a VM template

The easiest way to install Kubernetes is by using cloud images: preconfigured OS images (like Ubuntu or Debian) optimised for cloud-init automation. In this guide, we’ll be using Ubuntu 22.04 LTS, owing to its stability, clear documentation and easy integration with Kubernetes.

Begin by logging in to your Proxmox host via SSH. Then switch to the directory where Proxmox stores ISO and image files. You’ll download the latest Ubuntu cloud image to this location:

```bash
cd /var/lib/vz/template/iso
```

Download the Ubuntu cloud image:

```bash
wget -O ubuntu-22.04-server-cloudimg-amd64.img https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
```
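It’s worth verifying the image before using it. Ubuntu publishes a SHA256SUMS file in the same release directory; for the real image you’d download it and run `sha256sum -c SHA256SUMS --ignore-missing` in the image directory. The pattern is sketched here with a locally generated demo file so the mechanics are visible:

```bash
# Sketch of the checksum-verification pattern used for the cloud image.
# demo.img stands in for the real download here.
printf 'demo' > demo.img
sha256sum demo.img > SHA256SUMS.demo
sha256sum -c SHA256SUMS.demo   # prints "demo.img: OK" when the file is intact
```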
Note

Alternatively, download the image locally and transfer it using scp (Secure Copy):

```bash
scp ubuntu-22.04-server-cloudimg-amd64.img root@<proxmox-ip>:/var/lib/vz/template/iso/
```

Now create a base VM to use as a reusable template. Start by creating an empty VM with a unique ID, such as 9000, and assign it basic hardware resources:

```bash
qm create 9000 --name ubuntu-template --memory 2048 --net0 virtio,bridge=vmbr0
```

Now import the downloaded image as a disk into your Proxmox storage (here, local-lvm):

```bash
qm importdisk 9000 /var/lib/vz/template/iso/ubuntu-22.04-server-cloudimg-amd64.img local-lvm
```

Next, attach the imported disk to the VM and set the correct controller for it. This step connects the image to the virtual SCSI controller used by the VM:

```bash
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
```

To automatically assign IP addresses, hostnames and SSH keys when cloning the VMs, you’ll need to add a Cloud-Init drive. This drive stores the configuration data that Proxmox applies each time the VM boots. Use the following commands to add the Cloud-Init drive and define the boot order:

```bash
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot c --bootdisk scsi0
```

Then enable the QEMU guest agent so that Proxmox can read status information from the VM. It’s also a good idea to enable a serial console. This gives you low-level access to the VM in case of an emergency:

```bash
qm set 9000 --agent 1
qm set 9000 --serial0 socket --vga serial0
```

With the setup complete, it’s time to convert the virtual machine into a template. In Proxmox, templates act as reusable blueprints, meaning you can create as many clones from them as you need. This makes them ideal for setting up your Kubernetes nodes on Proxmox.

```bash
qm template 9000
```

Your Ubuntu template is now ready. You’ll use it as the foundation for both your master and worker nodes.

Step 3: Clone the master and worker VMs

In this step, you’ll clone the virtual machines from the Ubuntu template you set up earlier. These cloned VMs will act as the master and worker nodes of your Kubernetes cluster. Each VM should have its own IP address, unique hostname, and SSH key for security. You don’t need to configure anything manually inside the VMs, since Proxmox takes care of the base configuration through cloud-init.

Start by cloning the base template (in this example, ID 9000) to create three virtual machines: one for the master and two for the worker nodes. You can also configure CPU and memory individually for each VM:

```bash
qm clone 9000 101 --name k8s-master-1 --full true
qm set 101 --cores 2 --memory 4096
qm clone 9000 102 --name k8s-worker-1 --full true
qm set 102 --cores 2 --memory 4096
qm clone 9000 103 --name k8s-worker-2 --full true
qm set 103 --cores 2 --memory 4096
```

Next, use cloud-init to configure the hostname, IP address and SSH key for each VM. You can either assign static IPs or use DHCP. This example uses static addressing:

```bash
# Configure the master
qm set 101 --ipconfig0 ip=192.168.1.10/24,gw=192.168.1.1
qm set 101 --sshkey "$(cat ~/.ssh/id_rsa.pub)"
qm set 101 --ciuser ubuntu
qm set 101 --nameserver 192.168.1.1
qm set 101 --description "K8s Master 1"
# Configure the workers
qm set 102 --ipconfig0 ip=192.168.1.11/24,gw=192.168.1.1
qm set 102 --sshkey "$(cat ~/.ssh/id_rsa.pub)"
qm set 102 --ciuser ubuntu
qm set 103 --ipconfig0 ip=192.168.1.12/24,gw=192.168.1.1
qm set 103 --sshkey "$(cat ~/.ssh/id_rsa.pub)"
qm set 103 --ciuser ubuntu
```
Note

Make sure the IP addresses match your local network: use free addresses from your LAN’s subnet (ideally outside your router’s DHCP range) and assign a unique one to each VM.
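With more than a handful of nodes, typing the qm set lines by hand gets error-prone. One way to keep the VM-to-IP mapping in a single place is a small generator like the following sketch; the IDs and addresses match the examples above, so adjust them to your network:

```bash
#!/usr/bin/env bash
# Sketch: render the cloud-init network commands from one VM-to-IP map.
# Associative arrays require bash 4 or newer.
declare -A NODE_IP=( [101]=192.168.1.10 [102]=192.168.1.11 [103]=192.168.1.12 )

for vmid in "${!NODE_IP[@]}"; do
  # Print the commands; on the Proxmox host you could pipe this into sh.
  echo "qm set ${vmid} --ipconfig0 ip=${NODE_IP[$vmid]}/24,gw=192.168.1.1"
done
```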

Finally, start all three virtual machines:

```bash
qm start 101
qm start 102
qm start 103
```

Wait a moment for the VMs to finish booting, then test the connection via SSH. Use the following command to connect to the master node:

```bash
ssh ubuntu@192.168.1.10
```

Step 4: Apply the base configuration on all virtual machines

Before installing Kubernetes, make a few system-wide changes on each VM: disable swap, adjust kernel settings for networking and IP forwarding, and sync the system clock. Doing so helps Kubernetes run reliably and ensures the containers can communicate with each other over the network.

Kuber­netes requires swap to be disabled for its scheduler to work properly. You should also remove the swap entry in /etc/fstab so it’s not re-activated on reboot:

```bash
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```

Next, configure the kernel so that network traffic between containers and nodes is processed correctly:

```bash
# Load the br_netfilter module so the bridge sysctls below are available
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the changes
sudo sysctl --system
```

Kubernetes components and certificates rely on the system time being accurate. To keep the clock in sync, install and start chrony:

```bash
sudo apt update && sudo apt install -y chrony
sudo systemctl enable --now chrony
```

Finally, install some basic tools that you’ll need later:

```bash
sudo apt install -y curl apt-transport-https ca-certificates gnupg lsb-release
```

At this point, each node should have swap disabled, networking configured and system time synced. This means your VMs are now ready for the Kubernetes installation and cluster setup.
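A quick way to confirm the base configuration took effect is a small check run on each node. This is a sketch that only reads system state; check_node is a name chosen for illustration:

```bash
#!/usr/bin/env bash
# Sketch: sanity-check a node before installing Kubernetes.
check_node() {
  # Swap must be fully off for the Kubernetes scheduler.
  if swapon --show 2>/dev/null | grep -q .; then
    echo "WARN: swap is still active"
  else
    echo "OK: swap disabled"
  fi
  # IP forwarding must be enabled for pod traffic.
  if [ "$(sysctl -n net.ipv4.ip_forward 2>/dev/null)" = "1" ]; then
    echo "OK: IP forwarding enabled"
  else
    echo "WARN: IP forwarding not set"
  fi
}

check_node
```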

Step 5: Choosing a Kubernetes distribution

Before installing Kubernetes itself, you’ll need to choose the distribution that best fits your setup. In this guide, we’ll focus on two popular options:

  • RKE2 (Rancher Kubernetes Engine 2): RKE2 is a full-featured, production-grade Kubernetes distribution developed by Rancher. It’s a solid choice if you plan to use the Rancher management interface or want to run a cluster with multiple control plane nodes.
  • k3s: k3s is a lightweight Kubernetes distribution designed for test environments, home labs and systems with limited resources. It’s easy to install and uses less memory and CPU than a full Kubernetes setup.

For this guide, we’ll use RKE2, as it’s well suited to building a robust cluster that can scale beyond testing if needed. If you’re just experimenting or setting up a quick dev environment, you might prefer k3s, but the installation process will differ slightly.

Step 6: Install RKE2 on the master node

With the basic setup complete, you can now install RKE2 on your master node. Start by connecting to the master via SSH:

```bash
ssh ubuntu@192.168.1.10
```

Next, download and run the RKE2 installation script. To install a specific version, set the channel as follows:

```bash
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_CHANNEL=v1.28 bash -
```

Once installed, enable and start the RKE2 server service:

```bash
sudo systemctl enable --now rke2-server.service
```

Use the following command to check if the service is running correctly:

```bash
sudo systemctl status rke2-server
```

To manage the Kubernetes cluster from your local machine, copy the kubeconfig file. First make it readable on the master, then copy it over from your workstation:

```bash
# On the master: make the kubeconfig readable
sudo chmod 644 /etc/rancher/rke2/rke2.yaml
# On your local machine: copy it over
scp ubuntu@192.168.1.10:/etc/rancher/rke2/rke2.yaml ~/rke2-kubeconfig
```

Update the file to match the master node’s IP so kubectl connects to the correct server:

```bash
sed -i 's/127.0.0.1:6443/192.168.1.10:6443/' ~/rke2-kubeconfig
export KUBECONFIG=~/rke2-kubeconfig
```

Use the following command to check the connection to the master node:

```bash
kubectl get nodes
```

If the master appears, the installation was successful. You’re now ready to add the worker nodes.

Step 7: Install the RKE2 agent on the worker nodes

With the master node running, it’s time to add the workers. To do so, you’ll need to install the RKE2 agent on each worker and connect them to the master.

Start by retrieving the node token from the master node. You’ll need this token to authenticate the worker nodes when they join the cluster:

```bash
sudo cat /var/lib/rancher/rke2/server/node-token
```

Make a note of the token. You’ll need to use it on each worker node.

Next, connect to a worker node via SSH:

```bash
ssh ubuntu@192.168.1.11
```

Download the RKE2 installation script and install the agent:

```bash
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_CHANNEL=v1.28 INSTALL_RKE2_TYPE="agent" sh -
```

Then create the config file that connects the worker to the master and includes the token:

```bash
sudo mkdir -p /etc/rancher/rke2
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
server: https://192.168.1.10:9345
token: <INSERT_TOKEN_HERE>
EOF
```
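If you’re joining several workers, a small helper keeps the server address and token in one place instead of pasting them into each file by hand. This is a sketch; make_rke2_config is a name chosen here, and 9345 is RKE2’s registration port:

```bash
#!/usr/bin/env bash
# Sketch: render the RKE2 worker config from the server IP and node token.
make_rke2_config() {
  local server="$1" token="$2"
  printf 'server: https://%s:9345\ntoken: %s\n' "$server" "$token"
}

# On each worker, you would pipe the output into the config file, e.g.:
#   make_rke2_config 192.168.1.10 "$TOKEN" | sudo tee /etc/rancher/rke2/config.yaml
make_rke2_config 192.168.1.10 "example-token"
```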

Finally, enable and start the agent:

```bash
sudo systemctl enable --now rke2-agent.service
```

Repeat these steps for each worker node. After a few minutes, run the following command to confirm they’ve all joined the cluster:

```bash
kubectl get nodes
```

You should now see the master and all worker nodes listed in your Kubernetes cluster. Your setup is complete and ready for network plugins, load balancers and other components.

Step 8: Install the network CNI and load balancer

With the master and worker nodes set up, your cluster needs two final components: a Container Network Interface (CNI) so the pods can communicate with each other, and a load balancer to make services available within your network. This guide uses Calico for networking and MetalLB for Layer 2 load balancing.

Calico handles pod-to-pod communication, assigns IP addresses and can also enforce network policies. Use this command to install it:

```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```

Once installed, check that all Calico pods have started:

```bash
kubectl get pods -n kube-system
```

All of them should show as Running or Completed. If any are still Pending, give them a few minutes; Calico needs time to roll out its network configuration across the cluster.

Kubernetes supports the LoadBalancer service type, which assigns external IPs to services. In a self-hosted environment like Proxmox, this requires a tool like MetalLB. Use the following command to install it:

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
```

Next, create a pool of IP addresses that MetalLB can use when assigning external IPs to your Kubernetes services. Use addresses that fit within your local network:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.200-192.168.1.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adv
  namespace: metallb-system
spec: {}
EOF
```

Check the status of the MetalLB pods:

```bash
kubectl get pods -n metallb-system
```

Once all pods show Running, your cluster is ready to go. Use the LoadBalancer service type to make your apps accessible within your local network. With Kubernetes now running on Proxmox, your setup is ready for deploying and managing applications.
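To see MetalLB in action, you can write a minimal test service manifest and apply it. This is a sketch: the service name hello and the app=hello selector are illustrative, and they assume a deployment with matching labels already exists in the cluster:

```bash
# Sketch: a minimal LoadBalancer service to confirm MetalLB assigns an IP.
cat > hello-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
EOF

# On the cluster: kubectl apply -f hello-lb.yaml, then check the EXTERNAL-IP
# column with: kubectl get svc hello
```

The assigned EXTERNAL-IP should come from the 192.168.1.200-210 pool configured above.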
