Create Kubernetes cluster with Ansible, Vagrant and kubeadm
Posted September 25, 2021 by Ivan Magdić ‐ 6 min read
In this post we'll look into Kubernetes and how to create a Kubernetes cluster with one master and two worker nodes using Vagrant and Ansible.
Introduction
There are several ways to create a Kubernetes cluster, and in this blog post we will explain two of them. The first is suitable for testing, local development, and learning how Kubernetes works in general. The second is more production-ready and uses Kubespray, where we only need to provision the nodes and add their IPs to an inventory file. Both approaches use the kubeadm tool to install Kubernetes.
Check out my GitHub repository, which contains all the files used in this blog post.
Objectives
- provision one master node and two worker nodes using Vagrant
- configure the nodes and install Kubernetes using Ansible (our own playbooks and Kubespray)
- join worker nodes to the cluster
Getting started
Prerequisites
- Vagrant
- VirtualBox
- Ansible
Arch Linux installation
sudo pacman -Sy vagrant virtualbox ansible
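You can verify that all three tools are installed and available on your PATH:
vagrant --version
ansible --version
VBoxManage --version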
Setup
We will start by creating an empty folder that will contain all the files required to create a local Kubernetes cluster:
mkdir kubernetes
cd kubernetes
Provisioning nodes using Vagrant
We will be setting up a Kubernetes cluster that will consist of one master and two worker nodes. To accomplish that, we will be using Vagrant and VirtualBox to quickly provision virtual machines.
Create a dedicated SSH key pair to connect to the VMs:
ssh-keygen -b 4096 -f ~/.ssh/vagrant_key
Here the -f flag names the key vagrant_key; the Vagrantfile and Ansible inventory below refer to it by that name.
Vagrantfile
Create a Vagrantfile:
touch Vagrantfile
To provision one master node with 2 CPUs and 2 GB of RAM, and two worker nodes with 1 CPU and 2 GB of RAM each, add the following to the newly created Vagrantfile:
IMAGE_NAME = "ubuntu/focal64"
N = 2

Vagrant.configure(2) do |config|
  # Copy the public key to every VM and append it to authorized_keys
  config.vm.provision "file", source: "~/.ssh/vagrant_key.pub", destination: "/home/vagrant/.ssh/vagrant_key.pub"
  config.vm.provision :shell, :inline => "cat /home/vagrant/.ssh/vagrant_key.pub >> /home/vagrant/.ssh/authorized_keys", run: "always"

  # Configure master node
  config.vm.define "k8s-master" do |master|
    master.vm.box = IMAGE_NAME
    master.vm.network "private_network", ip: "192.168.50.10"
    master.vm.hostname = "k8s-master"
    master.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  # Configure worker nodes
  (1..N).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = IMAGE_NAME
      node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      node.vm.hostname = "node-#{i}"
      node.vm.provider "virtualbox" do |v|
        v.memory = 2048
        v.cpus = 1
      end
    end
  end
end
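Before booting anything, you can ask Vagrant to check the file for errors:
vagrant validate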
To start creating the virtual machines specified in the Vagrantfile, run the following command in the terminal:
vagrant up
This will create one master node and two worker nodes using VirtualBox.
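Once the command finishes, you can confirm that all three machines booted:
vagrant status
The machines k8s-master, node-1 and node-2 should all be reported as running.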
Ansible
Once the virtual machines are ready, we can start configuring them and installing the required packages. Before we look at the Ansible playbooks, we must first configure Ansible to interact with the VMs (the Kubernetes nodes).
Create a hosts inventory file that will contain the IP addresses and SSH credentials of the VMs:
touch hosts
Add the following to the hosts file:
[masters]
master ansible_host=192.168.50.10 ansible_port=22 ansible_user=vagrant ansible_ssh_private_key_file=/home/ivan/.ssh/vagrant_key
[workers]
worker1 ansible_host=192.168.50.11 ansible_port=22 ansible_user=vagrant ansible_ssh_private_key_file=/home/ivan/.ssh/vagrant_key
worker2 ansible_host=192.168.50.12 ansible_port=22 ansible_user=vagrant ansible_ssh_private_key_file=/home/ivan/.ssh/vagrant_key
Listing the master and worker nodes in separate parts of the hosts file will allow us to later target the playbooks at the specific node type.
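For example, once the inventory is in place, you can target only the master group like this:
ansible -i hosts masters -m ping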
To check that all our VMs are up and accessible, we can run the following Ansible command:
ansible -i hosts all -m ping
If everything is OK, we should receive a response like the following:
worker1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
worker2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
master | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
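If the connection fails with an unknown SSH host key prompt instead, you can disable host key checking for this throwaway test environment and run the ping again:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible -i hosts all -m ping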
Configure nodes and install Kubernetes
Create a playbook that will configure all the nodes:
touch node-init.yml
The playbook will perform the following actions:
- Install required apt packages
- Disable swap
- Enable and load Kernel modules
- Add Kernel settings
- Install containerd runtime
- Add apt repo for kubernetes
- Install Kubernetes components
Add the following content to the node-init.yml file:
- hosts: all
  become: yes
  tasks:
    - name: Install packages that allow apt to be used over HTTPS
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl

    - name: Create containerd config file
      file:
        path: "/etc/modules-load.d/containerd.conf"
        state: "touch"

    - name: Add kernel modules for containerd
      blockinfile:
        path: "/etc/modules-load.d/containerd.conf"
        block: |
          overlay
          br_netfilter

    - name: Run modprobe
      shell: |
        sudo modprobe overlay
        sudo modprobe br_netfilter

    - name: Set system configurations for Kubernetes networking
      file:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        state: "touch"

    - name: Add sysctl settings for Kubernetes networking
      blockinfile:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        block: |
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-ip6tables = 1

    - name: Apply new settings
      command: sudo sysctl --system

    - name: Install containerd
      shell: |
        sudo apt-get update && sudo apt-get install -y containerd
        sudo mkdir -p /etc/containerd
        sudo containerd config default | sudo tee /etc/containerd/config.toml
        sudo systemctl restart containerd

    - name: Disable swap
      shell: |
        sudo swapoff -a
        sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

    - name: Install and configure dependencies
      shell: |
        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    - name: Create kubernetes repo file
      file:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        state: "touch"

    - name: Add k8s source
      blockinfile:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        block: |
          deb https://apt.kubernetes.io/ kubernetes-xenial main

    - name: Install kubernetes
      shell: |
        sudo apt-get update
        sudo apt-get install -y kubelet=1.20.5-00 kubeadm=1.20.5-00 kubectl=1.20.5-00
        sudo apt-mark hold kubelet kubeadm kubectl
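Before running the playbook against all nodes, you can check it for syntax errors:
ansible-playbook -i hosts node-init.yml --syntax-check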
After that, we can run the Ansible playbook to configure all the nodes and install Kubernetes:
ansible-playbook -i hosts node-init.yml
Once the playbook is finished, we should see the following recap:
PLAY RECAP *********************************************************************************************************
master : ok=14 changed=13 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
worker1 : ok=14 changed=13 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
worker2 : ok=14 changed=13 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The contents of the playbook were taken from the official install-kubeadm guide and turned into tasks.
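To quickly verify the installation, you can query the installed kubeadm version on every node with an ad-hoc command (the -b flag elevates privileges):
ansible -i hosts all -b -m command -a "kubeadm version -o short"
Each node should report v1.20.5.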
Initialize master node
Create a playbook for the master node:
touch master-node-init.yml
Add the following content for the master playbook:
- hosts: masters
  become: yes
  tasks:
    - name: Initialize the Kubernetes cluster using kubeadm
      command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=10.10.42.0/16

    - name: Setup kubeconfig for vagrant user
      command: "{{ item }}"
      with_items:
        - mkdir -p /home/vagrant/.kube
        - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
        - chown vagrant:vagrant /home/vagrant/.kube/config

    - name: Copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/config
        remote_src: yes
        owner: vagrant

    - name: Install Pod network (Calico)
      become_user: vagrant
      shell: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      args:
        chdir: $HOME

    - name: Generate join command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Copy join command to local file
      become: false
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
Run the playbook:
ansible-playbook -i hosts master-node-init.yml
Once the playbook is finished, we should see the following recap:
PLAY RECAP *********************************************************************************************************
master : ok=7 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The contents of the playbook were taken from the official create-cluster-kubeadm guide and turned into tasks.
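At this point the join-command file on our local machine contains the full command the workers need; it should look roughly like this, with a real token and hash in place of the placeholders:
kubeadm join 192.168.50.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>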
Initialize worker nodes
Create a playbook for the worker nodes:
touch worker-node-init.yml
Add the following content for the worker playbook:
- hosts: workers
  become: yes
  tasks:
    - name: Copy the join command to server location
      copy: src=join-command dest=/tmp/join-command.sh mode=0777

    - name: Join the node to cluster
      command: sh /tmp/join-command.sh
Run the playbook:
ansible-playbook -i hosts worker-node-init.yml
After the playbook is finished, we should see the following recap:
PLAY RECAP *********************************************************************************************************
worker1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
worker2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Check the Kubernetes cluster status
To check that everything went well and is working correctly, we can SSH into the master node:
ssh vagrant@192.168.50.10 -i $HOME/.ssh/vagrant_key
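Alternatively, since the machines are managed by Vagrant, you can connect from the project directory without the dedicated key:
vagrant ssh k8s-master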
Once connected, check the node status:
kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   7m6s    v1.20.5
node-1       Ready    <none>                 2m22s   v1.20.5
node-2       Ready    <none>                 2m22s   v1.20.5
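To make sure the cluster components themselves came up, you can also list the system pods; all of them, including the Calico pods, should eventually reach the Running state:
kubectl get pods -n kube-system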
Kubespray
Coming soon…
Conclusion
With the help of Vagrant we were able to quickly provision virtual machines, and with Ansible we configured them and installed Kubernetes on all our nodes.