# Deploy Kubernetes with Ansible

![](https://www.rtcloud.ru/wp-content/uploads/2018/12/kubernetes.png)

## Requirements

We will need at least two nodes, Ansible, and kubectl. That's enough to begin.

My Ansible role: https://github.com/allanger/kubeadm-ansible-role

I am using Ubuntu 21.04 on all my servers, so my Ansible role is written for Debian-based distros. (I will be happy if anybody adds support for other distros.)

## Preparing the system

If you're familiar with Linux, all you need to know for this step is that you must be able to SSH into every node in the cluster.

In case you're kind of a Linux newbie: Ansible will connect to your servers via SSH and perform actions on the remote hosts, so you need to be able to SSH into the nodes from your host. If you're installing a fresh Ubuntu Server, you will see a checkbox `Install OpenSSH Server`; just check it and you're fine.

![Ubuntu-server-openssh](./Ubuntu-server-openssh.png)

If you've already skipped this installation step, or your distro doesn't offer this option, just install `openssh-server` and check that it's running:

```
# apt install openssh-server
# systemctl status ssh
```

![systemctl-ssh](./systemctl-ssh.png)

If the status is not `active (running)`, just do:

```
# systemctl enable ssh
# systemctl start ssh
```


Now we can check the SSH connection. On your main host, execute:

```
$ ssh ${USER}@${HOST}
```


where `USER` is the username you use to log in to the remote machine and `HOST` is its address.

Then you need to copy your SSH key to all the machines (in case you don't have one, it's really easy to google how to create it):

```
$ eval $(ssh-agent)
$ ssh-add ${PATH_TO_KEY}
$ ssh-copy-id ${USER}@${HOST}
```

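
If you have several nodes, you can script the key distribution. A minimal sketch (the addresses and username below are placeholders, not from the role; it only builds and prints the commands so you can review them before running anything):

```shell
# Build the list of ssh-copy-id commands for review (addresses are examples)
NODES="192.0.2.10 192.0.2.11"
USER_NAME="ubuntu"
cmds=""
for node in $NODES; do
  cmds="$cmds ssh-copy-id $USER_NAME@$node;"
done
echo "$cmds"
```

Once the list looks right, run each command by hand (or pipe the string through `sh`).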

## Firewall

I would recommend using **UFW**. We need to allow `ssh`, `http`, and `https` traffic on all nodes, and allow `6443/tcp` (the Kubernetes API server port) on the master node.

You can use my Ansible role to set up `UFW`. Check out this repo: [ansible-ufw-role](https://github.com/allanger/ansible-ufw-role)

Or you can do it manually:

```
# -- On each node
# ufw default deny
# ufw limit ssh
# ufw allow http
# ufw allow https

# -- On master node only
# ufw allow 6443/tcp

# -- On all nodes
# ufw enable
```

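
The manual steps above can also be sketched as a small script. This is a dry run: the `apply_rule` helper and the `role` variable are my own additions, and it only prints the `ufw` commands instead of executing them:

```shell
# Dry run: print each ufw command instead of executing it.
# Remove the 'echo' inside apply_rule to apply the rules for real (as root).
role="master"                     # set to "worker" on worker nodes
apply_rule() { echo "ufw $*"; }
apply_rule default deny
apply_rule limit ssh
apply_rule allow http
apply_rule allow https
if [ "$role" = "master" ]; then
  apply_rule allow 6443/tcp
fi
apply_rule enable
```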

All the preparation steps are done. Now we can begin.

## Kubernetes

The first thing I'd recommend is reading every step in my role to understand what's going on. Here I will try to describe each step so you will easily (I hope) understand how it works.

### Install container runtime

Go to `/tasks/main.yaml`.

As you can see, it includes other YAML files, so follow all the includes and read my comments here.

I always use `containerd`, so that's what this role installs. If you want to use `docker` or `cri-o`, you should find another instruction, or even better, contribute to my project and add support for another container runtime. The file `/tasks/container-runtime/container-runtime.yaml` is designed to read the `container_runtime` variable and include the steps for installing that runtime.

In case you want to use `containerd`, go to `/tasks/container-runtime/containerd/system-setup.yaml`. Here we are preparing the system for the CRI installation:

```
- name: Add the overlay and br_netfilter modules
  modprobe:
    name: "{{ item }}"
    state: present
  loop:
    - "overlay"
    - "br_netfilter"

- name: Ensure dependencies are installed.
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - gnupg2
    state: present

- name: Add Docker apt key.
  apt_key:
    url: "{{ docker_apt_gpg_key }}"
    id: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    state: present
  register: add_repository_key
  ignore_errors: "{{ docker_apt_ignore_key_error }}"

- name: Add Docker repository.
  apt_repository:
    repo: "{{ docker_apt_repository }}"
    state: present
    update_cache: true
```

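
One thing worth knowing here: `modprobe` loads the modules only until the next reboot. To make them persistent you can drop a file into `/etc/modules-load.d/` yourself. A sketch, written to `/tmp` first so you can inspect it before installing (the file name is my choice, not from the role):

```shell
# Persist the kernel modules across reboots (modprobe alone is not persistent)
cat > /tmp/containerd-modules.conf <<'EOF'
overlay
br_netfilter
EOF
# then, as root:
#   cp /tmp/containerd-modules.conf /etc/modules-load.d/containerd.conf
```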

I think the task names are pretty self-explanatory here, so let's go further.

Go back to `/tasks/container-runtime/container-runtime.yaml`. Here we are installing `containerd`:

```
- name: Ensure containerd is installed.
  package:
    name: containerd.io
    state: present

- name: Ensure containerd is started and enabled at boot.
  service:
    name: containerd
    state: started
    enabled: true

- name: Ensure containerd config directory exists.
  file:
    path: /etc/containerd
    state: directory
  register: containerd_dir

- name: Get defaults from containerd.
  command: containerd config default
  changed_when: false
  register: containerd_config_default
  when: containerd_config_default_write

- name: Write defaults to config.toml.
  copy:
    dest: /etc/containerd/config.toml
    content: "{{ containerd_config_default.stdout }}"
  notify: restart containerd
  when: containerd_config_default_write
```

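
If you want to reproduce these tasks by hand on a node, the rough manual equivalent is below. It's written as a dry run (the `run` helper is my own addition and only prints each command); remove the `echo` to actually execute them as root:

```shell
# Dry run of the manual containerd setup; 'run' just prints each command
run() { echo "+ $*"; }
run mkdir -p /etc/containerd
run sh -c 'containerd config default > /etc/containerd/config.toml'
run systemctl restart containerd
```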

### Install kubernetes

Now let's go to `/kubernetes/kubernetes.yaml`.

Kubernetes won't run on machines with swap enabled, so we are disabling swap:

```
- name: Disable swap
  shell:
    cmd: |
      swapoff -a
  args:
    executable: /bin/bash

- name: Remove Swap from fstab
  mount:
    name: swap
    fstype: swap
    state: absent
```

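
The `mount` task with `state: absent` edits `/etc/fstab` for you; done by hand it amounts to a `sed` over the swap line. A sketch on a sample file, so you can see the effect without touching your real fstab (the sample content and the GNU `sed` pattern are my own):

```shell
# Demonstrate the fstab edit on a sample copy (safe to run anywhere)
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /         ext4 defaults 0 1
/swapfile      none      swap sw       0 0
EOF
sed -i '/\bswap\b/d' /tmp/fstab.sample   # drop every swap entry (GNU sed)
cat /tmp/fstab.sample
```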

Then we're preparing the system (checking dependencies and adding repos) and configuring the network:

```
- name: Let iptables see bridged traffic
  sysctl:
    name: "{{ item }}"
    value: "1"
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
    - net.ipv4.ip_forward
```

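
To apply the same settings by hand you'd put them into a `sysctl.d` drop-in. A sketch, written to `/tmp` first so you can check it before installing (the file name is my choice):

```shell
# Persist the bridge/forwarding settings (inspect in /tmp before installing)
cat > /tmp/99-kubernetes-cri.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# then, as root:
#   cp /tmp/99-kubernetes-cri.conf /etc/sysctl.d/ && sysctl --system
```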

Installing the Kubernetes packages:

```
- name: Install Kubernetes packages.
  package:
    name: "{{ item }}"
    state: present
  notify: restart kubelet
  loop: "{{ kubernetes_packages }}"
```


Configuring the kubelet (here we can define the arguments the kubelet will be started with):

```
- name: Check for existence of kubelet environment file.
  stat:
    path: "{{ kubelet_environment_file_path }}"
  register: kubelet_environment_file

- name: Set facts for KUBELET_EXTRA_ARGS task if environment file exists.
  set_fact:
    kubelet_args_path: "{{ kubelet_environment_file_path }}"
    kubelet_args_line: "{{ 'KUBELET_EXTRA_ARGS=' + kubernetes_kubelet_extra_args }}"
    kubelet_args_regexp: "^KUBELET_EXTRA_ARGS="
  when: kubelet_environment_file.stat.exists

- name: Set facts for KUBELET_EXTRA_ARGS task if environment file doesn't exist.
  set_fact:
    kubelet_args_path: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
    kubelet_args_line: '{{ ''Environment="KUBELET_EXTRA_ARGS='' + kubernetes_kubelet_extra_args + ''"'' }}'
    kubelet_args_regexp: '^Environment="KUBELET_EXTRA_ARGS='
  when: not kubelet_environment_file.stat.exists

- name: Configure KUBELET_EXTRA_ARGS.
  lineinfile:
    path: "{{ kubelet_args_path }}"
    line: "{{ kubelet_args_line }}"
    regexp: "{{ kubelet_args_regexp }}"
    state: present
    mode: 0644
  register: kubelet_config_file

- name: Reload systemd unit if args were changed.
  systemd:
    state: restarted
    daemon_reload: true
    name: kubelet
  when: kubelet_config_file is changed
```

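
What the `lineinfile` task boils down to, in plain shell, is: replace the `KUBELET_EXTRA_ARGS` line if it exists, append it otherwise. A sketch on a sample file (the file path and the `--node-ip` value are made-up examples):

```shell
# Idempotent line edit, like Ansible's lineinfile (sample file in /tmp)
f=/tmp/kubelet.env
echo 'KUBELET_EXTRA_ARGS=' > "$f"
args='--node-ip=10.0.0.2'                       # hypothetical extra args
if grep -q '^KUBELET_EXTRA_ARGS=' "$f"; then
  sed -i "s|^KUBELET_EXTRA_ARGS=.*|KUBELET_EXTRA_ARGS=$args|" "$f"
else
  echo "KUBELET_EXTRA_ARGS=$args" >> "$f"
fi
cat "$f"
```

Running it twice leaves the file unchanged, which is exactly the property `lineinfile` gives you.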

And running the kubelet daemon:

```
- name: Ensure kubelet is started and enabled at boot.
  service:
    name: kubelet
    state: started
    enabled: true
```


Now the "backend" installation is done, and the last thing we will install is `kubectl`. We need it only on the master node:

```
- name: Install kubectl.
  package:
    name: kubectl
    state: present
  when: node_type == 'master'
```


### Check the installation

Create a file, for example `hosts.yaml` (you should read about Ansible inventory files for a better understanding):

```
# --------------------------------------
# -- Inventory file example
# -- This is gonna be a two-node cluster
# --------------------------------------
---
k8s_master:
  hosts:
    ${MASTER_NODE_ADDRESS}:
  vars:
    node_type: "master"
    ansible_user: ${REMOTE_USER_NAME}
    key_path: /PATH/TO/YOUR/SSH/KEY

k8s_node:
  hosts:
    ${WORKER_NODE_ADDRESS}:
  vars:
    node_type: "worker"
    ansible_user: ${REMOTE_USER_NAME}
    key_path: /PATH/TO/YOUR/SSH/KEY
```

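
Before running the playbook you can ask Ansible to show how it parsed your inventory; `ansible-inventory --graph` is the standard subcommand for that. Shown here as a string rather than executed, since it needs your real hosts file:

```shell
# The inventory sanity check you'd run on your host (shown, not executed)
cmd="ansible-inventory -i hosts.yaml --graph"
echo "$cmd"
```

If the groups and hosts print as you expect, the playbook will target the right machines.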

Now run:

```
$ ansible-playbook ./playbook.yaml -i hosts.yaml --tags=init
```

and watch how Kubernetes is being installed on your nodes.

### Deploy cluster

To deploy your cluster, you can just run:

```
$ ansible-playbook ./playbook.yaml -i hosts.yaml
```

But I think you should do it manually if it's your first time, just to understand what's going on there. Connect to your master node and run:

```
# kubeadm init
```

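
`kubeadm init` prints a `kubeadm join ...` command at the end of its output. If you lose it, you can regenerate it on the master with a real kubeadm subcommand (shown here as a string, since it needs a running control plane):

```shell
# Regenerate the worker join command on the master node
cmd="kubeadm token create --print-join-command"
echo "$cmd"
```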

When it's done, save the join command somewhere, go to your worker node, and execute the join command there. Then go back to the master node and do:

```
$ mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~/.kube/config
# chown ${USER} ~/.kube/config
$ kubectl get nodes
```

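
If you'd rather not copy `admin.conf` into `~/.kube`, an alternative is to point `kubectl` at it via the `KUBECONFIG` environment variable (you still need read access to the file, and it only lasts for the current shell):

```shell
# Alternative to copying admin.conf: set KUBECONFIG for the current shell
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```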

You should see at least two nodes:

![Kubectl get node](./nodes.png)

That's it! Your cluster is deployed.