diff --git a/articles/1. Deploy Kubernetes/Deploy Kubernetes.md b/articles/1. Deploy Kubernetes/Deploy Kubernetes.md
new file mode 100644
index 0000000..88de4e1
--- /dev/null
+++ b/articles/1. Deploy Kubernetes/Deploy Kubernetes.md
@@ -0,0 +1,309 @@
# Deploy Kubernetes with Ansible

![](https://www.rtcloud.ru/wp-content/uploads/2018/12/kubernetes.png)

## Requirements

We will need at least two nodes, Ansible, and kubectl. That's enough to begin.

My Ansible role: https://github.com/allanger/kubeadm-ansible-role

I'm using Ubuntu 21.04 on all my servers, so my Ansible role is written for Debian-based distros. (I will be happy if anybody adds support for other distros.)

## Preparing the system

If you're familiar with Linux, all you need to know for this step is that you must be able to SSH into every node in the cluster.

In case you're kind of a Linux newbie: Ansible connects to your servers via SSH and performs actions on the remote hosts, so you need to be able to SSH into every node from your host.
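If you don't have an SSH key pair yet, a minimal sketch of creating one (the key type, file name, and comment here are just example choices, not something the role requires):

```shell
# Generate a dedicated ed25519 key pair for the cluster.
# -f sets the output path, -N "" means no passphrase,
# -C is a free-form comment to recognize the key later.
ssh-keygen -t ed25519 -f ./k8s_cluster_key -N "" -C "k8s-ansible"
```

You can then load it with `ssh-add` and distribute it with `ssh-copy-id` as described below.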
If you're installing a fresh Ubuntu Server, you will see an `Install OpenSSH server` checkbox; just check it and you're fine.

![Ubuntu-server-openssh](./Ubuntu-server-openssh.png)

If you've already skipped this installation step, or you have another distro that doesn't offer this option, just install `openssh` and start it:
```
# apt install openssh-server
# systemctl status ssh
```
![systemctl-ssh](./systemctl-ssh.png)

If the status is not `active (running)`, just do
```
# systemctl enable ssh
# systemctl start ssh
```
Now we can check the SSH connection.

On your main host, execute
```
$ ssh ${USER}@${HOST}
```
where `USER` is the username you use to log in to the remote machine and `HOST` is its address.

Then you need to copy your SSH key to all machines (in case you don't have one, it's really easy to google how to create it):
```
$ eval $(ssh-agent)
$ ssh-add ${PATH_TO_KEY}
$ ssh-copy-id ${USER}@${HOST}
```

## Firewall

I would recommend using **UFW**. We need to allow `ssh`, `http`, and `https` traffic on all nodes, and additionally allow `6443/tcp` (the Kubernetes API server port) on the master node.

You can use my Ansible role to set up UFW; check out this repo: [ansible-ufw-role](https://github.com/allanger/ansible-ufw-role)

Or you can do it manually:
```
# -- On each node
# ufw default deny
# ufw limit ssh
# ufw allow http
# ufw allow https

# -- On the master node only
# ufw allow 6443/tcp

# -- On all nodes
# ufw enable
```

All the preparation steps are done. Now we can begin.

## Kubernetes

The first thing I'd recommend is reading every step in my role to understand what's going on. Here I will try to describe each step so you will easily (I hope) understand how it works.

### Install a container runtime

Go to `/tasks/main.yaml`.

As you can see, it includes other YAML files, so follow all the includes and see my comments here.

I'm always using `containerd`, so that's what this role installs.
But if you wanna use `docker` or `cri-o`, you should find another guide, or even better, contribute to my project and add support for another container runtime. The file `/tasks/container-runtime/container-runtime.yaml` reads the `container_runtime` variable and includes the steps for installing that runtime.

In case you wanna use `containerd`, go to `/tasks/container-runtime/containerd/system-setup.yaml`. Here we are preparing the system for the CRI installation.

```
  - name: Add the overlay and br_netfilter modules
    modprobe:
      name: "{{ item }}"
      state: present
    loop:
      - "overlay"
      - "br_netfilter"

  - name: Ensure dependencies are installed.
    apt:
      name:
        - apt-transport-https
        - ca-certificates
        - gnupg2
      state: present

  - name: Add Docker apt key.
    apt_key:
      url: "{{ docker_apt_gpg_key }}"
      id: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
      state: present
    register: add_repository_key
    ignore_errors: "{{ docker_apt_ignore_key_error }}"

  - name: Add Docker repository.
    apt_repository:
      repo: "{{ docker_apt_repository }}"
      state: present
      update_cache: true
```

I think the task names are pretty informative in this case, so let's go further.

Go back to `/tasks/container-runtime/container-runtime.yaml`.

Here we are installing `containerd`:
```
  - name: Ensure containerd is installed.
    package:
      name: containerd.io
      state: present

  - name: Ensure containerd is started and enabled at boot.
    service:
      name: containerd
      state: started
      enabled: true

  - name: Ensure containerd config directory exists.
    file:
      path: /etc/containerd
      state: directory
    register: containerd_dir

  - name: Get defaults from containerd.
    command: containerd config default
    changed_when: false
    register: containerd_config_default
    when: containerd_config_default_write

  - name: Write defaults to config.toml.
    copy:
      dest: /etc/containerd/config.toml
      content: "{{ containerd_config_default.stdout }}"
    notify: restart containerd
    when: containerd_config_default_write
```

### Install Kubernetes

Now let's go to `/kubernetes/kubernetes.yaml`.

Kubernetes won't run on machines with swap enabled, so we disable swap:
```
  - name: Disable swap
    shell:
      cmd: |
        swapoff -a
    args:
      executable: /bin/bash

  - name: Remove swap from fstab
    mount:
      name: swap
      fstype: swap
      state: absent
```
Then we prepare the system (checking dependencies and adding repos).

Configuring the network:
```
  - name: Let iptables see bridged traffic
    sysctl:
      name: "{{ item }}"
      value: "1"
      state: present
    loop:
      - net.bridge.bridge-nf-call-iptables
      - net.bridge.bridge-nf-call-ip6tables
      - net.ipv4.ip_forward
```

Installing dependencies:
```
  - name: Install Kubernetes packages.
    package:
      name: "{{ item }}"
      state: present
    notify: restart kubelet
    loop: "{{ kubernetes_packages }}"
```

Configuring the kubelet (here we can define the arguments the kubelet will use):
```
  - name: Check for existence of kubelet environment file.
    stat:
      path: "{{ kubelet_environment_file_path }}"
    register: kubelet_environment_file

  - name: Set facts for KUBELET_EXTRA_ARGS task if environment file exists.
    set_fact:
      kubelet_args_path: "{{ kubelet_environment_file_path }}"
      kubelet_args_line: "{{ 'KUBELET_EXTRA_ARGS=' + kubernetes_kubelet_extra_args }}"
      kubelet_args_regexp: "^KUBELET_EXTRA_ARGS="
    when: kubelet_environment_file.stat.exists

  - name: Set facts for KUBELET_EXTRA_ARGS task if environment file doesn't exist.
    set_fact:
      kubelet_args_path: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
      kubelet_args_line: '{{ ''Environment="KUBELET_EXTRA_ARGS='' + kubernetes_kubelet_extra_args + ''"'' }}'
      kubelet_args_regexp: '^Environment="KUBELET_EXTRA_ARGS='
    when: not kubelet_environment_file.stat.exists

  - name: Configure KUBELET_EXTRA_ARGS.
    lineinfile:
      path: "{{ kubelet_args_path }}"
      line: "{{ kubelet_args_line }}"
      regexp: "{{ kubelet_args_regexp }}"
      state: present
      mode: 0644
    register: kubelet_config_file

  - name: Reload systemd unit if args were changed.
    systemd:
      state: restarted
      daemon_reload: true
      name: kubelet
    when: kubelet_config_file is changed
```

And running the kubelet daemon:

```
  - name: Ensure kubelet is started and enabled at boot.
    service:
      name: kubelet
      state: started
      enabled: true
```

Now the "backend" installation is done, and the last thing we will install is `kubectl`. We need it only on the master node.
```
  - name: Install kubectl.
    package:
      name: kubectl
      state: present
    when: node_type == 'master'
```

### Check the installation

Create a file, for example `hosts.yaml` (you should read about Ansible inventory files for a better understanding):

```
# --------------------------------------
# -- Inventory file example
# -- This is gonna be a two-node cluster
# --------------------------------------
---
k8s_master:
  hosts:
    ${MASTER_NODE_ADDRESS}:
  vars:
    node_type: "master"
    ansible_user: ${REMOTE_USER_NAME}
    key_path: /PATH/TO/YOUR/SSH/KEY

k8s_node:
  hosts:
    ${WORKER_NODE_ADDRESS}:
  vars:
    node_type: "worker"
    ansible_user: ${REMOTE_USER_NAME}
    key_path: /PATH/TO/YOUR/SSH/KEY
```

Now run

```
$ ansible-playbook ./playbook.yaml -i hosts.yaml --tags=init
```

and watch Kubernetes being installed on your nodes.
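The `./playbook.yaml` referenced by the command above isn't shown in this walkthrough, so here is a minimal hypothetical sketch; the role name and path are assumptions, point them at wherever you cloned kubeadm-ansible-role:

```yaml
# Hypothetical playbook.yaml sketch: applies the kubeadm role to every
# host in the inventory. The role behavior branches on the node_type
# var set per group in hosts.yaml.
---
- hosts: all
  become: true
  roles:
    - role: kubeadm-ansible-role
```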
### Deploy the cluster

To deploy your cluster you can just run
```
$ ansible-playbook ./playbook.yaml -i hosts.yaml
```
But I think you should do it manually if it's your first time, just to understand what's going on there.
Just connect to your master node and run
```
$ kubeadm init
```

When it's done, save the join command somewhere, go to your worker node, and execute it.
Then go back to the master node and do
```
$ mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~/.kube/config
# chown ${USER} ~/.kube/config
$ kubectl get nodes
```

You should see at least two nodes.

![Kubectl get node](./nodes.png)

That's it! Your cluster is deployed.
diff --git a/articles/1. Deploy Kubernetes/Ubuntu-server-openssh.png b/articles/1. Deploy Kubernetes/Ubuntu-server-openssh.png
new file mode 100644
index 0000000..ba591b0
Binary files /dev/null and b/articles/1. Deploy Kubernetes/Ubuntu-server-openssh.png differ
diff --git a/articles/1. Deploy Kubernetes/firewall.jpg b/articles/1. Deploy Kubernetes/firewall.jpg
new file mode 100644
index 0000000..c3be418
Binary files /dev/null and b/articles/1. Deploy Kubernetes/firewall.jpg differ
diff --git a/articles/1. Deploy Kubernetes/nodes.png b/articles/1. Deploy Kubernetes/nodes.png
new file mode 100644
index 0000000..a578386
Binary files /dev/null and b/articles/1. Deploy Kubernetes/nodes.png differ
diff --git a/articles/1. Deploy Kubernetes/systemctl-ssh.png b/articles/1. Deploy Kubernetes/systemctl-ssh.png
new file mode 100644
index 0000000..fe52d5c
Binary files /dev/null and b/articles/1. Deploy Kubernetes/systemctl-ssh.png differ
diff --git a/articles/2. Add Kubernetes roles/Add Kubernetes Roles.md b/articles/2. Add Kubernetes roles/Add Kubernetes Roles.md
new file mode 100644
index 0000000..47181ca
--- /dev/null
+++ b/articles/2. Add Kubernetes roles/Add Kubernetes Roles.md
@@ -0,0 +1,208 @@
# Add Kubernetes users with Ansible

Hi!
In the [previous article](https://gist.github.com/allanger/84db2647578316f8e721f7219052788f), I explained how to deploy a k8s cluster with Ansible. Now I'm going to show how to add users to your cluster so you can control your k8s remotely.

My GitHub: https://github.com/allanger/kubernetes-rbac-ansible-role

Let's imagine you've deployed a bare-metal cluster and you SSH to the master node every time you wanna do something with it. It's not cool, right? So you need to add a user to your cluster.

You can do it manually, but I think that after the first time you perform it, you'd like to do it automatically. That's why I've created this Ansible role.

Clone the repo and go to the `/vars/main.yaml` file.
If you know what you wanna do, set all the variables yourself, but if you're not sure, you should update only the `username` var.
In this case, we're adding a cluster-admin user, because I guess that if you're able to run this role against your master node, you're a cluster admin.
```
---
# --------------------------------------
# -- K8s username
# --------------------------------------
username: "admin"
# --------------------------------------
# -- How many days the certificate
# -- will be valid
# --------------------------------------
certificate_expires_in: 500
# --------------------------------------
# -- K8s cluster name
# --------------------------------------
cluster: "kubernetes"
# --------------------------------------
# -- RoleBinding parameters
# --------------------------------------
# -- Binding type:
# ---- ClusterRoleBinding
# ---- RoleBinding
# --------------------------------------
binding_type: ClusterRoleBinding
# --------------------------------------
# -- Role type:
# ---- ClusterRole
# ---- Role
# --------------------------------------
role_type: ClusterRole
# --------------------------------------
# -- Cluster role name
# -- https://kubernetes.io/docs/reference/access-authn-authz/rbac/
# --------------------------------------
role: cluster-admin
```

When you're done, let's go to `/tasks/main.yaml`.

In the first block, we are creating a working directory. Ansible will store certificates and configs in it. (It will be removed after the play, so it's a temporary dir.)

```
- name: Prepare working directory
  block:
    - name: Set workdir as fact
      set_fact:
        working_dir: "{{ ansible_env.HOME }}/.certs/{{ username }}"

    - name: Create a directory if it does not exist
      ansible.builtin.file:
        path: "{{ working_dir }}"
        state: directory
        mode: "0775"
```

In the second block, we're installing the packages that will be used while running the role.

```
- name: Ensure required packages are installed
  block:
    # --------------------------------------
    # -- yq is a lightweight and portable
    # -- command-line YAML processor
    # --------------------------------------
    - name: Ensure yq is installed
      become: yes
      get_url:
        url: "https://github.com/mikefarah/yq/releases/download/{{ yq.version }}/{{ yq.binary }}"
        dest: /usr/bin/yq
        mode: "0755"

    - name: Ensure openssl is installed
      package:
        name: openssl
        state: present
  tags: packages
```

Then we will generate a certificate:

```
- name: Generate openssl certificate
  block:
    - name: Generate an OpenSSL private key
      community.crypto.openssl_privatekey:
        path: "{{ working_dir }}/{{ username }}.key"
        size: 2048

    - name: Generate an OpenSSL Certificate Signing Request
      community.crypto.openssl_csr:
        path: "{{ working_dir }}/{{ username }}.csr"
        privatekey_path: "{{ working_dir }}/{{ username }}.key"
        common_name: "{{ username }}"

    - name: Generate an OpenSSL certificate signed with your own CA certificate
      become: yes
      community.crypto.x509_certificate:
        path: "{{ working_dir }}/{{ username }}.crt"
        csr_path: "{{ working_dir }}/{{ username }}.csr"
        ownca_path: /etc/kubernetes/pki/ca.crt
        ownca_privatekey_path: /etc/kubernetes/pki/ca.key
        provider: ownca
        ownca_not_after: "+{{
          certificate_expires_in }}d"
  tags: openssl
```

When the certificate is ready, we need to add the user to our cluster.

```
- name: Add user to cluster
  block:
    # --------------------------------------
    # -- Get the k8s server from admin.conf
    # --------------------------------------
    - name: Get k8s server
      shell: yq e '.clusters[0] | select(.name == "{{ cluster }}").cluster.server' "{{ k8s_config_path }}"
      register: kubernetes_server_output

    # --------------------------------------
    # -- Get the k8s certificate authority
    # -- data from admin.conf
    # --------------------------------------
    - name: Get k8s certificate authority data
      shell: yq e '.clusters[0] | select(.name == "{{ cluster }}").cluster.certificate-authority-data' "{{ k8s_config_path }}"
      register: kubernetes_cad_output

    - name: Get user cert data
      shell: cat "{{ working_dir }}/{{ username }}.crt" | base64 -w 0
      register: user_cert_data_output

    - name: Get user key data
      shell: cat "{{ working_dir }}/{{ username }}.key" | base64 -w 0
      register: user_key_data_output

    - name: Set variables for template
      set_fact:
        kubernetes_server: "{{ kubernetes_server_output.stdout }}"
        kubernetes_cad: "{{ kubernetes_cad_output.stdout }}"
        user_cert_data: " {{ user_cert_data_output.stdout }}"
        user_key_data: " {{ user_key_data_output.stdout }}"

    - name: Create k8s user
      ansible.builtin.shell: |
        kubectl config set-credentials "{{ username }}" \
          --client-certificate="{{ working_dir }}/{{ username }}.crt" \
          --client-key="{{ working_dir }}/{{ username }}.key"
      notify: remove certificates

    - name: Set user context
      ansible.builtin.shell: |
        kubectl config set-context "{{ username }}@{{ cluster }}" \
          --cluster="{{ cluster }}" --user="{{ username }}"

    - name: Create config file from template
      template:
        src: config.j2
        dest: "{{ working_dir }}/config"

    - name: Store the config on the local machine
      ansible.builtin.fetch:
        src: "{{ working_dir }}/config"
        dest: ./
        flat: yes
  tags: config
```

As you can see, in the "Create k8s user" step I notify a handler that removes the certs and configs after the run. If you wanna keep them, just comment out the line `notify: remove certificates`.

Now we're left with the last block:
```
- name: Bind user to role
  block:
    - name: Generate role binding yaml
      template:
        src: role-binding.j2
        dest: "{{ working_dir }}/{{ username }}.yaml"

    - name: Apply role binding manifest
      shell: kubectl apply -f "{{ working_dir }}/{{ username }}.yaml"
  tags: add_user
```
It generates a k8s manifest that adds a RoleBinding or ClusterRoleBinding and applies it.

To run the playbook, simply do:
```
$ ansible-playbook ./kubernetes-create-user.yaml -i ${PATH_TO_INVENTORY}
# -- then copy the config file
$ cp config ~/.kube/config
# chown $USER ~/.kube/config
# -- to check that everything is fine,
# -- run the following and ensure you
# -- get all resources from your cluster
$ kubectl get all --all-namespaces
```

This role doesn't support adding user groups, so I would be happy if anybody contributed. Or I will do it myself one day.
diff --git a/articles/3. Prepare k8s cluster.md/Preparing k8s cluster for real use.md b/articles/3. Prepare k8s cluster.md/Preparing k8s cluster for real use.md
new file mode 100644
index 0000000..06e5764
--- /dev/null
+++ b/articles/3. Prepare k8s cluster.md/Preparing k8s cluster for real use.md
@@ -0,0 +1,62 @@
# Preparing a k8s cluster for real use

After deploying a cluster and adding an admin user, you may be confused about what to do next. When I started learning how to use k8s, I was confused because I couldn't understand how to make anything work.

There are several components that you may want to install in your cluster. I will tell you about my setup.

1. Monitoring
   - Prometheus
   - Grafana
2. Network
   - Istio
   - MetalLB
3. Storage Provisioner
   - Rook
4.
   Deployment tools
   - Keel

Many people will say that I shouldn't store data inside a cluster, but I will try to explain why I'm doing it.
To install most of these components you can use `helm` charts. But when you've got a lot of helm packages inside your cluster, I suppose you'd like to have the installation configured as code. So I will show how to use GitHub Actions to deploy charts.

## Monitoring

I'm using this helm chart: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

It will install `Prometheus`, `Grafana`, and `Alertmanager`. These are gonna be the first packages that I install.
As you can see in the `README.md`, you can simply do

```
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/kube-prometheus-stack
```

But when you've got several clusters, or if one day your kube falls over and you find yourself installing all your packages from the CLI on a brand-new cluster, you may want to automate these steps. There are many ways to do it: you can just save the list of helm packages you need, write a script, or create an Ansible playbook or role. But I guess the best way is to create a CI/CD pipeline that installs and updates packages on every push to the repo. Of course you can run Ansible playbooks or scripts in CI/CD pipelines, but this time I will show how to use GitHub Actions for this kind of deployment.

1. Create a fresh repo (I won't share my repo this time because it contains some kind of sensitive data)
2. Create a `/.github/workflows/` dir
3. You can arrange the files and folders here as you want. We're beginning with one cluster, so let the structure be simple.
Let's create a file `prometheus.yml`. The workflow below is based on the generic example from the `deliverybot/helm` action, so you'll want to adapt `release`, `chart`, `values`, and `value-files` to the kube-prometheus-stack chart, and store your kubeconfig in the `KUBECONFIG` repository secret.

```
name: Prometheus
on: ['deployment']

jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: actions/checkout@v1

      - name: 'Deploy'
        uses: 'deliverybot/helm@v1'
        with:
          release: 'nginx'
          namespace: 'default'
          chart: 'app'
          token: '${{ github.token }}'
          values: |
            name: foobar
          value-files: values.yaml
        env:
          KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'
```
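One thing to note about the workflow above: `on: ['deployment']` only fires when a deployment event is created through the GitHub API, so pushing to the repo alone won't run it. If you'd rather run the workflow on pushes or start it manually from the Actions tab, a sketch of an alternative trigger block (standard GitHub Actions syntax; the branch name is an example, adjust it to yours):

```yaml
# Alternative triggers: run on pushes to the default branch,
# or start the workflow manually from the Actions tab.
on:
  push:
    branches: [main]
  workflow_dispatch: {}
```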