cv and coverletter

This commit is contained in:
Allen Languor 2021-08-01 12:13:43 +03:00
parent fb10ca03e7
commit 46c9cedd32
26 changed files with 271 additions and 636 deletions

# Deploy Kubernetes with Ansible
![](https://www.rtcloud.ru/wp-content/uploads/2018/12/kubernetes.png)
## Requirements
We will need at least two nodes, Ansible, and kubectl. That's enough to begin.
My Ansible role: https://github.com/allanger/kubeadm-ansible-role
I'm using Ubuntu 21.04 on all my servers, so my Ansible role is written for Debian-based distros. (I'd be happy if anybody added support for other distros.)
## Preparing system
If you're familiar with Linux, all you need to know for this step is that you must be able to SSH into every node in the cluster.
In case you're kind of a Linux newbie: Ansible connects to your servers via SSH and performs actions on the remote hosts, so you need to be able to SSH into the nodes from your host. If you're installing a fresh Ubuntu Server, you will see an `Install OpenSSH Server` checkbox; just check it and you're fine.
![Ubuntu-server-openssh](./Ubuntu-server-openssh.png)
If you've already skipped this installation step, or you have another distro that doesn't offer this option, just install `openssh-server` and start it
```
# apt install openssh-server
# systemctl status ssh
```
![systemctl-ssh](./systemctl-ssh.png)
If the status is not `active (running)`, just do
```
# systemctl enable ssh
# systemctl start ssh
```
Now we can check the SSH connection.
On your main host, execute
```
$ ssh ${USER}@${HOST}
```
where `USER` is the username you use to log in to the remote machine and `HOST` is its address.
Then you need to copy your SSH key to all machines (in case you don't have one, it's really easy to google how to create it)
```
$ eval $(ssh-agent)
$ ssh-add ${PATH_TO_KEY}
$ ssh-copy-id ${USER}@${HOST}
```
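If you don't have a key yet, creating one is a single command. A minimal sketch (the `ed25519` key type and the file path are my choice, not from the article):

```shell
# Create ~/.ssh if missing and generate a passphrase-less ed25519 key pair
# (the path is illustrative; pass it to ssh-add/ssh-copy-id above)
KEY="$HOME/.ssh/id_ed25519"
mkdir -p "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t ed25519 -N '' -f "$KEY"
# The public half is what ssh-copy-id installs on the nodes
cat "$KEY.pub"
```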
## Firewall
I would recommend using **UFW**. We need to allow `ssh`, `http`, and `https` traffic on all nodes, and additionally open `6443/tcp` on the master node.
You can use my Ansible role to set up `UFW`.
Check out this repo: [ansible-ufw-role](https://github.com/allanger/ansible-ufw-role)
Or you can do it manually:
```
# -- On each node
# ufw default deny
# ufw limit ssh
# ufw allow http
# ufw allow https
# -- On master node only
# ufw allow 6443/tcp
# -- On all nodes
# ufw enable
```
All the preparation steps are done. Now we can begin
## Kubernetes
The first thing I'd recommend is reading every step in my role to understand what's going on. Here I will try to describe each step, so you will easily (I hope) understand how it works.
### Install container runtime
Go to `/tasks/main.yaml`
As you can see, it includes other YAML files, so follow all the includes and see my comments here.
I always use `containerd`, so that's what this role installs.
But if you wanna use `docker` or `cri-o`, you should find another instruction, or even better, contribute to my project and add support for another container runtime. The file `/tasks/container-runtime/container-runtime.yaml` reads the `container_runtime` variable and includes the steps for installing it.
In case you wanna use `containerd`, go to `/tasks/container-runtime/containerd/system-setup.yaml`. Here we are preparing the system for the CRI installation.
```
- name: Add the overlay and br_netfilter modules
  modprobe:
    name: "{{ item }}"
    state: present
  loop:
    - "overlay"
    - "br_netfilter"

- name: Ensure dependencies are installed.
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - gnupg2
    state: present

- name: Add Docker apt key.
  apt_key:
    url: "{{ docker_apt_gpg_key }}"
    id: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    state: present
  register: add_repository_key
  ignore_errors: "{{ docker_apt_ignore_key_error }}"

- name: Add Docker repository.
  apt_repository:
    repo: "{{ docker_apt_repository }}"
    state: present
    update_cache: true
```
I think the task names are pretty informative in this case, so let's go further.
Go back to `/tasks/container-runtime/container-runtime.yaml`.
Here we are installing `containerd`.
```
- name: Ensure containerd is installed.
  package:
    name: containerd.io
    state: present

- name: Ensure containerd is started and enabled at boot.
  service:
    name: containerd
    state: started
    enabled: true

- name: Ensure containerd config directory exists.
  file:
    path: /etc/containerd
    state: directory
  register: containerd_dir

- name: Get defaults from containerd.
  command: containerd config default
  changed_when: false
  register: containerd_config_default
  when: containerd_config_default_write

- name: Write defaults to config.toml.
  copy:
    dest: /etc/containerd/config.toml
    content: "{{ containerd_config_default.stdout }}"
  notify: restart containerd
  when: containerd_config_default_write
```
### Install Kubernetes
Now let's go to `/kubernetes/kubernetes.yaml`
Kubernetes won't run on machines with swap enabled, so we are disabling swap
```
- name: Disable swap
  shell:
    cmd: |
      swapoff -a
  args:
    executable: /bin/bash

- name: Remove Swap from fstab
  mount:
    name: swap
    fstype: swap
    state: absent
```
Then we prepare the system (checking dependencies and adding repos) and configure the network
```
- name: Let iptables see bridged traffic
  sysctl:
    name: "{{ item }}"
    value: "1"
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
    - net.ipv4.ip_forward
```
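For reference, the sysctl task above boils down to persisting these three kernel parameters; on a plain system you could drop them into a file like `/etc/sysctl.d/k8s.conf` (the filename is illustrative) and run `sysctl --system`:

```
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```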
Installing dependencies
```
- name: Install Kubernetes packages.
  package:
    name: "{{ item }}"
    state: present
  notify: restart kubelet
  loop: "{{ kubernetes_packages }}"
```
Configuring the kubelet (here we can define the arguments that the kubelet will use)
```
- name: Check for existence of kubelet environment file.
  stat:
    path: "{{ kubelet_environment_file_path }}"
  register: kubelet_environment_file

- name: Set facts for KUBELET_EXTRA_ARGS task if environment file exists.
  set_fact:
    kubelet_args_path: "{{ kubelet_environment_file_path }}"
    kubelet_args_line: "{{ 'KUBELET_EXTRA_ARGS=' + kubernetes_kubelet_extra_args }}"
    kubelet_args_regexp: "^KUBELET_EXTRA_ARGS="
  when: kubelet_environment_file.stat.exists

- name: Set facts for KUBELET_EXTRA_ARGS task if environment file doesn't exist.
  set_fact:
    kubelet_args_path: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
    kubelet_args_line: '{{ ''Environment="KUBELET_EXTRA_ARGS='' + kubernetes_kubelet_extra_args + ''"'' }}'
    kubelet_args_regexp: '^Environment="KUBELET_EXTRA_ARGS='
  when: not kubelet_environment_file.stat.exists

- name: Configure KUBELET_EXTRA_ARGS.
  lineinfile:
    path: "{{ kubelet_args_path }}"
    line: "{{ kubelet_args_line }}"
    regexp: "{{ kubelet_args_regexp }}"
    state: present
    mode: 0644
  register: kubelet_config_file

- name: Reload systemd unit if args were changed.
  systemd:
    state: restarted
    daemon_reload: true
    name: kubelet
  when: kubelet_config_file is changed
```
And running the kubelet daemon
```
- name: Ensure kubelet is started and enabled at boot.
  service:
    name: kubelet
    state: started
    enabled: true
```
Now the "backend installation" is done, and the last thing we will install is `kubectl`. We need to install it only on the master node.
```
- name: Install kubectl.
  package:
    name: kubectl
    state: present
  when: node_type == 'master'
```
### Check the installation
Create a file, for example `hosts.yaml` (you should read about Ansible inventory files for a better understanding)
```
# --------------------------------------
# -- Inventory file example
# -- This is gonna be a two-node cluster
# --------------------------------------
---
k8s_master:
  hosts:
    ${MASTER_NODE_ADDRESS}:
  vars:
    node_type: "master"
    ansible_user: ${REMOTE_USER_NAME}
    key_path: /PATH/TO/YOUR/SSH/KEY
k8s_node:
  hosts:
    ${WORKER_NODE_ADDRESS}:
  vars:
    node_type: "worker"
    ansible_user: ${REMOTE_USER_NAME}
    key_path: /PATH/TO/YOUR/SSH/KEY
```
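To make the placeholders concrete, here is a filled-in sketch (the addresses, user name, and key path are made up for illustration):

```
---
k8s_master:
  hosts:
    192.168.1.10:
  vars:
    node_type: "master"
    ansible_user: ubuntu
    key_path: ~/.ssh/id_ed25519
k8s_node:
  hosts:
    192.168.1.11:
  vars:
    node_type: "worker"
    ansible_user: ubuntu
    key_path: ~/.ssh/id_ed25519
```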
Now run
```
$ ansible-playbook ./playbook.yaml -i hosts.yaml --tags=init
```
And see how Kubernetes is being installed on your nodes.
### Deploy cluster
To deploy your cluster you can just run
```
$ ansible-playbook ./playbook.yaml -i hosts.yaml
```
But if it's your first time, I think you should do it manually, just to understand what's going on there.
Connect to your master node and run
```
$ kubeadm init
```
When it's done, save the join command somewhere, go to your worker node, and execute it.
Then go back to the master node and do
```
$ mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~/.kube/config
# chown ${USER} ~/.kube/config
$ kubectl get nodes
```
You should see at least two nodes
![Kubectl get node](./nodes.png)
That's it! Your cluster is deployed.

# Add Kubernetes users with Ansible
Hi! In the [previous article](https://gist.github.com/allanger/84db2647578316f8e721f7219052788f), I explained how to deploy a k8s cluster with Ansible. Now I'm going to show how to add users to your cluster so that you can control your k8s remotely.
My GitHub: https://github.com/allanger/kubernetes-rbac-ansible-role
Let's imagine you've deployed a bare-metal cluster and you ssh to the master node every time you wanna do something with it. It's not cool, right? So you need to add a user to your cluster.
You can do it manually, but I think after the first time you perform it, you'd like to do it automatically. That's why I've created this Ansible role.
Clone the repo and go to the `/vars/main.yaml` file.
If you know what you wanna do, set all the variables yourself, but if you're not sure, you should update only the `username` var.
In this case, we're adding a cluster-admin user, because I guess if you're able to run this role against your master node, you're a cluster admin.
```
---
# --------------------------------------
# -- K8s username
# --------------------------------------
username: "admin"
# --------------------------------------
# -- How many days certificate
# -- will be valid
# --------------------------------------
certificate_expires_in: 500
# --------------------------------------
# -- K8s cluster name
# --------------------------------------
cluster: "kubernetes"
# --------------------------------------
# -- RoleBinding parameters
# --------------------------------------
# -- Binding type:
# ---- ClusterRoleBinding
# ---- RoleBinding
# --------------------------------------
binding_type: ClusterRoleBinding
# --------------------------------------
# -- Role type
# -- ClusterRole
# -- Role
# --------------------------------------
role_type: ClusterRole
# --------------------------------------
# -- Cluster role name
# -- https://kubernetes.io/docs/reference/access-authn-authz/rbac/
# --------------------------------------
role: cluster-admin
```
When you're done, let's go to `/tasks/main.yaml`.
In the first block, we are creating a working directory. In this directory, Ansible will store certificates and configs. (It will be removed after the play, so it's a temporary dir)
```
- name: Prepare working directory
  block:
    - name: Set workdir as fact
      set_fact:
        working_dir: "{{ ansible_env.HOME }}/.certs/{{ username }}"
    - name: Create a directory if it does not exist
      ansible.builtin.file:
        path: "{{ working_dir }}"
        state: directory
        mode: "0775"
```
In the second block, we're installing packages that will be used while running the role.
```
- name: Ensure required packages are installed
  block:
    # --------------------------------------
    # -- yq is a lightweight and portable
    # -- command-line YAML processor
    # --------------------------------------
    - name: Ensure yq is installed
      become: yes
      get_url:
        url: "https://github.com/mikefarah/yq/releases/download/{{ yq.version }}/{{ yq.binary }}"
        dest: /usr/bin/yq
        mode: "0777"
    - name: Ensure openssl is installed
      package:
        name: openssl
        state: present
  tags: packages
```
Then we will generate a certificate:
```
- name: Generate openssl certificate
  block:
    - name: Generate an OpenSSL private key
      community.crypto.openssl_privatekey:
        path: "{{ working_dir }}/{{ username }}.key"
        size: 2048
    - name: Generate an OpenSSL Certificate Signing Request
      community.crypto.openssl_csr:
        path: "{{ working_dir }}/{{ username }}.csr"
        privatekey_path: "{{ working_dir }}/{{ username }}.key"
        common_name: "{{ username }}"
    - name: Generate an OpenSSL certificate signed with your own CA certificate
      become: yes
      community.crypto.x509_certificate:
        path: "{{ working_dir }}/{{ username }}.crt"
        csr_path: "{{ working_dir }}/{{ username }}.csr"
        ownca_path: /etc/kubernetes/pki/ca.crt
        ownca_privatekey_path: /etc/kubernetes/pki/ca.key
        provider: ownca
        # ownca_not_after is the expiry option for the ownca provider
        ownca_not_after: "+{{ certificate_expires_in }}d"
  tags: openssl
```
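If you're curious what these tasks do under the hood, here is a plain-openssl sketch of the same flow, run against a throwaway CA (all paths are illustrative; on a real master the CA lives in `/etc/kubernetes/pki`):

```shell
# A plain-openssl sketch of the role's certificate steps, against a throwaway CA
WORKDIR=$(mktemp -d)
# stand-in for the cluster CA (ca.crt/ca.key)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes-ca" \
  -keyout "$WORKDIR/ca.key" -out "$WORKDIR/ca.crt" -days 500
# user key + CSR; the CN is what Kubernetes treats as the username
openssl req -newkey rsa:2048 -nodes -subj "/CN=admin" \
  -keyout "$WORKDIR/admin.key" -out "$WORKDIR/admin.csr"
# sign the CSR with the CA for 500 days (certificate_expires_in)
openssl x509 -req -in "$WORKDIR/admin.csr" -CA "$WORKDIR/ca.crt" \
  -CAkey "$WORKDIR/ca.key" -CAcreateserial -days 500 -out "$WORKDIR/admin.crt"
# check the subject and expiry of the result
openssl x509 -in "$WORKDIR/admin.crt" -noout -subject -enddate
```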
When the certificate is ready, we need to add the user to our cluster
```
- name: Add user to cluster
  block:
    # --------------------------------------
    # -- Get k8s server from admin.conf
    # --------------------------------------
    - name: Get k8s server
      shell: yq e '.clusters[0] | select(.name == "{{ cluster }}").cluster.server' "{{ k8s_config_path }}"
      register: kubernetes_server_output
    # --------------------------------------
    # -- Get k8s certificate authority data
    # -- from admin.conf
    # --------------------------------------
    - name: Get k8s certificate authority data
      shell: yq e '.clusters[0] | select(.name == "{{ cluster }}").cluster.certificate-authority-data' "{{ k8s_config_path }}"
      register: kubernetes_cad_output
    - name: Get user cert data
      shell: cat "{{ working_dir }}/{{ username }}.crt" | base64 -w 0
      register: user_cert_data_output
    - name: Get user key data
      shell: cat "{{ working_dir }}/{{ username }}.key" | base64 -w 0
      register: user_key_data_output
    - name: Set variables for template
      set_fact:
        kubernetes_server: "{{ kubernetes_server_output.stdout }}"
        kubernetes_cad: "{{ kubernetes_cad_output.stdout }}"
        user_cert_data: " {{ user_cert_data_output.stdout }}"
        user_key_data: " {{ user_key_data_output.stdout }}"
    - name: Create k8s user
      ansible.builtin.shell: |
        kubectl config set-credentials "{{ username }}" \
          --client-certificate="{{ working_dir }}/{{ username }}.crt" \
          --client-key="{{ working_dir }}/{{ username }}.key"
      notify: remove certificates
    - name: Set user context
      ansible.builtin.shell: |
        kubectl config set-context "{{ username }}@{{ cluster }}" \
          --cluster={{ cluster }} --user="{{ username }}"
    - name: Create config file from template
      template:
        src: config.j2
        dest: "{{ working_dir }}/config"
    - name: Storing config on the local machine
      ansible.builtin.fetch:
        src: "{{ working_dir }}/config"
        dest: ./
        flat: yes
  tags: config
```
As you can see, in the step "Create k8s user" I'm notifying the handler that will remove certs and configs after the run. If you wanna keep them, just comment out the `notify: remove certificates` line.
Now we're left with the last block:
```
- name: Bind user to role
  block:
    - name: Generate role binding yaml
      template:
        src: role-binding.j2
        dest: "{{ working_dir }}/{{ username }}.yaml"
    - name: Apply role binding manifest
      shell: kubectl apply -f "{{ working_dir }}/{{ username }}.yaml"
  tags: add_user
```
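With the default vars, the manifest rendered from `role-binding.j2` should look roughly like this (a sketch; the exact template and the `metadata.name` it uses live in the repo):

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # illustrative name; the template decides the real one
  name: admin-cluster-admin
subjects:
  - kind: User
    name: admin
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```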
It's gonna generate a k8s manifest for adding a RoleBinding or ClusterRoleBinding and apply it.
To run the playbook, simply do:
```
$ ansible-playbook ./kubernetes-create-user.yaml -i ${PATH_TO_INVENTORY}
# -- then copy config file
$ cp config ~/.kube/config
# chown $USER ~/.kube/config
# -- to check that everything is great
# -- run the following and ensure
# -- you get all resources from your cluster
$ kubectl get all --all-namespaces
```
This role doesn't support adding user groups yet, so I would be happy if anybody contributed. Or I will do it myself one day.

# Preparing a k8s cluster for real use
After deploying a cluster and adding an admin user, you may be confused about what to do next. When I started learning how to use k8s, I was confused, because I couldn't understand how to make anything work.
There are several components that you may want to install in your cluster. I will tell you about my setup.
1. Monitoring
- Prometheus
- Grafana
2. Network
- Istio
- MetalLB
3. Storage Provisioner
- Rook
4. Deployment tools
- Keel
Many people will say that I shouldn't store data inside a cluster, but I will try to explain why I'm doing it.
To install most of these components, you can use `helm` charts. But when you've got a lot of helm packages inside your cluster, I suppose you'd like to have the installation configured as code. So I will show how to use `GitHub Actions` to deploy charts.
## Monitoring
I'm using this helm chart: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
It will install `Prometheus`, `Grafana`, and `Alertmanager`. These are the first packages that I'm gonna install.
As you can see in the `README.md`, you can simply do
```
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/kube-prometheus-stack
```
But when you've got several clusters, or if one day your kube falls over and you're trying to install all your packages from the CLI in a brand-new cluster, you may want to automate these steps. There are many ways to do it: you can just save the list of helm packages you need, you can write a script, or you can create an Ansible playbook or role. But I guess the best way is to create a CI/CD pipeline that installs and updates packages on pushes to the repo. Of course you can run Ansible playbooks or scripts in CI/CD pipelines, but this time I will show how to use GitHub Actions for this kind of deployment.
1. Create a fresh repo (I won't share my repo this time because there is some kind of sensitive data in it)
2. Create a `/.github/workflows/` dir
3. You can arrange files and folders here as you want. We're beginning with one cluster, so let the structure stay simple. Let's create a file `prometheus.yml`
```
name: Prometheus
on: ['deployment']
jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: actions/checkout@v1
      - name: 'Deploy'
        uses: 'deliverybot/helm@v1'
        with:
          release: 'nginx'
          namespace: 'default'
          chart: 'app'
          token: '${{ github.token }}'
          values: |
            name: foobar
          value-files: values.yaml
        env:
          KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'
```

```diff
@@ -4,17 +4,18 @@
 ---
 k8s_master:
   hosts:
-    91.232.225.93:
+    10.42.82.100:
   vars:
     node_type: "master"
     ansible_user: "overlord"
-    key_path: ~/.ssh/allanger.pub,
+    key_path: ~/.ssh/allanger.pub
+    ansible_ssh_common_args: '-o "ProxyCommand ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p 91.232.225.93"'
-k8s_servant:
+k8s_worker:
   hosts:
-    10.42.82.101:
-    10.42.82.102:
-    10.42.82.103:
+    10.42.82.110:
+    10.42.82.111:
+    10.42.82.112:
   vars:
     node_type: "worker"
     ansible_user: "overlord"
```
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana
  namespace: prometheus
spec:
  selector:
    # Which pods we want to expose as Istio router
    # This label points to the default one installed from file istio-demo.yaml
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      # Here we specify which Kubernetes service names
      # we want to serve through this Gateway
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: prometheus
spec:
  hosts:
    - "*"
  gateways:
    - grafana
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: prometheus-grafana
            port:
              number: 80

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 91.232.225.63-91.232.225.65

beat-checklist.md
- Metadata
- [ ] External Name
- [ ] BPM
- [ ] Tags
- [ ] Mood
- [ ] Description
- Audio
- [ ] WAV full
- [ ] MP3 preview
- [ ] Stems full
- Picture
- [ ] Cover
- [ ]

cv/md/coverletter.md
Hi! I'm a young DevOps engineer with 3++ years of hands-on experience in different IT areas. I've worked with several kinds of teams, and I understand how things may work for each of them, which is important for implementing the DevOps philosophy in a company.
After working in a QA engineer position, I understand how to test any solution and describe test cases.
Also, I'm not afraid of experiments when it comes to solving unusual problems and using new technologies. I know how things work on bare metal and in the cloud, so I can help with migrating to the cloud and vice versa.
I also think about consequences, and I'm ready to take responsibility for every decision I make.
I've got a will to learn, so I won't get stuck with old-fashioned technologies, claiming they are the best fit, if there are better alternatives. And I never say a problem is impossible to solve unless it really is.
When should you pay attention to my CV?
- You're working with Kubernetes
- You need to improve/refactor/create/support CI/CD pipelines
- You need help with Docker
- You're migrating to the cloud
- You need to support a legacy application
- You need help with supporting a microservices application

# Nikolay Rodionov
```
St. Petersburg, Russia
- phone: +79996690458
- email: nicrodionov@gmail.com
```
---
## About me
![photo](../photo/bad-cv.jpg)
I'm an engineer with 3++ years of hands-on experience in different IT areas: from writing e2e auto-tests in JS to setting up a Kubernetes cluster from scratch on bare metal.
Just over a year ago, I fully realized that being a system engineer is what I truly like to do.
A short list of things I love doing
- Setting up, managing, and supporting K8s clusters
- Writing scripts to automate manual actions (Go, Perl, Bash, or Ansible)
- Linux administrating
- Working with containers
- Setting up CI/CD (Gitlab-CI, GitHub Actions)
- Resolving incidents and troubleshooting problems
- "Everything as code (from QA to Infrastructure)"
## Experience
### Itigris: _devops engineer/qa automation_
> 07.2019 - until present
I started in QA automation and then moved to system engineering
1. As a QA Automation
- Creating a codebase for E2e tests (**Java**, **NodeJS**) from scratch.
- Creating and supporting an e2e and integration tests **Gitlab CI** pipeline with a dynamically starting **Selenoid** server
- Automating basic actions with **bash**
- Working a lot with **Docker**, **docker-compose**, and **Dockerfiles**
- And a bit of SQL querying (**Postgres**)
2. As a System Engineer
- Writing scripts (**Go** and **Bash**)
- Supporting and setting up several **k8s** clusters (**AWS EKS**)
- Supporting services running in **docker** on **ec2** instances
- Supporting **ec2** instances too
- Troubleshooting incidents (**k8s**, **nginx**, **aws**)
- Disaster recovery (**k8s**, **docker**)
- Maintaining the process of "microservicing" the old monolith
- Deploying services written in Java, JS and Python
- And a bit of SQL administrating (**Postgres**)
- Setting up an infrastructure with **Terraform**
### Etersoft: _engineer_
> 03.2017 - 06.2019 officially (and until present as a side project)
I started as a "handyman" and was learning how to do anything in this company.
- Lots of **Linux**, containers (**docker**, cri-o), and virtualization (VirtualBox, Proxmox)
- A bit of networking (**IPtables** and UFW)
- A bit of **Kubernetes** (setting up and supporting little bare-metal clusters from scratch)
- Setting up virtual machines with **Vagrant** (VirtualBox and a bit **Docker**) and **Ansible**
- Setting up **Nginx** and **Envoy**
- **Bash** and **Perl** scripts for automating basic actions
- Creating a codebase for E2e tests (**NodeJs** and plain **Selenium**) from scratch
- Creating and supporting an e2e tests **Gitlab CI** pipeline with a small static Selenium server
- A bit of frontend development (**ReactJS**) so I have a basic understanding of HTML and CSS too
- A bit of SQL (**MySQL**)
## Skills
- Kubernetes
- Kubeadm
- EKS
- Docker, Containerd, Cri-O
- Kubectl, Helm, Kustomize
- Ingress Nginx, Istio
- Rook-Ceph
- Keel, Kube-Monkey
- Zalando Postgres
- AWS
- EC2
- RDS
- S3
- Route53
- EKS
- Elasticache (redis)
- Coding
- QA automation (JS, Java)
- Backend development with Go (GRPC and Rest API)
- Scripting (Go, Perl, Bash)
- Frontend development (ReactJS and Elm)
- Just coding (Haskell, Go, Perl, NodeJS)
- Others
- Linux, MacOS
- Docker
- Ansible, Terraform, Vagrant
- VirtualBox, Proxmox
- Nginx, Envoy
- Prometheus, Grafana
- ELK
- Postgres, MySQL
## Cover Letter
Hi! I'm a young DevOps engineer with 3++ years of hands-on experience in different IT areas. I've worked with several kinds of teams, and I understand how things may work for each of them, which is important for implementing the DevOps philosophy in a company.
After working in a QA engineer position, I understand how to test any solution and describe test cases.
Also, I'm not afraid of experiments when it comes to solving unusual problems and using new technologies. I know how things work on bare metal and in the cloud, so I can help with migrating to the cloud and vice versa.
I also think about consequences, and I'm ready to take responsibility for every decision I make.
I've got a will to learn, so I won't get stuck with old-fashioned technologies, claiming they are the best fit, if there are better alternatives. And I never say a problem is impossible to solve unless it really is.
When should you pay attention to my CV?
- You're working with Kubernetes
- You need to improve/refactor/create/support CI/CD pipelines
- You need help with Docker
- You're migrating to the cloud
- You need to support a legacy application
- You need help with supporting a microservices application

````diff
@@ -3,28 +3,23 @@
 St. Petersburg, Russia
 - phone: +79996690458
-- telegram: t.me/allanger (preferred)
-- email: allanguor@gmail.com
+- email: nicrodionov@gmail.com
 ```
 ---
 ## About me
-![photo](./bad-cv.jpg)
+![photo](../photo/bad-cv.jpg)
 I'm an engineer with 3++ years of hands-on experience in different IT areas: from writing e2e auto-tests in JS to setting up a Kubernetes cluster from scratch on bare metal.
 Just over a year ago, I've completely understood that being a system engineer is what I truly like to do.
 A short list of things I love doing
-- Setting up, managing, and supporting K8s cluster
+- Setting up, managing, and supporting K8s clusters
 - Writing scripts to automate manual actions (Go, Perl, Bash, or Ansible)
 - Linux administrating
 - Working with containers
-- Setting up CI/CD
+- Setting up CI/CD (Gitlab-CI, GitHub Actions)
-- Resolving incidents and finding the roots of any problem
+- Resolving incidents and troubleshooting problems
 - "Everything as code (from QA to Infrastructure)"
-- Applying best practices when it's acceptable
-My [github](https://github.com/allanger) ain't replete with cool open source projects, but you can find several k8s deployments and quickly written backend implementation for a Pomoday app.
 ## Expirience
 ### Itigris: _devops engineer/qa automation_
@@ -47,7 +42,6 @@ I have started as a QA automation and then have moved to the system engineering
 - Deploying services written in Java, JS and Python
 - And a bit of SQL administrating (**Postgres**)
 - Setting up an infrastructure with **Terraform**
 ### Etersoft: _engineer_
 > 03.2017 - 06.2019 officially (and until present as a side project)
 I became as a "handyman" and was learning how to do anything in this company.
@@ -68,8 +62,8 @@ I became as a "handyman" and was learning how to do anything in this company.
 - EKS
 - Docker, Containerd, Cri-O
 - Kubectl, Helm, Kustomize
-- Ingress nginx, Istio
+- Ingress Nginx, Istio
-- Rook-Ceph storage class
+- Rook-Ceph
 - Keel, Kube-Monkey
 - Zalando Postgres
 - AWS
@@ -92,4 +86,5 @@
 - VirtualBox, Proxmox
 - Nginx, Envoy
 - Prometheus, grafana
 - ELK
+- Postgres, MySQL
````

## Cover letter
When I found out that SoundCloud was looking for a Production Engineer, I decided that I couldn't just pass it by.
### About me
Hi!
I'm a DevOps engineer with almost five years of experience in different IT areas (QA, Development, System Engineering). Currently, I'm working at Itigris in the role of DevOps/System Engineer.
Some of my current responsibilities:
- Support Kubernetes clusters. (all our applications and self-hosted services are running in Kubernetes)
- Provide "Platform as a service" for developers and QA. For example, create and support reliable and fast CI/CD pipelines, support and administrate self-hosted services, etc.
- Help other teams with an understanding of Docker and containerization.
- Troubleshooting and administrating
### Why am I writing to you?
After reading the job description, I had some doubt, because I'm not sure I can be a perfect fit right now. But it seemed impossible not to try, because I think SoundCloud is a great platform, and I would be happy to become a part of the team. I'm just tired of working on projects that don't make me feel satisfied, no matter how interesting the tasks are. That's why I've decided to try.
SoundCloud is a platform that I use every day (as a listener and as a musician), and I believe it's the kind of project that will make me feel I'm working on something valuable. That's why I hope you will come back with feedback even if I'm not good enough to join the team right now, to help me understand which technologies I should learn more deeply.
### Why I think you should pay attention to my CV
- I'm a young engineer with a will to learn.
- I'm not afraid of non-standard solutions, and I'm not tied to old-familiar technologies.
- I always take responsibility for what I'm doing.
- I'm sure that things must be automated when it's possible.
Thank you
Nikolay Rodionov

cv/pdf/coverletter.pdf
cv/pdf/devops-cv.pdf

# 07.06.2021
## Progress
```
Done:
- Itigris: setup certbot
- Itigris: testlink backups
In progress:
- Itigris: alcon move to jdk11
Todo:
- Music: finish "Rave Party" arrangement
- Music: mix "Rave Party"
- Development: Finish auth rpc for GEA
```
## About the day
Lots of work and time-wasting.
Think that I really should improve my English, my time-management skills and my willpower.
Should spend more time creating music and much less programming.
## Tags
```
#work #waste
```

lyrics/allanger/SOB.m2
You could catch a snow flake and throw it like a shuriken
To put out the fire in my eyes I'm no more sure you can
Stop I'm putting my full metal jacket on
And fly outta the windows like I was never here at home
It's all me, it's all mine, all yourn. What it's all about
It's all right, cuz I'm no more your dog
I'm scared of what I've seen
Streets are covered with the blood of the innocent

lyrics/allanger/scar.md
# Scar
I know who's s

# beveiler-init
## TODO
- [ ] Write encoded messages to file
- [ ] Chop file into chunks

# gitlab-env
TODO:
- [ ] Refactor proto
- [ ] Gitlab package in third_party
- [ ] Real-time getting projects end environments
- [ ]

# migrations
Tables:
- actions_log
  - id
  - action name
  - action description
  - action trigger
- db_lock
  - db_name
  - locked
Once started, applications send a gRPC message with the following information
```
appVersion
databaseStructure[]
maps
```
1. App starts and sends a message to the migrator
2. Migrator registers the new version in the actions_version table and checks other versions
3. Migrator checks the DB structure and triggers actions (actions are version-based)
4.