Compare commits

...

1 commit

SHA1: 52ec04c9ea
Message: WIP: Working on content
Date: 2024-05-27 09:33:45 +02:00

7 changed files with 173 additions and 60 deletions

View File

@@ -1,13 +1,12 @@
-+++
-title = "Dynamic Environment Per Branch with ArgoCD"
-date = 2023-02-25T14:00:00+01:00
-image = "/posts/argocd-dynamic-environment-per-branch-part-1/cover.png"
-draft = false
-categories = [
-"Kubernetes",
-"CI-CD"
-]
-+++
+---
+title: "Dynamic Environment Per Branch with ArgoCD"
+date: 2023-02-25T14:00:00+01:00
+image: "/posts/argocd-dynamic-environment-per-branch-part-1/cover.png"
+draft: false
+categories:
+- "Kubernetes"
+- "CI-CD"
+---
 [Do you remember?]({{< ref "dont-use-argocd-for-infrastructure" >}})
 > And using `helmfile`, I will install `ArgoCD` to my clusters, of course, because it's an awesome tool, without any doubts. But don't manage your infrastructure with it, because it's a part of your infrastructure, and it's a service that you provide to other teams. And I'll talk about in one of the next posts.

View File

@@ -1,13 +1,12 @@
-+++
-title = "ArgoCD vs Helmfile: Applications"
-date = 2023-02-13T12:14:09+01:00
-image = "/posts/argocd-vs-helmfile/cover-applications.png"
-draft = false
-categories = [
-"Kubernetes",
-"CI-CD"
-]
-+++
+---
+title: "ArgoCD vs Helmfile: Applications"
+date: 2023-02-13T12:14:09+01:00
+image: "/posts/argocd-vs-helmfile/cover-applications.png"
+draft: false
+categories:
+- "Kubernetes"
+- "CI-CD"
+---
 > So as promised in [the previous ArgoCD post]({{< ref "dont-use-argocd-for-infrastructure" >}}), I'll try to show a simple example of Pull Requests for different kinds of setups. This is the first part. Putting everything in the same post seems kind of too much.

View File

@@ -1,13 +1,12 @@
-+++
-title = 'ArgoCD vs Helmfile: ApplicationSet'
-date = 2023-02-15T10:14:09+01:00
-image = "/posts/argocd-vs-helmfile/cover-applicationset.png"
-draft = false
-categories = [
-"Kubernetes",
-"CI-CD"
-]
-+++
+---
+title: 'ArgoCD vs Helmfile: ApplicationSet'
+date: 2023-02-15T10:14:09+01:00
+image: "/posts/argocd-vs-helmfile/cover-applicationset.png"
+draft: false
+categories:
+- "Kubernetes"
+- "CI-CD"
+---
 This is a second post about *"argocding"* your infrastructure. [First can be found here]({{< ref "argocd-vs-helmfile-application" >}}).

View File

@@ -1,12 +1,11 @@
-+++
-title = "Argocd vs Helmfile: Helmfile"
-date = 2023-02-17T12:48:51+01:00
-draft = false
-categories = [
-"Kubernetes",
-"CI-CD"
-]
-+++
+---
+title: "Argocd vs Helmfile: Helmfile"
+date: 2023-02-17T12:48:51+01:00
+draft: false
+categories:
+- "Kubernetes"
+- "CI-CD"
+---
 In two previous posts I've described how it's possible to install a couple of applications with [`Applications`]({{< relref "/post/allanger/argocd-vs-helmfile-application" >}}) and [`ApplicationSets`]({{< relref "/post/allanger/argocd-vs-helmfile-applicationset" >}}), and this one is the last in a row. And here I'm going to install the same applications (`VPA` and `Goldilocks`) with helmfile, and I will tell why I think that it's better than `ArgoCD`
 So let's start. Here you can find the [initial config](https://git.badhouseplants.net/allanger/helmfile-vs-argo/src/branch/helmfile-main). Let's see what we got here:
@@ -451,7 +450,7 @@ vpa-system, goldilocks-dashboard, ServiceAccount (v1) has been added:
 hook[prepare] logs | diff -u -N /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/LIVE-4051758900/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalercheckpoints.autoscaling.k8s.io /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/MERGED-3664876659/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalercheckpoints.autoscaling.k8s.io
 hook[prepare] logs | --- /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/LIVE-4051758900/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalercheckpoints.autoscaling.k8s.io 2023-02-17 13:15:29
-hook[prepare] logs | +++ /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/MERGED-3664876659/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalercheckpoints.autoscaling.k8s.io 2023-02-17 13:15:29
+hook[prepare] logs | --- /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/MERGED-3664876659/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalercheckpoints.autoscaling.k8s.io 2023-02-17 13:15:29
 hook[prepare] logs | @@ -0,0 +1,216 @@
 hook[prepare] logs | +apiVersion: apiextensions.k8s.io/v1
 hook[prepare] logs | +kind: CustomResourceDefinition
@@ -671,7 +670,7 @@ hook[prepare] logs | + storedVersions:
 hook[prepare] logs | + - v1
 hook[prepare] logs | diff -u -N /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/LIVE-4051758900/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalers.autoscaling.k8s.io /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/MERGED-3664876659/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalers.autoscaling.k8s.io
 hook[prepare] logs | --- /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/LIVE-4051758900/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalers.autoscaling.k8s.io 2023-02-17 13:15:29
-hook[prepare] logs | +++ /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/MERGED-3664876659/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalers.autoscaling.k8s.io 2023-02-17 13:15:29
+hook[prepare] logs | --- /var/folders/w1/27ptcr29547f0g8732kmffwm0000gn/T/MERGED-3664876659/apiextensions.k8s.io.v1.CustomResourceDefinition..verticalpodautoscalers.autoscaling.k8s.io 2023-02-17 13:15:29
 hook[prepare] logs | @@ -0,0 +1,550 @@
 hook[prepare] logs | +apiVersion: apiextensions.k8s.io/v1
 hook[prepare] logs | +kind: CustomResourceDefinition

View File

@@ -1,12 +1,11 @@
-+++
-title = 'Do we really need Continuous Reconciliation after all?'
-date = 2024-02-13T15:04:44+01:00
-draft = true
-categories = [
-"Kubernetes",
-"CI-CD"
-]
-+++
+---
+title: 'Do we really need Continuous Reconciliation after all?'
+date: 2024-02-13T15:04:44+01:00
+draft: true
+categories:
+- "Kubernetes"
+- "CI-CD"
+---
 > Well, alright, I guess it depends

View File

@@ -1,13 +1,13 @@
-+++
-title = "Don't use ArgoCD for your infrastructure"
-date = 2023-02-09T12:47:32+01:00
-draft = false
-image = '/posts/dont-use-argocd-for-infrastructure/cover.png'
-categories = [
-"Kubernetes",
-"CI-CD"
-]
-+++
+---
+title: "Don't use ArgoCD for your infrastructure"
+date: 2023-02-09T12:47:32+01:00
+draft: false
+image: /posts/dont-use-argocd-for-infrastructure/cover.png
+categories:
+- "Kubernetes"
+- "CI-CD"
+---
 > Of course, it's just a clickbait title. Use whatever works for you. I will just describe why I wouldn't use `ArgoCD` for the infrastructure

View File

@@ -0,0 +1,118 @@
---
title: "Testing External Snapshotter"
description: Trying to use the external-snapshotter
date: 2024-05-14T15:37:59+02:00
image:
math:
hidden: false
comments: true
draft: true
---
# Intro
# Installing
I've created a new, empty k3s cluster, into which I've installed `coreDNS`, `cilium`, and Rancher's `local-path-provisioner`.
```shell
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-operator-9c465d6d8-jlbc9 1/1 Running 0 72s
kube-system cilium-gzwlp 1/1 Running 0 72s
kube-system local-path-provisioner-6896b5f8c-7mpfl 1/1 Running 0 50s
kube-system coredns-7db6d4f6d7-vqhpc 1/1 Running 0 61s
```
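Roughly, bootstrapping a cluster like that could look something like the following. These are not the exact commands I've used (for one thing, the stock `local-path-provisioner` manifest installs into its own namespace rather than `kube-system`), so treat the flags, chart repos, and manifest URL as assumptions:
```shell
# Install k3s without the components that are replaced below
$ curl -sfL https://get.k3s.io | sh -s - server \
    --flannel-backend=none \
    --disable-network-policy \
    --disable=coredns \
    --disable=local-storage \
    --disable=traefik

# CNI: Cilium
$ helm repo add cilium https://helm.cilium.io/
$ helm install cilium cilium/cilium -n kube-system

# DNS: CoreDNS
$ helm repo add coredns https://coredns.github.io/helm
$ helm install coredns coredns/coredns -n kube-system

# Storage: Rancher's local-path-provisioner
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
```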
Now let's install the external-snapshotter. The project's source code can be found here: <https://github.com/kubernetes-csi/external-snapshotter>, but it doesn't come with a Helm chart, so I'll install it using this one: <https://github.com/piraeusdatastore/helm-charts/tree/main/charts/snapshot-controller>
```shell
$ helm repo add piraeus-charts https://piraeus.io/helm-charts/
$ helm install snapshot-controller piraeus-charts/snapshot-controller -n kube-system
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-operator-9c465d6d8-jlbc9 1/1 Running 0 3m35s
kube-system cilium-gzwlp 1/1 Running 0 3m35s
kube-system local-path-provisioner-6896b5f8c-7mpfl 1/1 Running 0 3m13s
kube-system coredns-7db6d4f6d7-vqhpc 1/1 Running 0 3m24s
kube-system snapshot-controller-5fd4df575-2vmhl 1/1 Running 0 16s
kube-system snapshot-validation-webhook-79f9c6bb5f-p6hqx 1/1 Running 0 16s
$ kubectl get crd
NAME CREATED AT
...
volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io 2024-05-14T13:52:17Z
volumegroupsnapshotcontents.groupsnapshot.storage.k8s.io 2024-05-14T13:52:17Z
volumegroupsnapshots.groupsnapshot.storage.k8s.io 2024-05-14T13:52:17Z
volumesnapshotclasses.snapshot.storage.k8s.io 2024-05-14T13:52:17Z
volumesnapshotcontents.snapshot.storage.k8s.io 2024-05-14T13:52:18Z
volumesnapshots.snapshot.storage.k8s.io 2024-05-14T13:52:18Z
```
Let's create a dummy workload that will write something to a PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  volumes:
    - name: test
      persistentVolumeClaim:
        claimName: test
  containers:
    - name: test
      image: alpine
      volumeMounts:
        - mountPath: /src
          name: test
      command:
        - sh
      args:
        - -c
        - sleep 1000
```
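Assuming the manifest above is saved as `workload.yaml` (the file name is just for illustration), it can be applied like this:
```shell
$ kubectl apply -f workload.yaml
```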
```shell
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test Bound pvc-4924e25f-84ae-4640-8199-0156659cb167 1Gi RWO local-path 2m7s
```
```shell
$ kubectl exec -it test -- sh
# -- Inside the container
$ echo 1 > /src/test
$ cat /src/test
1
```
So now let's try creating a snapshot
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  source:
    persistentVolumeClaimName: test
```
```shell
$ kubectl get volumesnapshot
```
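One thing to keep in mind: for a `VolumeSnapshot` to actually be provisioned, there has to be a `VolumeSnapshotClass` (either marked as the default or referenced via `volumeSnapshotClassName`) that points at a CSI driver with snapshot support, and as far as I know the `local-path` provisioner isn't one. A minimal sketch of such a class, with a placeholder driver name:
```shell
$ kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: test-snapshot-class
# Placeholder: this has to be a CSI driver that actually implements snapshots
driver: csi.example.com
deletionPolicy: Delete
EOF
```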