Add gcx time table note

This commit is contained in:
Nikolai Rodionov 2023-11-25 15:25:06 +01:00
parent 115e7c4d28
commit 0589361bc7
GPG Key ID: 906851F91B1DA3EF
3 changed files with 162 additions and 0 deletions

grandcentrix/db-operator.md Normal file

@@ -0,0 +1,102 @@
## db-operator
I've created an Azure PostgreSQL Flexible Server here: https://github.com/grandcentrix/platform-operations/pull/3591/files
> You can switch to that branch and run `terraform output` to get the credentials

Then I created a local k8s cluster using `kind` and deployed db-operator there.
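Roughly, those steps look like this (the branch and cluster names below are placeholders, not the real ones from the PR):
```bash
# Check out the PR branch (placeholder name) and print the generated credentials
git fetch origin && git checkout <pr-branch>
terraform output

# Spin up a throwaway local cluster (assumes kind is installed)
kind create cluster --name db-operator-test
```
The operator and its dependencies were then deployed with `helmfile`: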
```yaml
# helmfile.yaml
---
repositories:
  - name: db-operator
    url: https://db-operator.github.io/charts/
  - name: jetstack
    url: https://charts.jetstack.io
releases:
  - name: cert-manager
    chart: jetstack/cert-manager
    version: v1.11.0
    namespace: cert-manager
    createNamespace: true
    values:
      - installCRDs: true
  - name: db-operator
    chart: db-operator/db-operator
    version: 1.8.0
    namespace: db-operator
    installed: true
  - name: db-instance
    installed: true
    namespace: db-operator
    chart: db-operator/db-instances
    version: 1.4.1
    values:
      - dbinstances:
          azure-postgres:
            monitoring:
              enabled: false
            adminSecretRef:
              Name: postgres-generic-admin-secret
              Namespace: db-operator
            engine: postgres
            generic:
              host: gcx-solutions-postgresql-flexi.postgres.database.azure.com
              port: 5432
            secrets:
              adminUser: ***
              adminPassword: ***
            sslConnection:
              enabled: true
              skipVerify: false
```
```bash
$ helmfile -l name=cert-manager sync
$ helmfile -l name=db-operator sync
$ helmfile -l name=db-instance sync
$ kubectl get dbin
NAME             PHASE     STATUS
azure-postgres   Running   true
```
Then create a `Database` resource:
```yaml
# db.yaml
---
apiVersion: "kinda.rocks/v1beta1"
kind: "Database"
metadata:
  name: postgres-db
spec:
  secretName: bega-pg-sec
  instance: azure-postgres
  deletionProtected: false
  postgres:
    extensions:
      - uuid-ossp
    schemas:
      - application
  secretsTemplates:
    CONNECTION_STRING: "jdbc:{{ .Protocol }}://{{ .DatabaseHost }}:5432/{{ .DatabaseName }}?ssl=true&sslmode=require&currentSchema=application"
  backup:
    enable: false
    cron: ""
  cleanup: true
```
```bash
$ kubectl apply -f db.yaml
$ kubectl get db
NAME          PHASE   STATUS   PROTECTED   DBINSTANCE       AGE
postgres-db   Ready   true     false       azure-postgres   17m
```
And then you can read the data from the `bega-pg-sec` secret to connect to the database.
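For example (the exact keys depend on the operator version and the `secretsTemplates` above; `CONNECTION_STRING` is the one templated earlier):
```bash
# Show everything the operator put into the secret
kubectl get secret bega-pg-sec -o yaml
# Decode a single key, e.g. the templated connection string
kubectl get secret bega-pg-sec -o jsonpath='{.data.CONNECTION_STRING}' | base64 -d
```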
This db-operator doesn't really work with schemas: it can create them and drop the `public` one, but the approach of using one schema per application won't really work, because the operator creates one user per database. We can add more schemas to `.spec.postgres.schemas` and then add another templated secret that builds a connection string for that schema, but a new user won't be created, so applications will have to share one user.
If `deletionProtected` is set to `true`, the database on the server won't be removed when the `Database` resource is deleted.
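A sketch of what that multi-schema workaround (plus the protection flag) could look like; the `reporting` schema and the second template key are just examples, not something that exists in the PR:
```yaml
# Sketch only: a second schema plus a templated connection string for it.
# The database user is still shared between both schemas.
apiVersion: "kinda.rocks/v1beta1"
kind: "Database"
metadata:
  name: postgres-db
spec:
  secretName: bega-pg-sec
  instance: azure-postgres
  deletionProtected: true # keep the database on the server even if this resource is deleted
  postgres:
    extensions:
      - uuid-ossp
    schemas:
      - application
      - reporting # example extra schema
  secretsTemplates:
    CONNECTION_STRING: "jdbc:{{ .Protocol }}://{{ .DatabaseHost }}:5432/{{ .DatabaseName }}?ssl=true&sslmode=require&currentSchema=application"
    REPORTING_CONNECTION_STRING: "jdbc:{{ .Protocol }}://{{ .DatabaseHost }}:5432/{{ .DatabaseName }}?ssl=true&sslmode=require&currentSchema=reporting"
```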


@@ -0,0 +1,59 @@
# How to handle PostgreSQL?
> The whole team wasn't present at the weekly where we started discussing this, so we decided it makes sense to open a written discussion here, so everybody can follow and add something.
## What are the options?
### How to handle servers?
Currently, we're considering 2 ways:
1. Use Azure Postgres-as-a-Service (the Flexible Server one)
2. Run PostgreSQL in Kubernetes.
> @allanger My thoughts about it: I like the idea of running PostgreSQL in K8s, but then we would have to take care of a lot of things that are currently on Azure's shoulders. And even though I can't say I trust them unconditionally, I think it might be better not to have that responsibility on us.
> Hosting PostgreSQL ourselves comes with availability, performance, disaster recovery, and monitoring concerns (I'm not sure what Azure gives us here either). Also, I haven't had a good experience with the Azure storage class (that might be solved by another storage class). If we can make sure we can handle all of that, I'd go for PostgreSQL in k8s, but the problem is that I don't believe other teams will be happy about it: we would have to convince them that everything is fine, and if something breaks because we chose this way, it can get awkward.
> But in any case, I'd check how well it performs and how stable it is.
It would be good to have a common list of pros and cons written somewhere the whole team can contribute to, so we can decide.
My personal list:
#### DB in K8s
Pros:
- We don't depend on Azure and control Postgres better (if we need it)
- It's probably going to be cheaper
- It's probably going to be easier to spin up environments, because less Terraform will be needed
- It is fun
Cons:
- We need to ensure performance and sustainability, which might require deeper PostgreSQL knowledge. And since we're not DB administrators (as far as I know), that may become a problem.
- We're responsible for the DB. If Azure has problems on their side, it's a smaller problem for our team. If our 'self-hosted' DB is down, all the beef goes directly to us.
#### DB as a service
Pros:
- We don't have to be DB admins
- We already have backups built-in
- It will probably be easier to make it sustainable, because as a managed service it should already be
Cons:
- It's more expensive
- Terraform
- We need to manage databases on the service somehow, separately from the server configuration
- Less fun
### How to handle databases?
- If we're going with the Azure service, we can try using the [db-operator](https://github.com/db-operator/db-operator) that I'm working on. It's a fork (so the 115 stars are gone :( ), but the whole team is now working on that version. The idea is that we have a server running and the operator handles database-and-user management: we create a `Database` resource and just mount the secret created by the operator into the app's pods (see the sketch after this list). Another good thing is that I'm a maintainer of that project, so I can add features we need pretty easily (if the rest of the team is fine with that). Also, this operator supports creating a database server on GCloud, and I think that feature can be added for Azure too, so it would also handle the db-instance creation, but that would require some time to implement. Another benefit is that we can stop using this solution easily, with no databases lost, and move to another one if we don't like it.
- If we run Postgres in k8s, we need to decide how we'd like to do that:
  1. Deploy a simple Postgres StatefulSet. It can be configured by values, or, for example, the same db-operator can be used on top of it.
  2. Use an operator that creates databases in k8s (e.g. the [zalando-operator](https://github.com/zalando/postgres-operator)).
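For the db-operator option above, a minimal sketch of how an app could consume the operator-created secret (the deployment name, image, and secret name are placeholders):
```yaml
# Sketch: expose the db-operator-generated secret to an application as env vars.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest # placeholder image
          envFrom:
            - secretRef:
                name: my-app-db-secret # whatever secretName is set on the Database resource
```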
#### Postgres in K8s
What should we consider?
- We should have backups and be sure that these backups are working.
- We should ensure good performance
- If we're using CRDs for databases, we need to make sure we know how to handle CRD updates
- We need to set up proper monitoring.


@@ -0,0 +1 @@
|Date|Kind|Start