# CloudNative PostgreSQL Guide

## Table of Contents

- [Overview](#overview)
- [Installation](#installation)
- [Backup](#backup)
- [Recovery](#recovery)

## Overview
This guide covers the deployment and management of PostgreSQL clusters using CloudNative-PG on Kubernetes with Flux CD. You'll learn how to install a highly available PostgreSQL cluster, configure automated backups to S3-compatible storage, and perform recovery operations when needed.
## Installation
The deployment is done through the [cluster Helm chart](https://github.com/cloudnative-pg/charts/tree/main/charts/cluster) provided by CloudNative-PG.
### Setting up Helm Repository
Since we are using Flux CD, you first need to create the HelmRepository resource in your namespace:
``` yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: cnpg
  namespace: test
spec:
  interval: 30m
  url: https://cloudnative-pg.github.io/charts
```
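
If the Flux CLI is available, you can check that the repository index has been fetched before continuing (an optional sanity check):

``` shell
# The cnpg HelmRepository should eventually report Ready=True
flux get sources helm -n test
```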
### Basic Deployment
With the HelmRepository deployed in the `test` namespace as shown above, here is the simplest deployment example:
``` yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: basic
spec:
  chart:
    spec:
      chart: cluster
      version: 0.3.1
      sourceRef:
        kind: HelmRepository
        name: cnpg
        namespace: test
  interval: 5m
  values:
    type: postgresql
    mode: standalone
    version:
      postgresql: "17.2"
    cluster:
      instances: 3
      storage:
        size: 10Gi
      resources:
        requests:
          cpu: "1"
          memory: "2Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
```
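
Flux will then fetch the chart and install the release. If you want to confirm that the HelmRelease itself reconciled before looking at the database, the Flux CLI can show its status (optional):

``` shell
# The release should report Ready=True once the chart has been installed
flux get helmreleases -n test
```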
### Verification
Once applied, this creates a custom resource of type **clusters.postgresql.cnpg.io**. You can view it with the following command:
``` shell
oc get clusters.postgresql.cnpg.io -n test
NAME            AGE   INSTANCES   READY   STATUS                     PRIMARY
basic-cluster   5m    3           3       Cluster in healthy state   basic-cluster-1
```
**💡 Tip**: The cluster name follows this naming convention: **${HELMRELEASE_METADATA_NAME}-cluster**. In our example, it will be **basic-cluster**.
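
To connect to the database, CloudNative-PG by default creates an application user secret named **${CLUSTER_NAME}-app** and a read-write service named **${CLUSTER_NAME}-rw**. Assuming those defaults, the credentials for our example can be read like this:

``` shell
# Application user and password generated by CloudNative-PG for basic-cluster
oc get secret basic-cluster-app -n test -o jsonpath='{.data.username}' | base64 -d
oc get secret basic-cluster-app -n test -o jsonpath='{.data.password}' | base64 -d
```

Applications inside the cluster can then reach the primary through the `basic-cluster-rw` service on port 5432.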
## Backup
This section explains how to configure automated backups using a modified version of the previous example.
Since we use Mount10 for backup storage, this example includes Mount10-specific configurations.
``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: basic-postgres-backup-s3
type: Opaque
stringData:
  ACCESS_KEY_ID: xxxx
  ACCESS_SECRET_KEY: xxxx
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: basic
spec:
  chart:
    spec:
      chart: cluster
      version: 0.3.1
      sourceRef:
        kind: HelmRepository
        name: cnpg
        namespace: test
  interval: 5m
  values:
    type: postgresql
    mode: standalone
    version:
      postgresql: "17.2"
    cluster:
      instances: 3
      storage:
        size: 10Gi
      resources:
        requests:
          cpu: "1"
          memory: "2Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
    backups:
      enabled: true
      endpointURL: https://glacier-1.kvant.cloud
      provider: s3
      s3:
        bucket: phoenix-openshift-backups
      wal:
        encryption: ""
      data:
        encryption: ""
      scheduledBackups:
        - name: daily-backup
          schedule: "@daily"
          backupOwnerReference: self
          method: barmanObjectStore
      retentionPolicy: "30d"
  valuesFrom:
    - kind: Secret
      name: basic-postgres-backup-s3
      valuesKey: ACCESS_KEY_ID
      targetPath: backups.s3.accessKey
      optional: false
    - kind: Secret
      name: basic-postgres-backup-s3
      valuesKey: ACCESS_SECRET_KEY
      targetPath: backups.s3.secretKey
      optional: false
```
Backups will be stored in the S3 bucket at: `s3://phoenix-openshift-backups/basic-cluster/`
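
The schedule above covers routine backups. To verify the configuration right away, you can also trigger a one-off backup with a CloudNative-PG `Backup` resource targeting the generated cluster (a minimal sketch using the names from this example):

``` yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: basic-cluster-manual
  namespace: test
spec:
  # Write the backup to the same object store configured above
  method: barmanObjectStore
  cluster:
    name: basic-cluster
```

Its progress can be followed with `oc get backups.postgresql.cnpg.io -n test`.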
## Recovery
With proper preparation, the cluster Helm chart lets you perform recovery operations quickly and easily.
The example below shows a complete HelmRelease configuration that can be switched to recovery mode by changing just a few values.
``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: basic-postgres-backup-s3
type: Opaque
stringData:
  ACCESS_KEY_ID: xxxx
  ACCESS_SECRET_KEY: xxxx
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: basic
spec:
  chart:
    spec:
      chart: cluster
      version: 0.3.1
      sourceRef:
        kind: HelmRepository
        name: cnpg
        namespace: test
  interval: 5m
  values:
    type: postgresql
    mode: standalone
    recovery:
      method: object_store
      clusterName: basic-cluster
      endpointURL: &endpoint_url https://glacier-1.kvant.cloud
      provider: s3
      s3:
        bucket: &s3_bucket phoenix-openshift-backups
    version:
      postgresql: "17.2"
    cluster:
      instances: 3
      storage:
        size: 10Gi
      resources:
        requests:
          cpu: "1"
          memory: "2Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
    backups:
      enabled: true
      endpointURL: *endpoint_url
      provider: s3
      s3:
        bucket: *s3_bucket
      wal:
        encryption: ""
      data:
        encryption: ""
      scheduledBackups:
        - name: daily-backup
          schedule: "@daily"
          backupOwnerReference: self
          method: barmanObjectStore
      retentionPolicy: "30d"
  valuesFrom:
    - kind: Secret
      name: basic-postgres-backup-s3
      valuesKey: ACCESS_KEY_ID
      targetPath: recovery.s3.accessKey
      optional: false
    - kind: Secret
      name: basic-postgres-backup-s3
      valuesKey: ACCESS_SECRET_KEY
      targetPath: recovery.s3.secretKey
      optional: false
    - kind: Secret
      name: basic-postgres-backup-s3
      valuesKey: ACCESS_KEY_ID
      targetPath: backups.s3.accessKey
      optional: false
    - kind: Secret
      name: basic-postgres-backup-s3
      valuesKey: ACCESS_SECRET_KEY
      targetPath: backups.s3.secretKey
      optional: false
```
This example is similar to the previous one, with the addition of a recovery section.
**Important**: You can deploy this configuration as shown. The recovery section is ignored when the mode is set to `standalone`.
To switch from standalone to recovery mode, follow these steps (a sketch of the resulting values follows the list):
1. Change `spec.values.mode` from `standalone` to `recovery`
2. Change `spec.values.backups.enabled` to `false`
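
For the example above, a minimal sketch of the values that change when entering recovery mode:

``` yaml
values:
  mode: recovery   # was: standalone
  backups:
    enabled: false # was: true
```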
Once recovery is complete, follow these steps to return to normal operation (a verification command follows the list):
1. Change `spec.values.mode` from `recovery` to `standalone`
2. Change `spec.values.backups.enabled` to `true`
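
After either switch, Flux reconciles the HelmRelease and CloudNative-PG converges the cluster. The same status command used earlier can be used to watch it settle:

``` shell
# Wait until STATUS shows "Cluster in healthy state" again
oc get clusters.postgresql.cnpg.io -n test -w
```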