4 changes: 2 additions & 2 deletions product_docs/docs/postgres_for_kubernetes/1/backup.mdx
@@ -84,8 +84,8 @@ This use case can also be extended to [replica clusters](replica_cluster.md),
as they can simply rely on the WAL archive to synchronize across long
distances, extending disaster recovery goals across different regions.

When you [configure a WAL archive](wal_archiving.md), {{name.ln}} provides
out-of-the-box an [RPO](before_you_start.md#rpo) <= 5 minutes for disaster
When you [configure a WAL archive](wal_archiving.md), EDB Postgres for Kubernetes provides
out-of-the-box an [RPO](before_you_start.md#rpo) <= 5 minutes for disaster
recovery, even across regions.
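
As a minimal illustrative sketch (the bucket path, the credentials `Secret`, and the use of the in-tree `barmanObjectStore` stanza are assumptions here; see [WAL archiving](wal_archiving.md) for the supported options), a WAL archive can be enabled directly in the `Cluster` spec:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-with-wal-archive
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      # Hypothetical bucket and credentials: adjust to your object store
      destinationPath: s3://my-wal-archive/
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
```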

!!! Important
@@ -42,7 +42,7 @@ as it is composed of a community PostgreSQL image and the latest
Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
automatically create the target bucket, assuming it already exists. Only the
`barman-cloud-check-wal-archive` command creates the bucket now. Whenever this
is not the first operation run on an empty bucket, {{name.ln}} will throw an
is not the first operation run on an empty bucket, EDB Postgres for Kubernetes will throw an
error. As a result, to ensure reliable, future-proof operations and avoid
potential issues, we strongly recommend that you create and configure your
object store bucket *before* creating a `Cluster` resource that references it.
@@ -1,5 +1,5 @@
---
title: 'Before You Start'
title: 'Before you start'
originalFilePath: 'src/before_you_start.md'
---

@@ -661,7 +661,7 @@ spec:
```

All the requirements must be met for the clone operation to work, including
the same PostgreSQL version (in our case 18.0).
the same PostgreSQL version (in our case 18.1).

#### TLS certificate authentication

@@ -1,5 +1,5 @@
---
title: 'Instance pod configuration'
title: 'Instance Pod configuration'
originalFilePath: 'src/cluster_conf.md'
---

44 changes: 35 additions & 9 deletions product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx
@@ -6,12 +6,12 @@ originalFilePath: 'src/cnp_i.md'


The **CloudNativePG Interface** ([CNPG-I](https://github.com/cloudnative-pg/cnpg-i))
is a standard way to extend and customize {{name.ln}} without modifying its
is a standard way to extend and customize EDB Postgres for Kubernetes without modifying its
core codebase.

## Why CNP-I?

{{name.ln}} supports a wide range of use cases, but sometimes its built-in
EDB Postgres for Kubernetes supports a wide range of use cases, but sometimes its built-in
functionality isn’t enough, or adding certain features directly to the main
project isn’t practical.

@@ -23,7 +23,7 @@ Before CNP-I, users had two main options:
Both approaches created maintenance overhead, slowed upgrades, and delayed delivery of critical features.

CNP-I solves these problems by providing a stable, gRPC-based integration
point for extending {{name.ln}} at key points in a cluster’s lifecycle —such
point for extending EDB Postgres for Kubernetes at key points in a cluster’s lifecycle —such
as backups, recovery, and sub-resource reconciliation— without disrupting the
core project.

@@ -39,7 +39,7 @@ CNP-I is inspired by the Kubernetes
The operator communicates with registered plugins using **gRPC**, following the
[CNPG-I protocol](https://github.com/cloudnative-pg/cnpg-i/blob/main/docs/protocol.md).

{{name.ln}} discovers plugins **at startup**. You can register them in one of two ways:
EDB Postgres for Kubernetes discovers plugins **at startup**. You can register them in one of two ways:

- Sidecar container – run the plugin inside the operator’s Deployment
- Standalone Deployment – run the plugin as a separate workload in the same
@@ -89,7 +89,7 @@ operator’s and allows independent scaling. In this setup, the plugin exposes a
TCP gRPC endpoint behind a Service, with **mTLS** for secure communication.

!!! Warning
{{name.ln}} does **not** discover plugins dynamically. If you deploy a new
EDB Postgres for Kubernetes does **not** discover plugins dynamically. If you deploy a new
plugin, you must **restart the operator** to detect it.

Example Deployment:
@@ -113,7 +113,7 @@ spec:

The related Service for the plugin must include:

- The label `k8s.enterprisedb.io/plugin: <plugin-name>` — required for {{name.ln}} to
- The label `k8s.enterprisedb.io/plugin: <plugin-name>` — required for EDB Postgres for Kubernetes to
discover the plugin
- The annotation `k8s.enterprisedb.io/pluginPort: <port>` — specifies the port where the
plugin’s gRPC server is exposed
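
As a minimal sketch of such a Service (the plugin name, namespace, and port are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cnpg-i-plugin-example
  namespace: postgresql-operator-system
  labels:
    # Required so the operator can discover the plugin
    k8s.enterprisedb.io/plugin: cnpg-i-plugin-example
  annotations:
    # Port where the plugin's gRPC server listens
    k8s.enterprisedb.io/pluginPort: "9090"
spec:
  selector:
    app: cnpg-i-plugin-example
  ports:
    - port: 9090
      targetPort: 9090
```
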
@@ -140,7 +140,7 @@ spec:

### Configuring TLS Certificates

When a plugin runs as a `Deployment`, communication with {{name.ln}} happens
When a plugin runs as a `Deployment`, communication with EDB Postgres for Kubernetes happens
over the network. To secure it, **mTLS is enforced**, requiring TLS
certificates for both sides.

@@ -166,10 +166,36 @@ spec:
You can provide your own certificate bundles, but the recommended method is
to use [Cert-manager](https://cert-manager.io).

#### Customizing the Certificate DNS Name

By default, EDB Postgres for Kubernetes uses the Service name as the server name for TLS
verification when connecting to the plugin. If your environment requires the
certificate to have a different DNS name (e.g., `barman-cloud.svc`), you can
customize it using the `k8s.enterprisedb.io/pluginServerName` annotation:

```yaml
apiVersion: v1
kind: Service
metadata:
annotations:
k8s.enterprisedb.io/pluginClientSecret: cnpg-i-plugin-example-client-tls
k8s.enterprisedb.io/pluginServerSecret: cnpg-i-plugin-example-server-tls
k8s.enterprisedb.io/pluginPort: "9090"
k8s.enterprisedb.io/pluginServerName: barman-cloud.svc
name: barman-cloud
namespace: postgresql-operator-system
spec:
[...]
```

This allows the operator to verify the plugin's certificate against the
specified DNS name instead of the default Service name. The server certificate
must include this DNS name in its Subject Alternative Names (SAN).
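
As a hedged sketch of issuing such a server certificate with cert-manager (the `Certificate` name and the `selfsigned-issuer` issuer are assumptions; the secret name and the `barman-cloud.svc` DNS name follow the Service above):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cnpg-i-plugin-example-server
  namespace: postgresql-operator-system
spec:
  # Matches the k8s.enterprisedb.io/pluginServerSecret annotation
  secretName: cnpg-i-plugin-example-server-tls
  dnsNames:
    - barman-cloud
    - barman-cloud.postgresql-operator-system.svc
    # Custom name referenced by k8s.enterprisedb.io/pluginServerName
    - barman-cloud.svc
  usages:
    - server auth
  issuerRef:
    name: selfsigned-issuer  # assumed Issuer; use your own CA or issuer
    kind: Issuer
```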

## Using a plugin

To enable a plugin, configure the `.spec.plugins` section in your `Cluster`
resource. Refer to the {{name.ln}} API Reference for the full
resource. Refer to the EDB Postgres for Kubernetes API Reference for the full
[PluginConfiguration](https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-k8s-enterprisedb-io-v1-PluginConfiguration)
specification.
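
As a minimal sketch (the plugin name and parameters below are placeholders; the exact fields each plugin accepts are defined in the PluginConfiguration reference linked above):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-with-plugin
spec:
  instances: 3
  storage:
    size: 1Gi
  plugins:
    - name: cnpg-i-plugin-example  # must match the name the plugin registers
      enabled: true
      parameters:
        # Plugin-specific, free-form key/value settings (assumed example)
        exampleKey: exampleValue
```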

@@ -202,7 +228,7 @@ deployed:
## Community plugins

The CNP-I protocol has quickly become a proven and reliable pattern for
extending {{name.ln}} while keeping the core project maintainable.
extending EDB Postgres for Kubernetes while keeping the core project maintainable.
Over time, the community has built and shared plugins that address real-world
needs and serve as examples for developers.

@@ -1,5 +1,5 @@
---
title: 'Connection pooling'
title: 'Connection Pooling'
originalFilePath: 'src/connection_pooling.md'
---

@@ -1,5 +1,5 @@
---
title: 'PostgreSQL Database Management'
title: 'PostgreSQL Database management'
originalFilePath: 'src/declarative_database_management.md'
---

@@ -1,5 +1,5 @@
---
title: 'PostgreSQL Role Management'
title: 'PostgreSQL Role management'
originalFilePath: 'src/declarative_role_management.md'
---

@@ -90,7 +90,7 @@ Any alterations to the images within a catalog trigger automatic updates for

## {{name.ln}} Catalogs

The {{name.ln}} project maintains `ClusterImageCatalog` manifests for all
The EDB Postgres for Kubernetes project maintains `ClusterImageCatalog` manifests for all
supported images.

These catalogs are regularly updated and published in the
2 changes: 1 addition & 1 deletion product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -165,7 +165,7 @@ version for 12-18 months before upgrading.

{{name.ln}} works with PostgreSQL, EDB Postgres Extended, and EDB Postgres
Advanced Server, and is available under the
[EDB Limited Use License](https://www.enterprisedb.com/limited-use-license).
[EDB End User License Agreement](https://www.enterprisedb.com/legal/EDB-Eula).

You can [evaluate {{name.ln}} for free](evaluation.md) as part of a trial subscription.
You need a valid EDB subscription to use {{name.ln}} in production.
@@ -69,7 +69,7 @@ You can install the manifest for the latest version of the operator by running:

```sh
kubectl apply --server-side -f \
https://get.enterprisedb.io/pg4k/pg4k-1.27.1.yaml
https://get.enterprisedb.io/pg4k/pg4k-1.26.3.yaml
```

You can verify that with:
@@ -155,8 +155,7 @@ plane for self-managed Kubernetes installations).
## Upgrades

!!! Warning CRITICAL WARNING: UPGRADING OPERATORS

OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](migrating_edb_registries) first.
OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](/postgres_for_kubernetes/latest/migrating_edb_registries) first.

!!! Important
Please carefully read the [release notes](rel_notes)
@@ -1,5 +1,5 @@
---
title: 'Postgres instance manager'
title: 'Postgres Instance Manager'
originalFilePath: 'src/instance_manager.md'
---

48 changes: 24 additions & 24 deletions product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx
@@ -101,27 +101,27 @@ Once you have this in place, you can apply your manifest normally with
To deploy a cluster using the EPAS [operand](/postgres_for_kubernetes/latest/private_edb_registries/#operand-images) you must reference the Ironbank operand image appropriately in the `Cluster` resource YAML.
For example, to deploy a {{name.abbr}} Cluster using the EPAS 16 operand:

1. Create or edit a `Cluster` resource YAML file with the following content:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
name: cluster-example-full
spec:
imageName: registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-17:17
imagePullSecrets:
- name: my_ironbank_secret
```

2. Apply the YAML:

```
kubectl apply -f <filename>
```

3. Verify the status of the resource:

```
kubectl get clusters
```
1. Create or edit a `Cluster` resource YAML file with the following content:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
name: cluster-example-full
spec:
imageName: registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-17:17
imagePullSecrets:
- name: my_ironbank_secret
```

2. Apply the YAML:

```
kubectl apply -f <filename>
```

3. Verify the status of the resource:

```
kubectl get clusters
```
30 changes: 15 additions & 15 deletions product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -35,11 +35,11 @@ them in your systems.

#### Debian packages

For example, let's install the 1.27.1 release of the plugin, for an Intel based
For example, let's install the 1.26.3 release of the plugin, for an Intel based
64 bit server. First, we download the right `.deb` file.

```sh
wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.27.1/kubectl-cnp_1.27.1_linux_x86_64.deb \
wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.26.3/kubectl-cnp_1.26.3_linux_x86_64.deb \
--output-document kube-plugin.deb
```

@@ -50,17 +50,17 @@ $ sudo dpkg -i kube-plugin.deb
Selecting previously unselected package cnp.
(Reading database ... 6688 files and directories currently installed.)
Preparing to unpack kube-plugin.deb ...
Unpacking kubectl-cnp (1.27.1) ...
Setting up kubectl-cnp (1.27.1) ...
Unpacking kubectl-cnp (1.26.3) ...
Setting up kubectl-cnp (1.26.3) ...
```

#### RPM packages

As in the example for `.rpm` packages, let's install the 1.27.1 release for an
As in the example for `.rpm` packages, let's install the 1.26.3 release for an
Intel 64 bit machine. Note the `--output` flag to provide a file name.

```sh
curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.27.1/kubectl-cnp_1.27.1_linux_x86_64.rpm \
curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.26.3/kubectl-cnp_1.26.3_linux_x86_64.rpm \
--output kube-plugin.rpm
```

@@ -74,7 +74,7 @@ Dependencies resolved.
Package Architecture Version Repository Size
====================================================================================================
Installing:
cnp x86_64 1.27.1-1 @commandline 20 M
cnp x86_64 1.26.3 @commandline 20 M

Transaction Summary
====================================================================================================
@@ -243,9 +243,9 @@ sandbox-3 0/604DE38 0/604DE38 0/604DE38 0/604DE38 00:00:00 00:00:00 00
Instances status
Name Current LSN Replication role Status QoS Manager Version Node
---- ----------- ---------------- ------ --- --------------- ----
sandbox-1 0/604DE38 Primary OK BestEffort 1.27.1 k8s-eu-worker
sandbox-2 0/604DE38 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker2
sandbox-3 0/604DE38 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker
sandbox-1 0/604DE38 Primary OK BestEffort 1.26.3 k8s-eu-worker
sandbox-2 0/604DE38 Standby (async) OK BestEffort 1.26.3 k8s-eu-worker2
sandbox-3 0/604DE38 Standby (async) OK BestEffort 1.26.3 k8s-eu-worker
```

If you require more detailed status information, use the `--verbose` option (or
@@ -299,9 +299,9 @@ sandbox-primary primary 1 1 1
Instances status
Name Current LSN Replication role Status QoS Manager Version Node
---- ----------- ---------------- ------ --- --------------- ----
sandbox-1 0/6053720 Primary OK BestEffort 1.27.1 k8s-eu-worker
sandbox-2 0/6053720 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker2
sandbox-3 0/6053720 Standby (async) OK BestEffort 1.27.1 k8s-eu-worker
sandbox-1 0/6053720 Primary OK BestEffort 1.26.3 k8s-eu-worker
sandbox-2 0/6053720 Standby (async) OK BestEffort 1.26.3 k8s-eu-worker2
sandbox-3 0/6053720 Standby (async) OK BestEffort 1.26.3 k8s-eu-worker
```

With an additional `-v` (e.g. `kubectl cnp status sandbox -v -v`), you can
@@ -524,12 +524,12 @@ Archive: report_operator_<TIMESTAMP>.zip

```output
====== Begin of Previous Log =====
2023-03-28T12:56:41.251711811Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.27.1","build":{"Version":"1.27.1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
2023-03-28T12:56:41.251711811Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.26.3","build":{"Version":"1.26.3+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
2023-03-28T12:56:41.251851909Z {"level":"info","ts":"2023-03-28T12:56:41Z","logger":"setup","msg":"Starting pprof HTTP server","addr":"0.0.0.0:6060"}
<snipped …>

====== End of Previous Log =====
2023-03-28T12:57:09.854306024Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.27.1","build":{"Version":"1.27.1+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
2023-03-28T12:57:09.854306024Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting EDB Postgres for Kubernetes Operator","version":"1.26.3","build":{"Version":"1.26.3+dev107","Commit":"cc9bab17","Date":"2023-03-28"}}
2023-03-28T12:57:09.854363943Z {"level":"info","ts":"2023-03-28T12:57:09Z","logger":"setup","msg":"Starting pprof HTTP server","addr":"0.0.0.0:6060"}
```

@@ -1,5 +1,5 @@
---
title: 'Kubernetes Upgrade and Maintenance'
title: 'Kubernetes upgrade and maintenance'
originalFilePath: 'src/kubernetes_upgrade.md'
---

@@ -1,5 +1,5 @@
---
title: 'Labels and annotations'
title: 'Labels and Annotations'
originalFilePath: 'src/labels_annotations.md'
---

@@ -11,6 +11,9 @@ If you are not using an EDB subscription token and installing from public reposi

OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](migrating_edb_registries) first.

!!! Warning CRITICAL WARNING: UPGRADING OPERATORS
OpenShift users, or any customer attempting an operator upgrade, MUST configure the new unified repository pull secret (docker.enterprisedb.com/k8s) before running the upgrade. If the old, deprecated repository path is still in use during the upgrade process, image pull failure will occur, leading to deployment failure and potential downtime. Follow the [Central Migration Guide](/postgres_for_kubernetes/latest/migrating_edb_registries) first.

The following documentation is only for users who have installed the operator using a license key.

## Company level license keys
@@ -96,8 +99,8 @@ This field will take precedence over `licenseKey`: it will be refreshed
when you change the secret, in order to extend the expiration date or to switch
from a trial license to a production license.

{{name.ln}} is distributed under the EDB Limited Usage License
Agreement, available at [enterprisedb.com/limited-use-license](https://www.enterprisedb.com/limited-use-license).
EDB Postgres for Kubernetes is distributed under the EDB End User License
Agreement, available at [enterprisedb.com/legal/EDB-Eula](https://www.enterprisedb.com/legal/EDB-Eula).

{{name.ln}}: Copyright (C) 2019-2022 EnterpriseDB Corporation.
